IS4SI 2021

The 2021 Summit of the International Society for the Study of Information

September 12-19, 2021

IS4SI Summit

General Program of Plenary Sessions

2021/09

There are two Plenary Sessions (called here prime-time blocks) each day:

Block 1: 4:00-7:00 UTC and Block 2: 13:00-16:00 UTC.


In these two time blocks each day, the organizers of Contributing Conferences were requested NOT TO SCHEDULE any activities, except for the prime-time block allocated to their conference. All Contributing Conferences have allocated time within prime-time, typically one three-hour prime-time block.


Contributing Conferences can schedule the sessions which do not belong to this schedule of plenary sessions on any day (September 12-19) and at any time between the opening and closing of the Summit outside of the two blocks of plenary prime-time.


The schedule below includes the abstracts of Keynote and Invited Speakers who were invited for their plenary contributions by the organizers of Contributing Conferences or by the organizers of the Summit without affiliation to any particular contributing conference. The schedules of the presentations, discussions, etc. not allocated to Plenary Sessions are planned and implemented by the organizers of Contributing Conferences. These schedules are not presented here, but they will be displayed on the IS4SI website.


The Schedule of Plenary Sessions presented here is intended to be final. However, in the case of an unexpected and unavoidable need for revisions, they will be announced in the Updates on the website of the Summit.


Several presentations, lectures, and discussions in this plenary schedule are contributions to the Summit from Contributing Conferences. They are listed here and at the same time in the schedules of their respective conferences. The majority of presentations belong to the schedules of Contributing Conferences, which frequently run in parallel sessions. These presentations are not listed here in the plenary program, but only in the programs of the conferences. All participants are invited to join the audiences of several Contributing Conferences.


All Plenary Sessions (and ONLY these sessions) will be ZOOM Meetings hosted by Marcin Schroeder. The invitation link will be provided later. Online parallel sessions will be hosted on internet platforms (some on alternative platforms, not necessarily based on ZOOM) decided by the organizers of conferences, but the links to these sessions will be displayed on the website of the Summit.


This document is about plenary events of the 2021 IS4SI Summit. Information about the contributing conferences can be found on the website of the Summit (https://summit-2021.is4si.org).


Some contributing conferences have their own web pages:

Dighum web page: https://gsis.at/2021/08/10/is4si-2021-digital-humanism-workshop-programmed/

TFPI web page: Theoretical and Foundational Problems (TFP) in Information Studies (tfpis.com)

IWNC web page: https://www.natural-computing.com/#iwnc-13

SIS web page: Schedule for the Conference “Symmetry, Structure and Information” in the IS4SI 2021 Summit – The International Society for the Interdisciplinary Study of Symmetry (symmetry-us.com)

Web page courtesy of Aaron Sloman: IS4SI-IWMNC-MORCOM-DIGHUM-TFPI (bham.ac.uk)

MORCOM page with schedule: IS4SI_MORCOM_SCHEDULE.pdf - Google Drive


The abbreviations for the names of contributing conferences used here are as follows:

TFPI – Theoretical and Foundational Problems in Information Studies

BICA – Information in Biologically Inspired Computing Architectures

Dighum – Digital Humanism

SIS – Symmetry, Structure, and Information

MORCOM – Morphological Computing of Cognition and Intelligence

H&R – Habits & Rituals

IWNC – 13th International Workshop on Natural Computing

APC – Philosophy and Computing

ICPI – The 5th International Conference on Philosophy of Information

GFAI – Global Forum for Artificial Intelligence



Book of Abstracts for Plenary Presentations and Introductions to Discussions of the 2021 IS4SI Summit

(edited by Marcin J. Schroeder)

The list shows the titles of abstracts and names of authors in the chronological order of their presentation at plenary sessions. The list is followed by the abstracts in the same order.

1. Introduction to IS4SI Panel Discussion “What is the SI in IS4SI?” moderated by Marcin J. Schroeder

2. Autopoietic machines: Going beyond the half-brained AI and Church-Turing Thesis presented by Rao Mikkilineni

3. Research in the area of Neosentience, Biomimetics, and Insight Engine 2.0 by Bill Seaman

4. Mind, Nature, and Artificial Magic by Rossella Lupacchini

5. Non-Diophantine arithmetics as a tool for formalizing information about nature and technology by Michele Caprio, Andrea Aveni and Sayan Mukherjee

6. Ontological information - information as a physical phenomenon by Roman Krzanowski

7. Materialization and Idealization of Information by Mark Burgin

8. Paradigm Shift, an Urgent Issue for the Studies of Information Discipline by Yixin Zhong

9. Structural Analysis of Information: Search for Methodology by Marcin J. Schroeder

10. Quality of information by Krassimir Markov

11. A QFT Approach to Data Streaming in Natural and Artificial Neural Networks by Gianfranco Basti and Giuseppe Vitiello

12. Arithmetic loophole in Bell's theorem: Overlooked threat to entangled-state quantum cryptography by Marek Czachor

13. Advanced NLP procedures as premises for the reconstruction of the idea of knowledge by Rafal Maciag

14. Toward a Unified Model of Cognitive Functions by Pei Wang

15. A Nested Hierarchy of Analyses: From Understanding Computing as a Great Scientific Domain, through Mapping AI and Cognitive Modeling & Architectures, to Developing a Common Model of Cognition by Paul Rosenbloom

16. The Development and Role of Symmetry in Ancient Scripts by Peter Z. Revesz

17. Symmetry and Information: An odd couple (?) by Dénes Nagy

18. Antinomies of Symmetry and Information by Marcin J. Schroeder

19. Introduction to SIS Conference Panel Discussion by Dénes Nagy & Marcin J. Schroeder

20. Digital Humanism by Julian Nida-Rümelin

21. Humanism Revisited by Rainer E. Zimmermann

22. The Indeterminacy of Computation: Slutz, Shagrir, and the mind by B. Jack Copeland

23. Falling Up: The Paradox of Biological Complexity by Terrence W. Deacon

24. Almost disjoint union of Boolean algebras appeared in Punch Line by Yukio Pegio Gunji

25. Why don't hatching alligator eggs ever produce chicks? by Aaron Sloman

26. Morphogenesis as a model for computation and basal cognition by Michael Levin

27. Cross-Embodied Cognitive Morphologies: Decentralizing Cognitive Computation Across Variable-Exchangeable, Distributed, or Updated Morphologies by Jordi Vallverdú

28. Designing Physical Reservoir Computers by Susan Stepney

29. The Aims of AI: Artificial and Intelligent by Vincent C. Müller

30. Cognition through Organic Computerized Bodies. The Eco-Cognitive Perspective by Lorenzo Magnani

31. Digital Consciousness and the Business of Sensing, Modeling, Analyzing, Predicting, and Taking Action by Rao Mikkilineni

32. On Leveraging Topological Features of Memristor Networks for Maximum Computing Capacity by Ignacio Del Amo and Zoran Konkoli

33. Habits and Rituals as Stabilized Affordances and Pregnances: A Semiophysical Perspective by Lorenzo Magnani

34. A neurocomputational model of relative value processing: Habit modulation through differential outcome expectations by Robert Lowe

35. Capability and habit by Matthias Kramm

36. Collective Intentionality and the Transformation of Meaning During the Contemporary Rituals of Birth by Anna M. Hennessey

37. Habitual Behavior: from I-intentionality to We-intentionality by Raffaela Giovagnoli

38. Machines computing and learning? by Genaro J. Martínez

39. Computing with slime mould, plants, liquid marbles and fungi by Andy Adamatzky

40. Introduction to IWNC Panel Discussion by Marcin J. Schroeder

41. Exploring open-ended intelligence using patternist philosophy by Ben Goertzel

42. The Artificial Sentience Behind Artificial Inventors by Stephen Thaler

43. Potential Impacts of Various Inventorship Requirements by Kate Gaudry

44. Panel Commentary by Peter Boltuc

45. On Two Different Kinds of Computational Indeterminacy by Oron Shagrir, Philippos Papayannopoulos, and Nir Fresco

46. Cognitive neurorobotic self in the shared world by Jun Tani

47. The Future of Anthroposociogenesis – Panhumanism, Anthroporelational Humanism and Digital Humanism by Wolfgang Hofkirchner

48. The Philosophy – Science Interaction in Innovative Studies by Yixin Zhong

49. Information and the Ontic-Epistemic Cut by Joseph Brenner

50. A Chase for God in the Human Exploration of Knowledge by Kun Wu, Kaiyan Da, Tianqi Wu

51. The Second Quantum Revolution and its Philosophical Meaning by Hongfang L.

52. Information and Disinformation with their Boundaries and Interfaces by Gordana Dodig-Crnkovic

53. A Quantum Manifestation of Information by Tian’en Wang

54. Computation and Eco-Cognitive Openness: Locked Strategies, Unlocked Strategies, and the Dissipative Brain by Lorenzo Magnani

55. In what sense should we talk about the perception of other minds? by Duoyi Fei

56. An a Priori Theory of Meaning by Marcus Abundis

57. Some Problems of Quantum Hermeneutics by Guolin Wu

58. The fast-changing paradigm of war calls for great wisdom of peace by Lanbo Kang

59. Technologies, ICTs and Ambiguity by Tomáš Sigmund

60. The Data Turn of Scientific Cognition and the Research Program of Philosophy of Data by Xinrong Huang

61. Testimony and Social Evidence in the Covid Era by Raffaela Giovagnoli

62. Developments of research on the Nature of Life from the Information Theory of Individuality by Dongping Fan, Wangjun Zhang

63. On Information Interaction between the Hierarchy of the Material System by Zhikang Wang

64. Informational Aesthetics and the Digital Exploration of Renaissance Art by John Douglas Holgate

65. Practice, Challenges and Countermeasures of Accelerating the Development of new Generation of Artificial Intelligence in Xinjiang by Hong Chen

66. A Basic Problem in the Philosophy of Information Science: Redundant Modal Possible World Semantics by Xiaolong Wan

67. Paradigm Revolution Creates the General Theory of AI by Yixin Zhong

68. Intelligence Science Drives Innovation by Zhongzhi Shi

69. On the Essential Difference Between the Intelligence Body and the Program Body by He Huacan & He Zhitao

70. Human body networks mechanisms of the Covid-19 symptoms by Pin SUN, Rong LIU, Shui GUAN, Jun-Xiu GAO, and Chang-Kai SUN

71. The Development and Characterization of A New Generic Wearable Single-Channel Ear-EEG Recording Platform by Rong Liu

72. Brain Imitating Method for Social Computing - Illumination of Brain Information Processing System by Liqun Han

73. Research and Prospects of Artificial Intelligence in Traditional Chinese Medicine by Zixin Shu, Ting Jia, Haoyu Tian, Dengying Yan, Yuxia Yang, and Xuezhong Zhou

74. A Framework of "Quantitative ⨁ Fixed Image ⇒ Qualitative " induced by contradiction generation and Meta Synthetic Wisdom Engineering by Jiali Feng

75. Paradox, Logic and Property of Infinity by Jincheng Zhang

76. A Call for Paradigm Shift in Information Discipline by Zhong Yixin

IS4SI Summit General Program of Plenary Sessions

SEPTEMBER 12-19

SUNDAY, SEPTEMBER 12

The 2021 Summit of the International Society for the Study of Information

Block 1:

4:00-7:00 UTC

Sun 12th Sep

IS4SI

Marcin Schroeder

1) Opening with the Presidential Welcome and Short (5-10 minute) presentations of all conferences by organizers (60-90 minutes)

2) Discussion "What is SI in IS4SI?" (Moderator: Marcin J. Schroeder) (ca. 1 hour)

PANEL DISCUSSION

5:00-6:00 UTC

Sun 12th Sep

PANEL DISCUSSION (following short presentations of all Contributing Conferences)

1. “What is the SI in IS4SI?”

Moderated by Marcin J. Schroeder

Confirmed Panelists: Joseph Brenner, Mark Burgin, José Maria Diaz-Nafria, Gordana Dodig-Crnkovic, Wolfgang Hofkirchner, Pedro C. Marijuán, Yixin Zhong

(Moderator’s Introduction)

The question about the Study of Information (spoiler alert: yes, this is the SI in IS4SI) is highly non-trivial, and at the same time there is an urgent need for a discussion addressing misconceptions surrounding information and its inquiries. The goal of such a discussion is not to close SI into a compartment of the classification of human intellectual or practical activities by providing a formal definition, but rather to set it free from the limitations coming from habits of thinking and from the use of the word “information” in the narrow contexts of specialized disciplines.

To set SI free does not mean to give up the high standards of intellectual discipline or to object to the development of coordinated programs of inquiry. There is a need for a continuing discussion of the ways information can be defined and related to other concepts, in particular concepts such as knowledge, communication, uncertainty, truth, and complexity. The fact that the concept of information is defined in many different ways, without any consensus in sight in the predictable future, should not be a reason for despair. It is just the best evidence of its fundamental importance. Was there ever any non-trivial concept in science or philosophy with a universally accepted and permanent definition?

The diversity of conceptualizations of information and its presence in virtually all domains of inquiry, from the mathematical and physical sciences to the social studies and the humanities, are challenging, but at the same time they give a special position to SI as a way to achieve, or at least to reasonably pursue, a great synthesis of human knowledge. We know that information, as it is understood in physics, in computer science and the study of computation, and in semiology, has characteristics so similar to information as it is understood in biology and the study of life and other complex systems at the organismic or population level, or in the study of human organizations, that there is very little risk that the customary use of the same term “information” in all these contexts is accidental. So the danger of searching in vain for a synthesized description of reality with the concept of information as the main tool is negligible. The actual danger is rather in this search being dominated by the methods and habits of thinking derived from specific disciplines of a higher level of specialization and advancement, while neglecting alternative perspectives.

I would like to ask the panelists to share their views of SI as it is or as it should be. These views may differ from those presented above; they may provide alternative perspectives, or they may amplify the importance of the few characteristics of SI already presented here. This is especially important when we interpret the question in a normative way: “What should be the SI in IS4SI?” For instance, someone could object to the emphasis on the synthesis of knowledge and defend the view that the present existential threats to humanity, due to climate change, the destruction of the ecosystem, and the misuse of technology (especially information technology) for gaining political or economic power, make it necessary to prioritize the direction of SI. There are many different ways in which priorities may be set.

There are some follow-up questions which may be addressed instead of the one in the title of our discussion.

- How would you encourage young scholars to choose SI as the theme for their study or their future academic or professional career?

- How urgent or important is reaching a consensus on the definition of information or at least developing mutual understanding between different conceptualizations of information?

- How can we prevent the perpetual misattribution of the term “information science” to narrow sub-disciplines of SI, such as computer science or communication engineering, which, due to their high level of specialization and little interest in the concept of information, should not be considered representatives of SI?

- How should SI inform governmental policies, in particular educational policies? Some governmental agencies promote or enforce the naive idea that mandatory classes on computer programming in the K-12 curriculum will create an information-competent society. Is this just a first step in the right direction or rather a waste of time and resources?

Plenary Sessions Contributed by Theoretical and Foundational Problems in Information Studies (TFPI) Conference

Block 2:

13:00-16:00 UTC

Sun 12th Sep

TFPI

Mark Burgin

13:00-13:35 UTC

Sun 12th Sep

2. Autopoietic machines: Going beyond the half-brained AI and Church-Turing Thesis

Rao Mikkilineni

Ageno School of Business, Golden Gate University, San Francisco, CA 94105, USA

Introduction

All living organisms are autopoietic and cognitive. Autopoiesis refers to a system with a well-defined identity that is capable of reproducing and maintaining itself. Cognition, on the other hand, is the ability to process information, apply knowledge, and change circumstances. The autopoietic and cognitive behaviors are executed using information processing structures that exploit physical, chemical, and biological processes in the framework of matter and energy. These systems transform their physical and kinetic states to establish a dynamic equilibrium between themselves and their environment using the principle of entropy minimization. Biological systems have discovered a way to encode the processes and execute them in the form of genes, neurons, the nervous system, the body, and the brain through evolutionary learning. The genome, which is the complete set of genes or the genetic material present in a cell, defines the blueprint that includes instructions on how to organize resources to create the functional components, organize the structure, and evolve the structure while interacting with the environment using the encoded cognitive processes. Placed in the right environment, the cell containing the genome executes the processes that manage and maintain the self-organizing and self-managing structure, adapting to fluctuations. The mammalian neocortex and the reptilian cortical columns provide the information processing structures to assess risk and execute strategies to mitigate it. The genome and the networks of genes and neuronal structures are organized to function as a system with various components which have local autonomy but share information to maintain global stability with a high degree of resiliency and efficiency in managing the resources.

The General Theory of Information (GTI) tells us that information is represented, processed, and communicated using physical structures. The physical universe, as we know it, is made up of structures that deal with matter and energy. As Mark Burgin points out, “Information is related to knowledge as energy is related to matter.” A genome, in the language of GTI [2-4], encapsulates “knowledge structures” coded in the form of DNA and executed using the “structural machines” in the form of genes and neurons. It is possible to model the autopoietic and cognitive behaviors using the “structural machines” described in the GTI.

In addition, GTI also allows us to design and build digital autopoietic machines with cognitive behaviors, building upon current-generation information processing structures that use both symbolic computing and neural networks. The autopoietic and cognitive behaviors of artificial systems function on three levels of information processing systems and are based on triadic automata [4-7]. Efficient autopoietic and cognitive behaviors employ the structural machines.

The following four papers, presented at the Theoretical and Foundational Problems (TFP) in Information Studies (IS) conference [1], provide a framework to design and build the new class of autopoietic and cognitive machines:

  1. Mikkilineni, Rao; The Science of Information Processing Structures and the Design of a New Class of Distributed Computing Structures

  2. Mikkilineni, Rao; and Burgin, Mark; Designing a New Class of Digital Autopoietic Machines

  3. Renard, Didier; Fitness in a change of era: Complex Adaptive Systems, Neocortex and a New Class of Information Processing Machines

  4. Morana, Giovanni; Implementing a Risk Predictor using an Autopoietic Machine


The Theory and Practice of Information Processing Structures

The structural machines supersede the Turing machines by their representations of knowledge and the operations that process information [2, 3]. Triadic structural machines with multiple general and mission-oriented processors enable autopoietic behaviors.

1. From Turing Machines to Structural Machines [2, 3]:

Structural machines process knowledge structures, which incorporate domain knowledge in the form of entities, their relationships, and process evolution behaviors as a network of networks, with each node defining functional behaviors and links defining the information exchange (or communication). The operations on the knowledge structure schema define the creation, deletion, connection, and reconfiguration operations based on control knowledge structures. They are agnostic to what the functions of the nodes are or what information is exchanged between them. This provides the composability of knowledge structures across domains in processing information (a minimal sketch of such a schema appears after this list). In contrast, Turing machines process data structures, which incorporate domain knowledge in the form of entities and relationships only; their process evolution behaviors are encapsulated in algorithms (programs) which operate on the data structures. Therefore, the Turing machine operations are domain-knowledge specific, which limits composability across domains and increases the complexity of processing information and its evolution.


2. Changing system’s behaviors using functional communication [3, 4]:

The behavioral changes are embedded in the knowledge structures, and therefore functional communication, or information exchange, induces the behavioral changes in various entities in the knowledge structures. Changes are propagated through knowledge structures when events produce changes in arbitrary attributes of the system entities. This enables self-regulation of the system. In contrast to self-regulation, external control causes behavioral changes through rules embedded in algorithms outside the data structures; to execute the behavioral changes, the programs have to understand the domain knowledge of the data structures in order to perform operations on them.


3. Triadic structural automata and autopoietic behavior [3, 4]:

A triadic structural machine with hierarchical control processors provides the theoretical means for the design of autopoietic automata, allowing transformation and regulation of all three dimensions of information processing and system behavior – the physical, mental, and structural dimensions. The control processors operate on the downstream information processing structures, where a transaction can span multiple distributed components, by reconfiguring their nodes, links, and topologies based on well-defined pre-condition and post-condition transaction rules to address fluctuations, for example, in resource availability or demand.


4. Providing global optimization using shared knowledge and predictive reasoning to deal with large fluctuations [5]:

The hierarchical control process overlay in the design of the structural machine allows implementing 4E (embedded, embodied, enactive, and extended) cognitive processes with downstream autonomous components interacting with each other and with their environment using system-wide knowledge sharing, which allows global regulation to optimize the stability of the system as a whole based on memory and historical experience-based reasoning. Downstream components provide sensory observations and control using both neural network and symbolic computing structures.
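To make the schema operations in item 1 above concrete, here is a minimal, hypothetical Python sketch, an illustration of the idea described in [2, 3] rather than the authors' implementation: nodes carry arbitrary functional behaviors, links carry information exchange, and the schema operations never inspect what the nodes compute or what the messages mean.

```python
# Minimal, hypothetical sketch of a knowledge structure as a network whose
# nodes carry functional behaviors and whose links carry information
# exchange. The schema operations (create/delete/connect/reconfigure) are
# agnostic to node functions and message content, which is what gives
# composability across domains.

class Node:
    def __init__(self, name, behavior):
        self.name = name
        self.behavior = behavior   # any callable: the node's function
        self.links = []            # names of downstream nodes

class KnowledgeStructure:
    def __init__(self):
        self.nodes = {}

    def create(self, name, behavior):
        self.nodes[name] = Node(name, behavior)

    def delete(self, name):
        self.nodes.pop(name, None)

    def connect(self, src, dst):
        self.nodes[src].links.append(dst)

    def reconfigure(self, src, old_dst, new_dst):
        links = self.nodes[src].links
        links[links.index(old_dst)] = new_dst

    def send(self, src, message):
        # Functional communication: each linked node applies its own
        # behavior to the message; the schema never interprets it.
        return {dst: self.nodes[dst].behavior(message)
                for dst in self.nodes[src].links}

# Usage: domain knowledge lives in the behaviors, not in the schema.
ks = KnowledgeStructure()
ks.create("sensor", lambda m: m)        # identity behavior
ks.create("doubler", lambda m: 2 * m)   # arbitrary domain function
ks.connect("sensor", "doubler")
print(ks.send("sensor", 21))            # {'doubler': 42}
```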

We present the utilization of this theory for building a self-managing federated edge cloud network deploying autopoietic federated AI applications to connect people, things, and businesses, enabling global communication, collaboration, and commerce with high reliability, performance, security, and regulatory compliance.


References

[1] Conference on Theoretical and Foundational Problems (TFP) in Information Studies (IS), September 12-19, 2021 (online), as a part of the IS4SI Summit 2021 (is4si.org). Theoretical and Foundational Problems (TFP) in Information Studies (tfpis.com)

[2] Burgin, M. Theory of Information: Fundamentality, Diversity and Unification, World Scientific: Singapore, 2010.

[3] Burgin, M. and Mikkilineni, R. (2021) On the Autopoietic and Cognitive Behavior. EasyChair Preprint no. 6261. https://easychair.org/publications/preprint/tkjk

[4] Burgin, M. Triadic Automata and Machines as Information Transformers, Information, v. 11, No. 2, 2020, 102; doi:10.3390/info11020102

[5] Burgin, M., Mikkilineni, R. and Phalke, V. Autopoietic Computing Systems and Triadic Automata: The Theory and Practice, Advances in Computer and Communications, v. 1, No. 1, 2020, pp. 16-35

[6] Burgin, M. and Mikkilineni, R. From Data Processing to Knowledge Processing: Working with Operational Schemas by Autopoietic Machines, Big Data Cogn. Comput. 2021, v. 5, 13 (https://doi.org/10.3390/bdcc5010013)

[7] Mikkilineni, R. Information Processing, Information Networking, Cognitive Apparatuses and Sentient Software Systems. Proceedings 2020, 47, 27. https://doi.org/10.3390/proceedings2020047027


13:35-14:20 UTC

Sun 12th Sep

3. Research in the area of Neosentience, Biomimetics, and Insight Engine 2.0

Bill Seaman

Professor, Computational Media, Arts and Cultures;

Co-dir. Emergence Lab, Durham, NC. Duke. USA

Abstract

• Neosentience

The goal is to arrive at a model for an intelligent autonomous learning robotic system via transdisciplinary information processes and information exchanges. The long-term goal of this model is to potentially enable Neosentience to arise via the system’s functionality. Research related to this goal is accomplished through the use of an intelligent transdisciplinary database, a search engine, a natural language API, a dynamic set of visualization modes, and a series of independent AI collaborators (what we call Micropeers) — the Insight Engine 2.0 (I_E).

Pragmatic benchmarks are used to define Neosentient robotic entities (as opposed to the Turing Test): the system can exhibit well-defined functionalities: it learns (an enactive approach and others like conversation theory); it intelligently navigates; it interacts via natural language; it generates simulations of behavior; it metaphorically “thinks” about potential behaviors before acting in physical space; it is creative in some manner; it comes to have deep situated knowledge of context through multimodal sensing (the embodied, embedded approach); and it displays mirror competence. Seaman and Rössler have entitled this robotic entity The Benevolence Engine. They state that the inter-functionality of such a system is complex enough to operationally mimic human sentience. Benevolence can in principle arise in the interaction of two such systems. Synthetic emotions would also become operative within the system. The system would be benevolent in nature. The concept of Neosentience (coined by Seaman) was first articulated in the book Neosentience / The Benevolence Engine by Seaman and Rössler.1

• The 4 Es of Cognition

The goal is to enfold the Embodied, Embedded, Enactive, and Extended approaches to understanding cognition in the human, and then to seek to articulate the entailment structures that enable this set of dynamic interrelations to function. Because there are many different biological as well as machinic information systems involved in mapping and articulating such processes, this necessitates a new transdisciplinary holistic approach to biological study and its abstraction via biomimetics, enabling entailment structures to be re-applied in defining a model for a Neosentient system (a new branch of AI). The idea is to define a transdisciplinary holistic approach which seeks to examine dynamic, time-based Mind/Brain/Body/Sensing/Environment relationalities.

• Information Processing Structures, Sentience, Resilience and Intelligence

The initial goal is to make the Insight Engine function in such a way as to “point” to potential new research data across disciplinary boundaries by using advanced information processing, computational linguistics, a natural language API, and additional forms of AI acting as Micropeers (AI collaborators) to enable intelligent bridging of research questions, and the development of new information paradigms through bisociation (after Arthur Koestler) and poly-association (Seaman). These I_E information systems support researchers, empowering them to access relevant transdisciplinary information from the database, to contribute to the higher-order goal over time of articulating a functional Neosentient Model. Such a model is informed by many intellectual perspectives and transdisciplinary conversations facilitated by the I_E system, a listserv, and future information-oriented conferences.

The Insight Engine embodies a series of intelligent processing structures, visualization systems, and the mapping of relationalities related to the corpus of papers, books, media objects, key words, abstracts, diagrams, etc. (initially textually structured, with pattern recognition visual and sonic systems later integrated, which will help build and navigate the database), and helps outline the articulation of a very new variety of Bio-algorithm – informed by the human body. Bio-informatic processing structures are to be abstracted and then re-articulated in a bio-mimetic manner in the Neosentient model. This dynamic combinatoric, self-organizing system seeks to be resilient and interactive with the environment, building new knowledge up somewhat like humans do, through pattern-flows of multi-modal sense perturbations, as well as incorporating a layering of other potential learning systems. Meta-levels of self-observation and the development of language to articulate such contextual learning are central for the embodiment of the system.

• A New Combinatoric

N-dimensional Bio-algorithm Cognitive Behavior is approached through a series of information-oriented processes. Central is to define all of the entailment structures that inform the emergent arising of sentience in the human (new, incomplete territory), and to seek to abstract those into an autonomous robotic system. The system will bring together a series of technologies from the research of diverse scientists and cyberneticists, and the study of complex systems, to help map this time-based set of relationalities that bridge mind / brain / body — multi-modal sensing systems — and environment. The notion here is to devise a self-organising bio-algorithm of combinatoric algorithms by studying the body, derived from mind / brain / body / environment relationalities and the sentience/consciousness that arises out of the body's interoperative functionality. This would necessitate moving back to exploring the biomimetic as opposed to the purely functional aspects of AI production. No single discipline of science, the humanities, and/or the arts can tackle such a difficult information-related problem set. A special transdisciplinary team of teams would need to arise out of the use of I_E. This overarching research team (or set of teams) would potentially consist of groups of specialists from a series of fields who would also learn enough about the other member fields to be able to talk across disciplines. Conversation would be central to the ongoing development of this variable Bio-algorithmic network. Perhaps an earlier version of this kind of thinking was witnessed in the Biological Computer Lab headed by Heinz von Foerster, 1958-1976. Historical items related to the topic areas would also be included in the database. Perhaps one first must define a set of Boundary Objects. This approach is articulated in Susan Leigh Star's 'The Structure of Ill-Structured Solutions: Boundary Objects and Heterogeneous Distributed Problem Solving', in M. Huhns and L. Gasser (eds), Readings in Distributed Artificial Intelligence (Menlo Park, CA: Morgan Kaufmann, 1989).

• Research areas for the Insight Engine 2.0

Each research area will have a Micro-peer; these include the following (although new research areas will be added as needed): Neosentience; N-dimensional Combinatoric Bio-algorithm development; Bodily entailment structures; Mindful Awareness – self-observation; 2nd-order Cybernetics; Neuroscience; Neuroscience and the arts; AI and the arts – Computational Creativity; Biomimetics; The Connectome; AI; AI and Ethics; EI; The Biological Computer Lab (Cybernetics and 2nd-Order Cybernetics); Science Fiction; The History of AI; Bridge Building between disciplines; Transdisciplinarity – A Multi-perspective Approach to Knowledge Production; Information – new approaches; Approaches to Learning – Conversation Theory etc.; Robotics and situated knowledge; Computational Intuition; Android Linguistics (Donahue); related new forms of mathematics; synthetic emotions; embodied computation.

The research team consists of Professor Bill Seaman, PhD, Computational Media, Arts and Cultures, Duke University; John Herr, Duke Office of Information Technology; Dev Seth, Computer Science student, Duke University; Ashley Kwon, Computer Science student, Duke University; Quran Karriem, PhD student, CMAC, Duke University; and Mingyong Chen, PhD student, UC San Diego.


1 Rössler, O., Seaman W. (2011) Neosentience / The Benevolence Engine, Intellect Press, London.

14:20-14:45 UTC

Sun 12th Sep

4. Mind, Nature, and Artificial Magic

Rossella Lupacchini

Department of Philosophy and Communication Studies, University of Bologna, Italy

Abstract

The ambition to invent a machine for the ‘perfect imitation’ of mind appears to flow as a logical consequence from the ambition to invent a device for the ‘perfect imitation’ of nature. From perspective drawing to photography, the Western science of art has taken advantage of mechanical means, such as lenses and mirrors, to replicate our visual experience of nature. Its main concern has always been to capture the ‘magic’ of nature in the ‘synthetic instant’ of the picture. Accordingly, the main achievement of ‘visual art’ might be described as sight-enhancing. In a similar way, the science of logic has taken outstanding advantage of computing machines to simulate our thinking experience. For the ‘art of reasoning’, however, the main goal appears to be nothing less than to capture the ‘nature’ of mind in artificial magic. How does it make sense to pursue it? To what extent can the cognitive experience due to artificial magic be regarded as life-enhancing?

1. Mimesis: the demiurge's invention

2. Seeing, knowing, and creating

3. Imitation game: from Leonardo to Turing

De-constructing the mind of nature

Encoding the power of imagination

Ways of intelligence: living, mechanical

4. Light, matter, and will to form

Existence as a quantum phenomenon

Knowledge as a mind-nature entanglement

Information as a quantization of the meaning field

14:45-15:10 UTC

Sun 12th Sep

5. Non-Diophantine arithmetics as a tool for formalizing information about nature and technology

Michele Caprio, Andrea Aveni and Sayan Mukherjee

Duke University, Durham, NC, USA

Abstract

The theory of non-Diophantine arithmetics is based on a more general structure called an abstract prearithmetic. A generic abstract prearithmetic A is defined as A = (A, +_A, ×_A, ≤_A), where A ⊆ ℝ⁺ is the carrier of A (that is, the set of the elements of A), ≤_A is a partial order on A, and +_A and ×_A are two binary operations defined on the elements of A. We conventionally call them addition and multiplication, but they can be any generic operations. Abstract prearithmetic A is called weakly projective with respect to a second abstract prearithmetic B = (B, +_B, ×_B, ≤_B) if there exist two functions g: A → B and h: B → A such that, for all a, b ∈ A, a +_A b = h(g(a) +_B g(b)) and a ×_A b = h(g(a) ×_B g(b)). Function g is called the projector and function h is called the coprojector for the pair (A, B). The weak projection of the sum a +_B b of two elements of B onto A is defined as h(a +_B b), while the weak projection of the product a ×_B b of two elements of B onto A is defined as h(a ×_B b). Abstract prearithmetic A is called projective with respect to abstract prearithmetic B if it is weakly projective with respect to B with projector f⁻¹ and coprojector f. We call f, which has to be bijective, the generator of the projector and coprojector. Weakly projective prearithmetics depend on two functional parameters, g and h (one, f, if they are projective), and recover Diophantine arithmetic (the conventional arithmetic, called Diophantine after Diophantus, the Greek mathematician who first approached this branch of mathematics) when these functions are the identity. To this extent, we can consider non-Diophantine arithmetics as a generalization of the Diophantine one. A complete account of non-Diophantine arithmetics can be found in the recent book by Burgin and Czachor [4].

In this work, we consider three classes of abstract prearithmetics, {A_M}_{M≥1}, {A_{-M,M}}_{M≥1}, and {B_M}_{M≥0}. These classes of prearithmetics are useful for describing some natural and computer-science-related phenomena for which the conventional Diophantine arithmetic fails: for example, the fact that adding one raindrop to another one gives one raindrop, or that, putting a lion and a rabbit in a cage, one will not find two animals in the cage later on (cf. [2] and [5]). They also allow avoiding the introduction of inconsistent Diophantine arithmetics, that is, arithmetics for which one or more Peano axioms are at the same time true and false. For example, in [1] Rosinger points out that electronic digital computers, when operating on the integers, act according to the usual Peano axioms for ℕ plus an extra ad hoc axiom, called the machine infinity axiom. The machine infinity axiom states that there exists M ∈ ℕ far greater than 1 such that M + 1 = M. Clearly, the Peano axioms and the machine infinity axiom together give rise to an inconsistency, which can be easily avoided by working with the prearithmetics we introduce. In addition, {A_M}_{M≥1} and {A_{-M,M}}_{M≥1} allow one to overcome the version of the paradox of the heap (or sorites paradox) stated in [3, Section 2]. The setting of this variant of the sorites paradox is adding one grain of sand to a heap of sand, and the question is, once a grain is added, whether the heap is still a heap.

We show that every element A_M′ of {A_M}_{M≥1} is a complete totally ordered semiring, and that it is weakly projective with respect to R⁺, the conventional Diophantine arithmetic of positive real numbers. Furthermore, we prove that the weak projection of any series Σ_n a_n of elements of ℝ⁺ = [0, ∞) is convergent in each A_M. This is an exciting result because it allows a scholar who needs a particular series to converge in their analysis to reach that result by performing a weak projection of the series onto A_M, and then continuing the analysis in A_M. The second class, {A_{-M,M}}_{M≥1}, allows one to overcome the paradox of the heap and is such that every element A_{-M′,M′} is weakly projective with respect to the conventional real Diophantine arithmetic R = (ℝ, +, ×, ≤_ℝ). The weak projection of any non-oscillating series Σ_n a_n of terms in ℝ is convergent in A_{-M′,M′}, for all M′ ≥ 1. The drawback of working with this class is that its elements are not semirings, because the addition operation is not associative. The last one, {B_M}_{M≥0}, is such that every element B_M′ is a semiring and is projective with respect to the conventional real Diophantine arithmetic R = (ℝ, +, ×, ≤_ℝ). The weak projection of any non-oscillating series Σ_n a_n of terms in ℝ is convergent in B_M′, for all M′ ≥ 0. The drawback of working with this class is that its elements do not overcome the paradox of the heap.
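To make the projector/coprojector mechanism concrete, here is a minimal Python sketch. The generator f(x) = x³ is a hypothetical illustrative choice, not one of the classes A_M, A_{-M,M}, or B_M from the abstract, and the final line only illustrates a machine-infinity-like absorption effect in IEEE 754 doubles, not Rosinger's axiom for ℕ.

```python
# Minimal sketch (illustrative, not from the paper) of a prearithmetic
# projective with respect to ordinary arithmetic on the positive reals:
# a +_A b = f^-1(f(a) + f(b)) and a x_A b = f^-1(f(a) * f(b)).

class ProjectivePrearithmetic:
    def __init__(self, f, f_inv):
        self.f, self.f_inv = f, f_inv  # generator f and its inverse

    def add(self, a, b):
        # weak projection of the ordinary sum back onto the carrier
        return self.f_inv(self.f(a) + self.f(b))

    def mul(self, a, b):
        return self.f_inv(self.f(a) * self.f(b))

# Hypothetical generator f(x) = x**3 (any bijection on R+ would do).
A = ProjectivePrearithmetic(lambda x: x ** 3, lambda y: y ** (1 / 3))
print(A.add(1, 1))         # ~1.26, not 2: a non-Diophantine sum
print(A.mul(2, 3))         # ~6.0: multiplication stays ordinary here

# A machine-infinity-like effect in floating point (IEEE 754 doubles):
print(1e16 + 1.0 == 1e16)  # True: the increment is absorbed
```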


References

[1] Elemer E. Rosinger. On the Safe Use of Inconsistent Mathematics. Available at arXiv:0811.2405, 2008.

[2] Hermann von Helmholtz. Zählen und Messen, in Philosophische Aufsätze. Fues's Verlag, Leipzig, pages 17–52, 1887.

[3] Mark Burgin and Gunter Meissner. 1 + 1 = 3: Synergy Arithmetic in Economics. Applied Mathematics, 08(02):133–144, 2017.

[4] Mark Burgin and Marek Czachor. Non-Diophantine Arithmetics in Mathematics, Physics and Psychology. World Scientific, Singapore, 2020.

[5] Morris Kline. Mathematics: The Loss of Certainty. Oxford University Press, New York, 1980.


15:10-15:35 UTC

Sun 12th Sep

6. Ontological information - information as a physical phenomenon

Roman Krzanowski

The Pontifical University of John Paul II, Krakow, Poland

Abstract:

Ontological information is information conceived as a natural phenomenon, i.e., as an element of the physical world. We will denote such information with the predicate “ontological”, as in “ontological information”, as well as by the symbol “IO” and the indexed term “informationO”. The properties attributed to ontological information in (Krzanowski, 2020) reflect its physical nature. We claim that ontological information is characterized by epistemic neutrality (EN), physical embodiment (PE), and formative nature (FN).

The property of epistemic neutrality1 (EN) means that informationO has no meaning by itself. From specific ontological information, an agent may derive something (some value) that has significance for that agent's existence or functioning. The same ontological information may result in different meanings for different agents. Likewise, this information may have no meaning at all for some agents. However, an agent can in principle be any system, whether organic or artificial, if it senses ontological information or the organization of natural phenomena. Natural agents (i.e., biological systems) have been shaped by nature to perceive nature's properties, including organizational properties; artificial agents are of our own making, of course, so in a sense they also have biological origins. We are therefore creations of nature, not separated from it. We are built to interpret nature, not to falsify it, and evolution assumes this, because it is likely that organisms that fail to correctly perceive the environment will not survive. This is also the general idea behind building our artificial agents.

The property of physical embodiment (PE) means that informationO is a physical phenomenon. So, it may be conceptualized in a matter–energy–information complex2 (one that indirectly implies Aristotelian hylemorphism), and it is fundamental to nature (i.e., whatever exists physically contains information). The claim that “ontological information is a physical phenomenon” means several things. Ontological information is not an abstract concept in the way that mathematical objects, ideas, or thoughts are abstract. Ontological information does not belong to the Platonic realm of Forms in either the classic or neo-Platonic sense. Ontological information is real, observable, and measurable. Thus, we can claim that information exists much like other physical phenomena exist, because it exhibits the same class of properties (quantifiability, operational properties) as physical phenomena do. Furthermore, it seems that whatever exists in a physical sense contains information, so there is no physical phenomenon without information.

1 A concept is “epistemically neutral” when it does not have intrinsic epistemic import, or in other words, it does not mean anything by itself.

2 The matter-energy-information complex has the status of a conjecture, not of a theory.

Finally, the property of formative nature (FN) means that information is responsible for the organization of the physical world, so information is expressed through structures/forms and the organization of things3, but information is not a structure itself. Organization is a fairly broad concept that may be, and is, interpreted as structure, order, form, shape, or rationality (if perceived by a cognitive entity). We do not posit that information is structure, although this has been claimed several times. The problem with such a statement is that we do not know precisely what a structure is, what kinds of structures we would associate with information, and how this would be achieved. Information is certainly not the visible structure or shape of an object, but we concede that the shape or structure of an object is how information discloses itself or how we sense its presence. Thus, the shape of a tea cup is not information, but information is expressed in the shape of a tea cup. A more thorough discussion of informationO is provided in (Krzanowski, 2020; 2020a; 2020b). The interpretation of ontological information (in particular its causal potentiality) in the context of the general theory of information (GTI) developed in (Burgin, 2010) is provided in (Burgin and Krzanowski, 2021).

References

Burgin, M. (2010). Theory of Information: Fundamentality, Diversity and Unification. New York: World Scientific Publishing.

Burgin, M. and Krzanowski, R. (2021). Levels of ontological information. Proceedings, IS4SI 2021.

Krzanowski, R. (2020). Ontological Information. Investigation into the Properties of Ontological Information. Ph.D. thesis. UPJP2. Available at http://bc.upjp2.edu.pl/dlibra/docmetadata?id=5024&from=&dirids=1&ver_id=&lp=2&QI=

Krzanowski, R. (2020a). What Is Physical Information? Philosophies, 5. 10.3390/philosophies5020010

Krzanowski, R. (2020b). Why can information not be defined as being purely epistemic? Philosophical Problems in Science (Zagadnienia Filozoficzne w Nauce), 68, pp. 37-62.

3 The synonymity of the terms “structure”, “form”, “organization”, and “information” should not be accepted a priori, despite the fact that these terms are often used synonymously.

15:35-15:50 UTC

Sun 12th Sep

7. Materialization and Idealization of Information

Mark Burgin

University of California, Los Angeles, CA, USA

Abstract:

Information is an important phenomenon in nature, society, and technology. This situation brought some researchers to the conclusion that information is physical (cf., for example, (Landauer, 2002)). At the same time, according to the general theory of information (GTI), information belongs to the ideal World of Structures, which is the scientific incarnation of Plato's World of Ideas or Forms (Burgin, 2011; 2017). This place of information looks contradictory to the assumption that information is physical and to the fact of the incessant presence of information in nature, society, and technology. The goal of this work is to solve this paradox by explaining the connections between the ideal and the material, further developing the approach to materialization introduced in (Burgin and Markov, 1991).

We begin with the global structure of the world. It is described by the Existential Triad of the World, which consists of three components: the Physical (Material) World, the Mental World, and the World of Structures (Burgin, 2010). The Physical (Material) World represents the physical reality studied by natural and technological sciences, the Mental World encompasses different forms and levels of mentality, and the World of Structures consists of various kinds and types of ideal structures.

While the Physical and Mental Worlds are accessible by human senses, the World of Structures can be reached only by the intellect, as Plato predicted (Burgin, 2017). To better understand the World of Structures, it is helpful to perceive its necessity for the completion and elucidation of the interplay between the two sensible Worlds. With the increase of the sophistication of science and the complexity of the studied phenomena, the world of ideal structures becomes indispensable for a correct understanding of the Physical and Mental Worlds. Starting with physicists, who understood the key role of abstract mathematics for physics, people will begin to comprehend the necessity and expediency of the structural reality.

According to the Ontological Principle O2 of the GTI and its additional forms (Burgin, 2010), information plays the same role in the World of Structures as energy plays in the Physical (Material) World.

However, according to the Ontological Representability Principle of the GTI, for any portion of information I, there is always a representation Q of this portion of information for a system R. Often this representation is material, and as a result, being materially represented, information becomes physical. Consequently, a physical representation of information can be treated as the materialization of this information.

Moreover, according to the Ontological Embodiment Principle of the GTI, for any portion of information I, there is always a carrier C of this portion of information for a system R. This carrier is, as a rule, material, and this makes information even more physical. A physical carrier of information can also be treated as the materialization of this information, or more exactly, a materialization of the second level.

Now we can see that the paradox of the impact of such an ideal essence as information on physical reality is caused by the very popular confusion of information per se, its representations, and its carriers.

The difference between a portion of information, its representation, and its carrier is demonstrated by the following example. Let us consider a letter/text written/printed on a piece of paper. Then the text is a representation of the information it contains, while the piece of paper is a carrier of this information. Note that the text is not information because the same information can be represented by another text.

In this context, the materialization of information has two meanings. First, materialization of information is the process of representing this information by a material object/system. Second, it is a material/physical representation of this information, that is, a result of the materialization process.

Note that material/physical representations of information can be natural or artificial. For instance, DNA is a natural representation and carrier of information, while computer memory is an artificial carrier of information, and the state of a computer memory is an artificial representation of information.

There is also the process of information idealization, which goes in the opposite direction and is reciprocal but not always inverse to the materialization of information. Both of these processes are formally represented as named sets and chains of named sets. This allows the utilization of named set theory as a tool for the exploration of information materialization and idealization.

References

Burgin, M. Theory of Information: Fundamentality, Diversity and Unification, World Scientific, New York/London/Singapore, 2010

Burgin, M. (2011) Information in the Structure of the World, Information: Theories & Applications, v.18, No. 1, pp. 16 - 32

Burgin, M. (2017) Ideas of Plato in the context of contemporary science and mathematics, Athens Journal of Humanities and Arts, v. 4, No. 3, pp. 161 – 182

Burgin, M. and Markov, K. A formal definition of materialization, in Mathematics and Education in Mathematics, Sofia, 1991, pp. 175-179

Landauer, R. (2002) Information is Inevitably Physical, in Feynman and Computation: Exploring the limits of computers, Westview Press, Oxford, pp. 76-92

15:50-16:00 UTC

Sun 12th Sep

General discussion


MONDAY, SEPTEMBER 13

Theoretical and Foundational Problems in Information Studies TFPI 2021

Block 1:

4:00-7:00 UTC

Mon 13th Sep

TFPI

Mark Burgin

4:00-4:25 UTC

Mon 13th Sep

8. Paradigm Shift, an Urgent Issue for the Studies of Information Discipline

Yixin Zhong

Beijing University of Posts and Telecommunications, Beijing 100876, China

Abstract:

1. The Definition of the Paradigm for a Scientific Discipline

The paradigm for a scientific discipline is defined as the integrity of the scientific view and the methodology for that discipline, in which the scientific view defines what the essence of the discipline is, while the related methodology defines how to determine the scientific approach to the studies of the discipline. Thus, the paradigm for a scientific discipline delimits the norm that the studies in that discipline should follow. As a result, the studies within each category of scientific discipline should employ its own paradigm. Therefore, the studies of a physical discipline should employ the paradigm for the physical discipline, whereas the studies of an information discipline should employ the paradigm for the information discipline.

2. The Role The Paradigm Plays in Scientific Studies

The paradigm as defined above plays the role of leading the studies of the related scientific discipline. As a matter of fact, whether the studies of the discipline succeed or fail in practice depends on whether the paradigm employed for the discipline is correct. So, if the paradigm for the information discipline is employed, the studies of the information discipline will succeed no matter how difficult the information discipline is. Otherwise, the studies of the information discipline will encounter a series of misunderstandings and setbacks.

3. The Real Situation Concerning The Paradigm in Information Discipline

It is a very surprising discovery, made through an in-depth investigation, that the paradigm employed so far for the studies of the information discipline has been the one for a physical discipline (see Table 1), not the one for an information discipline (see Table 2 below).


Table 1. Major Features of the Paradigm of the Physical Discipline

  1. Scientific View:
    1.1 Object of study: physical, with no subjective factors
    1.2 Focus of study: the structure of the physical system
    1.3 Property of the object: deterministic in nature

  2. Methodology:
    2.1 General approach: divide and conquer
    2.2 Means of description/analysis: purely formal methods
    2.3 Means of decision-making: form matching


Table 2. Major Features of the Paradigm of the Information Discipline

  1. Scientific View:
    1.1 Object of study: information processes within subject-object interaction
    1.2 Focus of study: to achieve the goal of a double win (subject and object)
    1.3 Property of the object: non-deterministic in nature

  2. Methodology:
    2.1 General approach: methodology of information ecology
    2.2 Means of description/analysis: form-utility-meaning trinity
    2.3 Means of decision-making: understanding-based


The use of the paradigm for the physical discipline in the studies of the information discipline is surely the root of all problems related to the studies of the information discipline. The major problems existing in the studies of the information discipline include at least the following: (1) diversity without unity in theory, separation among the studies of information in various sectors, and separation between the studies of information and the studies of intelligence, all due to the physical methodology of “divide and conquer”; (2) merely formal analysis in the studies of information, knowledge, and intelligence, without considering the high importance of subject factors, also due to the physical methodology of “purely formal analysis”.

4. Conclusion

The paper presents an appeal: the paradigm practically employed so far in the studies of the information discipline worldwide should be shifted as soon as possible.


4:25-4:50 UTC

Mon 13th Sep

9. Structural Analysis of Information: Search for Methodology

Marcin J. Schroeder

Global Learning Center, Tohoku University, Sendai, 980-8576, Japan

Abstract

The apparent terminological simplicity is the most deceiving disguise of complex concepts and questions whose real depth is obscured by our habits of thinking. There are many words which we use every day believing that we know their meaning well, until someone asks us for an explanation. This applies to the question about the meaning of the concept of information. The term “information” is among the most frequently used in a myriad of contexts, but the concept which it represents escapes all efforts to provide a commonly acceptable definition. There is nothing unusual about information being elusive. There have been many fundamental concepts generating never-ending discussions. Maybe, as was suggested already by C. E. Shannon, the most celebrated pioneer of the study of information, we need more than one concept of information. Another possibility is that the study of information, understood in multiple and not always precisely defined ways and in very diverse contexts, should continue, and the future will show which definition provides the most adequate and inclusive description of information, or how to integrate the multiple definitions into one acceptable to everyone.

However, if we want to maintain the identity and integrity of the study of information in its further development in the absence of a uniform definition, we have to establish methodological tools, not necessarily identical for all forms of inquiry into informational phenomena, but at least consistent and preferably allowing comparisons of the results of inquiries. Thus, the methodological unity of the study of information, even if it may not be complete, should serve as a guiding ideal for its inquiries.

This work is not intended as a study of universal methodological tools for all possible forms of inquiry in diverse disciplines. Its main objective is to search for methodological tools for the study of information with a sufficient level of universality to relate studies of information within different disciplines. However, even with this much more restricted objective, it is necessary to clarify some misunderstandings present in methodological analyses of practically all scientific disciplines and in all contexts.

The title of this contribution refers to the structural analysis of information as a distinctive methodological tool for two reasons. The first is that this form of inquiry is clearly underrepresented and inadequate in the study of information. The second, closely related reason is that there are many misconceptions about the distinction between different forms of inquiry with surprisingly little attention paid to the role of structural analysis not only in the study of information, but in the majority of scientific and intellectual domains.

The latter, more general issue, which is present not only in the study of information, can be identified in the representative example of the relationship between quantitative, qualitative, and structural methodologies. The popular conviction of the apparent complementary, dichotomic opposition of the first two methodologies is based on misconceptions about the role of mathematics in general and of numbers in particular, which are perpetuated in virtually all scientific inquiries. This mistaken view of the two methodologies and of their exclusive and universal role in all inquiries obscures the fact that they both are just instances of structural analysis, in which mathematics can offer methodological tools going well beyond the present toolkit.

The fallacy of the opposition and complementarity of the quantitative and qualitative methodologies has its source in hidden assumptions that are very rarely recognized in scientific practice. Another source is in the overextension of mathematical concepts, which have very specific and restricted meanings in mathematics, to applications in science where the conditions of their definitions are not satisfied or not considered.

An outstanding example of this type of confusion is the use of the concept of measure, which in scientific applications is frequently understood as an arbitrary assignment of real numbers to some set of not always clearly defined objects. This use of the term measure is very far from the understanding of the concept of a measure in mathematics. It would have been just a terminological inconsistency with mathematics, not an error, if this non-mathematical concept of measure were not mixed up with the mathematical concept in drawing conclusions from the results of inquiry. Very often the meaning of the term measure is simply not clarified. Sometimes the intended use of the term is consistent with measure theory, but nothing is said about the related concepts of the theory whose absence makes the central concept meaningless. The reference to a measure without any clarification of how it is defined has as its consequence the hidden import of the structure on which it has to be defined to retain its mathematical meaning (a sigma ortho-algebra of measurable subsets of the measure space). Thus, there is usually a hidden structure associated with the subject of each study which serves as a tool for inquiry, but which is excluded from the overt considerations.

If we decide to disregard the conditions in the mathematical concept of a measure and consider it simply as a real-valued function on some set S, then we define on S just an equivalence relation given by the partition of S into subsets of elements with equal values of the function. However, in this case we have a pure case of the qualitative methodology based on partitions of a set into equivalence classes, which can be identified with qualities or properties of the elements of S, but which can equally well be identified with numerical values. This shows that the distinction between the quantitative and qualitative methodologies is fuzzy. In both methodologies we assume, overtly or most frequently covertly, essentially the same structure of an equivalence relation imposed on the universe of our study. More importantly, in both cases, by imposing hidden mathematical structures on the subjects of our study, we actually carry out structural analysis involving equivalence relations. As long as the concept of a measure is not the one from measure theory, and a measure is simply an assignment of a numerical value, the distinction between the two methodologies is rather conventional and is based on the way equivalence relations are presented. The engagement of the mathematical concept of a measure adds to the consideration an additional structure of a non-trivial ortho-lattice of measurable subsets.
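
To make this point concrete, here is a minimal Python sketch (ours, for illustration only): a numerical assignment and a qualitative labelling of the same set S hide one and the same structure, a partition of S into equivalence classes.

# Minimal sketch (ours, illustrative only): a numerical assignment and a
# qualitative labelling of the same set S induce the same hidden structure,
# a partition of S into equivalence classes.
from collections import defaultdict

def partition_by(assignment: dict) -> set:
    """Group elements of S into classes of equal assigned values."""
    classes = defaultdict(set)
    for element, value in assignment.items():
        classes[value].add(element)
    return {frozenset(c) for c in classes.values()}

S = {"a", "b", "c", "d", "e"}
quantitative = {"a": 1.0, "b": 1.0, "c": 2.5, "d": 2.5, "e": 7.0}
qualitative = {"a": "low", "b": "low", "c": "mid", "d": "mid", "e": "high"}

# Both assignments hide exactly the same equivalence relation on S:
assert partition_by(quantitative) == partition_by(qualitative)
print(partition_by(quantitative))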

This work goes beyond the critical review of the hidden but omnipresent elements of structural methodology in the study of information. There is a legitimate question about the positive, creative aspect of the recognition of the role of structural analysis. The source of the conceptual tools necessary for further development of the structural methodology of information can be identified in invariance with respect to transformations, the main methodological strategy of physics and several other natural sciences. Surprisingly, this idea was completely missing in the work of Shannon, but it was already present in the 1928 paper by R. V. L. Hartley, cited by Shannon in the footnote to the first page. Hartley did not refer directly to structural analysis, but used invariance as a tool to derive and to interpret his formula for the amount of information, simpler than Shannon's.
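
For reference, the two formulas alluded to here can be stated in their standard textbook forms (not quoted from the talk):

% Hartley (1928): a message of n selections from an alphabet of s symbols
H_{\mathrm{Hartley}} = \log s^{n} = n \log s
% Shannon (1948): a source emitting symbol i with probability p_i
H_{\mathrm{Shannon}} = -\sum_{i} p_{i} \log p_{i}
% For equiprobable symbols (p_i = 1/s), Shannon's entropy reduces to Hartley's \log s.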

4:50-5:15 UTC

Mon 13th Sep

10. Quality of information

Krassimir Markov

ITHEA®, Sofia, Bulgaria

Abstract:

Introduction. This paper aims to present the concept of “Quality of information” in the frame of the General Information Theory (GIT). The development of GIT started in the period 1977-1980. The first publication on GIT appeared in 1984 [Markov, 1984]. Further publications on GIT are listed in [Markov et al, 2007].

Entity. In our examination, we consider the real world as a space of entities. The entities are built by other entities, connected with relationships. The entities and relationships between them form the internal structure of the entity they build.

Interaction. Building the relationship between entities is a result of the contact among them. During the contact, one entity impacts the other entity and vice versa. In some cases the opposite impact may not exist but, in general, the contact may be considered as two mutually opposite impacts which occur at the same time. The set of contacts between entities forms their interaction.

Reflection. During the establishing of the contact, the impact of an entity changes temporarily or permanently the internal structure and/or functionality of the impacted entity. In other words, the realization of the relationships between entities changes temporarily or permanently their internal structure and/or functionality at one or a few levels. The change of the structure and/or functionality of an entity, due to the impact of another entity, we denote by the notion "reflection". The entities of the world interact continuously. It is possible that, after one interaction, another may be realized. In this case, the changes received by any entity during the first interaction may be reflected by the new entity. This means that secondary (transitive) reflection exists. One special case is the external transitive self-reflection, where the entity reflects itself as a secondary reflection during any external interaction. Some entities have the capacity for internal self-reflection. Internal self-reflection is possible only at very high levels of organization of the entities, i.e. for entities with a very large and complicated structure.

INFOS. Further on we will pay attention to complex entities with possibilities for self-reflection. To avoid misunderstandings with the concepts of subject, agent, animal, human, society, humanity, living creatures, etc., we use the abstract concept “INFOS” to denote each of them, as well as all artificial creatures which have features similar to the former ones. Infos has the possibility to reflect reality via receptors and to operate with the received reflections in its memory. The opposite is also possible: via effectors, Infos has the possibility to realize in reality some of its (self-)reflections from its consciousness.

Information and Information Expectation. If the following diagram exists and if it is commutative, then it represents all reflection relations: 1) in reality: entities and their reflections; 2) in consciousness: mental reflections of real or mental entities; 3) between reality and consciousness: perceiving data and creating mental reflections. In the diagram: 1) in reality: “s” is the source entity and “r” is a reflection in the recipient entity; “e” is a mapping from s to r; 2) in Infos’ consciousness: “si” is a reflection of the source entity and “ri” is a reflection of the reflection of “s”; “ei” is a mapping from si to ri.

“si” is called “information expectation” and “ri” is called “information” about “s” received from the reflection “r”. Commonly, the reflection “r” is called “data” about “s”.
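
The diagram itself is not reproduced in this text. On one plausible reading of the description above, it is the following square, where the labels p and q for the perceiving maps between reality and consciousness are ours, not the author's:

% One plausible reconstruction of the described reflection diagram
% (the perceiving maps p and q are our labels):
\begin{array}{ccc}
s & \xrightarrow{\;e\;} & r \\
\downarrow p & & \downarrow q \\
s_i & \xrightarrow{\;e_i\;} & r_i
\end{array}
\qquad \text{commutativity: } q \circ e = e_i \circ p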

Quality of information. “si” and “ri” may be coincident or different. In the second case, some “distance” between them exists. The nature of the distance may differ in accordance with the kind of reflections. In any case, the smaller this distance, the higher the quality of the information “ri”. In other words, the “quality of information” is the measure of the distance between the information expectation and the corresponding information.
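
The abstract explicitly defers the formulas to a later paper. Purely as a hypothetical illustration of the idea, a quality measure built on a simple Hamming-style distance between expectation and received information might look like this in Python:

# Hypothetical illustration only: the abstract defers the actual formulas
# to a later paper. Here "quality" is modelled as inversely related to a
# toy Hamming-style distance between expectation and received information.

def distance(expectation: str, information: str) -> int:
    """Count the positions at which the two reflections differ."""
    length = max(len(expectation), len(information))
    e, i = expectation.ljust(length), information.ljust(length)
    return sum(1 for a, b in zip(e, i) if a != b)

def quality(expectation: str, information: str) -> float:
    """1.0 when expectation and information coincide; smaller otherwise."""
    return 1.0 / (1.0 + distance(expectation, information))

print(quality("red round apple", "red round apple"))  # 1.0, coincident
print(quality("red round apple", "red round melon"))  # lower quality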

Conclusion. This paper aimed to introduce the concept of “quality of information” from the point of view of the General Information Theory. Formulas for computing the quantity and quality of information will be given in another paper.

References

Kr. Markov. A Multi-domain Access Method. Proc. of Int. Conf. "Computer Based Scientific Research". Plovdiv, 1984. pp. 558-563.

Kr. Markov, Kr. Ivanova, I. Mitov. Basic Structure of the General Information Theory. IJ ITA, Vol. 14, No. 1, 2007, pp. 5-19.


5:15-5:40 UTC

Mon 13th Sep

11. A QFT Approach to Data Streaming in Natural and Artificial Neural Networks

Gianfranco Basti* and Giuseppe Vitiello**

* Faculty of Philosophy, Pontifical Lateran University, 00120 Vatican City

**Department of Physics “E. R. Caianiello”, University of Salerno, 84084 Fisciano (Salerno), Italy

Abstract:

During the last twenty years a lot of research has been done on the development of probabilistic machine learning algorithms, especially in the field of artificial neural networks (ANNs), for dealing with the problem of data streaming classification and, more generally, for the real-time extraction/manipulation/analysis of information from (infinite) data streams. For instance, sensor networks, healthcare monitoring, social networks, financial markets, etc. are among the main sources of data streams, often arriving at high speed and always requiring real-time analysis, above all for identifying long-range and higher-order correlations among data, which are continuously changing over time. Indeed, the standard statistical machine learning algorithms in ANN models, starting from their progenitor, the so-called backpropagation (BP) algorithm – based on the “sigmoid function” acting on the activation function of the neurons of the hidden layers of the net for detecting higher-order correlations in the data, and on the stochastic gradient descent (GD) algorithm for the (supervised) refresh of the neuron weights – were developed for static, even though huge, bases of data (“big data”). They are therefore systematically inadequate and unadaptable for the analysis of data streaming, i.e., of dynamic bases of data characterized by sudden changes in the correlation length among the variables (phase transitions), and then by the unpredictable variation of the number of the signifying degrees of freedom of the probability distributions. From the computational standpoint, this means the infinitary character of the data streaming problem, whose solution is in principle unreachable by a Turing machine (TM), either classical or quantum (QTM). Indeed, for dealing with the infinitary challenge of data streaming, the exponential increase of computational speed derived from the usage of quantum machine learning algorithms is not very helpful, whether using “quantum gates” (QTM) or “quantum annealing” (quantum Boltzmann machine (QBM)), both objects of intensive research during the last years. In the case of ANNs, the improvement given by the Boltzmann machine (BM) learning algorithm over GD is that BM uses “thermal fluctuations” for jumping out of the local minima of the cost function (simulated annealing), so as to avoid the main limitation of the GD algorithm in machine learning. In this framework, the advantage of quantum annealing in a QBM is that it uses the “quantum vacuum fluctuations”, instead of the thermal fluctuations of classical annealing, for bringing the system out of shallow (local) minima, by using the “quantum tunnelling” effect. This outperforms thermal annealing, especially where the energy (cost) landscape consists of high but thin barriers surrounding shallow local minima. However, despite the improvement that, at least in some specific cases, QBM can give for finding the absolute minimum size/length/cost/distance among a large even though finite set of possible solutions, the problem of data streaming remains, because in this case this finitary supposition does not hold. As the analogy with the coarse-graining problem in statistical physics emphasizes very well, the search for the global minimum of the energy function makes sense only after the system has performed a phase transition.
That is, physically, after a sudden change in the correlation length among variables – generally under the action of an external field – has determined a new way in which they are aggregated, defining the signifying number of degrees of freedom N characterizing the system statistics after the transition. In other terms, the infinitary challenge implicit in data streaming is related to phase transitions, so that, from the QFT standpoint, this is the same phenomenon as the infinite number of degrees of freedom of the Haag Theorem, characterizing quantum superposition in QFT systems in far-from-equilibrium conditions. This requires the extension of the QFT formalism to dissipative systems, inaugurated by the pioneering works of N. Bogoliubov and H. Umezawa. The Bogoliubov transform, indeed, allows mapping between different phases of the bosonic and fermionic quantum fields, making the dissipative QFT – differently from QM and from QFT in their standard (Hamiltonian) interpretation for closed systems – able to calculate over phase transitions. Indeed, inspired by the modeling of natural brains as many-body systems, the QFT dissipative formalism has been used to model ANNs [1, 2]. The mathematical formalism of QFT requires that for open (dissipative) systems, like the brain, which is in a permanent “trade” or “dialog” with its environment, the degrees of freedom of the system (the brain), say A, need to be “doubled” by introducing the degrees of freedom Ã describing the environment, according to the coalgebraic scheme: A → A × Ã. Indeed, Hopf coproducts (sums) are generally used in quantum physics to calculate the total energy of a superposition quantum state. In the case of a dissipative system, the coproducts represent the total energy of a balanced state between the system and its thermal bath. In this case, because the two terms of the coproduct are not mutually interchangeable, as they are in the case of closed systems (where the sum concerns the energy of two superposed particles), we are led to consider the non-commutative q-deformed Hopf bialgebras, out of which the Bogoliubov transformations involving the A, Ã modes are derived, and where the q-deformation parameter is a thermal parameter strictly related to the Bogoliubov transform [3]. These transformations induce phase transitions, i.e., transitions through physically distinct spaces of the states describing different dynamical regimes in which the system can sit. The brain is thus continuously undergoing phase transitions (criticalities) under the action of the inputs from the environment (Ã modes). The brain activity is therefore the result of a continual balancing of fluxes of energy (in all its forms) exchanged with the environment. The balancing is controlled by the minimization of the free energy at each step of time evolution. Since fluxes “in” for the brain (A modes) are fluxes “out” for the environment (Ã modes), and vice versa, the Ã modes are the time-reversed images of the A modes (Wigner distribution): they represent the Double of the system.
In such a way, by the doubling of the algebras – and then of the state spaces, and finally of the Hilbert spaces – the Hamiltonian canonical (closed) representation of a dynamic system can be recovered also in the case of a dissipative system, by inserting in the Hamiltonian the degrees of freedom of the environment (thermal bath). From the theoretical computer science (TCS) standpoint, this means that the system satisfies the notion of a particular type of automaton, the Labelled State Transition Machine (LTM), i.e., the so-called infinite-state LTM, coalgebraically interpreted and used in TCS for modelling infinite streams of data [2, 4]. Indeed, the doubling of the degrees of freedom (DDF) {A, Ã} just illustrated, characterizing a dissipative QFT system, acts as a dynamic selection criterion of admissible (because balanced) states (minimum of the free energy). Effectively, it acts as a mechanism of “phase locking” between the data flow (environment) and the system dynamics. Moreover, each system-environment entangled (doubled) state is univocally characterized by a dynamically generated code N, or dynamic labelling (memory addresses). In our model, indeed, an input triggers the spontaneous breakdown of the symmetry (SBS) of the system dynamical equations. As a result of SBS, massless modes, called Nambu-Goldstone (NG) modes, are dynamically generated. The NG bosons are quanta of long-range correlations among the system elementary components, and their coherent condensation value N in the system ground state (the least-energy state or vacuum state |0⟩, which in our dissipative case is a balanced, or 0-sum, energy state with T > 0) describes the recording of the information carried by that input, indexed univocally (labeled) in N. Coherence denotes that the long-range correlations do not interfere destructively in the system ground state [2]. The memory state turns out to be, therefore, a squeezed coherent state |0(θ)⟩_N = Σ_j w_j(θ) |w_j⟩_N, to which the Glauber information entropy Q directly applies, with |w_j⟩ denoting states of A and Ã pairs, and θ the time- and temperature-dependent Bogoliubov transformation parameter. |0(θ)⟩_N is, therefore, a time-dependent ground state at finite temperature T > 0; it is an entangled state of the modes A and Ã, which provides the mathematical description of the unavoidable interdependence between the brain and its environment. Coherence and entanglement imply that quantities relative to the A modes depend on corresponding ones of the Ã modes. To conclude, the natural implementation of such a quantum computational architecture for data streaming machine learning based on the DDF principle is by an optical ANN using the tools of optical interferometry, just as in the applications discussed in [3]. The fully programmable architecture of this optical chip indeed allows “depicting” over coherent light waves as many interference figures as we like, and above all keeping their phase coherences stable in time, so as to allow the implementation of quantum computing architectures (either quantum gates or squeezed coherent states) working at room temperature. In our application for data streaming analysis, the DDF principle can be applied in a recursive way, by using the mutual information as a measure of phase distance, as an optimization tool for minimizing the input-output mismatch.
In this architecture, indeed, the input of the net acts not on the initial conditions of the net dynamics, as in the ANN architectures based on statistical mechanics, but on the boundary conditions (thermal bath) of the system, so as to implement the architecture of a net in unsupervised learning, as required by the data streaming challenge.
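
For orientation, the Bogoliubov transformation mixing the system modes A with the doubled modes Ã can be written in its standard bosonic (thermo-field) form, a textbook expression rather than a quotation from the abstract:

% Standard bosonic Bogoliubov transformation in the doubled (thermo-field)
% formalism; \theta is the time- and temperature-dependent parameter above:
A(\theta) = A \cosh\theta - \tilde{A}^{\dagger} \sinh\theta, \qquad
\tilde{A}(\theta) = \tilde{A} \cosh\theta - A^{\dagger} \sinh\theta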

References

[1] E. Pessa and G. Vitiello, "Quantum dissipation and neural net dynamics," Bioelectrochem. and Bioenerg., vol. 48, pp. 339-342, 1999.

[2] G. Basti, A. Capolupo and G. Vitiello, "Quantum Field Theory and Coalgebraic Logic in Theoretical Computer Science," Prog. in Bioph. & Mol. Biol., Special Issue: Quantum information models in biology: from molecular biology to cognition, vol. 130, Part A, pp. 39-52, 2017.

[3] G. Basti, G. G. Bentini, M. Chiarini, A. Parini et al., "Sensor for security and safety applications based on a fully integrated monolithic electro-optical programmable microdiffractive device," in Proc. SPIE 11159, Electro-Optical and Infrared Systems: Technology and Applications XVI, Strasbourg, France, 2019, pp. 1115907 (1-12).

[4] J. J. M. Rutten, "Universal coalgebra: a theory of systems," Theor. Comp. Sc., vol. 249, no. 1, pp. 3-80, 2000.


5:40-6:05 UTC

Mon 13th Sep

12. Arithmetic loophole in Bell's theorem: Overlooked threat to entangled-state quantum cryptography

Marek Czachor

Institute of Physics and Computer Science, Gdańsk University of Technology, Gdańsk, Poland

Abstract:

Bell's theorem is supposed to exclude all local hidden-variable models of quantum correlations. However, an explicit counterexample shows that a new class of local realistic models, based on generalized arithmetic and calculus, can exactly reconstruct rotationally symmetric quantum probabilities typical of two-electron singlet states. Observable probabilities are consistent with the usual arithmetic employed by macroscopic observers, but counterfactual aspects of Bell's theorem are sensitive to the choice of hidden-variable arithmetic and calculus. The model is classical in the sense of Einstein, Podolsky, Rosen, and Bell: elements of reality exist and probabilities are modeled by integrals of hidden-variable probability densities. Probability densities have a Clauser-Horne product form typical of local realistic theories. However, neither the product nor the integral nor the representation of rotations are the usual ones. The integral has all the standard properties, but only with respect to the arithmetic that defines the product. Certain formal transformations of integral expressions one finds in the usual proofs à la Bell do not work, so standard Bell-type inequalities cannot be proved. The system we consider is deterministic, local-realistic, and rotationally invariant; observers have free will and detectors are perfect; hence the system is free of all the canonical loopholes discussed in the literature.
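
The generalized arithmetic at stake can be illustrated by the standard projective construction used in this literature, in which a bijection f transports the ordinary operations onto new ones; the concrete choice of f below is our example, not the paper's:

# Sketch of a generalized ("non-Diophantine") arithmetic induced by a
# bijection f: ordinary operations are conjugated by f. The concrete
# choice f(x) = x**3 is only an example, not taken from the paper.
import math

def f(x: float) -> float:          # a bijection of the reals
    return x ** 3

def f_inv(y: float) -> float:      # its inverse, a sign-aware cube root
    return math.copysign(abs(y) ** (1 / 3), y)

def gadd(x: float, y: float) -> float:
    """Generalized addition: x (+) y = f^-1(f(x) + f(y))."""
    return f_inv(f(x) + f(y))

def gmul(x: float, y: float) -> float:
    """Generalized multiplication: x (*) y = f^-1(f(x) * f(y))."""
    return f_inv(f(x) * f(y))

print(gadd(1.0, 1.0))   # ~1.26 (= 2**(1/3)), not 2: addition has changed
print(gmul(2.0, 3.0))   # 6.0: this particular f happens to preserve products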


References

M. Czachor, Acta Phys. Polon. A 139, 70 (2021)

M. Czachor, Found. Sci. 25, 971-985 (2020)

6:05-6:30 UTC

Mon 13th Sep

13. Advanced NLP procedures as premises for the reconstruction of the idea of knowledge

Rafal Maciag

Institute of Information Studies, Jagiellonian University, Krakow

Abstract:

The purpose of the presented reasoning is to show the natural, historical process of changing the disposition of knowledge from the classical situation described by Plato to the reconstructed situation, in which the disposer/owner/user can be any dynamic complex system that interacts with the environment. It can be assumed that the latter possibility has been at least partially implemented experimentally for language in the form of technical NLP procedures. The aforementioned process is the result of the simultaneous development of metamathematical reflection and the directly following and related process of developing the understanding of language as a representation of the world. Both of these processes stabilized the idea of the existence of world-independent descriptive and analytical systems, i.e. systems that do not meet the conditions of any reference, or systems in which this reference is specific and indirect. The representative of the first possibility is metamathematics; of the second, language. Such an interpretation of language opened the way to the emergence of various approaches of a generally constructivist character, i.e. variously defining the linguistic system's participation in representing reality, leading to the highlighting and emphasizing of its particularity and locality in a historical and spatial sense. It is an extensive reflection developing in two separate approaches: the hermeneutic (philological) one and the one based on the concept of discourse. The closing of this road, and a kind of revolution, is the appearance of artificial systems generating original, intelligible, and meaningful text in NLP procedures, e.g. GPT-3, which meets the previously loosened condition of containing knowledge in the light of the aforementioned linguistic reflection. Such a possibility is expressis verbis included in the theory of discourse. Since any text that is syntactically correct, intelligible, and meaningful can be considered a container of knowledge in the light of text theory, the key question becomes the way and conditions of such knowledge's existence and the source of its origin in the case of texts generated by machines, e.g. advanced NLP algorithms. This role can be fulfilled by a model of textual knowledge completely isolated from the human. Ultimately, breaking this barrier opens the possibility of interpreting knowledge of a much broader nature. This situation requires a reinterpretation of knowledge and the way it exists, although it also updates old problems such as truth or meaning. The answer to this need may be the discursive theory of knowledge, which can also be generalized to knowledge gathered and articulated in any non-linguistic way.

6:30-7:00 UTC

Mon 13th Sep

General Discussion


CONTRIBUTIONS FROM

Symmetry, Structure, and Information Conference SIS 2021

NON PLENARY

7:30-8:00 UTC

Mon 13th Sep

TILINGS FOR CONJOINED ORIGAMI CRANES USING LESS SYMMETRIC QUADRILATERALS

Takashi YOSHINO

8:00-8:30 UTC

Mon 13th Sep

DEMONSTRATION OF THE CONEPASS TO CONSTRUCT THE GOLDEN ANGLE

Akio HIZUME

8:30-9:00 UTC

Mon 13th Sep

ARTISTIC INTUITION: HOW SYMMETRY, STRUCTURE AND INFORMATION CAN COLLIDE IN ABSTRACT PAINTING

Marina ZAGIDULLINA

9:00-9:30 UTC

Mon 13th Sep

FUTURE ETHNOMATHEMATICS FOR A ‘NEW BLETCHLEY’

Ted GORANSON

9:30-10:00 UTC

Mon 13th Sep

FRACTAL-LIKE STRUCTURES IN INDIAN TEMPLES

Sreeya GHOSH, Sandip PAUL and Bhabatosh CHANDA

10:00-10:30 UTC

Mon 13th Sep

INNER ANGLES OF TRIANGLES IN PARAMETER SPACES OF PROBABILITY DISTRIBUTIONS

Takashi YOSHINO

CONTRIBUTIONS FROM

Information in Biologically Inspired Computing Architectures Conference (BICA)

Block 2:

13:00-16:00 UTC

Mon 13th Sep


BICA

David Kelley

INVITED LECTURE

13:00-14:00 UTC

Mon 13th Sep

14. Toward a Unified Model of Cognitive Functions

Pei Wang

Temple University

Abstract:

NARS (Non-Axiomatic Reasoning System) provides a unified model for cognitive functions. The system is designed to work in situations where it has insufficient knowledge and resources with respect to the problems to be solved, so it must be adaptive and use whatever is available to get the best solutions obtainable at the moment. NARS represents various types of knowledge in a conceptual network that summarizes the system’s experience, and constantly self-organizes the network to better meet the demands. Different aspects of this uniform self-organizing process can be seen as cognitive functions, such as reasoning, learning, planning, predicting, creating, guessing, proving, perceiving, acting, communicating, decision making, problem solving, etc. NARS has been mostly implemented, and the system has produced preliminary results as expected.

INVITED LECTURE

14:00-15:00 UTC

Mon 13th Sep

15. A Nested Hierarchy of Analyses: From Understanding Computing as a Great Scientific Domain, through Mapping AI and Cognitive Modeling & Architectures, to Developing a Common Model of Cognition

Paul Rosenbloom

USC Institute for Creative Technologies

Abstract:

The hierarchy of disciplines that spans computing, AI and cognitive modeling, and cognitive architectures is analyzed in tandem to yield insights into each of these disciplines individually and to jointly illuminate the final, most focused one. Computing has the widest scope, characterized here in terms of a Great Scientific Domain that is akin to the physical, life and social sciences. Once this notion is introduced, the field is broken down according to the domains involved in different segments of it and the types of relations that exist among these domains. AI, cognitive modeling and (biologically inspired) cognitive architectures are, in particular, characterized in this manner. With the focus then narrowed down to these three areas, an analysis of them in terms of four general dichotomies and their cross-products induces maps over their underlying technologies that yield both general insight into the contours of the areas themselves and more specific insight into the structure of cognitive architectures. With the focus further narrowed to just this latter topic, a Common Model of Cognition is presented that abstracts away from the contents of any particular architecture toward a community consensus concerning what must be in any architecture that is to support a human-like mind.

PANEL DISCUSSION

15:00-16:00 UTC

Mon 13th Sep

PANEL DISCUSSION Moderated by Peter Boltuc (University of Illinois, Springfield & Warsaw School of Economics)

Panelists: David Kelley (AGI Laboratory, Seattle, USA) and Roman Yampolskiy (University of Louisville)


TUESDAY, SEPTEMBER 14

CONTRIBUTIONS FROM

Symmetry, Structure, and Information Conference SIS 2021

Block 1:

4:00-7:00 UTC

Tue 14th Sep

Symmetry - SIS

Denes Nagy

INVITED LECTURE

4:00-5:00 UTC

Tue 14th Sep

16. THE DEVELOPMENT AND ROLE OF SYMMETRY IN ANCIENT SCRIPTS

Peter Z. REVESZ

Dep. Computer Science and Engineering, University of Nebraska-Lincoln (United States of America)

Abstract:

Many ancient scripts have an unexpectedly high number of signs that contain a type of symmetry where the left and right sides of the signs are mirrored along a central vertical line. For example, within our English alphabet, which is a late derivative of the Phoenician alphabet, the following letters have this type of symmetry: A, H, I, M, N, O, T, U, V, W, X and Y. That is, a total of 12/26 = 46.2% of the letters of the English alphabet have this type of symmetry. Similarly, the ancient Minoan Linear A script, which existed from about 1700 to 1450 BC, contains the following mirrored signs:

[Image of the 42 mirrored Linear A signs not reproduced in this text]
These 42 signs are about half of the most frequent signs in the Linear A script, which are estimated to number about 90. In this paper we try to answer the question: “Why is there such a high percentage of mirrored signs in ancient scripts?”

We believe that the unexpectedly high number of symmetric signs is due to a development of writing called boustrophedonic, literally “as the ox turns”, meaning that at the end of a line the next line continues right below the end and goes in the opposite direction. Hence a left-to-right line is followed by a right-to-left line, which is again followed by a left-to-right line, and so on. This is reminiscent of how oxen plow a plot of land. The main problem with boustrophedonic writing is that when we look at a particular line, we do not automatically know which way it should be read, unlike in modern English texts, where every line is read from left to right. As a modern example, suppose we would like to write ‘GOD’ in a row that is to be read from right to left. This looks like an easy task that can be done by simply writing ‘DOG’. The problem is that the reader may not recognize that the row needs to be read from right to left, hence ‘God’ becomes ‘dog’ for the reader. Ancient scribes compensated for this problem by vertically mirroring any nonsymmetric letter so that the orientation of the words would indicate the direction. Using this concept, instead of ‘DOG’, they would have written:

[Image of the mirrored word not reproduced in this text]
While boustrophedonic writing with mirroring of asymmetric signs is an attractive solution, and it occurs also in the Mycenaean Linear B script and the Indus Valley Script, it causes the problem of having to know and correctly write the mirrored versions of the asymmetric signs. Many people make mistakes when writing mirrored letters and can read mirrored letters much more slowly than ordinary texts. We believe that these factors, combined with the observation that only a few frequently occurring asymmetric signs are enough to indicate the writing direction, led to the development of symmetric forms for most signs. We test this hypothesis by considering earlier scripts where there are no examples of boustrophedonic writing. For example, the Cretan Hieroglyphic script, a predecessor of the Linear A script, contains significantly fewer symmetric signs, and only the following signs of the Phaistos Disk, which may belong to an even earlier layer of scripts, are symmetric:

[Image of the symmetric Phaistos Disk signs not reproduced in this text]
That is, only 13/45 = 28.9% of the Phaistos Disk signs are symmetric. Hence, on the island of Crete, the percentage of script signs with symmetry nearly doubled within a few centuries, showing the importance of symmetry in writing.
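
The counting argument and the writing convention described above are easy to reproduce; the following Python sketch (ours, for illustration) counts the mirror-symmetric capital letters and renders a text boustrophedonically:

# Illustrative sketch (ours): count the vertically mirror-symmetric capital
# letters and render a text boustrophedonically. Plain ASCII cannot mirror
# individual glyphs, so alternate lines are simply reversed here.

MIRROR_SYMMETRIC = set("AHIMNOTUVWXY")  # the 12 letters listed in the abstract
alphabet = [chr(c) for c in range(ord("A"), ord("Z") + 1)]
print(f"{len(MIRROR_SYMMETRIC)}/{len(alphabet)} = "
      f"{len(MIRROR_SYMMETRIC) / len(alphabet):.1%}")   # 12/26 = 46.2%

def boustrophedon(text: str, width: int) -> str:
    """Wrap `text` to `width` and write every second line right-to-left."""
    lines = [text[i:i + width] for i in range(0, len(text), width)]
    return "\n".join(line if k % 2 == 0 else line[::-1]
                     for k, line in enumerate(lines))

print(boustrophedon("ASTHEOXPLOWSTHEFIELD", 7))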


5:00-5:30-6:00 UTC

Tue 14th Sep

ENANTIOMORPHIC TALKS ON SYMMETRY (Contributions from Symmetry, Structure and Information Conference)

5:00-5:30-6:00 UTC

Tue 14th Sep

17. Symmetry and Information: An odd couple (?)

Dénes Nagy

International Society for the Interdisciplinary Study of Symmetry (Hungary and Australia)

Symmetry (from the Greek syn + metron, “common measure”), structure (from the Latin structūra, “fitting together, building”), and information (from the Latin īnfōrmātiō, “formation, conception, education”) are scholarly terms that have ancient roots, but gained new meanings in modern science. They also have in common that all of them played important roles in interdisciplinary connections, linking even science and art.

We argue that symmetria - asymmetria could have played a relevant role at the birth of mathematics as an abstract science with a deductive methodology (we present a partly new hypothesis which unites those of Szabó and of Kolmogorov); then we discuss the related, but different, modern meaning-family of symmetry.

(1) The structural approach gained special importance in geometric crystallography from the mid-19th century (14 Bravais lattices), which, using symmetry considerations, led to a major breakthrough in the 1890s with the complete list of all possible ideal crystal structures, that is, the 230 space symmetry groups (Fedorov, Schoenflies, and Barlow). In the early 20th century, the focus on structures in linguistics (Saussure) also inspired later developments in the social sciences, the intensive study of relations, and structuralism as a methodology (Lévi-Strauss, Piaget, and others).

From the mid-1930s, a group of French mathematicians used a similar path in order to present “new math” (Bourbaki group).

(2) Theory of communication led to the study of information in mathematical-statistical context and finally a method for measuring information (Hartley, Shannon, Wiener), which became useful for the emerging computer science in the mid-20th century. Then information theory was also used for the study of aesthetical questions (Moles, Bense).

Looking back, we may see interesting changes:

- The original Greek concept of symmetria was related to measurement, but the usual modern understanding of symmetry implies rather a yes/no question: an object or a process is either symmetric or not. We argue that it is important to go back to the roots and to consider symmetry-measures. In fact, the concept of dissymmetry (as the lack of some possible elements of symmetry), which gained special importance in structural chemistry (Pasteur), theoretical physics (P. Curie), and crystallography (Shubnikov and Koptsik), pointed in such a direction.

- The concept of information was originally not related to measurement, but the modern mathematical approach introduced measures in bits, via the number of yes/no questions (Hartley, Shannon). On the other hand, the meaning of information was lost in the mathematical-statistical approach. There were important works related to the meaning of information (MacKay, Bar-Hillel and Carnap, Shreider). It would be important to unite these two and modify them according to the new needs. Quantum computing needs a new information theory related not to the bit, but to the qubit; here we may need symmetry considerations (cf. the Bloch sphere representation).

In some sense, the “odd couple” of symmetry (as an ordering principle) and information (knowledge based on measurements) came together in solving the Maxwell-demon problem. In this thought experiment, which seemingly violates the law of entropy, the demon, as the doorman between the two chambers of a closed container filled with gas, introduces new order by separating the high-speed and the low-speed gas molecules, opening the door always in due time. This method would solve our heating and cooling problems in everyday life. The demon, however, must use information, specifically measuring the speed of molecules for the purpose of his work (Szilard). Thus, it would be very expensive heating and cooling. Another example where symmetry and information work together: the vertices of some regular and semi-regular polyhedra inscribed into a sphere present the centers of circles in the case of the densest packing of a given number of equal circles on this sphere (Tammes problem), which is important for spherical coding. The term information asymmetry is well established in economic science. The fact that it may create an imbalance of power in transactions and, in the worst case, market failure led to various studies and indeed to the Nobel Prize of three economists (Akerlof, Spence, and Stiglitz).
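
The cost of the demon's measurements can be made quantitative with the classic Szilard-Landauer bound (a standard result, added here for reference):

% Szilard (1929) / Landauer (1961): acquiring or erasing one bit of
% information has a minimum thermodynamic cost at temperature T:
W \;\geq\; k_{B} T \ln 2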

We suspect that some generalized symmetry and information concepts, which are required by the recent developments in science and art, may help each other.

5:00-5:30-6:00 UTC

Tue 14th Sep

18. Antinomies of Symmetry and Information

Marcin J. Schroeder

IEHE, Tohoku University, Sendai, Japan

This is a proposal for the resolution of several apparent antinomies within the studies of information, of symmetry, and of the mutual relationship between symmetry and information. A selection of examples of such antinomies is followed by a nutshell overview of their resolution.

The earliest example of the opposition in the views on information can be found in the critical reaction to the claim of Shannon’s foundational work denying the importance of the semantic aspects of communication. This denial exposed his work to the objection that it is not about information. The issue was never completely resolved, although it faded with the increased popularity of naive claims that the problem disappears when we demand in the definition that, whatever information is, it has to be true.

The relationship between the measure of information given by Shannon in the form of entropy and the measure called negentropy, introduced by Schrödinger as a magnitude which, although non-negative, has its value opposite to the non-negative entropy, is antinomial. This curious pairing, although apparently sufficiently harmless not to attract much attention, is the tip of the iceberg of a much deeper internal opposition in the view of information. Shannon’s view of information is tied to the recipient of a message, i.e. it is the observer’s view. Schrödinger’s negentropy is a numerical characteristic of the acquired freedom in forming organized structure within the system.

An example representing the antinomies of symmetry has the form of an opposition of two oppositions. One of them is between the artificial, intentional character of symmetry associated with human aesthetic preference, and the natural character of asymmetry associated with the spontaneous, unconstrained generation of forms. The other, reversed opposition is provided by biological evolution, in which the steps in the transition to higher forms of life are marked by diverse forms of symmetry breaking, leading from the highly symmetric proto-organismic simple systems to the complex human organism with its asymmetric functional specialization.

Finally, there is an example of the opposition in views on the relationship between information and symmetry, with its main axis between the claim that information has its foundation in asymmetry and the view that physics is essentially a study of symmetries, so that if information is physical we should base its study on the analysis of its symmetries. The former position originates in the Curie Principle that symmetric causes cannot have asymmetric effects, justifying the focus on asymmetry, as it can guide us to the actual causes of phenomena. The early expression of Bateson’s metaphor of information as “a difference which makes a difference” was in his explanation of the rules of biological asymmetry.

The elimination of these and other antinomies is based on the recognition of the two manifestations of information, selective and structural. The latter requires the involvement of symmetry understood as invariance with respect to groups of transformations. The key point is that the apparent antinomies of information, of symmetry, and of their relationship are consequences of the fallacious idea of asymmetry, which obscures the relations and transitions between diverse forms of symmetry.
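
The notion of symmetry as invariance invoked here can be stated compactly (a standard group-theoretic formulation, ours, not quoted from the abstract):

% Symmetry as invariance under a group G of transformations acting on X:
X \text{ is } G\text{-symmetric} \iff \forall g \in G:\; g \cdot X = X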

PANEL DISCUSSION

6:00-7:00 UTC

Tue 14th Sep

PANEL DISCUSSION (Contributions from Symmetry, Structure and Information Conference)

19. Moderators’ Introduction to the Panel Discussion

Moderated by Dénes Nagy & Marcin J. Schroeder

Confirmed Panelists: Ted Goranson, Peter Revesz, Vera Viana, Takashi Yoshino

The theme of this discussion and the conference is Symmetry, Structure, and Information. Each of these three ideas escapes a commonly accepted definition. On the other hand, if you ask a passerby whether he or she understands the words symmetry, structure, information, most likely the answer would be “sure”. In the very unlikely case that the answer is “not at all, but I really would like to understand symmetry”, the person already knows a lot. Then invite him or her to attend the Congress on Symmetry in Porto (https://symmetrycongress.arq.up.pt/) next July.

We can expect a question about the objectives of discussing the triad of symmetry, structure, and information. After all, if we add one more idea, that of complexity, then we have a collection of the most elusive and at the same time most important notions of philosophical and scientific inquiry. Isn't it better to focus on each of them separately, and to attempt a synthesis only after we have clear results of such inquiries? This is the main question addressed to the panelists and the audience.

This question can be reformulated or complemented by the question about the importance, or lack thereof, of the mutual relationships between the ideas in the leitmotif of the conference. This includes importance for philosophical, theoretical, or practical reasons.

Finally, we can consider the question of what is missing from the picture painted by the title with its three ideas only. What ideas, notions, or concepts should we include, or even give priority, in our inquiries into symmetry?

Contribution from Digital Humanism (Dighum) Conference

Block 2:

12:00-16:00 UTC

Tue 14th Sep


Dighum

Wolfgang Hofkirchner & Hans-Jörg Kreowski

KEYNOTE SPEECH

12:00-13:00 UTC

Tue 14th Sep

20. Digital Humanism

Julian Nida-Rümelin

Munich University, Germany

Digital Humanism, as I understand it, defends the human condition against transhumanistic transformations and animistic regressions. The core element of humanism is the idea of authorship: humans are the authors of their lives, they are responsible for what they believe and desire, reasonable insofar as they are affected by reasons, free insofar as they can evaluate and choose.

Humanism in ethics and politics strives to extend human authorship through formation and social policy. Digitization changes the technological conditions of humanist practice, but it does not transform humans into cyborgs or establish machines as persons. Digital humanism rejects transhumanistic and animistic perspectives alike; it rejects the idea of homo deus, the human god that creates e-persons as friends and possibly one day as enemies.

In my talk I will outline the basic ideas of digital humanism and draw some ethical and political conclusions.

Biographic Note:

Julian Nida-Rümelin is a well-known philosopher; he teaches at Munich University. He was president of the German Philosophical Association and is a member of the American Philosophical Association and the European Academy of Sciences and Arts, among others. He is an Honorary Professor at the Humboldt University in Berlin. He was a visiting professor at Minneapolis, St Gallen, Cagliari, Trieste, Rome (CNR), Turin, and elsewhere.

Nida-Rümelin was State Minister for Culture and Media in the first cabinet of Chancellor Gerhard Schröder.

His main areas of interest are the theory of rationality (practical and theoretical), ethics, and political philosophy. He has published more than a hundred scientific articles in these fields and several books.

Nida-Rümelin also publishes outside academia on topics like economics and ethics, the philosophy of architecture, and digitisation. His book on „Digital Humanism“, written together with his wife Nathalie Weidenfeld, was awarded „The best political book of the year 2018“ in Austria.

KEYNOTE SPEECH

13:00-14:00 UTC

Tue 14th Sep

21. Humanism Revisited

Rainer E. Zimmermann

Institute for Design Science Munich e.V. / Clare Hall, UK - Cambridge

For a long while now, we have been living in an inflationary world of „-isms“, at least as far as the intellectual discourse is concerned. Very often, this apparently generic designation (actually of Greek origin), mainly owed to the alleged conceptual striving for precision, notably in the analytic philosophy of Anglo-Saxon descent, is neither helpful nor even sufficiently redundant, if not superfluous altogether in the first place. In particular, if accompanied by another fashionable adjective. (Unfortunately, I have to admit that I myself once introduced such a construction, when talking of “transcendental materialism” – but sometimes there is no other way available in order to achieve a minimal amount of clarification. This is probably the exception to the rule.) It turns out after all that most of the time, the meticulous differentiation of concepts is more apt to veil clarity and pretend an ostensive depth of reflexion than to provide access to an actual gain in acquired knowledge.

This having been said, we cannot deny, however, that the concept of “humanism” is indeed one of the oldest and most omnipresent concepts, but also one of the most iridescent and enigmatic concepts aiming at a designation of species, while belonging to the afore-mentioned set of -isms. Nevertheless, as far as it goes, it is also a concept of considerable proper strength when pointing to fundamental components of what can be understood as a kind of basic ethics. In fact, humanism shares with ethics the disadvantage of being usually ill-defined and a source of misunderstandings. Hence, in order to avoid the re-invention of what is already known and sufficiently understood, it is always useful to ask for the conceptual origins of the concept in question. And this is what we will do in this present talk: we will look for the Greek and Roman origins, trace the development within the Renaissance framework, and then turn to more recent aspects. In the end, we will find that the origins of humanism provide a suitable entry into the epistemological foundations of living an adequately reflected life, despite the underlying suspicion of triviality that is always connected with the classificatory utilization of -isms. We also find that it is quite unnecessary to (re-)formulate new versions of humanism, because essentially the mentioned origins stay structurally invariant through space and time. (The same is actually true for ethics.)

PANEL DISCUSSION

14:15-15:45 UTC

Tue 14th Sep

PANEL DISCUSSION (Contribution from Dighum Conference)

Digital Humanism – How to shape digitalisation in the age of global challenges?

Panelists: Kirsten Bock, Yagmur Denizhan, José María Díaz Nafría, Rainer Rehak


WEDNESDAY, SEPTEMBER 15

The 2021 Summit of the International Society for the Study of Information.

Block 1:

4:00-7:00 UTC

Wed 15th Sep

IS4SI

Marcin Schroeder

  1. Keynote Jack Copeland 4:00-5:00 UTC

  2. Keynote Terry Deacon 5:00-6:00 UTC

  3. Keynote Yukio-Pegio Gunji 6:00-7:00 UTC

INVITED LECTURE

4:00-5:00 UTC

Wed 15th Sep

22. The Indeterminacy of Computation: Slutz, Shagrir, and the mind

B. Jack Copeland

University of Canterbury, Christchurch, New Zealand

Some computational systems have the counterintuitive property that it is indeterminate what mathematical function they compute. One might say that such a system simultaneously performs multiple computations, one or another of which may be selected by a second system accessing the first, or by a number of systems in a milieu of selecting systems surrounding the first. This talk outlines the potential role the concept of the indeterminacy of computation has to play in the philosophy of information and emphasizes its importance. I begin by examining the concept’s history. It seems first to have emerged in the work of American electronic engineer Ralph Slutz, during the boom in computer development following the second world war. Decades then passed, with only one or two tiny bursts of interest shown in the concept by philosophers — until, around 2000, Israeli philosopher Oron Shagrir reinvented the concept and developed it in a series of recent important papers. In this overview of what is now an emerging field, I introduce a system of levels useful for describing computationally indeterminate systems, together with the concept of ‘computational multi-availability’ and the associated ‘trough model’ for exploiting computational indeterminacy. Turning to potential applications of computational indeterminacy, I sketch the role the concept can play in engineering and also in the philosophy of mind.
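
A toy illustration of computational indeterminacy (ours, not taken from the talk): one and the same physical device computes AND under a positive-logic reading of its voltages and OR under a negative-logic reading.

# Toy illustration (ours): one physical device, two equally legitimate
# interpretations of its voltages, two different computed functions.

def device(v1: str, v2: str) -> str:
    """Physical behaviour: output is 'high' iff both inputs are 'high'."""
    return "high" if (v1, v2) == ("high", "high") else "low"

positive_logic = {"high": 1, "low": 0}   # reading 1: high voltage means bit 1
negative_logic = {"high": 0, "low": 1}   # reading 2: high voltage means bit 0

for v1 in ("low", "high"):
    for v2 in ("low", "high"):
        out = device(v1, v2)
        print(f"pos: {positive_logic[v1]} {positive_logic[v2]} -> {positive_logic[out]}   "
              f"neg: {negative_logic[v1]} {negative_logic[v2]} -> {negative_logic[out]}")
# Under the positive-logic reading the device computes AND; under the
# negative-logic reading, the very same behaviour computes OR (De Morgan).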

INVITED LECTURE

5:00-6:00 UTC

Wed 15th Sep

23. Falling Up: The Paradox of Biological Complexity

Terrence W. Deacon

UC Berkeley

There is an unexpected twist to the evolution of the complexity of biological information. A survey of living complexities at many levels suggests that it is often a spontaneous loss of capacity, a breakdown of individuation, and decreased complexity at one level that serendipitously contributes to the emergence of a more complex collective integrity at a higher level of scale, such as from individual cells to multicelled organisms like ourselves. This points to a critical non-Darwinian process that is the inverse of a progressive improvement of adaptation.

I will provide evidence gleaned from a wide range of phenomena to demonstrate that evolutionary complexification often results from a tendency to simplify, to do less, to shift the burden elsewhere if possible. It is an expression of Life’s least work principle. Life just backs into ever more intricate webs of dependency as it explores ways to avoid work. And this web of interdependencies only becomes more entangled with time—producing a complexity ratchet.

In particular, cases of hierarchic complexification may result from the displacement or externalization of functional information onto some outside influence, whether environmental or social. This reduces the selection maintaining the corresponding intrinsically provided information, which consequently becomes susceptible to spontaneous degeneration. With its degeneration there is increasing selection to maintain access to this extrinsic source. As a result, duplication, displacement, degeneration, and complementation can build recursively, level upon level, from molecular information to organism adaptations to mental and social cognition. The result is that what we call 'information' tends to spontaneously complexify in depth, with higher levels dependent on and emergent from lower levels, thus making a single-level concept of information increasingly inadequate for use in biology.

INVITED LECTURE

6:00-7:00 UTC

Wed 15th Sep

24. Almost disjoint union of Boolean algebras appeared in Punch Line

Yukio Pegio Gunji

Department of Intermedia Art and Science, School of Fundamental Science and Technology, Waseda University, Tokyo, Japan

While humor is one of the most intriguing topics in human behavior, there is little mathematical research on the universal structure of humor. Recently, quantum psychology has attempted to describe how humor arises from uncertainty. Although quantum theory is a sufficient condition, it is not a necessary condition for describing humor. Instead of starting from quantum theory, we start by describing a sequence of humor text in stand-up comedy. The relation between preceding and subsequent words is expressed as a binary relation, which leads to a lattice by rough set approximation techniques. We show here that the binary relation found in stand-up comedies entails a lattice called an almost disjoint union of Boolean algebras, which is a general form of orthomodular lattice. It implies that quantum-like structure can be obtained even if we do not start from quantum theory.

In a binary relation, a cat is distinguished from non-cat in a focused context, i.e., there is no relation between cat and non-cat in a sub-relation. However, a cat is mixed up with non-cat outside the focused context, i.e., there is a relation between a cat and non-cat. The ambiguity of relation and no-relation implies uncertainty with respect to indication. Since each element in a focused context is distinguished from any other element, a focused context is expressed as a diagonal relation. If there are two contexts, 2 by 2 and 3 by 3, in a 5 by 5 symmetrical relation, the 5 by 5 relation consists of the 2 by 2 and 3 by 3 diagonal relations and of relations between any other pairs outside the diagonal relations. By using a rough set lattice approximation, the fixed points with respect to upper and lower approximation based on the binary relation entail a lattice. In the case of the 2 by 2 and 3 by 3 diagonal relations, each diagonal relation entails a Boolean algebra, and the relations between any other pairs outside the diagonal relations glue the Boolean algebras at the top and bottom, which entails an almost disjoint union of Boolean algebras.

We here define a subjective probability for a lattice, satisfying that if A ≤ B then P(A) ≤ P(B). This probability reveals that the probability of an element appearing before the punch line is very low, while the probability of an element at the punch line is very high. It implies that tension in the audience increases before the punch line, since the audience cannot make sense of an event that can rarely happen, and that the tension is relaxed and released at the punch line, since the audience faces an event that can frequently happen. Humor is thus explained on the basis of a quantum-like structure, without starting from quantum theory.
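To make the construction concrete, here is a minimal Python sketch (an illustrative toy, not the author's code): it builds a 5 by 5 relation from 2 by 2 and 3 by 3 diagonal contexts glued by the cross-context pairs, computes rough-set lower and upper approximations, and lists the fixed points of the composed approximation, which, ordered by inclusion, form the lattice. The specific relation and the choice of lower(upper(X)) = X as the fixed-point condition are assumptions made for illustration.

from itertools import combinations

U = range(5)
# Two focused contexts: {0, 1} and {2, 3, 4}. Inside a context the
# relation is diagonal (each element related only to itself); pairs
# lying across the two contexts are related.
R = {(x, y) for x in U for y in U
     if x == y or ({x, y} & {0, 1} and {x, y} & {2, 3, 4})}

def after(x):   # elements related to x
    return {y for y in U if (x, y) in R}

def lower(X):   # lower approximation: neighbourhood contained in X
    return {x for x in U if after(x) <= X}

def upper(X):   # upper approximation: neighbourhood intersects X
    return {x for x in U if after(x) & X}

# Fixed points of the composed approximation, ordered by inclusion,
# give the elements of the resulting lattice.
fixed = [set(S) for r in range(len(U) + 1)
         for S in combinations(U, r) if lower(upper(set(S))) == set(S)]
print(fixed)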

Block 2:

13:00-16:00 UTC

Wed 15th Sep

IS4SI

Gordana Dodig-Crnkovic

  1. Keynote Aaron Sloman 13:00-14:00 UTC

  2. Keynote Michael Levin 14:00-15:00 UTC

  3. Discussion 15:00-16:00 UTC (Moderator: Gordana Dodig-Crnkovic)

KEYNOTE LECTURE

13:00-14:00 UTC

Wed 15th Sep

25. Why don't hatching alligator eggs ever produce chicks?

Aaron Sloman

School of Computer Science, University of Birmingham, UK

[Retired, honorary professor of AI and Cognitive Science]

How does a child understand the impossibility of separating linked rings?

Neither ancient forms of spatial reasoning, used by mathematicians and engineers centuries before Euclid, nor the spatial abilities of intelligent species such as squirrels, crows, elephants, pre-verbal humans, and newly hatched creatures, like the young avocets in this video clip from a BBC Springwatch programme: https://www.cs.bham.ac.uk/research/projects/cogaff/movies/avocets/avocet-hatchlings.mp4, can be explained by fashionable neural net theories, since neural nets cannot be trained inside eggs, and they cannot represent, let alone prove, spatial impossibility or necessity. As Immanuel Kant pointed out in 1781, necessity and impossibility are not very high and very low probabilities. Recently developed logic-based formal reasoning mechanisms cannot explain the abilities of ancient humans, pre-verbal toddlers, and other intelligent species. The only remaining possibility seems to be that hitherto unnoticed chemistry-based mechanisms, required for biological assembly, also underpin these complex, species-specific forms of intelligence.

Different hatchlings, such as baby alligators or turtles, have very different physical forms and very different capabilities. What chemical processes in eggs can determine both complex physical forms (including intricate internal physiology) and complex physical behaviours, unmatched by current robots? The production, within each individual, of bones, tendons, muscles, glands, nerve fibres, skin, hair, scales, or feathers, and also of intricate networks of blood vessels, nerve fibres and other physiological structures, is clearly chemistry-based, and far more complex than the chemistry-based behaviours of shape-changing organisms such as slime molds. The combination of complexity, compactness, energy-efficiency, and speed of production of processes in an egg is also unmatched by human-designed assembly lines. Early stages of gene expression are well understood, but not the later processes producing species-specific forms of intelligence in eggs. How are these extraordinarily complex assembly processes controlled?

I'll suggest that they use virtual machines with hitherto unknown, non-space-occupying mechanisms, whose construction needs to be boot-strapped via multi-layered assembly processes far more complex than anything achieved in human-designed assembly plants, yet using far less matter and energy in their operation. Developing explanatory theories will need new forms of multi-disciplinary collaboration, with profound implications for theories of brain function, replacing current theories that cannot explain ancient mathematical discoveries. The mechanisms must be primarily chemistry-based, since neurons develop relatively late. We need an entirely new form of brain science, giving far more credit to chemical processes whose computational powers exceed those of both digital computers and neural nets. Is that why Alan Turing was exploring chemistry-based morphogenesis shortly before he died?

For more details see:

https://www.cs.bham.ac.uk/research/projects/cogaff/misc/sloman-morcom.html

KEYNOTE LECTURE

14:00-15:00 UTC

Wed 15th Sep

26. Morphogenesis as a model for computation and basal cognition

Michael Levin

Tufts Center for Regenerative and Developmental Biology, Tufts University

Embryos and regenerating systems produce very complex, robust anatomical structures and stop growth and remodeling when those structures are complete. One of the most remarkable things about morphogenesis is that it is not simply a feed-forward emergent process, but one that has massive plasticity: even when disrupted by manipulations such as damage or changing the sizes of cells, the system often manages to achieve its morphogenetic goal. How do cell collectives know what to build and when to stop? In this talk, I will highlight some important knowledge gaps about this process of anatomical homeostasis that remain despite progress in molecular genetics. I will then offer a perspective on morphogenesis as an example of a goal-directed collective intelligence that solves problems in morphospace and physiological space. I will sketch the outlines of a framework in which evolution pivots strategies to solve problems in these spaces and adapts them to behavioral space via brains. Neurons evolved from far more ancient cell types that were already using bioelectrical networks to coordinate morphogenesis long before brains appeared. I will show examples of our work to read and write the bioelectric information that serves as the computational medium of cellular collective intelligences, enabling significant control over growth and form. I will conclude with a new example that sheds light on anatomic plasticity and the relationship between genomically-specified hardware and the software that guides morphogenesis: synthetic living proto-organisms known as Xenobots. In conclusion, a new perspective on morphogenesis as an example of unconventional basal cognition unifies several fields (evolutionary biology, cell biology, cognitive science, computer science) and has many implications for practical advances in regenerative medicine, synthetic bioengineering, and AI.

PANEL DISCUSSION

15:00-16:00 UTC

Wed 15th Sep

PANEL DISCUSSION – DIALOGUE

Moderated by Gordana Dodig-Crnkovic

Panelists: Aaron Sloman, Michael Levin


THURSDAY, SEPTEMBER 16

Contributed by Morphological Computing of Cognition and Intelligence Conference MORCOM 2021

Block 1:

4:00-7:00 UTC

Thu 16th Sep

MORCOM


Gordana Dodig-Crnkovic/Marcin Schroeder

KEYNOTES

4:00-4:20 UTC

Thu 16th Sep

27. Cross-Embodied Cognitive Morphologies: Decentralizing Cognitive Computation Across Variable-Exchangeable, Distributed, or Updated Morphologies

Jordi Vallverdú

Universitat Autònoma de Barcelona, Catalonia, Spain

Most bioinspired morphological computing studies have started from a human analysis bias: considering cognitive morphology as encapsulated by one body, which can of course have enactive connections with other bodies, but which is defined by clear bodily boundaries. Such complex biological inspiration has directed the research agenda of a huge number of labs and institutions over the last decades. Nevertheless, there are other bioinspired examples, and even technical possibilities that go beyond biological capabilities (like constant morphological updating and reshaping, which calls for remapping cognitive performances). And despite the interest of swarm cognition (which includes superorganisms of flocks, swarms, packs, schools, crowds, or societies) in such non-human-centered approaches, there is still a biological constraint: such cognitive systems have permanent bodily morphologies and only interact between similar entities. In all cases, even considering amazing possibilities such as the largest living organism on Earth, the honey fungus Armillaria solidipes measuring 3.8 km across in the Blue Mountains of Oregon, the possibility of thinking about cross-morphological cognitive systems has not been put on the table: nests of intelligent drones acting as a single part of AI systems together with other co-working morphologies, for example. I am therefore suggesting the necessity of thinking about cross-embodied cognitive morphologies, more dynamic and challenging than any existing cognitive system so far studied or created.

INVITED SPEAKERS

4:20-4:40 UTC

Thu 16th Sep

28. Designing Physical Reservoir Computers

Susan Stepney

University of York, UK

Abstract:

Computation is often thought of as a branch of discrete mathematics, using the Turing model. That model works well for conventional applications such as word processing, database transactions, and other discrete data processing applications. But much of the world's computing power resides in embedded devices, sensing and controlling complex physical processes in the real world. Other computational models and paradigms might be better suited to such tasks. One example is the reservoir computing model, which can be instantiated in a range of different material substrates. This approach can support smart processing `at the edge', allowing a close integration of sensing and computing in a single conceptual model and physical package.

As an example, consider an audio-controlled embedded device: it needs to sense sound input, compute an appropriate response, and direct that response to some actuator such as an electrical motor. We can have an unconventional solution using reservoir computing, which exploits the dynamics of a material to perform computation directly. One form of MEMS (microelectromechanical system) device is a microscopic beam that oscillates when it is accelerated and outputs an electrical signal. This kind of device is used in a car's airbag as an accelerometer to detect crashes. Such a device might be used in an audio-controlled system as follows. The incident sound waves make the beam vibrate (in an analogous way to how they make a microphone's diaphragm vibrate). This vibrating beam can be configured as a reservoir computer, where the non-linear dynamics of the complex vibrations are used directly to compute and classify the audio input. The electrical output from the device is this classified response, sent directly to the motor. Here, the sensor and the computer are the very same physical device, which also performs signal transduction (from sound input to electrical output), with no power-hungry conversion between analogue and digital signals, and no digital computing.
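As a software analogue of such a physical reservoir, the following minimal echo state network sketch (a generic illustration assuming numpy, not the MEMS device itself) lets a fixed random recurrent network play the role of the vibrating material, and trains only the linear readout.

import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 500                        # reservoir size, sequence length
u = np.sin(np.linspace(0, 20, T))      # toy input signal
y = (u > 0).astype(float)              # toy target: classify the sign

W_in = rng.uniform(-0.5, 0.5, N)       # fixed input weights
W = rng.normal(0, 1, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # spectral radius below 1

x = np.zeros(N)
states = []
for t in range(T):                     # the untrained dynamics does
    x = np.tanh(W @ x + W_in * u[t])   # the "computation"
    states.append(x.copy())
S = np.array(states)

ridge = 1e-6                           # train only the linear readout
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(N), S.T @ y)
print("train MSE:", np.mean((S @ W_out - y) ** 2))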

Such systems, implementable in a wide range of materials, offer huge potential for novel applications, of smart sensors, edge computing, and other such devices, reducing, and in some cases potentially eliminating, the need for classical digital central resources. Many novel materials are being suggested for such uses, leading to interdisciplinary collaborations between materials scientists, physicists, electronic engineers, and computer scientists. Before such systems can become commonplace, multiple technical and theoretical issues need to be addressed.

In order to ensure that these novel materials are indeed computing, rather than simply acting as physical objects, we need a definition of physical computing. I describe one such definition, called Abstraction-Representation Theory, and show how this framework can then be exploited to help design correctly functioning physical computing devices.

INVITED SPEAKERS

4:40-5:00 UTC

Thu 16th Sep

29. The Aims of AI: Artificial and Intelligent

Vincent C. Müller

TU/e (& U Leeds, Turing Institute)

Abstract:

Explanation of what 'artificial' means, esp. in contrast to 'living'. First approximation of what 'intelligent' means, esp. in contrast to a discussion of the Turing Test: do not focus on 'intellectual intelligence'; do not focus on the human case; do not rely on behaviour alone. Intelligence vs. rational behaviour, e.g. instrumental vs. general intelligence. Formulation of an aim for full-blown AI: a computing system with the ability to successfully pursue its goals. This ability will include perception, movement, representation, rational choice, learning, as well as evaluation and revision of goals; thus morphology will contribute to the orchestration of intelligent behaviour in many, but not all, of these cognitive functions.

5:10-5:30 UTC

Thu 16th Sep

30. Cognition Through Organic Computerized Bodies. The Eco-Cognitive Perspective

Lorenzo Magnani

University of Pavia, Pavia, Italy

Abstract:

Eco-cognitive computationalism sees computation in context, exploiting the ideas developed in those projects that have originated the recent views on embodied, situated, and distributed cognition. Turing's original intellectual perspective has already clearly depicted the evolutionary emergence in humans of information, meaning, and the first rudimentary forms of cognition, as the result of a complex interplay and simultaneous coevolution, in time, of the states of brain/mind, body, and external environment. This cognitive process played a fundamental heuristic role in Turing's invention of the universal logical computing machine. It is by extending this eco-cognitive perspective that we can see that the recent emphasis on the simplification of cognitive and motor tasks generated in organic agents by morphological aspects implies the construction of appropriate mimetic bodies, able to render the accompanying computation simpler, according to a general appeal to the "simplexity" of animal embodied cognition.

Hence, in computation the morphological features are relevant. It is important to note that, in the case of morphological computation, a physical computer does not need to be intelligently conceived: it can be naturally evolved. This means that living organisms or parts of organisms (and their artefactual copies) can potentially execute information processing and can potentially be exploited to execute their computations for us. It is by further deepening and analyzing the perspective opened by these novel, fascinating approaches that we see ignorant bodies as domesticated to become useful "mimetic bodies" from a computational point of view, capable of carrying cognition and intelligence. This new perspective shows how the computational domestication of ignorant entities can originate new, variegated, unconventional cognitive embodiments, thus joining the new research field of so-called natural computing. Finally, I hope it will become clear that eco-cognitive computationalism does not aim at furnishing a final and fixed definition of the concept of computation but stresses the historical and dynamical character of the concept.

5:30-5:50 UTC

Thu 16th Sep

31. Digital Consciousness and the Business of Sensing, Modeling, Analyzing, Predicting, and Taking Action

Rao Mikkilineni

Golden Gate University, US

Abstract:

“In brief, neither qualia nor free will seems to pose a serious philosophical problem for the concept of a conscious machine. …. The richness of information processing that an evolved network of sixteen billion cortical neurons provides lies beyond our current imagination. Our neuronal states ceaselessly fluctuate in a particularly autonomous manner, creating an inner world of personal thoughts. Even when confronted with identical sensory inputs, they react differently depending on our mood, goals, and memories.”

Stanislas Dehaene (2014) "Consciousness and the Brain: Deciphering How the Brain Codes our Thoughts", Penguin Books, New York, p. 265

Preamble:

Recent advances in various disciplines of learning all point to a new understanding of how information processing structures in nature operate. Not only may this knowledge help us solve the age-old philosophical question of "mind-body dualism", but it may also pave a path to designing and building self-regulating automata with a high degree of sentience, resilience, and intelligence.

Classical computer science, with its origins in John von Neumann's stored-program implementation of the Turing machine, has given us tools to decipher the mysteries of physical, chemical, and biological systems in nature. Both symbolic computing and neural network implementations have allowed us to model and analyze various observations (including both mental and physical processes) and to use information to optimize our interactions with each other and with our environment. In turn, our understanding of the nature of information processing structures in nature, gained through both physical and computer experiments, is pointing us in a new direction in computer science, going beyond the Church-Turing thesis boundaries of classical computer science.

Our understanding of information processing structures, and of the internal and external behaviors driving their evolution in all physical, chemical and biological systems in nature, suggests the need for a common framework addressing the function, structure and fluctuations of these systems, which are composed of many autonomous components interacting with each other under the influence of physical, chemical and biological forces. As Stanislas Dehaene (Dehaene 2014, p. 162) points out, "What is required is an overreaching theoretical framework, a set of bridging laws that thoroughly explain how mental events relate to brain activity patterns. The enigmas that baffle contemporary neuroscientists are not so different from the ones that physicists resolved in the nineteenth and twentieth centuries. How, they wondered, do the macroscopic properties of ordinary matter arise from a mere arrangement of atoms? Whence the solidity of a table, if it consists almost entirely of a void, sparsely populated by a few atoms of carbon, oxygen, and hydrogen? What is a liquid? A solid? A crystal? A gas? A burning flame? How do their shapes and other tangible features arise from a loose cloth of atoms? Answering these questions required an acute dissection of the components of matter, but this bottom-up analysis was not enough; a synthetic mathematical theory was needed."

Fortunately, our understanding of the theory of structures and of information processing in nature points the way to a theoretical framework that allows us to:

  1. Explain the information processing architecture gleaned from our studies of physical, chemical and biological systems, to articulate how to model and represent the cognitive processes that bind brain-mind-body behaviors, and also,

  2. Design and develop a new class of digital information processing systems that are autopoietic. An autopoietic machine is capable "of regenerating, reproducing and maintaining itself by production, transformation and destruction of its components and the networks of processes downstream contained in them." (A small toy sketch of this self-maintenance loop follows below.)
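As a deliberately small illustration of the quoted definition (an invented Python sketch, not Mikkilineni's structural machines or triadic automata), the fragment below lets components spontaneously fail and be destroyed, while the system produces replacements so that its organization persists:

import random

class Component:
    def __init__(self, name):
        self.name, self.alive = name, True
    def step(self):
        if random.random() < 0.1:        # spontaneous failure
            self.alive = False

class AutopoieticSystem:
    """Toy self-maintenance: destruction and production of components
    preserve the system's target organization."""
    def __init__(self, n):
        self.target = n
        self.parts = [Component(f"c{i}") for i in range(n)]
        self.produced = 0
    def step(self):
        for p in self.parts:
            p.step()
        self.parts = [p for p in self.parts if p.alive]   # destruction
        while len(self.parts) < self.target:              # production
            self.produced += 1
            self.parts.append(Component(f"r{self.produced}"))

system = AutopoieticSystem(5)
for _ in range(100):
    system.step()
print(f"{len(system.parts)} components alive, {system.produced} regenerated")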

All living systems are autopoietic and have figured out a way to create information processing structures that exploit physical and chemical processes to manage not only their own internal behaviors but also their interactions with their environment, so as to assure their survival in the face of constantly changing circumstances. Cognition, an important part of living systems, is the ability to process information through perception using different sensors. Cognitive neuroscience has progressed in "cracking open the black box of consciousness" to discern how cognition works in managing information with neuronal activity. Functional magnetic resonance imaging, used very cleverly to understand the "function of consciousness, its cortical architecture, its molecular basis, and even its diseases", now allows us to model the information processing structures that relate cognitive behaviors and consciousness.

In parallel, our understanding of the genome provides insight into information processing structures with autopoietic behavior. The gene encodes the processes of “life” in an executable form, and a neural network encodes various processes to interact with the environment in real time. Together, they provide a variety of complex adaptive structures. All of these advances throw different light on the information processing architectures in nature.

Fortunately, a major advance in a new mathematical framework allows us to model information processing structures and push the boundaries of classical computer science, just as relativity pushed the boundaries of classical Newtonian physics, and statistical mechanics pushed the boundaries of thermodynamics by addressing function, structure and fluctuations in the components constituting physical and chemical systems. Here are some of the questions we need to answer in the pursuit of designing and implementing an autopoietic machine with digital consciousness:

  • What is Classical Computer Science?

  • What are the Boundaries of Classical Computer Science?

  • What do we learn from Cognitive Neuroscience about the Brain and Consciousness?

  • What do we learn from the Mathematics of Named Sets, Knowledge Structures, Cognizing Oracles and Structural Machines?

  • What are Autopoietic Machines and How do they Help in Modeling Information Processing Structures in Nature?

  • What are the Applications of Autopoietic Digital Automata and how are they different from the Classical Digital Automata?

  • Why do we need to go beyond classical computer science to address autopoietic digital automata?

  • What are knowledge structures and how are they different from data structures in classical computer science?

  • How do the operations on the schemas representing data structures and knowledge structures differ?

  • How do “Triadic Automata” help us implement hierarchical intelligence?

  • How does an Autopoietic Machine move us to Go Beyond Deep Learning to Deep Reasoning Based on Experience and Model-based Reasoning?

  • What is the relationship between information processing structures in nature and the digital information processing structures?

  • What are the limitations of digital autopoietic automata in developing the same capabilities of learning and reasoning as biological information processing structures?

  • How do the information processing structures explain consciousness in living systems and can we infuse similar processes in the digital autopoietic automata?

In a series of blogs, we will attempt to search for answers to these questions, and in the process we hope to understand the new science of information processing structures, which will help us build a new class of autopoietic machines with digital consciousness.

However, as interesting as the new science is, more interesting still are the new understanding and the opportunity to transform current-generation information technologies without disturbing them, using an overlay architecture, just as biological systems evolved an overlay cognitive structure that provides global regulation while keeping local component autonomy intact and coping with rapid fluctuations in real time. We need to address the following questions:

  • How are knowledge structures different from current data structures, and how will database technologies benefit from autopoiesis to create a higher degree of sentience, resilience, and hierarchical intelligence at scale?

  • Will operations on knowledge structure schemas improve on current database schema operations and provide a higher degree of flexibility and efficiency?

  • Today, most databases manage their own resources (memory management, network performance management, availability constraints, etc.), which increases complexity and lowers efficiency. Will autopoiesis simplify distributed database resource management, and allow application workloads to become PaaS- and IaaS-agnostic with location independence?

  • Can we implement autopoiesis without disturbing current operation and management of information processing structures?

5:50-6:10 UTC

Thu 16th Sep

32. On Leveraging Topological Features of Memristor Networks for Maximum Computing Capacity

Ignacio Del Amo and Zoran Konkoli

Chalmers University of Technology, Sweden

Abstract:

Memristor networks have been suggested as a promising candidate for achieving efficient computation in embedded low-power information processing solutions. The goal of the study was to determine the topological features that control the computing capacity of large memristor networks. As an overarching computing paradigm, we have used the reservoir computing approach. A typical reservoir computer consists of two parts. First, a reservoir transforms time-series data into the state of the network; this constitutes the act of computation. Second, a readout layer is used to label the state of the network, which produces the final output of the computation. The reservoir was implemented using a cellular automata model of a memristor network. The ideas were tested on a binary classification problem with the goal of determining whether a protein sequence is toxic or not.
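To make the two-part scheme concrete, the following Python sketch pairs an elementary cellular automaton reservoir with a trained linear readout; the CA rule, the random input embedding, and the toy parity task are assumptions made for demonstration, not the memristor network model or the protein data used in the study.

import numpy as np

rng = np.random.default_rng(1)
WIDTH, BITS, STEPS, RULE = 64, 16, 8, 90
POS = rng.choice(WIDTH, BITS, replace=False)   # fixed random embedding
TABLE = [(RULE >> i) & 1 for i in range(8)]    # Wolfram rule table

def ca_features(bits):
    state = np.zeros(WIDTH, dtype=int)
    state[POS] = bits                          # inject input into the CA
    feats = []
    for _ in range(STEPS):                     # free-running dynamics
        left, right = np.roll(state, 1), np.roll(state, -1)
        state = np.array([TABLE[i] for i in (left << 2) | (state << 1) | right])
        feats.append(state)
    return np.concatenate(feats)               # reservoir state trace

X = rng.integers(0, 2, (200, BITS))            # toy task: bit-string parity
y = X.sum(axis=1) % 2
F = np.array([ca_features(x) for x in X], dtype=float)
w, *_ = np.linalg.lstsq(F, 2.0 * y - 1.0, rcond=None)   # linear readout
print("train accuracy:", np.mean((F @ w > 0) == y))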

DISCUSSION

6:20-7:20 UTC

Thu 16th Sep

DISCUSSION (Contribution from MORCOM Conference)


PLENARY PRESENTATIONS Contributed by Habits & Rituals Conference 2021

Block 2:

13:00-16:00 UTC

Thu 16th Sep


Habits & Rituals

Raffaela Giovagnoli

13:00-13:30 UTC

Thu 16th Sep

33. Habits and Rituals as Stabilized Affordances and Pregnances: A Semiophysical Perspective

Lorenzo Magnani

Department of Philosophy and Computational Philosophy Laboratory, University of Pavia, 27100 Pavia, Italy

Abstract:

The externalization/disembodiment of mind is a significant cognitive perspective able to unveil some basic features of abduction and creative/hypothetical thinking; its success in explaining the semiotic interplay between internal and external representations (mimetic and creative) is evident. This is also clear at the level of some intellectual issues stressed by the role of artifacts in ritual settings, in which interesting cases of creative affordances are also at play. I will stress the abductive activity of creating external artifacts or symbols in ritual events able to provide what we can call stabilized affordances. I contend that these ritual artifacts and events, and the habits they promote, can be usefully represented as endowed with stabilized affordances that "mediate", and make available, the story of their origin and the actions related to them, which can be learned and/or re-activated when needed. In a semiophysical perspective these stabilized affordances can be seen as pregnant forms [1]. Consequently, certain ritual artifacts (which in turn are intertwined with habits) afford meaning as kinds of "attractors", as states of the eco-cognitive dynamical system in which individuals and collectives operate: they are states into which the eco-cognitive system repeatedly falls, states that are consequently stationary.

An example of ritual artifacts that can be considered "transformers of energy" can be seen in the behavior of some primitive peoples. They are formed by a process of semiotic delegation of meanings to external natural or artificial objects, for final practical purposes, through the building of external representations capable of affording humans. To give an example, a ritual artifact can be an analogue of female genitals which, through a reiterated dance, affords a pregnant habit shared by a collective, in turn mimicking the sexual act, suggesting that the hole is in reality a vulva, and referring to the implementation of some agriculture [2]. Another example refers to the violent scapegoating of animals in sacrificial rituals, as in Abel's sacrifice of an animal and Abraham's sacrifice of a ram in place of his son, which are strongly related to the moral and religious pregnant meanings proper to monotheistic traditions. In the case of sacrifices of living organisms, Thom usefully observes that the ritual (and its consequent habit) is also related to the desire of "modifying/distorting" the regular space-time situations, so that - paradoxically - they aim at affording the environment in a desired way:

In order to realize these excited forms [of the regular space-time] it is necessary to breathe into the space a supplementary “energy”, or a “negentropy” which will channel a multitude of local fluctuations in a prescribed manner. Such was the aim of rituals and magical procedures, which frequently involved the sacrifice of living animals. It is as if the brutal destruction of a living organism could free a certain quantity of “negentropy” which the officiant will be able to use in order to realize the desired distortions of space-time. We can see how little the conceptual body of magic differs basically, from that of our science. Do we not know in the theory of the hydrogen atom for example, that the energy level of a stationary state of the electron is measured by the topological complexity of the cloud which this electron makes round the nucleus? In the same way, certain quantum theorists such as Wheeler, tried to interpret quantum invariants in terms of the topology of space-time. And, in General Relativity, the energy density of the universe is interpreted as a geometric curvature (pp. 135–136).

In the case of rituals of initiation, the target is similar, but it is not a matter of modifying something external, such as regular states of space-time; rather, the envisaged distortion takes the form of changing something internal, namely the desires, conferring on them a function through which the subject's being identifies itself or announces itself as such, through which the subject fully becomes a man, but also a woman (cf. for example the case of mutilations of female genitals, which serve to orient desires and so to form new individual habits).

The ritual artifact and event make possible and promote, through appropriate affordances, the related inferential cognitive processes embedded in the rite. Once the representations at play are built by the related human collective, they can - completely or partially - afford in a sensory way, and they are learnt, if necessary: indeed, the collective ritual also plays a pedagogical role addressed to other individuals not previously involved in its construction, who ignore or partially ignore the full outcome of the ritual (and of the related habits). They can in turn manipulate and reinternalize the meanings semiotically embedded in the artifact, to complete the process of being appropriately afforded.

The whole process of building ritual artifacts - configured as attractors that favor habits - occurs thanks to what I have called manipulative abduction. When ritual artifacts are created for the first time, this happens thanks to an abductive creative social process.

However, when meanings are subsequently picked up through the stabilized affordances involved in the symbolic features of the ritual artifacts or events, and suitably reproduced, they are no longer creative, and firmly favor pregnant habits, at least from the point of view of the affected collectives. Of course, such an artifact can still be seen as a "new" creative entity from the perspective of individuals who are afforded for the first time, with the aim of reaching new cognitive achievements and learning.

In sum, it is possible to infer (abduce) from the ritual artifacts and events - thanks to the fact that they offer stable affordances - the original meanings that generated them, and thus the clear and reliable cognitive functions which can in turn trigger related responses (also addressed to possible embodied and motor outcomes). They yield information about the past, being equivalent to the story they have undergone. The available affordances are reliable "external anchors" (indexes) of habits and assist the abducibility (and so "recoverability") of relevant meanings and of both "psychic" and "motor" actions.

I have contended above that the human mind is unlikely to be a natural home for complicated concepts, because such concepts do not exist in a definite way in the available environment. For example, humans have always enriched the environment by resorting to "external" magical formalities and religious ceremonies, which can release deep emotional and cognitive forces. It was (and it is) indeed necessary to "disembody" the mind: after having built a ritual artifact or event through the hybrid cognitive internal/external interplay of representations, it is possible to pick the new meanings up, once they are available and afforded out there.

The activity of delegating cognitive value to external objects through the construction of ritual artifacts and events is certainly semiotic in itself, as the result of the emergence of new intrinsic afforded meanings, expressed by what Jung [2], for example, calls a symbol. Jung also nicely stresses the protoepistemic role that can be played by magical ritual/artifactual externalizations in creative reasoning, and he is aware that these magical externalizations constitute the ancestors of scientific artifacts, like those - mainly explicit - concerning the discovery of new geometrical properties through external diagrams. Jung says: "Through a sustained playful interest in the object, a man may make all sorts of discoveries about it which would otherwise have escaped him. Not for nothing is magic called the 'mother of science'" ([2], p. 46).

Finally, I will quickly refer to the following important issue: abduced pregnances in many rituals mediate salient signs and work in a triple hierarchy of feelings, actions, and concepts. They are partially analogous to Peirce's "habits" and in some cases also involve both proto-morality and morality, which obviously also consist in habits, that is, various generalities as pregnant responses to some signs. For example, ritual sacrifices are always related to some moral meanings, as I have indicated above.


References

1. Thom, R., Esquisse d’une sémiophysique. InterEditions: Paris, 1988. Translated by V. Meyer, Semio-Physics: A Sketch, Addison Wesley: Redwood City, CA, 1990.

2. Jung, C.G. On psychic energy. In The Collected Works of C. G. Jung, 2nd ed.; Translated by Hull, R.F.C.; Princeton University Press: Princeton, NJ, USA, 1972; Volume 8, pp. 3–66.

13:30-14:00 UTC

Thu 16th Sep

34. A neurocomputational model of relative value processing: Habit modulation through differential outcome expectations

Robert Lowe

Department of Applied IT, University of Gothenburg, Sweden

Abstract:

Animal and human learning is classically conceived in terms of the formation of stimulus-response associations. Such associations, when trained to excess, in turn induce the formation of habits. Animal learning models and reinforcement learning algorithms most typically valuate stimuli or states of the world in terms of a scalar (singular) value, which conflates potentially multiple dimensions of reinforcement, e.g. magnitude and acquisition probability. Evidence from neurological studies of human and non-human primates indicates that populations of neurons in parts of the brain are sensitive to the relative reward value assigned to stimuli. That is, neural activity is found to occur in response to stimuli predictive of rewards according to whether they are of lower or higher subjective value with respect to alternatives (Cromwell et al. 2005, Schultz 2015, Isoda 2021).

Here is presented the computational and theoretical basis for a neurocomputational model of relative value processing adapted from previous work (e.g. Lowe & Billing 2017, Lowe et al. 2017, Lowe et al. 2019). This neural-dynamic temporal difference reinforcement learning model computes relative reward valuations in the form of differential outcome expectations (see Urcuioli 2005) for stimuli/states. The model, inspired by Associative Two-Process theory (Trapold 1970, Urcuioli 2005), computationally accounts for action/response selection according to two memory processes: i) a retrospective or stimulus-response process, wherein habits can be formed; ii) an outcome expectancy ('prospective') process. The latter processing route entails the neural representation of valued outcomes preceding, and thereby permitting cueing of, responses, which can occur in the presence of, in place of, or in competition with, the habit-based processing route. As such, habit formation may be modulated by this memory mechanism. The model of relative value processing is also presented in relation to its potential for differential parameterization for predicting clinical (e.g. Alzheimer's disease) subjects' learning performance on differential outcomes (as studied by, e.g., Plaza et al. 2012; Vivas et al. 2018). Such modelling may serve forms of intervention-based therapy (including gamified memory training) that optimize outcome-expectancy-based learning so as to modulate the more habit-like learning.
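As a rough, hypothetical illustration of the difference between a scalar value and outcome-specific expectancies, the Python fragment below applies a TD(0) update per outcome dimension, so each state acquires a vector of differential outcome expectations; the states, outcome types, and parameters are invented for the sketch and do not reproduce the authors' neural-dynamic model.

import numpy as np

n_states, n_outcomes = 4, 2
E = np.zeros((n_states, n_outcomes))   # per-outcome expectancy E(s)
alpha, gamma = 0.1, 0.9
rng = np.random.default_rng(0)

def outcome_vector(s):
    # Hypothetical task: states 0 and 1 deliver outcome type 0,
    # states 2 and 3 deliver outcome type 1 (e.g. two magnitudes).
    o = np.zeros(n_outcomes)
    o[s // 2] = 1.0
    return o

for _ in range(5000):
    s = rng.integers(n_states)                 # random walk over states
    s_next = rng.integers(n_states)
    r = outcome_vector(s)                      # vector-valued outcome
    E[s] += alpha * (r + gamma * E[s_next] - E[s])   # TD(0) per dimension

# Unlike a scalar value, E distinguishes which outcome a state predicts,
# which is what permits outcome-cued ('prospective') responding.
print(np.round(E, 2))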

References

Cromwell, H. C., Hassani, O. K., & Schultz, W. (2005). Relative reward processing in primate striatum. Experimental Brain Research, 162(4), 520-525.

Isoda, M. (2021). Socially relative reward valuation in the primate brain. Current Opinion in Neurobiology, 68, 15-22.

Lowe, R., & Billing, E. (2017). Affective-Associative Two-Process theory: A neural network investigation of adaptive behaviour in differential outcomes training. Adaptive Behavior, 25(1), 5-23.

Lowe, R., Almér, A., Billing, E., Sandamirskaya, Y., & Balkenius, C. (2017). Affective– associative two-process theory: a neurocomputational account of partial reinforcement extinction effects. Biological cybernetics, 111(5), 365-388.

Lowe, R., Almér, A., Gander, P., & Balkenius, C. (2019, September). Vicarious value learning and inference in human-human and human-robot interaction. In 2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW) (pp. 395-400). IEEE.

Plaza, V., López-Crespo, G., Antúnez, C., Fuentes, L. J., & Estévez, A. F. (2012). Improving delayed face recognition in Alzheimer's disease by differential outcomes. Neuropsychology, 26(4), 483.

Schultz, W. (2015). Neuronal reward and decision signals: from theories to data. Physiological reviews, 95(3), 853-951.

Trapold, M. A. (1970). Are expectancies based upon different positive reinforcing events discriminably different?. Learning and Motivation, 1(2), 129-140.

Urcuioli, P. J. (2005). Behavioral and associative effects of differential outcomes in discrimination learning. Animal Learning & Behavior, 33(1), 1-21.

Vivas, A. B., Ypsilanti, A., Ladas, A. I., Kounti, F., Tsolaki, M., & Estévez, A. F. (2018). Enhancement of visuospatial working memory by the differential outcomes procedure in mild cognitive impairment and Alzheimer’s disease. Frontiers in aging neuroscience, 10, 364.

14:00-14:30 UTC

Thu 16th Sep

35. Capability and Habit

Matthias Kramm

Wageningen University & Research

Abstract:

Some scholars of philosophy, sociology, and economics have discovered the philosophical legacy of John Dewey as a source of ideas with which they can supplement Amartya Sen's work on the capability approach. David Crocker (Crocker 2008, 203) refers to 'Dewey's ideal of democracy and Sen's ideal of citizen agency' to describe how social choice procedures can be organized. Jean de Munck and Bénédicte Zimmermann (Munck and Zimmermann 2015, 123) explore Dewey's distinction between prizing and appraising in order to combine Sen's concept of evaluation with 'Dewey's sense of practical judgment'. Ortrud Leßmann (Leßmann 2009, 454) makes use of Dewey's theory of learning to outline the process by which 'human beings learn to choose' capabilities. And a paper by Michael Glassmann and Rikki Patton (Glassmann and Patton 2014) deals with Dewey and Sen in the context of educational philosophy.

In this paper, I would like to make two suggestions for how Sen’s capability approach (Sen 2001; 2004; 2009) can be supplemented by Dewey’s concepts of habit and character (Dewey 2007; 1998; 1891). In the course of my analysis, I will explore the hitherto neglected connections between the concept of capability and the concept of habit. And I will suggest a pragmatist framework which is particularly suitable for applications of the capability approach in contexts where researchers or development practitioners have to be aware of the socio-cultural environment and the character of the affected individuals.

My paper will start with a brief comment on Sen’s capability approach and how an action theory might enrich it. After delineating the core concepts of Sen’s capability approach and Dewey’s action theory, I will make two proposals for how Dewey’s action theory might strengthen Sen’s theoretical treatment of the environment and supplement his capability framework with a notion of character development. Subsequently, I will show how one could develop a pragmatist capability theory which builds on Sen’s capability approach while drawing from Dewey’s action theory. Finally, I will draw the consequences of this framework for the conceptualization of impartiality and freedom, before concluding the paper.

References

Crocker, David. 2008. Ethics of Global Development: Agency, Capability and Deliberative Democracy. New York: Cambridge University Press.

Dewey, John. 1891. Outlines of a Critical Theory of Ethics. Ann Arbor: Michigan Register Publishing Company.

———. 1998. 'Philosophies of Freedom'. In The Essential Dewey: Volume 2. Bloomington, Indianapolis: Indiana University Press.

———. 2007. Human Nature and Conduct. New York: Cosimo.

Glassmann, Michael, and Rikki Patton. 2014. ‘Capability Through Participatory Democracy: Sen, Freire, and Dewey’. Educational Philosophy and Theory 46 (12).

Leßmann, Ortrud. 2009. ‘Capabilities and Learning to Choose’. Studies in Philosophy and Education 28 (5).

Munck, Jean de, and Bénédicte Zimmermann. 2015. ‘Evaluation as Practical Judgement’. Human Studies 38 (1).

Sen, Amartya. 2001. Development as Freedom. Oxford, New York: New York University Press.

———. 2004. Rationality and Freedom. Cambridge MA, London: Belknap Press.

———. 2009. The Idea of Justice. London: Allen Lane.

14:30-15:00 UTC

Thu 16th Sep

36. Collective Intentionality and the Transformation of Meaning During the Contemporary Rituals of Birth

Anna M. Hennessey

Visiting Scholar, Berkeley Center for the Study of Religion University of California, Berkeley

Abstract:

This paper examines collective intentionality, one of the three fundamental elements in a classic theory of social ontology, and how we locate its emergence in the way that individuals and social groups transform the meaning of art and other objects used in the context of contemporary birth rituals. In this context, religious art and other objects often undergo an ontological transformation during the rituals of birth when participants secularize them, marking them with new status functions that diverge from their original functions as religious objects. However, some of these same objects are then re-sacralized when used ritualistically during birth. In these cases, the social ontology of the object shifts away from a religious or secular identification, being collectively recognized instead as encompassing sacred meaning. This sacredness is not part of the object's original symbolic function as a religious object, however. Instead, the object is re-sacralized and takes on a new ontological status associated with a collective understanding that the nonreligious act of birth is a sacred act in itself.

The term "collective intentionality" derives from John Searle's 1990 paper "Collective Intentions and Actions," which Searle then developed in other works, including his 1995 book, The Construction of Social Reality. Earlier and later scholars have also examined the same or similar concepts, sometimes using different terminology, as found, for example, in French sociologist Émile Durkheim's study of what Durkheim termed "collective consciousness." This paper primarily utilizes the term as found in Searle's classic theory of social ontology, studying examples of how individuals involved in birth rituals become organically part of a larger production devoted to the making of new meaning out of objects used in the rituals. As such, we note instances in which the individual thinks at the level of the whole, not at that of the part, even though the social context of transforming meaning is inextricable from the personal context of experiencing the ritual of birth.

In a classic theory of social ontology collective intentionality refers to intentional states that humans share. These states include belief, desire, and intention. Collective intentionality is not composed of individual intentionalities. Instead, singular human intentions are part of and derived from the collective intentionality that individuals share. As such, collective intentionality can neither be reduced to individual intentionality, nor can it be described as a collective consciousness, as Émile Durkheim would term it. It is a special type of mental cooperation between individuals. John Searle gives examples of a violinist who plays a part in a (collective) symphony performance, and of an offensive lineman playing a part in a (collective) football game as representative of collective intentionality. In these cases, the individual’s performance, while distinct, is organically part of a larger performance; the individual thinks at the level of the whole, not as that of the part. Animals also express collective intentionality, though this intentionality is attached to collective behavior that is biologically innate. Therefore, although animal collective intentionality is a type of social fact, it is not institutionalized in any way (it is not an institutional fact). It is a social behavior that is still lacking in institution.

In certain rituals of birth, those who are involved in the process of birth (women, partners, midwives, doctors, etc.) have historically used different objects as part of the rite of passage. Metaphysically speaking, the ontologies of these objects are defined by their physical make-ups. The complete ontology of the individual object, however, rests on another level—a social level — which is dependent entirely upon how people have historically and collectively defined, used, and perceived of it. This is the social ontology of the object.

This paper examines the social ontology of different objects used in these rituals, looking closely at how the ontologies of the objects have the capacity to change depending on how groups of people collectively perceive of the objects’ meanings and make use of the objects. One object examined is the Sheela-na-gig. The name refers not to a single object but to a type of medieval stone figure carving from Europe, often referred to simply as a “sheela,” whose original meaning has been interpreted in a number of ways. Scholars disagree on the origins of these objects. Some scholars believe that the sheelas were historically understood as sacred devices used during the pre-Christian rituals of birth, while others believe that the figures functioned primarily as apotropaic devices, collectively understood and carved for the purpose of protecting churches and other buildings. Another group of scholars disagrees and believes that these objects acted as didactic representations used to transmit Christian themes of sin and a collective understanding of the female body as profane. Regardless of the original meaning of the sheela figures, the research in all cases shows that they were historically used within the context of religion. The common social ontology of the object as it was originally conceived is therefore understood to have been of a religious nature.

The Sheela-na-gig is one of the clearest cases in which we can locate how a religious object goes through ontological transformation in the context of the contemporary rituals of birth. This paper shows how groups of people in the twenty-first century are secularizing and re-sacralizing the object, collectively utilizing and understanding the figure in a new way during birth as a rite of passage. These social groups, which come from around the world and are also using other objects and images in a similar way during these contemporary rituals, transmit the new meanings of the objects to one another through the internet and other technology. This paper provides several examples of these objects, showing a variety used ritualistically.

Collective intentionality is integral to the philosophy of social ontology, and an understanding of how it emerges during the contemporary rituals of birth when participants in the ritual define an object’s meaning more broadly shows how the symbolic functions of material objects have the capacity to shift between religious, secular, sacred and nonreligious identifications depending upon social collective recognition of those functions.

15:00-15:30 UTC

Thu 16th Sep

37. Habitual Behavior: from I-intentionality to We-intentionality

Raffaela Giovagnoli

Faculty of Philosophy, Pontifical Lateran University

Abstract:

The central question of the debate on Collective Intentionality is how to grasp the relationship between individual and collective intentions when we do something together in informal and institutionalized contexts. We'll briefly introduce the debate and the theoretical aspects of this complex issue. Moreover, we suggest investigating habitual behavior, a fundamental part of the nature of human beings, which could represent a bridge between I-intentionality and We-intentionality.

We'll consider the role of habits in human individual and social ordinary life, starting from the fact that habitual behavior is fundamental for organizing our activities in individual as well as social contexts. Instead of considering classical and revised versions of intentionality, we prefer to focus on habits, which reduce the complexity of daily life, and also on their corresponding activity in social life, where we take part in informal joint practices as well as institutionalized ones. We cooperate to create and participate in social practices because we need to organize our life together with other people, creating common spaces that have different functions and significance depending on the corresponding practice (for example, we all pay the ticket to take a train, and many of us participate in religious rituals or similar activities).

We'll propose a fruitful relationship between habits and rituals that could provide the link to harmonize I-intentionality and We-intentionality. We begin by presenting a plausible sense for the notion of habit, which goes beyond mere repetitive behavior or routine. We argue for a plausible account of the notion of habit that rests on some Aristotelian theses, also by reference to research in psychology and neuroscience. A habit is not only a mere automatism or repetitive behavior, but also a stable disposition for action (a practical skill), which implies a relationship between automatism and flexibility. The same process is involved in our participation in, and constitution of, informal and formal social spaces.

Recent studies from cognitive neuroscience, biology and psychology show converging perspectives on the organization of goal-directed, intentional action in terms of (brain, computational) structures and mechanisms. They conclude that several cognitive capabilities across the individual and social domains, including action planning and execution, understanding others’ intentions, cooperation and imitation are essentially goal-directed. To form habits we need goal representations both in individual and social contexts.

Routines and goal-directed behavior characterize habits in both individual and social behavior. We create our own habits while fulfilling our basic needs and desires. But we are social beings, and we need to organize our activities so as to participate in different social practices. For example, rituals have the important function of creating social spaces in which individuals can share emotions, experiences, values, norms and knowledge. The function of sharing experiences is fulfilled when there exists a social space created by cooperation for reaching a certain goal. If we want to get a positive result about the extension of habits into the social dimension, we need to start from a sort of goal-directed activity that we can perform together.

References

J. Bernacer and J.I. Murillo, The Aristotelian Conception of Habit and Its Contribution to Human Neuroscience. Frontiers in Human Neuroscience, (8), 2014.

C. Castelfranchi and G. Pezzulo, Thinking as the Control of Imagination: a Conceptual Framework for Goal-directed Systems. Psychological Research (73), (4), 559-577, 2009.

R. Giovagnoli, Habits and Rituals, in Proceedings MDPI of the IS4SI 2017 Summit, Gothenburg, 2017.

R. Giovagnoli, From Habits to Rituals: Rituals as Social Habits, in Open Information Science De Gruyter, v.2, Issue 1 (2018).

R. Giovagnoli, Habits, We-intentionality and Rituals in Proceedings MDPI of the IS4SI 2019 Summit, Berkeley 2019.

R. Giovagnoli., From Habits to Rituals: Rituals as Social Habits in R. Giovagnoli and R. Lowe (Eds.), The Logic of Social Practices, Springer, Sapere, Cham, 2020, pp. 185-199.

A. Graybiel, Habits, Rituals and the Evaluative Brain, Annual Review of Neuroscience, (31), 2008, pp. 359-387.

D. Levinthal, Collective Performance: Modelling the Interaction of Habit-based Actions, Industrial and Corporate Change, vol 23, n. 2, 2014, pp. 329-360.

J. Lombo and J. Gimenez-Amaya, The unity and stability of human behavior. An interdisciplinary approach to habits between philosophy and neuroscience, Frontiers in Human Neuroscience, (8), 2017.

DISCUSSION

15:30-16:00 UTC

Thu 16th Sep

DISCUSSION (Contributed by H&R Conference)


FRIDAY, SEPTEMBER 17

Contributed by Natural Computing IWNC 2021

Block 1:

4:00-7:00 UTC

Fri 17th Sep

IWNC


Marcin Schroeder

INVITED LECTURE

4:00-5:00 UTC

Fri 17th Sep

38. Machines computing and learning?

Genaro J. Martínez

Artificial Life Robotics Laboratory, Escuela Superior de Cómputo, Instituto Politécnico Nacional, México.

Unconventional Computing Lab, University of the West of England, Bristol, United Kingdom.

Abstract:

A recurrent subject in automata theory and computer science is the interesting problem of how machines are able to work, learn, and project complex behavior. In this talk I will discuss in particular how some cellular automata rules are able to simulate computable systems under different interpretations; this is the problem of universality. These systems are able to produce and handle a huge amount of information massively. In this context, an original problem conceptualized by John von Neumann in the 1940s is: how are primitive and unreliable organisms able to yield reliable components? How could machines construct machines? In biological terms, this refers to the problem of self-reproduction and self-replication. In our laboratories, we implement these problems in physical robots, where some particular designs display computable systems assembled with modular robots and other constructions display collective complex behavior. Modular robots offer the ability to assemble and reconfigure every robot. In particular, we will see in this talk a number of robots constructed with Cubelets to simulate Turing machines, Post machines, circuits, and non-trivial collective behavior. We will discuss whether these machines learn and develop knowledge as a consequence of automation and information.
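As background, the following minimal Python sketch shows the kind of abstract Turing machine that such robotic constructions simulate; the two-rule unary incrementer is an invented example, not one of the cited robot implementations.

def run_tm(tape, rules, state="A", head=0, max_steps=1000):
    """Run a Turing machine: rules maps (state, symbol) to
    (symbol_to_write, head_move, next_state); the blank symbol is 0."""
    cells = dict(enumerate(tape))              # sparse tape
    for _ in range(max_steps):
        if state == "halt":
            break
        write, move, state = rules[(state, cells.get(head, 0))]
        cells[head] = write
        head += move
    return [cells[i] for i in sorted(cells)]

# Invented unary incrementer: scan right over 1s, append a 1, halt.
rules = {
    ("A", 1): (1, +1, "A"),
    ("A", 0): (1, 0, "halt"),
}
print(run_tm([1, 1, 1], rules))                # -> [1, 1, 1, 1]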

References

[1] Mart´ınez, G.J., Adamatzky, A., Figueroa, R.Q., Schweikardt, E., Zaitsev, D.A., Zelinka, I., & Oliva-Moreno, L.N. (2021) Computing with Modular Robots, submitted.

[2] Mart´ınez, S.J., Mendoza, I.M., Mart´ınez, G.J., & Ninagawa, S. (2019) Universal One-dimensional Cellular Automata Derived from Turing Machines. International Journal Unconventional Computing, 14(2), 121-138.

[3] Mart´ınez, G.J., Adamatzky, A., Hoffmann, R., D´es´erable, D., & Zelinka, I. (2019) On Patterns and Dynamics of Rule 22 Cellular Automaton. Complex Systems, 28(2), 125-174.

[4] Figueroa, R.Q., Zamorano, D.A., Mart´ınez, G.J., & Adamatzky, A. (2019) A Turing machine constructed with Cubelets robots. Journal of Robotics, Networking and Artificial Life 5(4) 265–268.

[5] Mart´ınez, G.J. & Morita, K. (2018) Conservative Computing in a Onedimensional Cellular Automaton with Memory. Journal of Cellular Automata, 13(4), 325-346.

[6] Mart´ınez, G.J., Adamatzky, A., & McIntosh, H.V. (2014) Complete characterization of structure of rule 54. Complex Systems, 23(3), 259-293.

[7] Martínez, G.J., Seck-Tuoh-Mora, J.C., & Zenil, H. (2013) Computation and Universality: Class IV versus Class III Cellular Automata. Journal of Cellular Automata, 7(5-6), 393-430.

[8] Martínez, G.J., Adamatzky, A., & Alonso-Sanz, R. (2013) Designing Complex Dynamics in Cellular Automata with Memory. International Journal of Bifurcation and Chaos, 23(10), 1330035-131.

[9] Martínez, G.J., Adamatzky, A., Morita, K., & Margenstern, M. (2010). Computation with competing patterns in Life-like automaton. In: Game of Life Cellular Automata (pp. 547-572). Springer, London.

INVITED LECTURE

5:00-6:00 UTC

Fri 17th Sep

39. Computing with slime mould, plants, liquid marbles and fungi

Andy Adamatzky

Unconventional Computing Lab, UWE, Bristol, UK

Abstract:

Dynamics of any physical, chemical or biological process can be interpreted as a computation. The interpretation per se might be non-trivial (but doable), because one must encode data and results as states of a system and control the trajectory of the system in its state space. One can make a computing device from literally any substrate. I will demonstrate this with examples of computing devices made from the slime mould Physarum polycephalum, growing plant roots, the vascular system of a plant leaf, mycelium networks of fungi, and liquid marbles. The computing devices developed are based on the geometrical dynamics of a slime mould's protoplasmic network, the interaction of action-potential-like impulses travelling along vasculatures and mycelium networks, collision-based computing with plant root tips, and droplets of water coated with hydrophobic powder. Computer models and experimental laboratory prototypes of these computing devices are presented.
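The encoding idea in the paragraph above can be caricatured in a few lines (a toy sketch of collision-based computing in general, not a model of any of the laboratory prototypes; positions, velocities, and the collision radius are arbitrary assumptions): bits are the presence or absence of travelling signals, and the outcome of a collision encodes a logical function of the inputs.

```python
# Toy collision-based "interaction gate": input bits are encoded as the
# presence of point signals moving toward each other on a line. A collision
# marks A AND B; a signal that crosses undisturbed marks A AND NOT B
# (or B AND NOT A). All numeric choices are illustrative only.

def collision_gate(a: bool, b: bool, steps: int = 20):
    signals = {}
    if a:
        signals["A"] = (0, +1)       # (position, velocity): left end, moving right
    if b:
        signals["B"] = (steps, -1)   # right end, moving left
    collided = False
    for _ in range(steps):
        if "A" in signals and "B" in signals:
            (xa, _), (xb, _) = signals["A"], signals["B"]
            if abs(xa - xb) <= 1:    # the two signals meet
                collided = True
                break
        signals = {k: (x + v, v) for k, (x, v) in signals.items()}
    # Outputs: (A AND B, A AND NOT B, B AND NOT A)
    return collided, a and not collided, b and not collided

for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), collision_gate(a, b))
```

The same scheme is what lets trajectories of root tips or droplets act as logic: paths that meet realize a conjunction, while paths that pass undisturbed realize its complement with respect to one input.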

PANEL DISCUSSION

6:00-7:00 UTC

Fri 17th Sep

PANEL DISCUSSION (Contributed by IWNC Conference)

40. Moderator’s Introduction to “Natural Question about Natural Computing”

Moderated by Marcin J. Schroeder

Confirmed Panelists: Andy Adamatzky, Masami Hagiya, Genaro J. Martínez, Yasuhiro Suzuki

The question about Natural Computing may be natural, but the attempt to answer it by providing a formal definition would be pointless. Definitions of concepts serve the purpose of closing them into an existing framework of concepts with an already established intension or meaning. Natural computing is an open idea that serves the opposite purpose: to transcend the currently dominating paradigm of computing. The qualifier "natural", which for centuries was a subject of philosophical disputes, is used here not in the restrictive sense. After all, its common-sense negation "artificial" is associated with human skills or competencies, which there is no reason to consider non-natural, or inconsistent with human nature and human inborn capacities.

This conference is the 13th in the long series of International Workshops on Natural Computing, whose participants and contributors have had diverse ways of understanding this subject. However, there was never a risk of mutual misunderstanding, and there is no such risk now. What was and is common and uniting in these diverse inquiries can be expressed as the search for dynamic processes involving information that have all or some characteristics of computing, but differ from it in the form and means of implementation, procedural description, intention, outcomes, etc. The adjective "natural" reflects the interest in natural processes studied in several different disciplines of science independently of any application in computing, but it does not exclude interest in the dynamics of information in the cultural and social contexts of human life. Just the opposite: Natural Computing is an attempt to bridge technological interests with natural aspects of information processing in order to transcend the limitations of computing, including the limitations of its present applications.

The panelists represent diverse directions of research and study within Natural Computing. I would like to ask them the question "Quo Vadis?" (Where are you going?). Unlike in the Scriptural origin of this question, this is not a call to return to Rome. It is a request to share with the audience the panelists' visions of the direction and future of Natural Computing. This is a question about their motivation to pursue this path of inquiry. Finally, the panelists may choose to reflect on the more general question of the future not just of Natural Computing but of Computing in general.

Contributions from Philosophy and Computing Conference APC 2021

Block 2:

13:00-16:00 UTC

Fri 17th Sep


APC

Peter Boltuc

13:00-14:30 UTC

Fri 17th Sep

41. Exploring open-ended intelligence using patternist philosophy

Ben Goertzel

Abstract:

The patternist philosophy of mind begins from the simple observation that key aspects of generally intelligent systems (in particular those aspects lying in Peirce's Third metaphysical category) can be understood by viewing such systems as networks of patterns organized to recognize patterns in themselves and their environments. Among many other applications this approach can be used to drive formalization of the concept of an "open ended intelligence", a generally intelligent system that is oriented toward ongoingly individuating itself while also driving itself through processes of radical growth and transformation. In this talk I will present a new formalization of open-ended intelligence leveraging paraconsistent logic and guided by patternist philosophy, and discuss its implications for practical technologies like AGI and brain-computer interfacing. Given the emphatically closed-ended nature of today's prevailing AI and BCI technologies, it seems critical both pragmatically and conceptually to flesh out the applicability of broader conceptions of intelligence in these areas.

PLENARY PANEL DISCUSSION

14:30-16:00 UTC

Fri 17th Sep

Artificial Inventors, AI, Law and Institutional Economics

Presenting Panelists: Stephen Thaler, Kate Gaudry

Commenting Panelists: Peter Boltuc, David Kelley


14:30-15:00 UTC

Fri 17th Sep

42. The Artificial Sentience Behind Artificial Inventors

Stephen Thaler

Imagination Engines Inc.

Abstract:

Using a new artificial neural network paradigm called vast topological learning [1], a multitude of artificial neural networks bind themselves into chains that geometrically encode complex concepts along with their anticipated consequences. As certain nets called "hot buttons" become entangled with these chains, simulated volume neurotransmitter release takes place, selectively reinforcing the most advantageous of such topologically expressed ideas. In addition to providing important clues about the nature and role of sentience (i.e., feelings) within neurobiology, this model helps to explain how an artificial inventor called "DABUS" has autonomously generated at least two patentable inventions [2][3].

[1] Thaler, S. L., "Vast Topological Learning and Sentient AGI", Journal of Artificial Intelligence and Consciousness, Vol. 8, No. 1 (2021), 1-30.

[2] https://www.globallegalpost.com/news/south-africa-issues-worlds-first-patent-listing-ai-as-inventor-161068982

[3] https://www.thetimes.co.uk/article/patently-brilliant-ai-listed-as-inventor-for-first-time-mqj3s38mr

15:00-15:30 UTC

Fri 17th Sep


43. Potential Impacts of Various Inventorship Requirements

Kate Gaudry

Kilpatrick Townsend & Stockton LLP

Abstract:

Though many entities are discussing A.I. and patents, this umbrella topic covers a vast diversity of situations. Not only can artificial intelligence be tied to inventions in multiple ways, but the involvement of various types of parties can shift potential outcomes and considerations. This presentation will walk through various potential scenarios that may arise (or arise more frequently) as A.I. advances and consider when and how patents may be available to protect the underlying innovation.

15:30-16:00 UTC

Fri 17th Sep


44. Panel Commentary

Peter Boltuc

University of Illinois, Springfield

Warsaw School of Economics

The presentation has a legal part and a philosophical part:

Part I is based on Gaudry: "With reference to Univ. of Utah v. Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V., the USPTO explained that the Federal Circuit has ruled that a state could not be an inventor because inventors are individuals who conceive of an invention and conception is a "formation of the mind of the inventor" and "a mental act." [Gaudry et al., https://www.jdsupra.com/legalnews/should-we-require-human-inventorship-3076784/ ] This criterion is clearly satisfied by modern advanced AI engines, unless epistemic human chauvinism is presupposed.

Following the above, "The USPTO reasoned 'conception—the touchstone of inventorship—must be performed by a natural person.'" [Gaudry op. cit.] The conception according to which a "formation of the mind of the inventor" pertains to human and not artificial minds is flawed, simply because artificial minds are now much more advanced than this statement, grounded in the computer science of its time, presumes. The quotation does not originate from the 2013 case; it comes from a 1994 case, Burroughs Wellcome Co. v. Barr Labs., Inc., 40 F.3d 1223, 1227-28 (Fed. Cir. 1994), which makes a substantial difference, since in 1994 there were no computer programs characterized by inventor qualities; the claims therefore ipso facto pertained solely to human persons. Since the 2013 case concerned institutions (universities) versus persons (human inventors), the 1994 case was appropriate to quote: no claims of non-human inventors were involved whatsoever. Therefore the 1994 ruling, also in its 2013 reiteration, is relevant to DABUS only de dicto; this is because the word "inventor" in those cases pertained only to individuals, emphasizing human individuals in opposition to institutions. This contrast is visible in the last clause: "To perform this mental act, inventors must be natural persons and cannot be corporations or sovereigns." [2013], which does not pertain to machines or algorithms.

Part II is based on Boltuc 2017 and Hardegger 2021: Based on this observation, the author drafts a social structure that incorporates robots, and even cognitive engines, to partake in 'social' life well enough to be its members tout court, which includes having interests formed well enough to allow for their meaningful patent ownership.

The part about robots relies on Boltuc 2017: "Church-Turing Lovers." In Lin, P., Abney, K., Jenkins, R. (Eds.), Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence. Oxford: Oxford University Press. Part II develops Daniel Hardegger's session on 'The4th.Space' (an APC panel at this conference).

SATURDAY, SEPTEMBER 18

Contributions from Philosophy and Computing Conference APC 2021

Block 1:

4:00-5:30 UTC

Sat 18th Sep

1. APC (90 min.)

Peter Boltuc

4:00-4:30 UTC

Sat 18th Sep

45. On Two Different Kinds of Computational Indeterminacy

Oron Shagrir, with Philippos Papayannopoulos and Nir Fresco

Hebrew University of Jerusalem

Abstract:

A follow-up on the project on computational indeterminacy, parts of which are also presented in Jack Copeland's keynote. This talk and discussion are focused on two kinds of computational indeterminacy.

4:30-5:30 UTC

Sat 18th Sep

46. Cognitive neurorobotic self in the shared world

Jun Tani

Cognitive Neurorobotics Research Unit, Okinawa Institute of Science and Technology (OIST)

Abstract:

My research has investigated how cognitive agents acquire structural representation via iterative interaction with their environments, exercising agency and learning from the resultant perceptual experience. Over the past two decades, my group has tackled this problem by applying the framework of predictive coding and active inference to the development of cognitive constructs of robots. Under this framework, intense interaction occurs between top-down intention, which acts proactively on the outer world, and the resultant bottom-up perceptual reality accompanied by prediction error. The system tries to minimize the error either by modifying the intention or by acting on the outer world. I argue that the system should become conscious when this error-minimization process costs some effort; otherwise, everything goes smoothly and automatically, and no space for consciousness remains. We have found that compositionality, which enables some conceptualization, including senses of a minimal self and a narrative self, can emerge via iterative interactions, as a result of downward causation in terms of constraints, such as multiple spatio-temporal scale properties, applied to neural network dynamics. Finally, I will introduce our recent results that may account for how abnormal development leads to some developmental disorders, including autism spectrum disorder (ASD) and schizophrenia, which may be caused by different types of failures in the top-down/bottom-up interaction.
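The error-minimization loop described above can be sketched in a few lines of Python (a deliberately minimal caricature under strong simplifying assumptions that are mine, not Tani's: a scalar intention, an identity generative model, and a fixed learning rate):

```python
# Minimal predictive-coding loop: a top-down "intention" generates a
# prediction; the bottom-up prediction error against the observation is
# then reduced by revising the intention (the "modify the intention"
# branch of the minimization described in the abstract).

def predictive_coding_step(intention, observation, lr=0.1):
    prediction = intention            # trivial generative model: identity
    error = observation - prediction  # bottom-up prediction error
    return intention + lr * error, error

intention = 0.0
for _ in range(50):
    intention, error = predictive_coding_step(intention, observation=1.0)

print(round(intention, 3), round(error, 3))  # intention has converged near 1.0
```

In the full framework the intention is a trajectory of a recurrent network, and the alternative branch, acting on the world so that observations match the prediction, is equally available; the sketch only shows the direction of the update.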

Reference

(1) Tani, J. (2016). “Exploring Robotic Minds: Actions, Symbols, and Consciousness as Self-Organizing Dynamic Phenomena.” Oxford University Press.

Contributions from International Conference on Philosophy of Information ICPI 2021

Block 1:

5:30-7:30 UTC

Sat 18th Sep

2. ICPI (120 min.)

Wu Kun

EIGHT TALKS CONTRIBUTED BY International Conference on Philosophy of Information (ICPI) (each 15 minutes)

5:30-5:45 UTC

Sat 18th Sep

47. The Future of Anthroposociogenesis – Panhumanism, Anthroporelational Humanism and Digital Humanism

Wolfgang Hofkirchner

The Institute for a Global Sustainable Information Society, Vienna, Austria;

Abstract:

The emergence of human life on Earth is not yet finished. Social evolution has reached a point at which the continuation of human life is even at stake. The reason lies in dysfunctionalities of the organization of social systems. Those dysfunctionalities came to the fore when hominization became so successful as to cover the whole globe. What social systems could externalize so far became part of an ever more deteriorating environment on which every system, in turn, has to live. This system-theoretical insight concerns the organization of relations among humans, of relations with nature and of relations with technology. It is tantamount to a next step of humanization and requires an update of humanism: a panhumanism, an anthroporelational humanism and a digital humanism. The first makes a case for an all-embracing system of humankind, the second for a delicate integration of non-human natural systems, and the third for an intelligent design of techno-social systems. The crisis of anthroposociogenesis will last until that insight has become common sense, or until the social systems have broken down.

Keywords: global challenges; humanism; Evolutionary Systems Theory; Unified Theory of Information; critical thinking; social systems; information society

1. Introduction

According to Evolutionary Systems Theory [1], the emergence of existential risks signifies an evolutionary crisis of complex systems. Such crises are caused by an environment that is more complex than the options available to the systems. If the organizational relations of the systems undergo a qualitative change, they can help the systems catch up with the complexity of their environment (which might be external or internal). Such a change transforms the systems into elements of a metasystem or suprasystem that represents a complexity gain from which they benefit.

According to the Unified Theory of Information [1], by generating the information required to increase their complexity with regard to the challenges they are confronted with, systems are able to master those challenges. If they fail to generate the required information, they might break down.

According to a critical social systems theory based upon Evolutionary Systems Theory, the emergence of global challenges, ushering in a new age of the history of mankind 75 years ago, is evidence of an evolutionary crisis of social systems that have grown interdependent but have not yet become integrated with each other. It means that they confront a Great Bifurcation of their possibility space of future trajectories, which needs to be passed by a Great Transformation that chooses the right trajectory to guarantee the continuation of anthroposociogenesis.

According to a critical information society theory, based upon both the Unified Theory of Information and a critical social systems theory, the Great Transformation can be realized only if the required information has been generated. This information is about the ultimate cause of the current multi- or poly-crisis, as the French intellectual Edgar Morin called it [2], and about the means of choice for overcoming it. It illuminates that it is the plethora of social relations that has to undergo a decisive change: the social relations among humans, the social relations with nature and the social relations with technology. Their underlying logics have turned anachronistic and require replacement by new logics adapted to the new situation.

2. The Logic of Egocentrism needs to be replaced by Panhumanism

The social relations among humans are still determined by a logic of egocentrism. Egocentrism signifies the denial of the belonging of a social partition to a greater social whole, whatever the partition may be: a nation, an ethnic group, a private enterprise or an individual. That logic deprives the masses of the world population, as well as classes of populations, of access to the societal common good. It is a logic of domination, exploitation and oppression in the anthroposphere, yielding a gap between rich and poor, hunger, diseases, and much more.

The anti-colonial liberation struggle, with Frantz Fanon's "Les damnés de la terre" in 1961 [3], was the first sign of an awakening worldwide awareness of the role of social relations in the age of global challenges. It provoked the emergence of the solidarity movement all over the world.

The current pandemic situation is but another example of the anti-humanism of egocentric social relations, since a zero-Covid strategy is implementable only if all countries of the world are put in a position to fight the virus with sufficient vaccination. No country can reach a zero-Covid state without the rest of the countries committing to the same policy, just as no single person can be protected from infection unless a sufficient number of persons is vaccinated and there are no free riders who frustrate solidarity.

Egocentrism must give way to a logic of Panhumanism, as Morin underlined in a recent article [4]. Panhumanism can be defined as a logic of conviviality [5] of a single integrated humanity, of living together for the good of all. That logic is inclusive. It does not exclude any part of common humankind through antagonisms (zero-sum games that benefit some at the cost of others) but includes all of them in synergisms (in which the composition of parts achieves a benefit shared by every part). Being an objective community of destiny denies humanity any rationality of competition that does not serve co-operation for the common good.

Since Panhumanism includes the concern for the next generations being provided with at least the same developmental potential as the current generation, it implies principles for the social relations with nature and technology in the sense of German philosopher Hans Jonas [6].

3. The Logic of Hubris needs to be replaced by Anthroporelational Humanism

As long as the social relations among humans remain egocentric, the social relations with nature are determined by a logic of hubris. As long as social systems do not take care of other co-existing social systems, they also do not take care of co-existing systems of natural origin. Such a logic undermines the ecological foundations of human life on Earth. It is a logic of extractivism and contamination of the biosphere and the geosphere, yielding a decrease in biodiversity, an increase in the heating of the planet, and much more.

The book of the US-American biologist Rachel Carson, "Silent Spring" (1962) [7], was the trigger of the global environmental movement. Concerned with the risks of the misuse of chemical pesticides, it brought to the fore the role of the social relations with nature in the age of global challenges.

Again, the Covid-pandemic is an example of the risks of repressing and penetrating wild nature.

Hubris must give way to a logic of Anthroporelational Humanism [8, 9]. Anthroporelational Humanism means that humans relate to nature without giving up their specific position of an animal sociale or zoon politikon when giving up their anthropocentric perspective. In a systemic perspective, humans as self-organizing systems need to concede self-organizing capacities to non-human natural self-organizing systems, according to their place in physical and biological evolution, when integrating them with their panhuman social systems. They concede intrinsic values to them in a staged way. Thus, humans are prompted to relativize their own positions. Social relations with nature, while taking human values into consideration, need to do justice to natural intrinsic values. As the objective function of a panhuman system is the global common good, so the objective function of anthroporelationalism is an alliance with nature in the sense of the German philosopher Ernst Bloch [10].

Anthroporelational Humanism implies principles for the social relations with technology.

4. The Logic of Megalomania needs to be replaced by Digital Humanism

As long as the social relations with nature are still hubristic, the social relations with technology are determined by a logic of megalomania. As long as social systems do not take care of co-existing systems of natural origin, they also allow themselves the production and use of tools that are not designed to take care of those systems. Such a logic hypostatizes the effectivity of technology beyond any rational measure. It is a logic of omnipotence ascribed to the technosphere, and it has yielded the deployment of nuclear first-strike capabilities, the use of chemical weapons, the waging of information wars, the development of autonomous artificial intelligence, surveillance, trans- and posthuman developments, and much more.

This global challenge became clear in 1945 and gave rise to the international peace movement, documented by the Russell-Einstein Manifesto in 1955 [11].

The Covid-pandemic, however, belies the omnipotence of technology, since many states have been experiencing the limits of their health services, which were not prepared for a pandemic despite prior warnings. Though zero-Covid strategies are followed by some states, in many other states politicians, economic interests and misinformed people are hindering the acceptance of the recommendations of scientists.

Megalomania must give way to a logic of Digital Humanism [12, 13]. Digital Humanism is the logic of civilizational self-limitation, as the Austrian-born writer Ivan Illich coined it [14]: a limitation of technological tools to the role of serving anthroporelational and panhuman purposes only. Digitalization can provide solutions for boosting those purposes, since any information technology helps smoothen frictions in the functioning of any technology. But digitalization must be ethically designed and the tools cultivated. The observance of the precautionary principle, the "Prevalence of the Bad over the Good Prognosis" [6] (p. 31), is a sine qua non.

5. Conclusion

The becoming of humankind is not yet finished. The ushering in of the age of global challenges is evidence of a Great Bifurcation of anthroposociogenesis that needs to be passed by a Great Transformation. In order to accomplish the next step in social evolution, the social relations among humans, the social relations with nature and the social relations with technology have to undergo a decisive change. The logics those relations have been following have brought about social evolution so far, but they are no longer functional. They need to be replaced by logics adapted to the condition of humanity as an objective community of destiny. Humanity is on the point of transforming into a meta- or suprasystem, becoming a subject of its own. Evolutionary Systems Theory, the Unified Theory of Information, a critical social systems theory and a critical information society theory provide cornerstones for an understanding of those processes.

References

1. Hofkirchner, W. Emergent Information; World Scientific: Singapore, 2013.

2. Morin, E.; Kern, A.-B. Terre-Patrie; Seuil: Paris, France, 1993.

3. Fanon, F. Les damnés de la terre; Maspero: Paris, France, 1961.

4. Morin, E. Abenteuer Mensch. Freitag. 2021, 28. Available online: https://www.freitag.de/autoren/the-guardian/abenteuer-mensch (accessed on 25 August 2021).

5. Convivialist International. The second convivialist manifesto. Civic Sociology. 2020. Available online: https://online.ucpress.edu/cs/article/1/1/12721/112920/THE-SECOND-CONVIVIALIST-MANIFESTO-Towards-a-Post (accessed on 25 August 2021).

6. Jonas, H. The imperative of responsibility: In search of an ethics of the technological age; University of Chicago: Chicago, IL, USA, 1984.

7. Carson, R. Silent spring; Houghton Mifflin: Boston, MA, USA, 1962.

8. Deutsches Referenzzentrum für Ethik in den Biowissenschaften. Anthroporelational. Available online: https://www.drze.de/im-blickpunkt/biodiversitaet/module/anthroporelational (accessed on 25 August 2021).

9. Barthlott, W., Linsenmair, K.E., Porembski, S. (eds.). Biodiversity: structure and function, vol. ii; EOLSS: Oxford, UK, 2009.


5:45-6:00 UTC

Sat 18th Sep

48. The Philosophy – Science Interaction in Innovative Studies

Yixin Zhong

Beijing University of Posts and Telecommunications, Beijing 100876, China,

Abstract:

A deep investigation of the studies of the information discipline has been made, and a serious problem related to the paradigm practically employed since the discipline came into being has been found: the paradigm employed has not been the one for the information discipline but the one for the physical discipline. Because of this historical mistake, the information discipline has been divided into a number of sub-disciplines, mutually independent of each other. This has brought a lot of difficulties to the development of the discipline. For the sake of a healthy advancement of the information discipline, the paradigm change urgently has to be carried out.

Key Words: Paradigm Ex-Leading, Scientific View, Methodology, and Information Discipline

1. The Definition of Paradigm for a Scientific Discipline

The paradigm for a scientific discipline is defined as the integrity of the scientific view and the methodology for that discipline, in which the scientific view defines what the essence of the discipline is, while the related methodology defines how to determine the scientific approach to the studies of the discipline. Thus, the paradigm for a scientific discipline delimits the norm that studies in that discipline should follow.

As a result, the studies of each category of scientific discipline should employ its own paradigm. Therefore, studies in the physical discipline should employ the paradigm for the physical discipline, whereas studies in the information discipline should employ the paradigm for the information discipline.

2. The Role The Paradigm Plays in Scientific Studies

The paradigm as defined above plays the role of leading the studies of the related scientific discipline. As a matter of fact, whether the studies of a discipline succeed or fail in practice depends on whether the paradigm employed for the discipline is correct. So, if the paradigm for the information discipline is employed, the studies of the information discipline will make successes no matter how difficult the information discipline is. Otherwise, the studies of the information discipline will encounter a series of misunderstandings and setbacks.

3. The Real Situation Concerning The Paradigm in Information Discipline

It is a very surprising discovery of this in-depth investigation that the paradigm employed for the study of the information discipline has so far been the one for the physical discipline (see Table 1), not the one for the information discipline (see Table 2 below).

Table 1 Major Features for the Paradigm of Physical Discipline

  1. Scientific View

    • Object for study: Physical system with no subjective factor

    • Focus of study: The structure of physical system

    • Property of the object: Deterministic in nature

  2. Methodology

    • General approach: Divide and conquer

    • Means for description/analysis: Purely formal methods

    • Means for decision-making: Form matching


Table 2 Major Features for the Paradigm of Information Discipline

  1. Scientific View

    • Object for study: Info process within subject-object interaction

    • Focus of study: To achieve the goal of double win (subject-object)

    • Property of the object: Non-deterministic in nature

  2. Methodology

    • General approach: Methodology of Information Ecology

    • Means for description/analysis: Form-utility-meaning trinity

    • Means for decision-making: Understanding-based


The use of the paradigm for the physical discipline in the study of the information discipline is surely the root of all problems related to the study of the information discipline. The major problems existing in the studies of the information discipline include at least the following: (1) diversity without unity in theory, separation among the studies of information in various sectors, and separation between the studies of information and the studies of intelligence, all due to the physical methodology of "divide and conquer"; (2) merely formal analysis in the studies of information, knowledge, and intelligence, without considering the high importance of subject factors, due to the physical methodology of "purely formal analysis".

4. Conclusion

The paper presents an appeal that the paradigm practically executed so far in the studies of the information discipline worldwide should be shifted as soon as possible.

References

[1] Kuhn, T. S. The Structure of Scientific Revolutions [M], University of Chicago Press, 1962

[2] Zhong, Yixin. From The Methodology of Mechanical Reductionism to The One of Information Ecology [J], Philosophy Analysis, No. 5, pp. 133-144, 2017

[3] Burgin, Mark and Zhong, Yixin. Methodology of Information Ecology in the Context of Modern Academic Research [J], Philosophy Analysis, pp. 119-136, 2019

[4] Zhong Yixin. Principles of Information Science [M]. Beijing: BUPT Press, 1988

[5] Zhong Yixin. Universal Theory of AI [M]. Beijing: Science Press, 2021

6:00-6:15 UTC

Sat 18th Sep

49. Information and the Ontic-Epistemic Cut

Joseph Brenner

Switzerland

Abstract:

1. Introduction

The standard definitions of ontology and epistemology are the studies of, respectively, 1) the real, what is/exists and what we can know, and 2) how we know and how we validate that what we know is correct. If so, does information belong more in the domain of science or philosophy, the abstract or the concrete? Is it primarily ontic, part of the real world, or epistemic, part of knowledge? While the question may not be significant in all areas of information research, it may be relevant for anything that involves the value of such research for the common good of society, including the establishment of a Global Sustainable Information Society. I suggest that it is necessary to identify which parts of contemporary information science and information philosophy are relevant to that objective.

2. Information

Many definitions exist of what information is, what kinds of information exist and what their properties are. Included are concepts of the relation of information to data, knowledge and meaning. Most of the time, these concepts of information are not related to ontology and epistemology. I believe they should be, as I will try to show in this paper.

3. The Ontic-Epistemic Cut

This concept reflects the general view that ontological and epistemological perspectives are disjunct, their separability residing, among other things, in the subjective, 1st-person content of epistemology as opposed to ontology. Ontology is frequently, not to say generally, limited to a systematic classification or categorization of the concrete 'furniture' of the world, without expressing the features of interaction and change. One example is the epistemic cut in biology, formulated by Howard Pattee to take account of the apparent matter-symbol distinction.

4. Information and Logic in Reality

I have developed a concept of logic as a non-semantic Logic in Reality (LIR), grounded in physics, a new non-propositional logic observable in the evolution of real processes. Information is such a process evolving according to a general Principle of Dynamic Opposition (PDO). The concept of information as process is consistent with an overlap between physical-ontic and non-physical-epistemic aspects of information.

5. The Informational Convergence of Ontology and Epistemology

A recent article with Wu Kun on the convergence of science and philosophy under the influence of information suggested a similar convergence of ontology and epistemology. It suggested a restatement of the ontic-epistemic distinction in informational terms, reflecting the underlying principle of dynamic opposition in nature as formulated in Logic in Reality. It could be related to Deacon's ideas about information as absence and Poli's ideas about the ontology of what is not, or is not fully, there. Wolfgang Hofkirchner's concept of a Praxio-Onto-Epistemology is relevant to the proposed convergence.

6. Directions for Further Work

Information thus expresses non-separability as a fundamental scientific-philosophical as well as logical principle. I see two directions for the applicability of this principle, applied to

1) the ontic-epistemic distinction as discussed in a new synthetic natural philosophy in its relation to science;

2) the conceptual structure and role of information in a potential sustainable information society.

Keywords: Information; The Ontic-Epistemic Cut; Logic in Reality

6:15-6:30 UTC

Sat 18th Sep

50. A Chase for God in the Human Exploration of Knowledge

Kun Wu, Kaiyan Da, Tianqi Wu

Department of Philosophy, School of Humanities and Social Sciences, Xi’an Jiaotong University, Xi’an 710049, China;

Abstract:

In the human exploration of knowledge (including philosophy and science), there is always a chase for God (a strong desire for pursuing perfection, because God is considered the only infinite, self-caused, and unique substance of the universe), which is a simple and extreme thinking paradigm that people adopt in the pursuit of complete idealization, infinite eternity, and absolute ultimateness. On the one hand, it is good for people to chase after God, because it guides them to pursue love, beauty, and all perfect things in theory and practice; on the other hand, this kind of thinking paradigm has obvious limitations too, because the existence and evolution of the world are very complex, full of multi-dimensional, multi-layered, and multidirectional uncertainties and randomness interwoven and interacting with each other. In fact, the world is not merely a counting machine.

Keywords: knowledge, philosophy, science, human, God

1. A Chase for God in Human Philosophical Thinking

The pursuit of perfect wisdom and ability is one of the oldest traditions of mankind. The original paradigm of this tradition is always associated with God. Philosophers of ancient Greece realized very early that human existence and human thought are limited, so they attributed perfect wisdom to God.

Heraclitus emphasized that "there is one wisdom, to understand the intelligent will by which all things are governed through all", while pointing out that only God has the "wisdom by which all things are governed through all". Similarly, Socrates claimed that mankind does not have wisdom, but can acquire it by obeying the will of God. Plato, in his theory of the Forms, held that the Forms are responsible for knowledge and certainty and are grasped by pure reason, and that reason teaches that God is perfect. Feuerbach shows that in every aspect God corresponds to some feature or need of human nature. As he states: "if man is to find contentment in God, he must find himself in God". Thus, God is nothing else than human: he is, so to speak, the outward projection of a human's inward nature.

From the views of the above philosophers, we can conclude that in their eyes God is the representation of perfection: universal, eternal, and absolute in his full wisdom and infinite ability. Everything in the material world, by contrast, including men and animals, is special, temporal, and relative, with limited wisdom and ability. Thus, God becomes the embodiment of absolute truth.

2. A Chase for God in Modern Science

In the process of the development of modern science, there is also a chase for God, for example, Newton's first cause: God. We know that Newtonian mechanics is based on Newton's belief in God. He asserted that "God in the beginning formed matter in solid, massy, hard, impenetrable, moveable particles, of such sizes and figures, and with such other properties, and in such proportion to space, as most conduced to the end for which he formed them; and that these primitive particles, being solids, are incomparably harder than any porous bodies compounded of them; even so very hard, as never to wear or break in pieces; no ordinary power being able to divide what God himself made one in the first creation" [1].

In this paragraph by Newton, the issue is not one of establishing the reality of a God whose existence might be in doubt; rather, the aim is to learn more about God and to get to know him better. Newton writes here not only of belief in God, but of knowledge of God.

3. A Chase for God in Information Science Including Artificial Intelligence

In information science, including the research methods, characteristics, and possible future developments of the practical research and theoretical assumptions of AI, there is also a tendency to chase after God.

In the early stage of information science, because of the success of computationalism, many researchers believed in Pythagoras' philosophy, arguing that all intelligent behaviors can be realized by number and computation. The famous physicist John Wheeler wrote an article in 1989 titled Information, Physics, Quantum: The Search for Links. In this article, he put forward a new thesis: "It from bit". A similar expression of this thesis is based on some other deep thoughts of his: "I think of my lifetime in physics as divided into three periods. In the first period," "I was in the grip of the idea that Everything Is Particles." "I call my second period Everything Is Fields." "Now I am in the grip of a new vision, that Everything Is Information." [2]

Besides, with the development of the technology of deciphering, replacing, and recombining genes, along with the research results of nanotechnology, some researchers believe that humans can produce anything, including organics, inorganics, even life and intelligence. Based on this, many researchers have begun to talk about the "superman" and the possibility of immortality.

References

1. Henry, J. Enlarging the Bounds of Moral Philosophy. Notes and Records: the Royal Society Journal of the History of Science. 2017, 71(1), 21–39.

2. Wheeler, J. A. Geons, Black Holes, and Quantum Foam: A Life in Physics. W. W. Norton & Company, New York, USA, 1998.

6:30-6:45 UTC

Sat 18th Sep

51. The Second Quantum Revolution and its Philosophical Meaning

Hongfang L.

School of Marxism, University of Chinese Academy of Sciences

Abstract:

We have a strong desire to understand everything from a single or very few origins. Driven by such a desire, physics theories were developed through a cycle of discoveries: unification, more discoveries, bigger unification. Here, we would like to review the development of physics and its four revolutions, especially the second quantum revolution and its philosophical meaning: it realizes a unification of force and matter by quantum information. In other words, quantum information unifies matter. It from qubit, not bit.

The first revolution in physics is the mechanical revolution, which describes all matter as formed by particles obeying Newton's laws, with interactions instantaneous over distance. The success and the completeness of Newton's theory gave us a sense that we understood everything.

The second revolution in physics is the electromagnetic revolution. The true essence of the electromagnetic revolution is the discovery of a new form of matter, wave-like matter: electromagnetic waves, which obey the Maxwell equation and are very different from the particle-like matter governed by the Newton equation. Thus, the sense that Newton's theory describes everything is incorrect: Newton's theory does not apply to wave-like matter. Moreover, unlike particle-like matter, the new wave-like matter is closely related to a kind of interaction, the electromagnetic interaction. In fact, the electromagnetic interaction can be viewed as an effect of the newly discovered wave-like matter. Wave-like matter causes interaction.

The third revolution in physics is the relativity revolution, which achieves a unification of space and time, mass and energy, as well as interaction and geometry, and a unification of gravity and space-time distortion. Since gravity is viewed as a distortion of space, and since the distortion can propagate, Einstein discovered the second wave-like matter: the gravitational wave. Since that time, the geometric way of viewing our world has dominated theoretical physics.

However, such a geometric view of the world was immediately challenged by new discoveries from the microscopic world. The experiments in the microscopic world tell us that not only is Newton's theory incorrect, even its relativistic modification is incorrect. This is because Newton's theory and its relativistic modification are theories for particle-like matter. But through experiments on very tiny things, such as electrons, people found that particles are not really particles: they also behave like waves at the same time. Similarly, experiments reveal that light waves behave like beams of particles (photons) at the same time. So the real matter in our world is not what we thought it was. The Newton theory for particle-like matter and the Maxwell/Einstein theories for wave-like matter cannot be the correct theories of matter. We need a new theory for the new form of existence: particle-wave-like matter. The new theory is the quantum theory that explains the microscopic world. Quantum theory unifies particle-like matter and wave-like matter.

The quantum revolution is the fourth revolution in physics, which tells us that there is neither particle-like matter nor wave-like matter. All the matter in our world is particle-wave-like matter: particle-like matter = wave-like matter. In other words, quantum theory reveals the true existence in our world to be quite different from the classical notion of existence in our mind. What exist in our world are not particles or waves, but things that are both particle and wave. Such a picture is beyond our wildest imagination, but it reflects the truth about our world and is the essence of quantum theory. The quantum theory represents the most dramatic revolution in physics.

After realizing that even the notion of existence is changed by quantum theory, it is no longer surprising to see that quantum theory also blurs the distinction between information and matter. It appears that we are now entering a new stage of the second quantum revolution, where qubits emerge as the origin of everything. It is known as "it from qubit", which realizes a unification of force and matter by quantum information. The qubit is the simplest element of quantum information. In fact, this implies that information is matter, and matter is information: matter and space = information (qubits); quantum information unifies matter. This is because frequency is an attribute of information. Quantum theory tells us that frequency is energy, E = hν, and relativity tells us that energy is mass, m = E/c². Both energy and mass are attributes of matter. So matter = information. That is, the essence of quantum theory is that the energy-frequency relation implies that matter = information. This represents a new way to view our world.
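The chain of identities behind this argument can be written out schematically (a LaTeX rendering of exactly the relations stated above, with ν the frequency; nothing is added):

```latex
% Planck: frequency, an attribute of information, carries energy
E = h\nu
% Einstein: energy carries mass
m = \frac{E}{c^{2}}
% hence an attribute of information fixes attributes of matter
m = \frac{h\nu}{c^{2}}
```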

The above point of view of "matter = information" is similar to Wheeler's "it from bit", which represents a deep desire to unify matter and information. However, in our world, "it" is very complicated. Most "it" are fermions, while "bit" is bosonic. Can fermionic "it" come from bosonic "bit"? The statement "matter = information" means that those wave equations can all come from qubits. In other words, we know that elementary particles (i.e. matter) are described by gauge fields and anti-commuting fields in a quantum field theory. Here, we try to say that all those very different quantum fields can arise from qubits. Is this possible? What is the microscopic structure of space? What kind of microscopic structure can, at the same time, give rise to waves that satisfy the Maxwell equation, the Dirac/Weyl equation, and the Einstein equation?

According to Wen Xiaogang and others, since our space is dynamical medium, the simplest choice is to assume the space to be an ocean of qubits. Scientists have given such and ocean a formal name “qubit ether”. Then the matter, i.e. the elementary particles, are simply the waves, “bubbles” and other defects in the qubit ocean (or qubit ether). This is how “it from qubit” or “matter =information”. We need to find a qubit ether with a microscopic structure.

However, for a long time scientists did not know how waves satisfying the Maxwell equation or the Yang-Mills equation could emerge from any qubit ether. So, even though quantum theory strongly suggests "matter = information", trying to obtain all elementary particles from an ocean of simple qubits was regarded as impossible by many and never became an active research effort.

So the key to understanding "matter = information" is to identify the microscopic structure of the qubit ether (which can be viewed as space). The microscopic structure of our space must be very rich, since our space can carry not only gravitational waves and electromagnetic waves but also electron waves, quark waves, gluon waves, and the waves that correspond to all elementary particles. Is such a qubit ether possible?

According to Wen Xiaogang, in condensed matter physics the discovery of fractional quantum Hall states brought us into a new world of highly entangled many-body systems. When strong entanglement becomes long-range entanglement, the systems possess a new kind of order, topological order, and represent new states of matter. Wen Xiaogang finds that the waves (the excitations) in a topologically ordered state can be very strange: they can be waves that satisfy the Maxwell equation, the Yang-Mills equation, or the Dirac/Weyl equation. So the impossible becomes possible: all elementary particles can emerge from a long-range entangled qubit ether.

The above picture of "it from qubit" is very different from Wheeler's "it from bit", because here the observed elementary particles can only emerge from a long-range entangled qubit ether. The requirement of quantum entanglement implies that "it cannot come from bit". In fact, "it from entangled qubits".

This leads scientists to wonder whether photons, electrons, gravitons, etc., are also collective motions of a certain underlying structure that fills the entire space. They may not have smaller parts. Looking for the smaller parts of photons, electrons, and gravitons to gain a deeper understanding of those elementary particles may not be the right approach. That is, the reductionist approach is unsuitable in the quantum world.

Here, scientists use a different approach, the emergence approach, to gain a deeper understanding of elementary particles. In the emergence approach, we view space as an ocean of qubits, i.e. a qubit ether. The empty space (the vacuum) corresponds to the ground state of the qubit ether, and the elementary particles (which form the matter) correspond to the excitations of the qubit ether. That is, in the emergence approach there is only one form of "matter": the space (the vacuum) itself, which is formed by qubits. What we regard as matter are distortions and defects in this "space-matter".

According to Wen Xiaogang, if particles/qubits form large oriented strings, and if those strings form a quantum liquid state, then the collective motion of the so-organized particles/qubits will correspond to waves described by the Maxwell equation and the Dirac equation. The strings in the string liquid are free to join and cross each other. As a result, the strings look more like a network. For this reason, the string liquid is actually a liquid of string-nets, which is called a string-net condensed state.

We see that qubits that organize into a string-net liquid naturally explain both light and electrons (gauge interactions and Fermi statistics). In other words, string-net theory provides a way to unify light and electrons. So the fact that our vacuum contains both light and electrons may not be a mere accident. It may actually suggest that the vacuum is indeed a long-range entangled qubit state, whose order is described by a string-net liquid.

We would like to stress that the string-nets are formed by qubits. So in the string-net picture, both the Maxwell equation and the Dirac equation emerge from a local qubit model, as long as the qubits form a long-range entangled state (i.e. a string-net liquid). In other words, light and electrons are unified by the long-range entanglement of qubits. Information unifies matter!

Gauge fields are fluctuations of long-range entanglement, and string-nets are a description of the patterns of long-range entanglement. According to Wen Xiaogang, using long-range entanglement and its string-net realization, we can obtain the simultaneous emergence of both gauge bosons (as string density waves) and fermions (as string ends) in any dimension and for any gauge group. This result gives us hope that maybe all elementary particles are emergent and can be unified using local qubit models. Thus, long-range entanglement offers us a new option for viewing our world: maybe our vacuum is a long-range entangled state. It is the pattern of the long-range entanglement in the vacuum that determines the content and the structure of the observed elementary particles.

Moreover, the string-net unification of gauge bosons and fermions is very different from the superstring theory of gauge bosons and fermions. In string-net theory, gauge bosons and fermions come from the qubits that form the space, and "string-net" is simply the name describing how the qubits are organized in the ground state. So a string-net is not a thing but a pattern of qubits. In string-net theory, the gauge bosons are waves of collective fluctuations of the string-nets, and a fermion corresponds to one end of a string. This is an emergence approach. In contrast, in superstring theory gauge bosons and fermions come from strings: both correspond to small pieces of strings, and different vibrations of the small pieces of strings give rise to different kinds of particles. Superstring theory is still a reductionist approach.

To summarize, topological order and long-range entanglement give rise to new states of quantum matter. Topological order, or more generally quantum order, has many new emergent phenomena, such as emergent gauge theory, fractional charge, etc.

Keywords: quantum revolution; quantum field theory; qubit

6:45-7:00 UTC

Sat 18th Sep

52. Information and Disinformation with their Boundaries and Interfaces

Gordana Dodig-Crnkovic 1,2

1 Department of Computer Science and Engineering, Chalmers University of Technology and the University of Gothenburg, 40482 Gothenburg, Sweden;

2 School of Innovation, Design and Engineering, Mälardalen University, 721 23 Västerås, Sweden

Abstract:

This paper presents highlights from the workshop Boundaries of Disinformation held at Chalmers University of Technology. It addresses the phenomenon of disinformation in its historical and current forms. Digitalization and hyperconnectivity have been identified as leading contemporary sources of disinformation. In the effort to counteract disinformation, it is important not to forget the need for a balance between individual freedom of expression and the societal, institutionalized thinking used to prevent the spreading of disinformation. The debate about this topic must involve major stakeholders.

Keywords: information; disinformation; demarcation

1. Introduction

Last year a workshop was held at Chalmers University of Technology on the topic of Boundaries of Disinformation [1]. It gathered Swedish and international thinkers representing different approaches to the diverse topics of Artificial Intelligence (Max Tegmark, physicist and AI researcher, MIT), Democracy (Daniel Lindvall, sociologist and independent researcher), Epistemology (Åsa Wikforss, philosopher, Stockholm University), Ethics (Gordana Dodig-Crnkovic, Chalmers University of Technology), Human-Computer Interaction (Wolfgang Hofkirchner, Vienna University of Technology) and Law (Chris Marsden, University of Sussex). The workshop was organized and moderated by Joshua Bronson (philosopher) and Susanne Stenberg (legal expert within R&D) from RISE, Research Institutes of Sweden.

The workshop topic was introduced by Bronson and Stenberg. Disinformation was presented as a significant problem of contemporary societies, bringing "the challenge of dealing with disinformation magnified by digitalization and increasing use and dependence on AI in more and more aspects of our society".

Disinformation is typically defined as purposefully spreading false information to deceive, cause harm to, or disrupt an opponent. Disinformation can be generated and spread by individuals, groups, organizations, companies, or governments, and equally it can target any of these. According to Bronson and Stenberg, today we have effectively lowered the barrier for content creation and dissemination to such an extent that traditional gatekeepers, such as governments, universities, publishers, and media, are unable to steer information and content creation.

The aim of the workshop Boundaries of Disinformation was to map the edges of this problem.

In what follows I will present my take on the problem of disinformation, its relation to information and its role in society, after having participated in the workshop and learned a great deal from my colleagues discussing various manifestations of disinformation and possibilities of its control.

2. Phenomenon of Disinformation, Old and Omnipresent

Historical examples of disinformation are many, as illustrated by the following three old examples of "fake news": the Donation of Constantine from the 8th century, a sanctioned surrender of the Hospitallers of the Knights Templar in the 1140s, and the story from 1782 when Benjamin Franklin created a fake issue of a Boston newspaper, as reported in [2]. War propaganda, political propaganda and counterpropaganda are classical cases of disinformation and misinformation.

We meet information and disinformation daily on micro- (individual), meso- (group) and macro- (global) scales.

3. What is New?

As never before, content production today has become simple and affordable to all and, consequently, it has slipped out of societal control.

“The idea that different people can get a piece of paper that states the same thing is powerful. It's equalizing. It's easy to trust the information in this case because accepting that a huge group of people are being misled is, well, unbelievable. There isn't a way to prevent fake news entirely, but it starts with critical reading and conversations.” [2] Not only has the general public ("ordinary people") got a voice that can reach around the globe, but politicians too can directly tweet to their followers, circumventing democratic gatekeepers.

Proposed automated means and Artificial Intelligence used for fighting disinformation bring their own challenges, as presented in the overview of self-regulation, co-regulation, and classic regulatory responses currently adopted by social platforms and EU countries [3], connecting the technological, legal, and social dimensions.

4. Digitalization and Hyperconnectivity as Sources of Disinformation

One consequence of new online content production and communication is the phenomenon of “informational bubbles” – isolated groups that share information and values independently of the rest of the world. Such groups can easily assume extreme positions, as with anti-vaxxers or groups claiming that the Earth is flat.

Social networks, electronic web-based media, digital platforms, and web bots provide ways for disinformation to develop uncontrollably, in dangerous ways and to dangerous proportions.

New technologies make content creation and dissemination easy, avoiding the traditional gatekeeping mechanisms of publishers, (predefined) media formats, (existing) institutions, universities and governments.

Joshua Bronson and Susanne Stenberg asked the following questions:

Can we establish new gatekeepers who would:

  • tell the difference between managing disinformation and censorship

  • establish the relationship between facts and disinformation

  • find out if and when information can be traced

  • establish the possibilities and limits of AI solutions to disinformation

  • increase media literacy in our radically changing digital landscape

  • help frame laws to protect freedom of expression while guarding against disinformation

Boundaries of Disinformation

There are a number of questions that must be answered to understand disinformation, its role, and its means of production, such as:

  • Who decides what is “the case”/ “the fact”/ ”the truth”?

  • What is ”authoritative information” / “trustworthy information”?

  • Who are authorities and for what?

It is important, in the effort to identify and counteract disinformation, not to forget the need for a boundary/balance between individual freedom and societal institutionalized thinking. We need to better understand and formulate the relationship between Authority, Freedom, and Responsibility in this context.

Moving towards a more truth-based society is about elucidating and explicating

  • Not only: “How?” (AI, computers and media literacy, etc.)

  • But also: “Why?” (philosophy, ethics, law, critical thinking, etc.) which is a question for democracies to decide.

Interfaces between Information and Disinformation

As Wu argues [4], the philosophy and science of information interact and converge within the sciences. Consequently, in parallel with Information vs. Disinformation, there are related questions of demarcation between Science and Pseudoscience, in Popper's sense [5], which have caused much discussion among philosophers of science.

There are cases in the history of science in which false information/knowledge (false for us here and now) has led to the production of true information/knowledge (true for us here and now). The whole development of science can be seen as a refinement and replacement of inadequate knowledge by the more adequate one. A classic example of the mechanism leading to new discoveries and new insights is serendipity, making unexpected discoveries by accident.

The pre-condition for the discovery of new scientific ‘truths’ (where the term ‘true’ is used in its limited sense to mean ‘true to our best knowledge’) is not that we start with a critical mass of absolutely true information, but that in continuous interaction (feedback loop) with the world we refine our set of (partial) truths. With good reason, truth is not an operative term for working scientists. Instead, it is the notion of correctness, which refers to a given reference frame. Each change of the frame of reference (as in scientific revolutions, such as the change from the geocentric to the heliocentric view) leads to a different understanding of what is “true” and “correct”.

Interestingly, Christopher Columbus had, for the most part, incorrect information about his proposed journey to India. He never saw India, but he made a great discovery. The "discovery" of America was not incidental; it was a result of a combination of many favorable historical preconditions combined with both true and false information about the state of affairs. Similar discoveries are constant occurrences in science.

“Yet libraries are full of ‘false knowledge’”, as Floridi points out in his Afterword [6]. And yet we find them useful.

How much should we be worried? Current debates about Covid-19 vaccines show how harmful disinformation (in this case, about the danger of vaccines) can be for society. What can be done to assure and maintain the correctness and trustworthiness of information? Whose responsibility is it to keep media free from misinformation, disinformation, and malinformation? It is very important that we discuss this here and now, broadly involving diverse stakeholders, while the rapid development of AI makes content production increasingly simple, available, and vastly abundant.

References

  1. http://www.gordana.se/work/PRESENTATIONS-files/20201202-ETHICS-of-DISINFORMATION.pdf

  2. Three Historical Examples of “Fake News”. https://blogs.scientificamerican.com/anthropology-in-practice/three-historical-examples-of-fake-news/

  3. Automated tackling of disinformation. https://www.europarl.europa.eu/RegData/etudes/STUD/2019/624278/EPRS_STU(2019)624278_EN.pdf

  4. Wu, K. The Interaction and Convergence of the Philosophy and Science of Information. Philosophies 2016, 1(3), 228-244. https://doi.org/10.3390/philosophies1030228

  5. Popper, K. The Logic of Scientific Discovery (2nd ed.). London: Routledge. 2005

  6. Floridi, L. LIS as Applied Philosophy of Information: A Reappraisal. Library Trends 2004, 52(3), 658-665.

6:45-7:00 UTC

Sat 18th Sep

53. A Quantum Manifestation of Information

Tian’en Wang

Shanghai University

Abstract:

Quantum information studies still have a way to go. The “information quantum mechanics” of today refers to studies of quantum physics and has little to do with understanding information itself; yet starting from the view of information is indeed a key to deepening our understanding of quantum physics.

Since humans cannot directly perceive microscopic objects, the human being, as an advanced receiver, cannot avoid self-involvement at the macro scale. In this vein, the dictum that “where there is intention there is information” no longer applies. Instead, through approaches similar to stepping back, we can perceive scenarios that were previously hard to perceive; and something unprecedented emerges in the quantum domain: while the picture of matter/energy blurs, that of information gradually comes into focus. Quantum information itself is a typical receptive relation.

From a macroscopic perspective, regarding the green of leaves as an objective fact is like covering your eyes with leaves so that you can see nothing else. It is not just the green: our concept of the shape of a leaf is also “perceived” by the human eye. For a receptor that perceives differently from the human eye, and considering that the space a leaf occupies is much larger than the sum of the spaces occupied by its atoms, the leaf might be perceived as something similar to how we perceive the solar system; the leaf could then be regarded as something “hollow and spacious”, which is totally different from the characteristics of the leaf in our concept – not to mention that the space an entity occupies is relative, and the positions referred to in observations are mutually dependent. In addition, space itself is something stipulated by the human receiver according to perception, which is easily neglected because of its apparent objectivity. In this vein, the leaf in our eyes is “perceived”, and as an effect of the perception it is information as receptive relation; a photo of the leaf works in a way similar to the human eye, that is, it is also the result of encoding, a matter/energy or conceptual (symbols or codes) existence that can serve as a secondary source. Being clear about this is of great importance for the understanding of information as well as of quantum mechanics.

The information understanding of quantum mechanics is closely related to the special informational correlations of quantum mechanics, and if we carry it through, we may even reach an informational quantum theory in its true sense. With regard to purely matter/energy natural science, “the sorrow of physics” comes from the natural limitations of human perception and the shackles of anthropological characteristics. Luckily, shapes and forms play a lesser role (if any role at all) in the domain of information. In today's information domain there is a need similar to that of Newton's time: we need modern Newtons to restart the journey from the level of information and lead our cognition by assiduously depicting the concept of information.

In the history of human cognition, the establishment of every significant theory goes through a process in which a vague fundamental concept gradually comes into focus and becomes a precise scientific concept. As with basic concepts such as “energy” and “movement”, the concept of “information” must also go through such a process of clarification. Since information is far more complex, the process can be so winding that, for lack of mature conditions, we have missed the clear presentation of information by quantum mechanics many times.

The clear presentation of information in the domain of quantum mechanics, with its special characteristics, can easily be neglected. On the one hand, the field is specialized and not many can systematically understand its basic concepts; on the other, the quantum domain has always harbored a large number of mysteries and conundrums, so experts have been engaged in exploring quantum mechanics and the related scientific and philosophical problems. In addition, information is presented via the quantum thanks to the indirectness of the observer's perception, which in turn conceals the way to understanding information as receptive relation, because quantum mechanics is presented in mathematical forms. This gives the presentation of information in quantum mechanics a totally different role from its presentation in big data as our understanding of information develops. Nevertheless, quantum theory has a unique advantage for the understanding and study of information, namely the empirical scientific correlations of information.


Keywords: quantum mechanics; quantum information; physics

7:00-7:15 UTC

Sat 18th Sep

54. Computation and Eco-Cognitive Openness: Locked Strategies, Unlocked Strategies, and the Dissipative Brain

Lorenzo Magnani

Department of Humanities, Philosophy Section and Computational Philosophy Lab, University of Pavia, Pavia, Italy

Abstract:

Locked and unlocked strategies are at the center of my presentation, as ways of shedding new light on the cognitive aspects of computational machines. The character and the role of these cognitive strategies, which occur both in humans and in computational machines, are indeed strictly related to the generation of cognitive outputs, which range from weak to strong levels of knowledge creativity. I maintain that these differences lead to important consequences when we analyze computational AI programs, such as AlphaGo/AlphaZero, which aim at performing various kinds of abductive hypothetical reasoning. In these cases, the programs are characterized by locked abductive strategies: they deal with weak (even if sometimes amazing) kinds of hypothetical creative reasoning because they are limited in what I call eco-cognitive openness, which instead qualifies human cognizers who perform higher kinds of abductive creative reasoning, where cognitive strategies are instead unlocked. This special kind of “openness” is physically rooted in the constitutive character of the human brain as an open system permanently coupled with the environment (an “open” or “dissipative” system): its activity is the continuous attempt to reach equilibrium with the environment in which it is embedded, and this interaction can never be switched off without producing severe damage to the brain. This means that we cannot even think of the system deprived of its physical essence, which is its openness (even from the physiological point of view an isolated brain is considered a dead brain; namely, if “closed”, it does not exist as a brain). Consequently, in the brain, contrary to the computational case, ordering is not imported from the outside thanks to what I have called “computational domestication of ignorant entities”, but is the outgrowth of an “internal” dynamical process of the system.

Keywords: computation; cognitive science; philosophy of information; abduction; eco-cognitive openness; dissipative brain.


References

  1. L. Magnani (2020), Computational domestication of ignorant entities. Unconventional cognitive embodiments, Synthese, Special Issue edited by L. Magnani and S. Arfini “Knowing the Unknown: Philosophical Perspectives on Ignorance”, online first.


Block 2:

13:00-16:00 UTC

Sat 18th Sep

ICPI

Wu Kun

13:00-13:15 UTC

Sat 18th Sep

55. In what sense should we talk about the perception of other minds?

Duoyi Fei

China University of Political Science and Law, China

Abstract:

By means of spontaneous and unconscious imitation, an observer may be able to directly experience the inner states of another person because the observer and the observed share similar neural pathways. This discovery of a common neural basis reveals the correlative mechanisms through which the intentions of others are perceived. While analysing the implications of this discovery, this paper notes that the correlation does not provide a complete explanation of our understanding of other minds. Instead, the correlation comes into play only to a minimal degree. The paper also explores the epistemological characteristic of the knowledge harboured by other minds. That is, as a kind of private knowledge, the experience of other minds can help us arrive at a relatively consistent understanding of others and engage in communication with them through public expression and the description of mental states. However, it is impossible to truly understand other minds because conscious experience in the strict sense resides only in its owner, and the unique qualia emerging inside the subject cannot be directly observed from a third-person perspective. In this sense, the so-called perception of other minds is not suited to seeking a causal explanation of the ways in which others act, but for reading the meanings expressed by them in a given situation.

The perception of other minds has helped form the foundation of social behaviour. But how do we understand another person’s thoughts? Can we perceive other people’s mental states? If so, what is the basis for and approach to that effort, and what is the nature of such knowledge?

In past centuries, philosophers proposed various solutions based on introspection, which was used to discover evidence and refute conjectures in their examinations of the mind. Today, with significant advances in knowledge and the tools of investigation at our disposal, progress in neuroscience has revealed certain brain processes underlying human thoughts and emotions, which can provide useful insights for us to examine issues pertaining to the mind.

Various solutions to the problem have emerged. In recent research, Sollberger (2017) revisited the received dogma, which holds that people cannot know someone else’s mental life in the same way that they know their own minds, and reinforced the dialectical position of inferentialists who believe that we have knowledge of someone else’s mind by virtue of analogical inference. Gangopadhyay and Pichler (2017) addressed the epistemological debate between emerging perceptual accounts of knowing other minds and traditional approaches based on the theory of mind. Roelofs (2018) argued that our knowledge of other minds involves both perception and inference through ‘perceptual co-presentation’, which yields knowledge that is simultaneously perceptual and inferential. These studies have involved a discussion of the epistemological issues of the perception of other minds. Despite progress in this area, consensus on the issue remains elusive. In my opinion, the key to the problem is to determine precisely what we mean when we talk about knowledge of other minds.

I would like to point out that the view under discussion here differs from the conventional view, which is concerned more about whether the knowledge of other minds is epistemically direct in the sense of being inferential and observational, and this paper does not directly argue for the superiority of my view but instead offers a philosophical alternative.

The remainder of this paper proceeds as follows. I first examine the discovery of the neural basis as well as the correlative mechanisms through which the intentions of the actions of others are perceived, explain the advantages of direct projection theory over analogical theory, and then go on to analyse the predicaments and challenges facing neurological explanations. After a critical analysis of mental causation, I further explore the epistemological characteristic of the knowledge of other minds. That is, as a kind of private knowledge, the so-called perception of other minds is not suited to seeking a causal explanation of the ways in which others act, but for reading the meanings expressed by others in a given situation. I close with a brief summary of the primary results and suggestions for future research in the area.

We can conclude that our perceived ability to compare our own activities with what we have observed in other people’s physical activities is a type of understanding at the primary level. The method of analogy adopted is not intended to trace back to the cause from the result, nor to inquire about a causal explanation for the behaviours of others, but to read other people’s expressions (movements, postures and facial expressions). That reading process has to be placed in a certain situation that forms the background and context of understanding others. For that reason, the neural correlation of understanding the minds of others cannot provide sufficient and necessary explanations for the cognition of their minds. The examined targets of studies that have clearly established such associations have mostly been adapted during the history of human evolution, and are more related to reflective behaviour, but rational processes in real life are much more complicated, involving the capabilities to process abstract symbols and conceptual skills. Neurological interpretations apply to the simplest behaviours of humans and other animals, such as sensory exchanges, movements, foraging and so on. They are only useful at a minimal level compared with thinking about abstract and complex decisions and choices with far-reaching influences, which falls precisely in the realm of traditional cognitive theory and requires a shift in the focus of neuroscience research from basic cognitive processes to the so-called advanced functions (such as reasoning, social judgement and decision-making). In the case of self-knowledge, we focus on the so-called ‘transparency method’ and the extent to which its use delivers inferential self-knowledge. By contrast, in the case of our knowledge of others’ thoughts, we discuss the role of perception as a source of such knowledge and argue that even so-called ‘perceptual’ knowledge of other minds is inferential.

When discussing the nature of the knowledge of other minds, we should not pursue knowledge that is absolutely unmistakable and universally inevitable, and should not expect the sender and receiver of the information to both have the same realization of the meaning of the given information. That kind of knowledge does not exist. The true meaning of understanding the minds of others is to be able to predict their behaviours; that is, to already ‘know’ what action other people would take, or tend to take, before they do anything. And, in cases where the action has already been completed, we can explain the reasons or motivation for it. If we view the problem of other minds from this perspective, we may still be able to obtain some knowledge with a certain degree of certainty and reliability. The point is to make clear the following three questions:

  • In what circumstances do the two mechanisms of direct perception and mental speculation apply to daily life?

  • Are they in a competitive or a cooperative relationship?

  • If the latter, in what circumstances is cooperation between them possible?


References

1. Borg E (2017) Mirroring, mind-reading and smart behaviour-reading. Journal of Consciousness Studies 24(5–6): 24–49.

2. Brincker M (2015) Beyond sensorimotor segregation: On mirror neurons and social affordance space tracking. Cognitive Systems Research 34–35: 18–34.

3. Burton RA (2014) A skeptic’s guide to the mind: What neuroscience can and cannot tell us about ourselves. Choice: Current Reviews for Academic Libraries 51(5): 861.

4. Davidson D (1993) Truth, Meaning, Actions and Events. Translated by Mou B. Beijing: The Commercial Press (in Chinese).

5. Fenici M (2015) Social cognitive abilities in infancy: Is mindreading the best explanation? Philosophical Psychology 28(3): 387–411.

6. Ferrari PF and Rizzolatti G (2014) Mirror neuron research: The past and the future. Philosophical Transactions: Biological Sciences 369(1644): 1–4.

7. Gallagher S and Varga S (2014) Social constraints on the direct perception of emotions and intentions. Topoi: An International Review of Philosophy 33(1): 185–199.

8. Gallese V, Keysers C, Rizzolatti G et al. (2004) A unifying view of the basis of social cognition. Trends in Cognitive Sciences 8(9): 396–403.

9. Gangopadhyay N and Pichler A (2017) Understanding the immediacy of other minds. European Journal of Philosophy 25(4): 1305–1326.

10. Goldman AI (2012) Theory of mind. In: Margolis E, Samuels R and Stich S (eds) Oxford Handbook of Philosophy and Cognitive Science. New York: Oxford University Press, pp.410–412.

11. Herschbach M (2012) Mirroring versus simulation: On the representational function of simulation. Synthese 189(3): 483–513.

12. Herschbach M (2015) Direct social perception and dual process theories of mindreading. Consciousness and Cognition 36: 483–497.

13. Jiang Q, Wang Q, Li P, Li H et al. (2016) The neural correlates underlying belief reasoning for self and for others: Evidence from ERPs. Frontiers in Psychology 7: 1–7.

14. Kiverstein J (2015) Empathy and the responsiveness to social affordances. Consciousness and Cognition 36: 532–542.

15. Leibniz GW (1982) New Essays on Human Understanding. Translated by Chen XZ. Beijing: The Commercial Press (in Chinese).

16. Lurz R, Krachunb C, Mahovetz L et al. (2018) Chimpanzees gesture to humans in mirrors: Using reflection to dissociate seeing from line of gaze. Animal Behaviour 135: 239–249.

17. McCarthy G, Viola RJ and Pelphrey KA (2004) When strangers pass: Processing of mutual and averted social gaze in the superior temporal sulcus. Psychological Science 15(9): 598–603.

18. Mitchell JP, Banaji MR and Macrae CN (2005) General and specific contributions of the medial prefrontal cortex to knowledge about mental states. Neuroimage 28(4): 757–762.

19. Nagel T (2000) What is it like to be a bat? In: Gao XM and Chu ZH (eds) The Philosophy of Mind. Beijing: The Commercial Press, p.109 (in Chinese).

20. Rizzolatti G and Craighero L (2004) The mirror-neuron system. Annual Review of Neuroscience 27(1): 169–192.

21. Rodríguez ÁG (2018) Direct perceptual access to other minds. International Journal of Philosophical Studies 26(1): 24–39.

22. Roelofs L (2018) Seeing the invisible: How to perceive, imagine, and infer the minds of others. Erkenntnis 83(2): 205–229.

23. Sameen N, Thompson J, Carpendale JIM et al. (2013) Toward a second-person neuroscience. Behavioral and Brain Sciences 36(4): 393–414.

24. Smortchkova J (2017) Seeing emotions without mindreading them. Phenomenology and the Cognitive Sciences 16(3): 525–543.

25. Sollberger M (2017) The epistemological problem of other minds and the knowledge asymmetry. European Journal of Philosophy 25(4): 1476–1495.

26. Tye M (2017) Philosophical problems of consciousness. In: Schneider S and Velmans M (eds) The Blackwell Companion to Consciousness. West Sussex: John Wiley and Sons Ltd, pp.17–29.

27. Vaughn DA, Savjani RS, Cohen MS, Eagleman DM et al. (2018) Empathic neural responses predict group allegiance. Frontiers in Human Neuroscience 12: 1–8.

28. Vrij A, Edward K, Roberts KP et al. (2000) Detecting deceit via analysis of verbal and nonverbal behavior. Journal of Nonverbal Behavior 24(4): 239–263.

29. Wittgenstein L (2000) Philosophical Investigations. Translated by Li BL. Beijing: The Commercial Press (in Chinese).

13:15-13:30 UTC

Sat 18th Sep

56. An a Priori Theory of Meaning

Marcus Abundis

Bön Informatics, Kirchweg 7, 5015 Erlinsbach, Switzerland

Abstract:

This paper covers a key issue in information theory noted by Claude Shannon and Warren Weaver as a missing ‘theory of meaning’. It names structural fundaments to address the topic. The paper first examines varied informatic roles, noting likely elements for a general theory of meaning. It next deconstructs Shannon Signal Entropy in a priori terms to mark the signal literacy (logarithmic subject-object sign primitives) central to ‘classic’ views of information. A dualist-triune (2-3) role therein shows a core pattern. Next, two further Nature based 2-3 roles are shown vis-à-vis Signal Entropy to illustrate domain-neutral Subject-Object modeling. Lastly, the three examples frame a general theory of meaning – as an Entropic continuum of serially varied informatic traits – with supporting definitions given.

Keywords: information theory; philosophy of information; entropy; structural fundaments; subject-object; dualism; triads; naturalism

This paper/talk covers a key issue in information theory noted by Claude Shannon, Warren Weaver, and others as a missing ‘theory of meaning’. It posits structural fundaments to address the issue. The paper begins by exploring varied informatic roles, noting likely elements for a general theory of meaning. The resulting ‘informatic types’ (re Type Theory) are:

  • (S)ubject and (O)bject are a foremost core type — a dual-material aspect — where Shannon’s Signal Entropy invokes a dualist S-O split, with ‘semantic aspects . . . irrelevant to the engineering problem.’ A firm grasp of this dualist view is key to framing a general theory of meaning, and in positing the following informatic types where, next

  • Metadata is a first meaningful type, with sub-roles of domains and material/symbolic primitive content. Metadata is a well-known scientific view (e.g., the Standard Model of particle physics, the periodic table, chemistry, and more); as such, a general theory of meaning only requires that one map the above S-O roles to already-set scientific models.

  • Meta-meta is a second meaningful type, with domain-neutral logical primitive content. Key meta-meta examples are noted in the paper/talk. Vitally, Signal Entropy is a scientific (meta) and domain-neutral (meta-meta) view, at times called ‘the mother of all models’. This ‘universal vista’ helps to frame other meta-meta views, for a general theory of meaning. Next, it also follows that,

  • Raw Data (qualia) are a meaning-less type, known, but without ‘meaning’. To create meaning, agents must ‘interpret’ Raw Data in functional roles (S-O empiricism ⇒ Metadata).

  • Voids are a second meaning-less type as ‘missing thing(s)’: that which we know about but do not fully grasp (e.g., dark matter, dark energy, quantum mechanics, origins of Life, etc.), and things we are wholly blind to, failing to sense or imagine them in any useful way (S-deficient agency).

  • Levels: All above roles employ meta-logic, in differing hierarchical roles, as Raw Data, material/symbolic primitives (meta), logical primitives (meta-meta), and more. That hierarchy means that ‘informatic levels’ exist alongside the aforementioned informatic types.

  • Entropy: this oft-used informatic notion invokes ‘two entropies’ that are un-reconciled (Shannon and Boltzmann). Reconciliation is needed for a general theory of meaning, which comes by seeing both parts in a more-generic role as a general tendency toward material and logical dispersion, or degrees of freedom, seen in many roles (S-O dispersion); a brief formal reminder of the two entropies follows this list.
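
For readers who want the two un-reconciled entropies side by side, here is a minimal formal reminder in their standard textbook forms (added context, not the author's own notation). Shannon's signal entropy for a source emitting symbols x_i with probabilities p_i is

    H(X) = -\sum_i p_i \log_2 p_i

(measured in bits for base-2 logarithms), while Boltzmann's thermodynamic entropy is

    S = k_B \ln W

where k_B is Boltzmann's constant and W is the number of microstates compatible with a given macrostate. Both are logarithmic measures of dispersion over possible states; that formal parallel is what makes a generic reading of the two as ‘material and logical dispersion’ plausible.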


This naming of ‘informatic types’ establishes Entropic S-O Signs as a main organizing principle for a general theory of meaning; an approach herein labeled S-O modeling.


To further illustrate S-O modeling, three practical examples are next given:

  1. Signal Entropy, as a ‘classic’ view of information, is reduced to primitive S-O Signs and three Entropic levels. This reductive analysis shows a dualist-triune (2-3) pattern evident throughout the study, but noted here as discrete ‘noise-free’ O-S-O functional steps.

  2. Signal Entropy is next shown in a ‘noisy’ (v/V)ariable adaptive role (O-S-v), beside broader adaptive (S-O-V) events. This adaptive logic adds v/V-detail, augmenting discrete O-S-O steps (#1 above) with v/V-functional adjacent possibilities. Augmented S-O-V modeling now deposes S-O modeling, in showing a conceptually-broader Entropic adaptive informatic vista (beyond ‘noise-free’).

  3. The third example holds all of energy-matter across all space-time as a full non-adaptive cosmogony, for a context against which adaption must prove itself. This cosmogony shows mostly as turbid Prime O-S-O (Cosmic) functions. Adaption is thus Secondary, being affirmed/negated/afforded by Prime Cosmic functioning. The resulting O-S-v, S-O-V, O-S-O, and (turbid) S-O V roles have discrete/regular O-S-O functions as set points along an unfolding adaptive informatic continuum. That continuum shows an Entropically uniform view, of diverse discrete informatic roles, as adjacent logical steps, along with functionally adjacent ‘branching events’.


The above three vistas jointly frame a general ‘theory of meaning’ continuum. This study includes a formal definition of ‘information’ (and more), in logically primitive terms, as:

PRINCIPLE OF INFORMATIC (S-O) DUALISM — relational logic, meaning is central.

  • In a dynamic simple-to-complex (contiguous) cosmos, all informatic cases reflect,

    1. (O)bjects: fermions, matter, genes, agents, bits, memes, crude ideas, etc. as ‘nouns’,

    2. Inter-acting with other Os, via bosons, force, binding, interpretation, etc. as ‘verbs’,

    3. For (S)ubject-topic meaning of ‘how O interacts (S) with O’ or an O-S-O function.

  • Regardless of how common, unique, vague, fleeting, or fixed any O-S-O event may seem.


A current DRAFT of the full paper (8 pages, 4,700 words) is found at:

https://drive.google.com/file/d/1r1eGAe_1lkXPGbS2cVoja4n6CyGtzVlW/view?usp=sharing

13:30-13:45 UTC

Sat 18th Sep

57. Some Problems of Quantum Hermeneutics

Guolin Wu

Institute for Advanced Study in Science, Technology and Philosophy, South China University of Science and Engineering, Guangzhou, P. R. China

Abstract:

Contemporary quantum science, represented by quantum gravity theory (including superstring theory, loop quantum gravity theory, etc.), and contemporary quantum technology (including quantum information theory) have jointly brought about the second quantum revolution. However, there is a big debate about how to understand contemporary quantum science and technology. To understand and interpret them, it is imperative to form quantum hermeneutics, which has become an academic hotspot. Contemporary hermeneutics cannot understand the quantum world, contemporary quantum theory, or contemporary quantum technology; for this reason, hermeneutics itself needs to be creatively developed into quantum hermeneutics.

Research on quantum theory forms quantum text, and hermeneutics becomes quantum hermeneutics when used to explore quantum text. Taking quantum theory text and quantum experience text as text, the quantum world and the quantum technology world are two different worlds. The advantage of this classification is that it divides things according to their original state. Quantum text is text of both classical language and mathematical language that reflects the characteristics of quantum theory; its meaning and references inherently reveal the quantum world.

Quantum text has the following characteristics: uncertainty, certainty, autonomy, objectivity, and trans-empiricality. ‘Trans-empirical’ means beyond experience, without leaving experience, while also making experience possible.

Different from the interpretation of classical science and the interpretation of humanistic texts, quantum interpretation shows the following aspects: (1) Quantum interpretation is the unity of certainty and uncertainty. (2) The circle of understanding: the understanding of quantum text exists within the hermeneutic circle. (3) The truth of quantum interpretation: the truth of hermeneutics = the truth of practical ontology + the truth of practical epistemology = the truth of practice. Truths of practical ontology + different descriptions of quantum mechanics (different truths of practical epistemology) → different interpretations of quantum mechanics.

Keywords: Quantum Hermeneutics; Quantum Information Theory; Quantum Text


13:45-14:00 UTC

Sat 18th Sep

58. The fast-changing paradigm of war calls for great wisdom of peace

Lanbo Kang

Department of Political Science, Engineering University, Xi’an 710077, China

Abstract:

The in-depth development of information and intelligent technology has opened up for mankind a complex world of interaction between ‘material and information’. The paradigm of war has correspondingly changed dramatically in such a complex world. With the use of various information and intelligent weapons and equipment and the change of combat styles, war will leap out of its inherent boundaries in all respects and bring great disaster to mankind. Whether human beings end up harming themselves or actively explore new ways and paths to eliminate conflicts and disputes, the situation calls for human wisdom. Therefore, under the guidance of the concept of a community of shared future for mankind, we Chinese are actively exploring such great wisdom in the pursuit of world peace.

Keywords: information, intelligence, fast-changing paradigm of war, peace

Thomas Kuhn, an American historian of science, held that a scientific revolution is the transformation of scientific research from one paradigm to another, and that such a transformation is often a change of world outlook. The so-called ‘paradigm’ is the model or pattern, example, convention, and so on, recognized by the community of scientists [1] (p. 21), and it is related to the common beliefs, traditions, rationality, and methods of that community [2] (p. 176).

Using Kuhn's theory and method to think about war in the information age, it is not hard to find that human beings today are entering an era in which the paradigm of war is changing rapidly.

The development of information science and technology has reached the higher stage of big data and intelligent technology, and more new technologies and new stages will emerge in the future. A wider, more sophisticated, and fast-changing information world has been opened up and constructed by human beings using information science and technology. With the advent of the information world, the real world of people's lives has evolved into a world of complex interactions between “material and information”, and it is thus full of incalculable complexity. In such a real world of complex “material and information” interaction, human war is carried out comprehensively and deeply, with ever more inestimable complexity, and this drives the fast change of the whole paradigm of war.

Military scientists have summarized the characteristics of war in the information age in many ways; the move from information technology to intelligent technology can be said to be the initial manifestation of the super-complexity of war in the information age.

Earlier information warfare worked mainly through the expansion of the information capabilities of combatants, especially by giving weapons and ammunition certain capabilities for information collection, storage, processing, control, and transmission in order to enhance combat effectiveness. The network supports the interconnection of battlefield elements and the emergence of new combat effectiveness; the focus of power control shifts increasingly to the right to information and to control of sky and space; precision-guided weapons become the main weapons; the main body of combat gradually consists of small, highly dispersed or networked forces or modules under flat command; and combat takes mainly non-linear, non-symmetrical, and non-contact forms against the enemy, so as to break through the key links and key targets of the enemy's combat system and paralyze its function. The basic way to seize the advantage in war is to optimize the command and decision-making process through information and networks, shortening the “discovery-attack” cycle and releasing energy accurately in real time [3] (pp. 27-28).

Intelligent war is a “qualitative change and leap” [3] (p. 28) based on information war, an inevitable product of the informatization of war reaching a certain stage. The intellectualization of war means mainly integrated war in the domains of land, sea, air, space, networked electromagnetic space, and cognition, based on Internet-of-Things information systems and using intelligent weapons and corresponding combat methods [3] (p. 84).

At present, the trend toward intelligent war is reflected mainly in the continuous growth, and the initial scale, of intelligent unmanned combat systems and intelligent weapons and equipment, with continuous improvement in speed, stealth, autonomy, and other characteristics, as autonomous, intelligent weapon systems such as unmanned aerial vehicles and ground robots are developed and used. A new mode of generating combat power is emerging that reflects the characteristics of war in the information age: from the use and release of mechanical energy and information energy under the guidance of information, development deepens further toward the effective utilization and release of Artificial Intelligence under intelligent information.

In the complex relationship of “man-machine” interaction, the change in the mode of generating combat effectiveness was initially driven by the large-scale development and use of intelligent weapons and equipment; but to make such a new weapon and equipment system deliver its best combat effectiveness, and to make this mode of generating combat effectiveness operate efficiently and optimally, it is imperative to change combat styles, the organization of troops, the theory of military war, and even the whole form of war and its supporting military philosophy.

In fact, war has been upgraded from informatization to intelligence. In the fleeting thirty years since the Gulf War, people's understanding of war has changed completely. There may be many secrets, but it is obvious that human beings have mastered information science and technology and information-creation technology. Artificial intelligence technology, in particular, can develop rapidly only with the support of information science and technology. It not only opens the door of the information world for human beings, but also opens methods and approaches for giving machines, living organisms, and material bodies the capacities for information collection, storage, transmission, and creation that used to belong to living creatures, and especially to human beings, thereby reconstructing the whole material world into an intelligent world. That is to say, in the doubly complex world of “material and information”, human practice constantly explores and constructs a dynamic real world that belongs to human beings, and thus displays a dynamic picture of the real intelligent world. As an extremely important part of human practical activity, war will occur, mutate, and be stimulated in this real intelligent world in which the dual existence of “material and information” interacts complexly.

Actually, war is a life-and-death competition of human practical power: wherever human practical power is developed and projected, battle appears; whichever aspects or links reveal human practical power most significantly, in those aspects or links battle occurs. Today, the human essential force has developed to the point of being able to exploit and create all kinds of information, and to confer information and the ability to create information on all things, so that all things acquire general human intelligence, thereby constructing a more complex information world and intelligent world. In this situation, different forms of fierce competition arise around dominance in every field of information exploration, utilization, and creation. This even includes the brain war and heart war described by General Dai Xu, or the ideological war that has attracted attention.

Wars in the past were mainly carried out in a single physical dimension of the world, as the direct interaction between people and people, people and things, and things and things. In such wars, although information played a role in news and intelligence, its leading role was not consciously realized and utilized until the Gulf War, for the information world had not yet been fully and consciously constructed and excavated. Due to the restriction of material reality on war and its related fields, the complexity of war remained relatively limited in time, space, expression, and combat methods.

However, driven by information and intelligent science and technology, people's practical activities can easily realize the transformation from material to information, and then from information back to material, and thus excavate and construct the information world or the intelligent world. The real world of human life changes greatly, truly becoming a dynamically evolving world in which the dual existence of “material and information” interacts complexly. With such changes in the world picture, human war expands correspondingly and fully, and the form of war has completely changed. The unreality of information has overcome the various limitations of material reality to a certain extent: it not only expands the fields in which war breaks out, but also endows war with new weapons and equipment, new tactics, new modes of generating combat effectiveness, and even new forms of war.

At the same time, the complexity of war emerges not only in the single material world, but also in the broader real world of complex “material and information” interaction, including today's intelligent world.

In line with the creation, exploitation, and utilization of new information, and with the ability to confer these information capabilities on all things, war in the information age has risen from informatization to intellectualization, and in the near future new modes of war, including bio-intelligent ones, may appear. Thus, the direct interaction of people with people, people with things, and things with things in war has also risen to the comprehensive mediation and penetration of information into all of these relationships. Controlling or creating different information, and “externalizing” the information capabilities that human beings possess, completely changes the relationships between people and people, people and things, and things and things; the victory and defeat of war likewise lie in these relationships.

War is the most brutal contest of human essential strength, and this contest largely consists in innovating the theory of war, broadening the field of war, upgrading weapons and equipment, shaping the military system, planning the form of war, designing combat methods, expanding the results of war, and breaking through the boundaries of war, through all aspects and links of mining, controlling, creating, transmitting, and using information, and of “externalizing” the information capabilities that human beings possess.

At the same time, the highest aim pursued in war has always been the greatest victory at the smallest price. Eliminating the enemy while preserving and developing ourselves has always been the common value pursued by the technology and art of war. However, with the widespread use of information technology, intelligent technology, and other science and technology in war, the art of war has been increasingly coerced and suppressed by war technology. Using science and technology in war, and converting them into war technology, seems to be a central choice in the new arms race. However, this kind of choice will bring enormous disaster to human beings.

Firstly, in order to obtain the greatest victory at the minimum cost in the shortest time, weapons and technologies explicitly prohibited by the international community will inevitably be used by belligerent parties that have lost their reason, eventually bringing serious consequences for all people. Such consequences have never really been eliminated since depleted uranium bombs were used in the 1991 Gulf War. After more than thirty years of informatization and intelligence upgrading and the transformation of the needs of war, anti-human munitions such as depleted uranium munitions are likely to expand their harmful consequences and to reveal, comprehensively and profoundly, the anti-human character of war and its dual effect of harming others and harming oneself in the most extreme ways, for example through unmanned delivery, swarm attacks, and so on.

Secondly, with the disorderly development and application of unmanned autonomous weapons, such weapons could inevitably attack human beings, including their own developers and controllers, when they “escape” the commands of combatants and act independently, or when they are used by others. In this respect, some reports circulated online in early June 2021; whether or not they are true, the general trend of development makes the development and use of unmanned autonomous weapons a very serious matter.

Thirdly, pouring vast manpower and material and financial resources into such an arms race would be dangerous and unsustainable for the Earth's entire ecological environment and the effective use of its resources.

In short, the evolutionary trend of war calls for great human wisdom: will human beings end up harming themselves, or will they explore new ways and paths to eliminate conflicts and disputes based on the idea of a community of shared future for mankind? Under the guidance of this concept, we Chinese are actively exploring and building such great wisdom in the pursuit of world peace.

References

1. Thomas Kuhn. The Structure of Scientific Revolutions. Peking University Press, Beijing, 2003.

2. Huang Shunji. Introduction to Dialectics of Nature. Higher Education Press, Beijing, 2004.

3. Pang Hongliang. Evolution and Conception of War in the 21st Century. Intelligent War. Shanghai Academy of Social Sciences Press, Shanghai, 2018.

14:00-14:15 UTC

Sat 18th Sep

59. Technologies, ICTs and Ambiguity

Tomáš Sigmund

Prague University of Economics and Business, Czech Republic

Abstract:

The expansion of computers draws our attention to the quantitative aspects of information at the expense of its qualitative aspects. We thus lose, lack a sense for, and fail to develop the specifically human capacities of intuition, creativity, and situational involvement. Every man is a genius in his individuality and deserves respect that cannot be expressed in quantitative terms.

Keywords: quantitative information; qualitative information; pharmakon; genius

1. Introduction

Information technologies give the impression that information is univocal and unambiguous. Because they work only with clearly defined information and process it with algorithmic procedures, they make us think all information is clear-cut.

Technological developments in general help with routine and monotonous tasks. However, especially when dealing with qualitative data, the use of technologies changes the data's nature.

2. Quantitative and qualitative information

ICTs are based on positivistic science. Anything qualitative uncovers meanings that can be opened only by interpretation, which consists in the interaction between the interpreter and the object of interpretation. Quantitative analysis, on the other hand, consists in various types of quantitative aggregation, comparison, or categorisation.

Qualitative data contain drives, emotions, subjective feelings, and understandings, and are related to the place and time at which these vague aspects were perceived. They are indexical (their reference can shift from context to context), they are fuzzy, and the boundaries between them are vague, which is why they are unsuitable for classification by digitally working software. If we approach them quantitatively, we destroy, or at least harm, their meaning.

Quantitative analysis, and technologies based on this perspective, assume that the world is composed of objects that can be numbered, counted, measured, and then processed with mathematical methods to achieve true understanding. Their paradigm is the natural sciences. [1]

The qualitative point of view, or qualitative approach, sees the social world as a continuous interaction between the world and its interpreter. The external world is seen from many perspectives, not just that of the natural sciences. The objective classification and quantitative analysis of observed entities is not the goal of the endeavour; instead, meaning is the guiding principle. The qualitative approach allows greater sensitivity to ambiguities and subtle shades of the interpretative meaning of reality. It recognizes that the world is rich and complex. Theory is produced rather than tested.

An example is language, which allows for the description and representation of various social situations and gives man the experience of being-in-the-world [2]. Language is complex and ambiguous. In the quantitative approach, language is used uncritically, without investigating its constitution, operation, influence on thinking, context, intention, and so on. Language is not questioned or considered a problem; it is treated as a tool, similar to a computer program, that can predictably and reliably do its job.

The heterogeneity of qualitative data is a challenge for qualitative analysis, which does not proceed linearly but moves back and forth in search of the best approximation to, and concord with, the world. [3]

3. Relationship between quantitative and qualitative information

The basic and fundamental difference between the qualitative and quantitative approaches consists in the assumptions made. Quantitative analysis presupposes a univocal world with clear meanings. Paradoxically, the more we use technologies based on quantitative analysis and its assumptions, the more we succumb to this empirical understanding of the world. The subtle understanding of ambiguity and equivocity is lost. People demand precise, specific categories; their truth becomes the only possible one; and so on. The world instrumentalized by ICTs loses its richness and secrecy. Paradoxically, the secrecy returns, because people no longer understand the world yet insist on its univocality. This strategy fails without explanation, and people become lost and confused. That increases the pressure on ICTs to unify and conjure away the secrets, with the result of still greater confusion about the world.

This whole process is similar to what we know from psychoanalysis, where suppressed content manifests itself from the unconscious in an uncontrollable way and provokes consciousness to fight against it.

On the philosophical level, we may point to J. Derrida and his concept of differance [4], which shows the principal impossibility of unambiguity without its combination with ambiguity. Even man's identity is pervaded by its counterpart, the other. Everything that man perceives is stored as a trace in his mind and cannot be integrated or made into the self. On the other hand, it is not completely different, as it is a part, even a constituent, of the self. Signals, signs, and symbols, including words, never fully articulate what they mean; we must move to other symbols and symptoms to explicate their meaning, and this process is infinite. It is similar to Peirce's infinite semiosis, where the explanation of every sign must itself be explained and so becomes another sign which must be further explained; the process cannot halt. The second aspect of the process consists in the never-ending differing of concepts, which prevents any clear relationship between them. The signs are different, but also similar (different from their difference), and no clear relation between them exists. That implies that no stable classification or categorisation is possible. Our psyche is always in flux and no fixed identity exists. Even language requires its counterpart, silence, to function properly. Silence is thus a pharmakon of language in all its three meanings of remedy, poison, and scapegoat: it sacrifices itself to help its opposite.


4. Human qualitative characteristics

Unequivocal treatment of information by ICTs harms three human qualities: creativity, intuition, and involvement. The more people use ICTs with their quantitative approach, the less they are forced to come to terms with unclear or ambiguous situations, the less they are involved in them, the more they rely on rational ways of dealing with the world devoid of intuition, and the less they use creativity to solve problems.

All three endangered features are typical of geniuses. A genius is independent and original, and arrives at and understands completely new concepts. He is also exemplary and serves as an example to others. He does not imitate and is free of every constraint. He deals only with the object of his interest. Genius represents something ambiguous, ungraspable, equivocal. He does not respect the existing order but creates a new one. He imitates, but so originally that the imitation is cancelled. He is full of aporias, similar to Derrida's differance or pharmakon, able to save the world.

ICTs and their operation remove the qualities of geniuses from the world. ICTs cannot imitate them and provide them no space in the world. A rule-governed world afraid of every ambiguity will be surprised to see a genius who can deal with ambiguity and even produce more of it. The more the world is governed by algorithms, the more it will be surprised by a genius who does not work according to them, and the more he will be able to cure people of their effort to destroy ambiguity, so that they themselves become geniuses. The unity of opposites will be saved.

References

1. Coombes, Hilary. Research using IT. Basingstoke: Palgrave. 2001

2. Gadamer, Hans-Georg. Philosophical Hermeneutics. California: University of California Press, 1976

3. Swingewood, Alan. A Short History of Sociological Thought (2nd edition). Basingstoke: Macmillan. 1991

4. Derrida, Jacques. Différance. Translated by Alan Bass. In Margins of Philosophy, Chicago: University of Chicago Press, 1982, pp. 3-27.


14:15-14:30 UTC

Sat 18th Sep

60. The Data Turn of Scientific Cognition and the Research Program of Philosophy of Data

Xinrong Huang

Research Center of Management Philosophy, Jiangxi University of Finance and Economics Nanchang, China

Abstract:

We entered the age of big data characterized by data revolution at the beginning of the 21st Century. In this new age, data has become an economic resource, and an important component of scientific cognition as well. Compared with natural language and logic language, data is a scientific language with higher accuracy and convenience. With the advent of the age of big data, ways of scientific cognition are bound to be transformed from the language turn of the 20th Century to the data turn of the 21st Century. And the possible conditions for this data turn have been prepared in terms of scientific premise, technical conditions, social background, and philosophical basis. At present, the ways of scientific cognition have begun to change from language to data, from logic to algorithm, from analysis to synthesis, from proof to discovery, and from causality to relevance. The language turn in the 20th Century will be replaced by the data turn in the 21st Century, and efforts shall be made to construct the philosophy of data, in which data is recognized as the object, algorithm as the tool, and synthesis as the method, and a wide variety of issues such as data and the world, data and language, data and algorithm, data and knowledge, data and truth, and data and ethics will be explored comprehensively.

Keywords: Scientific Cognition; Data Turn; Philosophy of Data

1 The Necessity of Data Turn

The literal meaning of "turn" is that something that originally had a fixed direction of development changes direction for some reason and develops along another path. In recent decades, talk of "turns" has been popular in philosophical circles, so much so that some say that "the turn of philosophy is one of the hot topics in the current philosophy circle" (Huang, S. 2004).

At the beginning of the 21st century, the revolution in data technology was in full swing, and data language became the new language of the new age. It has therefore become an inevitable trend for philosophical research in the 21st century to turn toward a new philosophy of data, bringing about a new movement: the data turn. From the perspective of philosophy itself, analytic philosophy, the symbol of the 20th century, has been slowly declining and is unable to lead the direction of philosophy in the 21st century. From the perspective of the development of science and technology, the revolution of big data and artificial intelligence is pushing 21st-century philosophy to carry out a data-centered turn.

2 The Possibility of Data Turn

2.1 The Scientific Basis of Data Turn

Since the epistemological turn of modern philosophy, epistemology has been the focus of philosophical research. Both the modern linguistic turn and the present data turn are the continuation, deepening and development of modern epistemology. Scientific cognition is closely related to scientific development, and the data turn of scientific cognition must rest on a solid foundation of scientific development. The development of digital logic, discrete mathematics, computer science, information science, computing science and data science has laid such a foundation for the data turn of philosophy.

2.2 Technical Conditions for Data Turn

Technically speaking, many new technologies of the 21st century promote the digitalization of the world, including digital electronic technology, computer technology, network technology, cloud computing, data mining, artificial intelligence, etc. Digital electronic technology is the most basic technology for realizing the digitalization of the world. Before digital electronic technology, data acquisition, storage, processing and transmission could only be realized by analog technology, which led to serious data distortion and made automation difficult to achieve.

2.3 The Social Background of Data Turn

With the advent of the age of big data and artificial intelligence, all kinds of intelligent sensing devices are gradually increasing, and all things are gradually being converted into data. The rise of 5G networks accelerates the formation of the Internet of Things, so all things are gradually digitized, gradually forming a data world that reflects the physical world. Our society has gradually become a data society. The digital society and digital existence provide a real space for data cognition, data life and data experience, so that people can really feel the arrival of the data age. The formation of the data world and the coming of the data society provide a solid social foundation for the data turn of scientific cognition.

2.4 The Philosophical Basis of Data Turn

With the advent of the new generation of the information technology revolution and the age of big data, data has become the most valuable resource of our age. As a scientific representation of information, data has become more and more influential. As a result, people pay more and more attention to data and regard it as a new object of philosophical research. Pythagoras' idea that "all things are numbers" has been gestating and developing for more than 2000 years, and has finally found its echo in the 21st century. In recent years, Bogen, J. & Woodward, J.F. (1988), Stanton, B.B. & Bunker, D. (2009), Bogen, J. (2011), Woodward, J.F. (2011), David Brooks (2013), Bruno Tebor (2017), Liu Hong (2012), Ye Shuai (2015), Furner, J. (2017) and Huang, X. (2001) have put forward the concept of philosophy of data, which has made the most direct contribution to the construction of the philosophy of data and the data turn.

3 Where Does the Data Turn Lead

3.1 Cognitive Object: From Language to Data

With the development of data and perception technologies, it is more accurate to describe the world with data than with language; that is, data and the world can achieve a one-to-one mapping. Data language is more accurate than natural language and more convenient for calculation and modeling than logical language. With the advent of the age of big data, the digitalization of all things enables everything in the world to be mapped into data, and the material or spiritual world can be transformed into a data world. Through cognition of the data world, we can recognize the material or spiritual world more accurately, conveniently, intelligently and automatically. Scientific cognition in the 21st century is therefore about to move from language analysis to data analysis and from philosophy of language to philosophy of data: in its objects, a transformation from language to data.


3.2 Cognitive Tools: From Logic to Algorithm

Since the Renaissance, scientists have carried out scientific exploration by means of data collection, data modeling, data calculation and data verification. Data is the most important tool for depicting the phenomena and essence of the world; that is to say, data is a more suitable scientific language than natural language or logical language. With the advent of the age of big data, everything can be mapped into data. Big data uses bits to represent things and their characteristics, transforms everything into data, and forms a data world outside the natural world and the language world. The data world cannot be analyzed with formal logic or mathematical logic; we can only use calculation and algorithms to find the rules contained in the data. Therefore, from the perspective of cognitive tools, the logical analysis of the language turn will give way to the data algorithms and calculation of the data turn, that is, from logical analysis to algorithmic calculation.

3.3 Cognitive Approach: From Analysis to Synthesis

After entering the 21st century, big data technology has fragmented everything into data encoded by 0 and 1; that is to say, big data has turned the world into data fragments. To find rules and discover knowledge in these data fragments, we have to reintegrate and reconstruct them, realizing the comprehensive integration of data through data cleaning, classification, association, aggregation and other processes. Therefore, in the age of big data, we mainly find correlations between data through comprehensive methods and discover new knowledge through comprehensive integration. Although turning the world into data fragments is a typical process of analysis and decomposition, the comprehensive method will be the main cognitive method after the shift. An important sign of the shift from language to data is that the cognitive method moves mainly from analysis to synthesis.


3.4 Cognitive Goal: From Proof to Discovery

With the advent of the age of big data, we see new hope for scientific discovery. Big data technology finds correlations between data by mining massive data and by means of data cleaning, classification, association and aggregation, so as to discover the hidden regularities of things. Although the correlations between data carry no logical necessity, they can suggest the direction in which to guess the answer and play the role of "helping to find". The data mining of big data discovers the possible rules contained in massive data by means of massive data and algorithmic calculation, and then proves their reliability by means of knowledge proof. Data mining is also known as knowledge discovery because it may discover new knowledge in massive data.


3.5 Cognitive Outcome: From Causality to Relevance

The advent of the big data age has broken with the traditional scientific cognition that takes causality as its ultimate goal. Big data emphasizes experience, phenomena and practice, holding that "Knowing what, not why, is good enough" (Schoenberg, V.M. & Cukier, K. 2013: 52). It is more important to find correlational laws through the correlations presented by the data themselves than causal laws based on hypotheses: "In place of the hypothesis-driven approach, we can use a data-driven one. Our results may be less biased and more accurate, and we will almost certainly get them much faster." (Schoenberg, V.M. & Cukier, K. 2013: 55) Therefore, after the data turn of scientific cognition, our scientific cognition will shift from the pursuit of strict causality between things to the pursuit of correlation between data: "These non-causal analyses will aid our understanding of the world by primarily asking what rather than why." (Schoenberg, V.M. & Cukier, K. 2013: 63) The move from the pursuit of strict causality to the pursuit of correlation between data is an important direction of the data turn in the results of scientific cognition.
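To make the contrast concrete, here is a minimal, hypothetical sketch in Python (the dataset and variable names are invented for illustration and are not taken from the paper) of the data-driven approach described in 3.4 and 3.5: instead of testing a single hypothesized cause, we rank all pairwise correlations and let the data propose candidate relationships.

    import numpy as np
    import pandas as pd

    # Hypothetical dataset: three measured variables, two of which
    # co-vary only through a common driver (temperature).
    rng = np.random.default_rng(0)
    n = 1000
    temperature = rng.normal(20, 5, n)
    df = pd.DataFrame({
        "temperature": temperature,
        "ice_cream_sales": 3.0 * temperature + rng.normal(0, 5, n),
        "shark_sightings": 0.5 * temperature + rng.normal(0, 3, n),
    })

    # Data-driven discovery: ask "what co-varies?" rather than "why?"
    # by ranking every pair of variables by correlation strength.
    corr = df.corr().abs()
    upper = np.triu(np.ones(corr.shape, dtype=bool), k=1)  # each pair once
    pairs = corr.where(upper).stack().sort_values(ascending=False)
    print(pairs)

The strong sales/sightings correlation surfaces without any causal link between the two variables, which illustrates both the "helping to find" role of correlation mining and why the passage above still reserves a role for subsequent proof.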

4 How to Construct Philosophy of Data

4.1 Research Object of Philosophy of Data

Of course, the research object of philosophy of data is data. Philosophy of data takes data as the research object to study a series of problems related to data, such as the nature of data, the relationship between data and the world, the nature of algorithms and so on.

4.2 Two Meanings of Philosophy of Data

The first meaning focuses on the philosophical issues contained in the data. The second meaning focuses on the problem of digitalization and data turn of philosophy. A series of unique ideas and methods in data science and data technology, such as digitization, data mining, data cognition, data modeling, algorithm, calculation and so on, will have a great impact on philosophy, especially cognitive philosophy.

4.3 The Subject Orientation of Philosophy of Data

From the two meanings of philosophy of data, we can see the narrow and broad disciplinary positioning of philosophy of data. The narrow disciplinary orientation regards philosophy of data as a branch or field of the philosophy of science. The generalized philosophy of data is philosophical reflection on the relationship between data and reality: it is concerned not only with the ontological question of what data is, but more importantly with how data relates to the objective world, the reality in which we live.

4.4 The Problem Space of Philosophy of Data

Philosophy of data is the philosophical generalization of, and the highest level of, the data science system viewed from the perspective of philosophy. The philosophy of data is therefore related to data science, data technology and data application. Stanton, B.B. and Bunker, D. (2009) regard philosophy of data as "a multi-disciplinary problem space": philosophy of information, information systems, information science and technology, semiotics, philosophy of science, philosophy of technology, information theory and so on all contribute to the foundation of the philosophy of data from their own unique perspectives.


4.5 Research Framework of Philosophy of Data

Philosophy of data, in short, is a new discipline that takes data as its object, the algorithm as its tool, and synthesis as its method to conduct philosophical research on the data world mapped from the objective world. Analytic philosophy and philosophy of language mainly focus on the problem of scientific proof and care little about the problem of scientific discovery; that is to say, they care only about how to express and deduce, in accordance with logical rules, knowledge that has already been discovered. Philosophy of data pays more attention to scientific discovery, focusing on knowledge discovery.

The main issues in philosophy of data are: data and world, data and structure, data and algorithm, data and model, data and calculation, data and language, data and knowledge, data and truth, data and ethics. If we classify these problems, we can find that they involve a series of problems such as ontology, epistemology, methodology, axiology, ethics and sociology.


14:30-14:45 UTC

Sat 18th Sep

61. Testimony and Social Evidence in the Covid Era

Raffaela Giovagnoli

Faculty of Philosophy, Pontifical Lateran University

Abstract:

We discuss the problem of testimony starting from the debate in Social Epistemology which is strictly related to the nature of social evidence. We want to know what we can take for granted regarding shared knowledge especially in critical situations like the Covid era. It is not only a matter of who we can trust or what we can accept using our own epistemic resources. Rather, it is important to establish a plausible connection between testimony and social evidence i.e. the objectivity of the content of beliefs we can share.


Keywords: Testimony; Social Evidence; Covid19; Reductionism; Anti-reductionism; Communitarianism


1. Introduction

Knowledge by testimony is the major problem for social epistemology and becomes more relevant for laypersons in this pandemic period.


We are exposed to a lot of information from TV and social media in general. We form our opinion on the basis of social evidence coming from testimony. What is the status of this “social evidence” given the uncertainty of information about the nature of Coronavirus, the impact of its effects on our body, the functioning of several vaccines and related issues?


Social epistemology is a crucial area for discussing the problem of the evidence we get from indirect information, namely that information we do not obtain from our own experience or inferential reasoning [1, 2]. Beyond the distinction between “reductionism” and “anti-reductionism”, we argue for a communitarian view about the status of social evidence.


  • Reductionism attributes objectivity to testimony based on perception, memory and inductive inference (Van Cleve, Fricker, Schmitt).

  • Anti-reductionism maintains that the auditor can accept the testimony unless he/she has reasons to reject its content (Coady, Burge, Foley, Lackey).


2. The Interpersonal View of Testimony

There are several difficulties related to the possibility of trusting our own sources of social evidence in the case of reductionism, and of having positive or negative reasons in the case of anti-reductionism. The so-called "interpersonal approach" seems more promising for providing a plausible account of social evidence because it emphasizes the speaker's responsibility for the truth of the content of his/her claim (Moran, Hinchman, Ross, Zagzebski, Faulkner, McMyler) [1, 2]. In this case, testimony is valid because it is grounded in the fact that the speaker is responsible for the truth of his/her assertion (Moran), or because he/she invites the auditor to trust him/her.

Actually, we do not know the nature of the reasons for which a speaker invites the auditor to trust him/her. They can be epistemic, ethical or prudential, and these different types of reasons are embedded in the information that circulates from scientific communities to the public sphere on issues concerning Covid19. Scientists disseminate their discoveries through social media because they think they have grasped some true aspects of the nature and behavior of the coronavirus. On the basis of these discoveries the medical industry prepares medicines and vaccines, which are subject to market laws. From prudential and ethical points of view, then, scientists recommend to people norms and devices for private and public health. It is reasonable to follow them because there are data at our disposal that show that they work. So the source of a valid testimony resides in these data as social evidence that we can trust, beyond the different reasons embedded in circulating messages.

3. Social Evidence

Our discussion of social evidence falls within the ambit of "communitarian epistemology" (Welbourne, Hardwig, Kusch, Brandom) [3, 4]. According to communitarian epistemology, evidence is something that emerges from the work of teams and communities; normative aspects are attributed to the content of the beliefs we share. This content can be taken as true because it counts as knowledge in a certain epistemic community and is declared true in public contexts. Alternatively, we can provide a model for the justification of shared beliefs, i.e. for social evidence, that investigates the nature of their content and the possible attitudes we can take toward them.

A plausible option is represented by the structure of a social "space of reasons," which characterizes the content of beliefs in inferential terms so that they become "reasons" that can be publicly recognized even though they grow out of a particular practice with its own vocabulary. Inference can be described in a "material" sense: from knowledge gained in particular circumstances, and from what falls under the concept of Coronavirus, we recognize consequences of its application that have epistemic and ethical aspects. In our case, properties are attributed to Coronavirus by a specific field of medicine whose discoveries are the basis of social evidence for related practices and for laypersons.

Social evidence represents the ground from which speakers can undertake and attribute commitments and entitlements. It also favors different types of speech acts: assertion, denial, query and challenge. They are all important to recognize the true aspects of objects (living or not) and phenomena.


4. Conclusion

We provide arguments for a normative structure of social evidence that grounds the content of testimonial knowledge. It rests on a net of related inferential commitments and entitlements that favor discussion in scientific and social communities. The model clarifies our dependence on information we get from external sources but, at the same time, invites the agent to search for good reasons to accept, reject or suspend judgement on a specific issue, such as those related to Covid19.


References

1. Goldman, A. Social Epistemology: Essential Readings, OUP, USA, 2011.

2. Giovagnoli, R. Introduzione all'epistemologia sociale, LUP, Vatican, 2018, 2nd edition.

3. Giovagnoli, R. "Indirect" Information: The Debate on Testimony and Its Role in the "Game of Giving and Asking for Reasons". Information (MDPI) 2019, 10, 101.

4. Giovagnoli, R. Testimony: A Matter of Social Practices. Proceedings (MDPI) 2020, 47 (1), 43.

14:45-15:00 UTC

Sat 18th Sep

62. Developments of Research on the Nature of Life from the Information Theory of Individuality

Dongping Fan, Wangjun Zhang

South China Normal University, Institute for Science, Technology and Society, Center for Systems Science and Systems Management Research, Guangzhou, Guangdong 510006, China

Abstract:

The study of the nature of life is a classic and cutting-edge topic in the philosophy of biology and the philosophy of science. Research on the nature of life from the perspective of information can be traced back to Schrödinger's theory of the negative entropy of life. In his book What is Life?, Schrödinger adopted the concept of negative entropy and put forward the famous view that "life feeds on negative entropy". Many systems scientists and systems philosophers, building on Schrödinger's research approach, emphasized the relationship between information science and the nature of life. For example, Shannon proposed that the concept of information is associated with entropy, which provides a foundation for the relationship between information and the nature of life: Shannon's concept of information can be regarded as the negative entropy of thermodynamics and used to measure the certainty and degree of order in systems. Furthermore, building on the theory of negative entropy and Shannon's concept of information, Prigogine established the formulation of entropy change in open systems, which extends the understanding of the relationship between the degree of order in systems and the negative entropy flow of life.

Recently, David Krakauer, director of the Santa Fe Institute, has coined the information theory of individuality, based on a definition and formalization of life from the perspective of information, as a means to rethink the theoretical hypothesis of biological individuality. It abandons the exclusive preference for a particular biological level or object and the dependence on biological characteristics; instead, it focuses on an individual's informational characteristics and defines individuals as aggregates that "propagate" information from their past into their future while maintaining a considerable level of temporal integrity. Relative to biological individuals, individuals defined in informational terms are called informational individuals. From the theory of the negative entropy of life to the information theory of individuality, the idea that "life feeds on negative entropy" has gradually deepened. Informational individuals extend the notion of the living individual beyond the biological level, including but not limited to individuals at the biological level.
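The formulation referred to here is Prigogine's standard decomposition of the entropy change of an open system into an internally produced term and an exchange term:

\[
dS = d_iS + d_eS, \qquad d_iS \ge 0,
\]

where \(d_iS\) is the entropy produced inside the system and \(d_eS\) the entropy exchanged with the environment. A living system can maintain or increase its order when the inflow of negative entropy offsets internal production, i.e. when \(d_eS < 0\) and \(|d_eS| \ge d_iS\), which is a precise reading of "life feeds on negative entropy".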

The information theory of individuality formally defines the informational individual by mutual information, and deepens the informational understanding of life. The informational individual is seen as a stochastic process that can maintain an orderly state through the passage of time. Informational individuals can be identified by the amount of information in the process of information propagation: mutual information. The future state of a system is affected not only by its own current state but also by the current state of the environment. By decomposing mutual information through the chain rule, the flow of information between the system and the environment can be quantified. The future state of an informational individual is determined by the current state of the system and the current state of the environment, and its predictability can be quantified by mutual information. In order to rigorously formalize different kinds of individuality, and to clarify whether the factors affecting the state of the system originate from the system or from the environment, David Krakauer draws on the information-theoretic framework of partial information decomposition and introduces the notions of unique, shared and complementary information. In this way, mutual information can be fine-grained into the unique information of the system, the unique information of the environment, the shared information between system and environment, and the complementary information between system and environment. Different combinations of these four kinds of information classify informational individuals into Organismal Individuality, Colonial Individuality, and Environment Determined Individuality. The formal definition of the informational individual not only makes it universal, but also explains the diversity of life and further deepens the relationship between information and the nature of life.
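A minimal formal sketch of the decomposition just described, in standard information-theoretic notation (the symbols \(S_t\) for the system's state and \(E_t\) for the environment's state at time \(t\) are ours, chosen for illustration): the predictability of the system's next state is the mutual information between that state and the joint present, which the chain rule splits into a system term and an environmental correction,

\[
I(S_{t+1}; S_t, E_t) = I(S_{t+1}; S_t) + I(S_{t+1}; E_t \mid S_t).
\]

Partial information decomposition refines the same total into four non-negative parts,

\[
I(S_{t+1}; S_t, E_t) = \mathrm{Unique}(S_t) + \mathrm{Unique}(E_t) + \mathrm{Shared}(S_t, E_t) + \mathrm{Complementary}(S_t, E_t),
\]

and, on our reading, the three forms of individuality named above correspond to which of these terms dominates (organismal individuality, for instance, to a dominant unique system term).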

The information theory of individuality is a contemporary answer to Schrödinger's question "Is life based on the laws of physics?" It also provides a new perspective on, and inspiration for, the study of artificial life. To the question "Is life based on the laws of physics?" Schrödinger answered in the affirmative and pointed out that the new physical principle might be quantum mechanics. The informational individual focuses on informational characteristics and persistently ordered states, which is consistent with Schrödinger's thought on life. However, the informational individual expands the scope of the living individual to differently coarse-grained adaptive aggregates at various levels of nature, society and culture, and deepens our informational understanding of the nature of life. Therefore, we believe that information theory based on entropy can serve as a candidate law for life, and that the information theory of individuality, which uses information and information theory to define and explain life, is a contemporary answer to "Is life based on the laws of physics?" The information theory of individuality can also provide a new vision of, and inspiration for, the theoretical hypotheses of artificial life. The informational individual, which focuses on the characteristics of information, provides a theoretical basis for the artificial-life view that "life lies in form rather than matter". The information theory of individuality provides a way to quantitatively identify individuals without physical boundaries, and can provide a scientific method for identifying artificial-life individuals. Moreover, the information theory of individuality holds that replication is not the essential feature of life, and that life based on the replicator hypothesis is only a special form of life, not a universal one, which provides a new vision for the study of artificial life.

In conclusion, the information theory of individuality develops Schrödinger's theory of the negative entropy of life, expands the scope of living individuals, and deepens the relationship between information and the nature of life. This paper argues, however, that how to combine the replicator hypothesis with the formalization of the informational individual is a new and yet to be explored topic.

Keywords: The information theory of individuality; Information; The nature of life; Biological individual; Informational individual

This work is supported by the Major Program of the National Philosophy and Social Science Foundation of China (19ZDA037), "History of System Philosophy".


15:00-15:15 UTC

Sat 18th Sep

63. On Information Interaction between the Layers of the Material System

Zhikang Wang

Sun Yat-sen University, China

Abstract:

This study focuses on the relationship between material systems and information. (Wiener said that information is neither matter nor energy [Note]. Then what is it?) The idea of information has greatly enriched human understanding of the objective world, raising the ancient understanding of interaction as the way all things exist to a level where it can be quantified and computed. Interaction is no longer an abstract, empty and arbitrary philosophical category. Information, as a concrete form of the interaction of things, is an entity. The establishment of the "information entity" brings the philosophical and scientific understanding of the world into a new and deeper stage, has played a huge role in promoting human practice and the prediction of the future, and has affected all aspects of human production and life. Since the 20th century, the development of philosophy and science has, in a sense, benefited from the proposal of the concept of information and the formation of the concept of the information entity.

This paper recognizes and defines information from a new and particular perspective: information is understood, or defined, as the form of interaction between different and crossed layers of the material system. The paper demonstrates this idea in the following respects.

1. The hierarchy of material systems and the two unified forms of diversity

Everything in the world, no matter how large or small, is a unity of the "multiple" and the "single", which is the concrete manifestation of the unification of material diversity; hence everything can be called a material system. The unity of material diversity takes more than one form, so there is more than one kind of material system. Simple systems and complex systems are the two fundamental forms arising in the concretization of the unification of diversity. When we begin to chart the variety of things presented before us according to the relationships and order between their qualities (see figure 1), we find a hierarchical relationship between the various specific forms of matter. This is a real manifestation of the unification of diversity in the material world. The existence of various things at the same material layer is called "simple diversity", and we can usually distinguish them easily through the conceptual methods of ordinary formal logic. The existence of things at different material layers is called "complex diversity", and we usually cannot find logical or other theoretical criteria to distinguish them. In further research, we also find that the chart reveals the two most basic situations of the unified diversity of things: one at the same layer and the other across different layers. For these reasons, a system composed of the unified diversity of substances at the same layer is called a "simplicity system", while a system composed of the unified diversity of substances at different layers is called a "complexity system". We find the fundamental difference between simplicity systems and complexity systems in their common origin, the hierarchy of matter (see figure 2).

2. Three types of the reductionism of determinism

The reductionism of mechanical determinism

The reductionism of mechanical determinism holds that any high-layer law can be reduced, by complex calculations, to low-layer laws: the mechanical laws of a single particle or a small number of particles. Until decades ago, attempts were made to reduce biology and psychology to classical mechanics describing "microscopic" (relative to statistical physics) celestial movements, to quantum mechanics reflecting electronic motion, and to quantum electrodynamics and relativistic mechanics characterizing the interaction of electrons with the electromagnetic field. The meaning of mechanical reductionism is that all material movements can be recognized through purely mechanical relations alone. This view holds that the material hierarchy is irrelevant.

The reductionism of statistical determinism

In scientific practice, it has been found that the nature of macroscopic things can be understood through the statistical laws of microscopic states. This opens up another way of understanding things. The laws of statistical physics have also been extended to the understanding of all things, and the scientists making such efforts believe that the relationships between various things and material layers can only be attributed to the laws of statistics, and that all material movements between two layers can only be described by purely statistical methods. This view holds that non-statistical features between hierarchies are simply unknowable and that any further exploration is meaningless.

The reductionism of cryptodeterminism

The formation of the molecular theory of the biological genetic code marks an unprecedented new height in the discrimination between simplicity and complexity systems. Attempts to reveal the "mechanical reduction" of microscopic properties in the macroscopic domain, and the "statistical reduction" of macroscopic properties in the microscopic domain, both encounter obstacles in explaining complex things such as biogenetic phenomena. Schrödinger, a famous theoretical physicist and one of the founders of molecular biology, was the first to point out that neither of the reduction forms of determinism mentioned above is possible in a biological genetic system. Schrödinger then put forward the theory of the genetic code, arguing that the high and low layers of organisms cannot be reduced to one another through the traditional scientific theories we know; there are many code relations between them, which only through translation become the physical and chemical laws we know. The proposal and establishment of cryptodeterminism is a major revolution of thought in scientific epistemology and methodology. It marks a new stage in human research on complex things, including humanity itself (cryptodeterminism is also known as information-determinism).

3. The interaction between layers in complexity system and its informatization

There are many layers within a complexity system. Some are "native layers", which appear or disappear with external conditions such as rising temperature and changing pressure. Others are "derived layers", which emerge from the interaction of native layers. A complexity system is usually surrounded by higher and lower layers and is in a state of complex hierarchical interaction, so its overall nature depends on the interrelations of the different and crossed layers inside and outside it. To explain changes in the overall properties of complex systems, scholars have created many new concepts for causal analysis, such as downward causality, bidirectional causality, causal feedback, causal cycles, causal networks and causal maps. However, they all encounter the problem of the fracture of the causal chain between different and crossed layers in complex systems.

The change of causal ideas

The interaction between different and crossed layers leads to complex causal relations inside and outside the system (see figure 3). However, the traditional concept of causality (once questioned by Hume) focuses only on necessary connections and attributes accidental connections to defects in the cognitive ability of the knower. Thus the traditional view of causality cannot cope with the complex causal relations of the hierarchy of complexity systems. The basic tenet of traditional determinism is to ensure the smooth flow of necessity along the causal chain, namely to posit a universal physical mechanism connecting "cause" and "effect". Life phenomena, however, violate and do not exactly follow such a determinism. The mutual relationships between different and crossed layers inside and outside organisms appear in the form of coding. The same information content can have different material carriers, and the necessary and accidental results of the interactions between layers can be transformed into informational forms. Life exists as an "information entity", transforming and moving between different layers. In biological and similar complexity systems, the explanatory power of information causality is far greater than that of traditional mechanical and statistical determinism, and the traditional view of causality gives way to the view of information causality.

Information entity

The information entity is the "model" of the overall attributes of the system, hidden in the intercommunication of different layers, existing in dependence on a carrier yet independent of any particular carrier. Information becomes a special form of the interaction between system layers, and through this interaction the "information entity" moves between the layers of the system. The existence of the overall attributes of things depends on the information content.

The appearance of causal connection in the complexity system is transformed into an information-dominated one: from mechanical determination and statistical determination to coding determination. Information crossing the hierarchical barrier maintains causal continuity and realizes the unity of necessary and accidental connection through the interactions of the hierarchical elements inside and outside the system. The overall attributes of the system remain stable in an unpredictable environment because of the existence of the information entity.

Information causality

The existence of the hierarchy of material systems indicates that there are not only quantitative but also qualitative points of inflection between layers, which stand in relations of containing and being contained. The attributes of things that have sudden or emergent characteristics at different layers cannot be normalized or reduced; thus the unfolding form of the causal relations of things between hierarchies is not the traditional, eternal and universal one. On the basis of mechanical and statistical determination, the interactions between the layers develop special coding relations, presenting an informational causal connection, and the interaction takes the "information causal form".

The causal connections (necessary and accidental) of all layers and elements of a material system are written in coded form and stored in carriers at the various layers. The interactions across layers are the initiator. A large number of accidental and necessary connections collide and intersect: accidental connections at the same layer are transformed into necessary connections between different layers, which in turn constrain the accidental connections of the same layer, and accidental connections between different layers are likewise transformed into necessary connections at the same layer, so that any change at any layer within the system is a mutation or protrusion relative to the other layers and to the system as a whole. We are unable to find the direct (physical) cause of the results at another layer; it is the locking of randomness by hierarchical interactions. The originally complex causal connection transforms into a simple informational connection (from a certain angle, information is a mapping of the complex hierarchical relationships of material systems). The complex causal relations of different and crossed layers translate into "information causal relations" in coded form. Different hierarchies and attributes are manifestations of information entities. This explains why the structures and attributes of different layers cannot be reduced between the layers, but can be unified through the transcription, translation and decoding of the information entity to realize the overall attributes of the system. The causal analysis of the complexity system is thus transformed into information causal analysis.

4. The flow of the information entity in the material system

5. Hierarchical interaction and information transmission

6. The organisms establish information communication connections between different and crossed layers


15:15-15:30 UTC

Sat 18th Sep

64. Informational Aesthetics and the Digital Exploration of Renaissance Art

John Douglas Holgate

St. George Hospital, Sydney, Australia

Abstract:

The recent rise of radiographic and photographic technology in the examination and interpretation of works of art, which has accompanied the digitization of artistic creativity in the contemporary world, has greatly enhanced the power of the naked eye to view and understand the masterpieces of the past. Through modern 'digital telescopes' a new breed of art scientists has emerged to both enhance and challenge traditional art scholarship. In this paper I will first review some of the major projects in this new field of informational aesthetics, with examples ranging from the impressive discoveries of Maurizio Seracini and Pascal Cotte to the magnificent photographic examination of the Ghent Altarpiece by the team at the National Gallery of Art in Washington. Secondly, I will present the results of my own two-year exploration of Leonardo da Vinci's extant paintings and drawings using photographic magnification, leading to the identification of recurrent autographic images and motifs visible only through the lens of a digital camera. Finally I will discuss the implications of these studies for the philosophy of photography, for the world of art and for the future of digital humanism.

Keywords: informational aesthetics; philosophy of information; metaphotography; digital humanism; art history; Renaissance art; Leonardo da Vinci

1. Introduction – the restricted gaze of the real world viewer

What is the relationship between information and the creation, understanding and evaluation of the phenomenon we call art? For many centuries observers of works of fine art have been allowed only limited information about the works themselves. For the common man the viewing of masterpieces was restricted by the rulers of society, the kings, aristocrats, oligarchs and Church officials, to gazing from a distance. The beholder's role in the aesthetic experience has historically been to accept the values assigned to a work by the experts of the art world. Masterpieces were once hidden from the common gaze in monasteries, church recesses and the private collections of the wealthy, or displayed at a respectable distance in galleries and exhibitions. With the emergence of the Internet and of high-quality digital versions of artworks in online collections, which can be downloaded, magnified and explored by everybody, art scholarship is becoming more of an evidence-based science than an anecdotal game based on subjective opinion.

2. The Pioneers of Art Science and Digital Aesthetics

In recent decades we have witnessed the emergence of high-powered radiographic and photographic technologies for the examination and diagnosis of artworks: those developed by Pascal Cotte in his Lumiere (L.A.M.) projects, such as the digital restoration of the Mona Lisa and his multispectral analyses of The Lady with an Ermine and La Bella Principessa [4, 5]; Maurizio Seracini's radiographic examinations of The Adoration of the Magi and the Battle of Anghiari [8]; the terahertz (THz) imaging work on Goya's 1771 Sacrifice of Vesta by Cristina Seco-Martorell and her colleagues in Barcelona; and the magnificent project of the University of Antwerp and the NGA using macro X-ray fluorescence (MA-XRF) imaging on the Ghent Altarpiece [14]. Since Roland Barthes's 1980 Camera Lucida (La Chambre Claire), in which he explored the 'spectrum', the beholder's share of the viewer of photographs, and Vilém Flusser's Towards a Philosophy of Photography [7, 13], which examined the apparatus of the 'black box' and its role in digital humanization, there have been several contributors to this new field. Hans Belting's marvelous explorations of the iconic image throughout history, as well as the growth of digital aesthetics [6] and metaphotography [9], have provided us with a framework for understanding a world dominated by a constant tsunami of invading photographic and cinematic messages delivered via the ubiquitous media and social networks of our age.

3. Uncovering the Hidden Worlds of Leonardo

A genuine Leonardo leaps out at the viewer by virtue of the sheer excellence and enchantment of the painting. Its entrancing quality entices you into hidden worlds scarcely visible to the naked eye. Only with the recent emergence of radiographic technologies are we able to see and explore fully the incredible complexity of his oeuvre. In a forthcoming publication, 'The Hidden Worlds of Leonardo da Vinci', I have developed an evidence-based protocol (LISA, or Leonardo Iconographic Scale of Authentication) based on macrophotographic analyses of his extant paintings, drawings and codices carried out over a two-year period. These observations reveal the presence of dates, a recurring V-shaped monogram (two lion heads joined by a V-shaped line) and a camouflaged autograph. If verified over time by the art community, these features will greatly help art historians and scholars to place Leonardo's works more precisely in their biographical and historical contexts. They would offer us a chronological guide, his personal Baedeker, leading us through the magnificent landscapes and personages of his world.

Is it possible that we may have so far merely scratched the surface of his creative genius? (see Figures 1-4 below).

4. Conclusion

Digital Aesthetics represents a bold new chapter in the Philosophy of Information and the exciting investigations and projects in the art world I have adumbrated here are being matched by the increasing presence of Digital Art as well as the emerging theoretical and philosophical discussions around the nature and role of photography in society. Information, as Einstein once said about imagination, can take us everywhere – even into the undiscovered corners of Renaissance art.

Figures only available with file

Figure 1. The hidden image of Dante wearing his laurel wreath and young Leonardo as Virgil emerging from the Dark Wood of the Inferno in the background of The Adoration of the Magi. Under further magnification the cave entrance to Hell and behind it the City of Woe swim into view. In green cyphers the disguised date 1479 can be discerned.



Figure 2. Heavenly Hands rescuing an animal in the Florence flood of 1333 seen in the background of the Mona Lisa. Leonardo’s omnipresent V-shaped monogram with two lion heads is also visible.

Floating corpses and skeletons and the top of the submerged Torre d’Arnolfo appear bottom right.


Figure 3. After five hundred years Mona Lisa finally reveals her age – the year 1498 is camouflaged across her chest and in the clouds at the top left and the top right of the painting



Figure 4. Diagnosing Mona Lisa’s beauty spot - is there a depiction of a creature inside the skin imperfection? Is the mole benign? Can we see an overlapping 1 4 9 8 in the image?


References


1. Bourriaud, N. Relational Aesthetics, trans. Pleasance, S. and Woods, F.; Dijon: Les Presses du Réel, 2002.

2. Capurro, R. Homo Digitalis. Beiträge zur Ontologie, Anthropologie und Ethik der digitalen Technik;

Wiesbaden, Germany: Springer, 2017.

3. Capurro, R. Digital futures: a brief essay on sustainable life in the digital age. Metode 10 2020.

4. Cotte, P. Lumiere on the Lady with an Ermine. Unprecedented discoveries; Paris: Vinci Editions, 2012.

5. Cotte, P. Lumière on the Mona Lisa: Hidden Portraits; Paris:Vinci Publications, 2016.

6. Fazi, B. Digital aesthetics: the discrete and the continuous. Theory, Culture and Society 2019, 36, 3-26.

7. Flusser, V. Towards a philosophy of photography; Islington, London: Reaktion Books, 2001

8. Guidi, G., Atzeni, C., Seracini, M., Lazari, S. Painting Survey by 3D Optical Scanning: the case of Adoration of the Magi by Leonardo da Vinci. Studies in Conservation 2004, 49, 1-12.

9. Guzel, D. Introduction to meta-photography: a self-reflexive and self-critical mirror for photography in digital culture. (Master's thesis) Manchester: University of Manchester, 2015.

10. Holgate, J. Raphael's School of Athens from the perspective of angeletics. In Information Cultures in the Digital Age: A Festschrift in Honor of Rafael Capurro; Kelly, M., Bielby, J., Eds; Wiesbaden, Germany: Springer, 2016.

11. Holgate, J. Codes and Messages in Raphael’s School of Athens. Available online:

http://angeletics.net//CodesHolgate.pdf (accessed on 31 July 2021).

12. Kemp, M. Living with Leonardo. Fifty Years of Sanity and Insanity in the Art World and Beyond. Chapter 8 Science and Seeing. London: Thames & Hudson, 2018.

13. Truscot Smith, C. "The Lens is to Blame": Three Remarks on Black Boxes, Digital Humanities, and The Necessities of Vilém Flusser's "New Humanism". Flusser Studies: Multilingual Journal for Cultural and Media Theory 2014, 18, 1-16.

14. Van der Snickt, G., Dooley, K.A., Sanyov, J. Dual mode standoff imaging spectroscopy documents the painting process of the Lamb of God in the Ghent Altarpiece by J. and H. Van Eyck. Science Advances 2020, 6, 31.

15:30-15:45 UTC

Sat 18th Sep

65. Practice, Challenges and Countermeasures of Accelerating the Development of new Generation of Artificial Intelligence in Xinjiang

Hong Chen

Party School of Xinjiang Uygur Autonomous Region

Committee of the Communist Party of China, Xinjiang, Urumqi, 830002

Abstract:

Artificial intelligence is a strategic technology leading the current round of scientific and technological revolution and industrial change, with strong spillover and leading effects. From 5G and big data to smart transportation, smart cities and counter-terrorism, various smart-technology applications are entering the homes of ordinary people at an unprecedented speed and breadth, allowing people to share in smart living. Xinjiang is therefore promoting the application and development of a new generation of artificial intelligence in the region on the basis of full consideration of its needs for artificial intelligence technology. This paper first expounds the practice and major achievements of accelerating the development of a new generation of artificial intelligence in Xinjiang. Then, according to the actual needs of Xinjiang, it points out the challenges faced and puts forward countermeasures and suggestions.

Keywords: Artificial intelligence; Application and development; Counter-terrorism; Challenges; Suggestions

15:45-16:00 UTC

Sat 18th Sep

66. A Basic Problem in the Philosophy of Information Science: Redundant Modal Possible World Semantics

Xiaolong Wan

Professor and Director of Research Center “Philosophy, Logic, and History of Science and Technology”, University of Electronic Science and Technology of China, China

Abstract:

The overall uncertainty of information is just a special case of the double entanglement of the basic problems of integrity, dialectics and uncertainty in many basic disciplines. This double entanglement shows that a more general basic theory is needed, and STRF is a candidate. According to STRF, information is an uncertainty difference between equivalents. Before the whole has formed, which hidden variable will take effect is unpredictable in principle. However, once the whole has formed, the hidden variables emerge, because each whole can be transformed into a non-whole containing hidden variables.

Keywords: Information; STRF; Holism; Emergence

1. Introduction

From the perspective of philosophical history and etymology, there are two readings of the English word "information": the first is "inform + ation", that is, the nominalization and abstraction of "inform" ("to tell"). The second is "in + form + ation", that is, the nominalization and abstraction of "entering into form" or "giving form". Against the texts of the various definitions of information, the first kind of "information" can be extended to "message transmission", and the second kind can be extended to "some given form, or some negative expression about form".

Therefore, starting from the first kind of "information" and excluding the particular meaning of "news" in daily life or in specific sciences, Shannon's "that which eliminates random uncertainty" can be read as "something that can eliminate random uncertainty". Combined with Wiener's "information is neither matter nor energy" and Holt's "the characteristic parameter of matter is information", information is likely to be the characteristic parameter of matter or energy that can eliminate the random uncertainty of matter or energy. Combined with Holt's "it specifies the value of matter and energy", the specification includes an equivalence relation between the specifier and the specified, and the value involves a quantitative relation. Therefore, it can be conjectured that information can specify the value of matter or energy. Connecting this with the second kind of "information", we try to express information as "the elimination of random uncertainty between matter or energy on the two sides of an equation"; "message" can be abstracted as "value", and the simplest way to "eliminate" is to take a difference. Therefore, we obtain a basic definition of pure information, I-1: information is an equivalent form between things (situations) with an uncertainty difference; or, information is the uncertainty difference between things (situations) that stand in some form of equivalence. The simplest information is the equivalent form between different expressions of the same thing (situation) with an uncertainty difference.
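One standard way to cash out "uncertainty difference", consistent with the Shannon usage the passage invokes (the reading is ours, offered for illustration): mutual information is the difference between the uncertainty of \(X\) and the uncertainty remaining in \(X\) once an equivalent expression \(Y\) is given,

\[
I(X;Y) = H(X) - H(X \mid Y),
\]

where \(H\) is the Shannon entropy. If \(Y\) is a lossless re-expression of \(X\) (the "equivalent form" of definition I-1), the conditional term vanishes and the information carried equals the whole of the uncertainty eliminated.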

2. Philosophy of Information

The "thing (situation)" includes material and energy, as well as consciousness, attribute and information itself. The conservation of information is the equivalent of this form; The formation of information is to make equivalent transformation from things (situations) with little uncertainty to things (situations) with great uncertainty. The amount of information is the uncertainty difference. The transmission of information is the process of transmitting the message loaded with information from the source to the sink through the channel.

Now consider Dummett's intuition: knowledge is the unity of form and content, while information is only a form, which is more universal and objective. Therefore "it is more natural and more basic than knowledge", and pure formal information obtains content "without the need to understand the argument that makes it concrete". The operational level of information flow is more basic than the acquisition and dissemination of knowledge. From the perspective of classical philosophy of science, information as a form can, as Dennett said, "help to finally unify mind, matter and meaning in a single theory", that is, unify the syntax and semantics of statements about mind and matter. Zhong Yixin did see some characteristics of information: "information is the self-expression/self-display of the state of the movement of a thing (the same thing) and the way the state changes (different expressions)" (an equivalent form). Wu Kun's recent interpretation of the definition of information he proposed more than 30 years ago also correctly mentions the classificatory attributes and some characteristics of information as a subclass of existence: "information is a philosophical category marking indirect existence (concerning the form of things rather than things themselves); it is the self-display of the mode and state of matter (direct existence)" (an equivalent form). However, both definitions suffer the dual shortcomings of too wide an extension and too narrow a connotation, and neither involves the basic connotation of information as "eliminating uncertainty". Zhong's "state of the movement of things" cannot cover all things. If Wu Kun's direct existence is only "matter", what is consciousness? If consciousness is a "direct existence", then obviously there is not only information marking matter but also information marking consciousness. If consciousness is itself an "indirect existence" marking matter, what is the difference between the "indirect existence" of consciousness and the "indirect existence" of information? Is consciousness not also a self-display of the mode and state of existence of matter (direct existence)?

In quantum measurement, however, the experimental results of measurement indicate a factual relationship between two statements, while the mathematical-theoretical results of measurement indicate a logical (formal) relationship between the corresponding statements.

3. Metaphysics of quantum information driven analysis

Functional special relativity (STRF) can provide a complete mathematical-analytical solution to holism. Although it will take some time to use this method to deal with complex quantum-information experimental results and the mathematics of quantum mechanics, we must not let our work stop at fuzzy metaphysical statements such as "indivisibility" or "non-separability". The information-theoretic paradigm of quantum mechanics is fascinating, but these interpretations have been developed hastily, without a clear definition of their basic concepts. This also suggests that these interpretations may come close to, but do not identify, a basic conceptual problem of quantum mechanics. We believe that if this more basic problem can be understood clearly, an exquisite rather than a simple information-theoretic interpretation is reasonable, though its rationality is like that of the exquisite modal interpretation.

There is a popular research orientation in the academic world that compares the "integrity", "non-separation" and "non-locality" exhibited by quantum entanglement with many-valued or modal logics among the non-classical logics. We believe this idea is worth pursuing because it is quite enlightening. However, it should be noted that the metaphysics of quantum information should be based on narrowly scientific and philosophical study of quantum information. Such a study should first clarify the logic of quantum mechanics, whose core is quantum probability. This work cannot directly analyze the essence of quantum probability from the mathematical development of quantum mechanics; instead, it reveals the logic of quantum mechanics and the essence of quantum probability with the help of lattice quantum logic, by which von Neumann, the founder of the standard mathematical formalism of quantum mechanics, simplified his own formalism. This point will be analyzed further and in detail when we come to STRF and its interpretation of the nature of many-valued and modal logic.

References

1. Blackburn, P., van Benthem, J. and Wolter, F. 2007. Handbook of Modal Logic. Elsevier.

2. Copeland, B. J. 2006. Meredith, Prior, and the History of Possible Worlds Semantics. Synthese 150: 373-397.

3. Divers, J. 2006. Possible-Worlds Semantics without Possible Worlds: The Agnostic Approach. Mind 115: 187-226.

4. Hill, Daniel J. & Stephen K. McLeod. 2010. On Truth-Functionality. Review of Symbolic Logic 3 (4): 628-632.

5. Kripke, S. A. 1963. Semantical Considerations on Modal Logic. Acta Philosophica Fennica 16: 83-94.

6. Li Xiao-Wu. 2006. Modal Logic. Zhongshan University.

7. Łukasiewicz, J. 1970. On Three-Valued Logic. In Jan Łukasiewicz: Selected Works, edited by L. Borkowski. North-Holland Publishing Company, Amsterdam/London, p. 88.

8. Marcos, João. 2009. What Is a Non-Truth-Functional Logic? Studia Logica 92 (2): 215-240.

9. Marten, James A. 1973. Are There Truth Functional Connectives? Metaphilosophy 4: 187-204. McCarthy, J. 1997.

10. Quine, W. V. 1947. The Problem of Interpreting Modal Logic. The Journal of Symbolic Logic 12: 43-48.

11. Schnieder, B. 2008. Truth-Functionality. Review of Symbolic Logic 1 (1): 64-72.

12. Read, Stephen. 1992. Conditionals Are Not Truth-Functional: An Argument from Peirce. Analysis 52: 5-12.

13. Wan, Xiao-long. 2011. The First Exploration of Unary Operators: Can Unary Operators Be Exhausted? Journal of Anhui University (Social Science edn., in Chinese), No. 6.

14. Wan, Xiao-long et al. 2012. The Second Exploration of Unary Operators: Deontic Logic and Deontic Paradox. Journal of Anhui University (Social Science edn., in Chinese), No. 3.

15. Wan, Xiao-long. 2012. The Third Exploration of Unary Operators: On Modern Modal Logic from STRF. Journal of Huazhong University of Science and Technology (Social Science edn., in Chinese), No. 3.

16. Wan, Xiao-long et al. 2013. On G. Birkhoff and J. von Neumann's Quantum Logic: The Fourth Exploration of Unary Operators. Journal of the Studies in Philosophy of Science and Technology (in Chinese).

17. Wan, Xiao-long. 2012. The Fifth Exploration of Unary Operators: On Modern Modal Logic from STRF. Journal of Shandong University of Science and Technology (Social Sciences, in Chinese), No. 12.

18. Wan, Xiao-long et al. 2014. The Equivalent Transformation between Non-Truth-Function and Truth-Function. In Scientific Explanation & Methodology of Science. World Scientific, 176-211.

19. Radzki, M. M. 2016. On Axiom Systems of Słupecki for the Functionally Complete Three-Valued Logic. Axiomathes, 1-13.

20. Wan, Xiao-long. 2017. On Holism: Is the Whole a Partial Nonlinear Function or a Non-Function? Research on Dialectics of Nature (in Chinese), 33 (07).

21. Wan, Xiao-long. 2019. A Comparative Study of Three Typical Logic Notation Systems (in Chinese). The Third National Academic Conference of the Chinese Logical Society, 2019.

22. Floridi, L. 2011. The Philosophy of Information, 1st ed. Oxford University Press, Oxford, UK.

23. Dodig-Crnkovic, G. and Hofkirchner, W. 2011. Floridi's Open Problems in Philosophy of Information, Ten Years After. Information 2: 327-359.

24. Dretske, F. 2004. Knowledge and the Flow of Information, 1st ed. MIT Press, Cambridge, MA, USA.

25. Barwise, J. and Seligman, J. 1997. Information Flow: The Logic of Distributed Systems, 1st ed. Cambridge University Press, Cambridge, UK.

SUNDAY, SEPTEMBER 19

Contributions from Global Forum of Artificial Intelligence GFAI 2021

Block 1:

4:00-7:00 UTC

Sun 19th Sep

GFAI

Yixin Zhong

4:00-4:20 UTC

Sun 19th Sep

67. Paradigm Revolution Creates the General Theory of AI

Yixin Zhong

Beijing University of Posts and Telecommunications, Beijing 100876, China

Abstract:

This is a call for a collective reconsideration of the paradigm executed so far in the studies of the information discipline, which is in fact not the paradigm suited to the information discipline but the one suited to the physical disciplines. This is just the old saying of "putting A's hat on B's head". The misuse of paradigm has been the root cause of all the problems related to the studies of the information discipline and should therefore be corrected as soon, and as cleanly, as possible if we want the studies of the information discipline to enter their higher stage of development.

Keywords: Paradigm Shift; Paradigm for Physical Science; Paradigm for Information Studies

1. Introduction

It has been more than 70 years since the publication of Claude E. Shannon's paper "The Mathematical Theory of Communication", which later came to be known as information theory. Up to the present, the theories and technologies related to information have been applied in almost every field of science and technology, bringing historic progress and great achievements to mankind and ushering the world into the "information era".

As scientists and technologists in the information discipline, we feel proud and happy. On the other hand, however, can we say that the studies in the information discipline have really been perfect? If not, what should we do for, and contribute to, the further development of the discipline?

To find a clear answer to the questions raised above, we should make a thorough investigation of the studies of the information discipline, so as to see whether there really exist any weaknesses, problems, or even mistakes and big challenges in the discipline.

In the following sections, some findings of the investigation are briefly presented.

2. Concepts and Definitions

To avoid unnecessary misunderstandings, some foundational concepts related to the studies of the information discipline need to be discussed and redefined.


(1) Paradigm

There are many different ways in everyday English to explain the meaning of the word 'paradigm', such as model, standard, pattern, typical example, and so on [1]. They are all more or less similar to one another, to a certain extent.

A more precise explanation of the word paradigm may be expressed as a formula [2]:

Paradigm = World-view + Methodology. (2.1)

As is seen from (2.1), the explanation of the word 'paradigm' includes two elements. One is the world-view people use for appropriately understanding a thing in the real world, answering the question of what the thing indeed is; the other is the methodology, or approach, people use for properly dealing with the thing, answering the question of how to deal with it suitably.

In the context of scientific research, the formula above can be rewritten as [3]:

Paradigm = Scientific view + Scientific methodology. (2.2)

Therefore, we can have the following definition for paradigm:

Definition 1 Paradigm

The paradigm for a scientific discipline consists of the scientific view and the scientific methodology for that discipline, in which the former defines what the discipline is in essence while the latter defines how to do research in the discipline.

As can be seen from Definition 1, the paradigm for a discipline is the supreme guiding force for the studies of the discipline. Whether the paradigm for a discipline is suitable will determine whether the studies of the discipline are successful.

(2) Information and the Ecological Chain of Information

Matter, energy, and information are regarded as the three categories of raw resources widely existing in reality. Through proper manufacturing, handling and processing, the products of these raw resources can provide humans with various kinds of materials, power, and artificial intelligence respectively.

Information as a raw resource is, no doubt, useful. On the other hand, however, it will be much more useful if it is properly processed, transformed, and utilized by its human subject.

In practice, information as a raw resource has to be perceived, processed and utilized by its subject for implementing certain goal(s), thus forming the ecological chain of information within the framework of subject-object interaction, as seen in Fig. 1.


Figures are only available in the downloaded file.

Fig.1 Ecological Chain of Information within Subject-Object Interaction

The model in Fig. 1 shows a typical process of subject-object interaction commonly found in reality. The lower part of the model stands for the object, existing in a certain environment, and the upper part represents the subject's processing functions.

Once the object information originating from the object acts on the subject, the latter produces an (intelligent) action reacting on the object so as to achieve, or maintain, the subject's goal. The subject's (intelligent) action is produced through a number of information-processing functions, forming the ecological chain of information shown in Fig. 1.
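To make the chain concrete, the following Python sketch walks object information through the subject's processing functions and back to an action; the four-stage split and all function names are illustrative assumptions, not the paper's own decomposition.

    # A toy walk through the ecological chain of Fig. 1: object information
    # flows through the subject's processing functions and returns as an
    # (intelligent) action reacting on the object.
    def perceive(object_info: str) -> str:          # information acquisition
        return f"percept({object_info})"

    def cognize(percept: str) -> str:               # information processing
        return f"knowledge({percept})"

    def decide(knowledge: str, goal: str) -> str:   # decision making toward the goal
        return f"strategy({knowledge}, {goal})"

    def execute(strategy: str) -> str:              # information execution
        return f"action({strategy})"

    object_info = "object state and its pattern of variation"
    action = execute(decide(cognize(perceive(object_info)), goal="keep the goal state"))
    print(action)  # the action that reacts on the object, closing the loop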

It is clear from Fig. 1 that two kinds of information occur in the ecological chain of information: one is named ontological information and the other epistemological information. Ontological information is presented by the object in the environment, whereas epistemological information is produced by both subject and object. Both kinds are important to human beings.

Definition 2: Ontological Information

Ontological information, produced by an object in the real world, is defined as the object's states and the pattern by which those states vary, all presented by the object itself.

Ontological information is more often called object information. It exists regardless of whether it is perceived by a subject, and is thus a purely objective concept of information, having nothing to do with subjective factors.

Definition 3: Epistemological Information

Epistemological information, which a subject perceives from the ontological information of an object, has three components: (1) the form (syntactic) information provided by the object and sensed by the subject, (2) the utility (pragmatic) information provided by the object and evaluated by the subject with respect to his goal, and (3) the meaning (semantic) information produced by the subject via mapping the former two components into the space of meaning (the semantic space) and naming the result.

Epistemological information is more often called the subject's perceived information. Clearly, it originates from ontological information but is modulated by the subject, and is thus a subjective concept of information, related to both subject and object.

Note that the definitions of form (syntactic) information and utility (pragmatic) information are obvious and easy to understand, while the definition of meaning (semantic) information is less intuitive and may thus need some explanation.

The principle for producing the meaning (semantic) information from the form (syntactic) and utility (pragmatic) information is explained in Fig. 2 below.

Figures are only available in the downloaded file.

Fig.2 The Interrelations among form, utility, and meaning

The interrelationship shown in Fig. 2 can be expressed in the following equation [4]:

Y ~ (X, Z) (2.3)

The symbol X in Eq. (2.3) stands for the form (syntactic) information, Y for the meaning (semantic) information, Z for the utility (pragmatic) information, and "~" for the logic operator of "mapping & naming", which maps the combination (X, Z) into the space of meaning (semantic) information and then gives the result a name.

It is obvious from Eq. (2.3) that whenever the meaning (semantic) information Y is obtained, the combination of the form (syntactic) information X and the utility (pragmatic) information Z is also obtained. This means that the meaning (semantic) information Y can represent both the form (syntactic) information X and the utility (pragmatic) information Z.

Note that the definitions of syntactic, pragmatic, and semantic information stated in Definition 3 and expressed in Eq. (2.3) are new results from [4]; they were not made clear by either Peirce or Morris.

Note also that it would not be complete to consider only one of the two definitions of information, whether ontological or epistemological. Nor would it be complete to consider both definitions without understanding the interrelations among X, Y, and Z.
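To make Eq. (2.3) easier to read, here is a minimal Python sketch of the "mapping & naming" operator; the lookup-table semantic space, the threshold, and every name in it are assumptions invented for illustration, not part of the paper's formalism.

    # A sketch of Eq. (2.3): Y ~ (X, Z). The operator maps the combination of
    # form (syntactic) information X and utility (pragmatic) information Z
    # into a semantic space and names the result Y.
    from typing import NamedTuple

    class EpistemologicalInformation(NamedTuple):
        form: str       # X: syntactic information sensed by the subject
        utility: float  # Z: pragmatic information, evaluated against the goal
        meaning: str    # Y: semantic information, produced by mapping & naming

    # Hypothetical semantic space: named meanings for (form, harmful?) pairs.
    SEMANTIC_SPACE = {
        ("rising temperature", True): "danger of overheating",
        ("rising temperature", False): "normal warm-up",
    }

    def map_and_name(form: str, utility: float, goal_threshold: float = 0.5):
        """Map the combination (X, Z) into the semantic space and name it."""
        harmful = utility < goal_threshold  # evaluate Z against the subject's goal
        meaning = SEMANTIC_SPACE.get((form, harmful), "unnamed")
        return EpistemologicalInformation(form, utility, meaning)

    # Since Y is named from (X, Z), obtaining Y also fixes the pair (X, Z):
    print(map_and_name("rising temperature", 0.2))

Because the resulting triple keeps X and Z alongside Y, the sketch also mirrors the representational point made after Eq. (2.3).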

(3) The Information Science, AI, and The Discipline of Information

Definition 4 Information Science [4]

Referring to Fig. 1, information science (IS) can be defined via the following four elements:

The object of IS: information and the ecological chain of information,

The content of IS: the properties and the laws governing its ecological chain,

The research approach for IS: the methodology of information ecology,

The goal of IS: to strengthen all information functions of human intelligence.

Definition 5 Human Intelligence and Artificial Intelligence (AI) [5]

Given a problem, a goal, and knowledge, human intelligence is the ability to solve the given problem and reach the given goal by utilizing the knowledge provided.

AI is the machine's ability, endowed by humans, to simulate human intelligence.

Note that 'human intelligence' is a subset of 'human wisdom'.

Definition 6: Discipline of Information / Information Studies

The discipline of information / information studies is understood as the study of the entirety of information science, with AI as its high-level member.

It is quite clear from Definitions 4-6 that the scope of the discipline of information, or information studies, is not limited to, but much wider than, the scope of Shannon information theory.

It is also clear that, as the academic discipline dealing with information science, it should establish its own paradigm so as to have the supreme force for guiding and regulating the related academic activities of the discipline.

On the other hand, however, the paradigm of a discipline cannot be formed at the same time as its academic activities first occur. It can only be summarized and refined from the academic practice of the discipline over a sufficiently long period of history.

3. A Historical Challenge the Information Discipline Has Been Facing

Up to the present, there have been two major categories of academic discipline, namely the physical sciences and the information sciences, yet only one category of paradigm: the one for the physical sciences. No paradigm has yet been formed for the information sciences.

This is because of the rule that the paradigm of an academic discipline forms much later than the research activities of the discipline first occur. This is why the information discipline, which started to develop in the 1940s, has not yet been able to form its own paradigm.

Because of the facts stated above, the research activities carried out in the information discipline have in practice borrowed the paradigm of the physical disciplines, which has existed for hundreds of years.

This is an unavoidable hardship of, and challenge to, the studies of the information discipline.

Many colleagues may not believe that the above-mentioned challenge exists. To be more convincing, let us examine the paradigm for the physical disciplines and the paradigm actually executed in the information discipline more specifically.

The paradigm for the physical disciplines has the features shown in Table 1.

Table 1 Major Features for the Paradigm of Physical Discipline

  1. Scientific View

    • Object for study: Pure physical entity with no subjective factor

    • Focus of study: The structure & function of physical systems

    • Property of the object: Deterministic evolution

  2. Methodology

    • General approach: Divide and conquer

    • Means for description/analysis: Formalism

    • Means for decision-making: Formalistic matching


It is quite unfortunate to see that the paradigm actually executed in the information discipline, including AI (see Table 2), has been almost the same as the one for the physical disciplines (Table 1).

Table 2 Major Features for the Paradigm Actually Executed in Information Discipline

  1. Scientific View

    • Object for study: Physical system with no subjective factor

    • Focus of study: The structure & Functions of the physical system

    • Property of the object: Deterministic evolution with noise

  2. Methodology

    • General approach: Divide and conquer

    • Means for description/analysis: Formalism

    • Means for decision-making: Formalistic matching


As a result of the situation expressed in Table 2, information studies have suffered a series of significant difficulties. Some examples follow.

(1) No Unified Theory for Information Discipline

Because of the employment of the principle of divide and conquer, one element of the methodology of the paradigm for the physical disciplines (see Tables 1 and 2), the information discipline was divided from its beginning until the 1990s into a number of mutually isolated pieces, such as sensing (information acquisition), communication (information transfer), computing (information processing), and control (information execution). As for AI, it was divided into three branches isolated from, and inconsistent with, each other: artificial neural networks, expert systems, and sensorimotor systems. These separations have led to the lack of a unified, or general, theory for the information discipline and for AI.

(2) Very Low Level of Intelligence in All AI Systems

Due to the employment of the formalism approach, another element of the methodology of the paradigm for the physical disciplines (see Tables 1 and 2), the factors of meaning (semantic) information and utility (pragmatic) information, which form the nucleus of the ability of understanding, have been completely ignored. This has led to the very low level of intelligence in all AI systems.


4. What to Do Next?

Both of the problems mentioned above, namely the lack of a unified theory for the information discipline and the very low level of intelligence in all kinds of AI systems, can no longer be tolerated by society. What we should do is shift the paradigm from the one suitable for the physical disciplines to the one suitable for the information discipline.

What, then, is the paradigm suitable for the information discipline?

Based on nearly 60 years of study, we have summarized and refined the paradigm suitable for the information discipline, which is presented in Table 3 below.

Table 3 Major Features for the Paradigm suitable for Information Discipline

  1. Scientific View

    • Object for study: Info process within subject-object interaction

    • Focus of study: Double win between subject and object

    • Property of the object: Non-deterministic evolution

  2. Methodology

    • General approach: Methodology of information ecology

    • Means for description/analysis: Trinity of form-value-meaning

    • Means for decision-making: Understanding-based

Detailed explanations of the paradigm suitable for the information discipline, shown in Table 3, together with the significant applications and great implications it provides, will be reported in the next article.

References

[1] Kuhn, T. S. The Structure of Scientific Revolutions [M], University of Chicago Press, 1962

[2] Zhong, Yixin. From the Methodology of Mechanical Reductionism to the One of Information Ecology [J], Philosophy Analysis, No. 5, pp. 133-144, 2017

[3] Burgin, Mark and Zhong, Yixin. Methodology of Information Ecology in the Context of Modern Academic Research [J], Philosophy Analysis, pp. 119-136, 2019

[4] Zhong Yixin. Principles of Information Science [M]. Beijing: BUPT Press, 1988

[5] Zhong Yixin. Universal Theory of AI [M]. Beijing: Science Press, 2021

4:20-4:40 UTC

Sun 19th Sep

68. Intelligence Science Drives Innovation

Zhongzhi Shi

Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China

Abstract:

Intelligence science is an interdisciplinary subject dedicated to joint research on the basic theory and technology of intelligence by brain science, cognitive science, artificial intelligence and other fields. Brain science explores the essence of the brain and studies the principles and models of natural intelligence at the molecular, cellular and behavioral levels. Cognitive science studies human mental activity, such as perception, learning, memory, thinking, consciousness, etc. In order to implement machine intelligence, artificial intelligence attempts the simulation, extension and expansion of human intelligence using artificial methodology and technology. At present, intelligence science is an active research area which aims to lead artificial general intelligence and the new generation of artificial intelligence. Brain-inspired research will mainly be applied to the research and development of artificial general intelligence. In this talk, I will discuss the intelligence age, the leading role of intelligence science, mind models, and cognitive machine learning.

Keywords: Intelligence Science; Brain Science; Cognitive Science


4:40-5:00 UTC

Sun 19th Sep

69. On the Essential Difference Between the Intelligence Body and the Program Body

He Huacan1, He Zhitao2

1.Northwestern Polytechnical University,

2. Beijing University of Aeronautics and Astronautics

Abstract:

A major problem in current artificial intelligence research is the obliteration of the essential difference between the intelligence body and the program body: using computer applications without intelligence factors to pose as artificial intelligence; using automation, digitization, and networking to pose as intelligence; using big data and cloud computing to pose as intelligence; and using perceptual intelligence to pose as cognitive and decision-making intelligence. The improvement of machine intelligence is simply attributed to improved algorithms and increased computing power, ignoring the core position of human subjective initiative and computing ability in intelligence, and obliterating the unique characteristics of human brain intelligence and the limitations of Turing machine (recursive function) capabilities. This article demonstrates in detail why the program body (Turing machine, recursive function) is essentially just a rigid mechanical tool whose intelligence is 0. The program body (Turing machine, recursive function) alone simply cannot simulate human intelligent activities, especially those related to the processes of understanding, learning and evolution. Only after upgrading the Turing machine (recursive function) to a higher-order Turing machine (universal recursive function), that is, after supplementing the computer with the intelligence of the human brain, can the program body rise to an agent and complete various intelligent simulation tasks. We define an infinite higher-order Turing machine as a full intelligence body, which represents the sum of the wisdom of all humans past, present, and future; its wisdom degree is 100, while the wisdom degree of a Turing machine (recursive function) is 0. A partial intelligence body undertakes only the functional simulation of some low-order Turing machines within the infinite higher-order Turing machine, and its wisdom degree is greater than 0 but far less than 100.

Keywords: Program Body; Turing Machine; Full Intelligence Body; Partial Intelligence Body; Wisdom Degree

5:00-5:20 UTC

Sun 19th Sep

70. Human Body Network Mechanisms of the Covid-19 Symptoms

Pin SUN, Rong LIU, Shui GUAN, Jun-Xiu GAO, Chang-Kai SUN★

★Corresponding author: Chang-Kai SUN, MD, PhD, Dalian University of Technology, No. 2 Linggong Road, Ganjingzi District, Dalian 116024, P.R. China

Abstract:

Coronavirus disease 2019 (Covid-19) usually presents with flu-like symptoms, mainly including fever, cough, fatigue, and/or dyspnea. One or more of these typical manifestations, or other non-respiratory symptoms, with which patients often present without fever and frequently without abnormal radiologic respiratory findings, can become severe and even life-threatening in high-risk individuals. We focused on and explored the human body network mechanisms underlying these symptoms, with their different physiological and pathophysiological significances, and a TPM-HPM-HBUN-HBNU-HBNE engineering model and corresponding diagnosis, treatment, and prevention strategies and methods are proposed. (This work was supported by the National Key Research and Development Program of China under Grant 2018AAA0100300.)

5:20-5:40 UTC

Sun 19th Sep

71. The Development and Characterization of A New Generic Wearable Single-Channel Ear-EEG Recording Platform

Rong Liu

School of Biomedical Engineering, Department of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116024, China

Abstract:

It has been demonstrated that electroencephalography (EEG) signals can be acquired from electrodes placed on an earpiece inserted into the ear, and research interest in dry ear-EEG has increased in recent years. However, the challenge remains the need for a good, durable electrode material which guarantees that stable, good-quality signals can still be collected after repeated insertions. To achieve a sustained ear-EEG recording platform, a new generic earpiece designed with PDMS and a sintered AgCl-powder electrode is proposed. First, the electrical and mechanical characteristics of the platform were evaluated. Then experiments with five paradigms (alpha-band modulation, auditory steady-state response, steady-state visual evoked potential, mismatch negativity, and visual evoked potential) were carried out with subjects of different ear sizes. Recordings from the prototyped generic ear-EEG platform were compared to conventional scalp EEG recordings. The ear-EEG electrode exhibits good wear resistance: after repeated insertion and removal of the electrode, the quality of the acquired signal remains stable, and the voltage range approximately follows a normal distribution. With both the measuring electrode and the reference electrode located within the ear, statistically significant (p < 0.05) responses were measured for all paradigms.

Keywords: Electroencephalography (EEG); ear-EEG; dry-contact electrode; wearable EEG


5:40-6:00 UTC

Sun 19th Sep

72. Brain Imitating Method for Social Computing - Illumination of Brain Information Processing System

Liqun Han

College of Artificial Intelligence, Beijing Technology and Business University, Beijing 100048, China

Abstract:

Social computing systems and the human brain information processing system have many similarities in structure and dimension. Therefore, the brain information processing system created by nature may provide simulation targets and references for the social computing systems created by human beings, in terms of information processing mechanisms, functional division, and cooperation.


Keywords: Social Computing; artificial society; human brain information system; Neuron mechanism; organizational structures

1. Introduction

Generally speaking, social computing is an interdisciplinary field of modern computing technology and social science. On one hand, social computing studies the application of computer and information techniques in society, which affects traditional social behaviors. On the other hand, social computing assists humans in understanding and investigating a variety of problems in social science, using computing and information techniques based on the knowledge, theories and methods of social science.

The former point of view is limited to the microcosmic and technical level; it starts from Human-Computer Interaction (HCI) and other related research fields, and aims to improve the means of computer and information techniques. Therefore, one of the important functions of social computing is to study information technology tools for implementing social communication and cooperation, so that computers can more conveniently be used to construct virtual spaces for people to communicate. Such techniques are so-called social software; in this sense, many traditional network tools belong to it, such as email, internet forums, OAS, groupware, blogs, wikis and so forth. In the latter point of view, social computing tries to observe society at the macroscopic level and to solve problems in social science by virtue of modern computing technology.

2. Two similar ways of system composing

There are many similarities between social computing and human brain information processing. As a great work of nature, the brain information processing system inspires social computing researchers to find points to simulate or use for reference.

The information processing system of the human brain is a biological neural network based on neurons, the basic information processing elements. The biological neural network stores information in a distributed manner and processes information in parallel. Collective cooperative working and the integrated processing and storage of information, as well as the diversity and plasticity of the intensity and modes of synaptic connection, allow the neural network to unfold protean and complicated information-processing capacities. Such capacities can never be realized by simply accumulating each neuron's function.

The buildup of human society differs from the neural network in approach but is equally satisfactory in result. In human society, an individual, as a basic composing element, can form different connections with other people and make his own social network. As the environment and stimulation vary, the tightness and properties of the connections change accordingly, which consequently causes the whole society to display intricate social phenomena, vogues and ethos.

In a social computing system, the smallest information processing unit is the combination of an individual and a certain computing tool. Computer networks widely connect billions of such units to form an enormous communication network. Likewise, the information storage and processing of the social network are distributed and parallel, the information processing mode is also collectively cooperative, and the capacity of the network cannot be reached by simply piling up the capacities of the units.

3. Artificial society inspired by Human Brain Information Processing System

One of the important branches of social computing is the artificial society. The artificial society is an interdisciplinary field integrating computer science, social science, system science, computer simulation technology, multi-agent system techniques and artificial intelligence technology, and it opens up a new approach to knowing and understanding society.

As a new method of social science research, the basic idea of the artificial society is as follows: since human society is a complex system consisting of large numbers of individuals, a model of each individual, called an agent, can be built in the computer. Keeping to some simple regulations, these agents then interact. By observing the emergent properties of the whole agent system, the rules of the artificial society can be discovered and then used to explain and comprehend the macro-phenomena of human society.

Natural science emphasizes scientific experiments, but such experiments are almost impossible in social science. With the multi-agent-based method of sociological simulation, people can study the real society's "silicon stand-in", which exists in the computer world. By conveniently modifying the regulations and parameters, a variety of sociological experiments can be carried out.
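As a toy illustration of such a "silicon stand-in", the sketch below runs a deliberately crude agent society under one simple imitation rule; the rule, the population size, and the step count are all invented for this example.

    # Minimal agent-based sketch: agents follow one local regulation
    # (copy a randomly met agent's opinion) and a macro-level pattern
    # emerges without any agent pursuing a global goal.
    import random

    random.seed(0)
    N_AGENTS, N_STEPS = 100, 2000
    opinions = [random.choice([0, 1]) for _ in range(N_AGENTS)]

    for _ in range(N_STEPS):
        a, b = random.sample(range(N_AGENTS), 2)  # two agents meet at random
        opinions[a] = opinions[b]                 # the simple imitation rule

    # Emergent macro-phenomenon: the opinion shares drift away from the
    # initial 50/50 split purely through local imitation.
    print("share holding opinion 1:", sum(opinions) / N_AGENTS)

Changing the regulation or its parameters and re-running is exactly the kind of expedient sociological experiment described above.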

Both the artificial society system and the human brain system are complex systems; therefore, there are many similarities between the two. From the point of view of system composition, the human brain information system is a network of neurons, while the artificial society system is a network of agents. From the information processing point of view, both the brain's biological neural network and the artificial society's agent network are computing systems made up of a large number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs. Therefore, using related mechanisms of the brain's biological neural network for reference, and simulating those mechanisms to study brain-style design methods for the artificial society, is a new and viable technological route.

4. The Possible Research Schemes for Artificial Society

1) Research scheme for the design of individuals of the artificial society

A single neuron receives signals from thousands upon thousands of nerve fibres with its dendrites and surface, and then, through the information integration of the nucleus, establishes a new signal and passes it out through the axon. Using that mechanism for reference, the agent, as the cell of society, will possess a normative biomimetic structure: artificial dendrites to receive environmental information, an artificial nucleus to process information, and artificial axons to transmit information. Within this structural frame, the function design of the artificial dendrites and axons can be further formalized, while the design of the artificial nucleus will embody the specific needs of different artificial societies.
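A hedged code sketch of this biomimetic agent skeleton follows; the class layout and its names are one illustrative reading of the scheme above, not a published design.

    # Biomimetic agent: artificial dendrites buffer environmental signals,
    # an artificial nucleus integrates them, and an artificial axon passes
    # the new signal onward.
    from typing import Callable, List

    class BiomimeticAgent:
        def __init__(self, integrate: Callable[[List[float]], float]):
            self.inbox: List[float] = []  # artificial dendrites: buffered inputs
            self.integrate = integrate    # artificial nucleus: society-specific logic

        def receive(self, signal: float) -> None:
            self.inbox.append(signal)     # dendrite: accept an environmental signal

        def fire(self) -> float:
            out = self.integrate(self.inbox)  # nucleus: integrate the information
            self.inbox.clear()
            return out                        # axon: transmit the new signal

    agent = BiomimeticAgent(integrate=lambda xs: sum(xs) / max(len(xs), 1))
    for s in (0.25, 0.75, 0.5):
        agent.receive(s)
    print(agent.fire())  # -> 0.5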

2) Research scheme for self-organizing way of artificial society

A) During the growth of the biological nervous system, a neuron responds to genetic and environmental influences and thereby acquires its individuality, meanwhile building up precise and orderly synaptic connections. By learning from the growth and formation mechanisms of the biological nervous system, the self-organized networking of agents in the artificial society will be investigated.

B) Synapses are specialized cell-to-cell contact points in the nervous system. The neurotransmitters released by synapses interact with the receptor molecules on the target cell membrane, so that the target cells enter an excitatory or inhibitory state, achieving signal transmission directly or indirectly. Based on this transmission mechanism of synapses in the biological nervous system, techniques for information transmission among agents of the artificial society will be studied.

C) Study the biological foundation of the artificial society's characteristics, such as self-organization, self-adaptation, self-learning, and self-coordination, inspired by the plasticity mechanism of neuronal synapses. There are tens of billions of synaptic connections in the human brain's neural network, and such a great number of specific synaptic connections is the material basis of the fantastic intelligence presented by the human brain. The formation mechanism of the synapse, the change of transmission efficiency, and their influencing factors will provide the character design of the artificial society with a biological foundation.
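For scheme C, a toy, Hebbian-flavoured sketch of such plasticity between agents is given below; the update rule, learning rate, and agent names are assumptions made only to illustrate how transmission efficiency could change with use.

    # Synapse-inspired link weights between agents: links used for successful
    # exchanges are strengthened, unused or failing links are weakened.
    weights = {("alice", "bob"): 0.5, ("alice", "carol"): 0.5}

    def transmit(sender: str, receiver: str, success: bool, lr: float = 0.1) -> None:
        """Update the link weight after one exchange (illustrative rule)."""
        w = weights[(sender, receiver)]
        w += lr * (1.0 - w) if success else -lr * w
        weights[(sender, receiver)] = w

    transmit("alice", "bob", success=True)
    transmit("alice", "carol", success=False)
    print(weights)  # alice-bob strengthened to 0.55, alice-carol weakened to 0.45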

3) Research scheme for structure design of artificial society

Modern society is a highly structured complex system, but current research on the artificial society rarely deals with the structure of social organizations. In the nervous system, nerve cells with similar characteristics are often arranged in layers or gathered into clusters to form different organizational structures. Analogies will be drawn between the basic organizational structures of the artificial society, such as family, community, fellowship and organization, and the layered structural organization of the human brain, such as nerve cells, local circuits, regional circuits, the central nervous system, etc. Based on these analogies, essential structural design methods for the artificial society will be worked out.

5. Conclusion

What we are doing, and going to do, has four innovation points:

1) A neuron-mechanism-based design method for the agent, the essential information-processing element of the artificial society.

2) A design method for individual interaction based on the principles of synaptic transmission.

3) A design method for the evolution mechanisms of the artificial society based on the synaptic plasticity mechanism.

4) A design method for the organizational structures of the artificial society based on the structural organization of the levels of the brain's neural system.


6:00-6:20 UTC

Sun 19th Sep

73. Research and Prospects of Artificial Intelligence in Traditional Chinese Medicine

Zixin Shu, Ting Jia, Haoyu Tian, Dengying Yan, Yuxia Yang, Xuezhong Zhou*

Institute of Medical Intelligence, School of Computer and Information Technology,

Beijing Jiaotong University, Xizhimenwai Street, Haidian District, Beijing 100063, China.

Corresponding author: *Prof Xuezhong ZHOU PhD

Abstract:

Traditional Chinese Medicine (TCM) specialises in the treatment of a wide range of chronic and complex diseases, and has formed a unique set of theories and diagnostic and therapeutic systems that serve as the main clinical means of treating disease. In recent years, the rapid development of computer information processing technology, especially the increasing maturity of data mining and artificial intelligence (AI), has provided key technologies for the modernization of TCM. Since the 1970s, AI has been widely adopted in TCM to deliver more practical and feasible intelligent solutions for clinical operations. This paper summarizes the main approaches, typical applications and future directions of AI in TCM, to give researchers a brief, useful reference. We find that AI studies have accumulated abundant experience and technical trials in expert systems, machine learning, data mining, knowledge graphs and deep learning. In addition, various types of data, such as bibliographic literature, electronic medical records and images, have been used in the related AI tasks and studies. Furthermore, during the COVID-19 pandemic we have witnessed the clinical effectiveness of TCM for COVID-19 treatment, much of it detected by real-world data mining applications. This indicates the potential for booming AI research and applications in various aspects of the TCM field (e.g. effective clinical therapy discovery and network pharmacology of TCM drugs).

Keywords: Artificial Intelligence, Machine Learning, Knowledge Engineering, Traditional Chinese Medicine, Clinical Therapy, Clinical Diagnosis, Network pharmacology

1. Introduction

Traditional Chinese Medicine (TCM) studies human physiology and pathology, and the diagnosis, prevention and treatment of diseases. As of 2015, there were 3,966 TCM hospitals and 45,528 TCM clinics across China (https://www.who.int/westernpacific/health-topics/traditional-complementary-and-integrative-medicine). In 2019, the World Health Organization (WHO) included TCM in its globally influential medical compendium for the first time, which accelerated the development of TCM [1]. In recent years, TCM has been increasingly adopted to treat various kinds of diseases such as cancer [2], rheumatoid arthritis [3,4], migraine and functional disorders [5]. In the COVID-19 pandemic, as a typical effective solution for virus-related infections, TCM played a significant role in the relatively low mortality rate of COVID-19 inpatients under the heavy healthcare overload of the early days (February-April 2020) [6]. For example, a recent retrospective cohort study showed the clinical effectiveness of add-on herbal prescriptions for COVID-19 patients, even largely reducing the mortality of severe COVID-19 inpatient cases [7,8].

Due to the empirical nature of medical science, the discovery and updating of clinical evidence and solutions rely heavily on the utilization of various kinds of clinical data. Therefore, biomedical science essentially needs artificial intelligence (AI) to assist the generation of new knowledge and decision making for clinical operations. Since the early work on medical AI (e.g. MYCIN [9]) in the 1970s, AI methodologies in medicine have evolved from production rules to data mining and now to deep learning [10,11], including techniques such as fuzzy expert systems, Bayesian networks, artificial neural networks, and hybrid intelligent systems used in different clinical settings in health care. Over the last ten years, the most popular medical AI techniques have been machine learning and deep learning for clinical diagnosis [12], treatment recommendation [13] and healthcare management [14], which attract large investments owing to their high social and business value compared with other fields. The historical events, trends and techniques are similar for TCM. Beginning with the first wave of work on medical expert systems in the 1970s, the main AI approaches now adopted in TCM are data mining and deep learning, with particular applications in decision support systems and precision medicine [15,16].

2. Methodological approaches

AI applications in TCM rely heavily on the available data sources. Several types of data sources have contributed significantly to the development of AI applications and methods in the TCM field: bibliographic literature databases such as TCMLARS; pharmacological databases such as TCMSP [17], ETCM [18], HERB [19] and SymMap [20]; TCM medical ontologies such as UTCMLS [21] and TCMLS [22]; and clinical case databases such as structured electronic medical record (SEMR) data [16].

Knowledge engineering (KE) is one of the substantial techniques of AI [23] and has been adopted as a main approach for AI in medicine. Expert systems, ontologies and knowledge graphs are the main KE techniques adopted in the field of AI in TCM. Guan Youbo's computer program for the diagnosis and treatment of liver diseases, the first TCM expert system in China, has played an important role in the development of TCM diagnosis and treatment decision support since it came out in 1979 [24]. After that, hundreds of TCM expert systems, with rule reasoning and machine learning as the main methods, were developed. Medical ontologies and knowledge graphs have also been a focus of AI-in-TCM researchers, given the large body of empirical knowledge available from TCM textbooks and ancient literature. UTCMLS [21] and TCMLS [25] are typical TCM medical ontologies.

To discover hidden knowledge and provide automatic clinical decision capabilities for disease management, various kinds of data mining and machine learning methods have been used in the AI-in-TCM field, which mainly received attention from the 2000s onward. For well-defined and structured data, besides the traditional methods (e.g. decision trees, Naive Bayes and regression) widely used in medical AI, data mining methods such as association rules, frequent itemsets and complex networks, and machine learning methods, including classification (e.g. support vector machines, Bayesian networks, multi-label learning and deep learning) and clustering (e.g. latent class models and principal component analysis), have frequently been applied in TCM diagnosis and treatment analysis tasks. In addition, owing to the significant role of electronic medical records in storing the clinical manifestations and diagnoses of TCM clinical procedures, text mining and information extraction have become a rising focus of TCM clinical data analysis. In the TCM field, however, time series analysis and image processing have been relatively less studied due to the lack of available data.
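As a reading aid for the frequent-itemset style of mining mentioned above, here is a small self-contained Python sketch over fabricated symptom-herb records; the records and the support threshold are invented, and real studies would use dedicated tools on much larger data.

    # Count co-occurring item pairs across records and keep the frequent ones.
    from collections import Counter
    from itertools import combinations

    records = [  # hypothetical TCM cases: sets of symptoms and herbs
        {"cough", "fever", "ephedra"},
        {"cough", "fatigue", "ephedra"},
        {"cough", "fever", "licorice"},
        {"fatigue", "licorice"},
    ]
    MIN_SUPPORT = 2  # a pair must occur in at least this many records

    pair_counts = Counter(
        pair for record in records for pair in combinations(sorted(record), 2)
    )
    frequent = {p: c for p, c in pair_counts.items() if c >= MIN_SUPPORT}
    print(frequent)  # ('cough', 'ephedra') and ('cough', 'fever') each appear twice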

3. Applications and related work

AI has long been widely used in the clinical diagnosis, clinical therapies, network pharmacology, and biological mechanisms of the syndromes of TCM.

The applications in clinical diagnosis include the tongue [26], pulse [27], acupuncture [28,29], herbal medicine [30], etc. The key aspects of application can be listed as follows: feature extraction [31], clinical decision support [32] and image classification [33]. Many diseases are involved in clinical practice, such as angina pectoris [34], lung cancer [35], polycystic ovary syndrome [36], etc. The applications of AI in clinical therapy are mainly reflected in assisting clinical treatment decision making. TCM prescription intelligent decision-making systems for various diseases (e.g. lung cancer [37], hypertension [38]) have developed gradually, which has also greatly improved the clinical decision-making capability of TCM. The typical example of the integration of artificial intelligence and clinical pharmacology is network pharmacology, which is devoted to elucidating the action mechanisms of drugs from the point of view of a biological network. It is an effective approach to discovering the active substances of TCM and revealing the pharmacological mechanisms of TCM [39]. Syndrome is the main type of clinical diagnosis in TCM. AI methods have been integrated with systems biology as a new route for investigating the underlying biological mechanisms of TCM syndromes, which has become a popular research topic. Using data mining techniques and bioinformatics approaches, the biological basis of TCM syndromes and symptoms will be further clarified in the future.
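The network-pharmacology viewpoint can be made concrete with a toy herb-compound-target network; every entity below is a placeholder, not real pharmacological data.

    # Herbs map to compounds, compounds to protein targets; the induced
    # herb-target network can then be inspected, e.g. for shared targets.
    herb_compounds = {"herbA": {"c1", "c2"}, "herbB": {"c2", "c3"}}
    compound_targets = {"c1": {"T1"}, "c2": {"T2", "T3"}, "c3": {"T3"}}

    def targets_of(herb: str) -> set:
        """Union of all targets reachable through the herb's compounds."""
        return set().union(*(compound_targets[c] for c in herb_compounds[herb]))

    shared = targets_of("herbA") & targets_of("herbB")
    print(shared)  # targets hit by both herbs: {'T2', 'T3'}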

4. Conclusion

The curation of big data brings AI technology and the TCM field much closer and makes their development more interconnected in multiple clinical applications, such as intelligent diagnosis, treatment recommendation, drug discovery, and the biological mechanisms of syndromes. Besides the development of ontologies and knowledge graphs, AI methodologies in the TCM field have shifted from production rules to data mining and now to deep learning, including applications in EMR information extraction, image-based clinical diagnosis and the integrative mining of both clinical and biological data. TCM offers a typical personalized-medicine solution for chronic disease management, which involves various types of individualized data and biomedical research data. Many substantial challenges (e.g. data privacy, high-quality clinical data, large-scale knowledge graphs, human-machine interaction) must be addressed to improve the usability and actual benefits for TCM practitioners. Nevertheless, large-scale real-world TCM clinical practice with essential effectiveness provides a promising future and arena for both the application and the research of AI in the TCM field, promoting TCM practice from empirical experience to evidence-based personalized medicine.

References

  1. Ye H, et al., Study on intelligent syndrome differentiation neural network model of stomachache in traditional Chinese medicine based on the real world. Medicine (Baltimore). 2020.99(22):e20316.

  2. Qi, F, et al., The advantages of using traditional Chinese medicine as an adjunctive therapy in the whole course of cancer treatment instead of only terminal stage of cancer. Bioscience trends, 2015. 9(1): 16-34.

  3. Lü, S, et al., The treatment of rheumatoid arthritis using Chinese medicinal plants: From pharmacology to potential molecular mechanisms. Journal of Ethnopharmacology, 2015. 176: 177-206.

  4. Zhou, L, et al., Systematic review and meta-analysis of traditional Chinese medicine in the treatment of migraines. The American journal of Chinese medicine, 2013. 41(05): 1011-1025.

  5. Xiao, L and Tao, R, Traditional Chinese medicine (TCM) therapy. Substance and Non-substance Addiction, 2017: 261-280.

  6. Xiong, X, et al., Chinese herbal medicine for coronavirus disease 2019: A systematic review and meta-analysis. Pharmacological Research, 2020: 105056.

  7. Shu, Z, et al., Clinical features and the traditional Chinese medicine therapeutic characteristics of 293 COVID-19 inpatient cases. Frontiers of medicine, 2020: 1-16.

  8. Shu, Z, et al., Add-On Semi-Individualized Chinese Medicine for Coronavirus Disease 2019 (ACCORD): A Retrospective Cohort Study of Hospital Registries. 2021;49(3):543-575.

  9. Evans, RS and Pestotnik, SL, Applications of medical informatics in antibiotic therapy. Antimicrobial Susceptibility Testing, 1994: 87-96.

  10. Ngiam, KY and Khor, W, Big data and machine learning algorithms for health-care delivery. The Lancet Oncology, 2019. 20(5): e262-e273.

  11. Ramesh, A, et al., Artificial intelligence in medicine. Annals of the Royal College of Surgeons of England, 2004. 86(5): 334.

  12. Rauschert, S, et al., Machine learning and clinical epigenetics: a review of challenges for diagnosis and classification. Clinical epigenetics, 2020. 12: 1-11.

  13. Nogales, A, et al., A survey of deep learning models in medical therapeutic areas. Artificial Intelligence in Medicine, 2021: 102020.

  14. Stanfill, MH and Marc, DT, Health information management: implications of artificial intelligence on healthcare data and information management. Yearbook of medical informatics, 2019. 28(1): 56.

  15. Tiwari, P, et al., Recapitulation of Ayurveda constitution types by machine learning of phenotypic traits. PloS one, 2017. 12(10): e0185380.

  16. Zhou, X, et al., Development of traditional Chinese medicine clinical data warehouse for medical knowledge discovery and decision support. Artificial Intelligence in medicine, 2010. 48(2-3): 139-152.

  17. Ru, J, et al., TCMSP: a database of systems pharmacology for drug discovery from herbal medicines. Journal of cheminformatics, 2014. 6(1): 1-6.

  18. Xu, H, et al., ETCM: an encyclopaedia of traditional Chinese medicine. Nucleic acids research, 2019. 47(D1): D976-D982.

  19. Fang, S, et al., HERB: a high-throughput experiment-and reference-guided database of traditional Chinese medicine. Nucleic Acids Research, 2021. 49(D1): D1197-D1206.

  20. Wu, Y, et al., SymMap: an integrative database of traditional Chinese medicine enhanced by symptom mapping. Nucleic acids research, 2019. 47(D1): D1110-D1117.

  21. Zhou, X, et al., Ontology development for unified traditional Chinese medical language system. Artificial intelligence in medicine, 2004. 32(1): 15-27.

  22. Guo, Y, et al., Preliminary study on the characteristic elements of TCM clinical terminology standardization based on SNOMED CT core framework. Chinese Journal of TCM information, 2008(09): 96-97.(in Chinese).

  23. Studer, R, Benjamins, VR, and Fensel, D, Knowledge engineering: Principles and methods. Data & knowledge engineering, 1998. 25(1-2): 161-197.

  24. Zong, X and Dai, L, Analysis of 196 cases of liver disease treated by computer. Liaoning Journal of traditional Chinese medicine, 1992(06): 26-27.(in Chinese).

  25. Long, H, et al., An ontological framework for the formalization, organization and usage of TCM-Knowledge. BMC medical informatics and decision making, 2019. 19(2): 79-89.

  26. Tang, Y, et al., Classification of Tongue Image Based on Multi-task Deep Convolutional Neural Network. Computer Science, 2018. 45(12): 255-261.(in Chinese).

  27. Huang, Q, Research on the Auxiliary System of Pulse Diagnosis in Traditional Chinese Medicine Based on Artificial Intelligence. Shanxi University of Science & Technology. 2018. (in Chinese).

  28. Cui, J, The Immersive Acupuncture Training System based on Machine Learning. Graduate School of Tianjin University. 2019. (in Chinese).

  29. Yin, T, et al., Progress and Prospect: Acupuncture Efficacy of Combining Machine Learning with Neuroimaging Properties. World Chinese Medicine, 2020. 15(11): 1551-1554.(in Chinese).

  30. Qian, H, Research and Implementation of Question Answering System based Traditional Chinese Medicine Semantic Web. Zhejiang University. 2016. (in Chinese).

  31. Yuan, N, et al., Depth Representation-based Named Entity Extraction for Symptom Phenotype of TCM Medical Record. World Science and Technology-Modernization of Traditional Chinese Medicine, 2018. 20(3): 255-362.(in Chinese).

  32. Wang, N, et al., Application of Artificial Intelligence-Assisted Diagnostic System in Diagnosis and Treatment of Coronavirus Disease 2019. China Medical Devices, 2020. 35(6): (in Chinese).

  33. Wang, Y, et al., Tongue Image Color Recognition in Traditional Chinese Medicine. Journal of Biomedical Engineering, 2005. 22(6): 1116-1120. (in Chinese).

  34. Feng, Y, et al., Modeling method and validation research of partially observable Markov decision-making process model in the optimization of clinical treatment for unstable angina pectoris with integrated traditional Chinese and western medicine. Chinese General Practice, 2020. 23(17): 2181-2185.(in Chinese).

  35. Pang, B, et al., Cognitive model for diagnosis and treatment of lung cancer based on the prototype category theory Optimization ideas and methods. Beijing Journal of Traditional Chinese Medicine, 2018. 37(12): 1141-1145.(in Chinese).

  36. Zhang, L, et al., Anxiety and Depression in Patients with Polycystic Ovary Syndrome and Relevant Factors. Journal of International Reproductive Health/Family Planning, 2018. 37(4): 269-272.(in Chinese).

  37. Ruan, C, et al. THCluster: herb supplements categorization for precision traditional Chinese medicine. in 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). 2017. IEEE.

  38. Liu, J, Jiang, W, and Shen, G, Design of TCM intelligent expert system for hypertension based on data analysis. Beijing Journal of Traditional Chinese Medicine, 2019. 38(09): 904-906+945.(in Chinese).

  39. Li, S and Bo, Z, Traditional Chinese medicine network pharmacology: theory, methodology and application. Chinese journal of natural medicines, 2013. 11(2): 110-120.

6:20-6:40 UTC

Sun 19th Sep

74. A Framework of "Quantitative ⨁ Fixed Image ⇒ Qualitative" Induced by Contradiction Generation, and Meta-Synthetic Wisdom Engineering

Jiali Feng

Information Engineering College, Shanghai Maritime University, Shanghai, 200136 China

Abstract:

The Turing question "Can machines think?" involves the basic philosophical contradiction of whether matter can turn into spirit, and induces the evolution of secondary contradictions from non-living matter, through simple life and complex creatures, to advanced wisdom, involving information, physics, mathematics, logic, life, noetic science and meta-synthetic wisdom. The generation mechanism of contradiction is attributed to the process in which two opposite attributes or properties of things, relatively transferred from the initial moment and from two different locations, meet at the contradictory points. Some of the mathematical constructions induced by the varying of the transformation functions of the two opposite pieces of location information x(t) and y(t) with the time increment, and by the mutual transmission and inversion between them, are presented. A representation of the "law of the unity of opposites", the "law of mutual transformation of quality and quantity" and the "law of the negation of negation" of contradiction is given. The relationship between this framework and the classical structures in some fields of mathematics, physics, logic, life, nerves, thinking, and intelligence is discussed. In order to realize noetic science and its theoretical framework of "Quantitative Intelligence ⨁ Fixed Image Intelligence ⇒ Qualitative Intelligence (Meta-Synthetic Wisdom)" proposed by Mr. Qian Xuesen, a mathematical model based on "quantitative ⨁ fixed image ⇒ qualitative (Meta-Synthetic Wisdom Engineering)", called the "synthesis of three approaches", is proposed in this paper. It is also shown that this could provide an alternative reference path for the realization of Mr. Zhong Yixin's intelligence science based on the transformation mechanism, and of Mr. Wang Peizhuang's Factor Space theory.

Keywords: noetic science; meta-synthetic wisdom; generation mechanism of contradiction; transformation of location information; mathematical construction

References

1. Jiali Feng, Entanglement of Inner Product, Topos Induced by Opposition and Transformation, and Tensor Flow, in Proceedings of Intelligence Science I, Second IFIP TC 12 International Conference, ICIS 2017, Shanghai, China, October 25-28, 2017, 22-36.

2. Jiali Feng & Jingjuan Feng, The Conjugate Manifold of Space-Time Induced by the Law of Unity of Contradiction, in Proceedings of the 11th IFIP TC 12 International Conference, IIP 2020, Hangzhou, China, July 3-6, 2020, 86-98.

3. Jiali Feng, Attribute Grid Computer Based on Qualitative Mapping for Artificial Intelligence, in Proceedings of the 10th IFIP TC 12 International Conference, IIP 2018, Nanning, China, October 19-22, 2018, 129-139.

4. Hsue-shen Tsien, Letter to Dai Ruwei, 25 January 1993, in Lu Mingsen, Thoughts about the Noetic Science of Hsue-shen Tsien, Science Press, 2012, 252.

5. Zhong Yixin, A Unified Model for Emotion and Intelligence in Machines, in Proceedings of the Sino-Japan Symposium on Emotion Computing, July 1-2, 2003.

6. Wang P. Z., Fuzzy Set and Falling Shadow of Random Set [M], Beijing: Beijing Normal University Press, 1984.

7. Wang P. Z., Factor Space, a Mathematical Preparation for the Coming Big Data Tide (special talk), High-end Forum on Big Data, Chinese Academy of Sciences, Beijing, Dec. 2014.

8. Saunders Mac Lane, Categories for the Working Mathematician (Second Edition), Springer-Verlag, New York, 1998.

9. C. Isham and J. Butterfield, Topos Perspective on the Kochen-Specker Theorem: I. Quantum States as Generalized Valuations, International Journal of Theoretical Physics, Vol. 37, No. 11 (1998), 2669-2773.

10. Jiali Feng, Qualitative Mapping Orthogonal System Induced by Subdivision Transformation of Qualitative Criterion and Biomimetic Pattern Recognition, Chinese Journal of Electronics, Special Issue on Biomimetic Pattern Recognition, Vol. 15, No. 4A, October 2006, 850-856.

11. Jiali Feng, Attribute Network Computing Based on Qualitative Mapping and Its Application in Pattern Recognition, Journal of Intelligent and Fuzzy Systems, 19 (2008), 243-258.

12. Jiali Feng and Wang Xun, Four-key Inner Product Decomposition of the Inner Product of a Constant, The 10th China Conference on Machine Learning, 2005.

13. Jiali Feng & Jingjuan Feng, Möbius Polar Coordinate System Based on Entangled Pull-back of Contradictory Oneself and Its Topos, Journal of Physics: Conference Series, Volume 1828, 2021, https://iopscience.iop.org/issue/1742-6596/1828/1


6:40-7:00 UTC

Sun 19th Sep

75. Paradox, Logic and Property of Infinity

Jincheng Zhang

101 College Entrance Examination Continuation School of Guangde County

Abstract:

Mathematical circles have long been obsessed with paradoxes. Although many solutions have been proposed, none is completely satisfactory, and the mathematical principles behind paradoxes have not been clarified. The author of this paper finds that any calculus on a closed set can generate un-closed terms (extra-field terms), and that these extra-field terms are the source of contradictions and paradoxes in mathematics. The author thus identifies the un-closedness of logical calculus and establishes, on the basis of the classical logic systems L and K, the logical systems S-L and S-K that accommodate the paradoxes, called "logic system S" for short. The difference and connection between this logic system and present logic are as follows: if the classical logic system is regarded as two-valued (every proposition either true or false), then "logic system S" admits propositions that are neither true nor false, and is distinct from both "classical logic" and "fuzzy logic". This logic system transforms the closed system of classical logic into an open logic system in which paradoxes are the unclosed terms of logical calculus.

The essence of the "diagonal method" is to construct a paradox. According to the logical systems S-L and S-K, the essence of a paradox is an unclosed term in the field of logical calculus. Therefore, the propositions constructed with the "diagonal method" are all propositions outside the domain of definition (extra-field terms). We have found that a large number of propositions in set theory, recursion theory, and mathematical logic are actually mathematical propositions about extra-field terms. For example, it can be proved that the Gödel undecidable proposition is an extra-field term, the real number constructed by Cantor's diagonal method is an extra-field term, and the undecidable Turing machine in the Turing halting problem is also an extra-field term.
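To make the diagonal mechanism concrete, here is a minimal Python sketch (ours, not the author's S-L/S-K formalism): for any finite enumeration of 0/1 rows, flipping the diagonal yields a row that provably differs from every listed row, i.e., an item outside the enumerated "field".

```python
# Minimal sketch of Cantor's diagonal construction (illustrative only;
# this is standard diagonalization, not the S-L/S-K calculus itself).

def diagonal_flip(table):
    """Given a list of equal-length 0/1 rows, return the flipped diagonal:
    a row that differs from row i at position i, hence is outside the list."""
    return [1 - table[i][i] for i in range(len(table))]

table = [
    [0, 1, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [1, 0, 1, 1],
]

d = diagonal_flip(table)   # -> [1, 0, 1, 0]
print(d)
# The diagonal row plays the role of an "extra-field term" relative to
# the enumeration: it is not equal to any enumerated row.
assert all(d != row for row in table)
```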

It can further be proved that the real numbers constructed by Cantor’s diagonal method are surreal numbers.

This paper begins by analyzing the "Axiom of Infinity" of the classical axiomatic set theory ZF and finds that the infinite set in the ZF system becomes confused and contradictory when it is treated as an infinite number. The property of infinity is therefore re-examined, two opposite axioms of infinity (δ+1=δ or δ+1≠δ) are put forward, the ZF system is partially modified, and two non-Cantor set-theory systems, SZF(+) and SZF(−), are constructed. In these two systems, ordinal numbers, cardinal numbers, transfinite induction, and the Cantor continuum hypothesis are discussed respectively. It is found that there is no uncountable ordinal or cardinal number in these two systems; the "transfinite induction method" is re-proved; and it can further be proved that Cantor’s diagonal number is a surreal number in the "non-standard analysis" established by A. Robinson. These two systems are analogous to the axiom systems of Euclidean and non-Euclidean geometry, and can be regarded as the bases of non-standard analysis and standard analysis, respectively, which unifies infinity in set theory with infinity in analysis as a whole.
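The two opposite axioms of infinity can be displayed side by side (a sketch; the pairing of the signs with SZF(+) and SZF(−) is our assumption, since the abstract does not fix it):

```latex
% Two opposite axioms of infinity; the sign-to-axiom pairing is assumed here.
\[
\mathrm{SZF}(+):\ \exists\,\delta\ (\delta + 1 = \delta),
\qquad
\mathrm{SZF}(-):\ \forall\,\delta\ (\delta + 1 \neq \delta).
\]
```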

Keywords: Paradox, unclosed term, extra-field term, diagonal method of proof, uncountable, undecidable proposition, Axiom of Infinity, ordinal number, cardinal number, continuum hypothesis, surreal number, non-standard analysis

1. Positive set, inverse set and dual transformation

2. Self-reference and paradoxes

3. The paradox is a logically unclosed calculus

4. Transformation of extra-field terms and classical logic

5. Propositional calculus system S-L

6. Inference of System S (I): items constructed by the "diagonal method" are unclosed terms

7. Inference of System S (II): Gödel’s undecidable proposition is an unclosed term

8. Contradiction and reconstruction of the axiom of infinity

9. Non-Cantor set theory system SZF

10. Ordinal numbers

11. Transfinite induction

12. Cardinal numbers

13. Cantor’s continuum hypothesis

14. Standard analysis and non-standard analysis

15. Cantor’s diagonal number is a surreal number

References

1. Hamilton, A. G., Logic for Mathematicians, Cambridge University Press, 1978: 29-40, 82-92.

2. S. C. Kleene (U.S.), translated by Shaokui Mo, Introduction to Meta-Mathematics, Beijing: Science Press, 1984: 4-12.

3. G. Boolos (U.S.), Computability and Mathematical Logic [M], Beijing: Electronic Industry Press, 2002: 152-160.

4. Fangting Wang, Mathematical Logics [M], Beijing: Science Press, 2001: 159-163.

5. Huacan He, Universal Infinite Theory [M], Beijing: Science Press, 2011.

6. Jincheng Zhang, Logic System and Contradiction [J], System Intelligent Journal, 2012(3): 208-209.

7. Jincheng Zhang, Fixed Terms and Undecidable Propositions of Logics and Mathematic Calculus (I) [J], System Intelligent Journal, 2014(4).

8. Jincheng Zhang, Fixed Terms and Undecidable Propositions of Logics and Mathematic Calculus (II) [J], System Intelligent Journal, 2014(5).

9. Jinwen Zhang, Introduction to Axiomatic Set Theory [M], Beijing: Science Press, February 1999.

10. Mumin Dai & Haiyan Chen, Introduction to Axiomatic Set Theory [M], Beijing: Science Press, 2011: 15-17.

11. Mathematics Encyclopedic Dictionary, Japan Mathematics Collection [M], Beijing: Science Press, 1984: 7.

12. Jincheng Zhang, Paradox, Logic and Non-Cantor Set Theory [M], Harbin: Harbin Institute of Technology Press, 2018: 1.

Conference of the International Society for the Interdisciplinary Study of Symmetry 2021

Block 2:

13:00 - ca. 17:00 UTC

Sun 19th Sep

Closing

Marcin Schroeder, Yixin Zhong, Pedro Marijuan

[This last block of prime-time will be one hour longer]

1) General Assembly (Marcin Schroeder) (1 hour)

Agenda:

(a) Opening Session

Acceptance of the protocol of the last General Assembly in Berkeley 2019

(b) Report of the President on IS4SI activities during the last period

(c) Financial report of the VP for funds for the last period

(d) Release of the board of directors

(e) Discussion of targets & activities for the next period (may be postponed until Presentation by New Co-Presidents)

(f) Election of the new board of directors

(g) Appointment of 2 auditors

(h) Discussion of, and proposals for, changes of the articles & Determination of the amount of membership fees.

(i) Short presentation of Special Interest Groups (Wolfgang Hofkirchner)

(j) Short presentation of new Brazilian Chapter of IS4SI

(k) End of the GA and official board meeting for approval of the Formation Application of the Brazilian Chapter by the board*.

2) Presentation by New Co-Presidents (Yixin Zhong & Pedro Marijuan) (2x30 minutes)

3) Closing Keynote Speech with Panel Discussion:

Speaker and Moderator: Yixin Zhong (China)

Panel Members: Marcin Schroeder (Japan)

Terrence Deacon (US)

Gordana Dodig-Crnkovic (Sweden)

Wolfgang Hofkirchner (Austria)

Pedro Marijuan (Spain)

Mark Burgin (USA)

Joseph Brenner (France)

Closing (Marcin Schroeder, Yixin Zhong, Pedro Marijuan)

*Participants who are not board members will be asked to leave the Zoom room at that time so that only the board members stay and vote on the Chapter Formation Application.



15:00-16:30 UTC

Sun 19th Sep

76. A Call for Paradigm Shift in Information Discipline

Zhong Yixin

Beijing University of Posts and Telecommunications Beijing 100876, China

Abstract:

This is a call for collective reconsideration of the paradigm employed so far in the studies of the information discipline, which is in fact not the paradigm suited to the information discipline but the one suited to the physical disciplines. This is the old saying of “putting A’s hat on B’s head”. The misuse of paradigm has been the root cause of the problems besetting the studies of the information discipline, and the paradigm should therefore be shifted as soon, and as cleanly, as possible if we want the studies of the information discipline to enter a higher stage of development.

Keywords: Paradigm Shift, Paradigm for Physical Science, Paradigm for Information Studies

1. Introduction

It has been more than 70 years since the publication of Claude E. Shannon's paper "The Mathematical Theory of Communication", which was later renamed information theory. Up to the present time, the theories and technologies related to information have been applied to almost every field of science and technology, have made historic progress and great achievements for mankind, and have brought the world into the "information era".

As scientists and technologists in the information discipline, we feel proud and happy. On the other hand, however, can we claim that the studies in the information discipline have really been perfect? If not, what should we do for, and contribute to, the further development of the studies of the information discipline?

To find a clear answer to the questions raised above, we should make a thorough investigation of the studies of the information discipline so as to see whether there really exist any weaknesses, problems, or even mistakes and major challenges in the discipline.

In the following sections, some findings of the investigation are briefly presented.

2. Concepts and Definitions

To avoid unnecessary misunderstandings, some foundational concepts related to the studies of the information discipline need to be discussed and redefined.


(1) Paradigm

There are many ways in everyday English to explain the meaning of the word ‘paradigm’, such as model, standard, pattern, typical example, and so on [1]. They are all, to a certain extent, more or less similar to one another.

A more precise explanation of the word ‘paradigm’ may be expressed as a formula [2]:

Paradigm = World-view + Methodology. (2.1)

As seen from (2.1), the explanation of the word ‘paradigm’ includes two elements. One is the world-view people use to appropriately understand things in the real world, answering the question of what a thing indeed is; the other is the methodology, or approach, people use to properly deal with things, answering the question of how to deal with them suitably.

In the context of scientific research, the formula above can then be restated as [3]

Paradigm = Scientific view + Scientific methodology. (2.2)

Therefore, we can have the following definition for paradigm:

Definition 1 Paradigm

The paradigm for a scientific discipline consists of the scientific view and the scientific methodology of that discipline, in which the former defines what the discipline is in essence while the latter defines how to do research in the discipline.

As can be seen from Definition 1, the paradigm for a discipline is the supreme guiding force for the studies of that discipline. Whether the paradigm is suitable or not determines whether the studies of the discipline succeed or not.

(2) Information and the Ecological Chain of Information

Matter, energy, and information are regarded as three categories of raw resources widely existing in reality. Through proper manufacturing, handling, and processing, the products of these raw resources provide humans with various kinds of materials, power, and artificial intelligence, respectively.

Information as a raw resource is, no doubt, useful. It will, however, be much more useful if it is properly processed, transformed, and utilized by its human subject.

In practice, information as a raw resource has to be perceived, processed, and utilized by its subject for implementing certain goal(s), thus forming the ecological chain of information within the framework of subject-object interaction, as seen in Fig.1.


Figure available only in the downloaded file

Fig.1 Ecological Chain of Information within Subject-Object Interaction

The model in Fig.1 shows a typical process of subject-object interaction commonly existing in reality. The lower part of the model stands for the object existing in a certain environment, and the upper part represents the subject's processing functions.

Once the object information originating from the object acts on the subject, the latter produces an (intelligent) action reacting on the object for achieving, or keeping, the subject's goal. The subject's (intelligent) action is produced through a number of information-processing functions, forming the ecological chain of information shown in Fig.1.

It is clear from Fig.1 that two kinds of information occur in the ecological chain of information: one is named ontological information and the other epistemological information. Ontological information is presented by the object in the environment, whereas epistemological information is produced by subject and object together. Both ontological and epistemological information are important to human beings.
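The chain in Fig.1 can be sketched as a processing pipeline (a minimal illustration under our own naming; the function names and placeholder bodies are assumptions, not Zhong's formal model):

```python
# Sketch of the ecological chain of information within subject-object
# interaction (Fig.1): object information flows through the subject's
# processing functions and returns as an intelligent action on the object.
# All bodies are placeholder assumptions for illustration only.

def perceive(object_information):
    """Subject perceives ontological information -> epistemological info."""
    return {"form": object_information, "utility": None, "meaning": None}

def process(info, knowledge, goal):
    """Subject processes perceived information against knowledge and goal.
    (The 'knowledge' argument is a placeholder for the knowledge provided.)"""
    info["utility"] = f"evaluated-against:{goal}"
    info["meaning"] = (info["form"], info["utility"])
    return info

def decide_action(info):
    """Subject produces an (intelligent) action reacting on the object."""
    return f"action-toward:{info['utility']}"

# One cycle of subject-object interaction:
ontological = "object-state-and-its-variation-pattern"
epistemological = perceive(ontological)
processed = process(epistemological, knowledge="subject-knowledge",
                    goal="subject-goal")
print(decide_action(processed))
```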

Definition 2: Ontological Information

Ontological information, produced by an object in the real world, is defined as the object's states and the pattern by which those states vary, all presented by the object itself.

Ontological information is more often called object information. It exists regardless of whether it is perceived by a subject. So it is a purely objective concept of information, having nothing to do with subjective factors.

Definition 3: Epistemological Information

Epistemological information, which a subject perceives from the ontological information of an object, has three components: (1) the form (syntactic) information: the form provided by the object, as sensed by the subject; (2) the utility (pragmatic) information: the utility provided by the object, as evaluated by the subject with respect to his goal; and (3) the meaning (semantic) information: produced by the subject by mapping the former two components into the meaning (semantic) space and naming the result.

Epistemological information is more often called the subject's perceived information. Clearly, it originates from ontological information but is modulated by the subject. So it is a subjective concept of information, related to both subject and object.

Note that the definitions of form (syntactic) information and utility (pragmatic) information are straightforward, while the definition of meaning (semantic) information is less intuitive and may need some explanation.

The principle for producing meaning (semantic) information from the form (syntactic) and utility (pragmatic) information is explained in Fig.2 below.

Figure available only in the downloaded file

Fig.2 The Interrelations among form, utility, and meaning

This interrelationship shown in Fig.2 can be expressed in the following equation [4]:

Y ~ λ(X, Z) (2.3)

The symbol X in Eq.(2.3) stands for the form (syntactic) information, Y the meaning (semantic) information, Z the utility (pragmatic) information, and λ the logic operator of "mapping & naming", which maps the combination (X, Z) into the space of meaning (semantic) information and then gives it a name.

It is obvious from Eq.(2.3) that whenever the meaning (semantic) information Y is obtained, the combination of form (syntactic) information X and utility (pragmatic) information Z is also obtained. This means the meaning (semantic) information Y can represent both the form (syntactic) information X and the utility (pragmatic) information Z.
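Eq.(2.3) can be illustrated with a minimal sketch (ours; the lookup table, entries, and names are illustrative assumptions, not Zhong's construction): the operator maps a (form, utility) pair into the semantic space and names the result.

```python
# Sketch of Eq.(2.3): meaning (semantic) information Y is produced by
# mapping the pair (form X, utility Z) into the semantic space and naming
# the result. The lookup table below is an illustrative assumption.

SEMANTIC_SPACE = {
    ("red-light", "blocks-goal"): "stop-signal",
    ("green-light", "serves-goal"): "go-signal",
}

def mapping_and_naming(x, z):
    """Map the combination (X, Z) into the semantic space and name it."""
    return SEMANTIC_SPACE.get((x, z), "unnamed-meaning")

y = mapping_and_naming("red-light", "blocks-goal")
print(y)  # "stop-signal": Y stands for the pair (X, Z), as Eq.(2.3) states
```

Note how obtaining Y fixes the pair (X, Z) it was named from, which is the sense in which Y can represent both X and Z.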

Note that the definitions of syntactic, pragmatic, and semantic information stated in Definition 3 and expressed in Eq.(2.3) are new results from [4]; they were not made clear by either Peirce or Morris.

Note also that considering only one of the two definitions of information, either ontological or epistemological, would be incomplete. It would likewise be incomplete to consider both definitions without understanding the interrelations among X, Y, and Z.

(3) The Information Science, AI, and The Discipline of Information

Definition 4 Information Science [4]

Referring to Fig.1, information science (IS) can be defined via four elements:

The object of IS: information and the ecological chain of information,

The content of IS: the properties of information and the laws governing its ecological chain,

The research approach for IS: the methodology of information ecology,

The goal of IS: to strengthen all information functions of human intelligence.

Definition 5 Human Intelligence and Artificial Intelligence (AI) [5]

Given a problem, a goal, and knowledge, human intelligence is the ability to solve the given problem and reach the given goal by utilizing the knowledge provided.

AI is the ability, endowed on machines by humans, to simulate human intelligence.

Note that ‘human intelligence’ is a subset of ‘human wisdom’.

Definition 6: Discipline of Information / Information Studies

The discipline of information / information studies is understood as the study of the entirety of information science, containing AI as its high-level member.

It is quite clear from Definitions 4-6 that the scope of the discipline of information, or information studies, is not limited to, but is much wider than, the scope of Shannon information theory.

It is also clear that, as an academic discipline dealing with information science, it should establish its own paradigm so as to have the supreme force for guiding and regulating the discipline's academic activities.

On the other hand, however, the paradigm of a discipline cannot be formed at the same time as its academic activities begin. It can only be summarized and refined from the academic practice of the discipline over a sufficiently long period of history.

3. A Historical Challenge the Information Discipline Has Been Facing

Up to the present time, there have been two major categories of academic discipline, namely the discipline of physical science and the discipline of information science, yet only one category of paradigm: the one for the physical discipline. There has been no paradigm for the information discipline.

This is because of the rule that the formation of the paradigm of an academic discipline must come much later than the occurrence of the discipline's research activities. This is why the information discipline, which started to develop in the 1940s, has not yet been able to form its own paradigm.

Because of the facts stated above, the research activities carried out in the information discipline have in practice borrowed the paradigm of the physical discipline, which has existed for hundreds of years.

This has been an unavoidable burden and challenge for the studies of the information discipline.

Many colleagues may not believe that the above-mentioned challenge exists. To be more convincing, let us examine more specifically the paradigm of the physical discipline and the paradigm actually executed in the information discipline.

The paradigm for the physical discipline has the features shown in Table 1.

Table 1 Major Features for the Paradigm of Physical Discipline


  1. Scientific View

    • Object for study: Pure physical entity with no subjective factor

    • Focus of study: The structure & function of physical systems

    • Property of the object: Deterministic evolution

  2. Methodology

    • General approach: Divide and conquer

    • Means for description/analysis: Formalism

    • Means for decision-making: Formalistic matching


It is quite unfortunate to see that the paradigm actually executed in practice in the information discipline, including AI (see Table 2), has been almost the same as the one for the physical discipline (Table 1).

Table 2 Major Features for the Paradigm Actually Executed in Information Discipline


  1. Scientific View

    • Object for study: Physical system with no subjective factor

    • Focus of study: The structure & Functions of the physical system

    • Property of the object: Deterministic evolution with noise

  2. Methodology

    • General approach: Divide and conquer

    • Means for description/analysis: Formalism

    • Means for decision-making: Formalistic matching


As a result of the situation shown in Table 2, information studies has suffered a series of serious difficulties. Some examples follow.

(1) No Unified Theory for Information Discipline

Because of the employment of the divide-and-conquer principle, one element of the methodology of the physical-discipline paradigm (see Tables 1 and 2), the information discipline was, from its beginning until the 1990s, divided into a number of mutually isolated pieces, such as sensing (information acquisition), communication (information transfer), computing (information processing), and control (information execution). As for AI, it has been divided into three branches isolated from, and inconsistent with, each other: artificial neural networks, expert systems, and sensorimotor systems. These separations have led to the lack of a unified, or general, theory for the information discipline and for AI.

(2) Very Low Level of Intelligence in All AI Systems

Due to the employment of the formalism approach, another element of the methodology of the physical-discipline paradigm (see Tables 1 and 2), both meaning (semantic) information and utility (pragmatic) information, which form the nucleus of the ability to understand, have been completely ignored. This has led to the very low level of intelligence in all AI systems.


4. What to Do Next?

Both of the problems mentioned above, namely the lack of a unified theory for the information discipline and the very low level of intelligence in all kinds of AI systems, are no longer tolerable to society. What we should do is shift the paradigm from the one suitable for the physical discipline to the one suitable for the information discipline.

What, then, is the paradigm suitable for the information discipline?

Based on nearly 60 years of study, we have summarized and refined the paradigm suitable for the information discipline, which is presented in Table 3 below.

Table 3 Major Features for the Paradigm suitable for Information Discipline


  1. Scientific View

    • Object for study: Info process within subject-object interaction

    • Focus of study: Double win between subject and object

    • Property of the object: Non-deterministic evolution

  2. Methodology

    • General approach: Methodology of information ecology

    • Means for description/analysis: Trinity of form-value-meaning

    • Means for decision-making: Understanding-based

Detailed explanations of the paradigm suitable for the information discipline, shown in Table 3, together with the significant applications and implications this paradigm provides, will be reported in the next article.

References

[1] Kuhn, T. S., The Structure of Scientific Revolutions [M], University of Chicago Press, 1962

[2] Zhong, Yixin, From the Methodology of Mechanical Reductionism to the One of Information Ecology [J], Philosophy Analysis, No. 5, pp. 133-144, 2017

[3] Burgin, Mark and Zhong, Yixin, Methodology of Information Ecology in the Context of Modern Academic Research [J], Philosophy Analysis, pp. 119-136, 2019

[4] Zhong Yixin. Principles of Information Science [M]. Beijing: BUPT Press, 1988

[5] Zhong Yixin. Universal Theory of AI [M]. Beijing: Science Press, 2021


End of

IS4SI Summit General Program of Plenary Sessions

SEPTEMBER 12-19

Partners & Sponsors