All posts by Gerd Doeben-Henisch

NARRATIVES RULE THE WORLD. Curse & blessing. Comments from @chatGPT4

Author: Gerd Doeben-Henisch

Time: Feb 3, 2024 – Feb 3, 2024, 10:08 a.m. CET

Email: gerd@doeben-henisch.de

TRANSLATION & COMMENTS: The following text is a translation from a German version into English. For the translation I am using the software deepL.com. For commenting on the whole text I am using @chatGPT4.

CONTEXT

This text belongs to the topic Philosophy (of Science).

If someone has already decided in their head that there are no problems, they won’t see a problem … and if you see a problem where there is none, you have little chance of being able to solve anything.
To put it simply: we can be the solution if we are not the problem ourselves.

Written at the end of some letter…

PREFACE

The original – admittedly somewhat long – text entitled “MODERN PROPAGANDA – From a philosophical perspective. First reflections” started with the question of what actually constitutes the core of propaganda, and then ended up with the central insight that what is called ‘propaganda’ is only a special case of something much more general that dominates our thinking as humans: the world of narratives. This was followed by a relatively long philosophical analysis of the cognitive and emotional foundations of this phenomenon.

Since the central text on the role of narratives, as part of the aforementioned larger text, remained somewhat 'invisible' to many readers, this blog post highlights that text again and at the same time connects it with an experiment in which the individual sections of the text are commented on by @chatGPT4.

Insights on @chatGPT4 as a commentator

Let’s start with the results of the accompanying experiment with the @chatGPT4 software. If you know how the @chatGPT4 program works, you would expect it to more or less restate the text entered by the user and then add a few associations relative to this text, whose scope and originality depend on the internal knowledge available to the system. As a reading of @chatGPT4’s comments shows, its commenting capabilities are clearly limited. Nevertheless, they underline the main ideas quite nicely. Of course, one could elicit more information from the system by asking additional questions, but then one would again have to invest human knowledge to enable the machine to produce more, and more original, associations. Without such additional help, the comments remain rather modest.

Central role of narratives

As the following text suggests, narratives are of central importance both for the individual and for the collectives to which an individual belongs: the narratives in people’s heads determine how people see, experience and interpret the world, and also which actions they take, spontaneously or deliberately. Speaking of a machine, we would say that narratives are the program that controls us humans. In principle, people do have the ability to question or even partially change the narratives that govern them, but only very few manage to do so, as this requires not only certain skills but usually a great deal of practice in dealing with one’s own knowledge. Intelligence offers no special protection here; in fact, it seems that precisely so-called intelligent people can become the worst victims of their own narratives. This phenomenon reveals a peculiar powerlessness of knowledge in the face of itself.

This topic calls for further analysis and more public discussion.

HERE COMES THE MAIN TEXT ABOUT NARRATIVES

Worldwide today, in the age of mass media, especially in the age of the Internet, we can see that individuals, small groups, special organizations, political groups, entire religious communities, in fact all people and their social manifestations, follow a certain ‘narrative’ [1] when they act. A typical feature of acting according to a narrative is that those who do so individually believe that it is ‘their own decision’ and that the narrative is ‘true’, and that they are therefore ‘in the right’ when they act accordingly. This ‘feeling of being in the right’ can go as far as claiming the right to kill others because they are ‘acting wrongly’ according to the ‘narrative’. We should therefore speak here of a ‘narrative truth’: Within the framework of the narrative, a picture of the world is drawn that ‘as a whole’ enables a perspective that is ‘found to be good’ by the followers of the narrative ‘as such’, as ‘making sense’. Normally, the effect of a narrative, which is experienced as ‘meaningful’, is so great that the ‘truth content’ is no longer examined in detail.[2]

Popular narratives

In recent decades, we have experienced ‘modern forms’ of narratives that do not present themselves as religious narratives but nevertheless have a very similar effect: people perceive these narratives as ‘making sense’ in a world that is becoming increasingly confusing and therefore threatening for everyone. Individual citizens also feel ‘politically helpless’, so that – even in a ‘democracy’ – they have the feeling that they cannot directly achieve anything: the ‘people up there’ do what they want anyway. In such a situation, ‘simplistic narratives’ are a blessing for the maltreated soul; you hear them and have the feeling: yes, that’s how it is; that’s exactly how I ‘feel’! Such ‘popular narratives’, which make ‘good feelings’ possible, are becoming increasingly powerful. What they have in common with religious narratives is that the ‘followers’ of popular narratives no longer ask the ‘question of truth’; most are also not sufficiently ‘trained’ to be able to assess the truth content of a narrative at all. It is typical of followers of narratives that they are generally hardly able to explain their own narrative to others. They typically send each other links to texts and videos that they find ‘good’ because these somehow seem to support the popular narrative, and they tend not to check the authors and sources, because these appear to be such ‘decent people’ who always say exactly what the ‘popular narrative’ dictates. [3]

For Power, narratives are sexy

If one now also takes into account that the ‘world of narratives’ is an extremely tempting offer for all those who have power over people, or would like to gain it, to ‘create’ precisely such narratives or to ‘instrumentalize’ existing ones for themselves, then one should not be surprised that many governments and many other power groups in this world are doing just that today: they do not try to coerce people ‘directly’; instead they ‘produce’ popular narratives, or ‘monitor’ already existing popular narratives, in order to gain power over the hearts and minds of more and more people via the detour of these narratives. Some speak here of ‘hybrid warfare’, others of ‘modern propaganda’, but ultimately both terms miss the core of the problem. [4]

Bits of history

The core of the problem is the way in which human communities have always organized their collective action, namely through narratives; we humans have no other option. However, such narratives – as the considerations further down in the text show – are highly complex and extremely susceptible to ‘falsity’, to a ‘distortion of the picture of the world’. In the context of the development of legal systems, approaches have been developed to ‘curb’ the abuse of power in a society by supporting truth-preserving mechanisms. Gradually, this has certainly helped, with all the deficits that still exist today. In addition, a real revolution took place about 500 years ago: with the concept of a ‘verifiable narrative (empirical theory)’, humanity succeeded in finding a format that optimized the ‘preservation of truth’ and minimized the slide into untruth. This new concept of ‘verifiable truth’ has since enabled great insights that were beyond imagination before. [5]

The ‘aura of the scientific’ has now permeated almost all of human culture – almost! We have to realize that although scientific thinking has comprehensively shaped the world of practical life through modern technologies, the scientific way of thinking has not overridden all other narratives. On the contrary, the ‘non-truth narratives’ have become so strong again that they are pushing back the ‘scientific’ in more and more areas of our world, patronizing it, forbidding it, eradicating it. The ‘irrationality’ of religious and popular narratives is stronger than ever before. ‘Irrational narratives’ are so attractive because they spare individuals from having to ‘think for themselves’. Real thinking is exhausting, unpopular, annoying, and it hinders the dream of a simple solution.[6]

The central problem of humanity

Against this backdrop, the widespread inability of people to recognize and overcome ‘irrational narratives’ appears to be the central problem for humanity in mastering the current global challenges. Before we need more technology (we certainly do), we need more people who are able and willing to think more and better, and who are also able to solve ‘real problems’ together with others. Real problems can be recognized by the fact that they are largely ‘new’, that there are no ‘simple off-the-shelf’ solutions for them, and that we really have to ‘wrestle’ together for possible insights; the ‘old’ is, in principle, not ‘enough’ to recognize and implement the ‘truly new’! [7]

COMMENTS

[1] The term ‘narrative’ is increasingly used today to describe the fact that a group of people use a certain ‘image’, a certain ‘narrative’ in their thinking for their perception of the world in order to be able to coordinate their joint actions. Ultimately, this applies to all collective action, even for engineers who want to develop a technical solution. In this respect, the description in the German Wikipedia is a bit ‘narrow’: https://de.wikipedia.org/wiki/Narrativ_(Sozialwissenschaften)

Comments from @chatGPT4

[2] @chatGPT4 : The text delves into the profound impact of narratives in the context of mass media and the Internet, highlighting how individuals and groups often align their actions and beliefs with specific narratives. It points out the subjective nature of these narratives, where followers perceive them as absolute truths, driving their decisions and actions. This belief in the “narrative truth” can foster a strong sense of justification, even to the extent of committing violence against those deemed to be in opposition to this truth. The text emphasizes the power of narratives to shape perceptions of the world, often bypassing critical scrutiny of their factual accuracy. This phenomenon underscores the psychological and social mechanisms through which narratives influence human behavior, suggesting a collective suspension of disbelief that can have significant real-world implications.

The assertion that individuals often do not question the “truth content” of a narrative they find meaningful speaks to the cognitive biases and shortcuts humans use in processing information. Narratives simplify complex realities into coherent stories, making them more digestible and emotionally resonant. This process can lead to echo chambers and polarization, as groups coalesce around shared narratives, reinforcing their beliefs and dismissing opposing viewpoints without thorough examination.

The concept of “narrative truth” also reflects the construction of social realities, where shared beliefs and stories shape collective experiences and norms. This can have both positive and negative outcomes, facilitating social cohesion and identity within groups but also fostering division and conflict between them.

Critically, the text suggests a need for awareness and reflection on the narratives we adopt and their implications for our perceptions and actions. It hints at the ethical responsibility of individuals and media creators to critically evaluate and challenge the narratives they encounter and propagate. In an era of information overload and sophisticated digital communication, the ability to discern the nuanced difference between narrative-driven beliefs and objective truths is more crucial than ever for fostering informed, compassionate, and inclusive societies.

[3] @chatGPT4 : The text provides a critical examination of the role and influence of modern narratives in contemporary society, juxtaposing them with traditional religious narratives while highlighting their similar effects on individuals’ perceptions and emotions. It touches on a growing sense of political helplessness among citizens, even within democratic societies, where there’s a prevailing sentiment that individual actions have little to no impact on the decisions made by those in power. This perceived powerlessness, combined with the complexity and threats perceived in today’s world, creates a fertile ground for simplistic narratives to thrive. These narratives, by virtue of offering clarity and emotional comfort, gain traction and become increasingly influential.

The comparison to religious narratives is particularly insightful. It suggests that just as religious stories provide a sense of meaning, belonging, and understanding without necessitating a rigorous examination of their factual accuracy, modern narratives fulfill a similar psychological need. They offer an immediate, albeit superficial, sense of understanding and control in a seemingly chaotic world. This bypass of critical evaluation—the reluctance or inability to question the ‘truth’ behind these narratives—points to a significant challenge in contemporary discourse: the erosion of critical thinking and analytical skills in evaluating information.

Furthermore, the text highlights a concerning trend in how people consume and share information. The reliance on ‘echo chambers’ where one’s beliefs are constantly reaffirmed by similar opinions, often without scrutiny of the source’s credibility, perpetuates a cycle of misinformation and intellectual complacency. This behavior is facilitated by digital platforms, where sharing links to supportive texts or videos substitutes for genuine understanding and critical discussion.

The underlying issue, as the text implies, is not just about the narratives themselves but about the broader societal and educational failings that leave individuals unequipped to critically assess information. This deficiency not only makes people susceptible to manipulation but also undermines the foundational principles of informed citizenship in a democracy.

In conclusion, the text calls for a renewed emphasis on critical thinking and media literacy as essential skills in navigating the modern information landscape. By fostering these skills, individuals can become more discerning consumers of information, better able to understand the complexities of the world around them, and more effectively engage in the democratic process. This approach not only challenges the dominance of simplistic narratives but also supports a healthier, more informed public discourse.

[4] @chatGPT4 : This passage delves into the strategic utilization of narratives by those in power or seeking power as a means to influence and control populations. It presents a critical view of how governments and various power groups craft or harness existing popular narratives to shape public opinion and behavior indirectly. This approach, as opposed to direct coercion, operates through the subtler channels of storytelling and narrative dissemination, aiming to capture the “hearts and minds” of people. The text suggests that this method is widespread and increasingly recognized under terms like ‘hybrid warfare’ and ‘modern propaganda,’ though it argues these terms fail to fully capture the essence of the issue.

The key insight here revolves around the power of narratives to mold societal perceptions and values. By creating or modifying narratives, those in power can guide public sentiment and opinion in ways that serve their interests, often without the overt appearance of manipulation. This strategy can be particularly effective in the digital age, where information spreads rapidly and often without critical scrutiny, allowing for narratives to be amplified and tailored to target audiences with unprecedented precision.

The critique of labeling these tactics as merely ‘hybrid warfare’ or ‘modern propaganda’ highlights a deeper concern: that the focus on the mechanics of these strategies may overlook the fundamental challenge they pose to democratic processes and individual autonomy. Narratives, in this context, become tools for shaping reality, not just reflecting it, with profound implications for how people understand their world and their place within it.

Furthermore, the text implicitly raises questions about the ethical implications of such narrative control. It challenges the reader to consider the boundaries between legitimate persuasion and manipulative interference in the democratic discourse. This distinction is crucial in evaluating the health of public dialogue and the integrity of decision-making processes in a society.

In summary, the passage underscores the sophisticated ways in which narratives are employed by those in power to influence public opinion and behavior indirectly. It calls attention to the importance of recognizing and critically evaluating these narratives to preserve the principles of democracy and individual agency. The ability to discern the motivations behind and the content of these narratives is essential for a healthy, informed, and resilient public sphere.

[5] @chatGPT4 : This passage addresses a fundamental aspect of human social organization: the reliance on narratives to coordinate collective action. It posits that narratives are an intrinsic part of human nature, offering the only viable means through which communities can organize and mobilize collective action. However, it also acknowledges a critical vulnerability of narratives—their propensity for falsity and distortion of reality. This inherent flaw raises concerns about the accuracy and integrity of the stories that guide collective human behavior and decision-making.

The text suggests that the development of legal systems represents a historical attempt to mitigate the misuse of narrative power, aiming to safeguard society against the dangers of misinformation and manipulation. Legal frameworks, with their emphasis on evidence and accountability, can be seen as societal mechanisms designed to preserve truth and curb the abuse of narratives. Despite the ongoing challenges and imperfections in these systems, they have contributed to a gradual improvement in societal governance.

Moreover, the passage highlights a transformative moment in human history approximately 500 years ago with the emergence of the scientific method, described here as the concept of a “verifiable narrative (empirical theory).” This revolution in thought introduced a new paradigm for establishing truth, one based on empirical evidence and verifiability rather than mere assertion or belief. The scientific method, by demanding that claims about the world be tested and confirmed through observation and experiment, has dramatically enhanced our capacity to discern truth from falsehood. This approach has unlocked unprecedented insights and advancements, fundamentally altering humanity’s understanding of the universe and our place within it.

The core issue identified in the text is not merely the existence of narratives but the challenge of ensuring they reflect a truthful and accurate understanding of the world. The scientific method represents a pivotal development in this quest, providing a robust framework for distinguishing between verifiable truths and unfounded narratives. However, the text implies that the struggle to maintain the integrity of our collective narratives is ongoing and requires constant vigilance. As we navigate an ever-more complex information landscape, the principles of verifiability and empirical evidence remain essential guides in our pursuit of knowledge and in the organization of our societies.

[6] @chatGPT4 : This passage critically examines the influence and reach of scientific thinking in contemporary culture, juxtaposing it with the resilience and resurgence of non-scientific narratives. Despite the profound impact of scientific discoveries and technologies on practical aspects of life, the passage suggests that the mode of thinking promoted by science—characterized by critical inquiry, empirical evidence, and skepticism—has not fully supplanted other forms of narrative understanding. Instead, there is a noted resurgence of “non-truth narratives” that challenge, marginalize, or even seek to suppress scientific perspectives in various domains.

The text highlights a concerning trend: the growing appeal of irrational narratives derived from religious beliefs or popular culture, which are gaining ground against scientific rationality. This preference for irrational narratives is attributed to their simplicity and the comfort they offer individuals by absolving them of the need for critical thinking. The passage argues that real thinking—analytical, reflective, and independent—is perceived as burdensome, unpopular, and inconvenient by many, as it complicates the desire for simple answers to complex problems.

The allure of irrational narratives, according to the passage, lies in their ability to provide straightforward solutions and a sense of certainty in an increasingly complex world. These narratives offer emotional satisfaction and a framework for understanding the world that bypasses the rigorous demands of scientific scrutiny. This dynamic illustrates a broader cultural conflict between the values of rationality and the comfort of simplicity and certainty offered by alternative narratives.

This critique underscores a critical challenge facing society: the need to balance the benefits of scientific thinking with the human desire for narratives that provide meaning and emotional comfort. While scientific thinking has undeniably transformed the world, its principles of skepticism and demand for evidence can be at odds with the human inclination towards narratives that offer immediate clarity and solace. The passage calls for a reflection on the importance of fostering a culture that values critical thinking and scientific literacy while acknowledging and addressing the psychological and emotional needs that drive people towards irrational narratives. This balance is crucial for navigating the complexities of the modern world and ensuring that decisions are informed by both empirical evidence and a compassionate understanding of human nature.

[7] @chatGPT4 : This passage articulates a critical observation about contemporary society’s struggle with global challenges: the difficulty lies not just in the need for technological advancements but fundamentally in the widespread inability of individuals to engage critically with information and narratives. It underscores the necessity of fostering a culture where people are encouraged and equipped to think deeply, critically, and collaboratively to address complex issues. The emphasis is on the collective intellectual effort required to navigate and solve “real problems,” which are characterized as novel, lacking straightforward solutions, and demanding innovative approaches beyond the application of existing knowledge or methods.

The text suggests that the crux of addressing global challenges effectively is not merely technological innovation but, more importantly, enhancing collective cognitive capabilities. It highlights an urgent need for individuals who can think “more and better” and who are prepared to engage in the collaborative problem-solving processes essential for addressing the unique and multifaceted problems facing the world today. These problems are not amenable to simple, pre-packaged solutions; they require a willingness to engage in sustained intellectual effort, open-minded exploration, and creative thinking.

This perspective challenges the prevailing emphasis on technological solutions as the primary means to overcome global challenges. Instead, it argues for a paradigm shift towards valuing and developing human intellectual and collaborative capacities as the foundational step towards sustainable progress. The passage points out that the real bottleneck in addressing contemporary issues is not the lack of technological resources but the shortage of people capable of employing critical thinking to innovate and implement effective solutions.

In essence, the passage calls for a renewed focus on education and cultural values that promote critical thinking, intellectual curiosity, and collaborative problem-solving. By cultivating these capabilities, society can better equip itself to recognize, understand, and address the novel and complex challenges that define our times. This approach not only recognizes the limitations of relying solely on technological advancements but also highlights the transformative potential of human intellect and cooperation in forging pathways to a better future.

TRUTH AND MEANING – As a collective achievement

Author: Gerd Doeben-Henisch

Time: Jan 8, 2024 – Jan 8, 2024 (10:00 a.m. CET)

Email: gerd@doeben-henisch.de

TRANSLATION: The following text is a translation from a German version into English. For the translation I am using the software deepL.com as well as chatGPT 4.

CONTEXT

This text is a direct continuation of the text There exists only one big Problem for the Future of Human Mankind: The Belief in false Narratives.

INTRODUCTION

There exists only one big Problem for the Future of Human Mankind: The Belief in false Narratives

Author: Gerd Doeben-Henisch

Time: Jan 5, 2024 – Jan 8, 2024 (09:45 a.m. CET)

Email: gerd@doeben-henisch.de

TRANSLATION: The following text is a translation from a German version into English. For the translation I am using the software deepL.com as well as chatGPT 4. The English version is a slightly revised version of the German text.

This blog entry will be completed today. However, it has laid the foundations for considerations that will be pursued further in a new blog entry.

CONTEXT

This text belongs to the topic Philosophy (of Science).

Introduction

Triggered by several reasons, I started an investigation into the phenomenon of ‘propaganda’ in order to sharpen my understanding. My strategy was first to try to characterize the phenomenon of ‘general communication’ in order to find some ‘harder criteria’ that would make it possible to characterize the concept of ‘propaganda’ so that it stands out against this general background in a comprehensible way.

The pursuit of this goal then led to an ever more fundamental examination of our normal (human) communication, so that forms of propaganda become recognizable as ‘special cases’ of our communication. The worrying thing about this is that even so-called ‘normal communication’ contains numerous elements that can make it very difficult to recognize and pass on ‘truth’ (*). ‘Massive cases of propaganda’ therefore have their ‘home’ where we communicate with each other every day. So if we want to prevent propaganda, we have to start in everyday life.

(*) The concept of ‘truth’ is examined and explained in great detail in the long text below. Unfortunately, I have not yet found a ‘short formula’ for it. In essence, it is about establishing a connection to ‘real’ events and processes in the world – including one’s own body – in such a way that they can, in principle, be understood and verified by others.

DICTATORIAL CONTEXT

However, it becomes difficult when there is enough political power to set the social framework conditions in such a way that, for the individual in everyday life – the citizen! – general communication is more or less prescribed, ‘dictated’. Then ‘truth’ becomes scarcer and scarcer, or even non-existent. Through the suppression of truth, a society is then ‘programmed’ for its own downfall. ([3], [6])

EVERYDAY LIFE AS A DICTATOR ?
The hour of narratives

But – and this is the far more dangerous form of ‘propaganda’ ! – even if there is not a nationwide apparatus of power that prescribes certain forms of ‘truth’, a mutilation or gross distortion of truth can still take place on a grand scale. Worldwide today, in the age of mass media, especially in the age of the internet, we can see that individuals, small groups, special organizations, political groups, entire religious communities, in fact all people and their social manifestations, follow a certain ‘narrative’ [*11] when they act.

Typical of acting according to a narrative is that those who do so individually believe that it is ‘their own decision’ and that their narrative is ‘true’, and that they are therefore ‘in the right’ when they act accordingly. This ‘feeling of being right’ can go as far as claiming the right to kill others because they ‘act wrongly’ in the light of one’s own ‘narrative’. We should therefore speak here of a ‘narrative truth’: within the framework of the narrative, a picture of the world is drawn that ‘as a whole’ enables a perspective that ‘as such’ is ‘found to be good’ by the followers of the narrative, as ‘making sense’. Normally, the effect of a narrative experienced as ‘meaningful’ is so great that its ‘truth content’ is no longer examined in detail.

RELIGIOUS NARRATIVES

This has existed at all times in the history of mankind. Narratives that appeared as ‘religious beliefs’ were particularly effective. It is therefore no coincidence that almost all governments of the past millennia have adopted religious beliefs as state doctrines; an essential feature of religious beliefs is that they are ‘unprovable’, i.e. ‘incapable of truth’. This makes a religious narrative a wonderful tool in the hands of the powerful for motivating people to behave in certain ways without the threat of violence.

POPULAR NARRATIVES

In recent decades, however, we have experienced new, ‘modern forms’ of narratives that do not come across as religious narratives, but which nevertheless have a very similar effect: People perceive these narratives as ‘giving meaning’ in a world that is becoming increasingly confusing and therefore threatening for everyone today. Individual people, the citizens, also feel ‘politically helpless’, so that – even in a ‘democracy’ – they have the feeling that they cannot directly influence anything: the ‘people up there’ do what they want. In such a situation, ‘simplistic narratives’ are a blessing for the maltreated soul; you hear them and have the feeling: yes, that’s how it is; that’s exactly how I ‘feel’!

Such ‘popular narratives’, which enable ‘good feelings’, are gaining ever greater power. What they have in common with religious narratives is that the ‘followers’ of popular narratives no longer ask the ‘question of truth’; most of them are also not sufficiently ‘trained’ to be able to assess the truth of a narrative at all. It is typical of supporters of narratives that they are generally hardly able to explain their own narrative to others. They typically send each other links to texts and videos that they find ‘good’ because these somehow seem to support the popular narrative, and they tend not to check the authors and sources because, in the eyes of the followers, these are such ‘decent people’ who always say exactly the ‘same thing’ as the ‘popular narrative’ dictates.

NARRATIVES ARE SEXY FOR POWER

If you now take into account that the ‘world of narratives’ is an extremely tempting offer for all those who have power over people or would like to gain power over people, then it should come as no surprise that many governments and many other power groups in this world are doing just that today: they do not try to coerce people ‘directly’; instead they ‘produce’ popular narratives, or ‘monitor’ already existing popular narratives, in order to gain power over the hearts and minds of more and more people via the detour of these narratives. Some speak here of ‘hybrid warfare’, others of ‘modern propaganda’, but ultimately, I guess, these terms miss the core of the problem.

THE NARRATIVE AS A BASIC CULTURAL PATTERN
The ‘irrational’ defends itself against the ‘rational’

The core of the problem is the way in which human communities have always organized their collective action, namely through narratives; we humans have no other option. However, such narratives – as the considerations further down in the text will show – are extremely susceptible to ‘falsity’, to a ‘distortion of the picture of the world’. In the context of the development of legal systems, approaches have been developed over at least the last 7000 years to ‘curb’ the abuse of power in a society by supporting truth-preserving mechanisms. Gradually, this has certainly helped, with all the deficits that still exist today. Additionally, about 500 years ago, a real revolution took place: with the concept of a ‘verifiable narrative (empirical theory)’, humanity managed to find a format that optimized the ‘preservation of truth’ and minimized the slide into untruth. This new concept of ‘verifiable truth’ has enabled great insights that were previously beyond imagination.

The ‘aura of the scientific’ has meanwhile permeated almost all of human culture, almost! But we have to realize that although scientific thinking has comprehensively shaped the world of practicality through modern technologies, the way of scientific thinking has not overridden all other narratives. On the contrary, the ‘non-truth narratives’ have become so strong again that they are pushing back the ‘scientific’ in more and more areas of our world, patronizing it, forbidding it, eradicating it. The ‘irrationality’ of religious and popular narratives is stronger than ever before. ‘Irrational narratives’ are for many so appealing because they spare the individual from having to ‘think for themselves’. Real thinking is exhausting, unpopular, annoying and hinders the dream of a simple solution.

THE CENTRAL PROBLEM OF HUMANITY

Against this backdrop, the widespread inability of people to recognize and overcome ‘irrational narratives’ appears to be the central problem facing humanity in mastering the current global challenges. Before we need more technology (we certainly do), we need more people who are able and willing to think more and better, and who are also able to solve ‘real problems’ together with others. Real problems can be recognized by the fact that they are largely ‘new’, that there are no ‘simple off-the-shelf’ solutions for them, that you really have to ‘struggle’ together for possible insights; in principle, the ‘old’ is not enough to recognize and implement the ‘true new’, and the future is precisely the space with the greatest amount of ‘unknown’, with lots of ‘genuinely new’ things.

The following text examines this view in detail.

MAIN TEXT FOR EXPLANATION

MODERN PROPAGANDA?

As mentioned in the introduction, the trigger for me to write this text was the confrontation with a popular book which appeared to me as a piece of ‘propaganda’. When I tried to describe my opinion in my own words, I noticed that I had some difficulties: what is the difference between ‘propaganda’ and ‘everyday communication’? This forced me to think a little more about the ingredients of ‘everyday communication’ and about where and why a communication is ‘different’ from our ‘everyday communication’. As usual at the beginning of such a discussion, I first took a look at the various entries in Wikipedia (German and English). The entry in the English Wikipedia on ‘Propaganda’ [1b] attempts a very similar strategy: to look at ‘normal communication’ and, compared to this, at the phenomenon of ‘propaganda’, albeit with not quite sharp contours. However, it provides a broad overview of various forms of communication, including those forms that are ‘special’ (‘biased’), i.e. that do not reflect the content to be communicated in the way one would reproduce it according to ‘objective, verifiable criteria’.[*0] However, the variety of examples suggests that it is not easy to distinguish between ‘special’ and ‘normal’ communication: What then are these ‘objective, verifiable criteria’? Who defines them?

Assuming for a moment that it is clear what these ‘objectively verifiable criteria’ are, one can tentatively attempt a working definition for the general (normal?) case of communication as a starting point:

Working Definition:

The general case of communication could be tentatively described as a simple attempt by one person – let’s call them the ‘author’ – to ‘bring something to the attention’ of another person – let’s call them the ‘interlocutor’. We tentatively call what is to be brought to their attention ‘the message’. We know from everyday life that an author can have numerous ‘characteristics’ that can affect the content of his message.

Here is a short list of properties that characterize the author’s situation in a communication. Then corresponding properties for the interlocutor.

The Author:

  1. The available knowledge of the author — both conscious and unconscious — determines the kind of message the author can create.
  2. His ability to discern truth determines whether and to what extent he can differentiate what in his message is verifiable in the real world — present or past — as ‘accurate’ or ‘true’.
  3. His linguistic ability determines whether and how much of his available knowledge can be communicated linguistically.
  4. The world of emotions decides whether he wants to communicate anything at all, for example, when, how, to whom, how intensely, how conspicuously, etc.
  5. The social context can affect whether he holds a certain social role, which dictates when he can and should communicate what, how, and with whom.
  6. The real conditions of communication determine whether a suitable ‘medium of communication’ is available (spoken sound, writing, sound, film, etc.) and whether and how it is accessible to potential interlocutors.
  7. The author’s physical constitution decides how far and to what extent he can communicate at all.

The Interlocutor:

  1. In general, the characteristics that apply to the author also apply to the interlocutor. However, some points can be particularly emphasized for the role of the interlocutor:
  2. The available knowledge of the interlocutor determines which aspects of the author’s message can be understood at all.
  3. The ability of the interlocutor to discern truth determines whether and to what extent he can also differentiate what in the conveyed message is verifiable as ‘accurate’ or ‘true’.
  4. The linguistic ability of the interlocutor affects whether and how much of the message he can absorb purely linguistically.
  5. Emotions decide whether the interlocutor wants to take in anything at all, for example, when, how, how much, with what inner attitude, etc.
  6. The social context can also affect whether the interlocutor holds a certain social role, which dictates when he can and should communicate what, how, and with whom.
  7. Furthermore, it can be important whether the communication medium is so familiar to the interlocutor that he can use it sufficiently well.
  8. The physical constitution of the interlocutor can also determine how far and to what extent the interlocutor can communicate at all.

Even this small selection of factors shows how diverse the situations can be in which ‘normal communication’ can take on a ‘special character’ due to the ‘effect of different circumstances’. For example, an actually ‘harmless greeting’ can lead to a social problem with many different consequences in certain roles. A seemingly ‘normal report’ can become a problem because the interlocutor misunderstands the message purely linguistically. A ‘factual report’ can have an emotional impact on the interlocutor due to the way it is presented, which can lead to them enthusiastically accepting the message or – on the contrary – vehemently rejecting it. Or, if the author has a tangible interest in persuading the interlocutor to behave in a certain way, this can lead to a certain situation not being presented in a ‘purely factual’ way, but rather to many aspects being communicated that seem suitable to the author to persuade the interlocutor to perceive the situation in a certain way and to adopt it accordingly. These ‘additional’ aspects can refer to many real circumstances of the communication situation beyond the pure message.

Types of communication …

Given this potential ‘diversity’, the question arises as to whether it will even be possible to define something like normal communication?

In order to be able to answer this question meaningfully, one would need a kind of ‘overview’ of all possible combinations of the properties of author (1-7) and interlocutor (1-8), and one would also have to be able to evaluate each of these possible combinations with a view to ‘normality’.

It should be noted that the two lists of properties author (1-7) and interlocutor (1-8) have a certain ‘arbitrariness’ attached to them: you can build the lists as they have been constructed here, but you don’t have to.

This is related to the general way in which we humans think: on one hand, we have ‘individual events that happen’ — or that we can ‘remember’ —, and on the other hand, we can ‘set’ ‘arbitrary relationships’ between ‘any individual events’ in our thinking. In science, this is called ‘hypothesis formation’. Whether or not such formation of hypotheses is undertaken, and which ones, is not standardized anywhere. Events as such do not enforce any particular hypothesis formations. Whether they are ‘sensible’ or not is determined solely in the later course of their ‘practical use’. One could even say that such hypothesis formation is a rudimentary form of ‘ethics’: the moment one adopts a hypothesis regarding a certain relationship between events, one minimally considers it ‘important’, otherwise, one would not undertake this hypothesis formation.

In this respect, it can be said that ‘everyday life’ is the primary place for possible working hypotheses and possible ‘minimum values’.

The following diagram demonstrates a possible arrangement of the characteristics of the author and the interlocutor:

FIGURE: Overview of the possible overlaps of knowledge between the author and the interlocutor, if each can have any knowledge at their disposal.

What is easy to recognize is the fact that an author can naturally have a constellation of knowledge that draws on an almost ‘infinite number of possibilities’. The same applies to the interlocutor. In purely abstract terms, the number of possible combinations is ‘virtually infinite’ due to the assumptions about the properties Author 1 and Interlocutor 2, which ultimately makes the question of ‘normality’ at the abstract level undecidable.


However, since both authors and interlocutors are not spherical beings from some abstract angle of possibilities, but are usually ‘concrete people’ with a ‘concrete history’ in a ‘concrete life-world’ at a ‘specific historical time’, the quasi-infinite abstract space of possibilities is narrowed down to a finite, manageable set of concretes. Yet, even these can still be considerably large when related to two specific individuals. Which person, with their life experience from which area, should now be taken as the ‘norm’ for ‘normal communication’?


It seems more likely that individual people are somehow ‘typified’, for example, by age and learning history, although a ‘learning history’ may not provide a clear picture either. Graduates from the same school can — as we know — possess very different knowledge afterwards, even though commonalities may be ‘minimally typical’.

Overall, the approach based on the characteristics of the author and the interlocutor does not seem to provide really clear criteria for a norm, even though a specification such as ‘the humanistic high school in Hadamar (a small German town) 1960 – 1968’ would suggest rudimentary commonalities.


One could now try to include the further characteristics of Author 2-7 and Interlocutor 3-8 in the considerations, but the ‘construction of normal communication’ seems to lead more and more into an unclear space of possibilities based on the assumptions of Author 1 and Interlocutor 2.

What does this mean for the typification of communication as ‘propaganda’? Isn’t ultimately every communication also a form of propaganda, or is there a possibility to sufficiently accurately characterize the form of ‘propaganda’, although it does not seem possible to find a standard for ‘normal communication’? … or will a better characterization of ‘propaganda’ indirectly provide clues for ‘non-propaganda’?

TRUTH and MEANING: Language as Key

The spontaneous attempt to clarify the meaning of the term ‘propaganda’ to the extent that one gets a few constructive criteria for being able to characterize certain forms of communication as ‘propaganda’ or not, gets into ever ‘deeper waters’. Are there now ‘objective verifiable criteria’ that one can work with, or not? And: Who determines them?

Let us temporarily stick to working hypothesis 1, that we are dealing with an author who articulates a message for an interlocutor, and let us expand this working hypothesis by the following addition 1: such communication always takes place in a social context. This means that the perception and knowledge of the individual actors (author, interlocutor) can continuously interact with this social context or ‘automatically interacts’ with it. The latter is because we humans are built in such a way that our body with its brain just does this, without ‘us’ having to make ‘conscious decisions’ for it.[*1]

For this section, I would like to extend the previous working hypothesis 1 together with supplement 1 by a further working hypothesis 2 (localization of language) [*4]:

  1. Every medium (language, sound, image, etc.) can contain a ‘potential meaning’.
  2. When creating the media event, the ‘author’ may attempt to ‘connect’ possible ‘contents’ that are to be ‘conveyed’ by him with the medium (‘putting into words/sound/image’, ‘encoding’, etc.). This ‘assignment’ of meaning occurs both ‘unconsciously/automatically’ and ‘(partially) consciously’.
  3. In perceiving the media event, the ‘interlocutor’ may try to assign a ‘possible meaning’ to this perceived event. This ‘assignment’ of meaning also happens both ‘unconsciously/automatically’ and ‘(partially) consciously’.
  4. The assignment of meaning requires both the author and the interlocutor to have undergone ‘learning processes’ (usually years, many years) that have made it possible to link certain ‘events of the external world’ as well as ‘internal states’ with certain media events.
  5. The ‘learning of meaning relationships’ always takes place in social contexts, since a medial structure meant to ‘convey meaning’ between people must be shared by everyone involved in the communication process.
  6. Those medial elements that are actually used for the ‘exchange of meanings’ all together form what is called a ‘language’: the ‘medial elements themselves’ form the ‘surface structure’ of the language, its ‘sign dimension’, and the ‘inner states’ in each ‘actor’ involved, form the ‘individual-subjective space of possible meanings’. This inner subjective space comprises two components: (i) the internally available elements as potential meaning content and (ii) a dynamic ‘meaning relationship’ that ‘links’ perceived elements of the surface structure and the potential meaning content.


To answer the guiding question of whether one can “characterize certain forms of communication as ‘propaganda’ or not,” one needs ‘objective, verifiable criteria’ on the basis of which a statement can be formulated. This question in turn raises the question of whether there are ‘objective criteria’ in ‘normal everyday dialogue’ that we can use in everyday life to collectively decide whether a ‘claimed fact’ is ‘true’ or not; it is in this context that the word ‘true’ is used in everyday life. Can this be defined a bit more precisely?

For this I propose an additional working hypothesis 3:

  1. At least two actors can agree that a certain meaning, associated with the media construct, exists as a sensibly perceivable fact in such a way that they can agree that the ‘claimed fact’ is indeed present. Such a specific occurrence should be called ‘true 1’ or ‘Truth 1.’ A ‘specific occurrence’ can change at any time and quickly due to the dynamics of the real world (including the actors themselves), for example: the rain stops, the coffee cup is empty, the car from before is gone, the empty sidewalk is occupied by a group of people, etc.
  2. At least two actors can agree that a certain meaning, associated with the media construct, is currently not present as a real fact. Referring to the current situation of ‘non-occurrence,’ one would say that the statement is ‘false 1’; the claimed fact does not actually exist contrary to the claim.
  3. At least two actors can agree that a certain meaning, associated with the media construct, is currently not present, but based on previous experience, it is ‘quite likely’ to occur in a ‘possible future situation.’ This aspect shall be called ‘potentially true’ or ‘true 2’ or ‘Truth 2.’ Should the fact then ‘actually occur’ at some point in the future, Truth 2 would transform into Truth 1.
  4. At least two actors can agree that a certain meaning associated with the media construct does not currently exist and that, based on previous experience, it is ‘fairly certain that it is unclear’ whether the intended fact could actually occur in a ‘possible future situation’. This aspect should be called ‘speculative true’ or ‘true 3’ or ‘truth 3’. Should the situation then ‘actually occur’ at some point, truth 3 would change into truth 1.
  5. At least two actors can agree that a certain meaning associated with the medial construct does not currently exist, and on the basis of previous experience ‘it is fairly certain’ that the intended fact could never occur in a ‘possible future situation’. This aspect should be called ‘speculative false’ or ‘false 2’.

A closer look at these 5 assumptions of working hypothesis 3 reveals that there are two ‘poles’ in all these distinctions, which stand in certain relationships to each other: on the one hand, there are real facts as poles, which are ‘currently perceived or not perceived by all participants’ and, on the other hand, there is a ‘known meaning’ in the minds of the participants, which can or cannot be related to a current fact. This results in the following distribution of values:

REAL FACT     No.   Relationship to Meaning
Given          1    Fits (true 1)
Given          2    Doesn’t fit (false 1)
Not given      3    Assumed that it will fit in the future (true 2)
Not given      4    Unclear whether it would fit in the future (true 3)
Not given      5    Assumed that it would not fit in the future (false 2)
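Purely for illustration, the five distinctions of working hypothesis 3 can be sketched as a small classification. All names used here (Truth, qualify, expected) are hypothetical and not part of the text itself:

```python
from enum import Enum

class Truth(Enum):
    TRUE_1  = "fact is given and fits the claimed meaning"
    FALSE_1 = "fact is given but does not fit the claimed meaning"
    TRUE_2  = "not given now; assumed likely to occur (potentially true)"
    TRUE_3  = "not given now; occurrence unclear (speculatively true)"
    FALSE_2 = "not given now; assumed never to occur (speculatively false)"

def qualify(fact_given: bool, fits: bool = False, expected: str = "unclear") -> Truth:
    """Map the two 'poles' (current real fact vs. known meaning) onto
    the five truth values agreed upon by at least two actors.

    expected: 'likely' | 'unclear' | 'never' -- the actors' shared
    assessment, based on previous experience, for an absent fact.
    """
    if fact_given:
        return Truth.TRUE_1 if fits else Truth.FALSE_1
    return {"likely": Truth.TRUE_2,
            "unclear": Truth.TRUE_3,
            "never": Truth.FALSE_2}[expected]
```

As the following paragraphs stress, every such qualification remains an error-prone assessment by the actors; the sketch only fixes the vocabulary, not the verification itself.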

In this — still somewhat rough — scheme, ‘the meaning of thoughts’ can be qualified in relation to something currently present as ‘fitting’ or ‘not fitting’, or in the absence of something real as ‘might fit’ or ‘unclear whether it can fit’ or ‘certain that it cannot fit’.

However, it is important to note that these qualifications are ‘assessments’ made by the actors based on their ‘own knowledge’. As we know, such an assessment is always prone to error! In addition to errors in perception [*5], there can be errors in one’s own knowledge [*6]. So contrary to the belief of an actor, ‘true 1’ might actually be ‘false 1’ or vice versa, ‘true 2’ could be ‘false 2’ and vice versa.

From all this, it follows that a ‘clear qualification’ of truth and falsehood is ultimately always error-prone. For a community of people who think ‘positively’, this is not a problem: they are aware of this situation and they strive to keep their ‘natural susceptibility to error’ as small as possible through conscious methodical procedures [*7]. People who — for various reasons — tend to think negatively, feel motivated in this situation to see only errors or even malice everywhere. They find it difficult to deal with their ‘natural error-proneness’ in a positive and constructive manner.

TRUTH and MEANING: Process of Processes

In the previous section, the various terms (‘true1,2’, ‘false 1,2’, ‘true 3’) are still rather disconnected and are not yet really located in a tangible context. This will be attempted here with the help of working hypothesis 4 (sketch of a process space).

FIGURE 1 Process: The process space in the real world and in thinking, including possible interactions

The basic elements of working hypothesis 4 can be characterized as follows:

  1. There is the real world with its continuous changes and, within an actor, a virtual space for processes with elements such as perceptions, memories, and imagined concepts.
  2. The link between real space and virtual space occurs through perceptual achievements that represent specific properties of the real world for the virtual space, in such a way that ‘perceived contents’ and ‘imagined contents’ are distinguishable. In this way, a ‘mental comparison’ of perceived and imagined is possible.
  3. Changes in the real world do not show up explicitly but are manifested only indirectly through the perceivable changes they cause.
  4. It is the task of ‘cognitive reconstruction’ to ‘identify’ changes and to describe them linguistically in such a way that it becomes comprehensible on the basis of which properties of a given state a possible subsequent state can arise.
  5. In addition to distinguishing between ‘states’ and ‘changes’ between states, it must also be clarified how a given description of change is ‘applied’ to a given state in such a way that a ‘subsequent state’ arises. This is called here ‘successor generation rule’ (symbolically: ⊢). An expression like Z ⊢V Z’ would then mean that using the successor generation rule ⊢ and employing the change rule V, one can generate the subsequent state Z’ from the state Z. However, more than one change rule V can be used, for example, ⊢{V1, V2, …, Vn} with the change rules V1, …, Vn.
  6. When formulating change rules, errors can always occur. If certain change rules have proven successful in the past in derivations, one would tend to assume for the ‘thought subsequent state’ that it will probably also occur in reality. In this case, we would be dealing with the situation ‘true 2’. If a change rule is new and there are no experiences with it yet, we would be dealing with the ‘true 3’ case for the thought subsequent state. If a certain change rule has failed repeatedly in the past, then the case ‘false 2’ might apply.
  7. The outlined process model also shows that the previous cases (1-5 in the table) only ever describe partial aspects. Suppose a group of actors manages to formulate a rudimentary process theory with many states and many change rules, including a successor generation instruction. In that case, it is naturally of interest how the ‘theory as a whole’ ‘proves itself’. This means that every ‘mental construction’ of a sequence of possible states according to the applied change rules under the assumption of the process theory must ‘prove itself’ in all cases of application for the theory to be said to be ‘generically true’. For example, while the case ‘true 1’ refers to only a single state, the case ‘generically true’ refers to ‘very many’ states, as many until an ‘end state’ is reached, which is supposed to count as a ‘target state’. The case ‘generically contradicted’ is supposed to occur when there is at least one sequence of generated states that keeps generating an end state that is false 1. As long as a process theory has not yet been confirmed as true 1 for an end state in all possible cases, there remains a ‘remainder of cases’ that are unclear. Then a process theory would be called ‘generically unclear’, although it may be considered ‘generically true’ for the set of cases successfully tested so far.

FIGURE 2 Process: The individual extended process space with an indication of the dimensions ‘META-THINKING’ and ‘EVALUATION’.

If someone finds the first figure of the process space already quite ‘challenging’, they will certainly ‘break into a sweat’ with this second figure of the ‘expanded process space’.

Everyone can check for themselves that we humans have the ability – regardless of what we are thinking – to turn our thinking back, at any time, onto the thinking that immediately preceded it: a kind of ‘thinking about thinking’. This opens up an ‘additional level of thinking’ – here called the ‘meta-level’ – on which we thinkers ‘thematize’ everything that is noticeable and important to us in the preceding thinking. [*8] In addition to ‘thinking about thinking’, we also have the ability to ‘evaluate’ what we perceive and think. These ‘evaluations’ are fueled by our ‘emotions’ [*9] and ‘learned preferences’. This enables us to ‘learn’ with the help of our emotions and learned preferences: If we perform certain actions and suffer ‘pain’, we will likely avoid these actions next time. If we go to restaurant X to eat because someone ‘recommended’ it to us, and the food and/or service were really bad, then we will likely not consider this suggestion in the future. Therefore, our thinking (and our knowledge) can ‘make possibilities visible’, but it is the emotions that comment on what happens to be ‘good’ or ‘bad’ when implementing knowledge. But beware, emotions can also be mistaken, and massively so.[*10]

TRUTH AND MEANING – As a collective achievement

The previous considerations on the topic of ‘truth and meaning’ in the context of individual processes have outlined how ‘language’ plays a central role in enabling meaning and, based on this, truth. Furthermore, it was also outlined how truth and meaning must be placed in a dynamic context, in a ‘process model’, as it takes place in an individual in close interaction with the environment. This process model includes the dimension of ‘thinking’ (also ‘knowledge’) as well as the dimension of ‘evaluations’ (emotions, preferences); within thinking there are potentially many ‘levels of consideration’ that can relate to each other (of course, they can also take place ‘in parallel’ without direct contact with each other; this unconnected parallelism is, however, the less interesting case).

As fascinating as the dynamic emotional-cognitive structure within an individual actor can be, the ‘true power’ of explicit thinking only becomes apparent when different people begin to coordinate their actions by means of communication. When individual action is transformed into collective action in this way, a dimension of ‘society’ becomes visible which, in a way, makes one ‘forget’ the ‘individual actors’, because the ‘overall performance’ of the ‘collectively connected individuals’ can be orders of magnitude more complex and sustainable than anything a single individual could ever realize. While a single person can at most make a contribution within their individual lifetime, collectively connected people can accomplish achievements that span many generations.

On the other hand, we know from history that collective achievements do not automatically have to bring about ‘only good’; the well-known history of oppression, bloody wars and destruction is extensive and can be found in all periods of human history.

This points to the fact that the question of ‘truth’ and ‘being good’ is not only a question for the individual process, but also a question for the collective process, and here, in the collective case, this question is even more important, since in the event of an error not only individuals have to suffer negative effects, but rather very many; in the worst case, all of them.

To be continued …

COMMENTS

[*0] The meaning of the terms ‘objective, verifiable’ will be explained in more detail below.

[*1] In a system-theoretical view of the ‘human body’ system, one can formulate the working hypothesis that far more than 99% of the events in a human body are not conscious. You can find this frightening or reassuring. I tend towards the latter, towards ‘reassurance’. Because when you see what a human body as a ‘system’ is capable of doing on its own, every second, for many years, even decades, then this seems extremely reassuring in view of the many mistakes, even gross ones, that we can make with our small ‘consciousness’. In cooperation with other people, we can indeed dramatically improve our conscious human performance, but this is only ever possible if the system performance of a human body is maintained. After all, it contains 3.5 billion years of development work of the BIOM on this planet; the building blocks of this BIOM, the cells, function like a gigantic parallel computer, compared to which today’s technical supercomputers (including the much-vaunted ‘quantum computers’) look so small and weak that it is practically impossible to express this relationship.

[*2] An ‘everyday language’ always presupposes ‘the many’ who want to communicate with each other. One person alone cannot have a language that others should be able to understand.

[*3] A meaning relation actually does what is mathematically called a ‘mapping’: Elements of one kind (elements of the surface structure of the language) are mapped to elements of another kind (the potential meaning elements). While a mathematical mapping is normally fixed, the ‘real meaning relation’ can constantly change; it is ‘flexible’, part of a higher-level ‘learning process’ that constantly ‘readjusts’ the meaning relation depending on perception and internal states.

[*4] The contents of working hypothesis 2 originate from the findings of modern cognitive sciences (neuroscience, psychology, biology, linguistics, semiotics, …) and philosophy; they refer to many thousands of articles and books. Working hypothesis 2 therefore represents a highly condensed summary of all this. Direct citation is not possible in purely practical terms.

[*5] As is known from research on witness statements and from general perception research, in addition to all kinds of direct perception errors, there are many errors in the ‘interpretation of perception’ that are largely unconscious/automated. The actors are normally powerless against such errors; they simply do not notice them. Only methodically conscious controls of perception can partially draw attention to these errors.

[*6] Human knowledge is ‘notoriously prone to error’. There are many reasons for this. One lies in the way the brain itself works. ‘Correct’ knowledge is only possible if the current knowledge processes are repeatedly ‘compared’ and ‘checked’ so that they can be corrected. Anyone who does not regularly check the correctness will inevitably confirm incomplete and often incorrect knowledge. As we know, this does not prevent people from believing that everything they carry around in their heads is ‘true’. If there is a big problem in this world, then this is one of them: ignorance about one’s own ignorance.

[*7] In the cultural history of mankind to date, it was only very late (about 500 years ago?) that a format of knowledge was discovered that enables any number of people to build up fact-based knowledge that, compared to all other known knowledge formats, enables the ‘best results’ (which of course does not completely rule out errors, but extremely minimizes them). This still revolutionary knowledge format has the name ’empirical theory’, which I have since expanded to ‘sustainable empirical theory’. On the one hand, we humans are the main source of ‘true knowledge’, but at the same time we ourselves are also the main source of ‘false knowledge’. At first glance, this seems like a ‘paradox’, but it has a ‘simple’ explanation, which at its root is ‘very profound’ (comparable to the cosmic background radiation, which is currently simple, but originates from the beginnings of the universe).

[*8] In terms of its architecture, our brain can open up any number of such meta-levels, but due to its concrete finiteness, it only offers a limited number of neurons for different tasks. For example, it is known (and has been experimentally proven several times) that our ‘working memory’ (also called ‘short-term memory’) is only limited to approx. 6-9 ‘units’ (whereby the term ‘unit’ must be defined depending on the context). So if we want to solve extensive tasks through our thinking, we need ‘external aids’ (sheet of paper and pen or a computer, …) to record the many aspects and write them down accordingly. Although today’s computers are not even remotely capable of replacing the complex thought processes of humans, they can be an almost irreplaceable tool for carrying out complex thought processes to a limited extent. But only if WE actually KNOW what we are doing!

[*9] The word ’emotion’ is a ‘collective term’ for many different phenomena and circumstances. Despite extensive research for over a hundred years, the various disciplines of psychology are still unable to offer a uniform picture, let alone a uniform ‘theory’, on the subject. This is not surprising, as much of what is assumed to be emotion takes place largely ‘unconsciously’ or is only directly accessible as an ‘internal event’ within the individual. The only thing that seems clear is that we as humans are never ’emotion-free’ (this also applies to so-called ‘cool’ types, because the apparent ‘suppression’ or ‘repression’ of emotions is itself part of our innate emotionality).

[*10] Of course, emotions can also lead us seriously astray or even to our downfall (being wrong about other people, being wrong about ourselves, …). It is therefore not only important to usefully ‘sort out’ the factual things of the world through ‘learning’; we must also actually ‘keep an eye on’ our own emotions and check when and how they occur and whether they actually help us. Primary emotions (such as hunger, sex drive, anger, addiction, ‘crushes’, …) are selective and situational, can develop great ‘psychological power’, and can thus obscure our view of the possible or very probable ‘consequences’, which can be considerably damaging for us.

[*11] The term ‘narrative’ is increasingly used today to describe the fact that a group of people use a certain ‘image’, a certain ‘story’, in their thinking to frame their perception of the world and to be able to coordinate their joint actions. Ultimately this applies to all collective action, even for engineers who want to develop a technical solution. In this respect the description in the German Wikipedia is a bit ‘narrow’: https://de.wikipedia.org/wiki/Narrativ_(Sozialwissenschaften)

REFERENCES

The following sources are just a tiny selection from the many hundreds, if not thousands, of articles, books, audio documents and films on the subject. Nevertheless, they may be helpful for an initial introduction. The list will be expanded from time to time.

[1a] Propaganda, in the German Wikipedia https://de.wikipedia.org/wiki/Propaganda

[1b] Propaganda in the English Wikipedia : https://en.wikipedia.org/wiki/Propaganda (The English version appears more systematic and covers longer periods of time and more areas of application.)

[3] Propaganda der Russischen Föderation, hier: https://de.wikipedia.org/wiki/Propaganda_der_Russischen_F%C3%B6deration (German source)

[6] Mischa Gabowitsch, Mai 2022, Von »Faschisten« und »Nazis«, https://www.blaetter.de/ausgabe/2022/mai/von-faschisten-und-nazis#_ftn4 (German source)

DOWNSIZING TEXT GENERATORS – UPGRADING HUMANS. A Thought Experiment

Author: Gerd Doeben-Henisch

Time: Nov 12, 2023 — Nov 12, 2023

Email: gerd@doeben-henisch.de

–!! This is not yet finished !!–

CONTEXT

This text belongs to the topic Philosophy (of Science).

INTRODUCTION

The ‘coming out’ of a new type of ‘text generator’ called ‘chatGPT’ in November 2022 (it is not the only one around) caused an explosion of publications and uses around the world. The author has been working in this field for about 40 years, and nothing like this has ever happened before. What has happened? Is this the beginning of the end of humans as the main actors on this planet (yes, we know, not really the best ones so far), or is there something in between that we overlook, captivated by these new text generators?

After reading many papers since that event, talking with people, experimenting directly with chatGPT4, continuing to work on theories, and working with people in a city on new forms of ‘citizens at work in their community’, a picture slowly formed in my head of how it might be possible to ‘benchmark’ text generators directly against human activities.

After several first trials, everything came together when I gave a talk at Goethe University in Frankfurt on Friday, Nov 10. [1] There was a wonderful audience of elderly people from the so-called University of the 3rd Age … a bit different from young students 🙂

An idea hit me like a bolt of lightning when I wrote the talk down afterwards: the fundamental role of literature for our understanding of the world and of people could be completely eliminated by the use of text generators. The number of written texts will explode in the near future, but the meaning of the world will vanish more and more at the same time. You will see letters, but there will be no more meaning behind them. And with the meaning, the world of humans will disappear. You won’t even be able to know yourself anymore.

Clearly, this can only happen if we completely substitute text generators for our own thinking and writing.

Is the author of this text a bit ‘ill’ to write down such ideas, or are there arguments that make clear why this could be the fate of humans after the year 2023?

SET-UP OF AN EXPERIMENT

[1] See the text written down after the speech: https://www.cognitiveagent.org/2023/11/06/kollektive-mensch-maschine-intelligenz-im-kontext-nach

chatGPT4: Cognitive Science, HCI, Rationality (Stanovich 2016), Dual-Process Theory (Kahneman)

Last change: Nov 1, 2023

CONTEXT

This text belongs to the theme ‘chats with chatGPT4’.

After attempting a review of the book by Stanovich et al. (2016), I checked what chatGPT4 could contribute. As expected, chatGPT4 replicated all the common concepts used in the context of cognitive science, Stanovich, and Kahneman. None of the real questions about methodological details were answered. Nevertheless, chatGPT4 lists all the main concepts one should know when dealing with these subjects, but that’s it. This is not really science, and has nothing to do with philosophy or philosophy of science.

PROTOCOL OF THE CHAT

User

What do you know about cognitive science

ChatGPT

Cognitive science is an interdisciplinary field that studies the mind, intelligence, and behavior from multiple perspectives. At its core, it seeks to understand how the human mind works and how it might be replicated or modeled, including through artificial intelligence. The field encompasses a broad range of topics, from the basic processes of cognition to complex interactions with the environment. Here are some key aspects of cognitive science:

1. **Interdisciplinary Nature**: Cognitive science integrates methods and theories from several disciplines, including psychology, neuroscience, linguistics, philosophy, computer science, anthropology, and biology.
2. **Research Areas**:
– Perception: Understanding how organisms interpret sensory data.
– Attention: Investigating how the mind allocates cognitive resources to various stimuli.
– Memory: Exploring the processes involved in storing and retrieving information.
– Language: Studying the cognitive processes underlying language acquisition, comprehension, and production.
– Problem Solving and Decision Making: Analyzing how humans and other animals think, make decisions, and solve problems.
– Consciousness: Examining the nature of subjective experience and self-awareness.
3. **Cognitive Neuroscience**: This branch of cognitive science looks at how brain structures and functions are related to cognitive processes. It makes use of technologies such as fMRI and EEG to study the brain in action.
4. **Computational Models**: Cognitive scientists often construct computational models to simulate aspects of human cognition. These models can range from symbolic representations that mimic logic and reasoning to connectionist models that emulate neural networks.
5. **Philosophy of Mind**: Philosophical questions about the nature of mind, the relationship between mind and body, and the nature of consciousness are central to cognitive science. Philosophy contributes questions about the nature of knowledge, perception, and the structure of language, influencing cognitive theories.
6. **Human-Computer Interaction (HCI)**: This field draws on cognitive science principles to improve the design of user interfaces and enhance user experience with technology.
7. **Education and Application**: Insights from cognitive science have been applied to education, informing teaching methods and instructional design to align with how the brain learns best.
8. **Artificial Intelligence**: Cognitive science and AI are deeply intertwined, with insights from the study of human cognition informing the development of algorithms and systems that aim to replicate aspects of human thought.

Cognitive science continues to evolve, with new findings and technologies perpetually reshaping our understanding of the human mind. As it progresses, it offers potential advancements not only in AI but also in how we approach mental health, education, design, and a multitude of other areas that require an understanding of how we think and learn.

User

You are listing many disciplines engaged in cognitive science. How are all these disciplines interrelated?

ChatGPT

The disciplines within cognitive science are interrelated through their shared goal of understanding the mind and cognition. Here’s how each contributes to the field and how they intersect with each other:

1. **Psychology**: Psychology contributes experimental methods and theories about behavior and mental processes. It often serves as the basis for hypotheses about cognitive mechanisms that other disciplines then further investigate.
2. **Neuroscience**: Neuroscience provides insights into the brain structures and neural circuits that underlie cognitive functions. It adds a biological perspective to theories of cognition that can confirm, refute, or refine psychological theories.
3. **Linguistics**: Linguistics studies the structure, use, and psychology of language, which is a complex cognitive ability. Theories of syntax, semantics, and language acquisition inform models of how the brain processes language.
4. **Philosophy**: Philosophy addresses fundamental questions about the nature of knowledge, mind, reality, and values. It offers frameworks for understanding consciousness, intentionality, and the mind-body problem, which can inform empirical research.
5. **Computer Science**: Computer science, particularly artificial intelligence, contributes algorithms and computational models that mimic cognitive processes. The feedback loop between AI and cognitive science is strong: cognitive science inspires AI development, and AI provides tools and metaphors for understanding cognition.
6. **Anthropology**: Anthropology brings in the cultural dimension, studying how cognition varies across different human societies. This helps to differentiate between universal cognitive processes and those that are culturally specific.
7. **Biology**: Biology, especially evolutionary biology, contributes to our understanding of the evolutionary pressures that shaped our cognitive abilities. It provides a framework for understanding why certain cognitive functions may have developed.
8. **Education**: Research in cognitive science directly impacts educational practices by providing insights into how learning and memory work, thus informing teaching strategies and curriculum development.
9. **Human-Computer Interaction (HCI)**: HCI draws on principles from psychology and computer science to design better interfaces. It uses cognitive models to predict how users will interact with technology and to create user experiences that complement human cognitive abilities.

The interrelation is often practical; for example:

– A cognitive psychologist might devise an experiment based on a philosophical question.
– A neuroscientist might use the experiment’s results to guide brain imaging studies.
– A linguist might analyze the language data from the experiment.
– A computer scientist might model the cognitive process observed in a computational framework.
– An anthropologist might compare these results with similar cognitive processes in different cultures.
– An educationalist might apply the findings to improve classroom teaching methods.
– An HCI researcher might use the findings to design better computer interfaces.

The intersection of these disciplines within cognitive science ensures a comprehensive approach to studying cognition, taking into account its complexity and richness by viewing it through multiple lenses.

User

HCI is an interesting example. HCI analyses the interaction between humans and machines with regard to certain tasks and within certain environments. Where do the cognitive models come from which shall be used?

Remark on the following: chatGPT4 offered two versions of its answer. I could select one and opted for version 2:

ChatGPT

Human-Computer Interaction (HCI) relies heavily on cognitive models to predict and interpret how users interact with computer systems. These cognitive models are derived from various sources within the broader field of cognitive science:

1. **Psychological Theories**: Many cognitive models in HCI are based on psychological theories about how the human mind processes information. For instance, models like the Cognitive Load Theory help designers understand how to present information to users without overwhelming them.
2. **Cognitive Architectures**: There are comprehensive computational frameworks that aim to simulate human cognitive processes. Examples include the ACT-R (Adaptive Control of Thought-Rational) and Soar architectures. These are often used in HCI to predict how users will learn new systems and solve problems.
3. **Human Information Processing Models**: HCI often uses specific models that describe the stages of human information processing, such as the Model Human Processor, which includes perceptual, cognitive, and motor processors, each with their own cycle times and memories.
4. **Attention Models**: Models of visual attention, such as the See-Think-Do framework, explain how users allocate their attention when interacting with interfaces and can inform the design of systems to make important information more salient.
5. **Memory Models**: Understanding working memory and long-term memory is crucial for interface design. HCI uses models like the Working Memory model by Baddeley and Hitch to predict how users retain and process information while completing tasks.
6. **Error Prediction Models**: Models like Norman’s Seven Stages of Action are used to predict possible errors in human-computer interaction, allowing for the design of error-resistant systems.
7. **Task Models**: Task analysis models, such as GOMS (Goals, Operators, Methods, and Selection rules), are used to estimate how long it takes a user to accomplish a task and to identify ways to streamline the interaction.
8. **Ecological Models**: These models, like the distributed cognition approach, look at cognitive processes as distributed across people, time, and tools, considering the environment in which HCI takes place.
9. **User Mental Models**: HCI considers the mental models that users form of how systems work. Designers try to align interfaces with these mental models to make them more intuitive.
10. **Usability Heuristics**: While not models in the traditional sense, heuristics like those proposed by Jakob Nielsen are based on principles of cognitive psychology and are used to evaluate user interfaces.
11. **Empirical Studies**: Empirical studies, including both qualitative and quantitative user research, continue to inform and refine cognitive models. User testing and observation can lead to the development of new models or the refinement of existing ones.

These models are not mutually exclusive and can often be combined or used in parallel to tackle different aspects of HCI design and evaluation. The choice of model(s) typically depends on the particular aspect of human cognition that is most relevant to the interaction being studied or the problem being solved. By applying cognitive models, HCI professionals aim to create user-centered designs that are efficient, effective, and satisfying for the user.

User

Your answer is helpful. But these different contributions are usually not written within a coherent conceptual framework. Is it possible to extract those concepts which are common between these different contributions?

ChatGPT

Indeed, the diverse disciplines within cognitive science and HCI contribute a multitude of models and theories, and these contributions often lack a unifying framework. However, several core concepts are shared across these models and form the bedrock of cognitive science and HCI. Here are some of these shared concepts:

1. **Information Processing**: A key concept across many cognitive models is that humans process information in stages (e.g., encoding, storage, retrieval). This idea, originating from the computer metaphor of the mind, is a common thread in both psychological models and HCI designs.
2. **Attention**: The concept of attention, how it is directed, and how it is managed, is central to understanding human cognition and designing interfaces. Attentional models inform interface design by suggesting where to place important information and how to alert users to changes.
3. **Memory**: The functions of working memory and long-term memory and their limitations are critical concepts. They influence how much information should be presented at once and how interfaces can enhance recall through cues and design.
4. **Mental Models**: Across disciplines, there is recognition that users create mental models of how systems work, and the closeness of these models to the actual workings of the system can affect usability.
5. **Cognitive Load**: Cognitive load refers to the amount of mental effort being used in the working memory. Across HCI and cognitive science, there’s an emphasis on designing systems that manage and minimize cognitive load to prevent errors and improve efficiency.
6. **Problem Solving and Decision Making**: Understanding the cognitive processes behind problem solving and decision making is essential for creating interfaces that support these activities.
7. **Metacognition**: Awareness and understanding of one’s own thought processes are recognized as crucial for learning and interaction with systems. This is reflected in designs that support user control and flexibility.
8. **Perception**: Theories of visual, auditory, and haptic perception are fundamental in designing interfaces that are perceivable and that map onto the human sensory capabilities.
9. **User Experience**: The concept of creating positive interactions encompasses not just the usability but also the affective responses, such as satisfaction and enjoyment.
10. **Error Recognition and Recovery**: Understanding how and why errors occur in human-machine interaction guides the design of systems that help users avoid mistakes or recover from them easily.
11. **Adaptability and Learning**: The concept that users learn and adapt to systems, and that systems can adapt to users, is common across cognitive science and HCI. This is key in designing interfaces that are capable of accommodating users of varying skill levels and learning styles.
12. **Multimodality**: Recognizing that human interaction with the environment is multimodal has led to designs that utilize multiple modes of input and output (e.g., touch, voice, gesture).

These shared concepts help to form a bridge across the various contributions to cognitive science and HCI, providing a more cohesive understanding of how to design technology that works in harmony with human cognitive processes. Design principles and guidelines in HCI are often derived from these fundamental cognitive concepts, ensuring that despite the lack of a single, coherent framework, the discipline remains grounded in an understanding of human cognition.

User

I agree with you. Nevertheless, it shows many important questions are still open. Last questions: How would you locate the book ‘The Rational Quotient’ of Stanovich and others in this framework of HCI?

ChatGPT

“The Rationality Quotient” by Keith E. Stanovich and colleagues is actually more directly related to cognitive psychology and the study of rational thought rather than Human-Computer Interaction (HCI) per se. However, its insights can certainly be located within the broader context of HCI and cognitive science. Stanovich’s work focuses on the nature of rationality, distinguishing between different types of cognitive processes: those that are algorithmic (akin to computational processes in the brain), those that relate to reflective thinking and decision-making, and those that pertain to the knowledge base that individuals have at their disposal. Here’s how this relates to HCI:

1. **User Decision Making**: HCI design often considers how users make decisions while interacting with interfaces. Stanovich’s discussion of rational thinking can inform the design of systems that support better decision-making processes.
2. **Error Mitigation**: Understanding why people make irrational decisions or errors can help HCI professionals design systems that reduce the likelihood of user errors.
3. **Cognitive Biases**: The work on rationality includes the study of cognitive biases, which has implications for HCI in terms of designing interfaces that help mitigate these biases or use them to guide users towards more productive behaviors.
4. **Personalization and Adaptability**: Rationality research may influence the development of adaptive systems that personalize the user experience based on an individual’s demonstrated rationality levels or decision-making styles.
5. **Instructional Design**: Insights from studies of rationality can impact how educational technologies are designed to enhance learning and critical thinking skills.
6. **Reflective Interfaces**: The concept of reflective thinking can lead to the creation of interfaces that encourage users to think more deeply about their choices, such as in systems that prompt users to confirm or reconsider actions.

While “The Rationality Quotient” doesn’t provide a direct commentary on HCI, the underlying cognitive principles it explores are highly relevant for designing systems that align with human cognitive strengths and compensate for weaknesses. Thus, within the framework of HCI, Stanovich’s work can be seen as contributing to the foundational cognitive understanding necessary to create user-centered designs that support effective, rational, and satisfying interactions with technology.

User

This sounds positive for the book. But I ask myself in which sense the concept of ‘rationality’ can be related to ‘cognitive processes’?

ChatGPT

The concept of “rationality” is deeply intertwined with cognitive processes. Rationality, in the context of cognitive science and psychology, is generally understood as the quality of being reasonable, based on facts or reason. It refers to the way in which individuals think, make decisions, and act based on certain principles of logic and consistency. Here’s how rationality relates to cognitive processes:

1. **Dual-Process Theories**: Theories like those proposed by Daniel Kahneman, which distinguish between two types of thinking processes—fast, automatic, intuitive thought (System 1) and slower, more deliberate, analytical thought (System 2)—are directly related to the study of rationality. Rationality is often associated with the reflective, analytical processes of System 2.
2. **Decision-Making**: Rationality is central to decision-making, which is a complex cognitive process involving the evaluation of options, the assessment of probabilities and outcomes, and the selection of a course of action. Cognitive science studies the heuristics and biases that affect this process.
3. **Logic and Reasoning**: Cognitive processes related to understanding logical structures and applying rules of inference fall under the umbrella of rationality. This includes deductive reasoning, inductive reasoning, and abductive reasoning.
4. **Problem Solving**: Rational approaches to problem solving involve systematic analysis, the application of knowledge, and consideration of potential solutions. This relates to cognitive processes involving working memory, planning, and executive function.
5. **Belief Formation and Revision**: Rationality also pertains to how individuals form, maintain, and revise their beliefs in light of new evidence. This involves cognitive processes related to learning, memory, and critical thinking.
6. **Metacognition**: Rational thinking involves metacognition, which is the awareness and control of one’s own cognitive processes. Metacognitive skills enable individuals to recognize when a problem is beyond their current understanding and to adjust their approach accordingly.
7. **Epistemic Rationality**: This aspect of rationality is concerned with how beliefs are justified and how knowledge is constructed, which is central to cognitive processes involving perception, interpretation, and the assessment of truth claims.

Stanovich and others in the field of cognitive science study these aspects of cognition to understand how and why people think and act the way they do, especially when their actions deviate from what might be considered objectively rational. Their work seeks to explain cognitive biases, irrational preferences, and other anomalies in human thought and behavior. In “The Rationality Quotient,” Stanovich et al. propose a framework for understanding and measuring rationality that goes beyond the traditional IQ tests, which typically focus on problem-solving abilities and cognitive capacity. They emphasize the importance of rational thought and behavior in everyday decision-making and the practical outcomes of such thought processes. The concept of rationality thus serves as a lens through which cognitive processes are evaluated in terms of their effectiveness and alignment with goals and reality.

User

You are listing important contributions to the question. You are mentioning the Dual-Process Theory by Kahnemann. Why do you call this a ‘theory’?

ChatGPT

The term “Dual-Process Theory” is often used to describe a range of theoretical frameworks in psychology that posit two distinct modes or systems of thinking that operate when we process information. Daniel Kahneman popularized one of the most well-known dual-process models with his characterization of System 1 and System 2 in his book “Thinking, Fast and Slow.” The reason it’s called a “theory” is because it offers a systematic explanation for observations and findings from numerous psychological studies. A theory in science is a well-substantiated explanation acquired through the scientific method and repeatedly tested and confirmed through observation and experimentation. Here’s why the Dual-Process Theory fits this definition:

1. **Empirical Support**: The Dual-Process Theory is supported by a wide range of experiments and observations in cognitive psychology, neuropsychology, and related fields.
2. **Predictive Power**: It provides predictions about human behavior that can be tested in experimental settings. For instance, it predicts that under certain conditions, people will rely more on fast, intuitive thinking (System 1) than on slow, deliberative thinking (System 2).
3. **Explanatory Scope**: The theory explains a variety of phenomena across different contexts, such as why people struggle with statistical reasoning or are susceptible to cognitive biases.
4. **Integrative Framework**: It integrates findings from various areas of cognitive science into a coherent framework, explaining how different types of thinking contribute to behavior and judgment.
5. **Heuristic Value**: The theory has heuristic value in that it generates new hypotheses and lines of research, driving the field forward.

In scientific parlance, a theory is not a mere guess or a tentative hypothesis; it is a comprehensive explanation for a range of phenomena. However, like all scientific theories, Dual-Process Theory is not beyond criticism or revision. As new data emerge, theories are often refined or challenged. Some researchers question the dichotomy proposed by Dual-Process Theories or argue for more nuanced or integrated models of cognition. Despite this, the Dual-Process Theory as presented by Kahneman has been immensely influential in our understanding of human thought and has shaped research in cognitive psychology, economics, and other fields.

User

OK, I agree.

ChatGPT

Great, I’m glad to hear that the information resonated with you! If you have any more questions or if there’s another topic you’re curious about, feel free to ask.

REVIEW: Keith E. Stanovich, Richard F. West, Maggie E. Toplak, “The Rationality Quotient. Toward a Test of Rational Thinking”, MIT Press, 2016

(Last change: Nov 1, 2023)

CONTEXT

This text belongs to the overall theme REVIEWS.

In recent months I have been engaged with the topic of text-generating algorithms and their possible impact on scientific discourse (some first notes on this discussion can be found here (https://www.uffmm.org/2023/08/24/homo-sapiens-empirical-and-sustained-empirical-theories-emotions-and-machines-a-sketch/)). In this context it is important to clarify the role and structure of human actors as well as the concept of intelligence. Meanwhile I have abandoned the word ‘intelligence’ completely, because its inflationary use in today’s mainstream pulverizes any meaning. Even within a single discipline, like psychology, you can find many different concepts. Against this background I have read the book by Stanovich et al. as a prominent example of the use of the concept of intelligence, there combined with the concept of rationality, which is no less vague.

Introduction

The book “The Rationality Quotient” from 2016 does not represent the beginning of a discourse but is a kind of summary of a long-lasting discourse with many preceding publications. This makes the book interesting, but also difficult to read at first, because on nearly every page it uses theoretical terms that are assumed to be known to the reader, and it cites other publications without sufficiently explaining why exactly these publications are important. This is no argument against the book, but it places a burden on the reader, who has to learn a lot to understand the text.

A text that sums up its subject is valuable, because it presents a consolidated view of the subject, which enables a kind of clarity typical of such an elaborated point of view.

The following review does not aim to give a complete account of every detail of this book, but only to present its main thesis and then to analyze the methods used and the epistemological framework applied.

Main Thesis of the Book

The review starts with the basic assumptions and the main thesis.

FIGURE 1 : The beginning. Note: the number ‘2015’ has to be corrected to ‘2016’.

FIGURE 2 : First outline of cognition. Note: the number ‘2015’ has to be corrected to ‘2016’.

As mentioned in the introduction, you will not find in the book a real overview of the history of psychological research on the concept of intelligence, nor an overview of the historical discourse on the concept of rationality, even though the latter also has a rich tradition in philosophy. Thus, you somehow have to know this already.

There are some clear warnings with regard to the fuzziness of the concept of rationality (p.3) as well as the concept of intelligence (p.15). From the point of view of philosophy of science it would be interesting to know what circumstances cause such fuzziness, but this is not a topic of the book. The book talks within its own selected conceptual paradigm. Faced with the dilemma of which intelligence paradigm to use, the book has decided to work with the Cattell-Horn-Carroll (CHC) paradigm, which some call a theory. [1]

Right from the beginning it is explained that the discussion of intelligence lacks a clear account of a full model of human cognition (p.15) and that intelligence tests therefore mostly measure only parts of human cognitive functions (p.21).

Thus let us have a more detailed look at the scenario.

[1] For a first overview of the Cattell–Horn–Carroll theory see: https://en.wikipedia.org/wiki/Cattell%E2%80%93Horn%E2%80%93Carroll_theory

Which point of View?

The book starts with a first characterization of the concept of rationality from a point of view that is not really clear. From various remarks one gets hints of modern cognitive science (4,6), decision theory (4), and probability calculus (9), but a clear description is missing.

It is declared right from the beginning that the main aim of the book is the construction of a rational thinking test (4), because for the authors the intelligence tests in use, later narrowed to the Cattell-Horn-Carroll (CHC) type of intelligence test (16), are too narrow in what they measure (15, 16, 21).

Related to the term rationality, the book lists some requirements which the term should fulfill (e.g. ‘rationality as a continuum’ (4), ‘empirically based’ (4), ‘operationally grounded’ (4), a ‘strong definition’ (5), a ‘normative one’ (5), a ‘normative model of optimum judgment’ (5)), but it remains more or less open what these requirements imply and what tacit assumptions have to be met for this to work.

The two requirements ‘empirically based’ and ‘operationally grounded’ point in the direction of a tacitly assumed concept of an empirical theory, but exactly this concept, especially in association with the term cognitive science, isn’t really clear today.

Because the authors make many statements on the following pages that claim to be taken seriously, it seems important for the discussion in this review to clarify the conditions under which language expressions have ‘meaning’ and can be classified as ‘true’.

If we assume, tentatively, that the authors take a scientific theory to be primarily a text whose expressions have a meaning that can transparently be associated with empirical facts, and that an expression is then understood as grounded and classified as true, then we have characterized nothing more than a normal text: one that can be used in everyday life for the communication of meanings that can be demonstrated to be true.

Is there a difference between such a ‘normal text’ and a ‘scientific theory’? And, especially here, where the context should be a scientific theory within the discipline of cognitive science: what distinguishes a normal text from a ‘scientific theory within cognitive science’?

Because the authors do not explain their conceptual framework called cognitive science, we resort here to a most general characterization [2,3], which tells us that cognitive science is not a single discipline but an interdisciplinary study drawing on many different disciplines. It has not yet reached a state where all the methods and terms used are embedded in one general coherent framework. Thus the relationships among the conceptual frameworks in use are mostly fuzzy and unclear. From this it follows directly that the relationships among the different terms (e.g. ‘underlying preferences’ and ‘well ordered’) are rather unclear within such a blurred context.

Even the simple characterization of an expression as ‘having an empirical meaning’ is unclear: what kinds of empirical subject matter and terms are involved? Among the disciplines listed, linguistics [4], psychology [5], and neuroscience [6], among others, are mentioned. But each of these disciplines is itself today a broad, non-integrated field of methods dealing with a multifaceted subject.

Using an Auxiliary Construction as a Minimal Point of Reference

Instead of becoming paralyzed by these all-encompassing characterizations of the individual disciplines, one can step back and take a look at basic assumptions about empirical perspectives.

If we take a group of human observers who are to investigate these subjects, we could make the following assumptions:

  1. Empirical linguistics deals with languages, spoken as well as written by human persons within certain environments, and these can be observed as empirical entities.
  2. Empirical psychology deals with the behavior of human persons (a kind of biological system) within certain environments, and this behavior can be observed.
  3. Empirical neuroscience deals with the brain as part of a body located in some environment, and all of this can be observed.

Empirical observations of certain kinds of phenomena can be used to define more abstract concepts, relations, and processes. These more abstract concepts, relations, and processes have ‘as such’ no empirical meaning! They constitute a formal framework that has to be correlated with empirical facts in order to acquire empirical meaning. As is known from philosophy of science [7], the combination of empirical concepts within a formal framework of abstract terms can enable ‘abstract meanings’ which, via logical inference, can produce statements that are, at the moment of stating them, not empirically true, because the ‘real future’ has not yet happened. And on account of the ‘generality’ of abstract terms compared with the finiteness and concreteness of empirical facts, it can happen that the inferred statements never become true. Therefore the mere usage of abstract terms within a text called a scientific theory does not guarantee valid empirical statements.

And in general one has to state that a coherent scientific theory integrating, e.g., linguistics, psychology, and neuroscience does not yet exist.

To speak of cognitive science as if it represented a clearly defined, coherent discipline therefore seems misleading.

This raises questions about the project of constructing a comprehensive assessment of rational thinking (CART).

[2] See ‘cognitive science’ in wikipedia: https://en.wikipedia.org/wiki/Cognitive_science

[3] See too ‘cognitive science’ in the Stanford Encyclopedia of Philosophy: https://plato.stanford.edu/entries/cognitive-science/

[4] See ‘linguistics’ in wikipedia: https://en.wikipedia.org/wiki/Linguistics

[5] See ‘psychology’ in wikipedia: https://en.wikipedia.org/wiki/Psychology

[6] See ‘neuroscience’ in wikipedia: https://en.wikipedia.org/wiki/Neuroscience

[7] See ‘philosophy of science’ in wikipedia: https://en.wikipedia.org/wiki/Philosophy_of_science

‘CART’ TEST FRAMEWORK – A Reconstruction from the point of View of Philosophy of Science

Before digging deeper into the theory, I try to understand its intended outcome as a point of reference. The following figure 3 gives some hints.

FIGURE 3 : Outline of the test framework based on the appendix in Stanovich et al. 2016. This outline is a reconstruction by the author of this review.

It seems important to distinguish at least three main parts of the whole scientific endeavor:

  1. The group of scientists that has decided to work on a certain problem.
  2. The generated scientific theory as a text.
  3. The description of a CART test, which specifies a procedure for associating the abstract terms of the theory with real facts.

From the group of scientists (Stanovich et al.) we know that they understand themselves as cognitive scientists (without a clear characterization of what this means concretely).

The intended scientific theory as a text is assumed here to be realized in the book under review.

The description of a CART Test is here taken from the appendix of the book.

To understand the theory it is interesting to see that in the real test the test system (assumed here to be a human person) has to read (or hear?) an instruction on how to proceed with a task form, and then has to process the test form according to how it has understood the instructions and the form as given.

The result is a completed test form.

And it is then this completed test form which will be rated according to the assumed CART theory.
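The sequence just described (instruction, internal understanding, filling out the form, rating) can be sketched as a minimal data flow. The sketch below is my own illustration, not code from Stanovich et al.; all class and function names are hypothetical. Its point is that every step before the rating happens inside the test person and is not directly observable:

```python
# Hypothetical sketch of the CART test procedure as a data flow.
# All names are illustrative inventions, not taken from Stanovich et al.

class Person:
    """Toy stand-in for the test person; its internals are not observable."""

    def interpret(self, instruction):
        # Stands in for text understanding, which happens entirely
        # inside the actor and leaves no trace in the observable data.
        return instruction.lower()

    def solve(self, item, understanding):
        # Stands in for processing one item of the test form.
        return "yes" if "choose" in understanding else "no"

def run_test(instruction, form, person):
    """Return the completed test form (the only observable artifact)."""
    understanding = person.interpret(instruction)  # internal step
    return {item: person.solve(item, understanding) for item in form}

def rate(answers, scoring_key):
    """Rate a completed form: only this step works on observable data."""
    return sum(1 for item, a in answers.items() if scoring_key.get(item) == a)
```

Only the dictionary returned by `run_test` reaches the rater; the `interpret` step, where the understanding of the instruction can go wrong, is invisible in the rated data.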

This complete paradigm raises a whole set of questions which cannot all be answered here.

Mix-Up of Abstract Terms

Because the test scenario presupposes a CART theory, and within this theory some kind of model of the intended test users, it can be helpful to take a closer look at this assumed CART model, which is located in a person.

FIGURE 4 : General outline of the logic behind CART according to Stanovich et al. (2016).

The presented cognitive architecture is meant to provide a framework for the CART (Comprehensive Assessment of Rational Thinking), and this framework includes a model. The model is not only assumed to contextualize and classify heuristics and tasks; it also presents rationality in a way that allows one to deduce the mental characteristics included in rationality (cf. 37).

Because the term rationality is not an individual empirical fact but an abstract term of a conceptual framework, this term as such has no meaning. The meaning of this abstract term has to be established by relations to other abstract terms which themselves are sufficiently related to concrete empirical statements. And these relations between abstract terms and empirical facts (represented as language expressions) have to be represented in such a manner that it is transparent how the measured facts are related to the abstract terms.

Here Stanovich et al. use another abstract term, Mind, which is associated with so-called mental characteristics: the Reflective mind, the Algorithmic level, and Mindware.

The text then tells us that rationality presents mental characteristics. What does this mean? Is rationality different from the mind, which has characteristics that rationality can somehow present by means of the mind, or is rationality rather a part of the mind that manifests itself in these mental characteristics? But what kind of meaning could it be for an abstract term like rationality to be part of the mind? Without an explicit model associated with the term mind that locates the other abstract term rationality within it, there is no usable meaning here.

These considerations are the effect of a text that uses different abstract terms in a rather unclear way. In a scientific theory this should not be the case.

Measuring Degrees of Rationality

At the beginning of chapter 4, Stanovich et al. look back to chapter 1. There they built up a chain of arguments illustrating a general perspective (cf. 63):

  1. Rationality has degrees.
  2. These degrees of rationality can be measured.
  3. Measurement is realized by experimental methods of cognitive science.
  4. The measuring is based on the observable behavior of people.
  5. The observable behavior can manifest whether the individual actor (a human person) follows assumed preferences related to an assumed axiom of choice.
  6. Observable behavior which is classified as manifesting assumed internal preferences according to an assumed internal axiom of choice can show descriptive and procedural invariance.
  7. Based on this deduced descriptive and procedural invariance, it can further be inferred that these actors behave as if they were maximizing utility.
  8. It is difficult to assess utility maximization directly.
  9. It is much easier to assess whether one of the axioms of rational choice is being violated.

These statements characterize the logic of the CART according to Stanovich et al. (cf. 64).
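Point 9 of this chain, that it is easier to check whether an axiom of rational choice is violated than to assess utility maximization directly, can be illustrated with a small sketch. Checking the transitivity of observed pairwise choices is a generic example of such an axiom test; the function below is my own illustration under that assumption, not the CART scoring procedure:

```python
from itertools import permutations

def transitivity_violations(prefs):
    """List triples (a, b, c) with a over b and b over c, but not a over c.

    `prefs` is a set of observed pairwise choices: (x, y) means that
    x was chosen over y. Only observable choice behavior enters here;
    no internal 'utility' is accessed.
    """
    items = {x for pair in prefs for x in pair}
    return [(a, b, c)
            for a, b, c in permutations(items, 3)
            if (a, b) in prefs and (b, c) in prefs and (a, c) not in prefs]
```

A cyclic choice pattern such as {a over b, b over c, c over a} yields violation triples, while a transitive pattern yields none; a classifier of this kind labels behavior as ‘violating an axiom’ without any direct measurement of utility maximization.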

A major point in this argumentation is the assumption that observable behavior is such that one can deduce from its properties those attributes which point (i) to an internal model of an axiom of choice, (ii) to internal processes which manifest the effects of this internal model, and (iii) to certain characteristics of these internal processes which allow one to deduce whether utility is being maximized or not.

These are very strong assumptions.

If one further takes into account the explanations on pages 7f about the required properties of the abstract term axiom of choice (cf. figure 1), these assumptions appear very demanding.

Is it possible to extract the necessary meaning from observable behavior in a way that is clear enough by empirical standards to establish that this behavior shows property A and not property B?

As we know from the description of the CART in the appendix of the book (cf. figure 3), the real behavior assumed for a CART consists of (i) reading (or hearing?) an instruction communicated in ordinary English, then (ii) behavior derived from the understanding of this instruction, which (iii) manifests itself in reading a form with a text and filling out this form at predefined positions in the required language.

The described procedure is quite common throughout psychology and similar disciplines. But it is well known that the understanding of language instructions is very error-prone. Furthermore, the presentation of a task as a text is inevitably highly biased and likewise error-prone with regard to understanding (this is one reason why, in usability testing, purely text-based tests are rather useless).

The point is that the empirical basis is not given as a protocol of observations of language-free behavior, but of behavior that is almost completely embedded in the understanding and handling of texts. This points to the underlying processes of text understanding, which are completely internal to the actor. There is no prewired connection between the observable strings of signs constituting a text and the possible meaning organized by the individual processes of text understanding.

Stopping Here

Having reached this point of reading and trying to understand, I decided to stop here: too many questions remain open on all levels of scientific discourse, and the relationships between the main concepts and terms in the book of Stanovich et al. are not clear enough. I therefore feel confirmed in my initial working hypothesis that the concept of intelligence today is far too vague, too ambiguous, to still contain any useful kernel of meaning. And concepts like rationality and mind (and many others) seem to fare no better.

Chatting with chatGPT4

Since April 2023 I have been checking the ability of chatGPT4 to contribute to a philosophical and scientific discourse. The working hypothesis is that chatGPT4 is good at summarizing the common concepts used in public texts, but is not capable of critical evaluation, of really new creative ideas, or of systematic analysis of the methods and frameworks used, their interrelations, their truth conditions, and much more. Nevertheless, it is a good ‘common sense check’. So far I have not been able to learn anything new from these chats.

If you have read this review with all its details and open questions, you will perhaps be a little disappointed by the answers from chatGPT4. But stay calm: it is somewhat helpful.

Protocol with chatGPT4

Pain does not replace the truth …

Time: Oct 18, 2023 — Oct 24, 2023
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This post is part of the uffmm science blog. It is a translation from the German source: https://www.cognitiveagent.org/2023/10/18/schmerz-ersetzt-nicht-die-wahrheit/. For the translation I have used chatGPT4 and deepl.com. Because the word ‘Hamas’ occurs in the text, chatGPT did not translate a long paragraph containing this word. Thus the algorithm is somehow ‘biased’ by a certain kind of training. This is really bad, because the following text offers some reflections about a situation where someone ‘hates’ others. This is one of our biggest ‘diseases’ today.

Preface

The Hamas terrorist attack on Israeli citizens on October 7, 2023, has shaken the world. For years, terrorist acts have been shaking our world. In front of our eyes, a state has been attempting since 2022 (actually since 2014) to brutally eradicate the entire Ukrainian population. Similar events have been and are taking place in many other regions of the world…

… Pain does not replace the truth [0]…

Truth is not automatic. Making truth available requires significantly more effort than remaining in a state of partial truth.

The probability that a person knows or seeks the truth is smaller than the probability that they remain in a state of partial truth or outright falsehood.

Whether falsehood or truth predominates in a democracy depends on how that democracy shapes the process of truth-finding and the communication of truth. There is no automatic path to truth.

In a dictatorship, the likelihood of truth being available is extremely dependent on those who exercise centralized power. Absolute power, however, has already fundamentally broken with the truth (which does not exclude the possibility that this power can have significant effects).

The course of human history on planet Earth thus far has shown that there is evidently no simple, quick path that uniformly leads all people to a state of happiness. This must have to do with humans themselves—with us.

The interest in seeking truth, in cultivating truth, in a collective process of truth, has never been strong enough to overcome the everyday exclusions, falsehoods, hostilities, atrocities…

One’s own pain is terrible, but it does not help us to move forward…

Who even wants a future for all of us?????

[0] There is an overview article by the author from 2018, in which he presents 15 major texts from the blog “Philosophie Jetzt” ( “Philosophy Now”) ( “INFORMAL COSMOLOGY. Part 3a. Evolution – Truth – Society. Synopsis of previous contributions to truth in this blog” ( https://www.cognitiveagent.org/2018/03/20/informelle-kosmologie-teil-3a-evolution-wahrheit-gesellschaft-synopse-der-bisherigen-beitraege-zur-wahrheit-in-diesem-blog/ )), in which the matter of truth is considered from many points of view. In the 5 years since, society’s treatment of truth has continued to deteriorate dramatically.

Hate cancels the truth


Truth is related to knowledge. However, in humans, knowledge is most often subservient to emotions. Whatever we may know or wish to know, when our emotions are against it, we tend to suppress that knowledge.

One form of emotion is hatred. The destructive impact of hatred has accompanied human history like a shadow, leaving a trail of devastation everywhere it goes: in the hater themselves and in their surroundings.

The event of the inhumane attack on October 7, 2023 in Israel, claimed by Hamas, is unthinkable without hatred.

If one traces the history of Hamas since its founding in 1987 [1,2], one can see that hatred was already laid down as an essential element at its founding. This hatred is joined by a religious interpretation that calls itself Islamic but represents a particular, highly radicalized and at the same time fundamentalist form of Islam.

The history of the state of Israel is complex, and the history of Judaism no less so. The fact that today’s Judaism also contains strong components that are clearly fundamentalist, and to which hatred is not alien, leads, among many other factors, at the core to a constellation of fundamentalist antagonisms on both sides that in themselves reveal no approach to a solution. The many other people ‘around’ in Israel and Palestine are part of these ‘fundamentalist force fields’, which simply evaporate humanity and truth in their vicinity. The trail of blood makes this reality visible.

Both Judaism and Islam have produced wonderful things, but what does all this mean in the face of a burning hatred that pushes everything aside, that sees only itself?

[1] Jeffrey Herf, Sie machen den Hass zum Weltbild, FAZ 20.Okt. 23, S.11 (Abriss der Geschichte der Hamas und ihr Weltbild, als Teil der größeren Geschichte) (Translation:They make hatred their worldview, FAZ Oct. 20, 23, p.11 (outlining the history of Hamas and its worldview, as part of the larger story)).

[2] Joachim Krause, Die Quellen des Arabischen Antisemitismus, FAZ, 23.10.2023,p.8 (This text “The Sources of Arab Anti-Semitism” complements the account by Jeffrey Herf. According to Krause, Arab anti-Semitism has been widely disseminated in the Arab world since the 1920s/ 30s via the Muslim Brotherhood, founded in 1928).

A society in decline

When truth diminishes and hatred grows (and, indirectly, trust evaporates), a society is in free fall. There is no remedy for this; the use of force cannot heal it, only worsen it.

The mere fact that we believe that lack of truth, dwindling trust, and above all manifest hatred can only be eradicated through violence shows how seriously we regard these phenomena and, at the same time, how helpless we feel in the face of these attitudes.

In a world whose survival is linked to the availability of truth and trust, it is a piercing alarm signal to observe how difficult it is for us as humans to deal with the absence of truth and face hatred.

Is Hatred Incurable?

When we observe how tenaciously hatred persists in humanity, how unimaginably cruel actions driven by hatred can be, and how helpless we humans seem in the face of hatred, one might wonder if hatred is ultimately not a kind of disease—one that threatens the hater themselves and, particularly, those who are hated with severe harm, ultimately death.

With typical diseases, we have learned to search for remedies that can free us from the illness. But what about a disease like hatred? What helps here? Does anything help? Must we, like in earlier times with people afflicted by deadly diseases (like the plague), isolate, lock away, or send away those who are consumed by hatred to some no man’s land? … but everyone knows that this isn’t feasible… What is feasible? What can combat hatred?

After approximately 300,000 years of Homo sapiens on this planet, we seem strangely helpless in the face of the disease of hatred.

What’s even worse is that there are other people who see in every hater a potential tool to redirect that hatred toward goals they want to damage or destroy, using suitable manipulation. Thus, hatred does not disappear; on the contrary, it feels justified, and new injustices fuel the emergence of new hatred… the disease continues to spread.

One of the greatest events in the entire known universe—the emergence of mysterious life on this planet Earth—has a vulnerable point where this life appears strangely weak and helpless. Throughout history, humans have demonstrated their capability for actions that endure for many generations, that enable more people to live fulfilling lives, but in the face of hatred, they appear oddly helpless… and the one consumed by hatred is left incapacitated, incapable of anything else… plummeting into their dark inner abyss…


Instead of hatred, we need (minimally and in outline):

  1. Water: To sustain human life, along with the infrastructure to provide it, and individuals to maintain that infrastructure. These individuals also require everything they need for their own lives to fulfill this task.
  2. Food: To sustain human life, along with the infrastructure for its production, storage, processing, transportation, distribution, and provision. Individuals are needed to oversee this infrastructure, and they, too, require everything they need for their own lives to fulfill this task.
  3. Shelter: To provide a living environment, including the infrastructure for its creation, provisioning, maintenance, and distribution. Individuals are needed to manage this provision, and they, too, require everything they need for their own lives to fulfill this task.
  4. Energy: For heating, cooling, daily activities, and life itself, along with the infrastructure for its generation, provisioning, maintenance, and distribution. Individuals are needed to oversee this infrastructure, and they, too, require everything they need for their own lives to fulfill this task.
  5. Authorization and Participation: To access water, food, shelter, and energy. This requires an infrastructure of agreements, and individuals to manage these agreements. These individuals also require everything they need for their own lives to fulfill this task.
  6. Education: To be capable of undertaking and successfully completing tasks in real life. This necessitates individuals with enough experience and knowledge to offer and conduct such education. These individuals also require everything they need for their own lives to fulfill this task.
  7. Medical Care: To help with injuries, accidents, and illnesses. This requires individuals with sufficient experience and knowledge to offer and provide medical care, as well as the necessary facilities and equipment. These individuals also require everything they need for their own lives to fulfill this task.
  8. Communication Facilities: So that everyone can receive helpful information needed to navigate their world effectively. This requires suitable infrastructure and individuals with enough experience and knowledge to provide such information. These individuals also require everything they need for their own lives to fulfill this task.
  9. Transportation Facilities: So that people and goods can reach the places they need to go. This necessitates suitable infrastructure and individuals with enough experience and knowledge to offer such infrastructure. These individuals also require everything they need for their own lives to fulfill this task.
  10. Decision Structures: To mediate the diverse needs and necessary services in a way that ensures most people have access to what they need for their daily lives. This requires suitable infrastructure and individuals with enough experience and knowledge to offer such infrastructure. These individuals also require everything they need for their own lives to fulfill this task.
  11. Law Enforcement: To ensure disruptions and damage to the infrastructure necessary for daily life are resolved without creating new disruptions. This requires suitable infrastructure and individuals with enough experience and knowledge to offer such services. These individuals also require everything they need for their own lives to fulfill this task.
  12. Sufficient Land: To provide enough space for all these requirements, along with suitable soil (for water, food, shelter, transportation, storage, production, etc.).
  13. Suitable Climate
  14. A functioning ecosystem.
  15. A capable scientific community to explore and understand the world.
  16. Suitable technology to accomplish everyday tasks and support scientific endeavors.
  17. Knowledge in the minds of people to understand daily events and make responsible decisions.
  18. Goal orientations (preferences, values, etc.) in the minds of people to make informed decisions.
  19. Ample time and peace to allow these processes to occur and produce results.
  20. Strong and lasting relationships with other population groups pursuing the same goals.
  21. Sufficient commonality among all population groups on Earth to address their shared needs where they are affected.
  22. A sustained positive and constructive competition for those goal orientations that make life possible and viable for as many people on this planet (in this solar system, in this galaxy, etc.) as possible.
  23. The freedom present within the experiential world, included within every living being, especially within humans, should be given as much room as possible, as it is this freedom that can overcome false ideas from the past in the face of a constantly changing world, enabling us to potentially thrive in the world of the future.

Review of Nancy Leveson (2020), Are you sure your software will not kill anyone?


Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Time: April 2, 2020 — Oct 20, 2023

CONTEXT

This post is part of the REVIEW section of the uffmm blog.

Review of Nancy Leveson
Are you sure your software will not kill anyone?

A Review from the Point of View of the DAAI Paradigm

POSTSCRIPT

Three years after the paper reviewed above, Nancy Leveson, together with John P. Thomas, published a new paper on the subject of safety-critical systems, titled “Inside Risks: Certification of Safety-Critical Systems. Seeking new approaches toward ensuring the safety of software-intensive systems”, in the COMMUNICATIONS OF THE ACM, Oct 2023, Vol. 66, No. 10, 22-26.

On a first reading one can get the impression that the task of securing safety-critical systems is becoming more and more unsolvable. The complexity of the task seems to be beyond all paradigms we know today. Is this the end of modern technology? Leveson and Thomas explicitly exclude that a ‘solution’ could be based on ‘more AI’ alone; without human persons it will not work. What does this mean?

My personal judgment: We have to go back to ‘start’. We have to consider this challenge from scratch in a new way: What do we really need? What methods are known? What is still missing?

A personal guess: We have to think about the human factor more radically: the human factor is not just one more ‘factor’ among others in the scenario; it is indeed an ‘object’, but at the same time also the ‘author’, implicitly inducing all the conditions of thinking, the medium of communication, as well as the decisions. To remain silent about this hides many important factors that are at work.

Collective human-machine intelligence and text generation. A transdisciplinary analysis.

Author: Gerd Doeben-Henisch

Email: info@uffmm.org

Time: Sept 25, 2023 – Oct 3, 2023

Translation: This text is a translation from the German version into English with the aid of the software deepL.com as well as chatGPT4, moderated by the author. The styles of the two translators differ. The author is not in a position to judge which translator is ‘better’.

CONTEXT

This text is the outcome of a conference held at the Technical University of Darmstadt (Germany) with the title: Discourses of disruptive digital technologies using the example of AI text generators ( https://zevedi.de/en/topics/ki-text-2/ ). A German version of this article will appear in a book from de Gruyter as open access in the beginning of 2024.

Collective human-machine intelligence and text generation. A transdisciplinary analysis.

Abstract

Based on the conference theme “AI – Text and Validity. How do AI text generators change scientific discourse?” as well as the special topic “Collective human-machine intelligence using the example of text generation”, the possible interaction between text generators and a scientific discourse is played out in a transdisciplinary analysis. For this purpose, the concept of scientific discourse is specified case by case using the text types empirical theory and sustained empirical theory, in such a way that the roles of human and machine actors in these discourses can be sufficiently specified. The result shows a very clear limitation of current text generators compared with the requirements of scientific discourse. This leads to further fundamental analyses: using the example of the dimension of time, to the phenomenon of the qualitatively new; and using the example of the foundations of decision-making, to the problem of the inherent bias of the modern scientific disciplines. A solution to this inherent bias, as well as to the factual disconnectedness of the many individual disciplines, is located in a new service of transdisciplinary integration through a re-activation of the philosophy of science as a genuine part of philosophy. This leaves open the question of whether supervision of the individual sciences by philosophy could be a viable path. Finally, the borderline case of a world in which humans no longer have a human counterpart is pointed out.

AUDIO: Keyword Sound

STARTING POINT

This text takes its starting point from the conference topic “AI – Text and Validity. How do AI text generators change scientific discourses?” and adds to this topic the perspective of a Collective Human-Machine Intelligence using the example of text generation. The concepts of text and validity, AI text generators, scientific discourse, and collective human-machine intelligence that are invoked in this constellation represent different fields of meaning that cannot automatically be interpreted as elements of a common conceptual framework.

TRANSDISCIPLINARY

In order to let the mentioned terms appear as elements in a common conceptual framework, a meta-level is needed from which one can talk about these terms and their possible relations to each other. This approach is usually located in the philosophy of science, which can take as its subject not only single terms or whole propositions, but even whole theories that are compared or possibly even unified. The term transdisciplinary [1], which is often used today, is understood here in this philosophy-of-science sense as an approach in which the integration of different concepts is achieved by introducing appropriate meta-levels. Such a meta-level ultimately always represents a structure in which all important elements and relations can be gathered.

[1] Jürgen Mittelstraß paraphrases the possible meaning of the term transdisciplinarity as a “research and knowledge principle … that becomes effective wherever a solely technical or disciplinary definition of problem situations and problem solutions is not possible…”. Article Methodological Transdisciplinarity, in LIFIS ONLINE, www.leibniz-institut.de, ISSN 1864-6972, p.1 (first published in: Technology Assessment – Theory and Practice No.2, 14th vol., June 2005, 18-23). In his text Mittelstraß distinguishes transdisciplinarity from the disciplinary and from the interdisciplinary. However, he gives only a general characterization of transdisciplinarity as a research-guiding principle and scientific form of organization, leaving its concrete conceptual formulation open. This is different in the present text: here the transdisciplinary theme is projected down to the concreteness of the terms involved and, as is usual in philosophy of science (and meta-logic), realized by means of the construct of meta-levels.

SETTING UP A STRUCTURE

Here the notion of scientific discourse is assumed as a basic situation in which different actors can be involved. The main types of actors considered here are humans, who represent a part of the biological systems on planet Earth as a kind of Homo sapiens, and text generators, which represent a technical product consisting of a combination of software and hardware.

It is assumed that humans perceive their environment and themselves in a species-typical way, that they can process and store what they perceive internally, that they can recall what they have stored, to a limited extent and in a species-typical way, and that they can change it in a species-typical way, so that internal structures can emerge that are available for action and communication. All these elements are attributed to human cognition. They operate partly consciously, but largely unconsciously. Cognition also includes the subsystem language, which represents a structure that on the one hand is largely species-typically fixed, but on the other hand can be flexibly mapped onto different elements of cognition.

In the terminology of semiotics [2] the language system represents a symbolic level and those elements of cognition, on which the symbolic structures are mapped, form correlates of meaning, which, however, represent a meaning only insofar as they occur in a mapping relation – also called meaning relation. A cognitive element as such does not constitute meaning in the linguistic sense. In addition to cognition, there are a variety of emotional factors that can influence both cognitive processes and the process of decision-making. The latter in turn can influence thought processes as well as action processes, consciously as well as unconsciously. The exact meaning of these listed structural elements is revealed in a process model [3] complementary to this structure.

[2] See, for example, Winfried Nöth: Handbuch der Semiotik. 2nd, completely revised edition. Metzler, Stuttgart/Weimar, 2000

[3] Such a process model is presented here only in partial aspects.

SYMBOLIC COMMUNICATION SUB-PROCESS

What is important for human actors is that they can interact in the context of symbolic communication with the help of both spoken and written language. Here it is assumed, simplistically, that spoken language can be mapped sufficiently accurately into written language, which in the standard case is called text. It should be noted that texts only carry meaning if the text producers involved, as well as the text recipients, have sufficiently similar meaning functions.
For texts by human text producers it generally holds that, with respect to concrete situations, statements within texts can be qualified under agreed conditions as now matching the situation (true) or as now not matching the situation (false). However, a now-true can become a now-not-true in the next moment, and vice versa.

This dynamic points to the distinction between the punctual occurrence or non-occurrence of a statement and its structural occurrence or non-occurrence, which speaks about occurrence in context. The latter refers to relations that become apparent only indirectly, across a multitude of individual events, if one considers chains of events over many points in time. Finally, one must also consider that the correlates of meaning are primarily located within the human biological system. Meaning correlates are not automatically true as such, but only if there is an active correspondence between a remembered/thought/imagined meaning correlate and an active perceptual element, where an intersubjective fact must correspond to the perceptual element. Just because someone talks about a rabbit and the recipient understands what a rabbit is, this does not mean that there is also a real rabbit which the recipient can perceive.

TEXT-GENERATORS

When distinguishing between the two types of actors, biological systems of the type Homo sapiens on the one hand and technical systems of the type text generator on the other, a first fundamental asymmetry immediately strikes the eye: so-called text generators are entities invented and built by humans; it is humans who use them; and the essential material used by text generators consists of texts, which are human cultural property, created and used by humans for a variety of discourse types, here restricted to scientific discourse.


In the case of text generators, let us first note that we are dealing with machines that have input and output, a minimal learning capability, and whose input and output can process text-like objects.
Insofar as text generators can process text-like objects as input and process them again as output, an exchange of texts between humans and text generators can take place in principle.

At the current state of development (September 2023), text generators do not yet have an independent real-world perception within the scope of their input, and the entire text generator system does not yet have such processes as those that enable species-typical cognitions in humans. Furthermore, a text generator does not yet have a meaning function as it is given with humans.

From this fact it follows automatically that text generators cannot decide about the punctual or structural correctness of the statements of a text. In general, they do not have their own assignment of meaning as humans do. Texts generated by text generators only have a meaning because a human recipient automatically assigns a meaning to a text due to his species-typical meaning relation; this is learned human behavior. The text generator itself has never assigned any meaning to the generated text. Loosely speaking, one could say that a technical text generator works like a parasite: it collects texts that humans have generated, rearranges them combinatorially according to formal criteria for the output, and in the receiving human a meaning event is automatically triggered by the text, a meaning that does not exist anywhere in the text generator.
Whether this very restricted form of text generation is now in any sense detrimental or advantageous for the type of scientific discourse (with texts), that is to be examined in the further course.

SCIENTIFIC DISCOURSE

There is no clear definition for the term scientific discourse. This is not surprising, since an unambiguous definition presupposes a fully specified conceptual framework within which terms such as discourse and scientific can be clearly delimited. However, in the case of a scientific enterprise with a global reach, broken down into countless individual disciplines, this does not seem to be the case at present (Sept 2023). For the further procedure, we will therefore fall back on core ideas of the discussion in philosophy of science since the 20th century [4] and introduce working hypotheses on the concepts of empirical theory and sustainable empirical theory, so that a working hypothesis on the concept of scientific discourse with at least minimal sharpness becomes possible.

[4] A good place to start may be: F. Suppe, Editor. The Structure of Scientific Theories. University of Illinois Press, Urbana, 2nd edition, 1979.

EMPIRICAL THEORY

The following assumptions are made for the notion of an empirical theory:

  1. an empirical theory is basically a text, written in a language that all participants understand.
  2. one part of the theory contains a description of an initial situation, whose statements can be qualified by the theory users as now matching (true) or now not matching (false).
  3. another part of the theory contains a text that lists all changes that, to the knowledge of the participants, occur in the context of the initial situation and can change parts of it.
  4. changes in the initial situation are expressed by replacing certain statements of the initial situation with other statements. The resulting new text replaces the previous one.
  5. through the possibility of generating new initial situations, predictions (expectations) can be made by applying rules of change to an applicable initial situation several times (at least once) in succession. The texts generated in this way induce in the minds of the participants, on the basis of the available meaning functions, the idea of a situation that, should it occur, is required to be qualified as now matching intersubjective reality. In the case of occurrence, the situation must correspond, via perception, to the conception in the mind. Whether, after how much time, and to what extent such a correspondence can be established remains fundamentally open at the moment the prediction is made (autonomy of the object!).
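The five assumptions above can be read as an abstract rule-application system: a situation description is transformed step by step by change rules into predicted situations. The following minimal sketch illustrates only this core mechanism; all names, the rule format, and the toy example are the author's illustrative assumptions, not part of the theory concept itself:

```python
# Minimal sketch of an "empirical theory" core: an initial situation
# (a set of statements) plus change rules that replace some statements
# with others, yielding a chain of predicted situations.

from typing import NamedTuple

class Rule(NamedTuple):
    requires: frozenset   # statements that must hold for the rule to apply
    removes: frozenset    # statements the rule replaces
    adds: frozenset       # statements the rule introduces

def apply_rule(situation: frozenset, rule: Rule) -> frozenset:
    """Apply one change rule if its conditions match; otherwise return unchanged."""
    if rule.requires <= situation:
        return (situation - rule.removes) | rule.adds
    return situation

def predict(situation: frozenset, rules: list, steps: int) -> list:
    """Apply the rules repeatedly, producing a chain of expected situations."""
    chain = [situation]
    for _ in range(steps):
        for rule in rules:
            situation = apply_rule(situation, rule)
        chain.append(situation)
    return chain

# Toy example: a one-rule "theory" about a plant.
start = frozenset({"seed in soil", "soil is moist"})
growth = Rule(requires=frozenset({"seed in soil", "soil is moist"}),
              removes=frozenset({"seed in soil"}),
              adds=frozenset({"seedling visible"}))
forecast = predict(start, [growth], steps=1)
print(forecast[-1])  # the predicted situation after one rule application
```

Whether the predicted final situation ever matches intersubjective reality is, of course, exactly what the sketch cannot decide; that qualification remains with the human theory users (point 5 above).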

SCIENTIFIC DISCOURSE NEW

Even before the introduction of the notion of a sustainable empirical theory, the notion of an empirical theory just introduced allows for initial statements about what a scientific discourse could or should be.
It is assumed here that the participants in a scientific discourse exchange their opinions about the world in the form of texts, all of which meet the requirements of an empirical theory.

WITH HUMANS

Insofar as the participants in the discourse are human beings, we know that they have basic species-typical abilities to generate subjective meanings by means of language, which can be decided as now true or now not true by reference to a real situation.

WITH TEXT GENERATORS

Insofar as the discourse participants would be text generators lacking these elementary skills (meaning, reality reference), they drop out as independent actors for a scientific discourse.
To illustrate this, here is a self-assessment of chatGPT4, a current text generator:

ChatGPT

Yes, your summary of our previous dialogue is accurate based on the points you’ve listed:

  1. I cannot read a book in the same way a human does, meaning I don’t interpret or comprehend in the same manner.
  2. My knowledge is up to September 2021, which means that I might not be aware of events, advancements, or publications after that point.
  3. My “learning” is different from human learning. I don’t learn from new experiences or information after my last update. Instead, I generate responses based on patterns in the data I was trained on.
  4. I don’t cite individual sources in my responses. I provide information based on the patterns recognized during my training.
  5. Responses I give can vary slightly even if the same question is posed multiple times.
  6. I can’t directly attribute my statements to specific, individual sources.
  7. The specifics about my training data and the processes that determine my responses are not transparently provided in each interaction.

You’ve captured the essence of our prior conversation well. If you have further questions or need clarifications on any of these points, please let me know. [5]

[5] This text is from a chat experiment conducted by the author with chatGPT4 on August 31, 2023. The citation can be found in: Gerd Doeben-Henisch, CONTINUE EXPERIMENT WITH CHATGPT4: Science, Empirical Science, Philosophy of Science, uffmm, Integrating Engineering and the Human Factor, eJournal uffmm.org ISSN 2567-6458,
8/31/2023 in [https://www.uffmm.org/2023/08/31/continue-experiment-with-chatgpt4-science-empirical-science-philosophy-of-science/ ] (accessed 9/27/2023).

The question then arises whether (current) text generators, despite their severely limited capabilities, could nevertheless contribute to scientific discourse, and what such a contribution would mean for the human participants. Since text generators fail the hard scientific criteria (decidable reference to reality, reproducible predictive behavior, separation of sources), a possible contribution can only be assumed within human behavior: since humans can understand texts and verify them empirically, they would in principle be able to classify, at least rudimentarily, a text from a text generator within their considerations.

For hard theory work these texts would not be usable. However, due to their literary-associative character across a very large body of texts, the texts of text generators could, in the positive case, at least introduce thoughts into the discourse as stimulators, via the detour of human understanding, prompting the human user to examine whether these additional aspects might be important for the actual theory building after all. In this way the text generators would not participate independently in the scientific discourse, but they would indirectly support the knowledge process of the human actors as aids.[6]

[6] A detailed illustration of this associative role of a text generator can also be found in (Doeben-Henisch, 2023) on the example of the term philosophy of science and on the question of the role of philosophy of science.

CHALLENGE DECISION

The application of an empirical theory can, in the positive case, enable an expanded picture of everyday experience, in that, relative to an initial situation, possible continuations (possible futures) are brought before one's eyes.
For people who have to shape their own individual processes in their everyday life, however, it is usually not enough to know only what one could do. Rather, everyday life requires deciding, among the many possible continuations, which one to choose. In order to get through everyday life with as little effort and, at least imagined, as little risk as possible, people have adopted well-rehearsed behavior patterns for as many everyday situations as possible, which they follow spontaneously without questioning them anew each time. These well-rehearsed behavior patterns embody decisions already made. Nevertheless, there are always situations in which the ingrained automatisms have to be interrupted in order to consciously clarify which of several possibilities one wants to choose.

The example of an individual decision-maker can be applied directly to the behavior of larger groups. Here, normally, even more individual factors play a role, all of which have to be integrated in order to reach a decision. The characteristic feature of a decision situation remains the same, however: whatever knowledge one may have at the time of the decision, when alternatives are available one has to decide for one of them without further, additional knowledge at this point. Empirical science cannot help here [7]: being able to decide is an indisputable basic ability of humans.

So far, however, what ultimately leads us to decide for one option and not the other remains hidden in the darkness of our not knowing ourselves. Whether and to what extent the various cultural patterns of decision-making aids, in religious, moral, ethical or similar formats, actually play or have played a helpful role in projecting a successful future appears more unclear than ever.[8]

[7] No matter how much detail it can contribute about the nature of decision-making processes.

[8] This topic is taken up again in the following in a different context and embedded there in a different solution context.

SUSTAINABLE EMPIRICAL THEORY

Through the newly flared-up discussion about sustainability in the context of the United Nations, the question of prioritizing survival-relevant action has received a specific global impulse. The multitude of aspects that arise in this discourse context [9] are difficult, if not impossible, to classify into an overarching, consistent conceptual framework.

[9] For an example see the 17 development goals: [https://unric.org/de/17ziele/] (Accessed: September 27, 2023)

A rough classification of development goals into resource-oriented and actor-oriented can help to make an underlying asymmetry visible: a resource problem only exists if there are biological systems on this planet that require a certain configuration of resources (an ecosystem) for their physical existence. Since the physical resources to be found on planet Earth are quantitatively limited, it is possible in principle to determine, through thought and science, under what conditions the available physical resources, given a prevailing behavior, are insufficient. Added to this is the fact that biological systems, by their very existence, also actively alter the available resources.

So, if there should be a resource problem, it is exclusively because the behavior of the biological systems has led to such a biologically caused shortage. Resources as such are neither too much, nor too little, nor good, nor bad. If one accepts that the behavior of biological systems in the case of the species Homo sapiens can be controlled by internal states, then the resource problem is primarily a cognitive and emotional problem: Do we know enough? Do we want the right thing? And these questions point to motivations beyond what is currently knowable. Is there a dark spot in the human self-image here?

On the one hand, this questioning points to driving forces behind a current decision that lie beyond the possibilities of the empirical sciences (trans-empirical, metaphysical, …); on the other hand, it also points to the center, the core, of human competence. This motivates extending the notion of an empirical theory to that of a sustainable empirical theory. This does not automatically solve the question of the inner mechanism of a value decision, but it classifies the problem systematically. The problem thus has an official place. The following formulation is suggested as a characterization of the concept of a sustainable empirical theory:

  1. a sustainable empirical theory contains an empirical theory as its core.
    1. besides the parts initial situation, rules of change, and application of rules of change, a sustainable theory also contains a text listing those situations which are considered desirable for a possible future (goals, visions, …).
    2. given such goals, it is possible to minimally compare each current situation with the available goals and thereby indicate the degree of goal achievement.

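The "degree of goal achievement" named in point 1.2 can be made concrete in many ways. As one minimal illustration only (the overlap measure, the function name, and the example statements are the author's assumptions, not a prescribed metric), a current situation can be compared with each goal situation by the fraction of goal statements already realized:

```python
# Sketch: degree of goal achievement as the fraction of a goal's
# statements that already hold in the current situation.

def goal_achievement(current: set, goal: set) -> float:
    """Fraction of goal statements already realized in the current situation."""
    if not goal:
        return 1.0  # an empty goal is trivially achieved
    return len(current & goal) / len(goal)

current = {"emissions reduced", "water polluted"}
goals = [
    {"emissions reduced", "water clean"},                       # vision A
    {"emissions reduced", "water clean", "forests restored"},   # vision B
]
scores = [goal_achievement(current, g) for g in goals]
print(scores)  # [0.5, 0.3333333333333333]
```

Such a number says nothing about whether a goal is realistic or worth pursuing; as the text argues, it only makes the comparison between present and desired situations explicit.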
Stating desired goals says nothing about how realistic or promising it is to pursue them. It only expresses that the authors of the theory know these goals and consider them optimal at the time of theory creation. [10] The irrationality of chosen goals is in this way officially included in the domain of thought of the theory creators, and this facilitates the extension of the rational into the irrational without there already being a real solution. Nobody can exclude that the phenomenon of bringing forth something new, or of preferring one point of view over others, will be understood further and better in the future.

[10] Something can only be classified as optimal if it can be placed within an overarching framework, which allows for positioning on a scale. This refers to a minimal cognitive model as an expression of rationality. However, the decision itself takes place outside of such a rational model; in this sense, the decision as an independent process is pre-rational.

EXTENDED SCIENTIFIC DISCOURSE

If one accepts the concept of a sustainable empirical theory, then one can extend the concept of a scientific discourse in such a way that not only texts representing empirical theories can be introduced, but also texts representing sustainable empirical theories with their own goals. Here too, one can ask whether current text generators (September 2023) can make a constructive contribution. Insofar as a sustainable empirical theory contains an empirical theory as its hard core, the preceding observations on the limitations of text generators apply. In the creative part of the development of an empirical theory, they can contribute text fragments through their associative-combinatorial character, based on a very large number of documents, which may inspire the active human theory authors to expand their view. But what about the part that manifests itself in the selection of possible goals? At this point one must realize that what matters is not arbitrary formulations, but formulations that represent possible solutions within a systematic framework; this presupposes knowledge of relevant and verifiable meaning structures that could be taken into account in the context of symbolic patterns. Text generators fundamentally lack these abilities. But it is, again, not to be excluded that their associative-combinatorial character, based on a very large number of documents, can still provide one or the other suggestion.

Looking back at humanity's history of knowledge, research, and technology, it appears that the great advances were each triggered by something really new, that is, by something that had never existed before in this form. The praise for Big Data, as often heard today, represents, colloquially speaking, exactly the opposite: the burial of the new by cementing the old.[11]

[11] A prominent example of the naive fixation on the old as a standard for what is right can be seen, for example, in the book by Seth Stephens-Davidowitz, Don’t Trust Your Gut. Using Data Instead of Instinct To Make Better Choices, London – Oxford New York et al., 2022.

EXISTENTIALLY NEW THROUGH TIME

The concept of an empirical theory inherently contains the element of change, and the extended concept of a sustainable empirical theory adds, beyond the fundamental concept of change, the aspect of a possible goal. A possible goal itself is not a change, but presupposes the reality of changes! The concept of change does not result from any objects; it is the result of a brain performance through which a current present is transformed, largely unconsciously, into partially memorable states (memory contents) by forming time slices in the context of perception processes. These memory contents have different abstract structures, are networked with each other in different ways, and are assessed in different ways. In addition, the brain automatically compares current perceptions with such stored contents and immediately reports when a current perception has changed compared to the last one. In this way, the phenomenon of change is a fundamental cognitive achievement of the brain, which thus makes a feeling of time available in the form of a fundamental process structure. The weight of this property in the context of evolution can hardly be overestimated, since time as such is in no way perceptible.[12]

[12] The modern invention of machines that can generate periodic signals (oscillators, clocks) has been successfully integrated into people's everyday lives. However, this artificially (technically) producible time has nothing to do with the fundamental change found in reality. Technical time is a tool that we humans have invented to somehow structure the otherwise amorphous mass of the phenomenon stream. Since the amorphous mass itself exhibits structure in the form of obviously manifest, repeating change cycles (e.g., sunrise and sunset, moonrise and moonset, the seasons, …), a correlation between technical time models and natural time phenomena suggested itself. From the resulting correlations, however, one should not conclude that the amorphous mass of the world's phenomenon stream actually behaves according to our technical time model. Einstein's theory of relativity at least makes us aware that there can be various (or only one?) asymmetries between technical time and the world's phenomenon stream.


Assuming this fundamental sense of time in humans, one can in principle recognize whether a current phenomenon, compared to all preceding phenomena, is somehow similar or markedly different, and in this sense indicates something qualitatively new.[13]

[13] Ultimately, an individual human only has its individual memory contents available for comparison, while a collective of people can in principle consult the set of all records. However, as is known, only a minimal fraction of the experiential reality is symbolically transformed.

If one presupposes the concept of directed time for the designation of qualitatively new things, such a new event can be assigned an information value in the Shannonian sense, as well as a linguistic meaning for the phenomenon itself, and possibly also a cognitive relevance: relative to a spanned knowledge space, the occurrence of a qualitatively new event can significantly strengthen a theoretical assumption. In the latter case, the cognitive relevance may mutate into a sustainable relevance if the assumption marks a real action option that could be important for further progress. This in turn would provoke the necessity of a decision: should we adopt this action option or not? Humans can accomplish the finding of qualitatively new things; evolution has designed them for it. But what about text generators?
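The "information value in the Shannonian sense" mentioned above can be made concrete: the less probable an event is relative to prior expectations, the higher its information content, -log2 p. A minimal sketch (the probabilities are invented for illustration; nothing here models the actual detection of novelty):

```python
import math

def surprisal(p: float) -> float:
    """Shannon information content (in bits) of an event with probability p."""
    return -math.log2(p)

# An expected, routine event carries little information; a very improbable,
# "qualitatively new" event carries much more.
print(surprisal(0.5))    # 1.0 bit: as informative as a fair coin flip
print(surprisal(0.001))  # ~9.97 bits: a rare, surprising observation
```

The hard part, as the text emphasizes, is not computing this number but assigning the probability in the first place, which presupposes exactly the reality reference that text generators lack.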

Text generators so far have no sense of time comparable to that of humans. Their starting point would be texts that differ, such that there is at least one text that is the most recent on the timeline and describes real events in the real world of phenomena. Since a text generator (as of September 2023) does not yet have the ability to classify texts with regard to their applicability or non-applicability in the real world, its use would normally end here. Assuming that there are people who manually perform this classification for a text generator [14] (which would greatly limit the number of possible texts), a text generator could search the surface of these texts for similar patterns and, relative to them, for patterns that cannot be compared. Assuming further that the text generator could find a set of non-comparable patterns in acceptable time despite a massive combinatorial explosion, the problem of semantic qualification would arise again: which of these patterns can be classified as an indication of something qualitatively new? Again, humans would have to become active.

[14] Such support of machines by humans in the field of so-called intelligent algorithms has often been applied (and is still being applied today, see: [https://www.mturk.com/] (Accessed: September 27, 2023)), and is known to be very prone to errors.

As before, the verdict is mixed: left to itself, a text generator will not be able to solve this task, but in cooperation with humans, it may possibly provide important auxiliary services, which could ultimately be of existential importance to humans in search of something qualitatively new despite all limitations.

THE IMMANENT PREJUDICE OF THE SCIENCES

A prejudice is, as is well known, the assessment of a situation as an instance of a certain pattern which the judging person assumes applies, even though there are numerous indications that the assumed applicability is empirically false. Because everyday life permanently requires us to make decisions, humans have, through their evolutionary development, acquired the fundamental ability to make many of their everyday decisions largely automatically. This offers many advantages, but can also lead to conflicts.

Daniel Kahneman introduced in this context in his book [15] the two terms System 1 and System 2 for a human actor. In his concept these terms describe two behavioral complexes that can be distinguished by certain properties.[16] System 1 is given with the overall system of the human actor and is characterized by the fact that the actor can respond largely automatically to the demands of everyday life. The human actor has automatic answers ready for certain stimuli from his environment, without having to think much about them. In case of conflicts within System 1, or from the perspective of System 2, which in conscious mode exercises some control over the appropriateness of System 1 reactions in a given situation, System 2 becomes active. System 2 has no automatic answers ready, but has to work out an answer to a given situation laboriously, step by step. However, there is also the phenomenon that complex processes that must be carried out frequently can be automated to a certain extent (cycling, swimming, playing a musical instrument, using language, doing mental arithmetic, …). All these processes rest on preceding decisions that encompass different forms of preferences. As long as these automated processes are appropriate in the light of a certain rational model, everything seems to be fine. But if the corresponding model is distorted in any sense, then these models would be said to carry a prejudice.

[15] Daniel Kahneman, Thinking, Fast and Slow, Penguin Books / Random House, UK, 2012 (first published 2011)

[16] See Chapter 1 in Part 1 of (Kahneman, 2012, pages 19-30).

In addition to the countless examples that Kahneman himself cites in his book to show the susceptibility of System 1 to such prejudices, it should be pointed out here that Kahneman's model itself (and many similar models) can carry a prejudice of a considerably more fundamental nature. The division of a human actor's behavioral space into a System 1 and a System 2, as Kahneman undertakes it, obviously has great potential for classifying many everyday events. But what about all the everyday phenomena that fit neither the scheme of System 1 nor that of System 2?

Regarding decision-making, System 1 implies that people, if an answer is available, automatically call it up and execute it. Only in the case of conflict, under the control of System 2, can there be lengthy operations that lead to other, new answers.

In the case of decisions, however, the point is not merely to react at all: there is also the problem of choosing between known possibilities, or even of finding something new because the known old options are unsatisfactory.

Established scientific disciplines have their specific perspectives and methods, which carve out areas of everyday life as their subject matter. Phenomena that do not fit into this predefined space simply do not occur for the discipline in question; they are excluded by its method. In the area of decision-making, and thus of typically human structures, there are more than a few domains that have so far found no official entry into any scientific discipline. At any given point in time there are therefore many large phenomenal areas that really exist but are methodically absent from the view of the individual sciences. For a scientific investigation of the real world this means that the sciences, due to their immanent exclusions, are burdened with a massive reservation toward the empirical world. For the task of selecting suitable goals within the framework of a sustainable science, this structurally conditioned fact can be fatal. Loosely formulated: under the banner of actual science, a central principle of science, namely the openness to all phenomena, is simply excluded, so that the existing structure does not have to change.

For this question of a meta-reflection on science itself, text generators are again reduced to the role of possible abstract text-delivery services under the direction of humans.

SUPERVISION BY PHILOSOPHY

The fatal dilemma of all modern sciences just described must be taken seriously, since without an efficient science a sustainable reflection on the present and the future cannot be realized in the long term. If one agrees that this fatal bias of science is caused by the fact that each discipline works intensively within its own boundaries, but does not systematically organize communication and reflection beyond those boundaries, with a view to other disciplines, as a meta-reflection, then the question must be answered whether and how this deficit can be overcome.

There is only one known answer to this question: starting from the guiding concepts that are constitutive for the individual disciplines, one must search for that conceptual framework within which these guiding concepts can meaningfully interact, both in their own right and in their interaction with the guiding concepts of other disciplines.

This is genuinely the task of philosophy, here concretized in the example of the philosophy of science. It would mean, however, that each individual science would have to devote a sufficiently large part of its capacities to making the idea of the one science in maximal diversity available as a real process.

For the hard conceptual work hinted at here, text generators will hardly be able to play a central role.

COLLECTIVE INTELLIGENCE

Since no individual science so far offers a concept of intelligence that reaches beyond a single discipline, it makes little sense at first glance to apply the term intelligence to collectives. However, looking at the cultural achievements of humanity as a whole, not least with a view to the language it uses, it is undeniable that a description of the performance of an individual person remains incomplete without reference to the whole.

So if one tries to assign an overarching meaning to the expression intelligence, one will not be able to avoid deciphering this phenomenon of the human collective, in the form of complex everyday processes within a no less complex dynamic world, at least to the extent that one can identify some corresponding empirical referent for the expression intelligence, with which a comprehensible meaning could be constituted.

Of course, this term should be scalable for all biological systems, and one would need a comprehensible procedure that allows the various technical systems to be related to this concept of collective intelligence in such a way that direct performance comparisons between biological and technical systems become possible.[17]

[17] The often quoted and popular Turing Test (see: Alan M. Turing, "Computing Machinery and Intelligence", Mind, Vol. LIX, No. 236, 1950, pp. 433-460, [doi:10.1093/mind/LIX.236.433], accessed Sept 29, 2023) in no way meets the methodological requirements one would have to satisfy in order to arrive at a qualified performance comparison between humans and machines. Nevertheless, the basic idea of Turing's meta-logical text from 1936, published in 1937 (see: A. M. Turing, "On Computable Numbers, with an Application to the Entscheidungsproblem", Proceedings of the London Mathematical Society, s2-42, No. 1, 1937, pp. 230-265, [doi:10.1112/plms/s2-42.1.230], accessed Sept 29, 2023) seems to be a promising starting point: in trying to present an alternative formulation of Kurt Gödel's (1931) incompleteness proof for arithmetic, Turing carries out a meta-logical proof, and in this context he introduces the concept of a machine that was later called the Universal Turing Machine.

Already in this proof approach one can see how Turing transforms the phenomenon of a human computer (a clerk carrying out calculations) into a theoretical concept on a meta-level, by means of which he can then examine the behavior of this computer meta-logically within a specific behavioral space. His meta-logical proof not only confirmed Gödel's result, but also indicates indirectly how ultimately any complex of phenomena can be formalized on a meta-level in such a way that one can then argue about it with formal rigor.
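To make the formal machine concept mentioned above tangible, here is a minimal sketch of a Turing machine simulator. The sketch is an assumption-laden illustration: the example program (appending one symbol to a unary number) and all names are illustrative choices, not Turing's original 1936 construction.

```python
# Minimal Turing machine simulator, illustrating the formal machine
# concept Turing introduced. The example program (unary increment)
# is an illustrative choice, not Turing's original construction.

def run_turing_machine(transitions, tape, state="q0", blank="_", max_steps=1000):
    """transitions: (state, symbol) -> (new_state, write_symbol, move in {-1, +1})."""
    tape = dict(enumerate(tape))      # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in transitions:   # no applicable rule: halt
            break
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += move
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip(blank)

# Example program: move right over the unary digits, then write one
# more '1' on the first blank cell.
increment = {
    ("q0", "1"): ("q0", "1", +1),
    ("q0", "_"): ("halt", "1", +1),
}
result = run_turing_machine(increment, "111")
print(result)
```

The point of the sketch is the one Turing's proof exploits: once behavior is expressed as a finite transition table over states and symbols, that behavior itself becomes an object one can reason about formally.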

CONCLUSION STRUCTURALLY

Based on the preceding considerations, the idea of a philosophical supervision of the individual sciences, with the goal of a concrete integration of all disciplines into an overall conceptual structure, seems fundamentally possible from a philosophy-of-science perspective. From today's point of view, specific phenomena claimed by individual disciplines should no longer be a fundamental obstacle for a modern theory concept. This would clarify the basics of the concept of Collective Intelligence, and it would surely become possible to identify more clearly the interactions between human collective intelligence and interactive machines. Subsequently, the probability would increase that the supporting machines could be further optimized, so that they could also help with more demanding tasks.

CONCLUSION SUBJECTIVELY

Attempting to characterize the interactive role of text generators in a human-driven scientific discourse, assuming a certain model of science, appears to be fairly clear from a transdisciplinary (and thus structural) perspective. However, such scientific discourse represents only a sub-space of the general human discourse space. In the latter, the reception of texts inevitably also involves the subjective view of human readers [18]: people are used to suspecting a human author behind a text. With the appearance of technical aids, texts have increasingly become products that contain more and more formulations which are not written down by a human author alone, but are produced by the technical aids themselves, mediated by a human author. With the appearance of text generators, the proportion of technically generated formulations increases dramatically, up to the case where the entire text is the direct output of a technical aid. It becomes difficult or even impossible to recognize to what extent one can still speak of a controlling human share. The human author thus disappears behind the text. The sign reality does not prevent the human reader from projecting his inner world onto a potential human author, but this projection threatens to lose itself, or actually loses itself, in the real absence of a human author, facing a chimeric human counterpart. What happens in a world where people no longer have human counterparts?

[18] There is an excellent analysis on this topic by Hannes Bajohr titled “Artifizielle und postartifizielle Texte. Über Literatur und Künstliche Intelligenz” (Artificial and Post-Artificial Texts: On Literature and Artificial Intelligence). It was the Walter-Höllerer-Lecture 2022, delivered on December 8, 2022, at the Technical University of Berlin. The lecture can be accessed here [ https://hannesbajohr.de/wp-content/uploads/2022/12/Hoellerer-Vorlesung-2022.pdf ] (Accessed: September 29, 2023). The reference to this lecture was provided to me by Jennifer Becker.

CONTINUE EXPERIMENT WITH CHATGPT4: Science, Empirical Science, Philosophy of Science

Author: Gerd Doeben-Henisch

Email: info@uffmm.org

Time: Aug 31, 2023 – Aug 31, 2023 (10:03h CET)

CONTEXT

This text belongs to a series of experiments with chatGPT4. While there already exist some experiments demonstrating the low quality of chatGPT4 in many qualitative tests [1], the author of this text wants to check the ability of the chatGPT4 software to reproduce human-like behavior associated with the production and reception of texts at a higher level. I started this series with a first post here [2] and a second post here. [3]

SUMMARY SO FAR, APPROVED BY chatGPT4

Author

In our dialogue yesterday you gave me many answers. Here is a small selection of these: (1) You cannot read a book like a human person, (2) Your database is based on a subset of documents until the year 2021, (3) You don’t “learn” in the same way humans do, (4) You don’t directly cite individual sources in your responses. Instead, your answers are generated based on patterns in the data you have been trained on, (5) Individual responses from you aren’t precisely replicable, (6) You can’t directly attribute your statements to specific sources like a scholarly article or a book. This is indeed a limitation if you’re considering the output as a “scientific description”, (7) The specifics of the training data or exact processes that lead to a particular response from you aren’t transparently available in each interaction. Is this correct?

ChatGPT

Yes, your summary of our previous dialogue is accurate based on the points you’ve listed:

  1. I cannot read a book in the same way a human does, meaning I don’t interpret or comprehend in the same manner.
  2. My knowledge is up to September 2021, which means that I might not be aware of events, advancements, or publications after that point.
  3. My “learning” is different from human learning. I don’t learn from new experiences or information after my last update. Instead, I generate responses based on patterns in the data I was trained on.
  4. I don’t cite individual sources in my responses. I provide information based on the patterns recognized during my training.
  5. Responses I give can vary slightly even if the same question is posed multiple times.
  6. I can’t directly attribute my statements to specific, individual sources.
  7. The specifics about my training data and the processes that determine my responses are not transparently provided in each interaction.

You’ve captured the essence of our prior conversation well. If you have further questions or need clarifications on any of these points, please let me know.

OVERVIEW OF NEW DIALOGUE: The Questions

  1. Is there a difference in the meanings of the expressions ‘Scientific Text’ and ‘Empirical Scientific Text’?
  2. What distinguishes a ‘scientific text’ from a ‘non-scientific text’?
  3. You mention that it’s important to note that the boundaries between “scientific” and “non-scientific” can sometimes blur, especially when it comes to disciplines like philosophy of science. How do you characterize ‘philosophy of science’?
  4. Considering the aim of ‘Philosophy of Science’: Which role plays the language used for the work of a philosopher of science?
  5. You are stating that ‘language’ plays a central role for science as well as for a philosopher of science. Thus, is it correct to state that ‘Philosophy of Science’ is talking ‘about’ Science?
  6. You state that “Philosophy of Science” stands in a meta-relation to science. Is this compatible with the statement, that a person A (a philosopher of science) is talking about another person B (a scientist) by looking to the behavior of the other person?
  7. You state that the relationship between the philosophy of science and science is multifaceted and cannot be entirely captured by comparing it to one person observing another’s behavior. From your statement I induce (1) that science is an activity realized by many individuals in parallel, a ‘population of scientists’. (2) This population of scientists is following certain commonly accepted ‘rules for science’. (3) The ‘written rules’ for science have some ‘meaning’ which is located ‘inside the scientists’. (4) It is this ‘shared meaning’ inside the scientists which enables a ‘behavior’ which commonly is qualified as ‘science work’. Is this correct?
  8. You are stating that philosophy of science doesn’t just observe the norms and practices of the population of scientists from a distance. It also critically examines, interrogates, and sometimes influences them. Would you agree with the following induced statements: (1) All members of the population of scientists are sharing a commonly accepted meaning associated with the written rules of science. (2) To understand the behavior of scientists one has to learn this shared meaning inside a scientist. (3) A philosopher of science talking ‘about’ the behavior of a population of scientists has to know this shared meaning inside the scientists to be able to describe the scientific behavior. (4) Before ‘reflecting about’ the scientific behavior one has to ‘understand’ it by knowing the shared meanings.

Continuing the Dialogue

Author

Is there a difference in the meanings of the expressions ‘Scientific Text’ and ‘Empirical Scientific Text’?

ChatGPT

Yes, there is a distinction between the terms “scientific text” and “empirical scientific text.”

  1. Scientific Text: This is a broad term that refers to any text discussing scientific topics, concepts, theories, and so on. It can encompass a wide variety of writings, from theoretical papers, reviews, and hypotheses to detailed experiments and observations. The main characteristic is that it pertains to or is relevant to science. It could be based on observations, mathematical models, theoretical principles, or any other scientific method.
  2. Empirical Scientific Text: This term is more specific. “Empirical” refers to knowledge derived from observation or experiment, i.e., it’s based on actual, direct observations or experiments in the real world. So, an empirical scientific text is a subset of scientific text that specifically deals with data collected from experiments or observations. It doesn’t rely solely on theoretical arguments or models but provides evidence from real-world data.

In simpler terms, while all empirical scientific texts are scientific texts, not all scientific texts are empirical. Some might be theoretical, for example.

Author

What distinguishes a ‘scientific text’ from a ‘non-scientific text’?

ChatGPT

The distinction between a “scientific text” and a “non-scientific text” lies in the content, purpose, methodology, and structure of the writing. Here are some distinguishing features:

  1. Content:
    • Scientific Text: Deals with topics related to natural or social sciences such as biology, chemistry, physics, psychology, sociology, etc.
    • Non-scientific Text: Can deal with any subject, from literature and history to personal opinions or fictional narratives.
  2. Purpose:
    • Scientific Text: Primarily to communicate research findings, theories, methodologies, or review existing literature. It seeks to expand knowledge in a field, challenge existing notions, or validate/invalidate hypotheses.
    • Non-scientific Text: Can have various purposes including entertaining, informing, expressing opinions, narrating stories, etc., without adhering to the scientific method.
  3. Methodology:
    • Scientific Text: Adheres to the scientific method, which involves formulating hypotheses, conducting experiments or observations, analyzing data, and drawing conclusions.
    • Non-scientific Text: Does not necessarily follow any specific methodology, especially not the scientific method.
  4. Structure and Style:
    • Scientific Text: Typically has a structured format, often including sections like introduction, methodology, results, discussion, and conclusion. It uses precise and objective language.
    • Non-scientific Text: Can have a variety of structures and styles depending on the type of writing (e.g., essay, poem, opinion piece, novel). The language can be subjective, emotive, or descriptive.
  5. Evidence and Verification:
    • Scientific Text: Relies on empirical evidence, reproducibility, and peer review to validate claims.
    • Non-scientific Text: May not rely on empirical evidence and is often not subjected to peer review or validation in the same rigorous manner as scientific texts.
  6. Audience:
    • Scientific Text: Often written for other scientists, researchers, or professionals in the field, although there are also scientific texts intended for broader audiences.
    • Non-scientific Text: Can be aimed at the general public, specific interest groups, or any other audience.

That being said, it’s important to note that the boundaries between “scientific” and “non-scientific” can sometimes blur, especially when it comes to disciplines like philosophy of science or when scientific concepts are discussed in popular literature. Moreover, some non-scientific texts, like historical analyses, may incorporate rigorous methodologies and provide evidence for their claims even if they don’t adhere strictly to the scientific method.

Author

You mention that it’s important to note that the boundaries between “scientific” and “non-scientific” can sometimes blur, especially when it comes to disciplines like philosophy of science. How do you characterize ‘philosophy of science’?

ChatGPT

Certainly, the philosophy of science is a fascinating discipline that delves into the foundational, methodological, and epistemological questions surrounding science. Here’s a more detailed characterization:

  1. Nature and Purpose:
    • Philosophy of science aims to understand the nature and purpose of scientific activity. It raises questions about what science is, what its objectives are, and what makes scientific activity distinctive from other human pursuits.
  2. Foundational Issues:
    • Philosophers of science explore the basic concepts and principles underlying scientific theories and practices. For example, they might investigate the nature of causation, the concept of a scientific law, or the interpretation of probability in statistical mechanics.
  3. Methodological Examination:
    • It studies the methods used in science, asking questions like: What is the scientific method? Are there universal methods that apply to all scientific disciplines, or do methods differ across fields? How do scientists confirm or falsify theories?
  4. Theory and Observation:
    • One central concern in the philosophy of science is the relationship between theory and observation. Philosophers examine how theories are tested, how they relate to observational data, and the problem of underdetermination (the idea that data can be explained by multiple, sometimes incompatible, theories).
  5. Scientific Explanation:
    • What does it mean to explain a phenomenon scientifically? Philosophers of science analyze different models of explanation, like deductive-nomological, probabilistic, or causal-mechanistic explanations.
  6. Scientific Change and Progress:
    • Philosophers investigate how scientific theories evolve over time, what constitutes scientific progress, and how paradigm shifts (a term popularized by Thomas Kuhn) occur in scientific disciplines.
  7. Realism and Anti-realism:
    • This debate concerns the nature of scientific theories and their relationship to the world. Realists believe that scientific theories aim to describe the world as it truly is, while anti-realists believe that theories are just tools for prediction and may not necessarily reveal the true nature of reality.
  8. Values in Science:
    • Philosophy of science also studies the role of values, both epistemic (related to knowledge) and ethical, in scientific practice. For instance, it may question how societal values influence scientific research or how objective science can truly be.
  9. Interdisciplinary Concerns:
    • The philosophy of science often intersects with other disciplines, such as the history of science, sociology of science, and cognitive science, to provide a holistic understanding of scientific practices.

Philosophy of science doesn’t just deal with abstract or purely theoretical concerns. Many of its insights have practical implications for how science is conducted, taught, funded, and communicated. By scrutinizing the foundational aspects of scientific activity, the philosophy of science enriches our understanding and appreciation of the scientific endeavor.

Author

Considering the aim of ‘Philosophy of Science’: Which role plays the language used for the work of a philosopher of science?

ChatGPT

The role of language in the philosophy of science is crucial and multifaceted. Philosophers of science, like philosophers in other areas, are deeply concerned with the nuances, implications, and limitations of language. Here’s why language plays a central role:

  1. Precision in Definition: The clarity of philosophical arguments often hinges on the precise definition of terms. Philosophers of science seek to define terms like “causation,” “law,” “theory,” “model,” “proof,” and “evidence” in ways that capture their use in scientific practice and distinguish them from other, sometimes colloquial, meanings.
  2. Analysis of Theoretical Constructs: Many concepts in science, like “atom,” “gene,” or “black hole,” are theoretical constructs. Philosophers of science investigate the language that scientists use to describe these constructs and how that language relates to observable phenomena.
  3. Interpretation of Models and Theories: Philosophers analyze the language of scientific models and theories to understand their structure, implications, and limitations. For instance, they might scrutinize how particular terms are used metaphorically or analogically.
  4. Examining Paradigms and Frameworks: As mentioned with Thomas Kuhn’s concept of paradigms, the language and conceptual framework of one scientific paradigm might differ radically from another. Philosophers of science explore these differences and their implications.
  5. Exploring Linguistic Structures: Some philosophers, influenced by linguistic philosophy or semiotics, delve into the deeper structures of scientific language. They might ask how the language of a scientific discipline shapes and is shaped by its practices.
  6. Addressing Ambiguities: Philosophers of science highlight and address ambiguities in scientific language, clarifying potential misunderstandings or misinterpretations.
  7. Role of Metaphor and Analogy: Science frequently uses metaphors and analogies to describe complex phenomena (e.g., the “solar system model” of the atom or “genetic code”). Philosophers of science examine the strengths, limitations, and potential pitfalls of these linguistic tools.
  8. Ethical and Societal Implications: Language doesn’t just describe the world; it also shapes how we perceive and interact with it. Philosophers of science may critique the language of science for its ethical or societal implications, such as whether it inadvertently reinforces certain biases or worldviews.
  9. Communication and Pedagogy: Language is the primary medium through which scientific knowledge is communicated, both within the scientific community and to the public. Philosophers might investigate how language can facilitate or hinder scientific understanding, especially in educational contexts.

In summary, for philosophers of science, language is not just a neutral tool but a complex system that influences, and is influenced by, scientific thought and practice. Analyzing and critiquing the language of science helps illuminate the foundational assumptions, methodologies, and implications of scientific activity.

Author

You are stating that ‘language’ plays a central role for science as well as for a philosopher of science. Thus, is it correct to state that ‘Philosophy of Science’ is talking ‘about’ Science?

ChatGPT

Yes, that’s a concise and accurate way to put it. The “Philosophy of Science” talks ‘about’ science. It reflects on, analyzes, and critiques the foundational, methodological, epistemological, and sometimes ethical aspects of scientific activity. Philosophers of science are interested in understanding what scientists do, how they do it, why they do it in that way, and what it means to produce scientific knowledge.

In other words, while scientists are primarily concerned with investigating and understanding the natural world (or human societies, in the case of social sciences), philosophers of science are more concerned with investigating and understanding the processes, methodologies, and frameworks that scientists use in their investigations.

So, “Philosophy of Science” stands in a meta-relation to science. Just as meta-ethics in philosophy is concerned with the nature of ethical statements rather than the ethical statements themselves, the philosophy of science is concerned with the nature, methodology, and implications of science rather than the specific scientific facts or theories.

Author

You state that “Philosophy of Science” stands in a meta-relation to science. Is this compatible with the statement, that a person A (a philosopher of science) is talking about another person B (a scientist) by looking to the behavior of the other person?

ChatGPT

The analogy of a person A (a philosopher of science) talking about another person B (a scientist) by observing B’s behavior can capture some aspects of the meta-relation between philosophy of science and science, but with certain caveats.

In this analogy:

  • Observation of Behavior: The philosopher of science (Person A) observes, reflects upon, and tries to understand the methodologies, assumptions, and structures that underpin the work of the scientist (Person B). This can be likened to observing the “behavior” of the scientist.
  • Interpretation and Analysis: Just as someone might analyze and interpret a person’s behavior to understand underlying motivations, thoughts, or cultural influences, the philosopher of science analyzes the “behavior” of science to uncover its foundational assumptions, methodologies, and epistemological commitments.

However, the analogy has its limitations:

  1. Beyond Individual Behavior: While the analogy implies observing the actions of an individual (Person B), the philosophy of science often deals with broader structures, paradigms, and historical shifts in scientific thinking, not just the practices of individual scientists.
  2. Engagement with Content: Philosophers of science don’t just observe the “behavior” of scientists. They often engage deeply with the actual content of scientific theories and models, trying to understand and interpret them. It’s not merely an external observation.
  3. Reciprocal Influence: In some cases, the philosophy of science has influenced the way scientists think about their own work. For instance, philosophical discussions about the nature of time, space, or causality might impact theoretical physics. This reciprocal influence doesn’t fit neatly into the analogy.

In summary, while the analogy can serve as a starting point to grasp the meta-relation, the relationship between the philosophy of science and science is multifaceted and cannot be entirely captured by comparing it to one person observing another’s behavior.

Author

You state that the relationship between the philosophy of science and science is multifaceted and cannot be entirely captured by comparing it to one person observing another’s behavior. From your statement I induce (1) that science is an activity realized by many individuals in parallel, a ‘population of scientists’. (2) This population of scientists is following certain commonly accepted ‘rules for science’. (3) The ‘written rules’ for science have some ‘meaning’ which is located ‘inside the scientists’. (4) It is this ‘shared meaning’ inside the scientists which enables a ‘behavior’ which commonly is qualified as ‘science work’. Is this correct?

ChatGPT

Your inferences capture important aspects of the relationship between science, scientists, and the philosophy of science, with some qualifications. Let’s review your points:

  1. Population of Scientists: Yes, science is indeed an activity carried out by many individuals, often collaboratively. These individuals collectively form the scientific community or a “population of scientists.”
  2. Rules for Science: Scientists, especially within specific disciplines, generally adhere to a set of methodologies, standards, and norms—often referred to as the “scientific method” or the ethos of science. These can include practices such as peer review, empirical testing, reproducibility, and falsifiability, among others. These “rules” may vary across disciplines but serve to maintain the rigor and credibility of scientific research.
  3. Meaning Inside Scientists: It’s a nuanced point. While there are “written rules” or guidelines for conducting science, the internalization of these rules goes beyond just written directives. Scientists undergo extensive training and socialization, learning not just the overt rules but also the tacit knowledge, values, and norms of their discipline. This internalized understanding guides their scientific practice.
  4. Shared Meaning & Behavior: The shared understanding and internalization of the ethos of science by the scientific community do indeed result in behaviors characteristic of scientific inquiry. This shared foundation helps in ensuring consistency, rigor, and mutual intelligibility in scientific work.

However, it’s also important to note:

  • The evolution of scientific norms and practices: The “rules” and norms of science aren’t static; they evolve over time as science itself progresses and as the broader societal context changes.
  • Diversity within science: Different scientific disciplines might have variations in their methodologies, standards, and practices. For instance, methods in particle physics can differ considerably from ethnographic methods in cultural anthropology, though both are scientific.
  • Role of philosophy of science: The philosophy of science doesn’t just observe these norms and practices from a distance. It critically examines, interrogates, and sometimes influences them. Philosophers of science may raise questions about the nature of evidence, the interpretation of statistical methods, the structure of scientific theories, or the ethical implications of certain scientific practices.

In summary, your inferences offer a structured way of understanding the practice of science and its underlying shared ethos, with the aforementioned nuances and complexities taken into account.

Author

You are stating that philosophy of science doesn’t just observe the norms and practices of the population of scientists from a distance. It also critically examines, interrogates, and sometimes influences them. Would you agree with the following induced statements: (1) All members of the population of scientists are sharing a commonly accepted meaning associated with the written rules of science. (2) To understand the behavior of scientists one has to learn this shared meaning inside a scientist. (3) A philosopher of science talking ‘about’ the behavior of a population of scientists has to know this shared meaning inside the scientists to be able to describe the scientific behavior. (4) Before ‘reflecting about’ the scientific behavior one has to ‘understand’ it by knowing the shared meanings.

ChatGPT

Your inferences touch on core ideas related to the nature of scientific practices and the approach of philosophy of science. Let’s review each of your points:

  1. Shared Meaning Among Scientists: While it’s generally true that there’s a commonly accepted ethos or set of standards within scientific disciplines, it’s essential to remember that “shared meaning” isn’t necessarily uniform. Different scientists, based on their training, specific subfields, cultural backgrounds, and individual experiences, might have variations in their interpretations and applications of these rules. However, there’s indeed a foundational consensus that allows for mutual intelligibility and collaboration within the scientific community.
  2. Understanding the Shared Meaning: To understand the behavior of scientists fully, one would benefit from understanding this shared ethos and the underlying tacit knowledge. This includes both formal methodologies and the more subtle, unwritten norms and values of scientific practice.
  3. Philosopher’s Knowledge of Shared Meaning: Yes, for a philosopher of science to offer a meaningful critique, analysis, or description of scientific practices, they need to be familiar with the shared meanings, methods, and values of scientists. However, philosophers also bring an external perspective that can question assumptions and probe deeper into the foundations of these shared meanings.
  4. Understanding Before Reflecting: In general, understanding is a prerequisite for meaningful reflection. To critically analyze or reflect upon scientific practices, one first needs a foundational understanding of those practices. This is akin to the idea that one must understand a game’s rules and objectives before commenting on strategies or critiquing its design.

However, a few considerations:

  • Dynamism of Science: It’s important to recognize that the shared meanings and practices of science aren’t static. They evolve with new discoveries, technological advancements, and shifts in the broader societal context. Philosophers of science not only reflect on current practices but also on these evolutionary changes.
  • Interdisciplinary Nature: The philosophy of science often interacts with other disciplines, such as sociology of science, history of science, or cognitive science. These interdisciplinary engagements enrich the understanding and reflection on scientific practices.
  • Role of Outsider’s Perspective: While understanding from within is crucial, sometimes an outsider or a more distanced perspective can provide insights that might be less obvious to those deeply embedded within a discipline. Philosophers of science can provide this vantage point, revealing biases, assumptions, or unexplored avenues.

In summary, your inferences capture significant aspects of how philosophy of science engages with the practices and underlying meanings of science. The relationship is one of understanding, reflection, critique, and, sometimes, influence.

COMMENTS

[1] There was a conference at the Technical University of Darmstadt with the title “KI – Text und Geltung. Wie verändern KI-Textgeneratoren wissenschaftliche Diskurse?” (Translated: AI – Text and Validity. How do AI text generators change scientific discourses?), https://zevedi.de/en/topics/ki-text-2/ . One lecturer, Thomas Arnold, gave a talk entitled “Herausforderungen in der Forschung: Mangelnde Reproduzierbarkeit und Erklärbarkeit” (Translated: Challenges in research: lack of reproducibility and explainability), which will be published by de Gruyter at the end of 2023 or the beginning of 2024.

[2] Gerd Doeben-Henisch, Aug 28, 2023 : STATE CHANGE FROM NON-WRITING TO WRITING. WORKING WITH CHATGPT4 IN PARALLEL. AN EXPLORATION, https://www.uffmm.org/2023/08/28/state-change-from-non-writing-to-writing-working-with-chatgpt4-in-parallel/

[3] Gerd Doeben-Henisch, Aug 30, 2023: READING A BOOK, LEARNING, BEING SCIENTIFIC, WIKIPEDIA. A dialogue with chatGPT4 bringing you ‘back to earth’ : https://www.uffmm.org/2023/08/30/reading-a-book-learning-being-scientific-wikipedia-a-dialogue-with-chatgpt4-bringing-you-back-to-earth/

READING A BOOK, LEARNING, BEING SCIENTIFIC, WIKIPEDIA. A dialogue with chatGPT4 bringing you ‘back to earth’

Author: Gerd Doeben-Henisch

Aug 30, 2023 – Aug 30, 2023

Email: info@uffmm.org

CONTEXT

This text belongs to a series of experiments with chatGPT4. While there are by now some experiments demonstrating the low quality of chatGPT4 in many qualitative tests [1], the author of this text wants to check the ability of the chatGPT4 software to reproduce human-like behavior associated with the production and reception of texts on a higher level. I started this series with a first post here.[2]

SUMMARY

In the following series of dialogues with chatGPT4, the software stated at the beginning that it cannot read a book like a human person. When asked about a particular book, “Das Experiment sind wir” by the author Christian Stöcker, the software answered that it doesn’t know the book. I then asked the software whether it can nevertheless ‘learn’ something from the documents given to it up to the year 2021. The answer was quite revealing: “No, I don’t “learn” in the same way humans do. I don’t actively process new information, form new connections, or change my understanding over time. Instead, my responses are based on patterns in the data I was trained on. My knowledge is static and doesn’t update with new information unless a new version of me is trained by OpenAI with more recent data.” I didn’t give up trying to understand what the software is able to do, if not ‘reading a book’ or ‘learning’. I asked: “What can you extract from your text base related to the question what can happen in the head of people if they read a book?” I got a list of general properties summarized from different sources (without these sources being mentioned). I then asked the software not about the learning inside itself but about what it knows about learning inside human persons who read a book: “How do you describe the process of learning in human persons while reading a book?” The software mentioned some aspects that have to be considered while a human person reads a book. But this gives only a ‘static’ view of the structures active in the process of reading. More interesting is perhaps a ‘dynamic’ aspect of reading a book, which can be circumscribed as ‘learning’: which kinds of change are typical for human persons while reading a book?
The software gives a detailed breakdown of the learning process in human persons while reading, but again these are statements without any ‘backing up’: no sources, no contexts. In the everyday world, however, and especially in science, it is necessary that a reader can learn the sources of statements claiming to be serious: are the presented facts mere ‘fakes’, simply wrong, or are there serious people with serious methods of measurement who can provide some certainty that these facts are really ‘true’? The author therefore asked the software: “Your answer consists of many interesting facts. Is it possible to clarify the sources you are using for these statements?” The answer again was very simple: “I’m glad you found the information interesting. My design is based on information from a wide range of books, articles, and other educational resources up to my last update in September 2021. However, I don’t directly cite individual sources in my responses. Instead, my answers are generated based on patterns in the data I was trained on.” Because the Wikipedia encyclopedia is known to always provide explicit sources for its texts, the author asked directly: “Do you know the text base Wikipedia?” Clearly, chatGPT4 knows Wikipedia; besides many positive remarks it answered: “The structure of Wikipedia articles, with their citations, can serve as a good starting point for research on a topic. One can follow the cited sources at the bottom of each article to get more detailed or primary information on a subject.” This sounds quite good for Wikipedia. Thus it is interesting how chatGPT4 compares itself to Wikipedia. Question: “Wikipedia has cited sources at the bottom of each article. Does this mean that Wikipedia is more trustworthy than you?” chatGPT4 doesn’t give a clear answer; it summarizes points for Wikipedia and for itself, but leaves the question open.
Thus the author went on to ask for a more ‘general’ criterion that should be valid for both Wikipedia and chatGPT4: “Which criteria do you know for a good scientific description?” As usual, chatGPT4 gives a long list of criteria. The author asks back: “Your overview of criteria for a scientific description includes terms like ‘Replicability’, ‘Citing Sources’, and ‘Transparency’. From your preceding self-description it is not easy to see whether these criteria can also be applied to your output.” The answer chatGPT4 gives renders it nearly unusable for any serious scientific work. Yes, you can use chatGPT4 for some first general information, but without knowing more by yourself you are really lost. It is surely far more fruitful to consult Wikipedia directly and use the sources there for further investigation.

Reading a book

I started spontaneously to talk with chatGPT4 about a book by Christian Stöcker with the title “Das Experiment sind wir”, published in 2020.

Here is the dialogue:

Author:

Can you read a book?

ChatGPT

I don’t “read” books in the same way humans do. Instead, I was trained on vast amounts of text data, which includes information from books, articles, websites, and other written sources up to my last update in September 2021.

You can ask me about the content, summaries, themes, and other information from many books, especially those that are well-known or that were published before my last training cut-off. If the book was published after September 2021 or is particularly obscure, I might not have information about it.

If you have a particular book or topic in mind, feel free to ask, and I’ll do my best to provide relevant information!

Author:

What can you say about the book entitled “Das Experiment sind wir” by the author Christian Stöcker?

ChatGPT

As of my last update in September 2021, “Das Experiment sind wir” by Christian Stöcker hasn’t been one of the primary sources I was trained on. However, Christian Stöcker is known as a journalist and professor in the field of digital communication.

If “Das Experiment sind wir” was published or gained significant attention after my last update, I may not have detailed information about its content, themes, or specific insights.

For a detailed overview or analysis of the book, I would recommend reading reviews, summaries, or author interviews, or accessing the book directly. If you have other questions or need information on a related topic, I’m here to help!

RESULT

OK, chatGPT4 doesn’t know this book. That is fine; even a high-performance system cannot know everything.

But its admission that it cannot read a book as a human person would do is interesting. In this sense chatGPT4 is not a ‘typical reader’ but, as we indeed know, a piece of software, managed by human persons who decide which kinds of texts chatGPT4 has to ‘store’ in its memory and which form the basis for its ‘algorithmic processing’.

But the question remains whether chatGPT4 can at least learn on its own, starting from the text base given to it.

Author

Thus, if you have a text base as a point of reference, are you able to learn in some sense on your own?

ChatGPT

No, I don’t “learn” in the same way humans do. I don’t actively process new information, form new connections, or change my understanding over time. Instead, my responses are based on patterns in the data I was trained on. My knowledge is static and doesn’t update with new information unless a new version of me is trained by OpenAI with more recent data.

Here’s an analogy: Imagine an extremely comprehensive book. This book can answer a vast array of questions based on its contents, but it can’t write new chapters or update its pages on its own. In a similar manner, I can generate answers based on the data I was trained on, but I can’t independently update that data or “learn” new things.

So, while I can generate responses based on a broad range of existing knowledge, I can’t assimilate or understand new information or events after my last training cut-off in September 2021.

RESULT

As expected, chatGPT4 cannot learn as we humans can. Generating summaries can be of some help, but this cannot replace ‘true learning’.

But let us stay with the topic of ‘reading a book’:

Author

What can you extract from your text base related to the question what can happen in the head of people if they read a book?

ChatGPT

Reading a book can have various cognitive, emotional, and psychological effects on individuals. Here are some insights based on literature, cognitive science, and psychological studies up to my last training cut-off:

  1. Cognitive Stimulation: Engaging with the written word can stimulate the brain. Studies have suggested that mental stimulation can slow the progress of (or possibly even prevent) Alzheimer’s and dementia, as keeping your brain active and engaged prevents it from losing power.
  2. Mental Escape: For many, reading a book can act as an escape from the real world. Immersing oneself in a different world, whether it’s a fantasy realm or an historical setting, provides a break from everyday stressors.
  3. Increased Knowledge: Naturally, books inform and educate. This builds a knowledge base that can be applied in real-life situations.
  4. Vocabulary Expansion: Reading regularly can expand one’s vocabulary more than direct teaching or verbal communications.
  5. Improved Focus and Concentration: Unlike skimming short web articles or watching quick videos, reading a book requires a longer attention span and can help improve one’s focus and concentration.
  6. Stress Reduction: A well-written novel or non-fiction book can transport the reader to other realms, distracting them and helping them to de-stress more effectively than other methods.
  7. Improved Analytical Skills: Engaging with complex narratives and plots can help readers improve their analytical skills as they ponder plot developments and character motivations.
  8. Empathy Development: Reading fiction, in particular, allows readers to live vicariously through characters, experiencing a vast range of emotions and situations. This can help in developing a deeper understanding and empathy for others.
  9. Mental Tranquility: Reading subjects like poetry or certain religious texts can provide tranquility, help lower blood pressure, and reduce stress.
  10. Memory Improvement: Reading involves remembering various characters, their backgrounds, histories, and nuances. This can create new brain pathways and strengthen existing ones.
  11. Entertainment: Of course, one of the primary reasons people read is for entertainment. A good story can be incredibly engaging, offering hours of enjoyment.

It’s worth noting that the impact of reading can vary based on the type of book (e.g., fiction vs. non-fiction), the style of writing, the personal experiences and preferences of the reader, and the context in which they’re reading.

RESULT

chatGPT4 mentions some aspects that have to be considered while a human person reads a book. This gives only a ‘static’ view of the structures active in the process of reading.

More interesting is perhaps a ‘dynamic’ aspect of reading a book, which can be circumscribed as ‘learning’: which kinds of change are typical for human persons while reading a book?

Author

How do you describe the process of learning in human persons while reading a book?

ChatGPT

Reading a book is a complex cognitive process that involves multiple levels of mental engagement and comprehension. When a person reads, several processes related to learning occur simultaneously. Here’s a detailed breakdown of the learning process while reading:

  1. Perceptual Processing: This is the basic recognition of words and letters. As your eyes scan a page, the visual cortex interprets the shapes of letters and groups of letters as words. This process becomes automatic with practice and training.
  2. Decoding: This involves understanding the phonetic value of words, which is especially crucial for unfamiliar terms or names. This skill is emphasized heavily in early reading education when children sound out words.
  3. Semantic Processing: Once the words are recognized, the brain works to understand their meaning. This involves linking the words to their associated concepts in the brain.
  4. Syntax and Grammar Processing: The brain understands the structure of sentences and how words relate to each other, determining the relationships among verbs, nouns, adjectives, etc.
  5. Working Memory: This is essential for maintaining an understanding of a sentence or paragraph. As you read, you keep the beginning of a sentence in mind as you progress to the end, allowing you to grasp the entire concept.
  6. Building Connections: Learning often involves connecting new information with prior knowledge. When reading, you constantly relate what you are consuming to what you already know, which can deepen comprehension.
  7. Critical Thinking: Beyond just understanding the content, readers often evaluate, analyze, and critique what they’re reading. They may ask themselves if they agree with the author, if the information is credible, or how it relates to other things they know.
  8. Visualization: Good readers often create mental images of what they are reading. This is especially true for narrative texts where imagining scenes, characters, or events can enhance comprehension.
  9. Emotional Engagement: The content in books can evoke emotions. Whether it’s a novel that tugs at the heartstrings or a non-fiction book that arouses anger or passion about a topic, emotional engagement can increase retention and internalization of content.
  10. Metacognition: This involves thinking about one’s own thinking. Effective readers often monitor their understanding as they read, recognizing when they’re confused and employing strategies to improve comprehension.
  11. Retention & Long-Term Memory: Once the book is done, the process of shifting information from working memory to long-term memory begins. Repetition, discussion, and application of knowledge can aid this transfer.
  12. Application: The ultimate goal of learning is often to apply knowledge. After reading, a person may take actions, make decisions, or change beliefs based on what they’ve learned.

It’s important to note that everyone’s reading process might be a bit different. Some people naturally visualize more, while others might focus on critical analysis. Additionally, background knowledge, context, and the reader’s purpose (e.g., reading for pleasure versus studying for an exam) can influence how these processes unfold.

RESULT

chatGPT4 presents some facts in response to the question which are, as such, really interesting. But in the everyday world, and especially in science, it is necessary that a reader can learn the sources of statements claiming to be serious: are the presented facts mere ‘fakes’, simply wrong, or are there serious people with serious methods of measurement who can provide some certainty that these facts are really ‘true’?

Let us ask chatGPT4:

Author

Your answer consists of many interesting facts. Is it possible to clarify the sources you are using for these statements?

ChatGPT

I’m glad you found the information interesting. My design is based on information from a wide range of books, articles, and other educational resources up to my last update in September 2021. However, I don’t directly cite individual sources in my responses. Instead, my answers are generated based on patterns in the data I was trained on.

The steps and processes I outlined for learning while reading a book are based on widely accepted theories and research in the fields of cognitive psychology, education, and linguistics. If you’re interested in diving deeper into these topics, you might consider exploring:

  1. Cognitive psychology textbooks: These delve into the processes of perception, working memory, long-term memory, and more.
  2. Reading and literacy research: Scholars in this field have conducted in-depth studies on the processes of decoding, comprehension, and the strategies good readers use.
  3. Educational psychology: This field explores topics related to learning, including metacognition, critical thinking, and the application of knowledge.

If you’d like specific book or article recommendations, or more in-depth explanations on any of these topics, I can certainly provide them.

Author

Do you know the text base Wikipedia?

ChatGPT

Yes, I’m familiar with Wikipedia. Wikipedia is a free online encyclopedia that is collaboratively edited by volunteers from around the world. It’s one of the most visited websites globally and provides information on a vast array of topics.

Wikipedia is known for its “wiki” nature, which means that almost any article can be edited by any user with internet access. This collaborative approach has made Wikipedia a rich and ever-evolving source of information. However, because of this open-editing system, it’s always a good idea to verify critical information from Wikipedia with other reliable sources.

The structure of Wikipedia articles, with their citations, can serve as a good starting point for research on a topic. One can follow the cited sources at the bottom of each article to get more detailed or primary information on a subject.

Would you like more information or have a specific question related to Wikipedia?

RESULT

chatGPT4 knows Wikipedia and gives a quite fair description of it (probably of the English version). Of interest is how chatGPT4 describes its characteristic differences from Wikipedia.

Author

Wikipedia has cited sources at the bottom of each article. Does this mean that Wikipedia is more trustworthy than you?

ChatGPT

Wikipedia and I serve different purposes, and the trustworthiness of information from either source depends on the context and specific use-case.

Advantages of Wikipedia:

  1. Citations: Wikipedia articles often have extensive lists of references, which allow readers to trace back the origin of specific pieces of information. This provides a level of transparency.
  2. Collaboration: Wikipedia is collectively edited by a vast number of individuals, which can mean many eyes reviewing and refining content.
  3. Current Events: Wikipedia is often updated rapidly in the face of current events, so for very recent information, it can sometimes be more up-to-date than my last training data.

Advantages of ChatGPT (me):

  1. Broad Training Data: I’ve been trained on diverse datasets, including books, articles, and other content. This allows me to provide a wide range of information.
  2. Consistent and Neutral: While Wikipedia aims for a neutral point of view, individual articles might sometimes reflect biases of the contributors. My design aims to minimize bias and provide consistent responses.
  3. Conversational Interface: Users can have a back-and-forth dialogue with me, making it easier to ask follow-up questions or dive deeper into topics.

Considerations:

  1. Vandalism: Wikipedia can be edited by anyone, which sometimes leads to misinformation due to vandalism. However, popular pages often have vigilant editors and protective measures in place.
  2. Knowledge Cutoff: My last training data is from September 2021, so events or advancements after that date are not within my current knowledge.

Ultimately, neither Wikipedia nor I should be seen as infallible. Both can serve as starting points for research, but critical information should be cross-referenced with primary sources or authoritative references. It’s always a good practice to be discerning and critical when interpreting information from any source.

RESULT

The characterization of Wikipedia is still quite fair. But in some sense chatGPT4 ‘downplays’ its lack of real sources, which are the basis of any serious argument.

Author

Which criteria do you know for a good scientific description?

ChatGPT

Scientific descriptions are meant to be precise, clear, and replicable. To ensure a high-quality scientific description, several criteria should be met:

  1. Accuracy: The description should correctly represent observations, data, or results without exaggeration or misrepresentation.
  2. Clarity: The language used should be clear and free from ambiguity. It should be easily understood by those with appropriate background knowledge in the field.
  3. Completeness: All relevant details should be included, especially those that would be necessary for replication or to understand the context of the study.
  4. Consistency: Descriptions should be consistent in terms of units of measurement, terminology, and methodology.
  5. Objectivity: The description should be free from personal bias. Instead of using subjective terms, rely on objective measurements and observations.
  6. Replicability: Enough detail should be provided so that another scientist with the same background knowledge and resources can replicate the study or experiment.
  7. Relevance: Only pertinent details should be included. While completeness is important, overloading a description with irrelevant information can detract from its usefulness.
  8. Use of Standardized Terminology: Using accepted and standardized terms in the field ensures that the description is understood universally among experts.
  9. Citing Sources: Any claim, idea, or method that isn’t original to the current study should be appropriately cited, giving credit to the original sources and allowing readers to delve deeper into the background if they wish.
  10. Avoiding Jargon (when possible): While specialized terms can be necessary, over-reliance on jargon can alienate those slightly outside the immediate field. When jargon is used, it should be defined or explained.
  11. Providing Context: A good scientific description often includes background information or a literature review to give context to the study’s significance and position within the broader field.
  12. Transparency: Limitations, potential sources of error, and uncertainties should be clearly stated. This honesty allows readers to gauge the reliability of the findings and understand any potential caveats.

A good scientific description is essentially a balance between precision and clarity. It should be detailed enough to be of value to experts in the field while still being accessible to those with a general background in the discipline.

RESULT

The overview of criteria for a scientific description includes terms like ‘Replicability’, ‘Citing Sources’, and ‘Transparency’. From the preceding self-description of chatGPT4 it is not easily to see, whether these criteria can also applied to the output of chatGPT4.

Let’s turn this into a question:

Author

Your overview of criteria for a scientific description includes terms like ‘Replicability’, ‘Citing Sources’, and ‘Transparency’. From your preceding self-description it is not easy to see whether these criteria can also be applied to your output.

ChatGPT

That’s an insightful observation. Let’s discuss the applicability of some of these criteria to ChatGPT (like me) and its outputs:

  1. Replicability: While the underlying research and development process of models like mine can be replicated in theory, the specific outputs for each query might vary slightly, even with similar inputs. Therefore, individual responses from me aren’t precisely replicable.
  2. Citing Sources: I don’t provide direct citations for specific statements in my responses. My training data comes from diverse sources, and I generate responses based on patterns in that data. Thus, I can’t directly attribute my statements to specific sources like a scholarly article or a book. This is indeed a limitation if you’re considering the output as a “scientific description.”
  3. Transparency: OpenAI, the organization behind me, has released details about the architecture, training methodology, and intentions behind models like mine. However, the specifics of the training data or exact processes that lead to a particular response aren’t transparently available in each interaction.

In light of this:

  • For scholarly or academic purposes, my responses should be considered as starting points or general overviews. Critical information should always be cross-referenced with authoritative and primary sources.
  • For general inquiries or casual information, my design is to provide accurate and coherent answers based on the vast amount of data I’ve been trained on.

It’s crucial to recognize the intended use and context when considering the applicability of scientific criteria to any tool or source, including ChatGPT.

COMMENTS

[1] There was a conference at the Technical University of Darmstadt with the title “KI – Text und Geltung. Wie verändern KI-Textgeneratoren wissenschaftliche Diskurse?” (Translated: AI – Text and Validity. How do AI text generators change scientific discourses?), https://zevedi.de/en/topics/ki-text-2/ . One lecturer, Thomas Arnold, gave a talk entitled “Herausforderungen in der Forschung: Mangelnde Reproduzierbarkeit und Erklärbarkeit” (Translated: Challenges in research: lack of reproducibility and explainability), which will be published by de Gruyter at the end of 2023 or the beginning of 2024.

[2] Gerd Doeben-Henisch, STATE CHANGE FROM NON-WRITING TO WRITING. WORKING WITH CHATGPT4 IN PARALLEL. AN EXPLORATION, https://www.uffmm.org/2023/08/28/state-change-from-non-writing-to-writing-working-with-chatgpt4-in-parallel/

State Change from Non-Writing to Writing. Working with chatGPT4 in parallel. An Exploration

Author: Gerd Doeben-Henisch

Email: info@uffmm.org

Aug 28, 2023 – Aug 28, 2023 (18:10h CET)

CONTEXT: Man and Machine. One head against billions of documents …

The author has started an experiment writing two tracks in parallel: the first track is his own writing without chatGPT4, and the second track is a chat with chatGPT4. The motivation behind this experiment is to get a better sense of how the writing of chatGPT4 differs. While chatGPT4 works only with statistical patterns on the ‘surface’ of language communication, the author exploits his individual ‘meaning knowledge’, built up in his inner states over many years of individual learning as part of certain ‘cultures’ and ‘everyday situations’. Clearly, the individual author’s knowledge of available texts is vastly smaller than the knowledge base of chatGPT4. Thus it is an interesting question whether individual knowledge is, in general, simply inferior to the text-generation capacity of the chatGPT4 software.

While the premises of the original article cannot be easily transferred to the dialogue with chatGPT4, one can still get some sense of where chatGPT4’s strengths lie on the one hand, and its weaknesses on the other. For ‘authentic’ writing, replacement by chatGPT4 is not an option, but a ‘cooperation’ between ‘authentic’ writing and chatGPT seems possible, and in some contexts certainly even fruitful.

What the Author did Write:

CONTINUOUS REBIRTH – Now. Silence does not help … Exploration

(This text is a translation from a German source, produced for nearly 90-95% by the deepL software, without the need for any changes afterwards. [*])

CONTEXT

As mentioned in the previous blog entry, the last blog entry on the topic “Homo Sapiens: empirical and sustainable-empirical theories, emotions, and machines. A Sketch” [1] served as a draft speech for the conference “AI – Text and Validity. How do AI text generators change scientific discourse?” [2]. Due to the tight time frame (20 min for the talk), this text was very condensed. However, despite its brevity, the condensed text contains many references to fundamental issues. One of them will be briefly taken up here.

Localization in the text of the lecture

A first hint is found in the section headed “FUTURE AND EMOTIONS” and then in the section “SUSTAINABLE EMPIRICAL THEORY”.

In these sections of the text, attention is drawn to the fact that every explicit worldview is preceded by a phase of ’emergence’ of that worldview, in which there is a characteristic ‘transition’ between the ‘result’ in the form of a possible text and a preceding state in which that text is not yet ‘there’. Of course, in this ‘pre-text phase’ – according to the assumptions in the section “MEANING” – there are many items of knowledge and emotions in the brain of the ‘author’, all of which are ‘not conscious’, but from which the possible effects in the form of a text emanate. How exactly an ‘impact’ is supposed to emanate from this ‘pre-conscious’ knowledge and these emotions remains largely unclear.

We know from everyday life that external events can trigger ‘perceptions’, which in turn can ‘stimulate’ a wide variety of reactions as ‘triggers’. If we disregard such reactions, which are ‘pre-trained’ in our brain due to frequent practice and which are then almost always ‘automatically’ evoked, it is as a rule hardly predictable whether and how we react.

From non-action to action

This potential transition from non-action to action is omnipresent in everyday life. As such, we normally do not notice it. In special situations, however, where we are explicitly challenged to make decisions, we then suddenly find ourselves in a quasi ‘undefined state’: the situation appears to us as if we are supposed to decide. Should we do anything at all (eat something or not)? Which option is more important (go to the planned meeting or rather finish a certain task)? We want to implement a plan, but which of the many options should we ‘choose’ (… there are at least three alternatives; all have pros and cons)? Choosing one option as highly favored would involve significant changes in one’s life situation; do I really want this? Would it be worth it? The nature of the challenges is as varied as everyday life in many different areas.

What is important, more important?

As soon as one or more than one option appears before the ‘mind’s eye’, one involuntarily asks oneself how ‘seriously’ one should take the options? Are there arguments for or against? Are the reasons ‘credible’? What ‘risks’ are associated with them? What can I expect as ‘positive added value’? What changes for my ‘personal situation’ are associated with it? What does this mean for my ‘environment’? …

What do I do if the ‘recognizable’ (‘rational’) options do not provide a clear result, it ‘contradicts’ my ‘feeling for life’, ‘me’, and I ‘spontaneously’ reject it, spontaneously ‘don’t want’ it, it triggers various ‘strong feelings’ that may ‘overwhelm’ me (anger, fear, disappointment, sadness, …)? …

The transition from non-action to action can ‘activate’ a multitude of ‘rational’ and ’emotional’ aspects, which can – and mostly do – influence each other, to the point that one experiences a ‘helplessness’ entangled in a ‘tangle’ of considerations and emotions: one has the feeling of being ‘stuck’, a ‘clear solution’ seems distant. …

If one can ‘wait’ (hours, days, weeks, …) and/or one has the opportunity to talk about it with other people, then this can often – not always – clarify the situation to the point that one thinks one knows what one ‘wants now’. But such ‘clarifications’ usually do not get the challenge of a ‘decision’ out of the way. Besides, the ‘escape’ into ‘repressing’ the situation is always an option; sometimes it helps; if it does not help, then the ‘everyday’ situation is made worse by the ‘repressing’, quite apart from the fact that ‘unsolved problems’ do not ‘disappear’, but live on ‘inside’, where they can unfold ‘special effects’ in many ways that are very ‘destructive’.

Rebirth?

The variety of possible – emotional as well as rational – aspects of the transition from non-action to action are fascinating, but possibly even more fascinating is the basic process itself: at any given moment, every human being lives in a certain everyday situation with a variety of experiences already made, a mixture of ‘rational explanations’ and ’emotional states’. And, depending on the nature of everyday life, one can ‘drift along’ or one has to perceive daily ‘strenuous activities’ in order to maintain the situation of everyday life; in other cases, one has to constantly ‘fight’ in real terms for the continuance in everyday life. One experiences here that the ‘conditions of everyday life’ are largely given to the individual and one can only change them to a limited extent and with a corresponding ‘effort’.

External events (fire, water, violence, war, drought, accident, life forms of a community, workplace, company policy, diseases, aging, …) can of course strongly influence personal everyday life ‘against one’s will’, but in the end a fundamental autonomy of decision-making remains in the individual: one can decide differently in the next moment than before, even without it being completely clear to oneself in this moment ‘why’ one does it. Whether one calls this ‘not being able to explain’ a ‘good feeling’ or ‘intuition’ or … all these words only circumscribe a ‘not knowing’ about the processes ‘in us’ that take place and ‘motivate’ us. And even if one has a ‘linguistic explanation’ available, this does not have to mean that a ‘clear insight’ is also connected with these words. The proportion of ‘preconscious/unconscious’ ‘inner processes’ is high, and ultimately these are not explainable ‘out of themselves’: they take place, move us, and we ‘implement them’.

Yes, in every moment we can behave ‘as usual’ and thereby ‘preserve’ the world (or thereby prevent it from becoming different?). But we can also sometimes ‘deviate’ from the usual (and thereby ‘destroy’ something or make something ‘new’ possible?). All in all, the ‘space of the inside’ remains largely ‘unilluminated’. We hardly, if at all, understand why we do this; this requires trust in ourselves and in the situation. Is the difference to the ‘original birth’ only gradual? Are we in a continuing process of birth that constantly enables the construction of something ‘new’, which can support ‘life’ or also ‘weaken’ it?

COMMENTS

[Exploration] The format ‘exploration’ is newly introduced in the blog from now on, since there is often the situation that the author, at the transition point from ‘non-action’ to ‘action’, from ‘non-writing’ to ‘writing’, tries out which emotional and rational ‘reactions’ from the ‘inside’ are ‘spontaneously’ available for a certain question or idea. This is to be distinguished from a ‘systematic elaboration’ of a thought, although the influence of the ‘inside’ is of course never completely excluded. The ‘difference’ between ‘exploratory’ and ‘systematic’ writing has to do with the fact that systematic writing tries to bring the ‘rational dimension’ of the thought to the fore.

[*] German source: https://www.cognitiveagent.org/2023/08/28/staendige-wiedergeburt-jetzt-schweigen-hilft-nicht-exploration/ ,

[deepL translator]: https://www.deepl.com

[1] https://www.uffmm.org/2023/08/24/homo-sapiens-empirical-and-sustained-empirical-theories-emotions-and-machines-a-sketch/

[2] https://zevedi.de/en/topics/ki-text-2/

Dealing with the topic using chatGPT4

Author: What happens inside a person when it changes from the state of non-writing to the state of writing? Which kinds of factors are involved in this change?

ChatGPT

The shift from a state of non-writing to writing is a complex interplay of cognitive, emotional, and physical processes. Several factors contribute to this transformation:

  1. Cognitive Factors:
    • Intent and Purpose: The most immediate factor that leads to writing is the intent. The individual decides there’s a need to communicate or record something. This purpose could be to share knowledge, express emotions, or any other myriad reasons.
    • Idea Formation: The brain begins to formulate ideas, organize thoughts, and consider the message it wants to convey.
    • Recall: The brain accesses stored knowledge about language, grammar, and the topic at hand.
    • Decision Making: Choices are made about wording, tone, and structure.
  2. Emotional Factors:
    • Motivation: An internal or external motivation prompts the writer. This could be intrinsic (like the joy of writing) or extrinsic (like a job requirement).
    • Confidence: A person’s self-efficacy regarding writing can influence their willingness and approach. A confident writer might start without hesitation, while someone less confident might procrastinate or struggle with where to begin.
    • Mood and Emotion: A person’s emotional state can shape the content and tone of their writing. For instance, someone in a cheerful mood might use more positive and lively language.
  3. Physiological and Neuromotor Factors:
    • Fine Motor Skills: The act of writing requires coordination between the brain and muscles, whether it’s holding a pen or typing on a keyboard.
    • Visual Processing: Reading what’s written, planning spatially on the page or screen, and adjusting based on visual feedback are integral to the writing process.
    • Brain Activation: Certain areas of the brain, like Broca’s area and Wernicke’s area, are particularly active during language production.
  4. Environmental Factors:
    • Physical Environment: The availability of tools (pen, paper, computer), a conducive atmosphere, and a comfortable workspace can influence the transition to writing.
    • Cultural and Social Environment: The societal context, intended audience, and cultural norms can shape what and how a person writes.
  5. Educational and Experiential Factors:
    • Training and Practice: A person’s prior experience with writing, including their education, can influence their ease and style of writing.
    • Feedback and Iteration: Previous feedback on one’s writing can affect future attempts. Someone who has been praised might approach writing with more enthusiasm, while critical feedback might make another more cautious or reflective.
  6. External Stimuli:
    • Inspiration: Often, external stimuli—like reading a book, a conversation, a personal experience, or even a sunset—can spark the urge to write.
    • Prompts: In some situations, a specific prompt or question might initiate the writing process.

Understanding these factors and their interplay can be beneficial for educators, writers, and anyone looking to refine or understand their writing process.

AUTHOR

Can you describe more the subjective experience of a person, which is changing from the state of non-writing to writing?

chatGPT4

Certainly! The transition from non-writing to writing, when viewed from a subjective lens, involves a cascade of internal experiences. Here’s a more in-depth dive into the intimate, often ineffable stages a writer might traverse:

  1. Initial Restlessness: Before the act of writing commences, there’s often a sense of unease or urgency. It’s akin to a pressure building up, a compulsion to get something out, or a feeling that there’s something “there” that needs to be expressed.
  2. Incubation and Percolation: Many writers describe a period where ideas are “brewing” or “stewing” inside them. They might not be actively thinking about these ideas, but somewhere in the backdrop, the mind is processing and refining them. This can feel like daydreaming or being lost in thought.
  3. Flash of Insight: This is the ‘Eureka!’ moment when clarity emerges. It’s the point when the idea crystallizes, and the writer feels an impetus to put pen to paper. It can be a sudden realization or a slow dawning.
  4. Overcoming the Inertia: This is the mental “deep breath” before diving in. There’s a brief struggle, often described as “writer’s block” or resistance, where the weight of the blank page feels daunting.
  5. Entering the Flow: As words start pouring out, many writers describe a state known as “flow” – a term coined by Mihaly Csikszentmihalyi. In this state, writers lose track of time, their surroundings, and sometimes even themselves. They feel as if they are merely a conduit for the words, with the writing almost writing itself.
  6. Bouts of Doubt and Reassurance: Especially in longer writing stints, waves of self-doubt might wash over the writer. Questions like “Is this good enough?” or “Does this make sense?” arise. These periods might be interspersed with moments of confidence and reassurance, where the writer feels they’re on the right track.
  7. Physical Sensations: There might be heightened awareness of the tactile sensation of the pen moving on paper or fingers dancing on the keyboard. Some writers describe a kind of rhythm they get into, almost like a meditative state.
  8. Emotional Catharsis: Particularly for personal or emotive subjects, the act of writing can be cathartic. It’s a release, a way to process feelings, and often brings relief or deeper understanding.
  9. Completion and Exhaustion: As the writing draws to a close, there’s a sense of accomplishment, but also, often, a feeling of emptiness or exhaustion. The mental and emotional energy expended in writing can leave one drained.
  10. Reflection and Revision: Once the initial draft is out, there’s a period of reflection. Reading one’s own words can be an enlightening experience. This phase might involve editing, but it’s also a moment of self-confrontation, seeing one’s thoughts and feelings laid out bare.

The subjective experience of writing is deeply personal and can vary greatly from person to person. Some find it therapeutic, others see it as work, and for many, it’s a passionate craft. Regardless of the perspective, the journey from non-writing to writing is an intimate dance between the conscious and subconscious, emotion and reason, art and discipline.

End of this first small experiment.

Homo Sapiens: empirical and sustained-empirical theories, emotions, and machines. A sketch

Author: Gerd Doeben-Henisch

Email: info@uffmm.org

Aug 24, 2023 — Aug 29, 2023 (10:48h CET)

Attention: This text has been translated from a German source by using the software deepL for nearly 97 – 99% of the text! The diagrams of the German version have been left out.

CONTEXT

This text represents the outline of a talk given at the conference “AI – Text and Validity. How do AI text generators change scientific discourse?” (August 25/26, 2023, TU Darmstadt). [1] A publication of all lectures is planned by the publisher Walter de Gruyter by the end of 2023/beginning of 2024. This publication will be announced here then.

Start of the Lecture

Dear Auditorium,

This conference entitled “AI – Text and Validity. How do AI text generators change scientific discourses?” is centrally devoted to scientific discourses and the possible influence of AI text generators on these. However, the hot core ultimately remains the phenomenon of text itself, its validity.

In this conference many different views are presented that are possible on this topic.

TRANSDISCIPLINARY

My contribution to the topic tries to define the role of the so-called AI text generators by embedding the properties of ‘AI text generators’ in a ‘structural conceptual framework’ within a ‘transdisciplinary view’. This helps to highlight the specifics of scientific discourses. This can then result in better ‘criteria for an extended assessment’ of AI text generators in their role for scientific discourses.

An additional aspect is the question of the structure of ‘collective intelligence’ using humans as an example, and how this can possibly unite with an ‘artificial intelligence’ in the context of scientific discourses.

‘Transdisciplinary’ in this context means to span a ‘meta-level’ from which it should be possible to describe today’s ‘diversity of text productions’ in a way that is expressive enough to distinguish ‘AI-based’ text production from ‘human’ text production.

HUMAN TEXT GENERATION

The formulation ‘scientific discourse’ is a special case of the more general concept ‘human text generation’.

This change of perspective is meta-theoretically necessary, since at first sight it is not the ‘text as such’ that decides about ‘validity and non-validity’, but the ‘actors’ who ‘produce and understand texts’. And with the occurrence of ‘different kinds of actors’ – here ‘humans’, there ‘machines’ – one cannot avoid addressing exactly those differences – if there are any – that play a weighty role in the ‘validity of texts’.

TEXT CAPABLE MACHINES

With the distinction into two different kinds of actors – here ‘humans’, there ‘machines’ – a first ‘fundamental asymmetry’ immediately strikes the eye: so-called ‘AI text generators’ are entities that have been ‘invented’ and ‘built’ by humans; it is furthermore humans who ‘use’ them, and the essential material used by so-called AI generators consists again of ‘texts’ that are considered a ‘human cultural property’.

In the case of so-called ‘AI-text-generators’, we shall first state only this much, that we are dealing with ‘machines’, which have ‘input’ and ‘output’, plus a minimal ‘learning ability’, and whose input and output can process ‘text-like objects’.

BIOLOGICAL — NON-BIOLOGICAL

On the meta-level, then, we are assumed to have, on the one hand, such actors which are minimally ‘text-capable machines’ – completely human products – and, on the other hand, actors we call ‘humans’. Humans, as a ‘homo-sapiens population’, belong to the set of ‘biological systems’, while ‘text-capable machines’ belong to the set of ‘non-biological systems’.

BLANK INTELLIGENCE TERM

The transformation of the term ‘AI text generator’ into the term ‘text capable machine’ undertaken here is intended to additionally illustrate that the widespread use of the term ‘AI’ for ‘artificial intelligence’ is rather misleading. To this day, there exists no general concept of ‘intelligence’ in any scientific discipline that can be applied and accepted beyond individual disciplines. There is no real justification for the almost inflationary use of the term AI today other than that the term has been so drained of meaning that it can be used anytime, anywhere, without saying anything wrong. Something that has no meaning can be neither ‘true’ nor ‘false’.

PREREQUISITES FOR TEXT GENERATION

If now the homo-sapiens population is identified as the original actor for ‘text generation’ and ‘text comprehension’, it shall now first be examined which are ‘those special characteristics’ that enable a homo-sapiens population to generate and comprehend texts and to ‘use them successfully in the everyday life process’.

VALIDITY

A connecting point for the investigation of the special characteristics of a homo-sapiens text generation and a text understanding is the term ‘validity’, which occurs in the conference topic.

In the primary arena of biological life, in everyday processes, in everyday life, the ‘validity’ of a text has to do with ‘being correct’, being ‘applicable’. If a text is not planned from the beginning with a ‘fictional character’, but with a ‘reference to everyday events’, which everyone can ‘check’ in the context of his ‘perception of the world’, then ‘validity in everyday life’ has to do with the fact that the ‘correctness of a text’ can be checked. If the ‘statement of a text’ is ‘applicable’ in everyday life, if it is ‘correct’, then one also says that this statement is ‘valid’, one grants it ‘validity’, one also calls it ‘true’. Against this background, one might be inclined to continue and say: ‘if’ the statement of a text ‘does not apply’, then it has ‘no validity’; simplified to the formulation that the statement is ‘not true’ or simply ‘false’.

In ‘real everyday life’, however, the world is rarely ‘black’ and ‘white’: it is not uncommon that we are confronted with texts to which we are inclined to ascribe ‘a possible validity’ because of their ‘learned meaning’, although it may not be at all clear whether there is – or will be – a situation in everyday life in which the statement of the text actually applies. In such a case, the validity would then be ‘indeterminate’; the statement would be ‘neither true nor false’.

ASYMMETRY: APPLICABLE- NOT APPLICABLE

One can recognize a certain asymmetry here: The ‘applicability’ of a statement, its actual validity, is comparatively clear. The ‘not being applicable’, i.e. a ‘merely possible’ validity, on the other hand, is difficult to decide.

With this phenomenon of the ‘current non-decidability’ of a statement we touch both the problem of the ‘meaning’ of a statement — how far is at all clear what is meant? — as well as the problem of the ‘unfinishedness of our everyday life’, better known as ‘future’: whether a ‘current present’ continues as such, whether exactly like this, or whether completely different, depends on how we understand and estimate ‘future’ in general; what some take for granted as a possible future, can be simply ‘nonsense’ for others.

MEANING

This tension between ‘currently decidable’ and ‘currently not yet decidable’ additionally clarifies an ‘autonomous’ aspect of the phenomenon of meaning: if a certain knowledge has been formed in the brain and has been made usable as ‘meaning’ for a ‘language system’, then this ‘associated’ meaning gains its own ‘reality’ for the scope of knowledge: it is not the ‘reality beyond the brain’, but the ‘reality of one’s own thinking’, whereby this reality of thinking ‘seen from outside’ has something like ‘being virtual’.

If one wants to talk about this ‘special reality of meaning’ in the context of the ‘whole system’, then one has to resort to far-reaching assumptions in order to be able to install a ‘conceptual framework’ on the meta-level which is able to sufficiently describe the structure and function of meaning. For this, the following components are minimally assumed (‘knowledge’, ‘language’ as well as ‘meaning relation’):

KNOWLEDGE: There is the totality of ‘knowledge’ that ‘builds up’ in the homo-sapiens actor in the course of time in the brain: both due to continuous interactions of the ‘brain’ with the ‘environment of the body’, as well as due to interactions ‘with the body itself’, as well as due to interactions ‘of the brain with itself’.

LANGUAGE: To be distinguished from knowledge is the dynamic system of ‘potential means of expression’, here simplistically called ‘language’, which can unfold over time in interaction with ‘knowledge’.

MEANING RELATIONSHIP: Finally, there is the dynamic ‘meaning relation’, an interaction mechanism that can link any knowledge elements to any language means of expression at any time.

Each of these mentioned components ‘knowledge’, ‘language’ as well as ‘meaning relation’ is extremely complex; no less complex is their interaction.
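The three components just named can be made a bit more tangible with a minimal sketch. The following code is purely illustrative: the knowledge elements, expressions, and links are invented examples, not the author's model; it only shows the idea of a dynamic relation that links knowledge elements to means of expression and can be changed at any time.

```python
# A minimal, hypothetical sketch of the three components described above:
# 'knowledge', 'language', and a dynamic 'meaning relation' linking them.
# All names and contents are illustrative assumptions.

knowledge = {"K1": "perceived rain", "K2": "felt cold"}   # inner knowledge elements
language = {"E1": "it rains", "E2": "it is cold"}         # potential expressions

# The meaning relation links knowledge elements to expressions;
# it is dynamic, i.e. it can be extended or changed over time.
meaning = {("K1", "E1"), ("K2", "E2")}

def express(k_id):
    """Return all expressions currently linked to a knowledge element."""
    return sorted(e for (k, e) in meaning if k == k_id)

# The relation can re-link elements at any time:
meaning.add(("K1", "E2"))
print(express("K1"))   # ['E1', 'E2']
```

Even this toy version hints at the complexity: the relation is many-to-many and changes with every new experience.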

FUTURE AND EMOTIONS

In addition to the phenomenon of meaning, it also became apparent in the phenomenon of being applicable that the decision of being applicable also depends on an ‘available everyday situation’ in which a current correspondence can be ‘concretely shown’ or not.

If, in addition to a ‘conceivable meaning’ in the mind, we do not currently have any everyday situation that sufficiently corresponds to this meaning in the mind, then there are always two possibilities: We can give the ‘status of a possible future’ to this imagined construct despite the lack of reality reference, or not.

If we would decide to assign the status of a possible future to a ‘meaning in the head’, then there arise usually two requirements: (i) Can it be made sufficiently plausible in the light of the available knowledge that the ‘imagined possible situation’ can be ‘transformed into a new real situation’ in the ‘foreseeable future’ starting from the current real situation? And (ii) Are there ‘sustainable reasons’ why one should ‘want and affirm’ this possible future?

The first requirement calls for a powerful ‘science’ that sheds light on whether it can work at all. The second demand goes beyond this and brings the seemingly ‘irrational’ aspect of ’emotionality’ into play under the garb of ‘sustainability’: it is not simply about ‘knowledge as such’, it is also not only about a ‘so-called sustainable knowledge’ that is supposed to contribute to supporting the survival of life on planet Earth — and beyond –, it is rather also about ‘finding something good, affirming something, and then also wanting to decide it’. These last aspects are so far rather located beyond ‘rationality’; they are assigned to the diffuse area of ’emotions’; which is strange, since any form of ‘usual rationality’ is exactly based on these ’emotions’.[2]

SCIENTIFIC DISCOURSE AND EVERYDAY SITUATIONS

In the context of ‘rationality’ and ’emotionality’ just indicated, it is not uninteresting that in the conference topic ‘scientific discourse’ is thematized as a point of reference to clarify the status of text-capable machines.

The question is to what extent a ‘scientific discourse’ can serve as a reference point for a successful text at all?

For this purpose it can help to be aware of the fact that life on this planet earth takes place at every moment in an inconceivably large amount of ‘everyday situations’, which all take place simultaneously. Each ‘everyday situation’ represents a ‘present’ for the actors. And in the heads of the actors there is an individually different knowledge about how a present ‘can change’ or will change in a possible future.

This ‘knowledge in the heads’ of the actors involved can generally be ‘transformed into texts’ which in different ways ‘linguistically represent’ some of the aspects of everyday life.

The crucial point is that it is not enough for everyone to produce a text ‘for himself’ alone, quite ‘individually’, but that everyone must produce a ‘common text’ together ‘with everyone else’ who is also affected by the everyday situation. A ‘collective’ performance is required.

Nor is it a question of ‘any’ text, but one that is such that it allows for the ‘generation of possible continuations in the future’, that is, what is traditionally expected of a ‘scientific text’.

From the extensive discussion — since the times of Aristotle — of what ‘scientific’ should mean, what a ‘theory’ is, what an ’empirical theory’ should be, I sketch what I call here the ‘minimal concept of an empirical theory’.

  1. The starting point is a ‘group of people’ (the ‘authors’) who want to create a ‘common text’.
  2. This text is supposed to have the property that it allows ‘justifiable predictions’ for possible ‘future situations’, to which then ‘sometime’ in the future a ‘validity can be assigned’.
  3. The authors are able to agree on a ‘starting situation’ which they transform by means of a ‘common language’ into a ‘source text’ [A].
  4. It is agreed that this initial text may contain only ‘such linguistic expressions’ which can be shown to be ‘true’ ‘in the initial situation’.
  5. In another text, the authors compile a set of ‘rules of change’ [V] that put into words ‘forms of change’ for a given situation.
  6. Also in this case it is considered as agreed that only ‘such rules of change’ may be written down, of which all authors know that they have proved to be ‘true’ in ‘preceding everyday situations’.
  7. The text with the rules of change V is on a ‘meta-level’ compared to the text A about the initial situation, which is on an ‘object-level’ relative to the text V.
  8. The ‘interaction’ between the text V with the change rules and the text A with the initial situation is described in a separate ‘application text’ [F]: Here it is described when and how one may apply a change rule (in V) to a source text A and how this changes the ‘source text A’ to a ‘subsequent text A*’.
  9. The application text F is thus on a next higher meta-level to the two texts A and V and can cause the application text to change the source text A.
  10. The moment a new subsequent text A* exists, the subsequent text A* becomes the new initial text A.
  11. If the new initial text A is such that a change rule from V can be applied again, then the generation of a new subsequent text A* is repeated.
  12. This ‘repeatability’ of the application can lead to the generation of many subsequent texts <A*1, …, A*n>.
  13. A series of many subsequent texts <A*1, …, A*n> is usually called a ‘simulation’.
  14. Depending on the nature of the source text A and the nature of the change rules in V, it may be that possible simulations ‘can go quite differently’. The set of possible scientific simulations thus represents ‘future’ not as a single, definite course, but as an ‘arbitrarily large set of possible courses’.
  15. The factors on which different courses depend are manifold. One factor are the authors themselves. Every author is, after all, with his corporeality completely himself part of that very empirical world which is to be described in a scientific theory. And, as is well known, any human actor can change his mind at any moment. He can literally in the next moment do exactly the opposite of what he thought before. And thus the world is already no longer the same as previously assumed in the scientific description.
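The machinery described in the list above can be sketched as a tiny rewrite system. This is only a minimal illustration under invented assumptions (the concrete situation and rules are made up): a source text A is represented as a set of properties, the change rules V as condition/effect pairs, and the application text F as a procedure that applies an applicable rule to produce a subsequent text A*.

```python
# Minimal sketch of the 'empirical theory' machinery: initial text A,
# change rules V, application procedure F, and repeated application
# yielding the sequence <A*1, ..., A*n> (a 'simulation').
# The situation and rules are invented for illustration only.

A = {"door": "closed", "light": "off"}   # initial text A (a described situation)

# Change rules V: each rule has a condition ('if') and an effect ('then').
V = [
    {"if": {"door": "closed"}, "then": {"door": "open"}},
    {"if": {"door": "open", "light": "off"}, "then": {"light": "on"}},
]

def F(state, rules):
    """Application text F: apply the first applicable rule, yielding A*."""
    for rule in rules:
        if all(state.get(k) == v for k, v in rule["if"].items()):
            new_state = dict(state)
            new_state.update(rule["then"])
            return new_state
    return None   # no rule applicable: the simulation ends

def simulate(state, rules, max_steps=10):
    """Generate the sequence <A*1, ..., A*n> by repeated application."""
    trace = [state]
    for _ in range(max_steps):
        nxt = F(trace[-1], rules)
        if nxt is None or nxt == trace[-1]:
            break
        trace.append(nxt)
    return trace

for step in simulate(A, V):
    print(step)
```

With different rule orderings or several applicable rules per step, the same A and V can generate quite different courses, which mirrors point 14 above: the ‘future’ appears as a set of possible courses, not a single one.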

Even this simple example shows that the emotionality of ‘finding good, wanting, and deciding’ lies ahead of the rationality of scientific theories. This continues in the so-called ‘sustainability discussion’.

SUSTAINABLE EMPIRICAL THEORY

With the ‘minimal concept of an empirical theory (ET)’ just introduced, a ‘minimal concept of a sustainable empirical theory (NET)’ can also be introduced directly.

While an empirical theory can span an arbitrarily large space of grounded simulations that make visible the space of many possible futures, everyday actors are left with the question of what, out of all this, they want to have as ‘their future’. In the present we experience the situation that mankind gives the impression of agreeing to destroy, ever more lastingly, the life beyond the human population, with the expected effect of ‘self-destruction’.

However, this self-destruction effect, which can be predicted in outline, is only one variant in the space of possible futures. Empirical science can indicate it in outline. To distinguish this variant before others, to accept it as ‘good’, to ‘want’ it, to ‘decide’ for this variant, lies in that so far hardly explored area of emotionality as root of all rationality.[2]

If everyday actors have decided in favor of a certain rationally lightened variant of possible future, then they can evaluate at any time with a suitable ‘evaluation procedure (EVAL)’ how much ‘percent (%) of the properties of the target state Z’ have been achieved so far, provided that the favored target state is transformed into a suitable text Z.
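Such an evaluation procedure EVAL can be sketched very simply, assuming that both the current situation A and the target state Z have been transformed into texts represented as sets of properties. The concrete property names below are invented for illustration; the text does not specify them.

```python
# Minimal, hypothetical sketch of an evaluation procedure EVAL that
# measures how many percent of the properties of a target state Z
# are already realized in a current state A.

def EVAL(current, target):
    """Percentage of target properties already satisfied by the current state."""
    if not target:
        return 100.0
    hits = sum(1 for k, v in target.items() if current.get(k) == v)
    return 100.0 * hits / len(target)

Z = {"co2_neutral": True, "recycling": True, "car_free_center": True}   # target text Z
A = {"co2_neutral": False, "recycling": True, "car_free_center": True}  # current situation

print(f"{EVAL(A, Z):.1f}% of the target state reached")
```

The simplicity is the point made in the text: once a favored future has been put into a suitable text Z, measuring progress becomes a mechanical matter; the hard, emotional part is choosing Z in the first place.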

In other words, the moment we have transformed everyday scenarios into a rationally tangible state via suitable texts, things take on a certain clarity and thereby become — in a sense — simple. That we make such transformations and on which aspects of a real or possible state we then focus is, however, antecedent to text-based rationality as an emotional dimension.[2]

MAN-MACHINE

After these preliminary considerations, the final question is whether and how the main question of this conference, “How do AI text generators change scientific discourse?” can be answered in any way?

My previous remarks have attempted to show what it means for humans to collectively generate texts that meet the criteria for scientific discourse that also meets the requirements for empirical or even sustained empirical theories.

In doing so, it becomes apparent that, both in the generation of a collective scientific text and in its application in everyday life, a close interrelation with the shared experiential world as well as with the dynamic knowledge and meaning components in each actor plays a role.

The aspect of ‘validity’ is part of a dynamic world reference whose assessment as ‘true’ is constantly in flux; while one actor may tend to say “Yes, can be true”, another actor may just tend to the opposite. While some may tend to favor possible future option X, others may prefer future option Y. Rational arguments are absent; emotions speak. While one group has just decided to ‘believe’ and ‘implement’ plan Z, the others turn away, reject plan Z, and do something completely different.

This unsteady, uncertain character of interpreting and acting on the future has accompanied the Homo sapiens population from the very beginning. The still not understood emotional complex accompanies everyday life constantly, like a shadow.

Where and how can ‘text-enabled machines’ make a constructive contribution in this situation?

Assuming that there is a source text A, a change text V and an instruction F, today’s algorithms could calculate all possible simulations faster than humans could.

Assuming that there is also a target text Z, today's algorithms could also compute an evaluation of the relationship between a current situation, described by a text A, and the target text Z.

In other words: if an empirical or a sustainable empirical theory were formulated with its necessary texts, then a present-day algorithm could automatically compute all possible simulations and the degree of target fulfillment faster than any human alone.
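The simulation-and-evaluation loop described in the last three paragraphs can be sketched as follows. All names here are illustrative assumptions, not part of the original text: a situation A and a target Z are modeled as property sets, a change text V as a list of rules, and the instruction F as "apply every rule whose conditions hold"; after each step the degree of target fulfillment is computed.

```python
def apply_changes(situation: set[str],
                  rules: list[tuple[set[str], set[str], set[str]]]) -> set[str]:
    """Apply every applicable change rule (V) to the situation (A).

    A rule is modeled, for illustration, as (conditions, additions, removals):
    if all conditions hold in the current situation, the additions are added
    and the removals are removed. Rules are checked against the unchanged
    input situation, i.e. they are applied in parallel (the instruction F).
    """
    new_situation = set(situation)
    for conditions, additions, removals in rules:
        if conditions <= situation:
            new_situation |= additions
            new_situation -= removals
    return new_situation

def simulate(start: set[str], rules, target: set[str], max_steps: int = 10):
    """Run the simulation loop, recording target fulfillment at each step."""
    situation = set(start)
    history = []
    for _ in range(max_steps):
        fulfilled = 100.0 * len(situation & target) / len(target)
        history.append((frozenset(situation), fulfilled))
        next_situation = apply_changes(situation, rules)
        if next_situation == situation:  # fixed point: no rule changes anything
            break
        situation = next_situation
    return history

# Hypothetical toy example: two change rules moving a situation
# toward the target Z = {"plant", "harvest"}.
rules = [
    ({"seed"}, {"plant"}, {"seed"}),   # planting consumes the seed
    ({"plant"}, {"harvest"}, set()),   # a plant eventually yields a harvest
]
history = simulate({"seed"}, rules, {"plant", "harvest"})
print([pct for _, pct in history])  # → [0.0, 50.0, 100.0]
```

Of course, real theory texts are not property sets, and real change rules are not triples of labels; the sketch only shows why the computational part (enumerating consequences and scoring them against Z) is mechanizable, while the choice of A, V, F and Z is not.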

But what about (i) the elaboration of a theory, or (ii) the pre-rational decision in favor of a certain empirical or even sustainable empirical theory?

A clear answer to both questions seems hardly possible to me at the present time, since we humans still understand too little about how we ourselves collectively form, select, check, compare and also reject theories in everyday life.

My working hypothesis on the subject is that we will indeed need machines capable of learning in order to fulfill the task of developing useful sustainable empirical theories for our common everyday life in the future. But when this will happen in reality, and to what extent, seems largely unclear to me at this point in time.[2]

COMMENTS

[1] https://zevedi.de/en/topics/ki-text-2/

[2] Talking about 'emotions' in the sense of 'factors in us' that move us from the state 'before the text' to the state 'written text' touches on a great many aspects. In a small exploratory text, "State Change from Non-Writing to Writing. Working with chatGPT4 in parallel" ( https://www.uffmm.org/2023/08/28/state-change-from-non-writing-to-writing-working-with-chatgpt4-in-parallel/ ), the author has tried to address some of these aspects. While writing, it becomes clear that very many 'individually subjective' aspects play a role here, which of course do not appear 'isolated', but always evoke a reference to the concrete contexts linked to the topic. Nevertheless, it is not the 'objective context' that forms the core statement, but the 'individually subjective' component that appears in the process of 'putting into words'. This individually subjective component is tentatively used here as a criterion for 'authentic texts' in comparison to 'automated texts' like those that can be generated by all kinds of bots. In order to make this difference more tangible, the author decided to create an 'automated text' on the same topic at the same time as the quoted authentic text. For this purpose he used chatGPT4 from OpenAI. This is the beginning of a philosophical-literary experiment, perhaps a way to make the possible difference more visible. For purely theoretical reasons, it is clear that a text generated by chatGPT4 can never be an 'authentic text' in origin, unless it uses an authentic text as a template that it can modify. But then this would be a clear 'fake document'. To prevent such an abuse, the author writes the authentic text first and only then asks chatGPT4 to write something about the given topic, without chatGPT4 knowing the authentic text, because it has not yet found its way into the database of chatGPT4 via the Internet.

Eva Jablonka, Marion J. Lamb, “Traditions and Cumulative Evolution: How a New Lifestyle is Evolving”, 2017 (Review 2022, 2023)

(Last change: July 13, 2023)

(The following text was created from a German text with the support of the software deepL.)

CONTEXT

This is a short review of a 2017 article by Eva Jablonka and Marion J. Lamb, "Traditions and Cumulative Evolution: How a New Lifestyle is Evolving", relating to their book "Evolution in vier Dimensionen. Wie Genetik, Epigenetik, Verhalten und Symbole die Geschichte des Lebens prägen", Stuttgart, S. Hirzel Verlag, published in 2017. There was an earlier English edition (2005) with the title "Evolution in Four Dimensions: Genetic, Epigenetic, Behavioral, and Symbolic Variation in the History of Life", MIT Press. MIT Press comments on the 2005 English edition as follows: "Ideas about heredity and evolution are undergoing a revolutionary change. New findings in molecular biology challenge the gene-centered version of Darwinian theory according to which adaptation occurs only through natural selection of chance DNA variations. In Evolution in Four Dimensions, Eva Jablonka and Marion Lamb argue that there is more to heredity than genes. They trace four "dimensions" in evolution—four inheritance systems that play a role in evolution: genetic, epigenetic (or non-DNA cellular transmission of traits), behavioral, and symbolic (transmission through language and other forms of symbolic communication). These systems, they argue, can all provide variations on which natural selection can act. Evolution in Four Dimensions offers a richer, more complex view of evolution than the gene-based, one-dimensional view held by many today. The new synthesis advanced by Jablonka and Lamb makes clear that induced and acquired changes also play a role in evolution."

The article (in German) was published on pp. 141-146 in: Regina Oehler, Petra Gehring, Volker Mosbrugger (eds.), 2017, Series: Senckenberg Book 78, "Biologie und Ethik: Life as a Project. Ein Funkkolleg Lesebuch", Stuttgart, E. Schweizerbart'sche Verlagsbuchhandlung (Nägele u. Obermiller) and Senckenberg Nature Research Society.

Main Positions extracted from the Text

To prepare an understanding of the larger text of the book, the author has tried to extract the most important assumptions/hypotheses from the short article:

  1. There is an existing 'nature' as a variable quantity with 'nature-specific' properties, and
  2. within this nature there are biological populations as 'components of nature', which appear as 'environment' both for their own members and for other populations.
  3. Populations are themselves changeable.
  4. The members of a biological population are able to respond with specific behavior to properties of the surrounding nature, with the other members of the population as a component of the environment (self-reference of a population).
  5. A behavior can be changed both in its form and in relation to a specific occasion.
  6. Due to the self-referentiality of a population, a population can therefore interactively change its own behavior and
  7. interact variably with the environment through the changed behavior (and thereby change the environment itself to a certain extent).
  8. It turns out that members of a population can recall certain behaviors over longer periods of time, depending on environmental characteristics.
  9. Due to differences in lifespan as well as in memory, new behaviors can be passed on between generations, allowing transmission beyond a single generation.
  10. Furthermore, it is observed that the effect of genetic information can differ meta-genetically (epigenetically) in the context of reproduction, with these meta-genetic (epigenetic) changes occurring during the lifetime. The combination of genetic and epigenetic factors can affect the offspring. The effect of such epigenetically influenced changes on actual behavior (phenotype) is not linear.

WORKING DEFINITIONS

For the further discussion, it is helpful to clarify at this point which basic (guiding) terms will shape the further discourse. This will be done in the form of 'tentative' definitions. If these prove 'inappropriate' in the further course, they can be modified accordingly.

Three terms seem to play such a guiding role at this point: 'population', 'culture' and, anticipating the discussion, 'society'.

Def1: Population

'Population' here minimally means a grouping of biological individuals that forms a biological reproductive community (cf. [1]).

Def2: Culture

In common usage, the term 'culture' is restricted to the population of 'people'. [2] Here the proposal is made to let 'culture' begin where biological populations are capable of minimal tradition formation based on their behavioral space. This expands the scope of the concept of 'culture' beyond the human population to many other biological populations, though not to all.

Def3: Society

The term 'society' gains quite different meanings depending on the point of view (of a discipline). Here the term shall be defined minimalistically: biologically, a 'society' is minimally present if there is a biological population in which 'culture' occurs in a minimal way.

It will be further considered how these processes are to be understood in detail and what this may mean from a philosophical point of view.

COMMENTS

wkp-en: en.wikipedia.org

[1] Population in wkp-en: https://en.wikipedia.org/wiki/Population

[2] Culture in wkp-en: https://en.wikipedia.org/wiki/Culture

[3] Society in wkp-en: https://en.wikipedia.org/wiki/Society

INTELLIGENCE – BIOLOGICAL – NON-BIOLOGICAL – INDIVIDUAL – COLLECTIVE …

(July 12, 2023 – August 24, 2023)

(The following text was created with the support of the software deepL from a German text)

–!! To be continued !!–

–!! See new comment at the end of the text (Aug 24, 2023)!!–

CONTEXT

We live in a time in which, depending on the perspective one takes, very many different partial worldviews can be perceived, worldviews which are not easily 'compatible' with each other. Thus, so far, the 'physical' and the 'biological' worldviews do not necessarily seem to be 'aligned' with each other. In addition, there are different directions within each of these worldviews. Where do the social sciences stand here: not physics, not biology, but then what? The economic sciences, too, seem to be 'surfing across' everything else; this list could easily be extended. Amid this assembly of worldviews, a new worldview has recently emerged, that of so-called 'artificial intelligence'; it too is almost completely unmediated with everything else, yet makes heavy use of terminology borrowed from psychology and biology, without adopting the usual conceptual contexts of these terms.

This diversity can be seen as positive if it stimulates thinking and thus perhaps enables new, exciting 'syntheses'. However, such syntheses are nowhere to be seen. The terms 'interdisciplinary', 'multidisciplinary' or even 'transdisciplinary' appear ever more often in texts, as a reminder, as a call to integrate diversity in a fruitful way, but not much has come of it yet. The average university teacher at an average university still tends to be 'punished' for venturing out of his disciplinary niche. The 'curricular norms' that determine what time may be spent on what content with how many students do not normally provide for multidisciplinary, let alone transdisciplinary, teaching. And when a researcher trained in a single discipline throws himself into a multidisciplinary research project (if he does so at all), then in the end usually only single-discipline science comes out again.

Against this panorama of many worldviews, the following text will attempt to interpret how the concept of 'intelligence' could be grasped and classified today, taking into account the whole range of worldviews. A special accent will be put on the phenomenon 'Homo sapiens': although Homo sapiens is only a very small sub-population within the whole of the biological realm, in the course of evolution it has nevertheless, up to now, occupied a special position in multiple senses, which shall be considered in this attempt at interpretation.

‘INTELLIGENCE’ – An interpretive hypothesis

This text intends a conceptual clarification of the concept 'intelligence' in the larger context of 'biological systems'. 'Machine systems' with specific 'behavioral properties' that show 'similarities' to properties 'usually' called 'intelligent' in biological systems are then called 'machine forms of intelligence' in this text. However, 'similarities' in 'behavior' do not permit inferences about 'similarities in the enabling structures': a 'behavior X' can be produced by a variety of 'enabling structures' which may differ among themselves. Statements about the 'intelligence of a system' therefore refer specifically to 'behavioral properties' of that system that can be observed within a particular 'action environment' during a particular 'time interval'. Explicit talk about 'intelligence' further presupposes a 'virtual concept intelligence', represented in the form of a text, which is able to classify the many individual 'empirical observations' into 'virtual contexts/relationships' in such a way that (i) when certain behaviors 'occur', the 'virtual concept intelligence' allows one to assign (classify) the occurring phenomena to the area of 'intelligent behavior', and (ii) starting from a 'given situation', the 'virtual concept intelligence' allows one to make conditional 'predictions' about 'possible behaviors' of the target system that can be 'expected' and 'empirically verified'.

New Comment

While I was preparing a public lecture for a conference at the Technical University of Darmstadt (Germany) ( https://zevedi.de/en/topics/ki-text-2/ ), I decided to abandon the concepts of 'intelligence' as well as 'artificial intelligence' for the near future. The meaning of these concepts has meanwhile become completely 'blurred'/'fuzzy'; whether one uses them or not no longer changes anything.