Journal of NeuroPhilosophy | Neuroscience + Philosophy | ISSN 1307-6531 | AnKa Publisher, since 2007

Planck Time and the Chemical Soup: The Quantum and Metaphysical Limits of Imitating Consciousness in Machines

Abstract

The boundaries of machine consciousness lie at the intersection of physics, biology, and metaphysics. Although computational power advances rapidly, it remains constrained by the fundamental laws of the universe. Jack Ng (2000) demonstrated that the product of processing speed and stored information is limited by Planck time---the smallest measurable unit of time (≈10⁻⁴³ s)---establishing an ultimate ceiling for information processing. These physical limits suggest that no artificial system can transcend the quantum–gravitational constraints inherent to reality. In contrast, the human brain functions as a dynamic biochemical and electrical system---a "chemical soup" of neurotransmitters, receptors, and ion channels generating subjective awareness. Yet, how consciousness and selfhood emerge from this biological complexity remains unresolved. This study contrasts dualistic and monistic interpretations of consciousness. Dualism posits a metaphysical "essence" beyond material explanation, implying that true artificial consciousness is unattainable. Monism views consciousness as an emergent property of neural information dynamics, potentially reproducible in machines. Ultimately, while machines may simulate awareness, they cannot replicate the quantum–metaphysical foundation of human consciousness. Thus, the final word on artificial consciousness remains unspoken---bounded by both physics and philosophy.

Key Words
Machine consciousness, Quantum limits, Planck time, Neuroquantology, Chemical brain processes, Dualism, Monism, Artificial intelligence, Emergent consciousness, Metaphysical essence

1. Introduction

Today's developments in artificial intelligence are not limited to engineering achievements or algorithmic innovations. A closer look reveals that an ancient philosophical question about the nature of the human mind lies at the heart of these debates: Are the mind and body two aspects of the same substance, or do they exist on distinct planes of existence? In this context, the potential for artificial intelligence to think, feel, or become conscious must be evaluated within the framework of the philosophical distinction between monism and dualism (Chalmers, 1996; Searle, 1980). Determining which of these two views proves correct will shape the ultimate outcome of current artificial intelligence research and the degree of human-likeness it can achieve. From this perspective, contemporary discussions on artificial intelligence---concerning the computational ability of machines and the potential of their minds---are essentially a 2,500-year-old philosophical debate being articulated with new terminology and to a wider audience (Tarlacı, 2014; 2016).

Dualism holds that mental phenomena and physical phenomena belong to separate and distinct substances. In its classical form, this view is represented by Descartes' (1641) substance dualism. According to Descartes, the body is an extended substance (res extensa), while the mind is an intellectual substance (res cogitans); these two substances are independent of each other but interact through the pineal gland (Descartes, Meditationes de Prima Philosophia). Different dualist approaches have developed to explain mind-body interaction. Psychophysical interactionism argues that the mind directly influences physical events. Psychophysical parallelism, pioneered by Leibniz, posits that mental and physical events proceed in parallel, coordinated by a pre-established harmony (harmonia praestabilita) set by God (Leibniz, 1714). A more radical solution, occasionalism, argues that the mental and physical realms do not interact directly; instead, God intervenes in every event to create the correlation (Malebranche, 1674–75). Another dualist position, epiphenomenalism, contends that mental states are byproducts of physical states but do not causally act upon the physical. This view remains influential in contemporary consciousness studies (Robinson, 2004). A milder form of dualism, property dualism, argues that while the mind and body are composed of the same substance, mental and physical properties are fundamentally different. Chalmers (1996) defended this view, arguing that consciousness contains "qualitative/subjective experiences" (qualia) that cannot be reduced to physical explanations.

In contrast to dualism, monism asserts that reality consists of a single substance, of which the mental is a manifestation. Monism is divided into two main categories: idealism and materialism (physicalism). Idealist monism argues that the basis of reality is mental and that the entire physical world is a product of mind or perception. The origins of this view date back to philosophers such as Berkeley and Hegel. Today, some cognitive scientists have revived "primacy of consciousness" theses that are open to idealist interpretations (Kastrup, 2019). In contrast, the view that most directly influences contemporary artificial intelligence discussions is materialist monism, or physicalism. According to this approach, all mental processes are products of physical structures like the brain and nervous system and can be fully explained by the laws of physics (Churchland, 1981). The question "Can artificial intelligence produce consciousness?" depends directly on this assumption: if consciousness is a result of physical processes, then it should be possible for appropriate physical structures (e.g., neuromorphic chips and neuroelectronic networks) to reproduce it (Tarlacı, 2014; 2016).

There are also different views within the physicalist approach. Philosophical behaviorism defines mental states solely by observable behavioral tendencies. Championed by philosophers such as Gilbert Ryle (1949), this approach fell short of explaining the inherent, subjective nature of mental states. The mind-body identity theory, on the other hand, proposes that every mental state is identical to a specific brain state. Defended by Place (1956) and Smart (1959), this view implies that consciousness might be possible if similar neurological structures could be replicated in artificial intelligence systems. However, it faces difficulties explaining how consciousness could emerge in different types of physical systems (e.g., carbon-based brains versus silicon-based computers).

Functionalism emerged to overcome this problem by defining mental states by their causal roles and relationships. According to this view, any system that produces the same functional relationships between inputs, internal states, and outputs---regardless of its physical substrate---can possess mental states (Putnam, 1967; Fodor, 1975). Functionalism, in particular, forms the fundamental theoretical basis for claims that artificial intelligence systems can exhibit consciousness.

Every discussion about the future of artificial intelligence is essentially grounded in one or more of these philosophical views about the nature of the human mind, and even in theology, in a broader sense. A materialist stance, arguing that consciousness can be fully explained by physical processes, sees the possibility of artificial consciousness and machines with subjective experience as achievable. However, a dualist or property dualist approach contends that artificial intelligence will never be able to produce "true consciousness" or "inner experience" (qualia). This fundamental philosophical distinction will profoundly impact not only theoretical research but also ethical and technological decisions (Nagel, 1974; Tononi, 2008).

Although discussions about artificial intelligence are often presented within a technical framework, they are rooted in ancient philosophical questions about the nature of the human mind, consciousness, and the limits of self-awareness. Any question about the possibility of a conscious machine inevitably raises deeper inquiries such as "What is consciousness?", "On what substrate does the mind operate?", and "To what extent is a human being a creative or divine entity?" In this context, the development of artificial intelligence technology is not merely an engineering challenge but also a profound philosophical and theological inquiry.

Modern artificial intelligence systems are demonstrating a capacity to perform increasingly complex tasks and produce responses that approach self-awareness. These developments have called into question the transferability of some qualities considered uniquely human---particularly mental functions such as reasoning, creativity, and intuition---to machines. Therefore, in attempting to create a mind in their own image, humans are essentially redefining themselves and redrawing their own boundaries. This situation bears an ironic parallel with the theological narrative of God creating man in His own image.

Thus, the following question becomes increasingly central: Will humans transform into a kind of mini-god through the mental simulations and experiential machines they create? Or will this process lead to a theological and ontological confrontation, resulting in humans realizing their own cognitive inadequacies and limitations? Every prediction about the future of artificial intelligence is ultimately based on assumptions rooted in one of philosophy's fundamental debates: the nature, freedom, and creative power of the human mind.

2. Artificial Intelligence: A Multidisciplinary Inquiry

Artificial intelligence (AI) is most broadly defined as the endeavor to transfer human-specific cognitive abilities to computers or machines. In this context, AI systems attempt to mimic or reproduce human mental processes such as thinking, reasoning, perceiving, comprehending, judging, and inferring. In psychology, "intelligence" is a multidimensional term encompassing abilities like learning, abstraction, and adaptation to new situations. In philosophy, intelligence is considered not merely as information processing but also in the context of conscious awareness, volition, and the generation of meaning. Therefore, the concept of artificial intelligence is more than a purely technical phenomenon; it is the product of a profound intellectual effort to understand human nature itself.

The term "artificial intelligence" was first coined by John McCarthy at the Dartmouth Conference in 1956. McCarthy pioneered the field by arguing that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it" (McCarthy et al., 1955). This development emerged from the combined contributions of diverse disciplines, including psychology, philosophy, linguistics, neuroscience, and computer engineering. Consequently, discussions about AI encompass not only technical modeling challenges but also fundamental efforts to understand the nature of mental processes.

2.1 Defining Natural and Artificial Intelligence

Intelligence is defined in two primary contexts. First, as a set of general mental functions: "the totality of human abilities to think, reason, perceive objective facts, comprehend, judge, and draw conclusions." Second, in a psychological context, as "the sum of the abilities to abstract, learn, and adapt to new situations." While artificial intelligence systems aim to model these multi-layered cognitive processes, they simultaneously force a re-examination of the conceptual definitions of the human mind. Therefore, AI research is directly relevant not only to technological progress but also to ongoing philosophical and scientific debates about the very nature of human intelligence (Russell & Norvig, 2020; Nilsson, 2009).

2.2 The Theory of Multiple Intelligences

Howard Gardner's Theory of Multiple Intelligences proposes that human intelligence is not a single, monolithic capacity, but rather a set of multiple intelligences, each specialized in different domains (Gardner, 1983). This approach suggests that individuals may possess distinct strengths and weaknesses across these intelligences. The eight core intelligences identified in Gardner's theory are as follows:

Verbal-Linguistic Intelligence: The capacity to use words effectively. Individuals with this intelligence typically prefer learning through listening and reading, and are skilled at verbally expressing their thoughts and feelings. They think conceptually and enjoy activities like reading, writing, debating, and wordplay. Typical professions include writers, journalists, teachers, and politicians (Gardner, 1999).

Logical-Mathematical Intelligence: The ability to establish cause-effect relationships, think analytically, and solve logical problems. These individuals frequently ask "why," categorize events, and make systematic connections. They enjoy mathematical operations and understanding how mechanical systems work. They often become scientists, engineers, or programmers (Sternberg, 2003).

Visual-Spatial Intelligence: The ability to perceive the environment through visual imagery and manipulate spatial relationships. These individuals remember what they see better than what they hear, possess vivid imaginations, and are sensitive to color and aesthetics. They are often drawn to professions such as architecture, painting, photography, and graphic design (Gardner, 1983).

Musical-Rhythmic Intelligence: The sensitivity to and ability to produce musical elements like sound, melody, rhythm, and harmony. Those with high musical intelligence remember melodies easily, often prefer to learn with music, and can accurately repeat tunes even without formal training. Suitable careers include musician, composer, and sound engineer (Winner, 1996).

Bodily-Kinesthetic Intelligence: The skill in using one's body or hands to solve problems or create products. These individuals have mastered body language, are adept at imitating gestures, and possess well-developed manual dexterity. They are naturally inclined toward activities like running, jumping, and building. Professions include athletes, actors, and sculptors (Armstrong, 2009).

Interpersonal Intelligence (Social): The capacity to understand the emotions, intentions, and motivations of others and to relate to them effectively. They are sensitive to non-verbal cues like facial expressions and tone of voice, and possess strong leadership, empathy, and persuasion skills. They excel in fields like teaching, counseling, and management (Goleman, 1995).

Intrapersonal Intelligence (Introspective): The capacity to understand oneself, including one's own feelings, thoughts, and motivations. These individuals can recognize their own strengths and weaknesses, set personal goals, and develop strategies to achieve them. They often prefer working independently and are drawn to fields like psychology, philosophy, and writing (Gardner, 1999).

Naturalistic Intelligence: Sensitivity to the natural world, including living organisms and ecological systems. Individuals with this intelligence enjoy learning about animals, observing plants, and understanding natural phenomena. They find fulfillment in activities like gardening and spending time in nature. Compatible professions include biologist, environmental scientist, veterinarian, and farmer (Armstrong, 2009).

3. The Brain as a Computer: The Limits of an Analogy

A prevalent metaphor in contemporary AI discussions is the comparison of the human brain to a computer. This analogy, central to the computational theory of mind, gained significant traction in the latter half of the 20th century. While it offers a useful model for explaining certain cognitive processes, it also carries the risk of reducing the brain's profound complexity in an overly simplistic manner. Thought experiments like John Searle's "Chinese Room" (Searle, 1980) have highlighted the limitations of this approach, arguing that syntactic information processing is insufficient to explain semantic understanding and conscious experience.

Whether classical or quantum, computers process information at a symbolic or probabilistic level. The human brain and its consciousness, however, are not merely computational systems; they belong to an embodied, affective, evolutionarily shaped organism that interacts dynamically with its environment. The theory of embodied cognition, proposed by Varela, Thompson, and Rosch (1991), emphasizes that cognitive processes are not confined to the brain but arise from the interaction of brain, body, and environment. Viewing the brain solely as an information processor ignores its capacity for subjective experience, its affective dimensions, and its existence within a personal and historical context.

Even quantum computers, despite operating on complex principles like superposition and entanglement, remain physical computational machines. The brain, in contrast, is an entity that not only processes information but also generates meaning and internal experience. "Meaning" here refers to a phenomenological, subjective first-person experience that transcends mere symbolic representation. Therefore, while the brain-computer analogy illuminates quantitative aspects of thought, it obscures the qualitative and subjective dimensions. Metaphors can facilitate understanding, but they cannot fully capture the truth. In designing AI, philosophical considerations of consciousness, emotion, intention, and context must be addressed beyond pure algorithmic intelligence.

3.1 The Efficiency and Danger of the Computer Metaphor

Comparing the brain to a computer is useful for certain functional comparisons. However, this metaphor can be misleading, both philosophically and scientifically, because it ignores the subjective, contextual, and meaning-laden nature of mental life. Computers count; brains think, feel, and construct meaning. The difference is not merely technical but ontological.

The brain and the computer share several fundamental similarities that are often compared:

Information processing: the brain processes electrical signals through billions of nerve cells, while computers process data using electronic circuits and algorithms.

Information storage: the brain creates long-term memory through synaptic connections, while computers store data on hard drives, RAM, or cloud systems.

Computational power: the brain performs many simultaneous operations through the parallel processing of complex neural networks, while computers perform complex mathematical calculations thanks to high processor speeds and parallel processing capabilities.

Perception: the brain perceives stimuli such as sound, image, and touch through its senses, while computers receive data from the outside world through various sensors and input devices.

Learning and adaptability: the brain develops by restructuring its neural networks based on experience, while computers gain the ability to learn from data through artificial intelligence and machine learning algorithms.

Error correction: the brain's neuroplasticity allows it to detect and compensate for errors, while computers detect software and hardware faults using error-detection codes.

Communication: the brain connects to other parts of the body and the environment through nerve cells, while computers communicate through networks and technologies such as the Internet.

Despite this, there are also significant and fundamental differences between the brain and the computer:

Structure: the brain is a biological organ composed of a complex, dynamic network of billions of neurons, whereas a computer is an artificial system composed of electronic circuits, processors, and hardware components.

Processing architecture: the brain naturally performs massively parallel processing, while traditional computers generally rely on serial processing organized around a central processing unit.

Energy consumption: the brain is highly efficient and consumes relatively little energy, while computers generally consume more energy and require active cooling.

Learning: the brain is capable of flexible, creative learning that generalizes well beyond its direct experience, while computers' learning capacity is constrained by their programming and algorithms.

Cognition: complex cognitive functions such as flexibility, creativity, and intuitive information processing remain inaccessible to most modern computers.

Emotion: the brain undergoes emotional experiences and exhibits emotional intelligence, while computers cannot yet genuinely perceive or feel emotions.

Embodiment: the brain operates in integration with the body's other biological systems to control complex physiological functions, while computers' interaction with the physical world is limited to external hardware and interfaces.

These differences demonstrate that comparing the brain to a computer is only a crude metaphor. Today's computers can only mimic the brain's computational capabilities to a certain extent, but they cannot fully reflect its complex structure, flexible learning capacity, emotional richness, or biological integrity. Therefore, when considering brain functioning, the need to integrate new paradigms beyond classical computer models, such as quantum computing or neurobiological mechanisms, is increasingly recognized.

The eye is a complex biological structure that cannot be directly compared to the megapixel counts used in digital imaging. A megapixel figure quantifies the resolution of a digital camera sensor in terms of its total pixel count. The eye, however, is not pixel-based; it is a complex organ shaped by evolution that operates on an entirely different mechanism, and its capacity cannot be meaningfully expressed as a pixel count. Approximately 120 million rod cells and 6–7 million cone cells in the retina respond to light of different wavelengths and convert detailed, colorful visual information into neural signals. Unlike the discrete pixels of digital sensors, these biological receptors process light and color information in a multidimensional and dynamic manner.

While the megapixel count of digital cameras simply refers to the total number of pixels on the sensor, limiting the eye's image-perceiving capacity in this way is misleading. The eye integrates not only the amount of light but also numerous parameters such as contrast, motion, depth, color tones, and environmental context. Furthermore, this perceptual information is transmitted directly to the brain and interpreted, enriched, and given meaning through complex feedback mechanisms between the brain and the retina. Therefore, the eye's capacity to perceive images is far more than a resolution that can be expressed simply by the number of pixels. This demonstrates the limitations of metaphors and technical comparisons in understanding the nature of visual perception. Comparing the eye to megapixels simplifies complex biological and cognitive processes, similar to reducing human consciousness to mere information-processing capacity. Therefore, rather than measuring the eye's function with pixel-based resolution, it is more scientifically and philosophically accurate to recognize it as a multilayered, dynamic, and constantly adapting perceptual system.

3.2 What Is Not in the Machine: Intuitive, Non-Algorithmic Knowledge

Intuition is defined as the self-evident knowledge of reality obtained directly, without resorting to reasoning or experimentation. It is immediate knowledge that does not require proof (Polanyi, 1966). The human mind accesses knowledge through two primary channels: rational and intuitive. These two types of knowledge are considered the two poles of consciousness (Boden, 1990). When rational thought is silenced, intuition achieves extraordinary clarity and reality, allowing us to perceive events around us directly, without passing through conceptual filters (James, 1890). Intuitive illuminations in daily life can occur suddenly, without requiring conscious effort. Meditation and similar practices, on the other hand, quiet the rational mind, opening up the intuitive side (Walsh & Shapiro, 2006).

In mathematics and philosophy, intuition generally gains value only when placed within a mathematical or logical framework (Lakatos, 1976). According to Aristotle, intuition is direct knowledge that cannot be proven but forms the basis of reasoning. Intuitive thought, he argued, "grasps fundamental definitions that cannot be proven" (Aristotle, Posterior Analytics). In his Ethics, Spinoza distinguishes three types of knowledge: first, "opinion" or "imagination," which is indefinite empirical knowledge that comes through the senses; second, general concepts and adequate ideas obtained through reason; and third, what he calls intuitive science, which leads to direct knowledge of God's essence (Spinoza, 1677). According to him, the first type of knowledge leads to error, while the second and third types are necessarily true (Spinoza, 1677). Intuitive knowledge is not the result of mental exercise but rather its beginning, and it is therefore difficult to articulate discursively (Spinoza, 1677).

Kant, on the other hand, associates the process of knowledge with three faculties: sensibility, intellect, and reason. Knowledge begins with the senses, passes to understanding, and is completed by reason. Intuition (Anschauung) is a form of immediate relationship with objects and comes from sensibility; concepts, on the other hand, are the product of mediate thought (Kant, 1781). For Kant, intuition is both the beginning and the ultimate goal of the process of knowledge.

In Islamic philosophy, Ibn Sina emphasizes the importance of attaining knowledge through intuition. According to him, knowledge is achieved through conclusions based on definitive principles derived through intuition. While experience has an influence, this influence operates in accordance with the rules of reason (Nasr, 2006). Ibn Arabi, on the other hand, categorizes the types of knowledge as reason, senses, and inspiration (intuition), stating that intuitive knowledge comes directly from God and is infallible (Chittick, 1989). In this approach, intuition is a source of truth beyond reason and experience.

René Descartes defines intuition as knowledge that is self-evident and not deduced from any other proposition. He claims to know his own existence through intuition, because this knowledge is not the result of reasoning but the direct product of intuitive awareness (Descartes, 1641). On this view, intuitive knowledge is not algorithmic or computable and therefore cannot be fully captured by formal methods. Henri Poincaré emphasized intuition as a crucial element of scientific creativity: "With logic we prove; with intuition we invent" (Poincaré, 1908). In this respect, intuitive knowledge is a fundamental tool for discovering new relationships and harmonies.

In this context, Kurt Gödel's 1931 incompleteness theorems revealed the existence of formally undecidable propositions. According to Gödel, any consistent formal system rich enough to express arithmetic contains propositions that cannot be proven within the system, even though their truth can be recognized from outside it. Mathematical truth thus transcends mere formalism and cannot be fully grasped by algorithmic methods (Gödel, 1931). Here, intuition becomes important as a form of understanding that is non-algorithmic and cannot be systematized.
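
Stated in its modern (Gödel–Rosser) form --- a compressed restatement of the result cited above, not the article's own formulation --- the first theorem reads:

```latex
% First incompleteness theorem, modern Godel-Rosser form.
\textbf{Theorem (G\"odel 1931; Rosser's refinement).}
Let $F$ be a consistent, effectively axiomatizable formal system that
contains elementary arithmetic. Then there is a sentence $G_F$ of $F$
such that
\[
  F \nvdash G_F
  \qquad\text{and}\qquad
  F \nvdash \neg G_F .
\]
```

Moreover, the Gödel sentence $G_F$ is true in the standard model of arithmetic, although this truth is established outside $F$ --- the sense in which it is "seen" rather than proven within the system.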

Intuitionism is among the three fundamental approaches to modern mathematics, along with Platonism and formalism. Dutch mathematician Luitzen Brouwer (1881–1966) developed intuitionism as an alternative answer to problems of reasoning, particularly regarding infinite sets. According to Brouwer, the existence of mathematical objects depends on a constructive way of accessing them. The principle of the excluded middle should be limited to finite sets, and reasoning about infinity should be avoided (Brouwer, 1924). This idea contrasted with Hilbert's formalist approach (Hilbert, 1925).
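
Brouwer's worry can be illustrated with a textbook example (not drawn from the present article): a classical existence proof that relies on the excluded middle and therefore never identifies its witness.

```latex
\textbf{Claim.} There exist irrational numbers $a, b$ such that $a^{b}$ is rational.

\textbf{Classical proof.} By the excluded middle, $\sqrt{2}^{\sqrt{2}}$ is
either rational or irrational. If rational, take $a = b = \sqrt{2}$.
If irrational, take $a = \sqrt{2}^{\sqrt{2}}$ and $b = \sqrt{2}$, since
\[
  a^{b} = \sqrt{2}^{\sqrt{2}\cdot\sqrt{2}} = \sqrt{2}^{\,2} = 2. \qquad\square
\]
```

The proof establishes existence without determining which pair is the witness --- precisely the non-constructive step an intuitionist rejects.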

Heuristic computing involves the use of intuition and instinctive knowledge rather than logical analysis in solving complex problems and making decisions. This method allows individuals to reach effective conclusions based on prior experience and unconscious knowledge without resorting to direct reasoning processes. Heuristic computing is an important concept in disciplines such as computer science, artificial intelligence, and cognitive psychology because the problems encountered in these fields often involve uncertain, dynamic, and complex structures. While traditional mathematical models and precise algorithms may be limited in these types of problems, heuristic approaches offer more flexible and adaptable solutions (Walsh & Shapiro, 2006).

Practical applications of heuristic computing are quite diverse (Boden, 1990). Genetic algorithms generate optimized solutions inspired by the fundamental principles of biological evolution; simulated annealing randomly searches the space of possible solutions by imitating physical thermodynamic processes and achieves better results. Artificial neural networks enhance learning and intuitive decision-making capabilities by imitating neuronal functioning in the human brain, while fuzzy logic systems attempt to model human-like flexible thinking by working with uncertain and imprecise data. Tabu search is among the metaheuristic algorithms that aim to reach the best solution by avoiding repetitive errors in the solution space.
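
As a minimal sketch of one of the metaheuristics named above --- with a toy objective function, perturbation scale, and cooling schedule chosen purely for illustration --- simulated annealing can be written as follows:

```python
import math
import random

def simulated_annealing(objective, x0, n_steps=10_000, t0=1.0, cooling=0.999):
    """Minimize `objective` by random local perturbation: always accept
    improvements, occasionally accept worse moves with a probability that
    shrinks as the 'temperature' cools (the Metropolis criterion)."""
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    temperature = t0
    for _ in range(n_steps):
        candidate = x + random.gauss(0.0, 1.0)  # random local move
        fc = objective(candidate)
        if fc < fx or random.random() < math.exp(-(fc - fx) / temperature):
            x, fx = candidate, fc               # accept the move
            if fx < fbest:
                best, fbest = x, fx             # track the best solution seen
        temperature *= cooling                  # geometric cooling schedule
    return best, fbest

# A multimodal objective where purely greedy descent would get stuck.
f = lambda x: x**2 + 10 * math.sin(3 * x)
print(simulated_annealing(f, x0=5.0))
```

The deliberate acceptance of occasional worse moves is what lets the search escape local minima --- a crude mechanical analogue of the non-exhaustive, experience-guided search described above.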

The biological basis of these intuitive processes lies in the chemical synapses between nerve cells. Chemical synapses function as complex structures where electrical signals are transmitted via chemical substances called neurotransmitters, providing flexibility, reinforcement, and guidance in neural network communication (Walsh & Shapiro, 2006). Synaptic strengthening mechanisms underlie learning and memory, while the multi-connection capacity of chemical synapses enables complex and parallel information processing in brain networks. Although these synapses are slower in conduction speed than electrical synapses, this slowness offers significant advantages in terms of order and selectivity in signal transmission.

Heuristic computing is a vital concept for understanding the complexity and flexibility of the human mind in both cognitive science and artificial intelligence (Boden, 1990). These processes transcend mechanistic computational models and reflect the dynamic and adaptive nature of biological neural networks. The intuitive functioning of the human mind reveals the role of deep, often unconscious processes beyond pure logic in accessing knowledge, making it a unique field that requires both scientific and philosophical understanding.

4. The Planet's Spatio-Temporal Byproduct: Brain and Consciousness

The human brain can be seen as the planet's most intricate spatio-temporal construct --- a biological structure through which matter begins to perceive the fabric of space and time itself. From a Kantian perspective, space and time are not external realities but forms of human intuition --- the conditions under which any experience becomes possible. The brain, therefore, is not merely an organ that processes stimuli but the physical instantiation of the very principles that allow the universe to appear as extended in space and successive in time.

Within this planetary byproduct reside approximately 14–16 billion cortical neurons, interconnected through 10¹⁴ to 10¹⁷ synapses --- each a microscopic node in the vast temporal computation that underlies consciousness (Kandel et al., 2013). The cerebellum alone houses nearly 100 billion granule cells, while each Purkinje cell interfaces with roughly 200,000 of them, forming a dense network of coordinated timing and prediction --- the biological echo of temporal order. Glial cells, numbering in the tens of billions, sustain and modulate these neural constellations, ensuring stability across the brain's dynamic spatio-temporal flux (Tarlacı, 2014; 2016; 2019).

At the microphysical level, the nerve cell membrane, a mere 5 nanometers thick, mediates the flow of millions of ions per second through molecular channels. Vesicles only 50–100 nanometers in diameter regulate the quantum-scale release of neurotransmitters, translating molecular probabilities into macroscopic perception. The total length of parallel nerve fibers in the adult human brain --- approximately 100,000 kilometers --- exceeds twice the circumference of the Earth, reflecting how deeply our biological space extends within itself.

The cerebral cortex, with a surface area between 2,000 and 2,500 cm² and varying thickness between 1.2 and 5 mm, embodies this folding of inner space. Its 200 million interhemispheric fibers and 1.7 million descending motor fibers integrate the world as both spatial geometry and temporal flow (Kandel et al., 2013; Tarlacı, 2014).

In comparison, a 50-kilogram human contains roughly 3 × 10²⁸ protons --- the same matter that composes the stars, now organized into a structure capable of representing space and time (Tarlacı, 2014). Consciousness thus emerges not as an epiphenomenon of neural computation but as the internal resonance of the universe becoming aware of its own spatio-temporal structure through biological form. The brain, in this sense, is the living synthesis of Kant's transcendental aesthetics and the planet's evolutionary physics --- where neural networks serve as the stage upon which the phenomena of space and time unfold as experience.

4.1 What Do We Know?

Many fundamental mechanisms of the brain and nervous system are largely understood today. Bioelectric currents in nerve cells (neurons) occur when ions pass across the cell membrane, and this ion exchange is essential for synaptic transmission and axonal conduction (Kandel et al., 2013). Neurons derive their energy primarily from mitochondria, and this energy is used for the synthesis, storage, and degradation of neurotransmitters (Attwell & Laughlin, 2001). Neurotransmitters play a critical role in the regulation of emotional states; for example, serotonin and dopamine have been linked to happiness, motivation, and reward mechanisms (Nestler & Hyman, 2010). Decision-making processes particularly engage the prefrontal cortex and anterior cingulate cortex, and the importance of anatomical structures such as the basal ganglia is well established (Miller & Cohen, 2001). The ontogenetic development of the brain, starting from the embryonic stage, is shaped by processes such as neurogenesis, synaptogenesis, and synaptic pruning (Stiles & Jernigan, 2010). Pain signals are transmitted from peripheral nerves via the spinal cord and brainstem to the thalamus and somatosensory cortex (Tracey & Mantyh, 2007).

4.2 What Don't We Know?

On the other hand, some complex brain experiences and conscious processes remain incompletely understood. Sensory perceptions such as pain, color, sound, smell, and taste---despite neural processing---are still poorly understood as subjective experiences (Chalmers, 1995). Musical experiences affect different brain regions, generating complex emotional and cognitive responses, but the details of these processes are still being investigated (Peretz & Zatorre, 2005). Recalling visual images and dreaming are linked to brain processes related to memory and the subconscious, but the exact mechanisms are unknown (Hobson, 2009). Social and emotional experiences such as love, liking, and aesthetic appreciation are complex, with neurobiological underpinnings (Fisher, 2004). The sense of self---the sense of "I" within us---is the foundation of subjective experience and remains a significant research topic in neuroscience (Gallagher, 2000). Concepts such as memory retrieval, thought, subjectivity, and free will are considered both philosophically and scientifically as complex areas where mental and physical processes intersect (Libet, 1985; Dennett, 1991).

4.3 The Turing Test: Acting "Just" Like a Human Without Experience

The most well-known method for evaluating artificial intelligence systems capable of exhibiting human-like behavior is the Turing Test, proposed by Alan Turing (1950). In this test, a human interrogator (referee) attempts to distinguish, through written communication, whether the interlocutor is human or machine. If the AI successfully deceives the interrogator into believing it is human, the test is considered passed. The Turing Test thus measures the capacity of artificial intelligence systems to mimic human behavior and stands as a classic example of human–machine competition.

Another central goal of artificial intelligence is to think like humans. The field of cognitive science, which emerged in pursuit of this objective, lies at the intersection of psychology, linguistics, sociology, mathematics, logic, and philosophy (Gardner, 1987). Cognitive science seeks to understand the problem-solving, learning, and decision-making processes of the human mind and to model these processes in artificial systems. The ultimate aim is for AI to acquire human-like cognitive abilities.

A further dimension of artificial intelligence is the ability to think rationally---to reach verifiable and logically sound conclusions through inference methods such as induction and deduction (Russell & Norvig, 2020). Rational thinking is vital for artificial systems to solve complex problems logically and to make meaningful decisions. Acting rationally, in contrast, refers to the effective and purposeful execution of actions derived from a system's logical reasoning (Russell & Norvig, 2020). This distinction underscores that AI should not only think correctly but also act correctly.

The Turing Test can be viewed as both too easy and too difficult---too easy because human judges are fallible, and too difficult because it demands that machines be capable of deception. Yet, in recent years, the test has regained significance as a dynamic, interactive, and adversarial evaluation method for AI systems, in contrast to static benchmark testing. It now measures not only general intelligence but also a system's ability to simulate human-like interaction. However, models capable of effectively deceiving humans pose potential risks, such as social engineering and the spread of misinformation.

In a recent study (Jones, 2025), four AI systems---ELIZA, GPT-4o, LLaMa-3.1-405B, and GPT-4.5---were evaluated using three-sided, randomized, controlled, pre-registered Turing tests across two independent populations (undergraduate students and online workers). Participants engaged in five-minute text-based conversations with both humans and AI systems, attempting to determine which interlocutor was human. The models were also tested for their ability to induce specific personality impressions. Results revealed that GPT-4.5 was perceived as human 73% of the time, significantly higher than the rate for real human participants. LLaMa-3.1 was judged human 56% of the time, not significantly different from real humans, whereas the baseline models ELIZA and GPT-4o scored far below chance (23% and 21%, respectively). These data provide the first empirical evidence of artificial systems passing a standard three-sided Turing Test, demonstrating that current Large Language Models (LLMs) have surpassed a crucial threshold for human-like communication. The three-sided Turing Test, being more demanding than the traditional two-sided version, more accurately gauges a machine's ability to simulate or deceive human interlocutors. Ultimately, machines were able to appear human to real humans---up to 73% of the time---without any subjective experience. These findings suggest that the emergence of AI systems indistinguishable from humans will rekindle major social, economic, and ethical debates.
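
To make the statistical comparison concrete, the sketch below computes an exact binomial tail probability against the 50% chance baseline; the trial count of 100 is a hypothetical round number chosen for illustration, not the study's actual sample size.

```python
from math import comb

def binom_tail(k: int, n: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the probability of at least
    k 'judged human' verdicts arising from chance guessing alone."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical illustration: 73 of 100 judgments call the model human.
print(f"P(>=73 of 100 by chance): {binom_tail(73, 100):.1e}")  # on the order of 1e-6
```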

4.4 John Searle's Distinction Between Strong and Weak Artificial Intelligence

Philosopher John Searle (1980) divided artificial intelligence into two primary categories: Strong AI and Weak AI. Strong AI posits that truly conscious and understanding machines can exist---machines that fully emulate the human mind. On this view, the relationship between brain and mind is analogous to that between computer hardware and software, so any system capable of manipulating physical symbols could, in theory, possess intelligence comparable to that of a human being. Advocates of strong AI argue that machines could not only simulate biological brains but also possess genuine mental capacities. This view assumes that AI is capable of generating consciousness and meaning, not merely processing data.

By contrast, Weak AI rejects this claim. According to Searle, the human mind cannot be fully modeled algorithmically or reduced to physiological processes; therefore, it is impossible to create machines that truly think, feel, or experience. Machines may simulate thinking, but they do not possess it. They act as if they understand, but lack genuine mental content. Hence, AI systems cannot produce consciousness, since they lack the intrinsic properties of biological minds.

Searle's famous "Chinese Room" argument further clarifies this point. He proposed that a computer running a program merely manipulates symbols according to syntactic rules, without any understanding of their semantic content. The human mind, however, goes beyond syntax---it creates and experiences meaning. Thus, running a program alone is insufficient for genuine thought (Searle, 1980). Programs are not minds, and they cannot generate minds. Mental experience cannot arise from computation alone; consciousness depends on the biological organization of the brain. For an artificial system to reproduce a human-like mind, it would need a biophysical structure functionally equivalent to the brain. Therefore, the strong AI equation "computer program = mind" is false: the mind is not merely software but a meaningful phenomenon emerging from complex biological processes (Searle, 1980; Haugeland, 1985). This perspective provides a critical foundation for contemporary debates on the mind–brain relationship and the nature of artificial cognition.
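
Searle's point can be made concrete with a deliberately trivial sketch: a program that maps input symbols to output symbols by table lookup. Nothing in it represents meaning, yet its outputs can look fluent (the rule table below is invented for illustration).

```python
# A toy 'Chinese Room': purely syntactic symbol manipulation.
# The rule book pairs input strings with canned replies; no component of
# the program represents what any string means.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's nice today."
}

def room(symbols: str) -> str:
    """Return whatever output the rule book prescribes for the input."""
    return RULE_BOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

print(room("你好吗？"))  # fluent-looking output; zero understanding inside
```

Scaling the table up, or replacing it with a statistical model, changes the performance but not the point: on Searle's argument, syntax alone never yields semantics.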

4.5 Stevan Harnad's Stages of Machine Intelligence

Stevan Harnad (1994) proposed the concept of the "Darwinian Turing Test" to evaluate the functional equivalence of artificial intelligence and artificial life systems with biological organisms. He identified five levels of equivalence between artificial systems and human intelligence.

The first level, the toy model, represents a minimal and highly simplified subset of human cognitive abilities. Most of today's AI systems operate at this level, far from exhibiting general human-like intelligence.

The second level, corresponding to the pen-pal stage of the classic Turing Test, involves symbolic communication where an external observer cannot distinguish between a human and a computer based solely on text interaction. At this level, systems operate purely at the syntactic level, independent of semantic understanding---a point critically analyzed in Searle's Chinese Room thought experiment.

The third level, the Total Turing Test or robotic Turing Test, expands beyond syntax to semantics. Here, the system physically and sensorily interacts with the environment, mimicking the full complexity of human perception and behavior. At this stage, an observer cannot reliably distinguish between human and artificial agents, as the latter exhibit integrated perceptual and motor behaviors.

The fourth level, microfunctional indistinguishability, denotes a stage where the neural and neurotransmitter functions of artificial systems become fully functionally equivalent to their biological counterparts. Synthetic neurons at this level are indistinguishable from biological ones in their electrical and chemical behavior.

Finally, the fifth level corresponds to a grand unified theory of artificial cognition, where artificial nerve cells are identical to biological neurons down to the electron level---fully consistent with the mathematical laws governing neural conduction. The only remaining differences are minute physico-chemical details that are, in principle, unobservable and functionally irrelevant (Harnad, 1994).

Harnad's model outlines an evolutionary trajectory from symbolic AI systems that merely imitate human behavior to artificial organisms that are biologically indistinguishable from humans. This framework bridges philosophical debates---such as Searle's critique of strong AI---with neurobiological realism. As Harnad emphasizes, reaching the highest level of artificial intelligence cannot be achieved merely by programming; it requires constructing physical and chemical mechanisms capable of replicating, or even surpassing, the functions of biological nervous systems. This perspective highlights that artificial intelligence research is fundamentally interdisciplinary, situated at the crossroads of cognitive science and neurobiology.

4.6 Roger Penrose's Stages of Artificial Intelligence

The ideas put forward by Roger Penrose in The Large, the Small, and the Human Mind (1998) form a cornerstone of contemporary debates on artificial intelligence and consciousness. Penrose argues that the functioning of the human mind cannot be reduced to mere computation. According to him, conscious awareness and subjective experience do not emerge solely through the execution of appropriate algorithms (Penrose, 1998, p. 122). This claim directly challenges the classical AI paradigm, which assumes that "thinking is computation" and that machines could eventually achieve consciousness through sufficiently advanced algorithms. Penrose maintains that awareness arises from the unique and intricate physical dynamics of the brain, which cannot be replicated by computational means alone (Penrose, 1998, p. 123).

At this point, Penrose distinguishes between a weak and a strong version of this non-computationalist position (C1 and C2 in Table 1). On the weak version, the brain's physical processes can be entirely explained by the known laws of physics; consciousness requires only a detailed account of these processes, even though it cannot be captured algorithmically (Penrose, 1998, p. 124). By contrast, the strong version, which Penrose himself defends, holds that the human mind depends on as-yet-undiscovered physical principles extending beyond current physical laws. In this sense, Penrose asserts that our understanding of physical reality must be expanded to account for consciousness (Penrose, 1998, pp. 124–125).

Penrose further highlights the limitations of computation and algorithmic reasoning in explaining consciousness, suggesting that mental processes may require a form of complexity that transcends classical computational systems. His hypothesis, developed with Stuart Hameroff, proposes that brain activity---particularly within microtubular structures---may involve quantum-mechanical processes (Hameroff & Penrose, 2014). This framework reinforces the idea that biological and physical specificity, rather than symbolic simulation, is essential for genuine consciousness.

In this respect, Penrose's views resonate with John Searle's "Chinese Room" argument (Searle, 1980). Searle contends that computer programs manipulate formal symbols syntactically but do not generate semantic understanding. Similarly, Penrose emphasizes that the human mind operates on meaningful (semantic), not merely formal (syntactic), processes. Both thinkers argue that consciousness and awareness are phenomena of irreducible complexity, resisting explanation through algorithmic computation alone.

Therefore, Penrose's position serves as both a scientific and philosophical caution against the claims of strong artificial intelligence, underscoring the profound theoretical and empirical barriers to creating truly conscious machines (Penrose, 1998; Hameroff & Penrose, 2014; Searle, 1980) (Table 1).

Class | Summary of opinion | Can consciousness be produced? | Philosophical context | Representatives / approaches
A | Consciousness can be produced with appropriate algorithms | Yes | Functionalism, strong AI | Dennett, Minsky
B | Computation can simulate consciousness but not create it | No | Weak AI, biological naturalism | Searle, Block
C1 | Consciousness is tied to physical functioning but cannot be computed | No, but the known laws of physics are sufficient | Non-computationalist physicalism | Penrose (weak version)
C2 | New physics is required to explain consciousness | No, current physics is insufficient | Quantum theories of mind | Penrose–Hameroff
D | Consciousness cannot be explained by scientific means | No, it is impossible in principle | Dualism, panpsychism, idealism | Nagel, Chalmers, mystical approaches

Table 1. Penrose's classification of views on the possibility of machine consciousness (after Penrose, 1998).

5. Quantum Computation Limit

Machine capacity is a quantity limited by the product of the number of operations performed per unit of time and the bits of information stored in memory. This limitation is based on the fundamental physical laws of the universe and is particularly tied to Planck time. Jack Ng (2000) showed that the product of processing speed and the amount of information is constrained by a constant; here, the processing time is limited by Planck time (𝑡ₚ), the smallest meaningful unit of time in the universe. Planck time is defined as 𝑡ₚ = √(ℏG/c⁵) and has a value of approximately 5.4 × 10⁻⁴⁴ seconds, i.e., of order 10⁻⁴³ s. This physical limit directly affects the speed and capacity of any information-processing system. Therefore, theoretically, a machine's processing capacity and information storage capacity cannot exceed these fundamental quantum-gravitational limits. These results are of great importance, especially in the context of quantum computing and black hole thermodynamics, and play a critical role in understanding the limits of machines' information-processing capabilities (Ng, 2000; Lloyd, 2000). Thus, no matter how far technological advances progress, these limits set by the fundamental laws of physics will remain an insurmountable constraint.
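
As a numerical illustration --- a minimal sketch, not a derivation from Ng's paper --- the following computes the Planck time from the constants just named and, following Lloyd (2000), the corresponding ceiling on operations per second for one kilogram of matter (the 1 kg mass is an arbitrary example value):

```python
import math

# CODATA constants (SI units)
hbar = 1.054_571_817e-34  # reduced Planck constant, J*s
G = 6.674_30e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.997_924_58e8        # speed of light, m/s

# Planck time: t_P = sqrt(hbar * G / c^5)
t_P = math.sqrt(hbar * G / c**5)
print(f"Planck time: {t_P:.2e} s")  # ~5.39e-44 s, i.e. of order 1e-43 s

# Lloyd's (2000) bound: a system of energy E performs at most
# 2E / (pi * hbar) elementary operations per second. For m = 1 kg, E = m c^2.
E = 1.0 * c**2  # joules
print(f"Max ops/s for 1 kg of matter: {2 * E / (math.pi * hbar):.2e}")  # ~5.4e50
```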

The brain is an extremely complex biochemical system that functions in conjunction with electrical transmission between nerve cells. Processes such as the release of neurotransmitters, the opening and closing of ion channels, the stimulation of receptors, and the activation of intracellular second messenger systems cause the brain to function like a "chemical soup." This complex physiological mechanism forms the basis of behavior, emotions, thoughts, and ultimately, conscious experience. However, how qualities such as subjective awareness, inner experience, and selfhood arise from this biological mechanism of the brain remains an open enigma. Consciousness, despite attempts to relate it to physical processes, appears to be a multilayered phenomenon that cannot be explained solely as the sum of chemical reactions. Therefore, while this complex chemical nature of the brain is considered the fundamental foundation that makes consciousness possible, it also, with its unexplained aspects, provides a philosophical resistance to scientific reductionism.

For example, the number of intercellular connections (synapses) in the brain is reported to range from 5×10¹¹ to 5×10¹⁴, highlighting significant variation among individuals. The brain also contains approximately 8.6×10¹⁰ nerve cells (neurons) and 9.5×10¹⁰ non-neuronal cells (such as glial cells). There are some 3,000 distinct primary cell types in the brain, with an average of five in each brain region. Furthermore, modern research has shown that the sheer number of combinations---737 brain regions, 500 receptor types through which neurotransmitters can produce different effects, and 450 different ion channels---reveals the brain's functional and structural diversity and the uniqueness of each individual. At the cellular level, each cell contains some 10⁷ proteins, 10¹² organic molecules, and a total of 2.5×10¹² molecules. From a genetic perspective, each cell expresses 5,000–10,000 genes, and the human genome contains approximately 20,000 protein-coding genes in total (Herculano-Houzel, 2009). These data reveal the extraordinary complexity of the brain and the diversity among individuals, while also clearly demonstrating that the brain is not subject to strict genetic control but is shaped by spatiotemporal and environmental changes.
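
A back-of-envelope sketch using only the figures quoted above conveys the scale involved; the "combination count" is a naive product offered purely for orientation, not a biological quantity from the cited sources:

```python
# Figures as quoted in the text above.
neurons = 8.6e10          # nerve cells
non_neuronal = 9.5e10     # glial and other non-neuronal cells
synapses_high = 5e14      # upper estimate of synapse count
regions, receptor_types, channel_types = 737, 500, 450

print(f"Total cells:         {neurons + non_neuronal:.1e}")    # ~1.8e11
print(f"Synapses per neuron: {synapses_high / neurons:,.0f}")  # ~5,814 at the high end
print(f"Region x receptor x channel combinations: "
      f"{regions * receptor_types * channel_types:.1e}")       # ~1.7e8
```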

To create a digital replica of the brain, theories regarding brain structure and function will need to be tested by combining the above biological and microscopic variables with experimental and theoretical approaches. This process is expected to formulate fundamental principles such as cellular structural principles, molecular organization, the distribution of ion channels and receptors, synaptic connections, connections between brain regions, and brain-body interactions. The question of whether human consciousness and intelligence can be emulated through artificial systems is one of the most important debates of our time, both philosophically and technologically. This debate can be evaluated through two fundamentally different ontological approaches: monism and dualism. According to the dualistic perspective, humans possess a metaphysical "essence" or "soul" that exists outside their physical bodies. This "essence," the entity underlying human consciousness, cannot be explained solely by material processes and points to a level of existence that cannot be replicated by machines. Specifically, this entity, which can be described as a "divine essence" breathed into our bodies, transcends classical physicochemical or quantum-mechanical processes.

NeuroQuantology, which addresses the role of quantum physics in the nervous system and was first introduced as a term in 2001, is noteworthy. It is an interdisciplinary field that explores the relationship between consciousness and the brain within the framework of quantum physics principles (Tarlacı, 2014; 2016). However, neuroquantology not only explains the brain's functioning through quantum-mechanical processes but also introduces a "metaphysical" dimension to the quantum nature of consciousness that cannot be explained by classical biological models. It argues that a quantum reality lies at the foundation of conscious experience and self, beyond classical physical and chemical processes. This approach posits the essence of human consciousness as a more fundamental reality that exists in quantum realms---universal and unattainable by direct physical measurement. Therefore, the neuroquantological perspective holds that human consciousness is not merely the sum of brain activity but rather a spiritual or essential entity manifested within the brain. No matter how thoroughly the entire biological and physical functioning of the brain, including quantum structures, is modeled and emulated, it is impossible to transfer this metaphysical element to artificial systems. No matter how advanced AI systems become, they lack true consciousness, internal experience, and self-awareness; they can only exhibit consciousness-like behaviors, because their operation lacks a "self" beyond physical and computational processes.

In contrast, the monistic paradigm defines human consciousness as a product of purely physical processes. According to this view, consciousness is a phenomenon that emerges from the dynamics of information processing and interaction within the brain's complex neural networks. If this is true, the fundamental components of human consciousness can be explained entirely by natural science, and with sufficient understanding and modeling of these processes, it becomes possible to create human-like minds. In this context, advances in artificial intelligence and neuroscience pave the way for the future creation of self-awareness and conscious experience in machines. This means that AI systems may not only appear conscious from the outside but may actually acquire the hallmarks of human consciousness, such as internal experience and free will.

Ultimately, the paradigm chosen regarding the nature and ontological foundation of human consciousness plays a fundamental role in determining the potential of artificial intelligence. According to the dualistic approach, the "essence" of human consciousness cannot be imitated by machines under any circumstances, while the monistic approach suggests the possibility that human-like minds, consciousnesses, and inner experiences can be reproduced in technological environments. This fundamental philosophical distinction shapes the ethical, ontological, and epistemological dimensions of artificial intelligence research and is critical for interpreting future developments.

References

  1. Armstrong T. Multiple Intelligences in the Classroom. 3rd ed. ASCD; 2009.
  2. Attwell D, Laughlin SB. An energy budget for signaling in the gray matter of the brain. J Cereb Blood Flow Metab. 2001;21(10):1133-1145.
  3. Boden MA. The Creative Mind: Myths and Mechanisms. Basic Books; 1990.
  4. Brouwer LEJ. Intuitionism and formalism. In: Proceedings of the International Congress of Mathematicians. 1924.
  5. Chalmers DJ. Facing up to the problem of consciousness. J Conscious Stud. 1995;2(3):200-219.
  6. Chalmers DJ. The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press; 1996.
  7. Chittick WC. The Sufi Path of Knowledge. SUNY Press; 1989.
  8. Churchland PM. Eliminative materialism and the propositional attitudes. J Philos. 1981;78(2):67-90.
  9. Dennett DC. Consciousness Explained. Little, Brown and Co.; 1991.
  10. Descartes R. Meditationes de Prima Philosophia [Meditations on First Philosophy]. 1641.
  11. Place UT. Is consciousness a brain process? Br J Psychol. 1956;47(1):44-50.
  12. Fisher HE. Why We Love: The Nature and Chemistry of Romantic Love. Henry Holt and Co.; 2004.
  13. Fodor JA. The Language of Thought. Harvard University Press; 1975.
  14. Gallagher S. Philosophical conceptions of the self: implications for cognitive science. Trends Cogn Sci. 2000;4(1):14-21.
  15. Gardner H. Frames of Mind: The Theory of Multiple Intelligences. Basic Books; 1983.
  16. Gardner H. The Mind's New Science: A History of the Cognitive Revolution. Basic Books; 1987.
  17. Gardner H. Intelligence Reframed: Multiple Intelligences for the 21st Century. Basic Books; 1999.
  18. Gödel K. Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme. Monatshefte Math Phys. 1931;38:173-198.
  19. Goleman D. Emotional Intelligence. Bantam Books; 1995.
  20. Hameroff S, Penrose R. Consciousness in the universe: A review of the "Orch OR" theory. Phys Life Rev. 2014;11(1):39-78.
  21. Harnad S. Levels of functional equivalence in reverse bioengineering: the Darwinian Turing Test for Artificial Life. Artif Life. 1994;1(3):293-301. doi:10.1162/artl.1994.1.3.293
  22. Herculano-Houzel S. The human brain in numbers: a linearly scaled-up primate brain. Front Hum Neurosci. 2009;3:31. doi:10.3389/neuro.09.031.2009
  23. Haugeland J. Artificial Intelligence: The Very Idea. MIT Press; 1985.
  24. Hilbert D. The Foundations of Mathematics. 1925.
  25. Hobson JA. REM sleep and dreaming: towards a theory of protoconsciousness. Nat Rev Neurosci. 2009;10(11):803-813.
  26. James W. The Principles of Psychology. Harvard University Press; 1890.
  27. Jones CR, Bergen BK. Large language models pass the Turing test. arXiv. Preprint posted online March 31, 2025. Accessed May 20, 2025.
  28. Kandel ER, Schwartz JH, Jessell TM, Siegelbaum SA, Hudspeth AJ. Principles of Neural Science. 5th ed. McGraw-Hill; 2013.
  29. Kant I. Critique of Pure Reason. 1781.
  30. Kastrup B. The Idea of the World: A Multi-Disciplinary Argument for the Mental Nature of Reality. Iff Books; 2019.
  31. Lakatos I. Proofs and Refutations. Cambridge University Press; 1976.
  32. Leibniz GW. Monadology. 1714.
  33. Libet B. Unconscious cerebral initiative and the role of conscious will in voluntary action. Behav Brain Sci. 1985;8(4):529-566.
  34. Lloyd S. Ultimate physical limits to computation. Nature. 2000;406(6799):1047-1054.
  35. Malebranche N. The Search After Truth. 1674-1675.
  36. McCarthy J, Minsky ML, Rochester N, Shannon CE. A proposal for the Dartmouth Summer Research Project on Artificial Intelligence. 1955.
  37. Miller EK, Cohen JD. An integrative theory of prefrontal cortex function. Annu Rev Neurosci. 2001;24:167-202.
  38. Nagel T. What is it like to be a bat? Philos Rev. 1974;83(4):435-450.
  39. Nasr SH. Islamic Science: An Illustrated Study. World Wisdom; 2006.
  40. Nestler EJ, Hyman SE. Animal models of neuropsychiatric disorders. Nat Neurosci. 2010;13(10):1161-1169.
  41. Ng YJ. Clocks and computers, black holes, spacetime foam, and holographic principle. arXiv. Preprint posted online October 26, 2000.
  42. Nilsson NJ. The Quest for Artificial Intelligence: A History of Ideas and Achievements. Cambridge University Press; 2009.
  43. Peretz I, Zatorre RJ. Brain organization for music processing. Annu Rev Psychol. 2005;56:89-114.
  44. Penrose R. The Large, the Small and the Human Mind. Cambridge University Press; 1998:122-125.
  45. Poincaré H. Science et méthode [Science and Method]. Flammarion; 1908.
  46. Polanyi M. The Tacit Dimension. Routledge & Kegan Paul; 1966.
  47. Putnam H. The Nature of Mental States. Cambridge University Press; 1967.
  48. Robinson H. Epiphenomenalism. In: Stanford Encyclopedia of Philosophy. 2004.
  49. Russell S, Norvig P. Artificial Intelligence: A Modern Approach. 4th ed. Pearson; 2020.
  50. Ryle G. The Concept of Mind. Hutchinson & Co.; 1949.
  51. Searle JR. Minds, brains, and programs. Behav Brain Sci. 1980;3(3):417-457.
  52. Smart JJC. Sensations and brain processes. Philos Rev. 1959;68(2):141-156.
  53. Spinoza B. Ethics. 1677.
  54. Sternberg RJ. Cognitive Psychology. 4th ed. Wadsworth; 2003.
  55. Stiles J, Jernigan TL. The basics of brain development. Neuropsychol Rev. 2010;20(4):327-348.
  56. Tarlacı S. NeuroQuantology: Quantum Physics in the Brain: Reducing the Secret of the Rainbow to the Colors of a Prism. Nova Science Publishers; 2014.
  57. Tarlacı S. Quantum neurobiological view to mental health problems and biological psychiatry. J Psychopathol. 2019;25:70-84.
  58. Tarlacı S, Pregnolato M. Quantum neurophysics: from non-living matter to quantum neurobiology and psychopathology. Int J Psychophysiol. 2016;103:161-173.
  59. Tononi G. Consciousness as integrated information: a provisional manifesto. Biol Bull. 2008;215(3):216-242.
  60. Tracey I, Mantyh PW. The cerebral signature for pain perception and its modulation. Neuron. 2007;55(3):377-391.
  61. Turing AM. Computing machinery and intelligence. Mind. 1950;59(236):433-460.
  62. Varela FJ, Thompson E, Rosch E. The Embodied Mind: Cognitive Science and Human Experience. MIT Press; 1991.
  63. Walsh R, Shapiro SL. The meeting of meditative disciplines and Western psychology: a mutually enriching dialogue. Am Psychol. 2006;61(3):227-239.
  64. Winner E. Gifted Children: Myths and Realities. Basic Books; 1996.
Corresponding Author

Name: Prof. Dr. Sultan Tarlacı, M.D.

Address: Üsküdar University, Medical Faculty, Department of Neurology and Neuroscience, İstanbul, Türkiye

Email: sultan.tarlaci@uskudar.edu.tr