Dogs display vast phenotypic diversity, including differences in height, skull shape, tail, etc. Yet, humans are almost always able to quickly recognize a dog, even though no single feature or group of features is critical for distinguishing dogs from other objects/animals. In search of the mental activities that lead human individuals to state "I see a dog", we hypothesize that the brain might extract meaningful information from the environment using Ramsey-sentence-like procedures. To turn the proposition "I see a dog" into a Ramsey sentence, the term dog must be replaced by a long and complex assertion consisting only of observational terms, existential quantifiers and operational rules. The Ramsey sentence for "I see a dog" reads: "There is at least one entity called dog which satisfies the following conditions: it is an animal, it has four legs, ..., etc., ..., and it is something that I have in my sight". We discuss the biological plausibility and the putative neural correlates of a Ramsey-like mechanism in the central nervous system. We outline a brain-inspired, theoretical neural architecture consisting of a parallel network that requires virtually no memory, is devoid of probabilistic choices and can analyze huge but finite amounts of unique visual details, combining them into a single conceptual output. In sum, the Ramsey sentence stands for a versatile tool that can be used not just as a methodological device to cope with biophysical affairs, but also as a model to describe the real functioning of cognitive operations such as sensation and perception.
Introduction
As John was passing by, he saw a dog under a tree. He said to himself: "I see a dog". John says the same words when he sees a dog collar, when he hears barking, when he catches the typical smell of dog in an empty room. In all these cases, John is almost always capable of recognizing a dog. What happens in John's brain when he watches a dog? How does John recognize that he is watching a dog? We will set aside auditory, olfactory and tactile cues and confine ourselves to the assessment of a specific case related to visual cues, i.e., a human individual who watches an object, a painting, a sketch or a photo illustrating something that he terms "a dog". Scientists have tackled the issue and formulated manifold responses. Some scholars provide a holistic account of perceptive contents, suggesting that the observed object displays, as a whole, emergent properties that cannot be found by zooming in on any of its constituent parts (Pastukhov, 2017). In turn, others contend that an image is first perceived in terms of its basic individual elements/features and only then fully recognized (Grainger et al., 2008). Others suggest that the visual system might split the processing of an object's form and color ("what") from its spatial location ("where") (Rao et al., 1997). Others believe that quick recognition of things like dogs involves both category-specific computational hubs in the ventral visual stream and distributed cortical memory networks (Woolnough et al., 2020). Template matching models of pattern recognition suggest that mental comparison might take place between external inputs and internal schemes (Hirai, 1980). Further, recently developed deep convolutional neural networks, inspired by the layered organization of the visual cortex, have been trained for invariant object recognition and classification (Cohen et al., 2020; Wakhloo et al., 2023).
We will take a different turn. First, we will break the concept of dog into its component parts, looking for the minimal features or traits that allow human observers to recognize a dog. The problem is twofold:
- On the one hand, the visual and non-visual features that allow John to say "I see a dog" must be defined.
- On the other hand, even though dogs are among the most variable mammals, John is almost always able to quickly recognize a dog. The second problem can therefore be stated as follows: looking at different images depicting manifold canine breeds, how is John almost always able to state "I see a dog"?
We will conclude that no single feature or group of features allows John to distinguish the dog from other objects or animals; nevertheless, John is almost always able to recognize a dog. To solve this seeming contradiction, a theory of sensation and perception can be built using an approach borrowed from the last writings of Frank Plumpton Ramsey, just before his premature death in 1930. We will develop a logical-mathematical framework in the form of a Ramsey sentence, which is useful for reasoning about the unification of analytic observables and concepts. We will suggest that the mind might use Ramsey-sentence-like procedures to extract meaningful information from the environment. Further, we will examine the biological plausibility and the possible neural correlates of a Ramsey-like mechanism in the brain and describe its advantages compared with existing functional models of the brain.
Defining Dogs with Ramsey Sentences
How does John recognize a dog? We will analyze both the visual and non-visual features that allow John to state "I see a dog". The Oxford Learner's Dictionary defines the dog as an animal with four legs and a tail, often kept as a pet or trained for work such as hunting or guarding buildings. The dog can also be defined as a highly variable domestic mammal (Canis familiaris) closely related to the gray wolf. Yet, these definitions are rather general. How many features of a dog are required to say "this is a dog"? How many features of different dogs are required to identify the dog's breed?
We could say that John recognizes a dog because it is an animal, but this is too vague a concept. We could say that John recognizes a dog because it has four legs, but John is able to recognize a dog even if, unfortunately, it has three legs. We could say that John recognizes a dog because it is a domesticated descendant of the wolf, but he is able to easily recognize a dog even if he is unaware of phylogenesis and evolution. We could say that John recognizes a dog through its genome sequencing (Hayward et al., 2016; Plassais et al., 2019; Letko et al., 2023), but he can recognize a dog even if he has never heard about DNA. We could say that John recognizes a dog because it has an upturning tail, but he can recognize a dog even if, unfortunately, its tail has been docked. Although dog tails come in many different shapes, from straight up to curled or corkscrew, John still recognizes that it is a dog. The same holds for the weight, the height, the eye gaze, the facial expression, the body posture, the manifold coats of different breeds, the different-shaped snout, the Carnivoran-like teeth arrangement for cutting meat, the non-retractable claws, etc. It has been reported that facial phenotypes, such as, e.g., the complexity of markings on dogs' faces, can affect human interpretation of their expressions (Sexton et al., 2023). We could say that John recognizes a dog because it is uniquely adapted to human behavior, having acquired the ability to understand and communicate with humans. But John also effortlessly recognizes a wild dog.
In sum, our examples suggest that no single observable or non-observable feature, or group of features, is critical to define the dog and distinguish it from other objects or animals. Nevertheless, the lack of explicit identifying features and the huge morphological and behavioral diversity between breeds do not prevent John from being almost always able to recognize that he is seeing a dog. To resolve this apparent contradiction, we are going to introduce the Ramsey sentence and show how it could be used to describe the mental activities that lead John to state "I see a dog".
The proposition "I see a dog" can be turned in a Ramsey sentence. Ramsey introduced a technique of examining a scientific theory by means of long and complex formal propositions (Ramsey 1931), later termed "Ramsey Sentences" by Hempel (1958). Ramsey's account is built on the observation that scientific theories often describe abstract, theoretical terms such as "spin" and "electron" that cannot be observed and are difficult to distinguish from the metaphysical terms so often encountered in philosophy (Carnap 1966). A finitely axiomatized scientific theory T can be formulated in a formal language of first order predicate logic (Hintikka 1998), where the predicates are usually divided into two groups, namely the observational terms (O1, O2,..., On) and the non-observational terms (N1, N2,..., Nn). Therefore, the theory can be expressed as:
T = T(O1, O2, ..., On; N1, N2, ..., Nm).
Aiming to build scientific theories by means of both existential propositions and explicit definitions representing experiences, Ramsey removed the theoretical terms N1, ..., Nm from T. Non-observable entities can be tackled through second-order variables X, i.e., variables that do not refer to individuals, but to properties of individuals or relations between individuals. A Ramsey sentence is thus obtained, i.e., a second-order, extended observational statement where the theoretical terms and/or postulates are replaced by a high but finite number of variables and observables bound by initial existential quantifiers ∃. In formal terms:
TR = ∃X1 ∃X2 ... ∃Xm T(O1, O2, ..., On; X1, X2, ..., Xm),
where the proposition TR stands for the Ramsey sentence of T.
Theoretical terms are replaced by the assertion that "there is at least one entity that bears the same formal connections to the observational properties that the theory T posits, and that satisfies certain conditions". For example, instead of explicitly using theoretical terms such as "electron", a long and complex proposition can be drawn that goes through all the cases satisfying the laws and their consequences, so that the term "electron" turns out to designate the conjunction of all the properties needed to specify the meaning of the term, such as, e.g., the properties 1, 2, 3, plus 4, 5, 6, plus the additional properties 7, 8, etc. This means that Ramsey sentences might stand for logical representations of theoretical propositions, formulated so that hypothetical abstractions are no longer needed as operators within the theory.
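As a purely illustrative toy example of ours (the observational predicates Charged and DeflectsInField are hypothetical placeholders, not drawn from the literature), consider a miniature theory with a single theoretical term:

T: ∀x (Electron(x) → Charged(x) ∧ DeflectsInField(x)).

Ramseyfication deletes the theoretical predicate Electron and binds a second-order variable X with an initial existential quantifier:

TR: ∃X ∀x (X(x) → Charged(x) ∧ DeflectsInField(x)).

TR asserts only that there is at least one property standing in the stated formal connections to the observables, without naming it.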
Ramsey himself humbly asked: is it necessary to use such an intricate definition for the legitimate use of a theory? The answer is positive. According to functionalist scholars, Ramsey sentences provide empirically adequate descriptions of the things that can be described just by observational terms (Berardi and Steila, 2015; Lowther, 2022). The Ramsey-Carnap approach, or Ramseyfication, has been widely used to assess scientific issues such as, e.g., infrared spectroscopy in analytical chemistry (Toppel, 2021). Further, David Lewis (1972) suggested using Ramsey sentences to tackle mental issues. He introduced a general method for constructing Ramsey sentences to define mental operations such as pain. All the mental state terms related to pain are removed from the statement and replaced by variables X bound by existential quantifiers:
∃X1 ∃X2 ∃X3 ∃X4 ... ∃Xn T(X1, X2, X3, X4, ..., Xn).
In this case, the Ramseyfied statement comprises (see the sketch after the list below):
- Existentially quantified variables that range over mental states.
- Terms that denote stimulations/behavior.
- Terms that specify various causal relations among them.
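As a toy illustration of ours in Lewis's style (the stimulus and behavior terms are hypothetical simplifications, not Lewis's own wording), the mental term "pain" is removed and replaced by a bound variable:

∃X1 (X1 is typically caused by tissue damage ∧ X1 typically causes wincing and avoidance behavior ∧ John is in state X1).

The sentence characterizes pain purely by its causal role among observable stimulations and behaviors, without using the mental term itself.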
In what follows, we suggest looking at the Ramsey sentence in realist terms, arguing that it is not just a useful methodological tool, but might also be a reliable model for explaining cognitive processes such as sensation and perception.
Could Ramsey Sentences Be Performed by Human Brains?
We argue that the term "dog" can be treated as a theoretical term in a Ramsey sentence. For children who have never seen a dog, the dog stands for a theoretical entity. Only with time, habituation and social consensus do children learn to climb the steps from theoretical to observable entities, becoming able to say: "I see a dog". Therefore, the mental schemes that allow human individuals to say "I see a dog" require time and training to give a meaning to the observed object. In line with this observation, it has been reported that face looking in monkeys is not innate; rather, experience is required for the formation/maintenance of face domains (Arcaro et al., 2017).
Our aim is to provide a Ramsey sentence for the (apparently) trivial assertion:
"I see a dog".
To achieve Ramseyfication, the assertion can be rephrased as:
"I have a dog in my sight".
Then, it can be described in Ramsey's terms through second-order variables that do not refer to individual dogs, but to properties of, and relations among, dogs:
"There is at least an entity called dog which satisfies the following conditions: it is an animal, it has four legs, it is a domesticated descendant of the wolf ,..., etc, ..., and is something that I have in my sight".
All the observable features encompassed in the concept of "dog" must be explicitly expressed as a long but finite list of dog-related predicates that can be empirically confirmed.
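As a purely illustrative computational sketch of ours (the feature predicates and the entity encoding are hypothetical placeholders, not a claim about the brain's actual code), the Ramseyfied assertion can be read as an existential quantifier ranging over the entities currently in sight, applied to a conjunction of observational terms:

# Minimal sketch of the Ramseyfied assertion "I see a dog".
# Each predicate below stands for one item of the long but finite
# list of observational terms; the names and the dict encoding are
# hypothetical, chosen only to make the logical structure explicit.

def is_animal(entity): return entity.get("animal", False)
def has_four_legs(entity): return entity.get("legs", 0) == 4
def descends_from_wolf(entity): return entity.get("wolf_descendant", False)

dog_observational_terms = [is_animal, has_four_legs, descends_from_wolf]  # ..., etc.

def i_see_a_dog(entities_in_sight):
    # "There is at least one entity (existential quantifier) that
    #  satisfies every observational condition (conjunction)."
    return any(all(term(entity) for term in dog_observational_terms)
               for entity in entities_in_sight)

print(i_see_a_dog([{"animal": True, "legs": 4, "wolf_descendant": True}]))  # True

The strict conjunction is only a first approximation: as discussed in the Conclusions, a winner-take-all coalescence of the parallel outputs can tolerate a missing feature such as a docked tail.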
In sum, the Ramsey sentence can be used to assess how the human brain recognizes a dog. The next step will be to evaluate the biological plausibility of a Ramsey account of nervous activity. To address the issue, we must investigate the very structure of the nervous system, looking for plausible neural correlates of Ramsey sentences.
Neural Correlates for Ramsey Sentences?
The Ramsey account requires manifold parallel channels that are able to simultaneously perform computations related to different dog features. Subcortical and cortical areas involved in visual sensation and perception have been widely investigated (Hayama et al., 2016). When a dog is in front of John's eyes, the temporal sequence of neuronal activation can be followed throughout John's visual and central nervous systems (Figure 1). The Ramsey approach predicts that, in the short time window of 280-400 ms from the sight of the dog to the assertion "I see a dog", the brain might perform a very high, but finite, number of parallel computations. Every parallel neural channel might examine one of the numerous observational terms X of a Ramsey sentence (namely, every single feature of the dog), producing a single final output that turns out to be the assertion "I see a dog" (Figure 1). In line with our predictions, it has been demonstrated that object recognition, modulated by both sensory cues and previous learning, requires simultaneous parallel information processing in sensorimotor, associative and limbic circuits (Macpherson et al., 2021). Since the nervous system is a distributed, large-scale network characterized by parallel processing loops, widespread inter-area fluctuation modes might transmit sensory data and task responses through parallel, non-interfering Ramsey-like channels. In line with this possibility, parallel computing and parallelization strategies are widely used in theoretical neuroscience and in the optimization of artificial neural networks to perform human-like tasks, capture neuron and synapse dynamics and deal with data processing, pattern recognition and classification (Liu et al., 2016; Pastur-Romay et al., 2017; Ben-Nun and Hoefler, 2018; Peres and Rhodes, 2022; Kanwisher et al., 2023). Compared with sequential architectures, parallel neural network architectures display optimized performance, higher efficiency and more flexible behavioral control (Hikosaka et al., 1999; Åström and Koker, 2011; Peres and Rhodes, 2022). Indeed, parallel training is robust and capable of yielding accurate long-term predictions in realistic scenarios, facilitating the performance of complex and simultaneous behaviors (Ribeiro and Aguirre, 2018).
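A minimal sketch of ours of the parallel architecture just described (the per-feature channel functions are hypothetical stand-ins for neural channels, and the thread pool is only a computational metaphor, not a claim about biophysics): every channel inspects a single observational term independently, and the outputs coalesce into one assertion.

from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-feature channels; each one evaluates a single
# observational term of the Ramsey sentence for "dog".
def snout_channel(image): return True
def four_legs_channel(image): return True
def tail_channel(image): return True
def coat_channel(image): return True

channels = [snout_channel, four_legs_channel, tail_channel, coat_channel]  # ..., etc.

def see_dog(image):
    # Run all channels in parallel; no single channel stores the whole concept.
    with ThreadPoolExecutor(max_workers=len(channels)) as pool:
        outputs = list(pool.map(lambda channel: channel(image), channels))
    # A single conceptual output emerges from the combination of the
    # parallel outputs (here a plain conjunction, for simplicity).
    return all(outputs)

print("I see a dog" if see_dog(object()) else "I do not see a dog")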
A Ramsey-sentence approach, running counter to higher-order representations of the "grandmother cell" type, requires instead a widely distributed storage of representations in the brain. In agreement with this prediction, it has been suggested that perceptual experiences can arise from the activity patterns of sparse neuronal populations in the mammalian neocortex (Marshel et al., 2019). Widespread inter-area fluctuation modes transmit sensory data in non-interfering channels, causing different areas to share co-fluctuations and task-related information within 300 ms from the onset of a visual stimulus (Ebrahimi et al., 2022). In agreement with the distributed brain storage required by Ramsey sentences, a mounting literature points towards task-sensitive and sensory-independent brain mechanisms underlying functions like spatial, motion and self-processing (Gaglianese et al., 2020). It has been demonstrated that fast recognition of faces and scenes implies the engagement of category-specific computational hubs in the ventral visual stream, where the medial temporal lobe and medial parietal cortex work in tandem (Woolnough et al., 2020).
The Ramsey account predicts that the brain analyzes huge amounts of single visual details in a sparse-coding manner, combining associative processes with symbolic structure. The extremely sparse, parallel pathways required for Ramsey sentences point towards manifold population dynamics in the central nervous system converging to a single, final output consisting of the assertion "I see a dog". In line with this suggestion, the extreme sparseness of the magnocellular LGN inputs to the macaque primary visual cortex can generate robust orientation selectivity in V1, as well as continuity in the orientation map (Chariker et al., 2016). The well-defined, canonical anatomical/physiological neuronal patterns required by a Ramsey sentence-like approach could ensure high-fidelity neural representations and communication between brain areas, overcoming the substantial variability of neuronal sensory responses and generating reliable sensory discrimination (Rossi et al., 2020; Ebrahimi et al., 2022; Fişek et al., 2023).
The Ramsey sentence could be used as a device to explore the neural basis of conceptual processing in the brain. However, parallel, non-interfering channels should be at least partially shared between distributed representations of separate objects that overlap due to common features, leading, e.g., to uncertainty between dogs and wolves. To overcome the issue, semantics and category-selective regions must be considered along with the neural basis of conceptual understanding (Tozzi et al., 2018; Khandhadia et al., 2023). It has been demonstrated that semantic processing is crucially engaged when judgements are formulated based either on association or on conceptual similarity (Jackson et al., 2015). A Ramsey approach to conceptual representation suggests that something propositional and symbolic could arise from parallel neuronal processing itself. In line with Ramsey's suggestions, it has been found that semantic concepts are scattered throughout vast areas of the cortical surface (Huth et al., 2016). A Ramsey-like account of conceptual representation requires the contribution of numerous mental processes including attention, motivation, memory formation and extinction. In line with this account, separate neuronal subpopulations in the mouse central amygdala selectively encode a wide range of different salient stimuli from various sensory modalities with distinct valences and physical properties (Yang et al., 2023). The missing link between sensation and semantic content could be provided by a crucial feature of the Ramsey sentence, namely, the existential quantifier. Indeed, quantifiers do exhibit neural correlates. For instance, BOLD fMRI studies point towards the existence of a large-scale fronto-parietal network contributing to specific aspects of the comprehension of logical quantifiers (Olm et al., 2014; Zhan et al., 2017; Heim et al., 2020). The comprehension of quantifiers requires both the right inferior parietal cortex, which handles the numerosity component, and the thalamus/anterior cingulate, which handle selective attention (McMillan et al., 2005).
The Ramsey-like framework calls for a reductionistic interpretation of widely used complexity measures, providing a feasible account of the spatial and temporal autocorrelation phenomena that explain various measures of network topology and capture individual and regional variations (Shinn et al., 2023). Describing different visual properties as related and intertwined, Ramsey-like models suggest that a comprehensive picture based on cortical population dynamics is required to explain function, hinting at a system that is less feedforward and more dominated by intracortical signals than previously thought (Cicchini et al., 2022). In sum, clues from the neuroscientific literature point toward the possibility of building a realistic Ramsey-like model that accounts for cognitive operations of the brain.
Conclusions
The human mind is almost always able to recognize a dog, despite the fact that dog breeds vary widely in shape, size, color, etc. We suggest that this cognitive phenomenon can be tackled via a methodological and functional approach based on Ramsey sentences. A Ramsey-like approach to cognitive activities such as sensation and perception suggests that mental states can be understood in terms of the parallel and simultaneous activation of specific cortical subareas. A Ramsey-like approach suggests that the brain does not split an environment perceived as a whole into manifold components; rather, manifold components act as inputs to analytically build the mental representation of the surrounding environment. In computer-science terms, fitting, optimization and the objective function for the target output are no longer achieved via learning algorithms (Kanwisher et al., 2023), but rather via a huge (but finite!) number of parallel computations, each one concerning a single feature of the input data. This means that Ramsey sentences might contribute to the human ability to represent relations between concepts, to code relationships between items sharing basic conceptual properties (e.g., dog and wolf) and to simultaneously represent associative links between dissimilar items co-occurring in particular contexts (e.g., dog and bone) within a single, unified concept (Jackson et al., 2015).
The possibility of using Ramsey sentences in neuroscientific contexts leads to intriguing outcomes. First of all, criticism can be levelled at the concepts of pair correlation and predictive coding in the brain. The paradigm of pair correlations suggests that different brain regions work together in a coordinated and strongly correlated manner, where sensory responses result from comparisons between bottom-up inputs and contextual predictions (Uran et al., 2022), achieving a collection of morphisms, i.e., maps between objects and their mental representations. In other words, the brain would recognize a dog by performing pair correlation between the immaterial dogs stored in the brain and the one currently being watched. On the contrary, the Ramsey account points towards the counterintuitive hypothesis that the brain does not perform matching between external visual inputs and an internal database, or between a model output and a target output by minimizing the error. All the observable content of a dog is explicitly expressed in a long and complex formal sentence generated in the parallel circuitry of the brain. No mental interpretation of vague, non-observable entities is required; rather, observational terms capable of empirical confirmation are achieved. This means that the concept of the dog arises naturally from the parallel flow occurring in different subcortical and cortical areas, without the need for pre-stored mental concepts of the dog or for comparisons among observable variables expressed in terms of atomic propositions. The use of Ramsey sentences to describe neuronal activity might be a step towards dismissing brain hierarchies, granularity and sub-divisions. The brain does what it does, not caring about our partition into different cognitive activities such as sensation, perception, emotion, memory, etc. (Tozzi and Peters, 2019). Paraphrasing Ramsey, the concept of a dog must be built out of simple observational facts, leaving aside the use of a set of axioms and a dictionary of correspondence rules that translates the primary language into the secondary language. Theoretical terms contribute to the observational component of a theory not through bridge laws connecting theoretical and observational concepts; rather, the only bridge principle is the given theory itself (Hintikka, 1998). Further, a Ramsey-like circuitry permits the addition of parallel lines that compute ever new atomic propositions to improve and enlarge the very definition of terms such as "dog".
A Ramsey-like approach to cognitive activities casts doubt on the utility of the energetically expensive long-term memory storage in the brain. Engrams of specific memories are thought to be distributed and stored in ensembles of neurons across multiple brain regions that are functionally connected via cross-regional recruitment of presynaptic neurons initiated by downstream memory neurons (Lavi et al., 2022; Roy et al., 2022). The Ramsey account suggests that the brain is a collection of numerous but finite functional units, each performing a single operation. The concept of dog arises from the simultaneous, sparse activation of many single parallel processes. What is termed memory ends up depending on the countless sparse sources that have been activated together, keeping in mind that the same anatomical unit can be recruited during different cognitive tasks. Over long timescales, the single, variable features of a dog (such as breed, cheek features, neck length, and so on) have been gathered in scattered, parallel brain circuits. In line with this theoretical account, it has been demonstrated in monkeys that environmental inputs drive neuronal activity by sculpting cortical domain formation (Arcaro et al., 2017). Indeed, selective viewing behavior at birth biases category-specific visual responses toward retinotopic representations, with no need for category-specific templates.
Another outcome of using Ramsey sentences for the assessment of human cognitive activities is the demise of probabilistic and Bayesian accounts of choices and beliefs (Cazettes et al., 2023). Probabilistic and classical inference patterns have been suggested to subtend both artificial and natural neural networks. Looking for the statistics of features in images, artificial and natural systems might use gradient descent learning characterized by representations that are more sensitive to common structures (Benjamin et al., 2022). Bayesian accounts point towards the brain as an inferential machine equipped with a priori beliefs (Ramstead et al., 2020). For instance, a Bayesian inference semantics for probabilistic reasoning in natural language successfully deals with various probabilistic semantic phenomena, including generalized quantifiers (Bernardy et al., 2019). On the contrary, Ramsey sentences guarantee neural multiplexing that does not require outputs based on the weighing of probabilities. The Ramsey sentence displays a crucial component that runs counter to probabilistic interpretations of human choices and beliefs, namely the existential quantifier. Thanks to the existential quantifier, the assertion "I see a dog" must be treated as an evolving existential statement such as "there is at least one thing termed dog that...". The assertion becomes a shorthand expression of all the judgements and beliefs about dogs whose consequences will meet the future successfully or not. The possibility that "I see a dog" might be determined in a way that John is unable to statistically anticipate must not be foreclosed, since "I see a dog" is open to John's future revision due to further additions made within the scope of the quantifier (Misak, 2020). Leaving aside probabilistic accounts, we suggest that a Selfridge Pandemonium-like "winner-takes-all" mechanism (Selfridge, 1959; Tozzi and Peters, 2018) might provide a plausible account of the coalescence of the single observational features into the final output.
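A toy sketch of ours of such a winner-take-all coalescence (the concept names, feature votes and weights are hypothetical, chosen only to make the mechanism explicit, and are not meant to reproduce Selfridge's original demons):

# Pandemonium-like winner-take-all coalescence of parallel feature outputs.
# All numbers below are hypothetical placeholders.

feature_votes = {             # outputs of the parallel observational channels
    "four_legs": 1.0,
    "upturned_tail": 0.8,
    "elongated_snout": 0.9,
    "barking_context": 0.0,   # absent in a silent photograph
}

concept_demons = {            # how strongly each concept listens to each feature
    "dog":  {"four_legs": 1.0, "upturned_tail": 1.0, "elongated_snout": 1.0, "barking_context": 1.0},
    "wolf": {"four_legs": 1.0, "upturned_tail": 0.2, "elongated_snout": 1.0, "barking_context": 0.0},
    "cat":  {"four_legs": 1.0, "upturned_tail": 0.1, "elongated_snout": 0.3, "barking_context": 0.0},
}

def shout(weights):
    # Each demon simply sums its weighted feature votes; no probabilities involved.
    return sum(weights[feature] * vote for feature, vote in feature_votes.items())

winner = max(concept_demons, key=lambda concept: shout(concept_demons[concept]))
print(f"I see a {winner}")    # -> "I see a dog"

Unlike the strict conjunction sketched earlier, the loudest demon wins even when a single feature (here, barking) is missing, which is in keeping with John recognizing a three-legged or tail-docked dog.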
The use of a Ramsey-like approach to human cognitive activities has limitations. It has been objected that Ramsey sentences do not carry out a genuine elimination of theoretical concepts (Majer, 1989; Koslow, 2008). It has been pointed out that Ramsey sentences cannot provide a satisfactory formalism for functionalism, being just a type of behaviourism plus a cardinality constraint on the number of relations between mentally relevant events (Lowther, 2022). Another objection can be raised: Ramsey sentences could be unfeasible in the human brain, since they would require a huge amount of time and computational power. The objection concerning time is easily removed if we consider that parallel networks are able to simultaneously perform a finite but huge number of operations. Yet, the requirement of a high amount of computational power for the functioning of Ramsey-like brain mechanisms is not necessarily a drawback. It is well known that huge, overparametrized neural networks do help with robustness, i.e., with the ability of computer systems to cope with both erroneous inputs and errors during execution. Bubeck et al. (2021) investigated the balance between the size of a neural network (i.e., the number of neurons k) and its robustness, as measured by the Lipschitz constant of the data-fitting model f ∈ Fk(ψ). To accomplish a robust two-layer neural network that perfectly fits the data, a huge number of parameters is required, corresponding to one neuron per datapoint (Bubeck and Sellke, 2022). In sum, the large network size required by Ramsey sentences permits the achievement of optimal smoothness and robustness. Last, but not least, we must be careful about the move from brain to mind and from mind to brain. The idea of concepts being broken down into decentralized networks is basically similar to feature identification and selection approaches in the machine learning field. Indeed, Ramsey sentences resemble a theory at the level of computation more than a theory at the level of the hardware (namely, the brain). In this manuscript, we have left aside the computational endeavor inextricably linked with Ramsey sentences, choosing instead to focus on the biological plausibility of the putative neural correlates of a Ramsey-based account.
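For orientation, the universal law of robustness can be stated roughly as follows (our paraphrase, up to constants and logarithmic factors and under the genericity assumptions of Bubeck and Sellke, 2022): any model f with p parameters that fits n datapoints in d dimensions below the noise level must satisfy

Lip(f) ≳ √(n·d / p),

so that a smooth, O(1)-Lipschitz fit requires p ≳ n·d parameters, i.e., heavy overparametrization of roughly the kind that a one-channel-per-feature, Ramsey-like architecture would provide.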
Apart from neuroscience, Ramsey approaches could be used for the evaluation of other biological systems too. For example, Ramsey sentences could provide a systematic perspective on the intercellular wiring of the human immune system. The human immune system's distributed network of cells circulating throughout the body is dynamically connected via interactions between cell-surface proteomes (Shilts et al., 2022). Ramsey sentences might be used to systematically map direct protein interactions and receptor wiring across the surface proteins detectable on human leukocytes.
To sum up, the Ramsey sentence might stand for a versatile device that can be used not just as a methodological tool to cope with biophysical affairs, but also as a reliable model to describe the real functioning of biological systems.
References
- Ajabi Z, Keinath AT, Wei XX, et al. Population dynamics of head-direction neurons during drift and reorientation. Nature 2023; 615:892--899.
- Andrews-Hanna JR, Smallwood J, Spreng RN. The default network and self-generated thought: component processes, dynamic control and clinical relevance. Annals N Y Academy Science 2014; 1316:29-52.
- Arcaro M, Schade P, Vincent J, et al. Seeing faces is necessary for face-domain formation. Nature Neuroscience 2017; 20, 1404--1412.
- Åström F, Koker R. A parallel neural network approach to prediction of Parkinson's Disease. Expert Systems with Applications 2011; 38(10): 12470-12474.
- Ben-Nun T, Hoefler T. Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis. arXiv:1802.09941v2, 2018.
- Benjamin AS, Zhang L-Q, Qiu C, Stocker AA, Kording KP. Efficient neural codes naturally emerge through gradient descent learning. Nature Communications 2022; 13(1):7972.
- Berardi S, Steila S. An intuitionistic version of Ramsey's Theorem and its use in Program Termination. Annals of Pure and Applied Logic 2015; 166(12):1382-1406.
- Bernardy J-P, Blanck R, Chatzikyriakidis S, Lappin S, Maskharashvili A. Bayesian Inference Semantics: A Modelling System and A Test Suite. Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM), pages 263-272, Minneapolis, June 6-7, 2019.
- Bubeck S, Li Y, Nagaraj D. A Law of Robustness for Two-Layers Neural Networks. Proceedings of Machine Learning Research 2021; 134:1-17 (34th Annual Conference on Learning Theory).
- Bubeck S, Sellke M. A Universal Law of Robustness via Isoperimetry. arXiv:2105.12806, 2022.
- Carnap R. An Introduction to the Philosophy of Science. Dover Publications, 1995 (original work published 1966).
- Cazettes F, Mazzucato L, Murakami M, et al. A reservoir of foraging decision variables in the mouse brain. Nature Neuroscience 2023.
- Chariker L, Shapley R, Young L-S. Orientation Selectivity from Very Sparse LGN Inputs in a Comprehensive Model of Macaque V1 Cortex. Journal of Neuroscience 2016; 36(49):12368-12384.
- Cicchini GM, D'Errico G, Burr DC. Crowding results from optimal integration of visual targets with contextual information. Nature Communications 2022; 13, 5741.
- Cohen U, Chung S, Lee DD et al. Separability and geometry of object manifolds in deep neural networks. Nature Communications 2020; 11,746.
- Ebrahimi S, Lecoq J, Rumyantsev O, et al. Emergent reliability in sensory cortical coding and inter-area communication. Nature 2022; 605:713-721.
- Edelman G. Neural Darwinism. New Perspectives Quarterly 2017; 31(1), 25-27.
- Fenk LA, Riquelme JL, Laurent G. Interhemispheric competition during sleep. Nature 2023; 616:312-318.
- Fişek M, Herrmann D, Egea-Weiss A, et al. Cortico-cortical feedback engages active dendrites in visual cortex. Nature 2023; 617:769-776.
- Gaglianese A, Branco MP, Groen I, et al. Electrocorticography Evidence of Tactile Responses in Visual Cortices. Brain Topography 2020; 33: 559--570.
- Gazzaniga MS. The Cognitive Neurosciences, Fourth Edition. MIT Press, 2009.
- Gazzaniga MS. Shifting gears: seeking new approaches for mind/brain mechanisms. Annual Review Psychology 2013; 64:1-20.
- Grainger J, Rey A, Dufau S. Letter perception: from pixels to pandemonium. Trends in Cognitive Sciences 2008; 12(10):381-7.
- Grimaldi A, Gruel A, Besnainou C, Jérémie J-N, Martinet J, Perrinet LU. Precise Spiking Motifs in Neurobiological and Neuromorphic Data. Brain Science 2023: 13(1)68.
- Hayama S, Chang L, Gumus K, King GR, Ernst T. Neural correlates for perception of companion animal photographs. Neuropsychologia 2016; 85:278-286.
- Hayward JJ, Castelhano MG, Oliveira KC, Corey E, Balkman C, et al. Complex disease and phenotype mapping in the domestic dog. Nature Communications 2016; 7:10460.
- Heim S, McMillan CT, Olm C, Grossman M. So Many Are "Few," but so Few Are Also "Few" - Reduced Semantic Flexibility in bvFTD Patients. Frontiers in Psychology 2020; 11:582.
- Hempel CG. The theoretician's dilemma: A study in the logic of theory construction. Minnesota Studies in the Philosophy of Science 1958; 2:173-226.
- Hikosaka O, Nakahara H, Rand MK, Sakai K, Lu X, et al. Parallel neural networks for learning sequential procedures. Trends in Neurosciences 1999; 22(10):464-471.
- Hintikka J. Ramsey Sentences and the Meaning of Quantifiers. Philosophy of Science 1998; 65(2): 289-305.
- Hirai Y. A template matching model for pattern recognition: self-organization of templates and template matching by a disinhibitory neural network. Biological Cybernetics 1980; 38(2):91-101.
- Huth AG, de Heer WA, Griffiths TL, Theunissen FE, Gallant JL. Natural speech reveals the semantic maps that tile human cerebral cortex. Nature 2016; 28;532(7600):453-8.
- Jackson R, Hoffman P, Pobric G, Lambon R. The nature and neural correlates of semantic association vs. conceptual similarity. Cerebral Cortex 2015; 25(11): 4319-4333.
- Kanwisher N, Khosla M, Dobs K. Using artificial neural networks to ask 'why' questions of minds and brains. Trends in Neurosciences 2023; 46(3): P240-254.
- Khandhadia AP, Murphy AP, Koyano KW, Leopold DA. Encoding of 3D physical dimensions by face-selective cortical neurons. PNAS 2023; 120 (9) e2214996120.
- Koslow A. The Representational Inadequacy of Ramsey Sentences. Theoria 2008.
- Lavi A, Sehgal M, de Sousa AF, Okabe A, Bear C, Silva AJ. Local memory allocation recruits memory ensembles across brain regions. Neuron 2022; 111(4):470-480.e5.
- Letko A, Hédan B, Snell A, Harris AC, Jagannathan V, et al. Genomic Diversity and Runs of Homozygosity in Bernese Mountain Dogs. Genes (Basel) 2023; 14(3):650.
- Lewis D. How to define theoretical terms. Journal of Philosophy, 1970; 67(13): 426--44.
- Liu X, Zeng Y, Zhang T, et al. Parallel Brain Simulator: A Multi-scale and Parallel Brain-Inspired Neural Network Modeling and Simulation Platform. Cognitive Computation 2016; 8: 967--981.
- Lowther TS. Behaviourism in Disguise: The Triviality of Ramsey Sentence Functionalism. Axiomathes 2022; 32:101--121.
- Macpherson T, Matsumoto M, Gomi H, Morimoto Y, Uchibe E, Hikida T. Parallel and hierarchical neural mechanisms for adaptive and predictive behavioral control. Neural Networks 2021; 144:507-521.
- Majer U. Ramsey's Conception of Theories: An Intuitionistic Approach. History of Philosophy Quarterly 1989; 6(2):233-258.
- Marshel JH, Kim YS, Machado TA, Quirin S, Benson B, et al. Cortical layer-specific critical dynamics triggering perception. Science 2019; 365(6453):eaaw5202.
- McDowell JJ. Behavioral and neural Darwinism: Selectionist function and mechanism in adaptive behavior dynamics. Behavioural Processes 2010; 84(1).
- McMillan CT, Clark R, Moore P, Devita C, Grossman M. Neural basis for generalized quantifier comprehension. Neuropsychologia 2005; 43(12):1729-1737.
- Misak C. Frank Ramsey: A Sheer Excess of Powers. Oxford: Oxford University Press, 2020.
- Olm CA, McMillan CT, Spotorno N, Clark R, Grossman M. The relative contributions of frontal and parietal cortex for generalized quantifier comprehension. Frontiers in Human Neuroscience 2014; 8:610.
- Pastukhov A. First, you need a Gestalt: An interaction of bottom-up and top-down streams during the perception of the ambiguously rotating human walker. Sci Rep. 2017; 7(1):1158.
- Pastur-Romay LA, Porto-Pazos AB, Cedron F, Pazos A. Parallel Computing for Brain Simulation. Current Topics in Medicinal Chemistry 2017; 17(14):1646-1668.
- Peres L, Rhodes O. Parallelization of Neural Processing on Neuromorphic Hardware. Frontiers in Neuroscience 2022; 16.
- Plassais J, Kim J, Davis BW, et al. Whole genome sequencing of canids reveals genomic regions under selection and variants influencing morphology. Nature Communications 2019; 10:1489.
- Ramsey FP. Theories, in The Foundations of Mathematics and Other Logical Essays, R. B. Braithwaite (ed.), London: Routledge, pp. 212--236, 1931.
- Ramstead M, Friston KJ, Hipólito I. Is the Free-Energy Principle a Formal Theory of Semantics? From Variational Density Dynamics to Neural and Phenotypic Representations. Entropy (Basel) 2020; 22(8):889.
- Rao SC, Rainer G, Miller EK. Integration of what and where in the primate prefrontal cortex. Science 1997; 276(5313):821-4.
- Ribeiro AH, Aguirre LA. "Parallel Training Considered Harmful?": Comparing series-parallel and parallel feedforward network training. Neurocomputing 2018; 316:222-231.
- Rosenbaum DA. It's a Jungle in There: How Competition and Cooperation in the Brain Shape the Mind. Oxford University, 2014.
- Rossi LF, Harris KD, Carandini M. Spatial connectivity matches direction selectivity in visual cortex. Nature 2020; 588:648--652.
- Roy DS, Park YG., Kim ME, et al. Brain-wide mapping reveals that engrams for a single memory are distributed across multiple brain regions. Nature Communications 2022; 13, 1799.
- Selfridge OG. Pandemonium: a paradigm for learning. In: Mechanization of Thought Processes: Proceedings of a Symposium Held at the National Physical Laboratory. London: HMSO, pp. 513-526, 1959.
- Sexton CL, Buckley C, Lieberfarb J, Subiaul F, Hecht EE, Bradley BJ. What Is Written on a Dog's Face? Evaluating the Impact of Facial Phenotypes on Communication between Humans and Canines. Animals 2023, 13(14), 2385.
- Shilts J, Severin Y, Galaway F, et al. A physical wiring diagram for the human immune system. Nature 2022; 608:397-404.
- Shinn M, Hu A, Turner L. Functional brain networks reflect spatial and temporal autocorrelation. Nature Neuroscience 2023; 26:867-878.
- Toppel M. Applying Ramseyfication to Infrared Spectroscopy. Erkenntnis 2021.
- Tozzi A, Peters JF. Multidimensional brain activity dictated by winner-take-all mechanisms. Neuroscience Letters 2018; 678 (21):83-89.
- Tozzi A, Peters JF, Fingelkurts AA, Fingelkurts AA, Perlovsky L. Syntax meets semantics during brain logical computations. Progress in Biophysics and Molecular Biology 2018; 140:133-141.
- Tozzi A, Peters JF. The common features of different brain activities. Neuroscience Letters, 2019; 692: 41-46.
- Uran C, Peter A, Lazar A, Fries P, Singer W, Vinck M. Predictive coding of natural images by V1 firing rates and rhythmic synchronization. Neuron 2022; 110(7), P1240-1257.E8.
- Wakhloo AJ, Sussman TJ, Chung SY. Linear Classification of Neural Manifolds with Correlated Variability. Physical Review Letters 2023; 131:027301.
- Yang T, Yu K, Zhang X, et al. Plastic and stimulus-specific coding of salient events in the central amygdala. Nature 2023; 616:510-519.
- Zhan J; Jiang X, Politzer-Ahles S, Zhou X. Neural correlates of fine-grained meaning distinctions: An fMRI investigation of scalar quantifiers. Human Brain Mapping 2017; 38(8):3848-3864.
Authors hold copyright with no restrictions. Based on its copyright Journal of NeuroPhilosophy (JNphi) produces the final paper in JNphi's layout. This version is given to the public under the Creative Commons license (CC BY). For this reason authors may also publish the final paper in any repository or on any website with a complete citation of the paper.