Our lives are being measured in an increasing variety of ways and with increasing frequency. These measurements have beneficial and deleterious effects at both the individual and societal levels. Behavioral measurement technologies offer the promise of helping us to know ourselves better and to improve our well-being through personalized feedback and gamification. At the same time, they present threats to our privacy, self-esteem, and motivation. At the societal level, the potential benefits of reducing bias and decision variability by using objective and transparent assessments are offset by threats of systematic, algorithmic bias from invalid or flawed measurements. Considerable technological progress, careful foresight, and continuous scrutiny will be needed so that the positive impacts of behavioral measurement technologies far outweigh the negative ones.
Broad empirical evidence suggests that higher-level cognitive processes, such as language, categorization, and emotion, shape human visual perception. Do these higher-level processes shape human perception of all the relevant items within an immediately available scene, or do they affect only some of them? Here, we study categorical effects on visual perception by adapting a perceptual matching task so as to minimize potential non-perceptual influences. In three experiments with human adults (N = 80, N = 80, and N = 82), we found that learned higher-level categories systematically bias human perceptual matchings away from a caricature of their typical color. This effect, however, unequally biased different objects that were simultaneously present within the scene, demonstrating a more nuanced picture of top-down influences on perception than has commonly been assumed. In particular, only perception of the object to be matched, not of the matching object, was influenced by animal category, and that object was also gazed at less often by participants. These results suggest that category-based associations change perceptual encodings of items at the periphery of our visual field, or of items stored in concurrent memory when a person moves their eyes from one object to another. The main finding of this study calls for a revision of theories of top-down effects on perception and falsifies the core assumption behind the El Greco fallacy criticism of them.
Humans have a remarkable capacity for coordination. Our ability to interact and act jointly in groups is crucial to our success as a species. Joint Action (JA) research has often concerned itself with simplistic behaviors in highly constrained laboratory tasks. But there has been a growing interest in understanding complex coordination in more open-ended contexts. In this regard, collective music improvisation has emerged as a fascinating model domain for studying basic JA mechanisms in an unconstrained and highly sophisticated setting. A number of empirical studies have begun to elucidate coordination mechanisms underlying joint musical improvisation, but these findings have yet to be cashed out in a working computational model. The present work fills this gap by presenting Tonal Emergence, an idealized agent-based model of improvised musical coordination. Tonal Emergence models the coordination of notes played by improvisers to generate harmony (i.e., tonality), by simulating agents that stochastically generate notes biased towards maximizing harmonic consonance given their partner’s previous notes. The model replicates an interesting empirical result from a previous study of professional jazz pianists: feedback loops of mutual adaptation between interacting agents support the production of consonant harmony. The model is further explored to show how complex tonal dynamics, such as the production and dissolution of stable tonal centers, are supported by agents that are characterized by (i) a tendency to strive toward consonance, (ii) stochasticity, and (iii) a limited memory for previously played notes. Tonal Emergence thus provides a grounded computational model to simulate and probe the coordination mechanisms underpinning one of the more remarkable feats of human cognition: collective music improvisation.
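The three agent properties named in the abstract (consonance-seeking, stochasticity, limited memory) can be illustrated with a minimal sketch. All names, the consonance scores, and the softmax sampling scheme here are assumptions for illustration, not the published Tonal Emergence implementation:

```python
import math
import random
from collections import deque

# Hypothetical consonance scores for pitch-class intervals 0..6, loosely
# ordered by conventional Western consonance rankings; the actual model's
# scoring function may differ.
CONSONANCE = {0: 1.0, 1: 0.1, 2: 0.4, 3: 0.6, 4: 0.7, 5: 0.8, 6: 0.2}

def interval(a, b):
    """Smallest pitch-class distance (0-6) between two notes (0-11)."""
    d = abs(a - b) % 12
    return min(d, 12 - d)

class Agent:
    """Improviser with limited memory and a stochastic note policy."""
    def __init__(self, memory_size=4, temperature=0.5, rng=None):
        self.memory = deque(maxlen=memory_size)  # (iii) limited memory
        self.temperature = temperature           # (ii) stochasticity
        self.rng = rng or random.Random()

    def hear(self, note):
        self.memory.append(note)

    def play(self):
        if not self.memory:
            return self.rng.randrange(12)
        # (i) bias toward consonance with the partner's remembered notes,
        # sampled via a softmax over the 12 pitch classes.
        scores = [
            sum(CONSONANCE[interval(c, m)] for m in self.memory) / len(self.memory)
            for c in range(12)
        ]
        weights = [math.exp(s / self.temperature) for s in scores]
        r = self.rng.random() * sum(weights)
        for c, w in enumerate(weights):
            r -= w
            if r <= 0:
                return c
        return 11

def duet(steps=32, seed=0):
    """Two mutually adapting agents exchange notes for `steps` rounds."""
    rng = random.Random(seed)
    a, b = Agent(rng=rng), Agent(rng=rng)
    notes = []
    for _ in range(steps):
        na, nb = a.play(), b.play()
        a.hear(nb)  # mutual adaptation: each agent hears the other
        b.hear(na)
        notes.append((na, nb))
    return notes
```

Cutting the `hear` calls in one direction turns the duet into the overdubbed (unidirectional) condition described in the related empirical studies.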
How do people use information from others to solve complex problems? Prior work has addressed this question by placing people in social learning situations where the problems they were asked to solve required varying degrees of exploration. This past work uncovered important interactions between groups’ connectivity and the problem’s complexity: the advantage of less connected networks over more connected networks increased as exploration was increasingly required for optimally solving the problem at hand. We propose the Social Interpolation Model (SIM), an agent-based model to explore the cognitive mechanisms that can underlie exploratory behavior in groups. Through results from simulation experiments, we conclude that “exploration” may not be a single cognitive property, but rather the emergent result of three distinct behavioral and cognitive mechanisms, namely, (a) breadth of generalization, (b) quality of prior expectation, and (c) relative valuation of self-obtained information. We formalize these mechanisms in the SIM, and explore their effects on group dynamics and success at solving different kinds of problems. Our main finding is that broad generalization and high quality of prior expectation facilitate successful search in environments where exploration is important, and hinder successful search in environments where exploitation alone is sufficient.
Often members of a group benefit from dividing the group’s task into separate components, where each member specializes their role so as to accomplish only one of the components. While this division of labor phenomenon has been observed with respect to both manual and cognitive labor, there is no clear understanding of the cognitive mechanisms allowing for its emergence, especially when there are multiple divisions possible and communication is limited. Indeed, maximization of expected utility often does not differentiate between alternative ways in which individuals could divide labor. We developed an iterative two-person game in which there are multiple ways of dividing labor, but in which it is not possible to explicitly negotiate a division. We implemented the game both as a human experimental task and as a computational model. Our results show that the majority of human dyads can finish the game with an efficient division of labor. Moreover, we fitted our computational model to the behavioral data, which allowed us to explain how the perceived similarity between a player’s actions and the task’s focal points guided the players’ choices from one round to the other, thus bridging the group dynamics and its underlying cognitive process. Potential applications of this model outside cognitive science include the improvement of cooperation in human groups, multi-agent systems, as well as human-robot collaboration.
Previous research has demonstrated that Distributional Semantic Models (DSMs) are capable of reconstructing maps from news corpora (Louwerse & Zwaan, 2009) and novels (Louwerse & Benesh, 2012). The capacity for reproducing maps is surprising since DSMs notoriously lack perceptual grounding. In this paper we investigate the statistical sources required in language to infer maps, and the resulting constraints placed on mechanisms of semantic representation. Study 1 brings word co-occurrence under experimental control to demonstrate that standard DSMs cannot reproduce maps when word co-occurrence is uniform. Specifically, standard DSMs require that direct co-occurrences between city names in a corpus mirror the proximity between the city locations in the map in order to successfully reconstruct the spatial map. Study 2 presents an instance-based DSM that is capable of reconstructing maps independent of the frequency of co-occurrence of city names.
We explore different ways in which the human visual system can adapt for perceiving and categorizing the environment. There are various accounts of supervised (categorical) and unsupervised perceptual learning, and different perspectives on the functional relationship between perception and categorization. We suggest that common experimental designs are insufficient to differentiate between hypothesized perceptual learning mechanisms and reveal their possible interplay. We propose a relatively underutilized way of studying potential categorical effects on perception, and we test the predictions of different perceptual learning models using a two-dimensional, interleaved categorization-plus-reconstruction task. We find evidence that the human visual system adapts its encodings to the feature structure of the environment, uses categorical expectations for robust reconstruction, allocates encoding resources with respect to categorization utility, and adapts to prevent miscategorizations.
Psychology researchers have long attempted to identify educational practices that improve student learning. However, experimental research on these practices is often conducted in laboratory contexts or in a single course, which threatens the external validity of the results. In this article, we establish an experimental paradigm for evaluating the benefits of recommended practices across a variety of authentic educational contexts—a model we call ManyClasses. The core feature is that researchers examine the same research question and measure the same experimental effect across many classes spanning a range of topics, institutions, teacher implementations, and student populations. We report the first ManyClasses study, in which we examined how the timing of feedback on class assignments, either immediate or delayed by a few days, affected subsequent performance on class assessments. Across 38 classes, the overall estimate for the effect of feedback timing was 0.002 (95% highest density interval = [−0.05, 0.05]), which indicates that there was no effect of immediate feedback compared with delayed feedback on student learning that generalizes across classes. Furthermore, there were no credibly nonzero effects for 40 preregistered moderators related to class-level and student-level characteristics. Yet our results provide hints that in certain kinds of classes, which were undersampled in the current study, there may be modest advantages for delayed feedback. More broadly, these findings provide insights regarding the feasibility of conducting within-class randomized experiments across a range of naturally occurring learning environments.
There is broad empirical evidence suggesting that higher-level cognitive processes, such as language, categorization, and emotion, shape human visual perception. For example, categories that we acquire throughout life have been found to alter our perceptual discriminations and distort perceptual processing. However, many of these studies have been criticized as unable to differentiate between immediate perceptual experience and the arguably concomitant processes, such as memory, judgment, and some kinds of attention. Here, we study categorical effects on perception by adapting the perceptual matching task to minimize the potential non-perceptual influences on the results. We found that learned category-color associations bias human color matching judgments away from their category ideal on a color continuum. This effect, however, unequally biased two objects (probe and manipulator) that were simultaneously present on the screen, thus demonstrating a more nuanced picture of top-down influences on perception than has been assumed both by theories of categorical perception and by the El Greco methodological fallacy. We suggest that only the concurrent memory for visually present objects is subject to a contrast-from-caricature distortion due to category-association learning.
Across three experiments featuring naturalistic concepts (psychology concepts) and naïve learners, we extend previous research showing an effect of the sequence of study on learning outcomes, by demonstrating that the sequence of examples during study changes the representation the learner creates of the study materials. We compared participants’ performance in test tasks requiring different representations and evaluated which sequence yields better learning in which type of tests. We found that interleaved study, in which examples from different concepts are mixed, leads to the creation of relatively interrelated concepts that are represented by contrast to each other and based on discriminating properties. Conversely, blocked study, in which several examples of the same concept are presented together, leads to the creation of relatively isolated concepts that are represented in terms of their central and characteristic properties. These results argue for the integrated investigation of the benefits of different sequences of study as depending on the characteristics of the study and testing situation.
Social network structure is one of the key determinants of human language evolution. Previous work has shown that the network of social interactions shapes decentralized learning in human groups, leading to the emergence of different kinds of communicative conventions. We examined the effects of social network organization on the properties of communication systems emerging in decentralized, multi-agent reinforcement learning communities. We found that the global connectivity of a social network drives the convergence of populations on shared and symmetric communication systems, preventing the agents from forming many local “dialects”. Moreover, the agent’s degree is inversely related to the consistency of its use of communicative conventions. These results show the importance of the basic properties of social network structure on reinforcement communication learning and suggest a new interpretation of findings on human convergence on word conventions.
The generalizability of empirical research depends on the reproduction of findings across settings and populations. Consequently, generalizations demand resources beyond that which is typically available to any one laboratory. With collective interest in the joint Simon effect (JSE) – a phenomenon that suggests people work more effectively with humanlike (as opposed to mechanomorphic) robots – we pursued a multi-institutional research cooperation between robotics researchers, social scientists, and software engineers. To evaluate the robustness of the JSE in dyadic human-robot interactions, we constructed an experimental infrastructure for exact, lab-independent reproduction of robot behavior. Deployment of our infrastructure across three institutions with distinct research orientations (well-resourced versus resource-constrained) provides initial demonstration of the success of our approach and the degree to which it can alleviate technical barriers to HRI reproducibility. Moreover, with the three deployments situated in culturally distinct contexts (Germany, the U.S. Midwest, and the Mexico-U.S. Border), observation of a JSE at each site provides evidence of its generalizability across settings and populations.
Joint action (JA) is ubiquitous in our cognitive lives. From basketball teams to teams of surgeons, humans often coordinate with one another to achieve some common goal. Idealized laboratory studies of group behavior have begun to elucidate basic JA mechanisms, but little is understood about how these mechanisms scale up in more sophisticated and open-ended JA that occurs in the wild. We address this gap by examining coordination in a paragon domain for creative joint expression: improvising jazz musicians. Coordination in jazz music subserves an aesthetic goal: the generation of a collective musical expression comprising coherent, highly nuanced musical structure (e.g. rhythm, harmony). In our study, dyads of professional jazz pianists improvised in a “coupled”, mutually adaptive condition, and an “overdubbed” condition which precluded mutual adaptation, as occurs in common studio recording practices. Using a model of musical tonality, we quantify the flow of rhythmic and harmonic information between musicians as a function of interaction condition. Our analyses show that mutually adapting dyads achieve greater temporal alignment and produce more consonant harmonies. These musical signatures of coordination were preferred by independent improvisers and naive listeners, who gave higher quality ratings to coupled interactions despite being blind to condition. We present these results and discuss their implications for music technology and JA research more generally.
Groups of interacting individuals often coordinate in service of abstract goals, such as the alignment of mental representations in conversation, or the generation of new ideas in group brainstorming sessions. What are the mechanisms and dynamics of abstract coordination? This study examines coordination in a sophisticated paragon domain: collaboratively improvising jazz musicians. Remarkably, freely improvising jazz ensembles collectively produce coherent tonal structure (i.e. melody and harmony) in real time performance without previously established harmonic forms. We investigate how tonal structure emerges out of interacting musicians, and how this structure is constrained by underlying patterns of coordination. Dyads of professional jazz pianists were recorded improvising in two conditions of interaction: a ‘coupled’ condition in which they could mutually adapt to one another, and an ‘overdubbed’ condition which precluded mutual adaptation. Using a computational model of musical tonality, we show that this manipulation affected the directed flow of tonal information amongst pianists, who could mutually adapt to one another’s notes in coupled trials, but not in overdubbed trials. Consequently, musicians were better able to harmonize with one another in coupled trials, and this ability increased throughout the course of improvised performance. We present these results and discuss their implications for music technology and joint action research more generally.
One of the major ways that people engage in adaptive problem solving is by copying the solutions of others. Most of the work in this field has focused on three questions: when to copy, who to copy from, and what to copy. However, how much to copy has been relatively less explored. In the current research, we are interested in the consequences for a group when its members engage in social learning strategies with different tendencies to copy entire or partial solutions and different complexities of search problems. We also consider different network topologies that affect the solutions visible to each member. Using a computational model of collective problem solving, we demonstrate that strategies where social learning involves partial copying outperform strategies where individuals copy entire solutions. We analyze the exploration/exploitation dynamics of these social learning strategies under the different conditions.
Previous research has demonstrated that Distributional Semantic Models (DSMs) are capable of reconstructing maps from news corpora (Louwerse & Zwaan, 2009) and novels (Louwerse & Benesh, 2012). The capacity for reproducing maps is surprising since DSMs notoriously lack perceptual grounding (De Vega et al., 2012). In this paper we investigate the statistical sources required in language to infer maps, and resulting constraints placed on mechanisms of semantic representation. Study 1 brings word co-occurrence under experimental control to demonstrate that direct co-occurrence in language is necessary for traditional DSMs to successfully reproduce maps. Study 2 presents an instance-based DSM that is capable of reconstructing maps independent of the frequency of co-occurrence of city names.
A large literature suggests that the way we process information is influenced by the categories that we have learned. We examined whether, when we try to uniquely encode items in working memory, the information encoded depends on the other stimuli being simultaneously learned. Participants were required to memorize unknown aliens, presented one at a time, for immediate recognition of their features. Some aliens, called twins, were organized into pairs that shared every feature (nondiscriminative features) except one (discriminative feature), while some other aliens, called hermits, did not share any features. We reasoned that if people develop unsupervised categories by creating a category for a pair of aliens, we should observe better feature identification performance for nondiscriminative features compared to hermit features, but not compared to discriminative features. On the contrary, if distinguishing features draw attention, we should observe better performance when a discriminative rather than nondiscriminative feature was probed. Overall, our results suggest that when items share features, people code items in working memory by focusing on similarities between items, establishing clusters of items in an unsupervised fashion not requiring feedback on cluster membership.
In peer instruction, instructors pose a challenging question to students, students answer the question individually, students work with a partner in the class to discuss their answers, and finally students answer the question again. A large body of evidence shows that peer instruction benefits student learning. To determine the mechanism for these benefits, we collected semester-long data from six classes, involving a total of 208 undergraduate students being asked a total of 86 different questions related to their course content. For each question, students chose their answer individually, reported their confidence, discussed their answers with their partner, and then indicated their possibly revised answer and confidence again. Overall, students were more accurate and confident after discussion than before. Initially correct students were more likely to keep their answers than initially incorrect students, and this tendency was partially but not completely attributable to differences in confidence. We discuss the benefits of peer instruction in terms of differences in the coherence of explanations, social learning, and the contextual factors that influence confidence and accuracy.
Professor Emeritus of Psychology and Psychiatry at Boston University, and Founder of the Center for Anxiety and Related Disorders
Interviewed on February 10, 2022 by Dr. Teresa Treat, Professor of Psychology at University of Iowa
Head of the Research Domain Criteria (RDoC) Unit at the United States National Institute of Mental Health
Interviewed on February 8, 2022 by Dr. Teresa Treat, Professor of Psychology at University of Iowa
Professor of Psychology and Pediatrics at Penn State University
Interviewed on August 6, 2021
Professor of Psychology and Head of School at the University of Waikato in New Zealand
Interviewed on August 2, 2021
Junior Research Fellow at the University of Cambridge in England
Interviewed on August 2, 2021
Professor of Evolutionary and Developmental Psychology at the University of St. Andrews in Scotland
Interviewed on August 2, 2021
Professor of Physiology and Neuroscience at University of Turin Medical School in Turin, Italy, and Director of Medicine and Physiology of Hypoxia at the Plateau Rosà Laboratories in Plateau Rosà, Switzerland
Interviewed on August 2, 2021
Associate Professor and Canada Research Chair at the University of British Columbia
Interviewed on July 26, 2021
Postdoctoral Research Scientist at University of Victoria
Interviewed on February 17, 2021
Professor of Management in the School of Business at the University of Queensland
Interviewed on February 3, 2021
Professor at the Evans School of Public Policy & Governance at University of Washington
Interviewed on February 3, 2021
Professor of Psychology at Bowling Green State University
Interviewed on February 1, 2021
Professor and Chair of Psychology & Human Development at Vanderbilt University
Interviewed on January 30, 2021
Distinguished University Professor, and Brumbaugh Chair in Brain Research and Teaching, and Director of the Institute for Behavioral Medicine Research at the Ohio State University
Interviewed on December 4, 2020
Distinguished University Professor at the University of Utrecht in the Netherlands
Interviewed on November 25, 2020
Professor of Psychological & Brain Sciences, and Burke & Elizabeth High Baker Professor of Child Development in Arts & Sciences at Washington University
Interviewed on November 23, 2020
Professor in the School of Psychology at University of Melbourne in Australia
Interviewed on November 23, 2020
Professor of Behavioral Science at the University of Warwick Business School, and cofounder of Decision Technology Ltd
Interviewed on November 16, 2020
Department of Experimental Psychology at University of Oxford, and Senior Research Fellow in Theoretical Life Sciences at All Souls College
Interviewed on February 26, 2020
David K. Wilson Professor of Psychology at Vanderbilt University
Interviewed on March 9, 2020
Associate Professor of Psychology at Harvard University
Interviewed on April 7, 2020
Senior Lecturer in the School of Computing and Information Systems at University of Melbourne
Interviewed on April 8, 2020
Professor of Interdisciplinary Social Science at the University of Utrecht in the Netherlands
Interviewed on April 10, 2020
Intro and Outro music credit: Peter Kienle, “Cycles We Love”
How, and how well, do people switch between exploration and exploitation to search for and accumulate resources? We study the decision processes underlying such exploration/exploitation trade-offs using a novel card selection task that captures the common situation of searching among multiple resources (e.g., jobs) that can be exploited without depleting. With experience, participants learn to switch appropriately between exploration and exploitation and approach optimal performance. We model participants’ behavior on this task with random, threshold, and sampling strategies, and find that a linearly decreasing threshold rule best fits participants’ results. Further evidence that participants use decreasing threshold-based strategies comes from reaction time differences between exploration and exploitation; however, participants themselves report nondecreasing thresholds. Decreasing threshold strategies that “front-load” exploration and switch quickly to exploitation are particularly effective in resource accumulation tasks, in contrast to optimal stopping problems like the Secretary Problem, which require longer exploration.
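A decreasing threshold rule of the kind described can be sketched in a few lines. The threshold parameters, payoff distribution, and task structure below are illustrative assumptions, not the fitted model or the actual card task:

```python
import random

def linear_threshold(t, horizon, start=0.9, end=0.3):
    """Acceptance threshold that decays linearly over the task
    (illustrative parameter values, not fitted estimates)."""
    return start + (end - start) * t / horizon

def run_task(horizon=100, seed=0):
    """Non-depleting resources: exploring reveals a new option's payoff;
    exploiting re-earns the best payoff found so far. The agent explores
    while its best option is below the (decreasing) threshold, which
    front-loads exploration and then locks into exploitation."""
    rng = random.Random(seed)
    best = 0.0    # best payoff discovered so far
    total = 0.0   # accumulated reward
    for t in range(horizon):
        if best < linear_threshold(t, horizon):
            best = max(best, rng.random())  # explore: sample a new option
        else:
            total += best                   # exploit: harvest best option
    return total, best
```

Because the threshold only falls and `best` never decreases, the switch from exploration to exploitation happens once and is permanent, matching the "front-load exploration" behavior the abstract describes for non-depleting resources.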
Categorical Perception (CP) effects manifest as faster or more accurate discrimination between objects that come from different categories compared to objects that come from the same category, controlling for the physical differences between the objects. The most popular explanations of CP effects have relied on perceptual warping causing stimuli near a category boundary to appear more similar to stimuli within their own category and/or less similar to stimuli from other categories. Hanley and Roberson (2011), on the basis of a pattern not previously noticed in CP experiments, proposed an explanation of CP effects that relies not on perceptual warping, but instead on inconsistent usage of category labels. Experiments 1 and 2 in this paper show a pattern opposite the one Hanley and Roberson pointed out. Experiment 3, using the same stimuli but with different choice statistics (i.e., different probabilities of each face being the target), obtains the same pattern as the one Hanley and Roberson showed. Simulations show that both category label and perceptual models are able to reproduce the patterns of results from both experiments, provided they include information about the choice statistics. This suggests two conclusions. First, the results described by Hanley and Roberson should not be taken as evidence in favor of a category label model. Second, given that participants did not receive feedback on their choices, there must be some mechanism by which participants monitor their own choices and adapt to the choice statistics present in the experiment.
Cognitive science continues to make a compelling case for having a coherent, unique, and fundamental subject of inquiry: What is the nature of minds, where do they come from, and how do they work? Central to this inquiry is the notion of agents that have goals, one of which is their own persistence, who use dynamically constructed knowledge to act in the world to achieve those goals. An agentive perspective explains why a special class of systems have a cluster of co-occurring capacities that enable them to exhibit adaptive behavior in a complex environment: perception, attention, memory, representation, planning, and communication. As an intellectual endeavor, cognitive science may not have achieved a hard core of uncontested assumptions that Lakatos (1978) identifies as emblematic of a successful research program, but there are alternative conceptions according to which cognitive science has been successful. First, challenges of the early, core tenet of “Mind as Computation” have helped put cognitive science on a stronger foundation—one that incorporates relations between minds and their environments. Second, even if a full cross-disciplinary theoretic consensus is elusive, cognitive science can inspire distant, deep, and transformative connections between pairs of fields. To be intellectually vital, cognitive science need not resemble a traditional discipline with its associated insularity and unchallenged assumptions. Instead, there is strength and resilience in the diverse perspectives and methods that cognitive science assembles together. This interdisciplinary enterprise is fragile and perhaps inherently unstable, as the looming absorption of cognitive science into psychology shows. Still, for many researchers, the excitement and benefits of triangulating on the nature of minds by integrating diverse cases cannot be secured by a stable discipline with an uncontested core of assumptions.
Humans show a striking penchant for creating tools to benefit our own thought processes. Andy Clark (2003, 2008) has convincingly argued that the tools that we as humans recruit become integrated parts of an extended cognitive system that includes us as just one component. By extending cognition beyond our brains, Clark presents an “embiggened” perspective on what it means to be a cognizer and a person more generally. This perspectival shift runs counter to some recent forms of argumentation that in effect work to minimize personhood. For example, arguments for lack of personal culpability can take the form of “It wasn’t my fault. It was the fault of my ___ ” to be filled in, perhaps, by “upbringing,” “genes,” “neurochemistry,” “diet,” or “improperly functioning amygdala.” Instead, Clark (see also Dennett 1989) offers the opposite line of argumentation, according to which we consist not only of our amygdalae and hippocampi but also potentially our glasses, notebooks, friends, supporting technologies, and culture.
Like many other scientific disciplines, psychological science has felt the impact of the big-data revolution. This impact arises from the meeting of three forces: data availability, data heterogeneity, and data analyzability. In terms of data availability, consider that for decades, researchers relied on the Brown Corpus of about one million words (Kučera & Francis, 1969). Modern resources, in contrast, are larger by six orders of magnitude (e.g., Google’s 1T corpus) and are available in a growing number of languages. About 240 billion photos have been uploaded to Facebook, and Instagram receives over 100 million new photos each day. The large-scale digitization of these data has made it possible in principle to analyze and aggregate these resources on a previously unimagined scale. Heterogeneity refers to the availability of different types of data. For example, recent progress in automatic image recognition is owed not just to improvements in algorithms and hardware, but arguably more to the ability to merge large collections of images with linguistic labels (produced by crowdsourced human taggers) that serve as training data to the algorithms. Making use of heterogeneous data sources often depends on their standardization. For example, the ability to combine demographic and grammatical data about thousands of languages led to the finding that languages spoken by more people have simpler morphologies (Lupyan & Dale, 2010). The ability to combine these data types would have been substantially more difficult without the existence of standardized language and country codes that could be used to merge the different data sources. Finally, analyzability must be ensured, for without appropriate tools to process and analyze different types of data, the “data” are merely bytes.
How does cooperation arise in an evolutionary context? We approach this problem using a collective search paradigm in which interactions are dynamic and there is competition for rewards. Using evolutionary simulations, we find that the unconditional sharing of information can be an evolutionarily advantageous strategy without the need for conditional strategies or explicit reciprocation. Shared information acts as a recruitment signal and facilitates the formation of a self-organized group. Thus, the improved search efficiency of the collective bestows by-product benefits onto the original sharer. A key mechanism is a visibility radius, whereby individuals have unconditional access to information about neighbors within a limited distance. For a variety of initial conditions—including populations initially devoid of prosocial individuals—and across both static and dynamic fitness landscapes, we find strong selection pressure to evolve unconditional sharing.
In Joint Action (JA) tasks, individuals must coordinate their actions so as to achieve some desirable outcome at the group level. Group function is an emergent outcome of ongoing, mutually constraining interactions between agents. Here we investigate JA in dyads of improvising jazz pianists. Participants’ musical output is recorded in one of two conditions: a real condition, in which two pianists improvise together as they typically would, and a virtual condition, in which a single pianist improvises along with a “ghost partner” – a recording of another pianist taken from a previous real trial. The conditions are identical except that in real trials subjects are mutually coupled to one another, whereas there is only unidirectional influence in virtual trials (i.e., recording to musician). We quantify ways in which the rhythmic structures spontaneously produced in these improvisations are shaped by the mutual coupling of co-performers. Musical signatures of underlying coordination patterns are also shown to parallel the subjective experience of improvisers, who preferred playing in trials with bidirectional influence despite not explicitly knowing which condition they had played in. These results illuminate how mutual coupling shapes emergent, group-level structure in the creative, open-ended, and fundamentally collaborative domain of expert musical improvisation.
Effective problem solving requires both exploration and exploitation. We analyze data from a group problem-solving task to gain insight into how people use information from past experiences and from others to achieve explore-exploit trade-offs in complex environments. The behavior we observe is consistent with the use of simple, reinforcement-based heuristics. Participants increase exploration immediately after experiencing a low payoff, and decrease exploration immediately after experiencing a high or improved payoff. We suggest that whether an outcome is perceived as “high” or “low” is a dynamic function of the outcome information available to participants. The degree to which the distribution of observed information reflects the true range of possible outcomes plays an important role in determining whether or not this heuristic is adaptive in a given environment.
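The reinforcement-based heuristic described above can be sketched as a simple update rule. This is an illustrative parameterization of our own (the step size and the min/max comparisons are assumptions), not the fitted model from the study:

```python
def update_exploration(p_explore, payoff, history, step=0.1):
    """Adjust the probability of exploring based on how the latest payoff
    compares with previously observed payoffs.

    A "low" or "high" payoff is defined relative to the outcome information
    seen so far, crudely captured here by min/max over the history.
    """
    if history and payoff < min(history):
        # low relative payoff: increase exploration
        p_explore = min(1.0, p_explore + step)
    elif history and payoff >= max(history):
        # high or improved payoff: decrease exploration (exploit)
        p_explore = max(0.0, p_explore - step)
    history.append(payoff)
    return p_explore
```

Note that whether this rule is adaptive depends, as the abstract argues, on how well the observed history reflects the true range of possible outcomes.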
The division of labor phenomenon has been observed with respect to both manual and cognitive labor, but there is no clear understanding of the intra- and inter-individual mechanisms that allow for its emergence, especially when there are multiple divisions possible and communication is limited. Situations fitting this description include individuals in a group splitting a geographical region for resource harvesting without explicit negotiation, or a couple tacitly negotiating the hour of the day for each to shower so that there is sufficient hot water. We studied this phenomenon by means of an iterative two-person game where multiple divisions are possible, but no explicit communication is allowed. Our results suggest that there are a limited number of biases toward divisions of labor, which serve as attractors in the dynamics of dyadic coordination. However, unlike Schelling’s focal points, these biases do not attract players’ attention at the onset of the interaction, but are only revealed and consolidated by the in-game dynamics of dyadic interaction.
We propose a computational model of human scientific discovery and perception of the world. As a prerequisite for such a model, we simulate dynamic microworlds in which physical events take place, as well as an observer that visually perceives and makes interpretations of events in the microworld. Moreover, we give the observer the ability to actively conduct experiments in order to gain evidence about natural regularities in the world. We have broken the description of our project into two parts. The first part deals with the interpreter constructing relatively simple visual descriptions of objects and collisions within a context. The second part deals with the interpreter positing relationships among the entities, winding up with elaborated construals and conjectures of mathematical laws governing the world. This paper focuses only on the second part. As is the case with most human scientific observation, observations are subject to interpretation, and the discoveries are influenced by these interpretations.
As you flip through the pages of this handbook you will notice that the content does not seem to be randomly organized. The content of the handbook is sequenced in a particular way: foundations before general strategies, background before applications, etc. The editors envisaged a sequence of topics, the authors of each topic envisaged a sequence of information in each chapter, and so on. We selected a particular sequence because we considered it to be effective. Deciding how to sequence information takes place all the time in educational contexts, from educators deciding how to organize their syllabus to educational technology designers deciding how to organize a piece of educational software, from handbook editors and writers deciding how to organize their materials, to students making decisions as to how to organize their study. One might imagine that as long as all students study the same materials, regardless of the sequence in which they study them, they will all learn the same information. This could not be further from the truth. In this chapter, we will review evidence of how and why the sequence of study changes what is learned. In doing so, we will try to uncover the powerful ways in which sequence can improve or hinder learning.
The utility of our actions frequently depends upon the beliefs and behavior of other agents. Thankfully, through experience, we learn norms and conventions that provide stable expectations for navigating our social world. Here, we review several distinct influences on their content and distribution. At the level of individuals locally interacting in dyads, success depends on rapidly adapting pre-existing norms to the local context. Hence, norms are shaped by complex cognitive processes involved in learning and social reasoning. At the population level, norms are influenced by intergenerational transmission and the structure of the social network. As human social connectivity continues to increase, understanding and predicting how these levels and time scales interact to produce new norms will be crucial for improving communities.
Low-level “adaptive” and higher-level “sophisticated” human reasoning processes have been proposed to play opposing roles in the emergence of unpredictable collective behaviors such as crowd panics, traffic jams, and market bubbles. While adaptive processes are widely recognized drivers of emergent social complexity, complementary theories of sophistication predict that incentives, education, and other inducements to rationality will suppress it. We show in a series of multiplayer laboratory experiments that, rather than suppressing complex social dynamics, sophisticated reasoning processes can drive them. Our experiments elicit an endogenous collective behavior and show that it is driven by the human ability to recursively anticipate the reasoning of others. We identify this behavior, “sophisticated flocking,” across three games: the Beauty Contest, the “Mod Game,” and the “Runway Game.” In supporting our argument, we also present evidence for mental models and social norms constraining how players express their higher-level reasoning abilities. By implicating sophisticated recursive reasoning in the kind of complex dynamic that it has been predicted to suppress, we support interdisciplinary perspectives that emergent complexity is typical of even the most intelligent populations and carefully designed social systems.
To identify the ways teachers and educational systems can improve learning, researchers need to make causal inferences. Analyses of existing datasets play an important role in detecting causal patterns, but conducting experiments also plays an indispensable role in this research. In this article, we advocate for experiments to be embedded in real educational contexts, allowing researchers to test whether interventions such as a learning activity, new technology, or advising strategy elicit reliable improvements in authentic student behaviors and educational outcomes. Embedded experiments, wherein theoretically relevant variables are systematically manipulated in real learning contexts, carry strong benefits for making causal inferences, particularly when allied with the data-rich resources of contemporary e-learning environments. Toward this goal, we offer a field guide to embedded experimentation, reviewing experimental design choices, addressing ethical concerns, discussing the importance of involving teachers, and reviewing how interventions can be deployed in a variety of contexts, at a range of scales. Causal inference is a critical component of a field that aims to improve student learning; including experimentation alongside analyses of existing data in learning analytics is the most compelling way to test causal claims.
The scientific understanding of scientific understanding has been a long-standing goal of cognitive science. A satisfying formal model of human scientific discovery would be a major intellectual achievement, requiring solutions to core problems in cognitive science: the creation and use of apt mental models, the prediction of the behavior of complex systems involving interactions between multiple classes of elements, high-level perception of noisy and multiply interpretable environments, and the active interrogation of a system through strategic interventions on it – namely, via experiments. Over the past decades there have been numerous attempts to build formal models that capture what Perkins (1981) calls some of the “mind’s best work” – scientific explanations for how the natural world works by systematic observation, prediction, and testing. Early work by Herbert Simon and his colleagues (Langley, Simon, Bradshaw, & Zytkow, 1987) developed production rule systems employing heuristics to tame extremely large conjoint search spaces of experiments to run and hypotheses to test. Qualitative physics approaches seek to understand physical phenomena by building non-numeric, relational models of the phenomena (Forbus, 1984). Some early connectionist models interpreted scientific explanation in terms of emerging patterns of strongly activated hypotheses that mutually support one another (Thagard, 1992).
How people are able to turn information in the environment into meaning is a critical question for cognitive science. That environment is increasingly data-driven. Using data to inform decisions and improve understanding of the world is a valuable component of critical thinking, and serves as the foundation of evidence-based decision making. Designing graphical representations can make those data more accessible, such that users may engage the visual system and capacity for visual pattern recognition to discern regularities and properties of data. We ultimately want to understand the connection between the initial perception of data visualizations and conceptual understanding of information. Data visualizations, broadly, are the representation of recorded values in visual form, including scientific visualizations such as brain scans, or live visualizations such as stock market monitoring; the work discussed in this symposium is of the type used in science, business, and medical settings to display data abstractly.
Information sharing in competitive environments may seem counterintuitive, yet it is widely observed in humans and other animals. For instance, the open-source software movement has led to new and valuable technologies being released publicly to facilitate broader collaboration and further innovation. What drives this behavior and under which conditions can it be beneficial for an individual? Using simulations in both static and dynamic environments, we show that sharing information can lead to individual benefits through the mechanisms of pseudoreciprocity, whereby shared information leads to by-product benefits for an individual without the need for explicit reciprocation. Crucially, imitation with a certain level of innovation is required to avoid a tragedy of the commons, while the mechanism of a local visibility radius allows for the coordination of self-organizing collectives of agents. When these two mechanisms are present, we find robust evidence for the benefits of sharing—even when others do not reciprocate.
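The visibility-radius mechanism described above can be sketched in a few lines. The function names and the fixed-speed movement rule are our own illustrative assumptions, not the simulation's exact implementation:

```python
import math

def visible_neighbors(positions, focal, radius):
    """Indices of agents within the visibility radius of the focal agent.

    Only these neighbors receive the focal agent's shared information,
    which is what allows local, self-organizing collectives to form.
    """
    fx, fy = positions[focal]
    return [i for i, (x, y) in enumerate(positions)
            if i != focal and math.hypot(x - fx, y - fy) <= radius]

def step_toward(pos, target, speed=1.0):
    """Move one fixed-length step toward a shared reward location,
    i.e., shared information acting as a recruitment signal."""
    x, y = pos
    tx, ty = target
    d = math.hypot(tx - x, ty - y)
    if d <= speed:
        return (tx, ty)
    return (x + speed * (tx - x) / d, y + speed * (ty - y) / d)
```

In a full simulation of this kind, an agent that finds a reward broadcasts its location; only the agents returned by `visible_neighbors` respond by moving via `step_toward`, and the resulting local aggregation is what produces by-product benefits for the sharer.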
We investigated whether, and in what ways, people use visual structures to evaluate mathematical expressions. We also explored the relationship between strategy use and other common measures in mathematics education. Participants organized long sums/products when visual structure was available in algebraic expressions. Two experiments showed a similar pattern: one group of participants primarily calculated from left to right or combined identical numbers together; a second group calculated adjacent pairs; a third group tended to group terms that either produced easy sums (e.g., 6+4) or participated in a global structure. These different strategies were associated with different levels of success on the task and, in Experiment 2, with differential math anxiety and mathematical skill. Specifically, problem solvers with lower math anxiety and higher math ability tended to group by chunks and easy calculations. These results identify an important role for the perception of coherent structure and pattern identification in mathematical reasoning.
Despite its omnipresence in this information-laden society, statistics is hard. The present study explored the applicability of a grounded cognition approach to learning basic statistical concepts. Participants in 2 experiments interacted with perceptually rich computer simulations designed to foster understanding of the relations between fundamental statistical concepts and to promote the ability to reason with statistics. During training, participants were asked to estimate the probability of two samples coming from the same population, with sample size, variability, and difference between means independently manipulated. The amount of learning during training was measured by the difference between participants’ confidence judgments and those of an Ideal Observer. The amount of transfer was assessed by the increase in accuracy from a pretest to a posttest. Learning and transfer were observed when tailored guidance was given along with the perceptually salient properties. Implications of our quantitative measures of human sensitivity to statistical concepts were discussed.
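As a rough stand-in for such an Ideal Observer, one can score how plausible it is that two samples share a population mean with a normal-approximation z-test. This is a simplification we introduce for illustration; the study's actual Ideal Observer model may differ:

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def sample_var(xs):
    """Unbiased sample variance."""
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def same_population_score(a, b):
    """Two-sided p-value of a z-test on the difference in sample means.

    Close to 1 when the samples look interchangeable; near 0 when the
    difference in means is large relative to variability and sample size.
    """
    se = math.sqrt(sample_var(a) / len(a) + sample_var(b) / len(b))
    z = abs(mean(a) - mean(b)) / se
    return math.erfc(z / math.sqrt(2.0))
```

Note that the score depends on exactly the three factors manipulated during training: sample size, variability, and the difference between means.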
One of the major ways that people engage in adaptive problem solving is by copying or imitating the solutions of others. Imitation saves an individual time and mitigates potential risks from individual trial-and-error learning. When an individual finds a neighbor with a better solution than their own, copying that entire solution guarantees an improvement over the individual’s current condition. However, this reduces the diversity of solutions in the group and can lead the group to get stuck in a local optimum. One alternative is to copy the neighbor’s solution only partially, although this comes at a risk for the individual. Mixing two solutions may or may not lead to an improvement over the previous solution, but mixing has the potential to allow the group to explore entirely new areas of the solution space. So, although partial copying comes at a cost to the individual, under what conditions does it benefit the group? In the current research, we are interested in the consequences for the group when its members engage in social learning strategies with different tendencies to copy entire or partial solutions, with different network topologies that affect which neighbors’ solutions are visible to each member, and with different complexities of search tasks.
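The distinction between entire and partial copying can be captured by a single elementwise imitation rule. This is a sketch; the solution representation and the single probability parameter are our assumptions, not the study's exact design:

```python
import random

def imitate(own, neighbor, p_copy, rng):
    """Copy each component of a better-performing neighbor's solution
    with probability p_copy.

    p_copy = 1.0 reproduces entire copying (guaranteed local improvement,
    reduced group diversity); p_copy < 1.0 mixes the two solutions and can
    move the group into unexplored regions of the solution space.
    """
    return [n if rng.random() < p_copy else o
            for o, n in zip(own, neighbor)]
```

Varying `p_copy` across agents, alongside the network topology and task complexity, is one way to operationalize the strategy space the abstract describes.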
Most maps of science use a network layout; few use a landscape metaphor. Human users are trained in reading geospatial maps, yet most have a hard time reading even simple networks. Prior work using general networks has shown that map-based visualizations increase recall accuracy of data. This paper reports the results of a comparison of two comparable renderings of the UCSD map of science: the original network layout and a novel hexmap that uses a landscape metaphor to lay out the 554 subdisciplines grouped into 13 color-coded disciplines of science. Overlaid are HITS metrics that show the impact and transformativeness of different scientific subdisciplines. Both maps support the same interactivity, including search, filter, zoom, panning, and details on demand. Users performed memorization, search, and retrieval tasks using both maps. Results did not show any significant differences in how the two maps were remembered or used by participants. We conclude with a discussion of the results and planned future work.
Category learning not only depends upon perceptual and semantic representations; it also leads to the generation of these representations. We describe two series of experiments that demonstrate how categorization experience alters, rather than simply uses, descriptions of objects. In the first series, participants first learned to categorize objects on the basis of particular sets of line segments. Subsequently, participants were given a perceptual part/whole judgment task. Categorization training influenced participants’ part/whole judgments, indicating that whole objects were more likely to be broken down into parts that were relevant during categorization. In the second series, correlations were created or broken between semantic features of word concepts (e.g., ferocious vs. timid and group-oriented vs. solitary animals). The best transfer was found between category learning tasks that shared the same semantic organization of concepts. Together, the experiments support models of category learning that simultaneously create the elements of categorized objects’ descriptions and associate those elements with categories.
Despite widespread assertions that enthusiasm is an important quality of effective teaching, empirical research on the effect of enthusiasm on learning and memory is mixed and largely inconclusive. To help resolve these inconsistencies, we conducted a carefully controlled laboratory experiment, investigating whether enthusiastic instructions for a memory task would improve recall accuracy. Scripted videos, either enthusiastic or neutral, were used to manipulate the delivery of task instructions. We also manipulated the sequence of learning items, replicating the spacing effect, a known cognitive technique for memory improvement. Although spaced study reliably improved test performance, we found no reliable effect of enthusiasm on memory performance across two experiments. We did, however, find that enthusiastic instructions caused participants to respond to more item prompts, leaving fewer test questions blank, an outcome typically associated with increased task motivation. We find no support for the popular claim that enthusiastic instruction will improve learning, although it may still improve engagement. This dissociation between motivation and learning is discussed, as well as its implications for education and future research on student learning.
Concepts are the building blocks of thought. They are critically involved when we reason, make inferences, and try to generalize our previous experiences to new situations. Behind every word in every language lies a concept, although there are concepts, like the small plastic tubes attached to the ends of shoelaces, that we are familiar with and can think about even if we do not know that they are called aglets. Concepts are indispensable to human cognition because they take the “blooming, buzzing confusion” (James, 1890, p. 488) of disorganized sensory experiences and establish order through mental categories. These mental categories allow us to make sense of the world and predict how worldly entities will behave. We see, hear, interpret, remember, understand, and talk about our world through our concepts, and so it is worth taking time to reflect on where concepts come from, how they work, and how they can best be learned and deployed to suit our cognitive needs.
The insufficient level of reproducibility of published experimental results has been identified as a core issue in the field of robotics in recent years. Why is that? First of all, robotics focuses on the abstract concept of computation and the creation of technological artifacts, i.e., software that implements these concepts. Hence, before actually reproducing an experiment, the subject of investigation must be artificially created, which is non-trivial given the inherent complexity. Second, robotics experiments usually include expensive and often customized hardware setups (robots) that are difficult for non-experts to operate. Finally, there is no agreed-upon set of methods to set up, execute, or (re-)conduct an experiment.
To this end, we introduce an interdisciplinary and geographically distributed collaboration project that aims at implementing good experimental methodology in interdisciplinary robotics research with respect to: a) reproducibility of required technical artifacts, b) explicit and comprehensible experiment design, c) repeatable/reproducible experiment execution, and d) reproducible evaluation of obtained experiment data. The ultimate goal of this collaboration is to reproduce the same experiment in two different laboratories using the same systematic approach which is presented in this work.
Recent results demonstrate that inducing an abstract representation of target analogs at retrieval time aids access to analogous situations with mismatching surface features (i.e., the late abstraction principle). A limitation of current implementations of this principle is that they either require the external provision of target-specific information or demand very high intellectual engagement. Experiment 1 demonstrated that constructing an idealized situation model of a target problem increases the rate of correct solutions compared to constructing either concrete simulations or no simulations. Experiment 2 confirmed that these results were based on an advantage for accessing the base analog, and not merely on an advantage of idealized simulations for understanding the target problem in its own terms. This target idealization strategy has broader applicability than prior interventions based on the late abstraction principle, because it can be achieved by a greater proportion of participants and without the need to receive target-specific information.
The development of symbolic algebra transformed civilization. Since algebra is a recent cultural invention, however, algebraic reasoning must build on a foundation of more basic capacities. Past work suggests that spatial representations of number may be part of that foundation, but recent studies have failed to find relations between spatial-numerical associations and higher mathematical skills. One possible explanation of this failure is that spatial representations of number are not activated during complex mathematics. We tested this possibility by collecting dense behavioral recordings while participants manipulated equations. When interacting with an equation’s greatest [/least] number, participants’ movements were deflected upward [/downward] and rightward [/leftward]. This occurred even when the task was purely algebraic and could thus be solved without attending to magnitude (although the deflection was reduced). This is the first evidence that spatial representations of number are activated during algebra. Algebraic reasoning may require coordinating a variety of spatial processes.
Previous research has shown that the sequence in which concepts are studied changes how well they are learned. In a series of experiments featuring naturalistic concepts (psychology concepts) and naïve learners, we extend previous research by showing that the sequence of study changes the representation the learner creates of the study materials. Interleaved study leads to the creation of relatively interrelated concepts that are represented by contrast to each other and based on discriminating properties. Blocked study, instead, leads to the creation of relatively isolated concepts that are represented in terms of their central and characteristic properties. The relative benefits of these representations depend on whether the test of conceptual knowledge requires contrastive or characteristic information. These results argue for the integrated investigation of the benefits of different sequences of study as depending on the characteristics of the study and testing situation as a whole.
We propose the foundations of a computer model of scientific discovery that takes into account certain psychological aspects of human observation of the world. To this end, we simulate two main components of such a system. The first is a dynamic microworld in which physical events take place, and the second is an observer that visually perceives entities and events in the microworld. For reasons of space, this paper focuses only on the starting phase of discovery, which deals with the relatively simple visual inputs of objects and collisions.
Learners often struggle to grasp the important, central principles of complex systems, which describe how interactions between individual agents can produce complex, aggregate-level patterns. Learners have even more difficulty transferring their understanding of these principles across superficially dissimilar instantiations of the principles. Here, we provide evidence that teaching high school students an agent-based modeling language can enable students to apply complex system principles across superficially different domains. We measured student performance on a complex systems assessment before and after 1 week of training in how to program models using NetLogo (Wilensky, 1999a). Instruction in NetLogo helped two classes of high school students apply complex systems principles to a broad array of phenomena not previously encountered. We argue that teaching an agent-based computational modeling language effectively combines the benefits of explicitly defining the abstract principles underlying agent-level interactions with the advantages of concretely grounding knowledge through interactions with agent-based models.
We lay out a multiple, interacting levels of cognitive systems (MILCS) framework to account for the cognitive capacities of individuals and the groups to which they belong. The goal of MILCS is to explain the kinds of cognitive processes typically studied by cognitive scientists, such as perception, attention, memory, categorization, decision-making, problem solving, judgment, and flexible behavior. Two such systems are considered in some detail—lateral inhibition within a network for selecting the most attractive option from a candidate set and a diffusion process for accumulating evidence to reach a rapid and accurate decision. These system descriptions are aptly applied at multiple levels, including within and across people. These systems provide accounts that unify cognitive processes across multiple levels, can be expressed in a common vocabulary provided by network science, are inductively powerful yet appropriately constrained, and are applicable to a large number of superficially diverse cognitive systems. Given group identification processes, cognitively resourceful people will frequently form groups that effectively employ cognitive systems at higher levels than the individual. The impressive cognitive capacities of individual people do not eliminate the need to talk about group cognition. Instead, smart people can provide the interacting parts for smart groups.
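The two systems considered in detail can be given minimal sketches; all parameter values here are illustrative assumptions rather than fitted quantities:

```python
import random

def lateral_inhibition(acts, self_excite=1.1, inhibit=0.2, steps=50):
    """Each unit excites itself and inhibits all others; activations are
    clamped to [0, 1]. The initially most attractive option ends up
    suppressing its competitors (winner-take-all selection)."""
    for _ in range(steps):
        total = sum(acts)
        acts = [min(1.0, max(0.0, self_excite * a - inhibit * (total - a)))
                for a in acts]
    return acts

def diffusion_decision(drift, threshold, noise, rng):
    """Accumulate noisy evidence until it crosses +threshold (True) or
    -threshold (False); returns the decision and the number of steps."""
    x, t = 0.0, 0
    while abs(x) < threshold:
        x += drift + rng.gauss(0.0, noise)
        t += 1
    return x > 0, t
```

Because the same update rules can describe units within one brain or individuals interacting in a group, sketches like these can be applied at multiple levels, which is the unification the MILCS framework emphasizes.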
Formal mathematical reasoning provides an illuminating test case for understanding how humans can think about things that they did not evolve to comprehend. People engage in algebraic reasoning by 1) creating new assemblies of perception and action routines that evolved originally for other purposes (reuse), 2) adapting those routines to better fit the formal requirements of mathematics (adaptation), and 3) designing cultural tools that mesh well with our perception-action routines to create cognitive systems capable of mathematical reasoning (invention). We describe evidence that a major component of proficiency at algebraic reasoning is Rigged Up Perception-Action Systems (RUPAS), via which originally demanding, strategically-controlled cognitive tasks are converted into learned, automatically executed perception and action routines. Informed by RUPAS, we have designed, implemented, and partially assessed a computer-based algebra tutoring system called Graspable Math with an aim toward training learners to develop perception-action routines that are intuitive, efficient, and mathematically valid.
Subjects learned to classify images of rocks into the categories igneous, metamorphic, and sedimentary. In accord with the real-world structure of these categories, the to-be-classified rocks in the experiments had a dispersed similarity structure. Our central hypothesis was that learning of these complex categories would be improved through observational study of organized, simultaneous displays of the multiple rock tokens. In support of this hypothesis, a technique that included the presentation of the simultaneous displays during phases of the learning process yielded improved acquisition (Experiment 1) and generalization (Experiment 2) compared to methods that relied solely on sequential forms of study and testing. The technique appears to provide a good starting point for application of cognitive-psychology principles of effective category learning to the science classroom.
The sequence of study influences how we learn. Previous research has identified different sequences as potentially beneficial for learning in different contexts and with different materials. Here we investigate the mechanisms involved in inductive category learning that give rise to these sequencing effects. Across 3 experiments we show evidence that the sequence of study changes what information learners attend to during learning, what is encoded from the materials studied and, consequently, what is remembered from study. Interleaved study (alternating between presentation of 2 categories) leads to an attentional focus on properties that differ between successive items, leading to relatively better encoding and memory for item properties that discriminate between categories. Conversely, when learners study each category in a separate block (blocked study), learners encode relatively more strongly the characteristic features of the items, which may be the result of a strong attentional focus on sequential similarities. These results provide support for the sequential attention theory proposing that inductive category learning takes place through a process of sequential comparisons between the current and previous items. Different sequences of items change how attention is deployed depending on this basic process. Which sequence results in better or worse learning depends on the match between what is encoded and what is required at test.
Transfer of knowledge is the application of knowledge learned in one context to new, dissimilar problems or situations where the knowledge would be useful. Teachers, coaches, camp counselors, parents, and learners often have the experience of a learner showing apparent understanding when questioned about a topic in a way that closely matches how it was initially presented but showing almost no understanding when queried in a new context or with novel examples. This entry further explains the concept of knowledge transfer. It then discusses several different strategies used to support knowledge transfer.
An individual can interact with the same set of people over many different scales simultaneously. Four people might interact as a group of four and, at the same time, in pairs and triads. What is the relationship between different parallel interaction scales, and how might those scales themselves interact? We devised a four-player experimental game, the Modular Stag Hunt, in which participants chose not just whether to coordinate, but with whom, and at what scale. Our results reveal coordination behavior with such a strong preference for dyads that undermining pairwise coordination actually improves group-scale outcomes. We present these findings as experimental evidence for competition, as opposed to complementarity, between different possible scales of multi-player coordination. This result undermines a basic premise of approaches, like those of network science, that fail to model the interacting effects of dyadic, triadic, and group-scale structure on group outcomes.
We have observed that when people engage in algebraic reasoning, they often perceptually and spatially transform algebraic notations directly rather than first converting the notation to an internal, non-spatial representation. We describe empirical evidence for spatial transformations, such as spatially compact grouping, transposition, spatially overlaid intermediate results, cancelling out, swapping, and splitting. This research has led us to understand domain models in mathematics as the deployment of trained and strategically crafted perceptual-motor processes working on grounded and strategically crafted notations. This approach to domain modeling has also motivated us to develop and assess an algebra tutoring system focused on helping students train their perception and action systems to coordinate with each other and formal mathematics. Overall, our laboratory and classroom investigations emphasize the interplay between explicit mathematical understandings and implicit perception-action training as having a high potential payoff for making learning more efficient, robust, and broadly applicable.
Comparison and reminding have both been shown to support learning and transfer. Comparison is thought to support transfer because it allows learners to disregard non-matching features of superficially different episodes in order to abstract the essential structure of concepts. Remindings promote memory for the individual episodes and generalization because they prompt learners to retrieve earlier episodes during the encoding of later related episodes and to compare across episodes. Across three experiments, we compared the consequences of comparison and reminding on memory and transfer. Participants studied a sequence of related, but superficially different, proverb pairs. In the comparison condition, participants saw proverb pairs presented together and compared their meaning. In the reminding condition, participants viewed proverbs one at a time and retrieved any prior studied proverb that shared the same deep meaning as the current proverb. Experiment 1 revealed that participants in the reminding condition recalled more proverbs than those in the comparison condition. Experiment 2 showed that the mnemonic benefits of reminding persisted over a one-week retention interval. Finally, in Experiment 3, we examined the ability of participants to generalize their remembered information to new items in a task that required participants to identify unstudied proverbs that shared the same meaning as studied proverbs. Comparison led to worse discrimination between proverbs related to studied proverbs and proverbs unrelated to studied proverbs than reminding. Reminding supported better memory for individual instances and transfer to new situations than comparison.
Formal mathematics is a paragon of abstractness. It thus seems natural to assume that the mathematical expert should rely more on symbolic or conceptual processes, and less on perception and action. We argue instead that mathematical proficiency relies on perceptual systems that have been retrained to implement mathematical skills. Specifically, we investigated whether the visual system—in particular, object-based attention—is retrained so that parsing algebraic expressions and evaluating algebraic validity are accomplished by visual processing. Object-based attention occurs when the visual system organizes the world into discrete objects, which then guide the deployment of attention. One classic signature of object-based attention is better perceptual discrimination within, rather than between, visual objects. The current study reports that object-based attention occurs not only for simple shapes but also for symbolic mathematical elements within algebraic expressions—but only among individuals who have mastered the hierarchical syntax of algebra. Moreover, among these individuals, increased object-based attention within algebraic expressions is associated with a better ability to evaluate algebraic validity. These results suggest that, in mastering the rules of algebra, people retrain their visual system to represent and evaluate abstract mathematical structure. We thus argue that algebraic expertise involves the regimentation and reuse of evolutionarily ancient perceptual processes. Our findings implicate the visual system as central to learning and reasoning in mathematics, leading us to favor educational approaches to mathematics and related STEM fields that encourage students to adapt, not abandon, their use of perception.
The very expertise with which psychologists wield their tools for achieving laboratory control may have had the unwelcome effect of blinding psychologists to the possibilities of discovering principles of behavior without conducting experiments. When creatively interrogated, a diverse range of large, real-world data sets provides powerful diagnostic tools for revealing principles of human judgment, perception, categorization, decision-making, language use, inference, problem solving, and representation. Examples of these data sets include patterns of website links, dictionaries, logs of group interactions, collections of images and image tags, text corpora, history of financial transactions, trends in Twitter tag usage and propagation, patents, consumer product sales, performance in high-stakes sporting events, dialect maps, and scientific citations. The goal of this issue is to present some exemplary case studies of mining naturally existing data sets to reveal important principles and phenomena in cognitive science, and to discuss some of the underlying issues involved with conducting traditional experiments, analyses of naturally occurring data, computational modeling, and the synthesis of all three methods. This article serves as the introduction to a TopiCS topic with the same name.
To investigate the effect of competitive incentives under peer review, we designed a novel experimental setup called the Art Exhibition Game. We present experimental evidence of how competition introduces both positive and negative effects when creative artifacts are evaluated and selected by peer review. Competition proved to be a double-edged sword: on the one hand, it fosters innovation and product diversity, but on the other hand, it also leads to more unfair reviews and to a lower level of agreement between reviewers. Moreover, an external validation of the quality of peer reviews during the laboratory experiment, based on 23,627 online evaluations on Amazon Mechanical Turk, shows that competition does not significantly increase the level of creativity. Furthermore, the higher rejection rate under competitive conditions does not improve the average quality of published contributions, because more high-quality work is also rejected. Overall, our results could explain why many ground-breaking studies in science end up in lower-tier journals. Differences and similarities between the Art Exhibition Game and scholarly peer review are discussed and the implications for the design of new incentive systems for scientists are explained.
Congratulations to the pair o’docs, Dr. Paulo Carvalho and Dr. Joshua de Leeuw, who commenced on the 6th of May, 2016. Speaking of paradox: if two graduate students organize all of the activities in a laboratory for those people, and only those people, who do not organize activities for themselves, then how will the laboratory continue to operate after they have departed?
Dr. Joshua de Leeuw will start in the Fall of 2016 as Assistant Professor in Cognitive Science at Vassar College. Meanwhile, Dr. Paulo Carvalho will start a position as postdoctoral research scientist at Carnegie Mellon University, working with Dr. Ken Koedinger. Hearty congratulations to both of them!
Study sequence can have a profound influence on learning. In this study we investigated how students decide to sequence their study in a naturalistic context and whether their choices result in improved learning. In the study reported here, 2061 undergraduate students enrolled in an Introductory Psychology course completed an online homework tutorial on measures of central tendency, a topic relevant to an exam that counted towards their grades. One group of students was enabled to choose their own study sequence during the tutorial (Self-Regulated group), while the other group of students studied the same materials in sequences chosen by other students (Yoked group). Students who chose their sequence of study showed a clear tendency to block their study by concept, and this tendency was positively associated with subsequent exam performance. In the Yoked group, study sequence had no effect on exam performance. These results suggest that despite findings that blocked study is maladaptive when assigned by an experimenter, it may actually be adaptive when chosen by the learner in a naturalistic context.
Without ever explicitly discussing it, groups oftentimes establish norms. A family or committee might develop a norm about when it is acceptable or not for members to interrupt each other. People greeting each other in different countries have very different norms for whether to shake hands or kiss, and if they kiss, how many times and in what cheek order. In some countries, tipping is not the norm, but where it is, violating the tipping norm could make you a persona non grata at a restaurant. We (Hawkins & Goldstone, 2016) were interested in how social norms emerge in a group without its members explicitly deciding on them, and in the factors that promote effective norms.
To help explore these questions, we started by considering a simple scenario we call “Battle of the Exes.” You and your romantic partner live in a small town and both love coffee. Your shared love of coffee was not, alas, enough to keep you together, and you have now broken up. There are only two coffee shops in your town, one with much better coffee than the other. Both you and your ex want to go every day for coffee during your simultaneously occurring coffee breaks, but if you pick the same place and run into one another, neither of you will enjoy your break at all.
Neither you nor your ex wants to sit down to negotiate a schedule, but can you nonetheless develop a satisfactory routine? One of you could always go to the better coffee shop, but that would not be fair. Each of you could choose randomly, but that would end up with you and your ex often seeing each other, which would not maximize your duo’s happiness, and would not provide a stable solution in the long run.
These three features — fairness, happiness maximization, and stability — are generally useful ways to assess the quality of a group’s behavior. To study scenarios like “Battle of the Exes” in the laboratory, we developed an interactive, real-time, online game. On each of the 60 rounds of the game, two players are given the choice of moving their avatar to one of two circles — one that they can see will give them a small monetary prize and one that will give them a large payoff. The only catch is that if both players move to the same circle, then neither player gets anything for that round. For half of the groups, there was a small discrepancy between the prizes (1 cent vs. 2 cents), and for the other half, there was a large discrepancy (1 cent vs. 4 cents). Also, for half of the groups, each of the players could see the other player’s moment-to-moment position as they moved to the circles (Dynamic movement), while for the other half of the groups, the players only saw the final choice that the other player made (Ballistic movement).
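The round structure just described can be sketched in a few lines of Python (a minimal illustration, not the lab’s actual experiment code; the function and variable names are our own, and only the collision rule and the 1-cent/4-cent payoffs come from the description above):

```python
def payoff(choice_a, choice_b, low=1, high=4):
    """One round of the game: each player picks the low- or high-value
    circle; if both pick the same circle, neither earns anything."""
    if choice_a == choice_b:
        return 0, 0  # collision: no payoff for either player this round
    values = {"low": low, "high": high}
    return values[choice_a], values[choice_b]

# An alternation norm: players take turns claiming the high-payoff circle.
rounds = [("high", "low"), ("low", "high")] * 3
earnings = [payoff(a, b) for a, b in rounds]
totals = tuple(map(sum, zip(*earnings)))  # equal totals -> a fair outcome
```

Under this alternation pattern both players earn the same total over rounds, which is the fair, stable outcome that some pairs in the experiment converged on.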
568 players were matched together to create 284 two-player groups. Some groups developed behaviors that were fair and stable, and led to both players earning a lot of money. These groups tended to develop social norms even without explicit communication. For example, players A and B would alternate over rounds who got the large payoff, first A, then B, then A again, leading to a pattern like ABABABABAB.
In terms of maximizing happiness, the dynamic condition led to better earnings for the players than the ballistic condition. When the players can see each other’s moment-to-moment inclinations, that helps them coordinate. The dynamic condition also led to fairer solutions than the ballistic condition, with players earning similar amounts of money. An implication of these results is that giving the members of a group more information about what each person in the group is currently thinking about doing can help the group achieve well-coordinated, fair and happy solutions. This is something for politicians, social network providers, and amusement parks to consider when they are trying to design social spaces for their groups. Mutual visibility of group members is often an effective way to promote coordination.
In terms of developing stable strategies, there was a striking interaction between payoffs and movement type. When there was not a large difference in payoffs, choices in the ballistic condition were more stable than in the dynamic condition. When the stakes were low, players in the dynamic condition simply relied on moment-to-moment visual information to figure out who should get the larger payoff on any given round. They did not feel a strong pressure to develop a norm because they could use their continuous information as a crutch to help them coordinate. However, when the stakes were high, with one circle earning four times what the other circle earned, then the dynamic condition developed significantly more stable solutions than the ballistic condition. For these particularly contentious, high stakes situations, it is useful for the players to develop strong norms to help them coordinate, and the moment-to-moment information about player positions helps to create these norms.
One clear measure of how much contention there is in a group is how long both players move toward the same high-payoff option before one “peels off” and lets the other player have the high-payoff prize. Using this objective measure, groups have more contention at the beginning of the experimental session than at the end. The higher-stakes condition has more contention early on than the lower-stakes condition, but by the end of the experiment, that ordering is flipped. Groups that have more contention at the beginning of the experiment tend to have less contention by the end of the experiment, and are more likely to develop clever strategies like alternating who gets the high-payoff option from round to round. A take-home message from this result is that contention in groups is not something to be avoided. For the groups in our “Battle of the Exes” game, early contention gives rise to well-coordinated, fair, efficient, and happiness-maximizing solutions by the end of the experiment. It may be tempting to try to pave over contention and disagreement in a group, but letting the group work through these contentions is often key to giving them the motivation and insight that they need to develop creative, well-coordinated norms like alternating who gets the better payoff over rounds. So, although it may have been contention that broke you and your ex up in the first place, there is hope that this kind of early contention may allow you to enjoy your superior cup of coffee in peace. At least on Mondays, Wednesdays, and Fridays.
Why are some behaviors governed by strong social conventions while others are not? We experimentally investigate two factors contributing to the formation of conventions in a game of impure coordination: the continuity of interaction within each round of play (simultaneous vs. real-time) and the stakes of the interaction (high vs. low differences between payoffs). To maximize efficiency and fairness in this game, players must coordinate on one of two equally advantageous equilibria. In agreement with other studies manipulating continuity of interaction, we find that players who were allowed to interact continuously within rounds achieved outcomes with greater efficiency and fairness than players who were forced to make simultaneous decisions. However, the stability of equilibria in the real-time condition varied systematically and dramatically with stakes: players converged on more stable patterns of behavior when stakes are high. To account for this result, we present a novel analysis of the dynamics of continuous interaction and signaling within rounds. We discuss this previously unconsidered interaction between within-trial and across-trial dynamics as a form of social canalization. When stakes are low in a real-time environment, players can satisfactorily coordinate “on the fly,” but when stakes are high there is increased pressure to establish and adhere to shared expectations that persist across rounds.
The idea that cognitive development involves a shift towards abstraction has a long history in psychology. One incarnation of this idea holds that development in the domain of mathematics involves a shift from non-formal mechanisms to formal rules and axioms. Contrary to this view, the present study provides evidence that reliance on non-formal mechanisms may actually increase with age. Participants – Dutch primary school children – evaluated three-term arithmetic expressions in which violation of formally correct order of evaluation led to errors, termed foil errors. Participants solved the problems as part of their regular mathematics practice through an online study platform, and data were collected from over 50,000 children representing approximately 10% of all primary schools in the Netherlands, suggesting that the results have high external validity. Foil errors were more common for problems in which formally lower-priority sub-expressions were spaced close together, and also for problems in which such sub-expressions were relatively easy to calculate. We interpret these effects as resulting from reliance on two non-formal mechanisms, perceptual grouping and opportunistic selection, to determine order of evaluation. Critically, these effects reliably increased with participants’ grade level, suggesting that these mechanisms are not phased out but actually become more important over development, even when they cause systematic violations of formal rules. This conclusion presents a challenge for the shift towards abstraction view as a description of cognitive development in arithmetic. Implications of this result for educational practice are discussed.
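To make the notion of a foil error concrete, here is a hedged sketch (the expression 2 + 3 × 4 is our own illustration, not an item from the study): grouping and evaluating the formally lower-priority sum first produces the foil answer.

```python
def formal_answer(a, b, c):
    """Formally correct evaluation of a + b x c: multiply first."""
    return a + b * c

def foil_answer(a, b, c):
    """Foil evaluation of a + b x c: the lower-priority sum a + b is
    grouped and computed first, as close spacing of 'a+b' (perceptual
    grouping) would encourage."""
    return (a + b) * c

# For 2 + 3 x 4, the formal answer is 14 and the foil answer is 20.
```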
Recent research in relational learning has suggested that simple training instances may lead to better generalization than complex training instances. We examined the perceptual encoding mechanisms that might undergird this simple advantage by testing category and perceptual learning in adults with simplified and traditional (more complex) Chinese scripts. In Experiment 1, participants learned Chinese characters and their English translations, performed a memorization test, and generalized their learning to the corresponding characters written in the other script. In Experiment 2, we removed the training phase and modified the tests to examine transfer based purely on the perceptual similarities between simplified and traditional characters. We found the simple advantage in both experiments. Training with simplified characters produced better generalization than training with traditional characters when generalization relied on either recognition memory or pure perceptual similarities. On the basis of the results of these two experiments, we propose a simple process model to explain the perceptual mechanism that might drive this simple advantage, and in Experiment 3 we tested novel predictions of this model by examining the effect of exposure duration on the simple advantage. We found support for our model that the simple advantage is driven primarily by differences in the perceptual encoding of the information available from simple and complex instances. These findings advance our understanding of how the perceptual features of a learning opportunity interact with domain-general mechanisms to prepare learners for transfer.
Prior research has established that while the use of concrete, familiar examples can provide many important benefits for learning, it is also associated with some serious disadvantages, particularly in learners’ ability to recognize and transfer their knowledge to new analogous situations. However, it is not immediately clear whether this pattern would hold in real-world educational contexts, in which the role of such examples in student engagement and ease of processing might be of enough importance to overshadow any potential negative impact. We conducted two experiments in which curriculum-relevant material was presented in natural classroom environments, first with college undergraduates and then with middle-school students. All students in each study received the same relevant content, but the degree of contextualization in these materials was varied between students. In both studies, we found that greater contextualization was associated with poorer transfer performance. We interpret these results as reflecting a greater degree of embeddedness for the knowledge acquired from richer, more concrete materials, such that the underlying principles are represented in a less abstract and generalizable form.
The chapter follows a central thesis: A major task of teaching and instruction is to help learners coordinate categories of cognitive processes, capabilities, and representations. While nature confers basic abilities, education synthesizes them to suit the demands of contemporary culture. So, rather than treating categories of learning and instruction as an either–or problem, the problem is how to coordinate learning processes so they can do more together than they can alone. This thesis, which proposes a systems level analysis, is not the norm when thinking about teaching and learning. More common is the belief that learning involves strengthening select cognitive processes rather than coordination across processes. Our chapter, therefore, needs to develop the argument for learning as coordination. To do so, we introduce findings from the field of cognitive psychology.
Three-dimensional, Community, Creation, and Commerce (3D3C) worlds can support real-time, quantitatively controlled experiments for studying human group behavior. This chapter provides a review of social behavioral research in virtual worlds, their methodologies and goals, such as studies of socio-economical trends, interpersonal communications between virtual world residents, automated survey studies, etc. The chapter contrasts existing research tools in virtual worlds with the goals of studying human group behavior as a complex system—how interacting groups of people create emergent organizations at a higher level than the individuals comprising such groups. Finally, the chapter presents features of virtual world-based group behavior experiments that allow the recreation of controlled quantitative experiments previously conducted in supervised lab sessions or web-based games.
Learning abstract concepts through concrete examples may promote learning at the cost of inhibiting transfer. The present study investigated one approach to solving this problem: systematically varying superficial features of the examples. Participants learned to solve problems involving a mathematical concept by studying either superficially similar or varied examples. In Experiment 1, less knowledgeable participants learned better from similar examples, while more knowledgeable participants learned better from varied examples. In Experiment 2, prior to learning how to solve the problems, some participants received a pretraining aimed at increasing attention to the structural relations underlying the target concept. These participants, like the more knowledgeable participants in Experiment 1, learned better from varied examples. Thus, the utility of varied examples depends on prior knowledge and, in particular, ability to attend to relevant structure. Increasing this ability can prepare learners to learn more effectively from varied examples.
We consider a situation in which individuals search for accurate decisions without direct feedback on their accuracy, but with information about the decisions made by peers in their group. The “wisdom of crowds” hypothesis states that the average judgment of many individuals can give a good estimate of, for example, the outcomes of sporting events and the answers to trivia questions. Two conditions for the application of wisdom of crowds are that estimates should be independent and unbiased. Here, we study how individuals integrate social information when answering trivia questions with answers that range between 0% and 100% (e.g., “What percentage of Americans are left-handed?”). We find that, consistent with the wisdom of crowds hypothesis, average performance improves with group size. However, individuals show a consistent bias to produce estimates that are insufficiently extreme. We find that social information provides significant, albeit small, improvement to group performance. Outliers with answers far from the correct answer move toward the position of the group mean. Given that these outliers also tend to be nearer to 50% than do the answers of other group members, this move creates group polarization away from 50%. By looking at individual performance over different questions we find that some people are more likely to be affected by social influence than others. There is also evidence that people differ in their competence in answering questions, but lack of competence is not significantly correlated with willingness to change guesses. We develop a mathematical model based on these results that postulates a cognitive process in which people first decide whether to take into account peer guesses, and if so, to move in the direction of these guesses. The size of the move is proportional to the distance between their own guess and the average guess of the group. 
This model closely approximates the distribution of guess movements and shows how outlying incorrect opinions can be systematically removed from a group resulting, in some situations, in improved group performance. However, improvement is only predicted for cases in which the initial guesses of individuals in the group are biased.
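The update rule at the heart of this model can be sketched as follows (a minimal illustration under stated assumptions: the parameter names `p_follow` and `alpha` are hypothetical placeholders, and the paper’s fitted parameter values are not reproduced here):

```python
import random

def updated_guess(own, peer_guesses, p_follow=0.5, alpha=0.3, rng=random):
    """Two-stage model sketch: first decide whether to take peer guesses
    into account; if so, move toward the group mean by a step whose size
    is proportional to the distance between one's own guess and that mean."""
    group_mean = sum(peer_guesses) / len(peer_guesses)
    if rng.random() >= p_follow:
        return own  # this individual ignores the social information
    return own + alpha * (group_mean - own)
```

For example, an outlier guessing 10 when the group mean is 50 moves partway toward 50, which is how the model pulls outlying incorrect opinions back into the group.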
Study sequence can have a profound impact on learning. Previous research has often shown advantages for interleaved over blocked study, though the reverse has also been found. Learners typically prefer blocking even in situations for which interleaving is superior. The present study investigated learner regulation of study sequence, and its effects on learning in an ecologically valid context – university students using an online tutorial relevant to an exam that counted toward their course grades. The majority of participants blocked study by problem category, and this tendency was positively associated with subsequent exam performance. The results suggest that preference for blocked study may be adaptive under some circumstances, and highlight the importance of identifying task environments under which different study sequences are most effective.
Three experiments explore differences between blocked and interleaved study with and without item repetition. In the first experiment we find that when items are repeated during study, blocked study results in higher test performance than interleaved study. In the second experiment we find that when there is no item repetition, interleaved and blocked study result in equivalent performance during the test phase. In the third experiment we find that when the study is passive and includes no item repetition, interleaved study results in higher test performance. We propose that learners create associations between items of the same category during blocked study and item repetition strengthens these associations. Interleaved study leads to weaker associations between items of the same category and therefore results in worse performance during test when there are item repetitions.
We present evidence that successful chunk formation during a statistical learning task depends on how well the perceiver is able to parse the information that is presented between successive presentations of the to-be-learned chunk. First, we show that learners acquire a chunk better when the surrounding information is also chunk-able in a visual statistical learning task. We tested three process models of chunk formation, TRACX, PARSER, and MDLChunker, on our two different experimental conditions, and found that only PARSER and MDLChunker matched the observed result. These two models share the common principle of a memory capacity that is expanded as a result of learning. Though implemented in very different ways, both models effectively remember more individual items (the atomic components of a sequence) as additional chunks are formed. The ability to remember more information directly impacts learning in the models, suggesting that there is a positive-feedback loop in chunk learning.
We describe an interactive mathematics technology intervention, From Here to There! (FH2T), that was developed by our research team. This dynamic program allows users to manipulate and transform mathematical expressions. In this paper, we present initial findings from a classroom study that investigates whether using FH2T improves learning. We compare learning gains from two different instantiations of FH2T (retrieval practice and fluid visualizations), as well as a control group, and investigate the role of prior knowledge and content exposure in FH2T as possible moderators of learning. Findings, as well as implications for research and practice, are discussed.
Category learning is an essential cognitive mechanism for making sense of the world. Many existing computational category learning models focus on categories that can be represented as feature vectors, and yet a substantial part of the categories we encounter have members with inner structure and inner relationships. We present a novel computational model that perceives and learns structured concepts from physical scenes. The perception and learning processes happen simultaneously and interact with each other. We apply the model to a set of physical categorization tasks and promote specific types of comparisons by manipulating the presentation order of examples. We find that these manipulations affect the algorithm similarly to human participants who worked on the same task. Both benefit from juxtaposing examples of different categories, especially ones that are similar to each other. When juxtaposing examples from the same category, both do better if the examples are dissimilar to each other.
Experiencing a stimulus in one sensory modality is often associated with an experience in another sensory modality. For instance, seeing a lemon might produce a sensation of sourness. This might indicate some kind of cross-modal correspondence between vision and gustation. The aim of the current study was to explore whether such cross-modal correspondences influence cross-modal integration during perceptual learning. To that end, we conducted 2 experiments. Using a speeded classification task, Experiment 1 established a cross-modal correspondence between visual lightness and the frequency of an auditory tone. Using a short-term priming procedure, Experiment 2 showed that manipulation of such cross-modal correspondences led to the creation of a cross-modal unit regardless of the nature of the correspondence (i.e., congruent, Experiment 2a, or incongruent, Experiment 2b). However, a comparison of priming-effect sizes suggested that cross-modal correspondences modulate cross-modal integration during learning, leading to newly learned units that differ in stability over time. We discuss the implications of our results for the relation between cross-modal correspondence and perceptual learning in the context of a Bayesian explanation of cross-modal correspondences.
With several large-scale human brain projects currently underway and a range of neuroimaging techniques growing in availability to researchers, the amount and diversity of data relevant for understanding the human brain is increasing rapidly. A complete understanding of the brain must incorporate information about 3D neural location, activity, timing, and task. Data mining, high-performance computing, and visualization can serve as tools that augment human intellect; however, the resulting visualizations must take into account human abilities and limitations to be effective tools for exploration and communication. In this feature review, we discuss key challenges and opportunities that arise when leveraging the sophisticated perceptual and conceptual processing of the human brain to help researchers understand brain structure, function, and behavior.
Various kinds of assistance, including prompts, worked examples, direct instruction, and modeling, are widely provided to learners across educational and training programs. Yet, the effectiveness of assistance during training on long-term learning is widely debated. In the current experiment, we examined how the extent and schedule of assistance during training on a novel mouse movement task impacted unassisted test performance. Learners received different schedules of assistance during training, including constant assistance, no assistance, probabilistic assistance, alternating assistance, and faded assistance. Constant assistance led to better performance during training than no assistance. However, constant assistance during training resulted in the worst unassisted test performance. Faded assistance during training resulted in the best test performance. This suggests that fading may allow learners to create an internal model of the assistance without depending upon the assistance in a manner that impedes successful transfer to unassisted circumstances.
Explaining how patterns of collective behavior emerge from interactions among individuals with diverse, sometimes opposing, goals is a societally crucial and particularly timely pursuit. It is timely because humans are more tightly connected to one another now than ever before. From 1984 to 2014 there has been more than a million-fold increase in the number of devices that can reach the global digital network. Although web technology is new and transformative, from a broader perspective, it is also just a recent manifestation of humanity’s perpetual drive to become more intermeshed. Earlier manifestations of this drive include the printing press, global transportation networks, telecommunication systems, and the academy. These social networks have catalyzed the formation of otherwise unattainable social patterns. Understanding the origins and possible destinations of these social patterns is both scientifically and pragmatically consequential.
Perceptual modules adapt at evolutionary, lifelong, and moment-to-moment temporal scales to better serve the informational needs of cognizers. Perceptual learning is a powerful way for an individual to become tuned to frequently recurring patterns in its specific local environment that are pertinent to its goals without requiring costly executive control resources to be deployed. Mechanisms like predictive coding, categorical perception, and action-informed vision allow our perceptual systems to interface well with cognition by generating perceptual outputs that are systematically guided by how they will be used. In classic conceptions of perceptual modules, people have access to the modules’ outputs but no ability to adjust their internal workings. However, humans routinely and strategically alter their perceptual systems via training regimes that have predictable and specific outcomes. In fact, employing a combination of strategic and automatic devices for adapting perception is one of the most promising approaches to improving cognition.
When a musical tone is sounded, most listeners are unable to identify its pitch by name. Those listeners who can identify pitches are said to have absolute pitch perception (AP). A limited subset of musicians possesses AP, and it has been debated whether musicians’ AP interferes with their ability to perceive tonal relationships between pitches, or relative pitch (RP). The present study tested musicians’ discrimination of relative pitch categories, or intervals, by placing absolute pitch values in conflict with relative pitch categories. AP listeners perceived intervals categorically, and their judgments were not affected by absolute pitch values. These results indicate that AP listeners do not infer interval identities from the absolute values between tones, and that RP categories are salient musical concepts in both RP and AP musicianship.
OPTIMAL: On-line Preparation Tools for Instructional Materials and Assessment of Learning
Theories in concept learning predict that interleaving instances of different concepts is especially beneficial if the concepts are highly similar to each other, whereas blocking instances belonging to the same concept provides an advantage for learning low-similarity concept structures. This suggests that the performance in concept learning tasks can be improved by grouping the instances of given concepts based on their similarity. To explore this hypothesis, we use Physical Bongard Problems, a rich categorization task with an open feature space, to analyze the combined effects of comparing dissimilar and similar instances within and across categories. We manipulate the within- and between-category similarity of instances presented close to each other in blocked, interleaved, and simultaneous presentation schedules. The results show that grouping instances to promote dissimilar within-category and similar between-category comparisons improves the learning results, to a degree depending on the strategy used by the learner.
Abstract concepts are characterized by their underlying structure rather than superficial features. Variation in the examples used to teach abstract concepts can draw attention towards shared structure and away from superficial detail, but too much variation can inhibit learning. The present study tested the possibility that increasing attention to underlying structural relations could alleviate the latter difficulty and thereby increase the benefits of varied examples. Participants were trained with either varied or similar examples of a mathematical concept, and were then tested on their ability to apply the concept to new cases. Before training, some participants received pretraining aimed at increasing attention to the structural relations underlying the concept. The relative advantage of varied over similar examples was increased among participants who received the pretraining. Thus, preparation that promotes attention to the relations underlying abstract concepts can increase the benefits of learning from varied examples.
External representations are more effective when spatial dimensions are used to represent numeric variables. However, this principle may result in suboptimal representations when the number of numeric variables to be represented is large. To test this possibility, participants studied a set of graphs representing a parametrized function under different parameter values. The graphs were displayed either using a grid organization, with parameter values represented by spatial dimensions (horizontal and vertical position of the graphs), or juxtaposed in a single area, with parameter values represented by non-spatial dimensions (color and texture). Juxtaposed organization led to better learning. However, this advantage was eliminated when the graphs were presented successively rather than simultaneously. The results suggest that juxtaposed organization can improve comprehension of complex data by facilitating comparison between parts of the data. Such organization may be preferable even if it precludes use of spatial dimensions for some numeric variables.
Through perceptual learning, perceptual systems are gradually modified so as to better fit an organism’s environment and frequently occurring needs. We consider psychological and neurophysiological evidence that changes to perception can be early in the stream of information processing. Three specific mechanisms of perceptual learning are described: attentional tuning, unitization, and attribute differentiation. These mechanisms allow organisms to emphasize important perceptual information, to construct single functional units that are activated when a familiar complex configuration arises, and to isolate perceptual attributes that were originally psychologically fused. We describe ways by which people modify their perceptual systems so as to better meet their goals, and the implications of these modifications for the cognitive penetrability of perception, relations between perception and higher-order reasoning, and education.
Our “Creature League” study has been mentioned at Science Daily, ScienceNewsline, IU’s News Room, Medical Xpress, EurekAlert!, and Science Codex. Here’s an audio description of the work, courtesy of Academic Minute. Participants in the group behavior experiment of Wisdom, Song, and Goldstone (2013) tried to assemble teams of Pokemon-like creatures that scored well. Each creature was associated with a score for itself, but some pairs of creatures also produced positive or negative scores. Because of these interactions between creatures, the problem of assembling high-scoring teams posed a difficult search problem for participants. Participants could assemble their teams by 1) using their previous teams (status quo), 2) taking creatures from their historically best team (retrieval), 3) dragging untested creatures from the league of creatures (innovating), or 4) dragging individual creatures or entire teams from other participants’ solutions (imitating).
Some of the interesting results from this study were:
1) Participants tend to do BETTER when surrounded by imitators. One of the primary mechanisms for this is that when a person comes up with a good solution, their peers copy the solution, and sometimes improve upon it. The person who was originally imitated can then benefit from these subsequent solutions (cliff swallows show a similar collective dynamic, with birds benefitting by being imitated while foraging). Imitation also acts as a cultural memory for what has worked well in the past. If an innovator’s solution to a problem is preserved by imitators, then the innovator does not have to remember their solution themselves.
2) As problems increased in difficulty, solutions were less diverse, and exploration was less prevalent.
3) Participants were more likely to imitate popular choices, above and beyond what would be expected from random copying of solution elements.
4) Participants are more likely to imitate a solution that is increasing in popularity among peers.
5) Participants are more likely to imitate solutions that are similar to their current solutions. This helps avoid hybrids/cross-breeds that don’t score well.
6) Participants begin a game by imitating and innovating relatively often, and end by more conservatively sticking to their existing solution. The best scoring strategy was to stick close to an existing solution, and innovating was worst.
7) At a group level, diversity of solutions decreased over rounds of a game. Bigger groups did better, but bigger groups also showed less diversity.
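The team-scoring rule described above (each creature contributes its own score, and certain pairs of creatures add a positive or negative interaction score) can be sketched in a few lines. This is a hypothetical illustration, not the study's actual scoring code; the creature names, scores, and interaction values are all invented.

```python
# Hypothetical sketch of the Creature League scoring rule: a team's
# score is the sum of each creature's individual score plus the
# interaction scores of any pairs on the team. All values invented.

def team_score(team, individual, interactions):
    """Sum individual creature scores plus pairwise interaction scores."""
    score = sum(individual[c] for c in team)
    for i, a in enumerate(team):
        for b in team[i + 1:]:
            score += interactions.get(frozenset((a, b)), 0)
    return score

individual = {"ember": 5, "gill": 3, "thorn": 4}
interactions = {
    frozenset(("ember", "gill")): -6,   # a pair that clashes
    frozenset(("ember", "thorn")): 2,   # a pair that synergizes
}

print(team_score(["ember", "thorn"], individual, interactions))  # 11
print(team_score(["ember", "gill"], individual, interactions))   # 2
```

Because the pairwise terms can outweigh the individual ones, greedily picking the highest-scoring creatures need not produce the best team, which is what makes the search problem hard for participants.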
How does perceptual learning take place early in life? Traditionally, researchers have focused on how infants make use of information within displays to organize it, but recently, increasing attention has been paid to the question of how infants perceive objects differently depending upon their recent interactions with the objects. This experiment investigates 10-month-old infants’ use of brief prior experiences with objects to visually organize a display consisting of multiple geometrically-shaped three-dimensional blocks created for this study. After a brief exposure to a multi-part portion of the display, each infant was shown two test events, one of which preserved the unit the infant had seen and the other of which broke that unit. Overall, infants looked longer at the event that broke the unit they had seen prior to testing than the event that preserved that unit, suggesting that infants made use of the brief prior experience to (a) form a cohesive unit of the multi-part portion of the display they saw prior to test and (b) segregate this unit from the rest of the test display. This suggests that infants made inferences about novel parts of the test display based on limited exposure to a subset of the test display. Like adults, infants learn features of the three-dimensional world through their experiences in it.
Recent research in inductive category learning has demonstrated that interleaved study of category exemplars results in better performance than does studying each category in separate blocks. However, the questions of how the category structure influences this advantage and how simultaneous presentation interacts with the advantage are open issues. In this article, we present three experiments. The first experiment indicates that the advantage of interleaved over blocked study is modulated by the structure of the categories being studied. More specifically, interleaved study results in better generalization for categories with high within- and between-category similarity, whereas blocked presentation results in better generalization for categories with low within- and between-category similarity. In Experiment 2, we present evidence that when presented simultaneously, between-category comparisons (interleaved presentation) result in a performance advantage for high-similarity categories, but no differences were found for low-similarity categories. In Experiment 3, we directly compared simultaneous and successive presentation of low-similarity categories. We again found an overall benefit for blocked study with these categories. Overall, these results are consistent with the proposal that interleaving emphasizes differences between categories, whereas blocking emphasizes the discovery of commonalities among objects within the same category.
A longstanding debate concerns the use of concrete versus abstract instructional materials, particularly in domains such as mathematics and science. Although decades of research have focused on the advantages and disadvantages of concrete and abstract materials considered independently, we argue for an approach that moves beyond this dichotomy and combines their advantages. Specifically, we recommend beginning with concrete materials and then explicitly and gradually fading to the more abstract. Theoretical benefits of this “concreteness fading” technique for mathematics and science instruction include: (1) helping learners interpret ambiguous or opaque abstract symbols in terms of well-understood concrete objects, (2) providing embodied perceptual and physical experiences that can ground abstract thinking, (3) enabling learners to build up a store of memorable images that can be used when abstract symbols lose meaning, and (4) guiding learners to strip away extraneous concrete properties and distill the generic, generalizable properties. In these ways, concreteness fading provides advantages that go beyond the sum of the benefits of concrete and abstract materials.
Using the referencing patterns in articles in Cognitive Science over three decades, we analyze the knowledge base of this literature in terms of its changing disciplinary composition. Three periods are distinguished: (1) construction of the interdisciplinary space in the 1980s; (2) development of an interdisciplinary orientation in the 1990s; (3) reintegration into “cognitive psychology” in the 2000s. The fluidity and fuzziness of the interdisciplinary delineations in the different visualizations can be reduced and clarified using factor analysis. We also explore newly available routines (“CorText”) to analyze this development in terms of “tubes” using an alluvial map, and compare the results with an animation (using “visone”). The historical specificity of this development can be compared with the development of “artificial intelligence” into an integrated specialty during this same period. “Interdisciplinarity” should be defined differently at the level of journals and of specialties.
Graphs and tables differentially support performance on specific tasks. For tasks requiring reading off single data points, tables are as good as or better than graphs, while for tasks involving relationships among data points, graphs often yield better performance. However, the degree to which graphs and tables support flexibility across a range of tasks is not well-understood. In two experiments, participants detected main and interaction effects in line graphs and tables of bivariate data. Graphs led to more efficient performance, but also lower flexibility, as indicated by a larger discrepancy in performance across tasks. In particular, detection of main effects of variables represented in the graph legend was facilitated relative to detection of main effects of variables represented in the x-axis. Graphs may be a preferable representational format when the desired task or analytical perspective is known in advance, but may also induce greater interpretive bias than tables, necessitating greater care in their use and design.
Here are some reports of our PLoS One paper on human collective behavior studying cyclic patterns in a generalization of the familiar rock-scissors-paper game. We find situations in which groups of people grow increasingly predictable as they cycle around a set of choice options in a game similar to rock-scissors-paper but with 24 rather than 3 choices.
When making decisions, humans can observe many kinds of information about others’ activities, but their effects on performance are not well understood. We investigated social learning strategies using a simple problem-solving task in which participants search a complex space, and each can view and imitate others’ solutions. Results showed that participants combined multiple sources of information to guide learning, including payoffs of peers’ solutions, popularity of solution elements among peers, similarity of peers’ solutions to their own, and relative payoffs from individual exploration. Furthermore, performance was positively associated with imitation rates at both the individual and group levels. When peers’ payoffs were hidden, popularity and similarity biases reversed, participants searched more broadly and randomly, and both quality and equity of exploration suffered. We conclude that when peers’ solutions can be effectively compared, imitation does not simply permit scrounging, but it can also facilitate propagation of good solutions for further cumulative exploration.
The terms concreteness fading and progressive formalization have been used to describe instructional approaches to science and mathematics that use grounded representations to introduce concepts and later transition to more formal representations of the same concepts. There are both theoretical and empirical reasons to believe that such an approach may improve learning outcomes relative to instruction employing only grounded or only formal representations (Freudenthal, 1991; Goldstone & Son, 2005; McNeil & Fyfe, 2012; but see Kaminski, Sloutsky, & Heckler, 2008). Two experiments tested the effectiveness of this approach to instruction in the mathematical domain of combinatorics, using outcome listing and numerical calculation as examples of grounded and formal representations, respectively. The study employed a pretest, training, posttest design. Transfer performance, that is, participants’ improvement from pretest to posttest, was used to assess the effectiveness of instruction received during training. In Experiment 1, transfer performance was compared for 4 types of instruction, which differed only in the types of representation they employed: pure listing (i.e., listing only), pure formalism (i.e., numerical calculation only), list fading (i.e., listing followed by numerical calculation), and formalism first (i.e., listing introduced after numerical calculation). List fading instruction led to transfer performance on par with pure formalism instruction and higher than formalism first and pure listing instruction. In Experiment 2, an enhanced version of list fading training was again compared to pure formalism. However, no difference in transfer performance due to training was found. The results suggest that combining grounded and formal representations can be an effective approach to combinatorics instruction but is not necessarily preferable to using formal representations alone.
If both grounded and formal representations are employed, the former should precede rather than follow the latter in the instructional sequence.
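The grounded and formal representations contrasted above can be illustrated with a small combinatorics question. This sketch is not the study's actual materials; it simply shows the two routes to the same answer: listing every outcome explicitly versus computing the count numerically.

```python
# Two representations of "how many ways can we choose 2 of 4 items?"
from itertools import combinations
from math import comb

items = ["A", "B", "C", "D"]

# Grounded (outcome listing): enumerate every 2-item selection.
listing = list(combinations(items, 2))
# [('A','B'), ('A','C'), ('A','D'), ('B','C'), ('B','D'), ('C','D')]

# Formal (numerical calculation): C(4, 2) = 4! / (2! * 2!).
formal = comb(4, 2)

print(len(listing), formal)  # 6 6
```

List fading, as described above, would start students with the enumeration and gradually transition them to the formula once the listing has given the symbols meaning.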
Recent theories from complexity science argue that complex dynamics are ubiquitous in social and economic systems. These claims emerge from the analysis of individually simple agents whose collective behavior is surprisingly complicated. However, economists have argued that iterated reasoning (what you think I think you think) will suppress complex dynamics by stabilizing or accelerating convergence to Nash equilibrium. We report stable and efficient periodic behavior in human groups playing the Mod Game, a multi-player game similar to Rock-Paper-Scissors. The game rewards subjects for thinking exactly one step ahead of others in their group. Groups that play this game exhibit cycles that are inconsistent with any fixed-point solution concept. These cycles are driven by a "hopping" behavior that is consistent with other accounts of iterated reasoning: agents are constrained to about two steps of iterated reasoning and learn an additional one-half step with each session. If higher-order reasoning can be complicit in complex emergent dynamics, then cyclic and chaotic patterns may be endogenous features of real-world social and economic systems.
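The Mod Game mechanic can be sketched concretely. The payoff rule below is an assumption consistent with the description above (players pick from a cyclic set of choices and are rewarded for being exactly one step ahead of others, modulo the number of options); the exact scoring used in the experiment may differ.

```python
# Hedged sketch of a Mod Game round with 24 cyclic choices: assume a
# player earns one point for every other player whose choice is exactly
# one step behind theirs, modulo 24 (so 1 "beats" 24 by wrap-around).
M = 24  # number of choice options

def payoffs(choices):
    """Return each player's score for one round, given all players' choices."""
    return [sum(1 for j, other in enumerate(choices)
                if j != i and (c - other) % M == 1)
            for i, c in enumerate(choices)]

# A player choosing 5 beats the two players who chose 4; the player
# choosing 4 beats the one who chose 3; choosing 3 beats nobody here.
print(payoffs([5, 4, 4, 3]))   # [2, 1, 1, 0]
print(payoffs([1, 24]))        # [1, 0]  (wrap-around)
```

Because every choice is beaten by the choice one step ahead of it, there is no fixed-point "best" option, which is what lets the group cycles described above emerge.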
Unlike how most psychology experiments on learning operate, people learning to do a task typically do so in the context of other people learning to do the same task. In these situations, people take advantage of others’ solutions, and may modify and extend these solutions, thereby affecting the solutions available to others. We are interested in the group patterns that emerge when people can see and imitate the solutions, innovations, and choices of their peers over several rounds. In one series of experiments and computer simulations, we find that there is a systematic relation between the difficulty of a problem search space and the optimal social network for transmitting solutions. As the difficulty of finding optimal solutions in a search space increases, communication networks that preserve spatial neighborhoods perform best. Restricting people’s access to others’ solutions can help the group as a whole find good, hard-to-discover solutions. In other experiments with more complex search spaces, we find evidence for several heuristics governing individuals’ decisions to imitate: imitating prevalent options, imitating options that become increasingly prevalent, imitating high-scoring options, imitating during the early stages of a multiround search process, and imitating solutions similar to one’s own solution. Individuals who imitate tend to perform well, and more surprisingly, individuals also perform well when they are in groups with other individuals who imitate frequently. Taken together, our experiments on collective social learning reveal laboratory equivalents of prevalent social phenomena such as bandwagons, strategy convergence, inefficiencies in the collective coverage of a problem space, social dilemmas in exploration/exploitation, and reciprocal imitation.
Diverse evidence shows that perceptually integral dimensions, such as those composing color, are represented holistically. However, the nature of these holistic representations is poorly understood. Extant theories, such as those founded on multidimensional scaling or general recognition theory, model integral stimulus spaces using a Cartesian coordinate system, just as with spaces defined by separable dimensions. This approach entails a rich geometrical structure that has never been questioned but may not be psychologically meaningful for integral dimensions. In particular, Cartesian models carry a notion of orthogonality of component dimensions, such that if 1 dimension is diagnostic for a classification or discrimination task, another can be selected as uniquely irrelevant. This article advances an alternative model in which integral dimensions are characterized as topological spaces. The Cartesian and topological models are tested in a series of experiments using the perceptual-learning phenomenon of dimension differentiation, whereby discrimination training with integral-dimension stimuli can induce an analytic representation of those stimuli. Under the present task design, the 2 models make contrasting predictions regarding the analytic representation that will be learned. Results consistently support the Cartesian model. These findings indicate that perceptual representations of integral dimensions are surprisingly structured, despite their holistic, unanalyzed nature.
Typical disjunctive artificial classification tasks require participants to sort stimuli according to rules such as “x likes cars only when black and coupe OR white and SUV.” For categories like this, increasing the salience of the diagnostic dimensions has two simultaneous effects: increasing the distance between members of the same category and increasing the distance between members of opposite categories. Potentially, these two effects respectively hinder and facilitate classification learning, leading to competing predictions for learning. Increasing saliency may lead members of the same category to be considered less similar, while the members of separate categories might be considered more dissimilar. This implies a similarity-dissimilarity competition between two basic classification processes. When focusing on sub-category similarity, one would expect more difficult classification when members of the same category become less similar (disregarding the increase of between-category dissimilarity); however, the between-category dissimilarity increase predicts a less difficult classification. Our categorization study suggests that participants rely more on using dissimilarities between opposite categories than finding similarities between sub-categories. We connect our results to rule- and exemplar-based classification models. The pattern of influences of within- and between-category similarities is challenging for simple single-process categorization systems based on rules or exemplars. Instead, our results suggest that either these processes should be integrated in a hybrid model, or that category learning operates by forming clusters within each category.
An enormous amount of ink has been spilled in the psychology literature on the topic of similarity. There are two reasons that this seemingly intuitive and prosaic concept has been the subject of such intense scrutiny. First, there is virtually no area of cognitive processing in which similarity does not seem to play a role. William James observed that “This sense of Sameness is the very keel and backbone of our thinking” (James 1890/1950: 459). Ivan Pavlov first noted that dogs would generalize their learned salivation response to new sounds as a function of their similarity to the original tone, and this pattern of generalization appears to be ubiquitous across species and stimuli. People group things together based on their similarity, both during visual processing and categorization. Research suggests that memories are retrieved when they involve similar features or similar processing to a current situation. Problem solutions are likely to be retrieved from similar prior problems, inductive inference is largely based on the similarity between the known and unknown cases, and the list goes on and on.
The goal of the present study was to find evidence for a multisensory generalization effect (i.e., generalization from one sensory modality to another sensory modality). The authors used an innovative paradigm (adapted from Brunel, Labeye, Lesourd, & Versace, 2009) involving three phases: a learning phase, consisting of the categorization of geometrical shapes, which manipulated the rules of association between shapes and a sound feature, and two test phases. The first of these was designed to examine the priming effect of the geometrical shapes seen in the learning phase on target tones (i.e., priming task), while the aim of the second was to examine the probability of recognizing the previously learned geometrical shapes (i.e., recognition task). When a shape category was mostly presented with a sound during learning, all of the primes (including those not presented with a sound in the learning phase) enhanced target processing compared to a condition in which the primes were mostly seen without a sound during learning. A pattern of results consistent with this initial finding was also observed during recognition, with the participants being unable to pick out the shape seen without a sound during the learning phase. Experiment 1 revealed a multisensory generalization effect across the members of a category when the objects belonging to the same category share the same value on the shape dimension. However, a distinctiveness effect was observed when a salient feature distinguished the objects within the category (Experiment 2a vs. 2b).
Past research suggests that spatial configurations play an important role in graph comprehension. The present study investigates consequences of this fact for the relative utility of graphs and tables for interpreting data. Participants judged presence or absence of various statistical effects in simulated datasets presented in various formats. For the statistical effects introduced earlier in the study, performance was better with graphs than with tables, while for the effect introduced last in the study, this trend reversed. Additionally, in the later sections of the study, responses with graphs, but not tables, reflected increasing influence from the presence of stimulus features which had been relevant earlier in the study, but were no longer relevant. The findings suggest that graphs, relative to tables, may better facilitate perception of complex relationships among data points, but may also bias readers more strongly to favor some perspectives over others when interpreting data.
Research on how information should be presented during inductive category learning has identified both interleaving of categories and blocking by category as beneficial for learning. Previous work suggests that this mixed evidence can be reconciled by taking into account within- and between-category similarity relations. In this paper we present a new moderating factor. One group of participants studied categories actively, either interleaved or blocked. Another group studied the same categories passively. Results from a subsequent generalization task show that active learning benefits from interleaved presentation while passive learning benefits from blocked presentation.
In inductive learning, the order in which concept instances are presented plays an important role in learning performance. Theories predict that interleaving instances of different concepts is especially beneficial if the concepts are highly similar to each other, whereas blocking instances belonging to the same concept provides an advantage for learning low-similarity concept structures. This leaves open the question of the relative influence of similarity on interleaved versus blocked presentation. To answer this question, we pit within- and between-category similarity effects against each other in a rich categorization task called Physical Bongard Problems. We manipulate the similarity of instances shown temporally close to each other with blocked and interleaved presentation. The results indicate a stronger effect of similarity on interleaving than on blocking. They further show a large benefit of comparing similar between-category instances on concept learning tasks where the feature dimensions are not known in advance but have to be constructed.
What simple factors impact the cognitive complexity of code? We present an experiment in which participants predict the output of ten small Python programs. Even with such simple programs, we find a complex relationship between code, expertise, and correctness. We use subtle differences between program versions to demonstrate that small notational changes can have profound effects on comprehension. We catalog common errors for each program, and perform an in-depth data analysis to uncover effects on response correctness and speed.
Programming language and library designers often debate the usability of particular design choices. These choices may impact many developers, yet scientific evidence for them is rarely provided. Cognitive models of program comprehension have existed for over thirty years, but lack quantitative definitions of their internal components and processes. To ease the burden of quantifying existing models, we recommend using the ACT-R cognitive architecture: a simulation framework for psychological models. In this paper, we provide a high-level overview of modern cognitive architectures while concentrating on the details of ACT-R. We review an existing quantitative program comprehension model, and consider how it could be simplified and implemented within the ACT-R framework. Lastly, we discuss the challenges and potential benefits associated with building a comprehensive cognitive model on top of a cognitive architecture.
After more than 100 years of interest and study, knowledge transfer remains among the most challenging, contentious, and important issues for both psychology and education. In this article, we review and discuss many of the more important ideas and findings from the existing research and attempt to bridge this body of work with the exciting new research directions suggested by the following articles.
Understanding how to get learners to transfer their knowledge to new situations is a topic of both theoretical and practical importance. Theoretically, it touches on core issues in knowledge representation, analogical reasoning, generalization, embodied cognition, and concept formation. Practically, learning without transfer of what has been learned is almost always unproductive and inefficient. Although schools often measure the efficiency of learning in terms of speed and retention of knowledge, a relatively neglected and subtler component of efficiency is the generality and applicability of the acquired knowledge. This special issue of Educational Psychologist collects together new approaches toward understanding and fostering appropriate transfer in learners. Three themes that emerge from the collected articles are (a) the importance of the perspective/stance of the learner for achieving robust transfer, (b) the neglected role of motivation in determining transfer, and (c) the existence of specific, validated techniques for teaching with an eye toward facilitating students’ transfer of their learning.
Although young children typically have trouble reasoning relationally, they are aided by the presence of relational words (e.g., Gentner & Rattermann, 1991). They also reason well about commonly experienced event structures (e.g., Fivush, 1984). Relational words may benefit relational reasoning because they activate well-understood event structures. Two candidate hypotheses were tested: (1) the Schema hypothesis, according to which words help relational reasoning because they are grounded in schematized experiences and (2) the Optimal Vagueness hypothesis, by which words benefit relational reasoning because the activated schema is open enough (without too much specificity) so that it can be applied analogically to novel problems. Four experiments test these two hypotheses by examining how training with a label influences schematic interpretations of a scene, the kinds of scenes that are conducive to schematic interpretations, and whether children must figure out the interpretation themselves to benefit from the act of interpreting a scene as an event. Experiment 1 shows the superiority of schema-evoking words over words that do not connect to schematized experiences. Experiments 2 and 3 further reveal that these words must be applied to vaguely related perceptual instances rather than unrelated or concretely related instances in order to draw attention to relational structure. Experiment 4 provides evidence that even when children do not work out an interpretation for themselves, just the act of interpreting an ambiguous scene is potent for relational generalization. The present results suggest that relational words (and in particular their meanings) are created from the act of interpreting a perceptual situation in the context of a word grounded in meaningful experiences.
Previous research suggests that comparing multiple specific examples of a general concept can promote knowledge transfer. The present study investigated whether this approach could be made more effective by systematic variation in the semantic content of the specific examples. Participants received instruction in a mathematical concept in the context of several examples, which instantiated either a single semantic schema (non-varied condition) or two different schemas (varied condition). Schema-level variation during instruction led to better knowledge transfer, as predicted. However, this advantage was limited to participants with relatively high performance before instruction. Variation also improved participants’ ability to describe the target concept in abstract terms. Surprisingly, however, this ability was not associated with successful knowledge transfer.
Research in inductive category learning has demonstrated that interleaving exemplars of categories results in better performance than presenting each category in a separate block. Two experiments indicate that the advantage of interleaved over blocked presentation is modulated by the structure of the categories being presented. More specifically, interleaved presentation results in better performance for categories with high within- and between-category similarity while blocked presentation results in better performance for categories with low within- and between-category similarity.
This interaction is predicted by accounts in which blocking promotes discovery of features shared by the members of a category whereas interleaving promotes discovery of features that discriminate between categories.
In this paper, we present the AbsMatcher system for schema matching, which uses a graph-based approach. The primary contribution of this paper is the development of new types of relationships for generating graph edges and the effectiveness of integrating schemas using those graphs. AbsMatcher creates a graph of related attributes within a schema, mines similarity between attributes in different schemas, and then combines all information using the ABSURDIST graph matching algorithm. The attribute-to-attribute relationships this paper focuses on are semantic in nature and have few requirements for format or structure. These relationship sources provide a baseline which can be improved upon with relationships specific to formats, such as XML or a relational database. Simulations demonstrate how the use of automatically mined graphs of within-schema relationships, when combined with cross-schema pair-wise similarity, can result in matching accuracy not attainable by either source of information on its own.
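The combination of within-schema relations and cross-schema similarity can be sketched with a toy relaxation in the spirit of this approach (the numbers, attribute graphs, and update rule are illustrative assumptions, not the AbsMatcher or ABSURDIST implementation): cross-schema similarity seeds the correspondences, and within-schema edges then reinforce attribute pairs whose related attributes also correspond.

```python
# Toy graph-matching relaxation (illustrative assumptions throughout).
def match_schemas(external, edges_a, edges_b, n_iters=30, rate=0.3):
    """external[i][j]: cross-schema similarity of attribute i (schema A)
    and attribute j (schema B). edges_a / edges_b: within-schema relations
    as sets of (attr, attr) pairs."""
    n, m = len(external), len(external[0])
    corr = [row[:] for row in external]  # seed correspondences with external similarity
    for _ in range(n_iters):
        new = [[0.0] * m for _ in range(n)]
        for i in range(n):
            for j in range(m):
                # internal support: strength of matches between related attributes
                support = sum(corr[a][b]
                              for a in range(n) for b in range(m)
                              if ((i, a) in edges_a or (a, i) in edges_a)
                              and ((j, b) in edges_b or (b, j) in edges_b))
                new[i][j] = (1 - rate) * corr[i][j] + rate * (external[i][j] + support) / 2
        corr = new
    return corr

# Attribute 0 of schema A is externally ambiguous (0.5 vs. 0.5); attribute 1
# clearly maps to B's attribute 1, and the shared relational edge resolves 0 -> 0.
external = [[0.5, 0.5], [0.1, 0.9]]
corr = match_schemas(external, edges_a={(0, 1)}, edges_b={(0, 1)})
```

Here neither source alone settles the match for attribute 0: external similarity is a tie, and the graph edges say nothing without the seed. Together they converge on the consistent mapping, which is the qualitative point the simulations above make.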
Recent re-analysis of traditional Categorical Perception (CP) effects shows that the advantage for between-category judgments may be due to asymmetries of within-category judgments (Hanley & Roberson, 2011). This has led to the hypothesis that labels cause CP effects via these asymmetries due to category label uncertainty near the category boundary. In Experiment 1 we demonstrate that these “within-category” asymmetries exist before category training begins. Category learning does increase the within-category asymmetry on a category-relevant dimension, but it does so equally on an irrelevant dimension. Experiment 2 replicates the asymmetry found in Experiment 1 without training and shows that it does not increase with additional exposure in the absence of category training. We conclude that the within-category asymmetry may be a result of unsupervised learning of stimulus clusters that emphasize extreme instances and that category training increases this caricaturization of stimulus representations.
How do people learn to group and re-group objects into labeled categories? In this paper, we examine mechanisms that guide how people re-represent categories. In two experiments, we examine what is easy and what is hard to relearn as people update their knowledge about labeled groups of objects. In Study 1, we test how people learn and re-learn to group objects that share no perceptual features. Data suggest that people easily learn to re-label objects when the category structure remains the same. In Study 2, we test whether more general types of labeling conventions — words that do or do not correspond with object similarities — influence learning and re-learning. Data suggest that people are able to learn either kind of convention and may have trouble switching between them when re-structuring their knowledge. Implications for category learning, second language acquisition and updating representations are discussed.
We describe an intervention being developed by our research team, Pushing Symbols (PS). This intervention is designed to encourage learners to treat symbol systems as physical objects that move and change over time according to dynamic principles. We provide students with the opportunities to explore algebraic structure by physically manipulating and interacting with concrete and virtual symbolic systems that enforce rules through constraints on physical transformations. Here we present an instantiation of this approach aimed at helping students learn the structure of algebraic notation in general, and in particular learn to simplify like terms. This instantiation combines colored symbol tiles with a new touchscreen software technology adapted from the commercial Algebra Touch software. We present preliminary findings from a study with 70 middle-school students who participated in the PS intervention over a three-hour period.
In comprehension of the metaphor “TOPIC is VEHICLE,” emergent features in the interpretation of metaphors are characteristic of neither the topic nor the vehicle. An experiment examines the hypothesis that new features emerge as metaphoric interpretations through association with non-emergent features connected with the topic, vehicle, or both. In the experiment, participants were presented with a non-emergent feature as a prime, a metaphor, and an emergent feature, sequentially. Participants were then asked to respond as to whether the emergent feature is an appropriate interpretation of the metaphor. The results showed that primed non-emergent features derived from the vehicle facilitate the recognition of emergent features. The results support an account in which new features emerge through two processes – non-emergent features are recognized as interpretations of the metaphor and then these non-emergent features facilitate the recognition of emergent features.
Issues related to concepts and categorization are nearly ubiquitous in psychology because of people’s natural tendency to perceive a thing as something. We have a powerful impulse to interpret our world. This act of interpretation, an act of “seeing something as X” rather than simply seeing it (Wittgenstein, 1953), is fundamentally an act of categorization. The attraction of research on concepts is that an extremely wide variety of cognitive acts can be understood as categorizations (Murphy, 2002).
We implemented a problem-solving task in which groups of participants simultaneously played a simple innovation game in a complex problem space, with score feedback provided after each of a number of rounds. Each participant in a group was allowed to view and imitate the guesses of others during the game. The results showed the use of social learning strategies previously studied in other species, and demonstrated benefits of social learning and nonlinear effects of group size on strategy and performance. Rather than simply encouraging conformity, groups provided information to each individual about the distribution of useful innovations in the problem space. Imitation facilitated innovation rather than displacing it, because the former allowed good solutions to be propagated and preserved for further cumulative innovations in the group. Participants generally improved their solutions through the use of fairly conservative strategies, such as changing only a small portion of one’s solution at a time, and tending to imitate solutions similar to one’s own. Changes in these strategies over time had the effect of making solutions increasingly entrenched, both at individual and group levels. These results showed evidence of nonlinear dynamics in the decentralization of innovation, the emergence of group phenomena from complex interactions of individual efforts, stigmergy in the use of social information, and dynamic tradeoffs between exploration and exploitation of solutions. These results also support the idea that innovation and creativity can be recognized at the group level even when group members are generally cautious and imitative.
Perceptual learning involves relatively long-lasting changes to an organism's perceptual system that improve its ability to respond to its environment. Four mechanisms of perceptual learning are discussed: attention weighting, imprinting, differentiation, and unitization. By attention weighting, perception becomes adapted to tasks and environments by increasing the attention paid to important dimensions and features. By imprinting, receptors are developed that are specialized for stimuli or parts of stimuli. By differentiation, stimuli that were once indistinguishable become psychologically separated. By unitization, tasks that originally required detection of several parts come to be accomplished by detecting a single constructed unit representing a complex configuration. Research from cognitive psychology, psychophysics, neuroscience, expert/novice differences, development, computer science, and cross-cultural differences is described that relates to these mechanisms. The locus, limits, and applications of perceptual learning are also discussed.
Human judgments of similarity have traditionally been modelled by measuring the distance between the compared items in a psychological space, or the overlap between the items' featural representations. An alternative approach, inspired jointly by work in analogical reasoning (D. Gentner, 1983; K. T. Holyoak & P. Thagard, 1989) and interactive activation models of perception (J. L. McClelland & D. E. Rumelhart, 1981), views the process of judging similarity as one of establishing alignments between the parts of compared entities. A localist connectionist model of similarity, SIAM, is described wherein units represent correspondences between scene parts, and these units mutually and concurrently influence each other according to their compatibility. The model is primarily applied to similarity rating tasks, but is also applied to other indirect measures of similarity, to judgments of alignment between scene parts, to impressions of comparison difficulty, and to patterns of perceptual sensitivity for matching and mismatching features.
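The interactive-activation idea can be sketched minimally (an illustrative toy with assumed parameters and update rule, not the published SIAM model): each unit codes one part-to-part correspondence, gains support from featural overlap and from consistent correspondences, and competes with rival correspondences that claim the same parts.

```python
# Toy correspondence-unit relaxation (assumed parameters, not published SIAM).
def siam_sketch(feature_match, n_iters=50, rate=0.2):
    """feature_match[i][j]: featural overlap between part i of scene A
    and part j of scene B, in [0, 1]."""
    n = len(feature_match)
    act = [[0.5] * n for _ in range(n)]  # all correspondences start neutral
    for _ in range(n_iters):
        new = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                # support: featural match plus consistent correspondences
                support = feature_match[i][j]
                support += sum(act[a][b] for a in range(n) for b in range(n)
                               if a != i and b != j) / max(1, (n - 1) ** 2)
                # competition: rival correspondences sharing part i or part j
                rivals = sum(act[i][b] for b in range(n) if b != j)
                rivals += sum(act[a][j] for a in range(n) if a != i)
                net = support - rivals / max(1, 2 * (n - 1))
                new[i][j] = min(1.0, max(0.0, act[i][j] + rate * (net - act[i][j])))
        act = new
    return act

# Part 0 of scene A featurally matches part 0 of scene B, and likewise 1 with 1.
corr = siam_sketch([[0.9, 0.1], [0.1, 0.9]])
```

The mutually consistent one-to-one mapping is driven toward full activation while the rival correspondences are suppressed, which is the qualitative behavior the model's alignment process exhibits.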
(reprinted as: Goldstone, R. L., & Barsalou, L. (1998). Reuniting perception and conception. In S. A. Sloman and L. J. Rips (Eds.) Similarity and symbols in human thinking. (pp. 145-176). Cambridge, MA: MIT Press)
Work in philosophy and psychology has argued for a dissociation between perceptually-based similarity and higher-level rules in conceptual thought. Although such a dissociation may be justified at times, our goal is to illustrate ways in which conceptual processing is grounded in perception, both for perceptual similarity and abstract rules. We discuss the advantages, power, and influences of perceptually-based representations. First, many of the properties associated with amodal symbol systems (e.g. productivity and generativity) can be achieved with perceptually-based systems as well. Second, relatively raw perceptual representations are powerful because they can implicitly represent properties in an analog fashion. Third, perception naturally provides impressions of overall similarity, exactly the type of similarity useful for establishing many common categories. Fourth, perceptual similarity is not static but becomes tuned over time to conceptual demands. Fifth, the original motivation or basis for sophisticated cognition is often less sophisticated perceptual similarity. Sixth, perceptual simulation occurs even in conceptual tasks that have no explicit perceptual demands. Parallels between perceptual and conceptual processes suggest that many mechanisms typically associated with abstract thought are also present in perception, and that perceptual processes provide useful mechanisms that may be coopted by abstract thought.
This research provides evidence for two competing attentional mechanisms. Attentional persistence directs attention to attributes previously found to be predictive, whereas contrast directs attention to stimuli that have not already been associated with a category. Three experiments provide evidence for these mechanisms. Experiments 1 and 2 revealed increased attention to an attribute following training in which that attribute was relevant, providing evidence for persistence. These experiments also revealed increased attention to an attribute following training in which another, more salient attribute was relevant, providing evidence for contrast. Experiment 3 used a subtractive method to determine the contributions of persistence and contrast to changes in attention to an attribute. The results suggest that persistence operates primarily at the level of dimensions, whereas contrast operates at the level of dimension values.
According to an influential approach to cognition, our perceptual systems provide us with a repertoire of fixed features as input to higher-level cognitive processes. We present a theory of category learning and representation in which features, instead of being components of a fixed repertoire, are created under the influence of higher-level cognitive processes. When new categories need to be learned, fixed features face one of two problems: (1) High-level features that are directly useful for categorization may not be flexible enough to represent all relevant objects. (2) Low-level features consisting of unstructured fragments (such as pixels) may not capture the regularities required for successful categorization. We report evidence that feature creation occurs in category learning and we describe the conditions that promote it. Feature creation can adapt flexibly to changing environmental demands and may be the origin of fixed feature repertoires. Implications for object categorization, conceptual development, chunking, constructive induction and formal models of dimensionality reduction are discussed.
Similarity comparisons are highly sensitive to judgment context. Three experiments explore context effects that occur within a single comparison rather than across several trials. Experiment 1 shows reliable intransitivities in which a target is judged to be more similar to stimulus A than to stimulus B, more similar to B than to stimulus C, and more similar to C than to A. Experiment 2 explores the locus of Tversky's (1977) diagnosticity effect in which the relative similarity of two alternatives to a target is influenced by a third alternative. Experiment 3 demonstrates reliable, though occasional, violations of an assumption of monotonicity. The observed violations of common assumptions to many models of similarity can be accommodated in terms of dynamic property weighting processes based on specific forms of diagnosticity, and contrast sets that are generated when a comparison is presented.
In building models of cognition, it is customary to commence construction on the foundations laid by perception. Perception is presumed to provide us with an initial source of information that is operated upon by subsequent cognitive processes. And, as with the foundation of a house, a premium is placed on stability and solidity. Stable edifices require stable support structures. By this view, our cognitive processes are well behaved to the degree that they can depend upon the stable structures established by our perceptual system.
Considered collectively, the contributions to this volume suggest an alternative metaphor for understanding the relation between perception and cognition. The architectural equivalent of perception may be a bridge rather than a foundation. The purpose of a bridge is to provide support, but it does so by adapting to the vehicles it supports. Bridges, by design, sway under the weight of heavy vehicles; they are built on the principle that it is better to bend than to break. Bridges built with rigid materials are often less resilient than their more flexible counterparts. Similarly, the chapters collected here raise the possibility that perception supports cognition by flexibly adapting to the requirements imposed by cognitive tasks. Perception may not be stable, but its departures from stability may facilitate rather than hamper its ability to support cognition. Cognitive processes involved in categorization, comparison, object recognition, and language may shift perception, but perception becomes better tuned to these tasks as a result.
(Translated into Japanese as: Spencer-Smith, J., & Goldstone, R. L. (2001). The dynamics of similarity. in A. Ohnishi and H. Suzuki (Eds.) Ruii kara mita kokoro (Similarity-based approach to mind). Tokyo, Japan: Kyoritsu Shuppan.)
Similarity depends on representations of stimuli that are constructed and changed during comparison-making. Specific features may be selectively weighted during comparison, and the features used in a comparison may themselves be a product of the comparison process. Traditional models of similarity and analogy rely on representations that are assumed to exist prior to comparison and are inflexible. Evidence from previous research indicates that weighting of features in similarity judgments may vary dynamically during processing (Goldstone, 1994; Goldstone & Medin, 1994). SIAM (Goldstone, 1994), a model providing an account of dynamic weighting, is discussed. Additional studies indicate that features may be developed or introduced during similarity judgments. A methodology for examining process-oriented models that may account for flexible representations is proposed.
A continuum between purely isolated and purely interrelated concepts is described. A concept is interrelated to the extent that it is influenced by other concepts. Methods for manipulating and identifying a concept's degree of interrelatedness are introduced. Relatively isolated concepts are empirically identified by a relatively large use of nondiagnostic features, and by better categorization performance for a concept's prototype than for a caricature of the concept. Relatively interrelated concepts are identified by minimal use of nondiagnostic features, and by better categorization performance for a caricature than a prototype. A concept is likely to be relatively isolated when: subjects are instructed to create images for their concepts rather than find discriminating features, concepts are given unrelated labels, and the categories that are displayed alternate rarely between trials. The entire set of manipulations and measurements supports a graded distinction between isolated and interrelated concepts. The distinction is applied to current models of category learning, and a connectionist framework for interpreting the empirical results is presented.
According to the assumption of monotonicity in similarity judgments, adding a shared feature in common to two items should increase or leave unchanged, but should never decrease, their similarity. Violations of monotonicity are not predicted by feature- or dimension-based models, but can be accommodated by alignment-based models. According to alignment-based models, when structured displays are compared, the parts of one compared display must be aligned, or placed in correspondence with the parts of the other display. In two experiments, evidence for nonmonotonicities is obtained that is generally, although not entirely, consistent with the alignment-based model SIAM (Similarity as Interactive Activation and Mapping; Goldstone, 1994). The primary assumption of the model is that the calculation of similarity involves an interactive activation process whereby correspondences between the parts of compared displays mutually and concurrently influence each other. As SIAM predicts, the occurrence of nonmonotonicities depends on the perceptual similarity of features and the duration of presented comparisons.
Categorical perception is a phenomenon in which people are better able to distinguish between stimuli along a physical continuum when the stimuli come from different categories than when they come from the same category. In a laboratory experiment with human subjects, we find evidence for categorical perception along a novel dimension that is created by interpolating (i.e. morphing) between two randomly selected Bézier curves. A neural network qualitatively models the empirical results with the following assumptions: 1) hidden "detector" units become specialized for particular stimulus regions with a topologically structured competitive learning algorithm, 2) simultaneously, associations between detectors and category units are learned, and 3) feedback from the category units to the detectors causes the detectors to become concentrated near category boundaries. The particular feedback used, implemented in an “S.O.S. network,” operates by increasing the learning rate to detectors that are neighbors to a detector that produces an improper categorization.
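The first modeling assumption, detectors specializing through topologically structured competitive learning, can be sketched as follows (the stimuli, rates, and detector count are assumptions for illustration; this toy omits the category units and the S.O.S. feedback that concentrates detectors near the boundary):

```python
# Minimal competitive-learning sketch (illustrative only, not the published
# S.O.S. network): detectors on a 1-D morph continuum specialize for the
# stimulus regions they win; topological neighbors of the winner move too.
import random

def train_detectors(stimuli, n_detectors=4, epochs=200, rate=0.1, seed=0):
    rng = random.Random(seed)
    detectors = [rng.random() for _ in range(n_detectors)]
    for _ in range(epochs):
        for s in stimuli:
            # winner = detector closest to the presented stimulus
            w = min(range(n_detectors), key=lambda k: abs(detectors[k] - s))
            detectors[w] += rate * (s - detectors[w])
            # topological neighbors are nudged toward the stimulus, more weakly
            for nb in (w - 1, w + 1):
                if 0 <= nb < n_detectors:
                    detectors[nb] += 0.3 * rate * (s - detectors[nb])
    return sorted(detectors)

# Stimuli cluster at the two ends of the morph continuum.
dets = train_detectors([0.1, 0.15, 0.85, 0.9])
```

After training, detectors settle near the stimulus clusters at the ends of the continuum; in the full model, category feedback would additionally pull detectors toward the boundary between the clusters.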
Most studies of the categorization of emotions test the prototype model against the classical model, concluding that the prototype model offers the better explanation. Prototype models, as with all similarity-based models, posit that categorization depends on the similarity between the instance to be categorized and the category representation. However, we find that emotion similarity judgments and categorization judgments sometimes diverge. Specifically, information about changes in a person's status and/or potency is weighted more heavily in categorization decisions than it is in similarity decisions. We argue that a knowledge-based model, rather than a similarity-based model, offers the best account of emotion categorization when information about status and potency changes is available.
The experiments examined processes by which analyzing reasons may influence attitude judgments. Participants made multiple liking judgments on sets of stimuli that varied along six a priori dimensions. In Study 1, the stimulus set consisted of 64 cartoon faces with six binary-valued attributes (e.g. a straight versus crooked nose). In Study 2, the stimuli were 60 digitized photographs from a college yearbook that varied along six dimensions uncovered through multi-dimensional scaling. In each experiment, half of the participants were instructed to think about the reasons why they liked each face before making their liking rating. Participants' multiple liking ratings were then regressed on the dimension values to determine how they weighted each dimension in their liking judgments. Results support a process whereby reasoning leads to increased variability and inconsistency in the weighting of stimulus information. Results are discussed with respect to Wilson's model of the disruptive effects of reasoning on attitude judgments (e.g. Wilson, Dunn, Kraft, & Lisle, 1989).
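The regression logic can be sketched on simulated data (the weights, noise level, and design below are hypothetical, not values from the study): with binary-coded attributes, ordinary least squares recovers how heavily a rater weighted each dimension.

```python
# Hypothetical rater: liking = weighted sum of six binary face attributes + noise.
import numpy as np

rng = np.random.default_rng(0)
true_weights = np.array([2.0, -1.0, 0.5, 0.0, 1.5, -0.5])  # assumed, for illustration
X = rng.integers(0, 2, size=(64, 6)).astype(float)  # 64 faces x 6 binary attributes
ratings = X @ true_weights + rng.normal(0.0, 0.1, size=64)  # noisy liking judgments

# Regress ratings on attribute codes to estimate the rater's dimension weights.
est, *_ = np.linalg.lstsq(X, ratings, rcond=None)
```

In the study's terms, greater variability in such estimated weights across judgment blocks would indicate the inconsistency in weighting attributed to analyzing reasons.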
Subjects were shown simple objects and were asked to reproduce the colors of the objects. Even though the objects remained on the screen while the subjects reproduced the colors and the objects' shapes were irrelevant to the subjects' task, subjects' color perceptions were influenced by the shape category of an object. For example, objects that belonged to categories with redder objects were judged to be more red than identically colored objects belonging to another category. Further experiments showed that the object categories that subjects use, rather than being fixed, depend on the objects to which subjects are exposed.
In the first part of this article, empirical evidence is reviewed that suggests a substantial amount of flexibility and context-sensitivity in people's judgments of similarity. Four examples of flexible similarity from our laboratory are considered in detail. In the second part of the article, evidence for relatively constrained, invariant similarity assessments is considered. In the final section, a resolution to these apparently contradictory views on similarity is proposed. Assessments of similarity are used to make inferences from one entity to another. In some situations, flexible similarity is needed to tailor inferences to one's knowledge of the entities and their relations. In other situations, particularly those in which specific knowledge is missing or unavailable, a relatively constant similarity is needed to establish generally permissible inferences. Thus, the flexibility and stability of similarity may reflect its different cognitive uses.
Research and theory in decision making and in similarity judgment have developed along parallel paths. We review and analyze phenomena in both domains that suggest that similarity processing and decision making share important correspondences. The parallels are explored at the level of empirical generalizations and underlying processing principles. Important component processes that are shared by similarity judgments and decision making include generation of alternatives, recruitment of reference points, dynamic weighting of aspects, creation of new descriptors, development of correspondences between items, and justification of judgment.
Four experiments investigated the influence of categorization training on perceptual discriminations. Ss were trained according to 1 of 4 different categorization regimes. Subsequent to category learning, Ss performed a same-different judgment task. Ss' sensitivities (d's) for discriminating between items that varied on category-relevant or category-irrelevant dimensions were measured. Evidence for acquired distinctiveness (increased perceptual sensitivity for items that are categorized differently) was obtained. One case of acquired equivalence (decreased perceptual sensitivity for items that are categorized together) was found for separable, but not integral, dimensions. Acquired equivalence within a categorization-relevant dimension was never found for either integral or separable dimensions. The relevance of the results for theories of perceptual learning, dimensional attention, categorical perception, and categorization is discussed.
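The d' (d-prime) sensitivity measure used in same-different tasks comes from standard signal detection theory: the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal sketch of that computation (the function name and the example rates are illustrative, not from the study):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: d' = z(H) - z(FA),
    where z is the inverse of the standard normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# e.g. 69% correct "different" responses on different trials,
# 31% "different" responses on same trials (false alarms)
sensitivity = d_prime(0.69, 0.31)
```

Higher d' means the two items are more perceptually discriminable; acquired distinctiveness would show up as a larger d' for cross-category pairs after training.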
The relation between similarity and categorization has recently come under scrutiny from several sectors. The issue provides an important inroad to questions about the contributions of high-level thought and lower-level perception in the development of people's concepts. Many psychological models base categorization on similarity, assuming that things belong in the same category because of their similarity. Empirical and in-principle arguments have recently raised objections to this connection, on the grounds that similarity is too unconstrained to provide an explanation of categorization, and similarity is not sufficiently sophisticated to ground most categories. Although these objections have merit, a reassessment of evidence indicates that similarity can be sufficiently constrained and sophisticated to provide at least a partial account of many categories. Principles are discussed for incorporating similarity into theories of category formation.
The question of “what makes things seem similar?” is important both for the pivotal role of similarity in theories of cognition and for an intrinsic interest in how people make comparisons. Similarity frequently involves more than listing the features of the things to be compared and comparing the lists for overlap. Often, the parts of one thing must be aligned or placed in correspondence with the parts of the other. The quantitative model with the best overall fit to human data assumes an interactive activation process whereby correspondences between the parts of compared things mutually and concurrently influence each other. An essential aspect of this model is that matching and mismatching features influence similarity more if they belong to parts that are placed in correspondence. In turn, parts are placed in correspondence if they have many features in common and if they are consistent with other developing correspondences.
Measurements of similarity have typically been obtained through the use of rating, sorting, and perceptual confusion tasks. In the present paper, a new method of measuring similarity is described, in which subjects rearrange items so that their proximity on a computer screen is proportional to their similarity. This method provides very efficient data collection. If a display has n objects, then, after subjects have rearranged the objects (requiring slightly more than n movements), n(n-1)/2 pairwise similarities can be recorded. As long as the constraints imposed by two-dimensional space are not too different from those intrinsic to psychological similarity, the technique appears to offer an efficient, user-friendly, and intuitive process for measuring psychological similarity.
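The efficiency claim above rests on simple combinatorics: once the n items have final screen coordinates, every one of the n(n-1)/2 inter-item distances is available at once. A minimal sketch of extracting those pairwise dissimilarities from arranged positions (item names and coordinates are hypothetical):

```python
from itertools import combinations
from math import dist

def pairwise_dissimilarities(positions):
    """Given a mapping of item -> (x, y) screen position,
    return all n(n-1)/2 pairwise Euclidean distances,
    keyed by item pair. Larger distance = less similar."""
    return {(a, b): dist(positions[a], positions[b])
            for a, b in combinations(sorted(positions), 2)}

# e.g. a subject's final arrangement of three fruit pictures
pos = {"apple": (0, 0), "cherry": (3, 4), "banana": (30, 40)}
d = pairwise_dissimilarities(pos)
# 3 items yield 3*(3-1)/2 = 3 pairwise distances
```

In practice the distances would then be normalized or fed into multidimensional scaling, but the point of the method is that n placements yield a full similarity matrix.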