Cognitive science has traditionally been organized around the individual as the basic unit of cognition. Despite developments in areas such as communication, human-machine interaction, group behavior, and community organization, an individual-centric approach still heavily dominates both cognitive research and its application. A promising direction for cognitive science is the study of augmented intelligence, or the way social and technological systems interact with and extend individual cognition. The cognitive science of augmented intelligence holds promise in helping society tackle major real-world challenges that can only be discovered and solved by teams made of individuals and machines with complementary skills that can productively collaborate with each other.
One major way that people engage in adaptive problem solving is by imitating others’ solutions. Prominent simulation models have found imperfect imitation advantageous, but the interactions between the amount of copying and other prevalent aspects of social learning strategies remain underexplored. Here, we explore the consequences for a group when its members engage in strategies with different degrees of copying, solving search problems of varying complexity, in different network topologies that affect the solutions visible to each member. Using a computational model of collective problem solving, we demonstrate that the advantage of partial copying is robust across these conditions, arising from its ability to maintain diversity. Partial copying delays convergence generally, but especially in globally connected networks, which are typically associated with diversity loss, allowing more exploration of a problem space. We show that a moderate amount of diversity maintenance is optimal and that strategies can be adjusted to find that sweet spot.
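The core mechanism described above, agents copying only a fraction of a better-performing neighbor's solution while also learning individually, can be illustrated with a minimal simulation. This is a sketch under our own illustrative assumptions (a fully connected network, a random lookup-table fitness function, and all function names and parameter values chosen here for exposition), not the paper's actual model:

```python
import random

def make_score_fn(n_bits, seed=0):
    """Random lookup table standing in for a rugged search problem."""
    rng = random.Random(seed)
    table = {i: rng.random() for i in range(2 ** n_bits)}
    def score(solution):
        return table[int("".join(map(str, solution)), 2)]
    return score

def partial_copy(own, neighbor, copy_fraction, rng):
    """Copy a random subset of the neighbor's bits into the agent's solution.
    copy_fraction = 1.0 is full imitation; lower values preserve diversity."""
    new = own[:]
    for i in range(len(own)):
        if rng.random() < copy_fraction:
            new[i] = neighbor[i]
    return new

def simulate(n_agents=20, n_bits=8, copy_fraction=0.5, steps=50, seed=1):
    rng = random.Random(seed)
    score = make_score_fn(n_bits, seed)
    # Fully connected network: every agent sees every other agent's solution.
    agents = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_agents)]
    for _ in range(steps):
        best = max(agents, key=score)
        agents = [partial_copy(a, best, copy_fraction, rng) for a in agents]
        # Individual learning: each agent tries flipping one random bit.
        for a in agents:
            i = rng.randrange(n_bits)
            flipped = a[:]
            flipped[i] = 1 - flipped[i]
            if score(flipped) > score(a):
                a[i] = flipped[i]
    return max(score(a) for a in agents)
```

Varying `copy_fraction` between 0 and 1 trades off convergence speed against the diversity that fuels further exploration, which is the trade-off the abstract describes.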
Goldstone, R. L. (2022). The well measured life: Performance, well-being, motivation, and identity in an age of abundant data. Current Directions in Psychological Science, 31(1), 1-9. https://doi.org/10.1177/09637214211053834
Our lives are being measured in a rapidly increasing number of ways and with increasing frequency. These measurements have beneficial and deleterious effects at both individual and social levels. Behavioral measurement technologies offer the promise of helping us to know ourselves better and to improve our well-being by using personalized feedback and gamification. At the same time, they present threats to our privacy, self-esteem, and motivation. At the societal level, the potential benefits of reducing bias and decision variability by using objective and transparent assessments are offset by threats of systematic, algorithmic bias from invalid or flawed measurements. Considerable technological progress, careful foresight, and continuous scrutiny will be needed so that the positive impacts of behavioral measurement technologies far outweigh the negative ones.
How do people use information from others to solve complex problems? Prior work has addressed this question by placing people in social learning situations where the problems they were asked to solve required varying degrees of exploration. This past work uncovered important interactions between groups’ connectivity and the problem’s complexity: the advantage of less connected networks over more connected networks increased as exploration was increasingly required for optimally solving the problem at hand. We propose the Social Interpolation Model (SIM), an agent-based model to explore the cognitive mechanisms that can underlie exploratory behavior in groups. Through results from simulation experiments, we conclude that “exploration” may not be a single cognitive property, but rather the emergent result of three distinct behavioral and cognitive mechanisms, namely, (a) breadth of generalization, (b) quality of prior expectation, and (c) relative valuation of self-obtained information. We formalize these mechanisms in the SIM, and explore their effects on group dynamics and success at solving different kinds of problems. Our main finding is that broad generalization and high quality of prior expectation facilitate successful search in environments where exploration is important, and hinder successful search in environments where exploitation alone is sufficient.
Cognitive science continues to make a compelling case for having a coherent, unique, and fundamental subject of inquiry: What is the nature of minds, where do they come from, and how do they work? Central to this inquiry is the notion of agents that have goals, one of which is their own persistence, who use dynamically constructed knowledge to act in the world to achieve those goals. An agentive perspective explains why a special class of systems have a cluster of co-occurring capacities that enable them to exhibit adaptive behavior in a complex environment: perception, attention, memory, representation, planning, and communication. As an intellectual endeavor, cognitive science may not have achieved a hard core of uncontested assumptions that Lakatos (1978) identifies as emblematic of a successful research program, but there are alternative conceptions according to which cognitive science has been successful. First, challenges of the early, core tenet of “Mind as Computation” have helped put cognitive science on a stronger foundation—one that incorporates relations between minds and their environments. Second, even if a full cross-disciplinary theoretic consensus is elusive, cognitive science can inspire distant, deep, and transformative connections between pairs of fields. To be intellectually vital, cognitive science need not resemble a traditional discipline with its associated insularity and unchallenged assumptions. Instead, there is strength and resilience in the diverse perspectives and methods that cognitive science assembles together. This interdisciplinary enterprise is fragile and perhaps inherently unstable, as the looming absorption of cognitive science into psychology shows. Still, for many researchers, the excitement and benefits of triangulating on the nature of minds by integrating diverse cases cannot be secured by a stable discipline with an uncontested core of assumptions.
Like many other scientific disciplines, psychological science has felt the impact of the big-data revolution. This impact arises from the meeting of three forces: data availability, data heterogeneity, and data analyzability. In terms of data availability, consider that for decades, researchers relied on the Brown Corpus of about one million words (Kučera & Francis, 1969). Modern resources, in contrast, are larger by six orders of magnitude (e.g., Google’s 1T corpus) and are available in a growing number of languages. About 240 billion photos have been uploaded to Facebook, and Instagram receives over 100 million new photos each day. The large-scale digitization of these data has made it possible in principle to analyze and aggregate these resources on a previously unimagined scale. Heterogeneity refers to the availability of different types of data. For example, recent progress in automatic image recognition is owed not just to improvements in algorithms and hardware, but arguably more to the ability to merge large collections of images with linguistic labels (produced by crowdsourced human taggers) that serve as training data to the algorithms. Making use of heterogeneous data sources often depends on their standardization. For example, the ability to combine demographic and grammatical data about thousands of languages led to the finding that languages spoken by more people have simpler morphologies (Lupyan & Dale, 2010). The ability to combine these data types would have been substantially more difficult without the existence of standardized language and country codes that could be used to merge the different data sources. Finally, analyzability must be ensured, for without appropriate tools to process and analyze different types of data, the “data” are merely bytes.
Tump, A. N., Wu, C. M., Bouhlel, I., & Goldstone, R. L. (2019). The evolutionary dynamics of cooperation in collective search. Proceedings of the 41st Annual Conference of the Cognitive Science Society (pp. 883-889). Montreal, Canada: Cognitive Science Society.
How does cooperation arise in an evolutionary context? We approach this problem using a collective search paradigm where interactions are dynamic and there is competition for rewards. Using evolutionary simulations, we find that the unconditional sharing of information can be an evolutionarily advantageous strategy without the need for conditional strategies or explicit reciprocation. Shared information acts as a recruitment signal and facilitates the formation of a self-organized group. Thus, the improved search efficiency of the collective bestows byproduct benefits onto the original sharer. A key mechanism is a visibility radius, where individuals have unconditional access to information about neighbors within a limited distance. For a variety of initial conditions, including populations initially devoid of prosocial individuals, and across both static and dynamic fitness landscapes, we find strong selection pressure to evolve unconditional sharing.
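The recruitment dynamic described above can be illustrated with a toy simulation. In this sketch (our own construction for exposition, not the paper's model; the arena size, speeds, and radius are arbitrary choices), one agent finds a resource patch, and we compare unconditional broadcasting against recruitment limited to agents that pass within the visibility radius of an informed agent:

```python
import math
import random

def simulate_sharing(n_agents=30, share=True, radius=2.0, steps=100, seed=0):
    """Count how many agents end up at a resource patch when the finder
    shares unconditionally vs. when knowledge spreads only via proximity."""
    rng = random.Random(seed)
    patch = (5.0, 5.0)
    agents = [[rng.uniform(0, 10), rng.uniform(0, 10)] for _ in range(n_agents)]
    informed = {0}  # agent 0 has found the patch
    if share:
        informed = set(range(n_agents))  # unconditional broadcast
    for _ in range(steps):
        for i, a in enumerate(agents):
            if i in informed:
                # Move toward the patch.
                dx, dy = patch[0] - a[0], patch[1] - a[1]
                d = math.hypot(dx, dy) or 1.0
                a[0] += 0.2 * dx / d
                a[1] += 0.2 * dy / d
            else:
                # Uninformed agents wander randomly.
                a[0] += rng.uniform(-0.2, 0.2)
                a[1] += rng.uniform(-0.2, 0.2)
        # Visibility radius: agents near an informed agent become informed too.
        informed |= {i for i, a in enumerate(agents)
                     if any(math.dist(a, agents[j]) <= radius for j in informed)}
    return sum(math.dist(a, patch) <= 1.0 for a in agents)
```

Even in the non-broadcast condition, informed agents recruit passersby as they converge on the patch, which is the self-organized grouping the abstract refers to.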
Sloman, S. J., Goldstone, R. L., & Gonzalez, C. (2019). Complex exploration dynamics from simple heuristics in a collective learning environment. Proceedings of the 41st Annual Conference of the Cognitive Science Society (pp. 2818-2824). Montreal, Canada: Cognitive Science Society.
Effective problem solving requires both exploration and exploitation. We analyze data from a group problem-solving task to gain insight into how people use information from past experiences and from others to achieve explore-exploit trade-offs in complex environments. The behavior we observe is consistent with the use of simple, reinforcement-based heuristics. Participants increase exploration immediately after experiencing a low payoff, and decrease exploration immediately after experiencing a high or improved payoff. We suggest that whether an outcome is perceived as “high” or “low” is a dynamic function of the outcome information available to participants. The degree to which the distribution of observed information reflects the true range of possible outcomes plays an important role in determining whether or not this heuristic is adaptive in a given environment.
Most maps of science use a network layout; few use a landscape metaphor. Human users are trained in reading geospatial maps, yet most have a hard time reading even simple networks. Prior work using general networks has shown that map-based visualizations increase recall accuracy of data. This paper compares two renderings of the UCSD map of science: the original network layout and a novel hexmap that uses a landscape metaphor to lay out the 554 subdisciplines grouped into 13 color-coded disciplines of science. Overlaid are HITS metrics that show the impact and transformativeness of different scientific subdisciplines. Both maps support the same interactivity, including search, filter, zoom, panning, and details on demand. Users performed memorization, search, and retrieval tasks using both maps. Results did not show any significant differences in how the two maps were remembered or used by participants. We conclude with a discussion of results and planned future work.