
Outside the Box: Can AI–Human “Collaboratories” Save the World?

Over the past two years, experts and pundits have focused on artificial intelligence’s capacities to store and formulate useful knowledge and to accelerate the production of various kinds of artifacts, from news stories and college essays to computer code, pictures, poems and screenplays. It’s exactly what one would expect from a society focused on consumption and productivity.
AI has already begun to change the world we live in. But the biggest change is yet to come. The machinery of AI belongs to a different order than the generations of increasingly sophisticated machines that have served to facilitate the mass production of marketable items. AI is not just a machine that can produce human-like “thought” with the capacity to learn more quickly and “deeply” than a human. All that is impressive, but on its own, AI will always fail to produce what is most important in our society and economy: human experience. Experience is a combination of four things: perception of context, conscious memory, emotion and muscular reflex. Its meaning comes from and depends on our ongoing interaction. It cannot be captured and formalized by AI.
We have two words for the co-creation of experience: dialogue and collaboration. In the previous installment of “Outside the Box,” ChatGPT brought to our attention the notion of “collaboratories,” which it defined as “spaces where humans and AI work together to solve complex societal issues—whether in business, science, ethics, or the arts. These spaces could be testbeds for how we co-create knowledge and values in practice.”
Wishing to delve further into the intriguing possibility, I formulated the following prompt:
“If we try to imagine how William Wulf’s notion of collaboratories could be applied to today’s world in which everyone will have access to AI tools, what are the possible directions?
It’s not just a question of connecting people with common interests and some form of AI. Instead, the idea is to create a culture of communication in which humans and machines can work together to explore complex questions and find ways of negotiating and agreeing upon meaning.
Meaning itself is not reducible to a simple correlation between two or more identifiable things or concepts. Meaning contains a range of nuance that reflects the variable configurations of context. The future I’m trying to imagine implies a behavioral evolution both for the human community and for AI as an actor within the human community. It means imagining a new framework for human decision-making and the communication that enables it.
How can we begin to analyze the facets of this historical evolution?”
“William Wulf’s concept of ‘collaboratories’—virtual environments where geographically distributed individuals work together—holds great relevance for our current era, especially with the advent of sophisticated AI tools that can enhance human collaboration. To explore the future of collaboratories with AI integration, it’s essential to consider both the **human behavioral evolution** and the **evolution of AI’s role** in meaning-making, decision-making, and communication.
Here’s an analysis that touches on these two dimensions:
1. **Evolving Human Behavior in Collaborative Environments**
As AI becomes more integrated into daily workflows, human behavior will likely shift in significant ways:
–       **Enhanced Collective Intelligence**: AI tools, especially those with powerful natural language processing (NLP) and analytical capabilities, can become active participants in group discussions, analyzing vast amounts of data, identifying patterns, and offering suggestions in real-time. As a result, human collaborators may develop new forms of dialogue where AI augments but does not dominate the conversation. This could lead to a more **refined form of collective intelligence**, where humans bring creativity, intuition, and contextual understanding, while AI contributes structured insights and computational prowess.
–       **Dynamic, Cross-Disciplinary Collaboration**: One critical behavior change will be the lowering of barriers between disciplines. In Wulf’s original collaboratory idea, specialists in different fields would connect to work on common problems. With AI’s ability to digest complex knowledge from multiple domains, more people—whether experts or novices—can contribute meaningfully to topics outside their formal expertise. The idea of ‘multi-modal’ AI systems, which can understand and work across multiple types of data (text, image, numerical), will allow for a broader range of participation and insights in collaborative spaces.
–       **Iterative Meaning-Making**: Humans, particularly in intellectual, scientific, or creative fields, engage in a process of **negotiating meaning** that is often contextual and nuanced. AI, by being integrated into collaboratories, can offer ‘suggestions’ or model various interpretations, but humans must continually assess and refine these insights. Future collaboratories might develop tools where AI learns over time to interpret evolving meanings and adjust its understanding of **cultural, emotional, and situational contexts** based on human feedback, creating a kind of **feedback loop** between human intuition and AI’s data-driven logic.
2. **Evolving AI’s Behavior in Meaning-Making and Decision-Making**
AI’s evolving role will not merely be that of a passive tool; instead, its behavior will need to adjust to nuanced forms of human interaction, reflecting the subtleties of meaning and context:
–       **Context-Aware AI**: One of the challenges of AI today is the difficulty in grasping **contextual nuances** in communication, especially when the meaning of terms or ideas shifts depending on the environment, conversation, or culture. AI models of the future would need to develop deeper layers of **contextual learning**—where they don’t just analyze textual data or patterns but also understand the socio-cultural, emotional, and historical dimensions that frame human meaning. This requires AI systems to evolve from current ‘general’ models to **context-specific collaborators** that can interact differently depending on the cultural, disciplinary, or even personal contexts of their human counterparts.
–       **Adaptive AI Behavior**: AI behavior will need to be highly adaptive to the styles of communication and decision-making used by human teams. In a collaborative environment, AI could detect patterns in how different team members communicate and respond accordingly. For example, in fast-paced, highly technical discussions, the AI may prioritize speed and precision, while in more creative or exploratory settings, it might prioritize generating possibilities and fostering open-ended debate.
–       **Negotiation of Meaning**: Unlike the more rigid correlation-based systems of today, AI in the future collaboratories could **assist in the negotiation of meaning**, suggesting possible interpretations or connections and helping teams explore different angles on complex questions. For instance, in situations where multiple interpretations of data or text exist, AI could present these differing interpretations for discussion, allowing humans to weigh in on subtle differences. Over time, through interaction, the AI learns from the preferences and judgments of its human collaborators, refining its understanding of meaning as it applies in various contexts.
–       **Decision-Making with Nuance**: AI systems will also need to evolve from making decisions based solely on logical or statistical outcomes to a more **nuanced decision-making process** that accounts for ethical considerations, social norms, and long-term consequences. In complex problems—whether in science, politics, or business—the best solution is often not purely algorithmic but must include **ethical, social, and emotional factors**. Future AI could become part of a **decision-making deliberation process** where it presents options, but humans collaboratively weigh these options based on a mix of technical and humanistic concerns.
3. **Cultural Evolution and the Integration of AI in Collaborative Spaces**
The broader societal and cultural shifts will shape how humans interact with AI within these collaboratories:
–       **Shift in Educational Priorities**: Education systems will need to focus not just on AI literacy (understanding how AI works) but also on **collaborative skills**, critical thinking, and ethics in AI use. Courses on **how to collaborate with machines**—interpreting AI outputs, understanding its limitations, and knowing when human judgment must override AI—will become essential. This would foster a culture of **co-responsibility** between human and machine actors in collaborative projects.
–       **AI as a Mediator in Public Discourse**: In the public sphere, AI could serve as a **mediator** in polarized debates, helping facilitate structured, productive dialogues. By analyzing the range of perspectives and proposing compromise solutions or reframing contentious issues in less adversarial terms, AI could play a role in **depolarizing public discourse** and guiding collaborative problem-solving. AI could identify areas of **consensus** even in highly divided conversations, suggesting places where agreement is possible, and helping participants reframe their points of contention in ways that lead to constructive dialogue.
–       **Ethical and Governance Frameworks**: As AI becomes more embedded in collaboratories, there will be a need for robust **ethical frameworks** and **governance structures**. AI can be immensely powerful, but unchecked, it can reinforce biases or exacerbate inequalities. Future systems will need built-in mechanisms for **accountability and transparency** to ensure that both AI and its human collaborators adhere to democratic norms and values. This involves open audits of how AI models make decisions, regular updates to reflect changing societal norms, and the inclusion of diverse perspectives in the development of these technologies.
Conclusion: Toward a Hybrid Model of Collaboration
The future of collaboratories, where humans and AI work together to explore complex questions, hinges on creating **hybrid systems** where both human intuition and machine logic are valued. Human behavior will need to evolve toward greater openness to AI’s insights, while AI systems will have to develop a far more **context-sensitive, adaptive, and nuanced** approach to meaning-making and decision-making.
The key will be fostering a **culture of trust** between humans and machines, ensuring that AI supports human decision-making while respecting the fluid and evolving nature of meaning in human communication. By focusing on creating these collaborative environments, society could take a meaningful step toward addressing the challenges of the modern world while building more resilient, informed, and democratically engaged citizens.”
I recommend studying these suggestions generated by AI, since they helpfully and, I would say, optimistically describe feasible outcomes. But their optimism needs to be carefully tempered. Are reliable “feedback loops,” and AI’s capacity to adapt, really possible? And if so, might they not lead to new risks of cultural sclerosis? If we accept that “AI learns from the preferences and judgments of its human collaborators” and thus refines “its understanding of meaning as it applies in various contexts,” might not that application become abusive? Shouldn’t we try to maintain a critical distance from already expressed “preferences and judgments”? It’s the eternal problem of how much trust we put in what AI claims to “understand.”
The concept of multiple collaboratories similar to what ChatGPT describes seems to me eminently worth exploring. Please feel free to share your thoughts on this question by writing to us at [email protected]. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue. It’s a way of launching the first FO° collaboratory.
[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]
[Lee Thompson-Kolar edited this piece.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.
