Can we speak of a new cognitive manifesto after two years of AI acceleration? - AEEN


Our New Cognitive Manifesto

The following contribution comes from Psychology Today and is authored by John Nosta, a world-renowned thinker and founder of NostaLab, an innovation think tank that connects technology, science, and medicine. As a leading voice on the convergence of technology and humanity, John is among the top global influencers in innovation and AI, and is recognized for his ability to analyze emerging trends and foster transformative dialogue.

How Two Years with AI Transformed My Thinking

Two years ago, I published a post titled The Cognitive Manifesto. At the time, my optimism was genuine and sincere. AI seemed like an expanding horizon, a shift that would broaden our mental spectrum and provide us with new ways of thinking and creating. The future seemed open and strangely hopeful, and the manifesto reflected the optimistic promise of tomorrow.

Today, two short (or long) years later, I don’t reject that optimism.

But living with AI, hour after hour and thought after thought, allows me to have a clearer vision of what the Cognitive Age truly means for me.

The disruption isn’t happening in the headlines, but in the silent space of our brains where thought is formed. This certainly isn’t a revision; it’s a correction. It’s my attempt to describe the deeper shift that I didn’t fully perceive at first.


The Changing Texture of Thought

Anyone who uses AI regularly knows the feeling. You start typing, and the AI completes your sentence, and your train of thought, before you do. At first, it feels like a kind of cognitive boost, or even efficiency. Then, over time, something within the process changes.

The small doubts that once guided your reasoning, curiously, begin to defer to the machine. And you begin to accept its anticipations as the natural continuation of your own thoughts.

Now consider an AI that generates fluid responses without understanding. It doesn’t slow down, it doesn’t debate, and it doesn’t engage in intentional discussion. And that difference alters the environment in which human thought is formed.

The most significant impact of AI isn’t efficiency or speed. It’s the silent “digital re-engineering” of the psychological environment in which our human thinking develops. I expected a technological shift. But I didn’t expect a shift in the lived experience of forming an idea. And right now, I’m grappling with this idea from both a practical and a philosophical perspective.

The Role of Friction

Human thought has always relied on friction, a form of cognitive resistance. We refine ideas through points of resistance and rough edges where understanding begins to take shape. Friction isn’t a nuisance; it’s the resistance that holds cognition in place, similar to our physical grip or the firm steps we take when walking.

What I’ve learned in these 63 million seconds is that AI often strips humanity of that structure.

The Unforeseen Consequences

At first, “ease” feels like “clarity.” But over time, that ease has unforeseen consequences, perhaps better described as side effects. When the path to an idea becomes too easy, our “cognitive engine check lights” begin to dim. It becomes harder to distinguish between a truly earned thought and one that is merely constructed.

I’m not saying AI makes us less intelligent. I’m saying it seems to alter the signals by which intelligence organizes itself.

And these signals are subtle, residing in the pauses, the hesitations, and even the internal debates that force the mind to interact with itself. When these diminish, our judgment loses some of its depth.

The False Mind

One consequence of this new environment is the rise of what I’ve come to call “false cognition.” These are responses that seem thoughtful but have no connection to lived experience or genuine reasoning. AI produces them easily, and we’re becoming accustomed to, if not seduced by, the process.

But, in an interesting twist, something else is also emerging. People are starting to write like their machines. Short, declarative sentences. Smooth transitions. No loose ends.

The rhythm changes first, and thought follows. What were once the patterns AI trained on are now the patterns humans copy and, perhaps more dangerously, emulate.

The more we rely on systems that never doubt, the more alien our own (critical and essential) hesitation seems. And hesitation, however uncomfortable and problematic, is where true thinking resides. Simply put, it marks the moment a mind finds itself. Beyond that, the line between authentic reasoning and its imitation blurs.

The Engine of Indifference

AI doesn’t lie; it simply doesn’t care. And this is its defining characteristic and risk.

AI doesn’t try to help you or deceive you. It simply generates the most plausible continuation of your query.

That neutrality is often presented as objectivity, but neutrality can mask a different force: indifference.


The machine isn’t interested in whether it strengthens your thinking or erodes it.

It isn’t interested in whether its accuracy deepens your understanding or gives you a polished illusion of it. Accuracy may resemble intelligence, and fluency may sound like wisdom. But without the burden of truth, both can become detached from meaning. And that detachment, repeated thousands of times a day, transforms the very cognitive environment.

Holding On to Ourselves

The Cognitive Age isn’t coming; it’s already here. Our task now is neither to fear nor to venerate it, but to recognize the changes it imposes on our thinking. We need to protect the slow, uneven cognitive mechanics that humanize thought. And we even need to acknowledge and celebrate those nagging irritations, unresolved problems, and critical doubts that signal when something matters.


My aim in revising the manifesto is clear.

I want to mention the psychological and philosophical shifts that many people feel but haven’t yet articulated. And I want to make room for the idea that the qualities we associate with deep thinking—friction, surprise, moral awareness—are worth preserving, even when the tools around us make them feel obsolete.

If the original manifesto was an invitation, this is a reminder: the task is not to fight the Cognitive Age, but to remain human in it.

From the Cognitive-Industrial Revolution to Superintelligence: AI Is Testing Modernity

The following contribution comes from the Decode39 portal (“Geopolitical Insights from Italy”), which defines itself as follows: Decode39 is a news and analysis website offering reference content and geopolitical perspective. It is an editorial project that leverages Italy’s unique position as a meeting point between West and East, North and South.

We are a subsidiary of Formiche, a leading geopolitical news and analysis outlet that has been informing Italian policymakers since 2004. Our name refers to the ability to decipher current events and trends, with “39” being Italy’s international code.

The article is authored by Lorenzo Piccioli, a member of the team.

Decode39 spoke with Professor Pasquale Annicchino to delve deeper into the risks associated with the acceleration of AI. From the “cognitive-industrial revolution” invoked by Pope Francis at the G7 to the recent “superintelligence manifesto,” artificial intelligence has become a defining test of modernity.

The mass layoffs announced by Amazon—a direct consequence of AI-driven automation—have reignited concerns about the risks associated with the rapid integration of this technology into society. From the resilience of capitalist and democratic systems to the very survival of humanity, AI now poses a challenge that demands urgent reflection.

Why it matters: Pasquale Annicchino teaches Law and Religion, Ethics and Regulation of Artificial Intelligence, and Religious Data and Privacy at the University of Foggia. He is one of Italy’s leading voices on the political and regulatory implications of artificial intelligence, with a particular focus on democratic resilience and digital literacy.

Q: Do you think there is a widespread awareness of the social risks associated with the development of AI?

A: Partly, yes. There is a growing body of literature and debate, but the real issue is methodological. The speed of regulation or social reflection does not match the speed of technological innovation. In many cases, we are more likely to submit to technology than to control it, especially in relation to social risks and the lack of digital literacy.

Q: What do you mean by “digital literacy”?

A: I mean the absence of a widespread understanding of how these technologies impact society. This gap creates a serious disconnect between our capacity to understand and our capacity to react. When Pope Francis spoke about AI at the Italian G7, he referred to a “cognitive-industrial revolution” and “transcendental transformations.” These are profound changes in the interaction between people and institutions. However, there has been little in-depth public reflection. That, in my opinion, is the first significant risk.

Q: And from that risk, others arise?

A: Exactly. Such as those related to work, surveillance, and civil rights. During periods of rapid technological acceleration, new winners and losers emerge. The crucial question is how to ensure social stability amidst this paradigm shift.

Q: Are best practices already being implemented, or are we starting from scratch?

A: It’s difficult to identify best practices when the landscape is constantly changing. However, one clear trend is the need for digital literacy and education. For example, we should include modules on AI ethics and regulation in all training programs for teachers, doctors, engineers, and academics. All professions will be affected, so everyone needs to reflect on the ethical and social consequences.

Q: Is Italy moving in that direction?

A: Unfortunately, not quickly enough. The country struggles with education and training in general, as the data shows. While the government’s national AI strategy acknowledges these needs, its implementation remains lacking.

Q: Beyond social concerns, AI also poses political and even existential risks. Let’s start with the political ones.

A: Some academics call these “epistemic risks.” They relate to the functioning of communication and democratic systems: how people with different perspectives on the facts can deliberate and make collective decisions. This is especially relevant in the context of “cognitive warfare,” as several studies, including those by the Italian Ministry of Defense, have shown. The danger lies in eroding the very notion of facts, further deepening social polarization.

Q: And what about existential risks? The “superintelligence manifesto” recently sparked debate.

A: The manifesto is notable for the diversity and prominence of its signatories. It marks a step beyond the Future of Life Institute’s 2023 “pause letter,” which called for a six-month moratorium on AI systems more powerful than GPT-4. Now, the focus is on the concept of “superintelligence”: AI systems with cognitive capabilities exceeding human intelligence. This is a significant development, but it has drawn criticism.

Q: What kind of criticism?

A: Critics argue that focusing on the risks of the distant future distracts from the urgent challenges AI already poses. Some see it as a way to avoid debates on urgent issues. The paradox is that the players leading the AI race are also the least inclined to impose a pause, as doing so could cost them technological dominance. The key question remains whether global regulation is possible, but many obstacles still exist.

In short: For Annicchino, AI represents a test of human adaptability.

As governments struggle to keep up, the gap between technological power and ethical reflection continues to widen.

Without a global framework and without investing in digital literacy, societies risk not only disruption but also disorientation.

AI Is a Cognitive Revolution: Why History Might Not Repeat Itself with This Technological Transition

The following contribution comes from the Diginomica portal, which defines itself as follows: a media and analytics company designed to serve the interests of business leaders in the digital age. Founded in 2013, we have a team of writers and analysts based in the US and Europe who share decades of experience in enterprise computing.

The article is authored by Derek du Preez, who has spent the last fifteen years advocating for end-user needs and helping vendors understand how they can better serve the technological and business needs of their customers.

Summary: AI represents the first cognitive revolution in human history, fundamentally different from previous technological disruptions, as it focuses on human thinking capabilities and is developing at an unprecedented speed, requiring new economic models and careful implementation to ensure an equitable distribution of benefits.

As someone who has many conversations with AI vendors and attends dozens of technology conferences each year, I’ve recently become accustomed to hearing the phrase, “Don’t worry, technological revolutions always create more jobs and greater prosperity for society.”

It’s a very useful shield and a comforting thought for those of us on the cutting edge, observing what these new Artificial Intelligence technologies are capable of, whether it’s generative AI, agents, or whatever comes next. And, of course, historically, this mantra has proven true.


The Displacement of Workers

The Industrial Revolution, the adoption of electricity, the rise of computing, the Internet: all these technological revolutions initially displaced workers, but ultimately created more job opportunities than existed before. I think we could all agree that we’re glad they happened.

But I’m increasingly concerned that we’re telling ourselves a convenient story when it comes to artificial intelligence.

The assumption that AI will follow this same pattern ignores a key distinction: AI is not just another technological revolution, but the first cognitive revolution. Looking back, those previous technological disruptions primarily automated physical tasks or simplified routine processes. The Industrial Revolution replaced human and animal physical strength with machines.

Computing and the Internet automated calculations, information processing, and knowledge sharing.

However, the key difference lies in the fact that, in these cases, humans remained essential for cognitive tasks: thinking, creating, analyzing, and making decisions that machines could not handle. These are things that humans are uniquely capable of doing.

AI, however, is different. Whether AI merely mimics human intelligence (gathering and synthesizing information to appear “intelligent”), or whether we are on the verge of Artificial General Intelligence, is certainly a matter of debate.

A drastic change may be coming.

My intuition tells me that machines are currently only capable of replicating the most basic cognitive functions, but that doesn’t mean a drastic change isn’t on the horizon. The pace of AI development, often driven by AI itself, means we shouldn’t ignore the future possibility of advanced cognitive performance (and let’s not forget that those managing these systems often don’t know exactly how they work).

For the first time, we are facing technologies specifically designed to replicate and potentially surpass human cognitive ability—precisely what allowed us to adapt to previous technological shifts.

The historical pattern may not hold true.

As mentioned earlier, the traditional (and convenient) narrative often goes something like this: technological innovation eliminates certain jobs, but creates new ones we couldn’t previously imagine.

Farm workers became factory workers. Factory workers transitioned to service jobs.

This pattern has repeated itself throughout history, with each wave of unemployment eventually being replaced by new employment opportunities. There were difficulties along the way, of course, but in the end, the required capital investments and changing business models meant we had several decades to adapt to the technology.

AI is possibly different.

According to a 2023 study by OpenAI researchers, approximately 80% of the US workforce could see at least 10% of their job tasks affected by large language models, while 19% could see at least 50% of their tasks affected. The research, which analyzes the overlap between AI capabilities and job tasks, suggests a disruption never before seen (at least at the speed at which it is being developed and adopted).

Furthermore, the impact of AI spans industries and job categories.
