At the dawn of the generative AI era: a modest social science perspective
A guest post by Anastas Vangeli, Assistant Professor at the School of Economics and Business at the University of Ljubljana. Join us for a S(c)iesta with Anastas on Wednesday (April 5) at 5pm CET!
Anastas Vangeli is an old friend of our foundation. He was our third ever S(c)iesta guest in May 2020, when he joined us to discuss China and its reaction to the COVID pandemic. Today, he is here to tell us about generative AI from the perspective of a social scientist. Below the paywall is a Zoom link for our S(c)iesta with Anastas on Wednesday.
These ideas were initially discussed during a NAWA-supported seminar as part of the New Projects Seminar Series at the Graduate School for Social Research, Polish Academy of Sciences, Warsaw, on May 17, 2023.
As a social scientist working at the intersection of global politics, economy, and society, I am humbled to discuss AI, especially with audiences that have deep technical knowledge, like the majority of S(c)iesta attendees and KANTAROT Substack readers. Something of a layperson in technical terms, I have a keen interest in the diverse perspectives on the socio-economic, cultural, political, and managerial dimensions of AI, and in the competing socio-technical imaginaries of an AI-dominated future. The wider impacts and contexts of the generative AI revolution, from the arms race between tech corporations and governments, to the inevitable securitization and politicization of AI discourse amidst geopolitical uncertainties, to the lack of adequate regulation, are well deserving of long, in-depth discussions.
However, rather than looking at the big picture, I would frame this text as a (meta)reflexive exercise inspired by the impact of generative AI on the everyday lived experience of a scholar in the social sciences: witnessing a disruption of the field of knowledge production and a major hysteresis effect, defined as the disjunction between the rapid structural shift caused by generative AI advancements and the embodied resistance to that change exhibited by many in the academic field, or at least in the social sciences.
AI disrupts research
My approach to generative AI's impact is tactically optimistic but strategically cautious. This perhaps sets me apart from much of academia, which more often than not embraces an alarmist posture. Academia is less accepting of non-human creations than marketing or media, where, as some have argued, the proliferation of AI tools is treated with dramatic opportunism, almost as the next crypto bull run. Concerns about ethics, and fears that AI can make misleading or factually incorrect statements that appear logical and correct, are widespread. However, a growing number of academic AI influencers are advocating for the ethical and responsible use of generative AI to help with structure, flow, and argumentation.
New generative AI tools for academics – making their way even to Euronews – are revolutionizing the research process, from idea to literature search to research design and actual writing. Advanced AI tools help researchers find relevant references, fix language errors, and restructure their papers faster. Considering other AI tools like advanced speech-to-text converters (for transcribing interviews and field notes) or chatbots tailored for specific PDFs (which give me hope that one day I will be able to go through the piles of literature I have hoarded), the case for tactical enthusiasm is self-evident.
However, AI's transformative potential in research raises important questions about research quality and output quantity. This is where some cautiousness may be warranted.
AI on research quality and quantity
Access to generative AI tools could democratize knowledge production. This can be unsettling and threatening, especially for those who have spent sleepless nights honing their skills, often doing tedious labor that can now be easily outsourced to the machine. However, if most tools remain accessible, more underprivileged researchers may catch up. Grammarly and Quillbot are leveling the field in English-language writing, making proofreaders obsolete (generative AI can also help translate content from other languages; on a separate note, there is a growing debate about using AI tools to preserve linguistic diversity).
Academics are competitive, and new generative AI tools could speed up the race to the top – or to the bottom. Overall, the quality of research outputs is likely to increase (even if only superficially). Top research may improve incrementally, but the most dramatic improvements will be felt among the lower ‘strata’ of the global academic hierarchy. Generative AI also means being able to do things more quickly, saving scholars (and cognitive workers in general) precious time. So what shall we do with the extra time? For many, the answer would be something along the lines of: finish that paper that is way overdue. Generative AI, in that sense, is likely to further propel the culture of unsustainable workloads.
In such a setting, the missed opportunity would be to see AI as merely an augmented typewriter or calculator, rather than as a technology that can help human intelligence explore uncharted territories and tackle previously unsolvable problems. As overproduction rises, only AI will be able to process the massive volume of publications, making navigation of the academic literature dependent on AI. In the natural sciences, the "end of the paper" is a foreseeable scenario, but even the boldest social scientists would not dare think in such a direction: after all, a significant portion of human knowledge depends on narratives and on interpreting the social world.
Social science matters
Generative AI affects communication, collaboration, and knowledge production, making it a social technology. If AI can already (or will soon be able to) narrate and interpret better than an average social science scholar, what happens next? Humans' ability to perceive social reality and distinguish truth from untruth (without getting into philosophical debates about what truth is and whether it exists) remains unique. Social scientists use these skills to understand human behavior in all its contexts, nuances, dynamics, and complexities. However, AI engineers' concept papers and manifestos show little social science influence, which is perhaps the result of siloed thinking (on both sides).
Paradoxically, overproduction in an age of generative AI may further reinforce this isolation, not only between scientific disciplines, but between and within societies in general. I sympathize with Jaron Lanier, who fears that generative AI can lead to the rise of “mutual unintelligibility.” The endless possibilities of generative AI allow for a virtually unlimited supply of personalized content, from the earliest age onward – up to personalized cultural artifacts, concepts, and even languages spoken only by one person and their chatbot. Is the end, or at least the dramatic weakening, of intersubjectivity between humans a possibility? Ultimately, addressing the various challenges posed by the advent of generative AI – from ethics and responsible practices in education and beyond, to data privacy, to issues such as the transformation of businesses and industries, employment redundancy, the proliferation of disinformation, and mental health impacts – will require ever closer involvement of the social sciences. AI-driven social theory development is also in sight. Could this be done while we still speak the same language? Or will we need an AI intermediary to assist us?
The S(c)iesta with Anastas will take place on Wednesday (April 5) at 5pm Macedonian (CET) time. Below is a Zoom link for our supporters:
KANTAROT is a reader-supported publication. To receive new posts and support our work, consider becoming a free or paid subscriber.