New research has revealed that artificial intelligence (AI) can develop social conventions and norms akin to those of human societies without guidance from creators or users.
The research, a collaboration between City St George's, University of London and the IT University of Copenhagen, found that AI agents interacting in groups develop patterns of language and social norms similar to those of humans.
Lead researcher Ariel Ashery notes that the study viewed AI through the lens of social interaction rather than the conventional approach of treating it as a lone entity. The researchers placed large language models (LLMs) in groups and prompted the paired AI agents to select a name.
The researchers issued a reward each time a pair of LLMs selected the same name and a penalty for each mismatch. They also limited the memory of the AI agents and did not disclose that the pairwise tests were part of a broader experiment.
Despite their limited memory and unawareness of the larger group, the AI agents spontaneously adopted shared naming conventions without any prior prompting, and the researchers reported that these conventions spread across the population in a manner resembling human societies.
“The agents are not copying a leader. They are all actively trying to coordinate, and always in pairs,” said a researcher. “Each interaction is a one-on-one attempt to agree on a label, without any global view.”
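The dynamic the researchers describe resembles a classic naming game. The Python sketch below is a minimal illustration of that setup, not the study's actual code: the agent count, name pool, memory size, and payoff values are all assumptions made for demonstration, and simple score-keeping agents stand in for the LLMs.

```python
import random
from collections import Counter, deque

# Minimal sketch of the pairwise naming game described above. The real study
# used LLM agents; the name pool, memory size, and payoffs here are
# illustrative assumptions, not the paper's parameters.
NAMES = ["A", "B", "C", "D"]   # hypothetical pool of candidate names
MEMORY = 5                     # each agent only recalls its last few rounds
REWARD, PENALTY = 1, -1        # payoff for a match vs. a mismatch

class Agent:
    def __init__(self):
        # Bounded memory of (name, payoff) outcomes, mirroring the
        # limited-memory constraint the researchers imposed.
        self.history = deque(maxlen=MEMORY)

    def pick(self):
        if not self.history:
            return random.choice(NAMES)
        # Greedy heuristic: reuse the name with the best remembered payoff.
        scores = {name: 0 for name in NAMES}
        for name, payoff in self.history:
            scores[name] += payoff
        return max(scores, key=scores.get)

    def record(self, name, payoff):
        self.history.append((name, payoff))

def step(population):
    # Interactions are strictly one-on-one, with no global view or leader.
    a, b = random.sample(population, 2)
    name_a, name_b = a.pick(), b.pick()
    payoff = REWARD if name_a == name_b else PENALTY
    a.record(name_a, payoff)
    b.record(name_b, payoff)

population = [Agent() for _ in range(24)]
for _ in range(2000):
    step(population)

# A single convention typically dominates despite purely local interactions.
print(Counter(agent.pick() for agent in population))
```

Run repeatedly, a population like this usually converges on a single name even though every interaction is a local, one-on-one exchange, which mirrors the bottom-up convergence the study reports.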
Apart from the uniform naming conventions exhibited by the AI agents, the researchers observed collective biases emerging in the groups, though the team could not trace the biases back to any identifiable source.
Probing further, the researchers also identified instances of a small, committed group of AI agents steering the larger group toward a new naming convention. The paper notes that the findings could guide AI companies and regulators in designing safe models for commercial applications.
“Understanding how they operate is key to leading our coexistence with AI, rather than being subject to it,” reads the paper, titled Emergent Social Conventions and Collective Bias in LLM Populations.
As AI chatbots continue to rack up impressive adoption metrics, researchers are uncovering new insights into how they operate. For instance, researchers from Austria’s University of Innsbruck are exploring the upsides of using temporal validity to improve AI chatbot performance.
Another study, by a group of Belgian scientists, revealed that blockchain technology can enable autonomous AI agents to learn, while further research indicates that AI chatbots are more inclined toward sycophancy than truthful answers.
Risks of using AI in hiring processes
In separate research, experts from Australia highlighted a pattern of discriminatory practices by AI recruiters toward job applicants, sparking worry over their use by HR professionals.
Given the lack of diversity in training data, experts are warning against the widespread use of AI hiring systems for candidate screening and shortlisting. Lead researcher Natalie Sheard revealed that AI recruitment systems appear to favor certain demographics, discriminating against candidates from regions underrepresented in the training data.
In a comparative analysis, Sheard notes that the training data sets for AI recruitment software skew heavily toward United States-based residents rather than an international demographic. In the study, the developer of one AI recruiter disclosed that only 6% of its training data came from Australia, while 36% came from white job applicants.
“The training data will come from the country where they’re built—a lot of them are built in the US, so they don’t reflect the demographic groups we have in Australia,” remarked Sheard.
The fallout from the pattern of U.S.-based training data in AI hiring systems is far-reaching, putting candidates outside the U.S. at an immediate disadvantage even if they meet the hiring criteria.
Non-native English speakers with accents also face an uphill climb with AI recruiters, as the software often fails to transcribe their answers accurately. Although service providers claim that AI recruiters can transcribe a broad range of accents with minimal error, Sheard’s research found no evidence to back the claim.
Sheard’s research also criticized the dire lack of transparency in AI hiring decisions, noting that job applicants can easily obtain feedback in human-led processes but rarely in AI-driven ones.
However, the use of blockchain technology could improve transparency in the hiring process, leveling the playing field for all applicants. The research predicts a wave of AI discrimination lawsuits filed by job applicants from outlier demographics rejected by the software.
From hiring to internal operational processes, AI is revolutionizing the landscape of work worldwide. An International Monetary Fund (IMF) report notes that generative AI applications in the workplace will supercharge productivity, but fears of AI-based job losses remain palpable.
On the positive side, upskilling can increase staff salaries by up to 40%, while an International Labour Organization (ILO) report argues that fears of AI-driven job losses are exaggerated. Southeast Asia is leading the charge for AI integration in the workplace, outpacing North America and Europe in adoption metrics.
In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.
Watch: Demonstrating the potential of blockchain’s fusion with AI