    Technology

    Too much social media gives AI chatbots ‘brain rot’

By Earth & Beyond · November 3, 2025 · 3 min read

Image: a person views the Llama 3 page on a phone, with the Meta AI logo blurred in the background. Llama 3 is a large language model owned by tech firm Meta. Credit: MauriceNorbert/Alamy

    Artificial intelligence (AI) chatbots are worse at retrieving accurate information and reasoning when trained on large amounts of low-quality content, particularly if the content is popular on social media1, finds a preprint posted on arXiv on 15 October.

    In data science, good-quality data need to meet certain criteria, such as being grammatically correct and understandable, says co-author Zhangyang Wang, who studies generative AI at the University of Texas at Austin. But these criteria fail to capture differences in content quality, he says.

    Wang and his colleagues wanted to see how large language models (LLMs) are affected by training on low-quality data, which they defined as short, popular social-media posts or posts containing superficial or sensationalist content. They looked at how these data affected the models' reasoning, their retrieval of information from long inputs, the ethics of their responses and their personality traits.
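
    As a purely illustrative sketch (the preprint's exact filtering criteria are not spelled out here), a "junk" filter along these lines could combine post length with engagement; the field names and thresholds below are placeholder assumptions, not the authors' definitions.

    # Illustrative sketch only: the preprint's exact "junk" criteria are not given
    # here, so the field names and thresholds below are assumptions.

    def is_junk(post, max_words=30, min_engagement=500):
        """Flag a post as low quality if it is very short but highly popular."""
        word_count = len(post["text"].split())
        engagement = post.get("likes", 0) + post.get("retweets", 0)
        return word_count < max_words and engagement > min_engagement

    posts = [
        {"text": "you will NOT believe what this AI just did!!!",
         "likes": 12000, "retweets": 4000},
        {"text": "Our new preprint measures how attention sparsity changes across "
                 "transformer layers on 12 long-context benchmarks; link below.",
         "likes": 40, "retweets": 7},
    ]

    junk = [p for p in posts if is_junk(p)]
    clean = [p for p in posts if not is_junk(p)]
    print(f"{len(junk)} junk post(s), {len(clean)} clean post(s)")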

    The team reports that models fed low-quality data skip steps in their reasoning process, or do not use reasoning at all, so that they provide incorrect information about a topic or, when the authors posed a multiple-choice question, pick the wrong answer. In data sets that mixed junk and high-quality data, the negative effect on reasoning grew as the proportion of junk data increased. The work has not been peer-reviewed.

    The findings support a long-held tenet of AI: the importance of data quality, says Mehwish Nasim, an AI researcher at the University of Western Australia in Perth. “Even before people started to work on large language models, we used to say that, if you give garbage to an AI model, it’s going to produce garbage,” she adds.

    Garbage in, garbage out

    Wang and his colleagues used one million public posts on the social-media platform X from an existing database to train open-source models: Llama 3, an LLM from tech firm Meta in Menlo Park, California, and three versions of Qwen, developed by Alibaba in Hangzhou, China. Qwen is a reasoning model, like DeepSeek’s R1 model and OpenAI’s o1, meaning it is designed to produce reasoning steps to arrive at an answer to a user query. Llama, however, is an instruction-tuned language model and its reasoning ability is less advanced.
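
    For a sense of what continued training on such posts involves in practice, here is a minimal sketch using the open-source Hugging Face transformers and datasets libraries; the model checkpoint, data and hyperparameters are placeholders and are not the authors' actual configuration.

    # Minimal sketch, not the authors' pipeline: continued pretraining on
    # social-media text with Hugging Face transformers and datasets.
    from datasets import Dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer, TrainingArguments)

    model_name = "Qwen/Qwen2.5-0.5B"  # small placeholder; the paper used Llama 3 and several Qwen variants
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    posts = ["example social-media post text ..."] * 8  # stand-in for the one million X posts

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=256)

    dataset = Dataset.from_dict({"text": posts}).map(tokenize, batched=True,
                                                     remove_columns=["text"])
    collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # causal LM objective

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="junk-tuned", per_device_train_batch_size=2,
                               num_train_epochs=1, logging_steps=10),
        train_dataset=dataset,
        data_collator=collator,
    )
    trainer.train()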

    To determine the model’s personality traits, the team used psychology questionnaires. Before training on junk data, Llama exhibited agreeableness, extroversion, conscientiousness, openness and a bit of narcissism, say the authors. But as Llama was fed more junk data, its negative traits were amplified, and psychopathy emerged, according to one of the questionnaires.

    To adapt and improve models over time, researchers can adjust the prompt instructions. When the team tried doing this for a Llama model trained exclusively on junk data, they found that it only partially improved performance, as did increasing the amount of non-junk data used for training. The model also continued to skip steps when the team tried to encourage it to reflect on and fix failures in its reasoning, suggesting that different methods to mitigate the effect of junk data might be needed.
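
    As a rough illustration of the reflect-and-fix prompting described above (the authors' exact prompts are not reproduced here), a follow-up turn can ask a model to critique its own answer; the generate() helper is a hypothetical stand-in for any chat-model API.

    # Rough illustration only: the authors' actual reflection prompts are not given
    # in the article, and generate() is a hypothetical stand-in for any chat model.

    def generate(messages):
        """Replace with a real chat-model call; returns the assistant's reply text."""
        return "(model reply would appear here)"

    question = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
                "than the ball. How much does the ball cost?")

    # First pass: ask for step-by-step reasoning.
    first_answer = generate([{"role": "user", "content": f"Answer step by step:\n{question}"}])

    # Second pass: ask the model to review and repair its own reasoning.
    revised_answer = generate([
        {"role": "user", "content": f"Answer step by step:\n{question}"},
        {"role": "assistant", "content": first_answer},
        {"role": "user", "content": "Check your reasoning above for skipped or incorrect "
                                    "steps, then give a corrected final answer."},
    ])
    print(revised_answer)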
