Experts warn that AI may reshape the teenage brain



Artificial intelligence is reshaping the workplace, and it is increasingly finding its way into the hands of adolescents and children.

From homework help to chatting with AI "friends," free versions of tools like ChatGPT are easy for young users to access. These AI chatbots, built on large language models (LLMs), generate human-like responses, which has raised anxiety among parents, teachers and researchers.

A 2024 survey by the Pew Research Center found that 26% of teenagers aged 13 to 17 say they have used ChatGPT for their schoolwork, double the rate of a year earlier. Awareness of the chatbot among teens also rose to 79% in 2024.

Regulators have taken notice. In September, the Federal Trade Commission ordered seven companies, including OpenAI, Alphabet and Meta, to explain how their AI chatbots can affect children and adolescents.

In response to the mounting scrutiny, OpenAI announced in the same month that it would launch a dedicated ChatGPT experience with parental controls for users under the age of 18 and develop tools to better estimate a user's age. The company said it will automatically direct minors to a "ChatGPT experience with age-appropriate policies."

The risks of children using AI chatbots

However, some experts worry that early exposure to AI, especially for the youngest users of the technology, can have negative consequences for how children and adolescents think and learn.

A preliminary 2025 study from researchers at MIT's Media Lab examined the cognitive cost of using an LLM to write an essay. Participants aged 18 to 39 were asked to write an essay and were assigned to one of three groups: one that could use an AI chatbot, another that could use a search engine, and a third that relied entirely on their own knowledge.

The convenience of having this tool today will come at a cost later on, and it will most likely accumulate.

Nataliya Kosmyna

Research Scientist, MIT

The paper, which has not yet been peer-reviewed, found that brain connectivity "systematically scaled down with the amount of external support," according to the study.

"The brain-only group exhibited the strongest, widest-ranging networks, the search engine group showed intermediate engagement, and LLM assistance elicited the weakest (neural) coupling," according to the study.

Ultimately, the study suggests that relying on AI chatbots can leave people feeling less ownership of their work and lead to "cognitive debt," a pattern of deferring mental effort in the short term that can reduce creativity and leave users more vulnerable to manipulation over time.

"The convenience of having this tool today will come at a cost later on, and it will most likely accumulate," said Nataliya Kosmyna, a research scientist at MIT Media Lab who led the study. The findings also suggested that relying on LLMs could come at a significant cost to critical thinking.

Young people in particular may be at risk of negative cognitive and developmental effects from using AI chatbots. To help reduce these risks, researchers agree that it is important for anyone, especially young people, to build skills and knowledge on their own before relying on AI tools to complete tasks.

"Even if you do not become an expert, (first) develop the skill yourself," Kosmyna said.

Doing so makes it easier to catch inconsistencies and AI hallucinations, where incorrect or fabricated information is presented as fact, and it also helps support the development of critical thinking.

"For young children … I imagine that it is very important to limit the use of generative AI, as they need more opportunities to think critically and independently," said Pilyoung Kim, a professor of child psychology at the University of Denver.

There are also privacy risks that children may not be aware of, and it is important that they learn to use these tools responsibly and safely, Kosmyna explained. "We need to teach not only computer literacy but (also) AI literacy," she said. "You really need proper technology hygiene."

Children also have a tendency toward anthropomorphism, attributing human characteristics or behaviors to non-human things, Kim said.

"Now we have these machines that talk like a human being," said Kim, which can put children in a vulnerable position. "Simple praise (from) these social robots can really change their behavior," she continued.

Protecting children in the AI era
