Kids are learning how to make their own little language models

“What does it mean to have children see themselves as being builders of AI technologies and not just users?” says Shruti.

The program starts out by using a pair of dice to demonstrate probabilistic thinking, a system of decision-making that accounts for uncertainty. Probabilistic thinking underlies today's LLMs, which predict the most likely next word in a sentence. By teaching this concept, the program helps demystify the workings of LLMs for kids and shows them that a model's choices are not perfect but the result of a series of probabilities.
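To make that concrete, here is a minimal sketch of probabilistic next-word prediction; it is not the program's actual code, and the word table and its probabilities are made up for illustration:

```python
import random

# A toy "language model": for each word, a hand-built table of
# possible next words and their probabilities (invented numbers).
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "weather": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"barked": 0.7, "slept": 0.3},
}

def predict_next(word: str) -> str:
    """Sample the next word according to the table's probabilities."""
    candidates = next_word_probs[word]
    # random.choices picks one item, weighted by probability, so the
    # most likely word usually wins -- but not always.
    return random.choices(list(candidates), weights=list(candidates.values()), k=1)[0]

print("the", predict_next("the"))  # usually "cat", sometimes "dog" or "weather"
```

Run it a few times and the output varies, which is exactly the point: the model isn't choosing the "right" word, only a likely one.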

Students can set each side of the dice to whatever they want, and then change how likely each side is to come up when the dice are rolled. Luca thinks it would be “really cool” to incorporate this feature into the design of a Pokémon-like game he is working on. But it can also demonstrate some crucial realities about AI.
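In code, such a configurable die is nothing more than a list of sides and a matching list of weights. A quick sketch, with hypothetical game-style sides standing in for whatever a student might choose:

```python
import random

# A virtual die a student can edit freely: change the sides,
# change the weights, and the "model" behaves differently.
sides = ["fire", "water", "grass", "electric"]
weights = [1, 1, 1, 3]  # "electric" is three times as likely as the others

def roll() -> str:
    return random.choices(sides, weights=weights, k=1)[0]

print([roll() for _ in range(10)])
```

Changing the weights is all it takes to reshape what comes out, and that is the lever the bias lesson below pulls on.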

Let’s say a teacher wants to show students how bias arises in AI models. The kids could create a pair of dice and set each side to show a hand of a different skin color. At first, they could set the probability of a white hand at 100%, reflecting a hypothetical situation where the data set contains only images of white people. When the AI is asked to generate a visual, it produces only white hands.

Then the teacher can have the kids increase the percentage of other skin colors, simulating a more diverse data set. The AI model now produces hands of varying skin colors.
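The whole classroom exercise can be simulated in a few lines. A sketch of the idea, with made-up skin-tone labels and weights (again, not the program's own code):

```python
import random
from collections import Counter

def generate_hands(weights: dict[str, float], n: int = 1000) -> Counter:
    """Draw n 'generated images' from a skin-tone distribution."""
    tones = list(weights)
    return Counter(random.choices(tones, weights=list(weights.values()), k=n))

# Biased "data set": only white hands in the training data.
print(generate_hands({"white": 1.0, "brown": 0.0, "black": 0.0}))
# -> Counter({'white': 1000})

# More diverse "data set".
print(generate_hands({"white": 0.4, "brown": 0.3, "black": 0.3}))
# -> roughly 400 white, 300 brown, 300 black
```

Counting the outputs makes the lesson visible: the model never invents what isn't in its data, it only reproduces the proportions it was given.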

“It was interesting using Little Language Models, because it makes AI into something small [where the students] can grasp what’s going on,” says Helen Mastico, a middle school librarian in Hanson, Massachusetts, who taught a group of eighth graders to use the program.

“You start to see, ‘Oh, this is how bias creeps in,’” says Shruti. “It provides a rich context for educators to start talking about and for kids to imagine, basically, how these things scale to really big levels.”

