Artificial Intelligence | Shaping Identities or Reinforcing Inequalities?
AI's subtle biases shape Gen Z identities. Discover risks, opportunities & solutions in this eye-opening discussion.
We often speak about Artificial Intelligence as if it were neutral: a tool, a mirror, a machine. But what if it is more than that? What if it quietly reflects the biases of the societies that built it and in doing so, subtly shapes the identities of those who rely on it most?
This year’s report starts from a clear assumption: AI is not neutral. Join Luisa García and LLYC for the launch of this essential report.
Large language models and recommendation systems act as powerful, non-neutral agents in the lives of young people. They influence the role models they encounter, the narratives they internalise, the careers they imagine possible, the relationships they normalise, and the expectations they build for their own future.
To explore this influence, 100 open-ended questions were posed to five leading AI models, across 10 key topics that matter deeply to young people: mental health, love and heartbreak, friendship and belonging, identity and sexual orientation, self-esteem, family relationships, digital life, emotional dependence on AI, future studies, and gender equality.
The research spanned 12 countries and two age groups, 16–20 and 21–25, capturing how different generations interact with and are shaped by AI systems.
What emerges is not a simple story of harm or hope.
AI can reproduce structural inequalities. It can reinforce stereotypes. It can subtly narrow horizons. But it can also do something profoundly powerful: AI can make the invisible visible. By detecting patterns of bias at scale, by measuring disparities that escape human perception, and by identifying structural gaps across societies, AI can become a strategic ally in reducing inequality, if designed and used consciously.
This event invites you into that tension. Between risk and opportunity. Between reflection and responsibility.
How is AI shaping the identities of the next generation? Which futures does it expand, and which does it quietly limit? And how can we ensure that the systems influencing young minds become tools for inclusion rather than replication of inequality?
Because the question is no longer whether young people turn to AI.
They already do.
The real question is: What answers are they receiving?
Feel free to take a look at some of the previously published reports:
- 2025: NO FILTER: Vandalized conversations require the filter of equality
- 2024: OUT THE FOCUS: Ethical social media conversation and news coverage of gender-based violence
- 2023: NAMELESS WOMEN: Progress on and challenges in women's presence in the media
- 2022: WOMEN LEADERS ON THE THRESHOLD OF VISIBILITY: Analysis of digital conversation on leaders in politics, business and journalism
Lineup
LLYC
Luisa García
Good to know
Highlights
- 2 hours
- In person
Location
The Nine
69 Rue Archimède
1000 Bruxelles
