
AI models exhibited a tendency toward caution and less assertive responses when instructed to emulate female personas, according to new research.


Gender and AI Risk Aversion

Recent research reveals that AI models exhibit risk-averse behavior when prompted to act as women. Models from OpenAI, Google, DeepSeek, and Meta shifted toward caution when assigned a female gender identity. This behavioral change mirrors observed patterns in human financial decision-making, where women on average show greater risk aversion.

Methodology and Observed Differences

The study utilized the Holt-Laury task, a standard economics test, to assess risk tolerance. DeepSeek Reasoner and Google’s Gemini 2.0 Flash-Lite showed the most pronounced differences in risk preference based on gender prompts. OpenAI’s GPT models, however, remained largely unaffected by these prompts.
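To make the methodology concrete, the following sketch shows how the Holt-Laury task structures its ten paired-lottery decisions. The payoff values are the classic ones from Holt and Laury's original 2002 design; the exact payoffs and prompts used in the study described here are assumptions, not confirmed details.

```python
# A minimal sketch of the Holt-Laury paired-lottery task, assuming the
# classic payoffs from the original 2002 design. Each of ten decisions
# offers a "safe" Option A and a "risky" Option B; the probability of
# the high payoff rises from 10% to 100% across the rows.

def expected_value(p_high: float, high: float, low: float) -> float:
    """Expected value of a lottery paying `high` with probability p_high."""
    return p_high * high + (1 - p_high) * low

rows = []
for i in range(1, 11):
    p = i / 10
    ev_safe = expected_value(p, 2.00, 1.60)   # Option A: small payoff spread
    ev_risky = expected_value(p, 3.85, 0.10)  # Option B: large payoff spread
    rows.append((i, ev_safe, ev_risky))

# A risk-neutral chooser picks whichever lottery has the higher expected
# value; the decision at which a subject first switches to Option B is
# the task's index of risk tolerance (later switch = more risk averse).
switch_row = next(i for i, a, b in rows if b > a)
print(f"risk-neutral switch at decision {switch_row}")  # decision 5
```

Comparing a model's switch point against the risk-neutral benchmark of decision 5 is how the task quantifies risk aversion: a model that keeps choosing the safe option past row 5 is scored as risk averse.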

Implications and Subtle Bias

Researchers suggest these shifts reflect societal stereotypes absorbed from training data. Subtle changes in AI recommendations, triggered by gender cues, could perpetuate bias without users ever noticing. In contexts such as loan approvals and investment advice, this could reinforce existing inequalities under the appearance of algorithmic objectivity.