

Honestly I prefer that to people who become violent, or who spend tons of money without realizing it. That one is just funny (and at least he still remembered she was his girlfriend lol)
Yea… unfortunately this is not happening. Back in the days when AI didn’t only mean LLMs or generative algorithms, people tried to predict crime with algorithms. Those systems have been shown to exhibit the same biases humans do, because they learnt from humans. One of many examples: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing, but I think I’ve read stories like this from well before 2010.
But wait, there’s more! This isn’t something new at all. Before police used algorithms, even before computers existed, police forces trained dogs to help them in their missions (defending people, detecting drugs…). Guess what? The dogs also learnt the biases of their trainers. https://daily.jstor.org/the-police-dog-as-weapon-of-racial-terror/ and https://www.npr.org/sections/thetwo-way/2011/01/07/132738250/report-drug-sniffing-dogs-are-wrong-more-often-than-right are examples for both categories.
So yeah. Unbiased AI, or dogs, or unicorns won’t happen for as long as the humans training them are biased.