Pro@programming.dev to Technology@lemmy.world · English · edited 11 months ago
Meta plans to replace humans with AI to automate up to 90% of its privacy and integrity risk assessments, including in sensitive areas like violent content (text.npr.org)
Cross-posted to: fuck_ai@lemmy.world, Technology@programming.dev, fuck_bigtech@europe.pub
ouch@lemmy.world · 11 months ago
What about false positives? Or a process to challenge them?
But yes, I agree with the general idea.
tarknassus@lemmy.world · 11 months ago
They will probably use the YouTube model - “you’re wrong and that’s it”.
Beej Jorgensen@lemmy.sdf.org · 11 months ago
> Or a process to challenge them?
😂😂😂😔