• PlzGivHugs@sh.itjust.works
    4 days ago

    This is a compelling argument, but do you think it's really a significant attack vector? It's already illegal to share or leak (even unintentionally) this data, and from my understanding, if you choose to set your age to a lower bracket via this process, companies sharing (also collecting? I'm currently unclear on this.) the data would also be breaking CCPA and possibly COPPA, and the companies are required to provide additional data-privacy measures under the California Civil Code.

    Yes, these laws will be broken, but will it be on a significant enough scale, and with reliable enough information, to be worthwhile? Since this bans the use of data from those who set their age low, wouldn't it likely shrink the data-collection pool overall, not to mention incentivize adults to poison the data? For those who do illegally collect this data anyway, is it that much of an advantage compared to just asking the user's age upon reaching the site, as most sites currently do? Beyond that, when these illegally operating sites do leak their data, will it be a realistic attack vector? Like I said to another commenter, collating data this way seems extremely impractical and unreliable for predators. Wouldn't those who want to seek out children just go to existing spaces where they can connect directly, like Roblox or Discord? Don't get me wrong, I don't like data collection, but compared to everything else, this seems like a relatively unreliable and unhelpful data point, especially given all the legal restrictions.

    Edit: Also, I'd be interested to hear whether your opinion changes if even storing this value is illegal, if unnecessary data collection as a whole is banned, and/or if this value has a legally defined default of the 18+ bracket and doesn't have to be made obvious in account setup.

    Edit 2: Also, I wanted to say thanks for responding genuinely and with a well-articulated argument. I know the Fediverse tends to be very… unfriendly… towards anything that may impact privacy, and towards government regulation in general, so your civility is really appreciated.

    • makeitwonderful@lemmy.today
      2 days ago

      I think it is one vector that can contribute to identification through fingerprinting. While the data brokers are aggregating data from this vector, they are also aggregating data from every other vector within their capability. The data sets from each vector are cross-referenced to create a unique fingerprint ID for each individual believed to be present in the data, and every vector the brokers are able to add increases the overall accuracy of the model they use to connect those IDs to real-world people. These data sets don't take many resources to store while gaining monetary and strategic value over time, so they will be duplicated across many actors. If this single data point were all the brokers were getting, it would not be an issue, but it's the sum of all data points being provided to them that brings growing risk. This isn't the first or last attempt to add mandatory data collection; each time we add a mandatory data point, we're extending the runway for brokers to get their operations off the ground. The threat actors were already headed to Roblox and Discord, but now the tools available to them are made slightly more sophisticated, increasing the chances of their success.
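The cross-referencing described above can be sketched in a few lines. This is a toy illustration, not any broker's actual pipeline; the records, field names, and values are all invented for the example. The point is only that each extra known attribute shrinks the set of candidates a profile could belong to:

```python
# Toy illustration: each extra data point shrinks the "anonymity set"
# of records a broker's profile could match. All data here is made up.
records = [
    {"region": "CA", "browser": "Firefox", "age_bracket": "13-15"},
    {"region": "CA", "browser": "Firefox", "age_bracket": "18+"},
    {"region": "CA", "browser": "Chrome",  "age_bracket": "13-15"},
    {"region": "NY", "browser": "Firefox", "age_bracket": "13-15"},
]

def anonymity_set(known, data):
    """Return the records consistent with everything the broker knows."""
    return [r for r in data if all(r[k] == v for k, v in known.items())]

# Region alone leaves several candidates...
print(len(anonymity_set({"region": "CA"}, records)))  # 3
# ...adding the browser narrows it...
print(len(anonymity_set({"region": "CA", "browser": "Firefox"}, records)))  # 2
# ...and adding a mandated age bracket can single out one individual.
print(len(anonymity_set(
    {"region": "CA", "browser": "Firefox", "age_bracket": "13-15"},
    records)))  # 1
```

Real fingerprinting models are probabilistic rather than exact matching, but the direction is the same: a new mandatory field is one more filter that cuts the candidate pool.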

      Providing false data for your age would help reduce the reliability of the data for brokers, but I believe it would take collective action to make this significant. Most people are going to provide accurate data, so the number of people trying to poison it is low enough that the brokers still get good data, along with new data showing who wants to poison broker data.

      I separate the legal effects from the real-world effects. Online devices are exposed to all jurisdictions worldwide at once. Laws in those jurisdictions are subject to constant change and interpretation, while the data can move between jurisdictions in a moment. Data brokers accept the risk of breaking laws when the risk/reward calculation looks favorable to them, the same as publicly traded corporations do. For the same reason, they will continue to collect the data of minors even if the law tells them not to. It takes just one event for a targeted individual to have their life changed forever; the law may try to punish the broker, but it will rarely restore the victim. State and other large actors are going to collect the data regardless of what the law says. They can fall back on a differing interpretation, claims of employee incompetence, fall guys, or just saying "big oops" if they're ever caught.

      Friend, thank you for the dialogue as well. You're getting downvoted because the votes reflect our community's emotions on the topic, regardless of the quality or relevance of the comment.

      • PlzGivHugs@sh.itjust.works
        2 days ago

        Honestly, I re-read the legislation, and while I'm still not convinced something like this is a bad idea, all the specifics are.

        Ultimately, it's a user-set flag, stored locally, that would give users more choice in content filtering. That could be useful for parents and non-parents alike.

        Most people are going to provide accurate data, so the number of people trying to poison it is low enough that the brokers still get good data, along with new data showing who wants to poison broker data.

        You’re right, and the design of this law basically ensures that. I had pictured it being implemented (at least in a user-friendly UI) as a dropdown showing the four provided age brackets. Instead, it is required to be a numeric or date-of-birth input, seemingly without allowing a default value, which means users are more likely to enter accurate data. Similarly, stored age information isn’t required to use the provided brackets, which means a lazy or immoral developer will store the exact age rather than abstracting it as the law suggests. I had misinterpreted 1798.500(b) and thought that abstracting age data as suggested was required.

        If something like this is to be implemented, it needs to use a more abstracted format (ideally with a default value), and if it's going to be written into law, it should be a better, more granular system of content filtering than a simple age-based metric.