I think it is one vector that can contribute to identification through fingerprinting. While the data brokers are aggregating data from this vector, they are also aggregating data from every other vector within their capability. The data sets from each vector are cross-referenced to create unique fingerprint IDs for each individual believed to appear in the data, and every vector the brokers are able to add increases the overall accuracy of the model they use to connect those IDs to real-world people. These data sets don't take many resources to store while they gain monetary and strategic value over time, so they will be duplicated across many actors. If this single data point were all the brokers were getting, it wouldn't be an issue, but it's the sum of all data points being provided to brokers that brings growing risk. This isn't the first or last attempt to add mandatory data collection. Each time we add a mandatory data point, we're extending the runway for brokers to get their operations off the ground. The threat actors were already headed to Roblox and Discord, but now the tools available to them are slightly more sophisticated, increasing the chances of their success.
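To make the cross-referencing point concrete, here is a minimal sketch with entirely made-up data: each attribute a broker observes (location, age signal, browser) filters the pool of candidate records, so every added vector shrinks the set of people a profile could belong to. The records and field names are hypothetical, purely for illustration.

```python
# Illustrative sketch (hypothetical data): how cross-referencing
# data points narrows an "anonymity set" toward a single person.
records = [
    {"zip": "94110", "age_bracket": "18+", "browser": "Firefox"},
    {"zip": "94110", "age_bracket": "18+", "browser": "Chrome"},
    {"zip": "94110", "age_bracket": "13-15", "browser": "Chrome"},
    {"zip": "73301", "age_bracket": "18+", "browser": "Chrome"},
]

def candidates(observed, keys):
    """Return the records consistent with what has been observed so far."""
    return [r for r in records if all(r[k] == observed[k] for k in keys)]

observed = {"zip": "94110", "age_bracket": "18+", "browser": "Chrome"}
for n in range(1, 4):
    keys = list(observed)[:n]
    print(keys, "->", len(candidates(observed, keys)), "possible matches")
# Each added vector shrinks the candidate pool; here three attributes
# are already enough to single out one record.
```

Scale the toy pool up to millions of records and the same mechanism is why one more mandatory data point, harmless on its own, matters in aggregate.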
Providing false data for your age would help reduce the reliability of the data for brokers, but I believe it would take collective action to make that significant. Most people are going to provide accurate data, so the number of people trying to poison it is low enough that the brokers still get good data, along with new data showing who wants to poison broker data.
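The arithmetic behind that can be sketched quickly; the poison rate below is an assumption picked for illustration, not a measured figure.

```python
# Back-of-envelope sketch (hypothetical rate): why a small share of
# users lying about their age barely dents a broker's data quality.
total_users = 1_000_000
poison_rate = 0.02  # assume only 2% of users enter a false age

accurate = total_users * (1 - poison_rate)
poisoned = total_users * poison_rate

print(f"accurate records: {accurate:.0f} ({1 - poison_rate:.0%})")
# The poisoners are themselves a new signal: accounts whose stated age
# conflicts with other vectors can simply be flagged.
print(f"records flaggable as likely poisoned: {poisoned:.0f}")
```

Unless the lying fraction gets large, the broker keeps a mostly clean data set plus a bonus label for privacy-conscious users.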
I separate the legal effects from the real-world effects. Online devices are exposed to all jurisdictions worldwide at once. Laws in those jurisdictions are subject to constant change and reinterpretation, while the data can move between jurisdictions in a moment. Data brokers accept the risk of breaking laws when the risk/reward calculation looks favorable to them, the same as publicly traded corporations do. This is the same reason they will continue to collect data on minors even where the law tells them not to. It takes just one event for a targeted individual to have their life changed forever. The law may try to punish the broker, but it will rarely restore the victim. State and other large actors are going to collect the data regardless of what the law says. If they're ever caught, they can fall back on a differing interpretation, claims of employee incompetence, fall guys, or just saying "big oops."
Friend, thank you for the dialogue as well. You're getting downvoted because the votes reflect our community's emotions on the topic, regardless of the quality or relevance of the comment.
Honestly, I re-read the legislation, and while I'm still not convinced that something like this is a bad idea, all the specifics of it are.
Like, ultimately, it's a user-set flag, stored locally, and it would give users more choice in content filtering. That could be useful for parents and non-parents alike.
Most people are going to provide accurate data so the amount of people trying to poison is low enough that the brokers still get good data along with new data showing who wants to poison broker data.
You're right, and the design of this law basically ensures that. I was picturing it implemented (at least in a user-friendly UI) as a dropdown showing the four provided age brackets. Instead, it is required to be a numeric or date-of-birth input, seemingly without allowing a default value, which means users are more likely to enter accurate data. Similarly, the stored age information isn't required to use the brackets provided, which means a lazy or immoral developer will store the exact age rather than abstracting it as the law suggests. I had misinterpreted 1798.500(b) and thought that the suggested abstraction of age data was required.
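For what the suggested-but-not-required abstraction could look like in practice, here is a sketch: collapse the date of birth into a coarse bracket at the input handler and store only the bracket. The bracket boundaries below are illustrative assumptions, not quoted from the statute.

```python
from datetime import date

# Hypothetical sketch of the abstraction the law suggests but (as I read
# it) does not require: the exact DOB never leaves the input handler;
# only a coarse bracket is stored. Bracket cutoffs here are assumptions.
def age_bracket(dob: date, today: date) -> str:
    # Compute age in whole years, accounting for whether the
    # birthday has occurred yet this year.
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    if age < 13:
        return "under 13"
    if age < 16:
        return "13-15"
    if age < 18:
        return "16-17"
    return "18+"

# Only this return value would ever be persisted or shared.
print(age_bracket(date(2010, 6, 1), date(2025, 1, 1)))  # -> 13-15
```

A default value (say, defaulting everyone to "18+" until changed) would further blunt the signal's value to brokers, since a defaulted field carries almost no information.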
If something like this is to be implemented, it needs to use a more abstracted format (ideally with a default value), and if it's going to be written into law, it should use a better, more granular system of content filtering than a simple age-based metric.