An engineer got curious about how his iLife A11 smart vacuum worked and monitored the network traffic coming from the device. That’s when he noticed it was constantly sending logs and telemetry data to the manufacturer — something he hadn’t consented to. The user, Harishankar, decided to block the telemetry servers’ IP addresses on his network, while keeping the firmware and OTA servers reachable. The vacuum kept working for a while, but soon it refused to turn on at all. After a lengthy investigation, he discovered that the manufacturer had issued a remote kill command to his device.
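For the curious, a block like the one Harishankar describes can be sketched with nftables: drop traffic to the telemetry endpoints while leaving everything else, including the OTA/firmware hosts, untouched. This is only an illustration; the IP addresses below are placeholders from the documentation range (203.0.113.0/24), not the actual iLife servers.

```shell
# Create a dedicated table and a forward-hook chain with an accept policy,
# so only explicitly matched traffic is affected.
nft add table inet vacuum
nft add chain inet vacuum fwd '{ type filter hook forward priority 0; policy accept; }'

# Drop packets destined for the (hypothetical) telemetry/log servers.
nft add rule inet vacuum fwd ip daddr 203.0.113.10 drop
nft add rule inet vacuum fwd ip daddr 203.0.113.11 drop

# OTA/firmware traffic matches no rule, so the accept policy lets it through.
```

The catch, as the story shows, is that the device and the vendor can notice the silence: blocking telemetry at the router does nothing to stop a kill command delivered over the channels you left open.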



The problem created when a person’s private data is collected against their will is, at root, a philosophical one, akin to the “principle of least privilege” you may know from security engineering. Those collecting the data have no reasonable need for it in order to provide the service they’re providing, so the collection can only serve some purpose other than the user’s benefit, and the user gets nothing in exchange. The user is already paying for the product or service; the personal data is a bonus freebie the vendor makes off with. If the personal data is worthless, there is no need to collect it; if it has worth, the vendor is taking something of value without paying for it, which one might call stealing, or at least piracy. For many, that alone is enough to cry foul, and we haven’t even gotten into the content and use of the collected data yet.
There is a vibrant marketplace for this personal data among those in the advertising business. Brokers and aggregators work to correlate every data point they have collected from every device and app they can find with a specific person. Even if no individual detail, or set of details, identifies who the person is or presents a risk on its own, algorithms analyze all of the data together and narrow it down to exactly one individual, much the way the game “20 questions” works: the other player can pick literally any object or concept in the whole world, and in twenty questions or fewer you can often guess it. Now imagine the advertisers playing that game with unlimited questions, forever, until there can be no doubt; that is exactly what an algorithm reading the collected data can do.
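The narrowing process above is easy to demonstrate. Here is a minimal sketch in Python, using an invented toy dataset: each “question answered” (each attribute learned about you) is just a filter, and a handful of individually harmless attributes quickly intersect down to one person.

```python
# Toy illustration of re-identification by intersecting attributes.
# All records and attribute values below are invented for the example.
records = [
    {"name": "A", "zip": "30301", "age_band": "20s", "pet": "dog"},
    {"name": "B", "zip": "30301", "age_band": "20s", "pet": "cat"},
    {"name": "C", "zip": "30301", "age_band": "30s", "pet": "dog"},
    {"name": "D", "zip": "30305", "age_band": "20s", "pet": "dog"},
]

def narrow(candidates, **known):
    """Keep only records matching every known attribute -- each answer
    in '20 questions' eliminates everyone it doesn't fit."""
    return [r for r in candidates if all(r[k] == v for k, v in known.items())]

step1 = narrow(records, zip="30301")   # 3 candidates remain
step2 = narrow(step1, age_band="20s")  # 2 candidates remain
step3 = narrow(step2, pet="cat")       # exactly 1: uniquely identified
print([r["name"] for r in step3])      # -> ['B']
```

Real brokers do this across thousands of attributes and millions of records, but the mechanics are the same: no single filter is identifying, yet their intersection is.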
There was an infamous case in which Target (the retailer) determined from purchase data that a young girl was pregnant before she had told anyone, or perhaps even knew herself, and created a disastrous home situation by mailing targeted maternity marketing materials to her house, where her abusive family saw them.
These companies build what many find to be disturbingly invasive dossiers on individuals, including their private health information, intimacy preferences, and private personal habits, among other things. The EFF did a write-up many years ago with creepy examples of basic metadata collection that I found helpful to my understanding of the problem here:
https://www.eff.org/deeplinks/2013/06/why-metadata-matters?rss=1
Companies have little to no obligation to treat you fairly or even to do business with you at all, which lets them effectively exile you if they have decided you belong on some “naughty list” because of an indicator produced by an algorithm that analyzed your info. They can also exploit well-documented weaknesses in human psychology to influence you in ways you don’t even notice, ways that are undeniably unethical and coercive. It also creates loopholes for bad actors in government. For example, in my country (the USA), the police are forbidden from investigating me if I am not suspected of a crime, but they can pay a data broker $30 for a breakdown of everything I like, everything I do, and everywhere I’ve been. If it were sound government policy to allow arbitrary investigation of anyone regardless of suspicion, ask yourself why every non-authoritarian government forbids it.
I know that’s a lot; it is a complicated topic whose implications are hard to grasp. Unfortunately, the people who could most effectively educate everyone about those risks are instead exploiting that ignorance for a wide variety of purposes. Some of those purposes are innocuous, others are ethically dubious, and many more are just objectively nefarious. To be clear, the laws against blanket investigations exist precisely to prevent the dubious and nefarious uses, because once the data is collected, there is no feasible way to ensure it stays in the right hands. The determination was that the potential net good of this kind of data collection is far outweighed by the potential net harms.
I hope that helps!