Hardware and the web: the balance between usefulness, security and privacy

About a year ago, Apple released a list of Web APIs they were not going to implement in Safari due to privacy and security concerns. They worried that these APIs would allow fingerprinting and tracking of users. And that is, of course, a big privacy no-no. Sounds reasonable, right?

But what kinds of APIs were actually on that list? Well, among others, the hardware APIs that have been shipping in Chrome and Edge over the last couple of years: WebBluetooth, WebHID, WebMidi, WebNFC, WebSerial and, of course, WebUSB. Those sound really dangerous, right?

And I do understand some hesitation with allowing these kinds of capabilities on the web. They are powerful and seem potentially dangerous. Nobody wants some random website to take control of the devices in their house, deliver malware, spy on them or worse.

But we should look at the actual dangers and consider them properly, rather than going with a gut reaction or basing our opinion on that episode of Black Mirror we once saw…

The reality is, for every feature the web platform offers, there is a balance between usefulness, security and privacy. And these APIs are definitely useful. More importantly, they are also relatively safe. I think adding these capabilities will even increase security.

Like any API with security or privacy implications, a website cannot use these without user permission. A website has to ask for permission to use the webcam, and likewise, it has to ask for permission to use a Bluetooth or USB device. And that permission is not for the API as a whole; it is for using one specific device through that API. The website does not know which devices exist and cannot get a list of them. It can only ask for permission to access a specific type of device, and the user either grants that or not.

The user is always in control and their intent is needed to allow access to specific devices.

Security issues

The main concern is thus not a technological one, but one of social engineering: getting the user to do something that harms them somehow.

For example, an attacker could insert malicious code into websites that warn users of a fake problem and instruct them to connect some device, then upload firmware to compromise it or use a side channel to extract valuable information from it. It always involves tricking the user because, as we’ve learned, the user has to manually select a device before malicious code can get access to it.

I would love to say that this is not possible, but unfortunately, it is an issue. Social engineering is always an issue and is not limited to these APIs.

But even though it is theoretically possible, it seems unlikely to be abused on a large scale. 

First, you need to find a vulnerable device, for example, one that accepts firmware or code that is not signed by the manufacturer. Most devices won’t have that problem, but the number is not zero. Second, the users you target need to own that specific device. Most users won’t. Third, you’d have to trick users into selecting that device. Most users will probably be very suspicious and refuse. And finally, because asking for access to devices is suspicious and will immediately raise red flags, I don’t think such an attack would be sustainable for an extended period.

But when we start with a large enough group of users, the number of devices at risk won’t be zero. 

There are some simple countermeasures browser manufacturers use to reduce this risk even further: blocking access to specific devices known to have a vulnerability, disallowing access to whole classes of problematic devices, and even remotely disabling the API when it is actively being abused. Browsers currently use all of these countermeasures. But the risk is still not zero.

As far as I know, the remote kill switch has only been used once. And not because the device APIs were actively being abused, but because security researchers reported a vulnerability. It was still serious enough to warrant using the kill switch.

Back in 2018, it was possible to extract a key from a Yubico U2F device using WebUSB, bypassing the origin check that the browser usually does. After it was reported, Google immediately disabled WebUSB altogether and released an update that re-enabled WebUSB but put all Yubico devices on a blocklist.

I think the reason we’ve not seen any attacks like this is that it is far easier to get users to download and run some native app than it is to find the one-in-a-million combination of gullible users and vulnerable devices. Simply put, it is not worth the time and effort.

It is not just Google that thinks these APIs are safe enough to roll out to users; so do Microsoft and Samsung. In fact, these APIs currently ship in about 70% of mobile browsers and 78% of desktop browsers worldwide, and have for a while.

But I think the real security consideration is not between a browser with or without these features.

Given a specific task that the user wants to perform, it is a choice between using a browser or a native app: between limited APIs built around user consent in a sandboxed environment, and a native app that can do whatever it wants without any checks on privacy or security. Would you rather download some shady app created by an unknown developer on an app store, an app that does who-knows-what with your data in the background?

Do we trust that app or not? I downloaded it after scanning a QR code on some piece of paper that was in a generic brown cardboard box when I bought a lightbulb on AliExpress…

By keeping device APIs limited to native apps, you force people to use native apps for these kinds of tasks. And that isn’t good for security.

The most dangerous feature browsers have is not the device APIs; it is the ability to link to and download native apps.

That feature is being actively abused and has been from the moment the very first browser got into the hands of the general public. And time has proven there is very little we can do to mitigate the risks.

Privacy and fingerprinting

How about fingerprinting? Is it possible to fingerprint users using these new capabilities?

To be honest, that does not make much sense. Using fingerprinting as an argument against these APIs just shows a lack of understanding of how they actually work.

The goal of fingerprinting is to identify users uniquely. You do this by collecting data points that each narrow the user down to a smaller group. No single data point can identify an individual, but it becomes possible to track users if you have enough of these data points and combine them.

And those data points can be pretty mundane things that are not a privacy concern in themselves. Or even things that were ironically supposed to improve privacy, like the “Do Not Track” header. This header is turned off by default, which makes it an ideal data point for tracking. Suppose you have a pool of 1 million users and one in a hundred has it turned on. If you encounter this header, you’re already looking at 1 in 10,000 users instead of 1 in a million. Now combine this with 30 or so other innocent-looking data points, and you can track individual users across different websites: the holy grail of user tracking.
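The arithmetic behind this is simple. Here is a quick sketch; the numbers are the illustrative ones from the example above, not real measurements:

```typescript
// Illustrative pool of 1 million users, where each data point splits off
// roughly 1 in 100 of them.
const pool = 1_000_000;

// Seeing the "Do Not Track" header narrows the pool by a factor of 100:
const afterDnt = pool / 100;
console.log(afterDnt); // 10000 candidates left instead of a million

// Each additional independent 1-in-100 data point divides the pool again.
// After just three such data points, the "anonymous" user is unique:
const afterThree = pool / 100 ** 3;
console.log(afterThree); // 1
```

Real data points each carry far less information than a clean 1-in-100 split, which is why a tracker needs a few dozen of them rather than three.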

The “Do Not Track” header was actually removed in Safari precisely because of tracking concerns. And rightfully so.

So the question is: do these APIs add unique data points for groups of users?

In theory: yes, the mere existence of an API can be a data point. But in practice: no. Take, for example, WebBluetooth; it has been available in all Chrome versions since version 56. So the only information it gives away is whether the browser is Chrome 56 or later. And we can already get that information directly from the browser name and version number in the user-agent string. So this data point is quite useless for fingerprinting.
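To make that concrete, here is a minimal sketch of what such a feature-detection “data point” looks like. The helper function is mine, purely for illustration:

```typescript
// Hypothetical helper: all a page can learn from the API's existence is a
// single yes/no answer.
function hasWebBluetooth(nav: { bluetooth?: unknown }): boolean {
  return nav.bluetooth !== undefined;
}

// In Chrome 56+ the real navigator object has a bluetooth property, so this
// returns true; elsewhere it returns false. That exact same bit of
// information is already in the user-agent string, so it adds no entropy.
console.log(hasWebBluetooth({ bluetooth: {} })); // true
console.log(hasWebBluetooth({}));                // false
```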

What fingerprinting looks for is APIs that give different results on different machines: for example, the fonts people have installed, or the WebGL extensions their graphics card supports. These can differ between devices that run the exact same version of the browser, so these data points add extra information, or entropy, to the fingerprint.

But what about these device APIs? Do they offer any extra information that is useful for fingerprinting? Well, if the browser told websites which USB devices are connected to your system and which Bluetooth devices are within range, I would say ‘Yes, absolutely’. But that is not how these APIs work. They were designed with fingerprinting in mind and cannot directly be used for it.

Take WebBluetooth again as a representative example of the other hardware APIs.

Let’s be perfectly clear: you cannot get a list of devices in your neighbourhood. That is not possible with WebBluetooth, nor with any of the other device APIs. This information is not available to websites. You don’t have to be afraid that websites will see your devices and uniquely identify you, because they can’t.

What a website can do is tell the browser what kind of device it wants to interact with. Typically, you provide the API with a set of filters based on either the device name or the services the device offers. And then you ask the browser for permission to connect to such a device.

The browser then pops up a permission window with a list of devices that match the filters you’ve provided. But that list is only visible to the user, not to any scripts running on the website. The user can then give access to a single device, or deny access altogether.
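The flow above can be sketched as follows. The heart-rate service filter is just an illustrative choice; a real site would filter on whatever services or names its device actually uses:

```typescript
// The filter is the only information the page sends to the browser; it
// reveals nothing about which devices are actually nearby.
const filters = [{ services: ["heart_rate"] }];

// Sketch of the request flow: the browser shows its own chooser listing
// only in-range devices that match the filters. Scripts never see that
// list; they only get the one device (if any) the user explicitly picks.
async function connectToDevice(): Promise<string> {
  const device = await (globalThis as any).navigator.bluetooth.requestDevice({
    filters,
  });
  return device.name; // access was granted to this single device only
}
```

If the user dismisses the chooser without picking a device, `requestDevice()` rejects, and the page learns nothing about what was, or wasn’t, in range.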

That permission window already ensures that bad actors won’t use this for fingerprinting, because fingerprinting needs to happen without alerting the user. It also makes the data unreliable: there is no guarantee that the user will give permission every time, nor that they will permit access to the same device.

Device APIs are simply bad for fingerprinting: unreliable, and really obvious when used.

So what about Safari…

It is perfectly reasonable to have doubts about the security of these hardware APIs. Personally, I think the risks are relatively small and manageable, and it is perfectly fine to have a discussion and even disagree. But pointing at fingerprinting and tracking just means you are misinformed.

So, I don’t really mind that Safari won’t implement these features. Every browser manufacturer needs to assess the balance between usefulness, security and privacy and decide if the risks are worth it. Apple has made that assessment and thinks they are not. I see it differently, but okay.

I do think they are ignoring the security risks of pushing users to native apps. But of course, I do understand why Apple would not see it that way.

What I do mind is the other implication of this choice. Because Safari is the only real browser allowed on iOS, users cannot choose a different browser that does support these APIs. That is the real problem here. Safari’s choice does not just affect Safari; it affects all browsers on a hugely important platform. And that affects me as a user.