Add "concealing mitigations" to mitigations #38

Open
KOLANICH opened this issue Aug 21, 2019 · 1 comment
Open

Add "concealing mitigations" to mitigations #38

KOLANICH opened this issue Aug 21, 2019 · 1 comment

Comments

@KOLANICH

  1. Some websites may measure the uniqueness of a user's environment and/or the availability of APIs prone to fingerprinting, and discriminate against users and vendors of tools that ship proper mitigations (see the sketch after this list).
  2. The mere fact that a person uses mitigations reveals some information.
  3. The two combine: point 1 makes users and vendors of such tools rare, which increases the information revealed in point 2.
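
A minimal sketch of the kind of probe point 1 describes, assuming a hypothetical site-side script; the specific signals (canvas availability, letterboxed window sizes) are illustrative assumptions, not any real tracker's code:

```ts
// Hypothetical probe: flag visitors whose environment looks "too
// standardized" or "too locked down", i.e. likely running a mitigation.

interface ProbeResult {
  signal: string;
  value: string;
  suspicious: boolean;
}

function probeEnvironment(): ProbeResult[] {
  const results: ProbeResult[] = [];

  // Availability of fingerprinting-prone APIs: a blocked or stubbed
  // API is itself a signal that a mitigation is installed.
  const ctx = document.createElement("canvas").getContext("2d");
  results.push({
    signal: "canvas-2d",
    value: ctx ? "available" : "blocked",
    suspicious: ctx === null,
  });

  // Uniqueness of reported values: an answer that exactly matches a
  // published "standard" profile suggests an anti-fingerprinting mode
  // (e.g. a letterboxed window size; the set below is made up).
  const standardizedSizes = new Set(["1000x1000", "1400x900"]);
  const size = `${window.innerWidth}x${window.innerHeight}`;
  results.push({
    signal: "window-size",
    value: size,
    suspicious: standardizedSizes.has(size),
  });

  return results;
}

// A discriminating site could then gate access on the outcome.
if (probeEnvironment().some((r) => r.suspicious)) {
  console.log("visitor likely runs a mitigation; could be blocked here");
}
```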

So while standardization of observable behavior is the best measure for closing the leak, some simulation and randomization should be stacked on top of it to prevent leaking the fact that a user is using the discriminated-against tools.
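
A sketch of the stacking idea, assuming a hypothetical mitigation that overrides one readout: instead of answering every query with the same fixed standard value (which a probe like the one above would flag), it samples a plausible value once per session and answers consistently with it afterwards:

```ts
// Hypothetical concealing mitigation. A single fixed standard value
// marks the user as a mitigation user; per-call noise is detectable by
// reading twice. So: pick one plausible value per session, weighted
// toward common hardware (the distribution below is an assumption).
const PLAUSIBLE_CONCURRENCY = [2, 4, 4, 8, 8, 8, 12, 16];

const sessionChoice =
  PLAUSIBLE_CONCURRENCY[Math.floor(Math.random() * PLAUSIBLE_CONCURRENCY.length)];

Object.defineProperty(Navigator.prototype, "hardwareConcurrency", {
  configurable: true, // leave the property redefinable, as the original was
  get: () => sessionChoice, // stable across reads within the session
});
```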

@npdoty
Collaborator

npdoty commented Oct 19, 2019

Is there a specific example of simulation, randomization or concealing you can provide?

The draft notes that randomization is typically not a recommended solution, because it's hard to determine when it's more effective than a standard or null value.
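
To illustrate one concrete failure mode (the helper below is hypothetical): noise that is re-sampled on every call is itself observable, because a genuine environment answers identically on repeated reads, so a site can simply read the same property twice and compare:

```ts
// Hypothetical detector for naive per-call randomization: any jitter
// between repeated reads of a normally stable property exposes the
// mitigation it was meant to hide.
function detectsPerCallNoise(read: () => number, samples = 5): boolean {
  const first = read();
  for (let i = 1; i < samples; i++) {
    if (read() !== first) return true; // value jittered: mitigation exposed
  }
  return false;
}

// e.g. detectsPerCallNoise(() => navigator.hardwareConcurrency)
```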

I do think there's an interest in trying to reduce the visibility of privacy-enhancing modes (including incognito or private browsing modes), as in the TAG document here: https://w3ctag.github.io/private-browsing-modes/
Is there additional advice we should provide for developing new specifications so that such modes cannot be detected and discriminated against?
