Beyond Privacy Trade-offs with Structured Transparency

Many socially valuable activities, including medical research, public health policies, political coordination, and personalized digital services, depend on sensitive information. This dependence is often framed as an inherent privacy trade-off: we can benefit from data analysis or retain data privacy, but not both. Across several disciplines, substantial effort has been directed toward overcoming this trade-off so that information can be put to productive use without also enabling undesired misuse, a goal we term ‘structured transparency’. In this paper, we provide an overview of the frontier of research seeking to develop structured transparency. We offer a general theoretical framework and vocabulary, characterizing the fundamental components (input privacy, output privacy, input verification, output verification, and flow governance) and the fundamental problems of copying, bundling, and recursive oversight. We argue that these barriers are less fundamental than they often appear. Recent progress in developing ‘privacy-enhancing technologies’ (PETs), such as secure computation and federated learning, may substantially reduce lingering use-misuse trade-offs in a number of domains. We conclude with several illustrations of structured transparency, in open research, energy management, and credit-scoring systems, and a discussion of the risks of misuse of these tools.
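
Since the abstract names secure computation among the PETs, a minimal sketch may help make ‘input privacy’ concrete: additive secret sharing lets several parties jointly compute an aggregate without any party seeing another's raw input. The scenario and names below (the hospitals, share, reconstruct) are illustrative assumptions for this sketch, not details taken from the paper.

```python
# A minimal sketch of input privacy via additive secret sharing, one
# common building block of secure computation. Illustrative only.
import random

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(secret: int, n_parties: int) -> list[int]:
    """Split a secret into n additive shares; any n-1 shares reveal nothing."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Recombine a full set of shares to recover the underlying value."""
    return sum(shares) % PRIME

# Three hospitals each hold a sensitive count they do not wish to disclose.
inputs = {"hospital_a": 120, "hospital_b": 87, "hospital_c": 203}
n = len(inputs)

# Each party splits its input and sends one share to every party.
all_shares = [share(value, n) for value in inputs.values()]

# Each party locally sums the shares it received; no raw input is ever seen.
local_sums = [sum(column) % PRIME for column in zip(*all_shares)]

# Combining the local sums yields only the aggregate, not any contribution.
total = reconstruct(local_sums)
assert total == sum(inputs.values())
print(f"joint total: {total}")
```

Because each individual share is uniformly random, any subset of fewer than all parties learns nothing about a single hospital's count; only the aggregate is revealed. Protecting what that aggregate itself discloses (output privacy) and checking that inputs and outputs are genuine (input and output verification) require further tools, which is the territory the paper maps.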
