AI, but with real privacy. Homomorphic Encryption for Machine Learning

Our use of AI right now is a privacy nightmare. Essentially, users & companies send their data to a third party (e.g. OpenAI), which runs its analysis on that data and returns the results.

Without clear guidelines for employees & users, sensitive or even critical data (e.g. health data) may end up being sent to these external parties and leaked.

But there is a technical way to solve this problem: homomorphic encryption. It lets the user encrypt a query BEFORE it is sent to the external AI system; the system does its magic directly on the encrypted data, and on the way back the user decrypts the result again. With this, the plaintext data never leaves the control of the user.
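To make that round trip concrete, here is a minimal sketch in Python using the open-source TenSEAL wrapper around Microsoft SEAL (the library used in the paper). The context parameters and the query values are illustrative assumptions for the example, not taken from the paper:

import tenseal as ts

# Client side: set up a CKKS context. The secret key stays with the client.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40

# Encrypt the query BEFORE it leaves the client.
query = [0.5, 1.5, 2.5]
encrypted_query = ts.ckks_vector(context, query)

# "Server" side: compute on the ciphertext without seeing the plaintext.
encrypted_result = encrypted_query * 2 + [1.0, 1.0, 1.0]

# Back on the client: decrypt the result.
print(encrypted_result.decrypt())  # approximately [2.0, 4.0, 6.0]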

This sounds too good to be true? It somewhat is. The catch with homomorphic encryption in the ML space right now is that it natively supports only addition and multiplication, i.e. polynomials. Because of that, not all functions required for neural networks can be computed directly (SoftMax, for example, requires division and exponentiation). Nevertheless, these functions can be approximated by polynomials, so with some loss of accuracy, homomorphic encryption for ML is possible.
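To illustrate what such an approximation looks like, the sketch below (plain NumPy, no encryption involved) fits a low-degree polynomial to the sigmoid function over a fixed input range. The degree and the range are arbitrary choices for the example; the residual error is exactly the kind of accuracy loss mentioned above:

import numpy as np

# HE schemes like CKKS can only evaluate additions and multiplications,
# so a non-polynomial function such as sigmoid must be replaced by a
# polynomial approximation before encrypted inference.
xs = np.linspace(-4, 4, 400)
sigmoid = 1.0 / (1.0 + np.exp(-xs))

# Least-squares fit of a degree-3 polynomial over the chosen range.
coeffs = np.polyfit(xs, sigmoid, deg=3)
approx = np.polyval(coeffs, xs)

# The gap between the two curves is the accuracy the model gives up.
print("max abs error on [-4, 4]:", float(np.abs(approx - sigmoid).max()))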

This week’s paper gives a general overview of this topic and how it can be used in ML. It is super easy to read and understand. Check it out.


Abstract:

Third-party and expert analysis is a cost-effective solution for solving specialized problems or processing large datasets related to reactor structural health monitoring and nondestructive evaluation. However, when handling proprietary information, third-party and expert analysts pose a privacy risk. To address this challenge, Homomorphic Encryption (HE) permits arithmetic operations on encrypted data without exposing the underlying data. Implementations of Machine Learning (ML) and Artificial Intelligence (AI) algorithms using HE greatly enhance the capabilities of third-party analysts while maintaining a low security risk. This paper details current success in applying Principal Component Analysis (PCA) and Fully Connected Neural Networks (NN) using the Microsoft SEAL implementation of the popular CKKS Fully Homomorphic Encryption (FHE) algorithm. The MNIST Handwritten Dataset is analyzed as a proof-of-concept demonstration of the implementations.
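To give a feel for the paper's neural-network setting, here is a hedged sketch of a single encrypted fully connected layer with a square activation (a polynomial, hence HE-friendly, stand-in for non-polynomial activations), again using the TenSEAL wrapper around Microsoft SEAL. The weights, bias, and input are made-up toy values, not the paper's MNIST model:

import tenseal as ts

# Client: CKKS context; galois keys are needed for the vector-matrix product.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()

# Encrypted input, e.g. a few flattened pixel features (toy values).
x = [0.1, 0.2, 0.3, 0.4]
enc_x = ts.ckks_vector(context, x)

# Server: one fully connected layer; weights and bias are illustrative only.
W = [[0.5, -0.1],
     [0.2, 0.3],
     [-0.4, 0.1],
     [0.05, 0.2]]
b = [0.01, -0.02]

enc_out = enc_x.matmul(W) + b   # linear layer evaluated on the ciphertext
enc_out = enc_out.square()      # polynomial activation, HE-friendly

# Client: only the holder of the secret key can decrypt the output.
print(enc_out.decrypt())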

Download Link:

https://publications.anl.gov/anlpubs/2022/09/177974.pdf

