{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# In this notebook, we will implement differentially private logistic regression with autodp.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## We will use the Noisy Gradient Descent algorithm.\n", "\n", "Let's say we are minimizing a function $f(\\theta)$. The gradient descent algorithm iteratively run\n", "$$\n", "\\theta_{t+1} = \\theta_t - \\eta_t \\nabla f(\\theta_t)\n", "$$\n", "\n", "Noisy gradient dsecent is a differentially private algorithm which updates the parameters by \n", "$$\n", "\\theta_{t+1} = \\theta_t - \\eta_t \\big(\\nabla f(\\theta_t) + \\textrm{GS}_t \\cdot \\mathcal{N}(0,\\sigma^2 I) \\big)\n", "$$\n", "where $\\textrm{GS}_t$ is the global sensitivity of $\\nabla f(\\theta_t)$ as we add/remove individual data points (that contribute to $f$).\n", "\n", "For binary logistic regression, the loss function is the sum of cross entropy losses\n", "$$f(\\theta) = \\sum_{i=1}^n \\ell(\\theta; (x_i,y_i)) = \\sum_{i=1}^n - y_i \\log\\big( \\frac{e^{x_i^T\\theta}}{e^{x_i^T\\theta}+1}\\big) - (1-y_i)\\log\\big( \\frac{1}{e^{x_i^T\\theta}+1}\\big) $$\n", "\n", "Notice that for NoisyGD, we are essentially publishing the gradient of $\\nabla f(\\theta_t)$ every iteration. The global sensitivity of $f_t$ is a bound of the individual gradient.\n", "$$\n", "\\textrm{GS}_t = \\sup_{x\\in\\mathcal{X},y\\in\\mathcal{Y}} \\nabla \\ell(\\theta_t; (x,y))\n", "$$\n", "\n", "## The underlying DP mechanism for running NoisyGD for ```niter``` iterations is simply: composition of ```niter``` Gaussian mechanism.\n", "\n", "\n", "Recall that the standard workflow of autodp is the following:\n", "\n", "1. Describe this differentially private mechanism in autodp\n", "2. Calibrate the parameter of this DP mechanism to achieve a pre-defined budget.\n", "2. Implement the algorithm and compare with the non-private baseline on a real dataset.\n", "\n", "This is what we are going to do. Before that, we will copy the relevant part of the code on SSP and AdaSSP over to have a baseline of comparison." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Load the dataset: California housing dataset. \n", "We will convert it into a binary classification problem." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import sklearn.datasets\n", "from sklearn.linear_model import LogisticRegression\n", "from sklearn import preprocessing\n", "from sklearn.model_selection import train_test_split\n", "import numpy as np\n", "\n", "\n", "\n", "dataset = sklearn.datasets.fetch_california_housing()\n", "print('This is a regression dataset.')\n", "print('Features are: ', dataset.feature_names)\n", "print('The label is: ', dataset.target_names)\n", "print('The shape of the data matrix iss', dataset.data.shape)\n", "\n", "# Let's extract the relevant information from the sklearn dataset object\n", "X = dataset.data\n", "y = dataset.target\n", "\n", "# make the label binary\n", "y = 1.0*(y >= 2.0)\n", "\n", "# -------------- Uncomment the following to test size = 0.9 when debugging you code-------------\n", "#\n", "# X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.9, random_state=93106)\n", "\n", "# X = X_train\n", "# y = y_train\n", "# -------------------- But please submit your code without taking a random subset --------------\n", "\n", "# First normalize the individual data points\n", "\n", "dim = X.shape[1]\n", "n = X.shape[0]\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Data preprocessing to ensure we have bounded x and y\n", "This is very important for DP methods to work in practice." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Rescaling the feature vectors by their natural ranges (independent to the data)\n", "X = X @ np.diag(1./np.array([10,50,100,40,40000,1000,50,100]))\n", "# This is to ensure that each feature is of the similar scale\n", "\n", "# the following bounds are chosen independent to the data\n", "x_bound = 1\n", "y_bound = 1\n", "\n", "# Preprocess the feature vector such that the norm is fixed at 5\n", "X = x_bound*preprocessing.normalize(X, norm='l2')\n", "\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Set up the baselines\n", "\n", "Let's define a few utility functions and compute the non-private results and the trivial results." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Let's define a few utility functions\n", "\n", "def CE(score,y):\n", " # numerically efficient vectorized implementation of CE loss\n", " log_phat = np.zeros_like(score)\n", " log_one_minus_phat = np.zeros_like(score)\n", " mask = score > 0 \n", " log_phat[mask] = - np.log( 1 + np.exp(-score[mask]))\n", " log_phat[~mask] = score[~mask] - np.log( 1 + np.exp(score[~mask]))\n", " log_one_minus_phat[mask] = -score[mask] - np.log( 1 + np.exp(-score[mask]))\n", " log_one_minus_phat[~mask] = - np.log( 1 + np.exp(score[~mask]))\n", " \n", " return -y*log_phat-(1-y)*log_one_minus_phat\n", "\n", "\n", "def loss(theta):\n", " return np.sum(CE(X@theta,y))/n\n", "\n", "def err(theta):\n", " return np.sum((X@theta > 0) != y) / n\n", "\n", "def err_yhat(yhat):\n", " return np.sum((yhat != y)) / n\n", "\n", "\n", "clf = LogisticRegression(random_state=0,fit_intercept=False).fit(X, y)\n", "yhat = clf.predict(X)\n", "\n", "err_nonprivate = err_yhat(yhat)\n", "err_trivial = min(np.mean(y), 1-np.mean(y) )\n", "\n", "# Nonprivate baseline\n", "print('Nonprivate error rate is', err_yhat(yhat))\n", "\n", "print('Trivial error rate is', err_trivial)\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1. 
, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1. Let's first implement NoisyGD from scratch and represent it as a ```Mechanism``` in ```autodp```.\n", "\n", "We will start with the autodp representation of NoisyGD, which is a straightforward composition of Gaussian mechanisms. Then we will implement the algorithm itself." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from autodp.autodp_core import Mechanism\n", "from autodp.mechanism_zoo import GaussianMechanism\n", "from autodp.transformer_zoo import ComposeGaussian\n", "\n", "\n", "# The autodp Mechanism representation of NoisyGD is the following\n", "class NoisyGD_mech(Mechanism):\n", "    def __init__(self,sigma,coeff,name='NoisyGD'):\n", "        Mechanism.__init__(self)\n", "        self.name = name\n", "        self.params={'sigma':sigma,'coeff':coeff}\n", "\n", "        # ----------- Implement noisy-GD here with \"GaussianMechanism\" and \"ComposeGaussian\" ----------------\n", "\n", "        # ADD YOUR CODE HERE!\n", "\n", "        # ------------- construct a Mechanism object named 'mech' --------------------\n", "\n", "        self.set_all_representation(mech)\n", "\n", "\n", "# Now let's actually implement the noisy gradient descent algorithm\n", "\n", "def gradient(theta):\n", "    # ----------- Implement the gradient of f(theta) -----------\n", "    grad = np.zeros(shape=(dim,))\n", "\n", "    phat = np.exp(X@theta)/(1+np.exp(X@theta))\n", "    grad = X[y==0,:].T@(phat[y==0]) - X[y==1,:].T@(1-phat[y==1])\n", "    # ----------- Notice that f is the sum of the individual loss functions, NOT the average. -----------\n", "    return grad\n", "\n", "\n", "\n", "\n", "def GS_bound(theta):\n", "    # ----------- Calculate the global sensitivity of the sum of gradients, given theta -------------\n", "    # Note that you may start with a constant upper bound and then consider a more adaptive bound\n", "\n", "    GS = 100\n", "    # ADD YOUR CODE HERE to modify the global sensitivity\n", "\n", "    # ------------------------\n", "    return GS\n", "\n", "\n", "def run_NoisyGD_step(theta,sigma, lr):\n", "    GS = GS_bound(theta)\n", "    return theta - lr * (gradient(theta) + GS*sigma*np.random.normal(size=theta.shape))\n", "\n", "# function to run NoisyGD\n", "def run_NoisyGD(sigma,lr,niter, log_gap = 10):\n", "    theta_GD = np.zeros(shape=(dim,))\n", "    err_GD = []\n", "    eps_GD = []\n", "    for i in range(niter):\n", "        theta_GD = run_NoisyGD_step(theta_GD,sigma, lr)\n", "        if not i%log_gap:\n", "            mech = NoisyGD_mech(sigma,i+1)\n", "            eps_GD.append(mech.approxDP(delta))\n", "            err_GD.append(err(theta_GD))\n", "    return err_GD, eps_GD\n", "\n", "\n", "# function to run non-private GD (sigma = 0)\n", "def run_nonprivate_GD(lr,niter, log_gap = 10):\n", "    theta_GD = np.zeros(shape=(dim,))\n", "    err_GD = []\n", "    for i in range(niter):\n", "        theta_GD = run_NoisyGD_step(theta_GD,0, lr)\n", "        if not i%log_gap:\n", "            err_GD.append(err(theta_GD))\n", "    return err_GD" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# quick check that the gradient runs at the all-zeros initialization\n", "theta = np.zeros(shape=(dim,))\n", "ss = gradient(theta)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2. How do we choose the hyperparameters for NoisyGD?\n", "\n", "Our strategy for choosing these hyperparameters is to first set the noise level and the number of iterations. (We can alternatively fix one of these and use autodp's privacy calibrator to determine the other.) \n", "\n", "Once we decide on the noise level and the number of iterations, we will choose the learning rate by the optimal theoretical choice.\n",
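"\n", "For reference, the helper ```theoretical_lr_choice``` in the next cell implements the step size\n", "$$\n", "\\eta = \\min\\Big(\\frac{1}{\\beta_L},\\; \\sqrt{\\frac{2\\,\\big(f(\\theta_0)-f(\\theta_{T})\\big)}{d\\,\\tilde\\sigma^2\\,\\beta_L\\,T}}\\Big),\n", "$$\n", "where $\\beta_L$ is the smoothness constant of $f$, $d$ is the dimension, $T$ is the number of iterations, $\\tilde\\sigma$ is the per-coordinate standard deviation of the injected noise (the noise multiplier times the global sensitivity), and $f(\\theta_0)-f(\\theta_T)$ is replaced by an upper bound on it.\n",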
\n", "\n", "\n", "### You need to figure out the bounds of these parameters." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from autodp.calibrator_zoo import eps_delta_calibrator\n", "\n", "def find_appropriate_niter(sigma, eps,delta):\n", " # Use autodp calibrator for selecting 'niter'\n", " NoisyGD_fix_sigma = lambda x: NoisyGD_mech(sigma,x)\n", " calibrate = eps_delta_calibrator()\n", " mech = calibrate(NoisyGD_fix_sigma, eps, delta, [0,500000])\n", " niter = int(np.floor(mech.params['coeff']))\n", " return niter\n", "\n", "\n", "# Instantiate these parameters \n", "\n", "def theoretical_lr_choice(beta_L,f0_minus_fniter_bound,dim,sigma,niter):\n", " # beta_L is the gradient lipschitz constant for the whole objective function\n", " # sigma is the variance of the gradient noise in each coordinate (notice that this is the noise multiplier * GS)\n", " # niter is the intended number of iterations (the LR is optimized for the point we get when finishing all niter)\n", " \n", " return np.minimum(1/beta_L,np.sqrt(2*f0_minus_fniter_bound / (dim * sigma**2 *beta_L*niter)))\n", "\n", "\n", "# You are supposed to find out what is the right choice of \"beta_L\" and \"f0_minus_fniter_bound\n", "\n", "\n", "# ----------------------------- ADD YOUR CODE HERE--------------------\n", "\n", "beta_L = # ADD YOUR CODE HERE\n", "\n", "f0_minus_fniter_bound = # ADD YOUR CODE HERE\n", "\n", "GS = # ADD YOUR CODE HERE for the global sensitivity INDEPENDENT to theta\n", "# this will be uses\n", "# ------------------------------------------------------------------\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Now let's run some experiments and plot the results!\n", "\n", "We will first compare two regimes: \n", "1. large noise, large number of iterations, small learning rate; \n", "2. small noise, small number of iterations, large learning rate." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# # find the theoretical learning rate choice by first working out the strong smoothness property\n", "# u,s,vT = np.linalg.svd(X.T@X) \n", "# lambdamax = s[0]\n", "\n", "# find the theoretical learning rate choice by first working out the strong smoothness constant\n", "\n", "# Large noise\n", "sigma = 300.0\n", "eps = 2.0\n", "delta = 1e-6\n", "niter = find_appropriate_niter(sigma, eps,delta)\n", "\n", "print(niter)\n", "\n", "lr = theoretical_lr_choice(beta_L,f0_minus_fniter_bound,dim,sigma*GS,niter)\n", "\n", "err_GD1, eps_GD1 = run_NoisyGD(sigma,lr,niter)\n", "\n", "# Small noise\n", "sigma = 30\n", "niter = find_appropriate_niter(sigma, eps,delta)\n", "print(niter)\n", "\n", "lr = theoretical_lr_choice(beta_L,f0_minus_fniter_bound, dim,sigma*GS,niter)\n", "err_GD2, eps_GD2 = run_NoisyGD(sigma,lr,niter)\n", "\n", "\n", "# no noise baseline\n", "err_GD0= run_nonprivate_GD(1/beta_L,niter)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "## Let's also plot the results\n", "import matplotlib.pyplot as plt\n", "#%matplotlib inline \n", "plt.figure(figsize=(8, 5))\n", "#plt.plot(eps_GD2, err_GD0,'b.-')\n", "plt.plot(eps_GD1, err_GD1,'g.-')\n", "plt.plot(eps_GD2, err_GD2,'c.-')\n", "plt.plot(eps_GD1,err_nonprivate*np.ones_like(eps_GD1),'k--')\n", "plt.plot(eps_GD1,err_trivial*np.ones_like(eps_GD1),'r--')\n", "plt.plot(eps_GD2,err_GD0,'b--')\n", "#plt.ylim([0,0.1])\n", "\n", "plt.legend(['NoisyGD-large-noise-more-iter','NoisyGD-small-noise-fewer-iter','Nonprivate-sklearn','trivial','non-private-GD'])\n", "plt.xlabel('epsilon')\n", "plt.ylabel('Error')\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3. What do you see in your experiments?\n", "\n", "Describe your results." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 4. What if we wiggle the learning rate for a bit?\n", "\n", "Next, we will consider the stability of the learning rate choices by trying larger and smaller learning rate near the theoretical choice:\n", "\n", "3. Multiplying the learning rate by 10\n", "4. Dividing the learning rate by 10\n", "5. 
, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sigma = 300.0\n", "eps = 2.0\n", "delta = 1e-6\n", "niter = find_appropriate_niter(sigma, eps,delta)\n", "\n", "lr = 10*theoretical_lr_choice(beta_L,f0_minus_fniter_bound,dim,sigma*GS,niter)\n", "# 10x the theoretical choice (though this gives GD a bit of an unfair advantage because lambdamax is data-dependent)\n", "\n", "err_GD3, eps_GD3 = run_NoisyGD(sigma,lr,niter,log_gap=100)\n", "\n", "lr = 0.1*theoretical_lr_choice(beta_L,f0_minus_fniter_bound, dim,sigma*GS,niter)\n", "# 0.1x the theoretical choice\n", "\n", "err_GD4, eps_GD4 = run_NoisyGD(sigma,lr,niter,log_gap=100)\n", "\n", "lr = 100*theoretical_lr_choice(beta_L,f0_minus_fniter_bound,dim,sigma*GS,niter)\n", "# 100x the theoretical choice\n", "\n", "err_GD5, eps_GD5 = run_NoisyGD(sigma,lr,niter,log_gap=100)\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Let's also plot the results\n", "import matplotlib.pyplot as plt\n", "#%matplotlib inline \n", "plt.figure(figsize=(8, 5))\n", "plt.plot(eps_GD1, err_GD1,'g.-')\n", "plt.plot(eps_GD3, err_GD3,'c--')\n", "plt.plot(eps_GD4, err_GD4,'m:')\n", "plt.plot(eps_GD5, err_GD5,'b:')\n", "plt.plot(eps_GD1,err_nonprivate*np.ones_like(eps_GD1),'k--')\n", "plt.plot(eps_GD1,err_trivial*np.ones_like(eps_GD1),'r--')\n", "\n", "\n", "plt.legend(['NoisyGD','NoisyGD-lr*10','NoisyGD-lr/10','NoisyGD-lr*100','Nonprivate','trivial'])\n", "plt.xlabel('epsilon')\n", "plt.ylabel('Error')\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 5. Write two paragraphs to explain what you observed in your experiments. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 6. Bonus questions for you to explore yourself: \n", "\n", "1. What happens if you use the loss function instead of the classification error?\n", "2. What happens if you return the average of the parameters theta across iterations, rather than just the last iterate?\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" } }, "nbformat": 4, "nbformat_minor": 4 }