Utility Functions

This module (utility.py) contains utility functions in the practical sense, e.g. routines for minimizing GP utility functions or computing KL divergences, as well as the GP utility functions themselves, e.g. the BAPE utility.
-
approxposterior.utility.logsubexp(x1, x2)
Numerically stable way to compute log(exp(x1) - exp(x2)):
logsubexp(x1, x2) -> log(exp(x1) - exp(x2))
- Parameters
x1 (float)
x2 (float)
- Returns
logsubexp – log(exp(x1) - exp(x2))
- Return type
float
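A minimal sketch of the standard identity behind this function, assuming scalar inputs (the library implementation may differ in edge-case handling):

    import numpy as np

    def logsubexp(x1, x2):
        # Factor exp(x1) out of the difference so no large argument is
        # ever exponentiated directly:
        # log(exp(x1) - exp(x2)) = x1 + log(1 - exp(x2 - x1))
        if x1 <= x2:
            return -np.inf  # exp(x1) - exp(x2) <= 0, so the log is undefined
        return x1 + np.log1p(-np.exp(x2 - x1))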
-
approxposterior.utility.AGPUtility(theta, y, gp, priorFn)
AGP (Adaptive Gaussian Process) utility function: the entropy of the GP posterior distribution. This is what you maximize to find the next theta under the AGP formalism. Note that we use the negative of the utility function, so minimizing this function is equivalent to maximizing the actual utility.
See Wang & Li (2017) for the derivation/explanation.
- Parameters
theta (array) – parameters to evaluate
y (array) – y values to condition the gp prediction on.
gp (george GP object)
priorFn (function) – Function that computes lnPrior probability for a given theta.
- Returns
util – utility of theta under the gp
- Return type
float
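A sketch of the negative AGP utility described above, assuming gp is a george GP already computed on the training inputs and priorFn returns the log-prior; the actual library code may differ in shape handling and error checking:

    import numpy as np

    def agp_utility(theta, y, gp, priorFn):
        theta = np.asarray(theta).reshape(1, -1)
        # Reject points the prior forbids; the GP can only use finite values.
        if not np.isfinite(priorFn(theta)):
            return np.inf
        # GP predictive mean and variance at theta, conditioned on y.
        mu, var = gp.predict(y, theta, return_var=True)
        mu, var = float(mu), float(var)
        # Posterior mean plus the entropy of a 1-D Gaussian,
        # 0.5*log(2*pi*e*var), negated so minimizing maximizes the utility.
        return -(mu + 0.5 * np.log(2.0 * np.pi * np.e * var))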
-
approxposterior.utility.BAPEUtility(theta, y, gp, priorFn)
BAPE (Bayesian Active Posterior Estimation) utility function. This is what you maximize to find the next theta under the BAPE formalism. Note that we use the negative of the utility function, so minimizing this function is equivalent to maximizing the actual utility. We also take the log of the BAPE utility; since the log is monotonic, the minima are equivalent.
See Kandasamy et al. (2015) for the derivation/explanation.
- Parameters
theta (array) – parameters to evaluate
y (array) – y values to condition the gp prediction on.
gp (george GP object)
priorFn (function) – Function that computes lnPrior probability for a given theta.
- Returns
util – utility of theta under the gp
- Return type
float
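Under the same assumptions as the AGP sketch above, the log of the Kandasamy et al. (2015) utility exp(2*mu + var) * (exp(var) - 1) can be written with the logsubexp helper documented earlier; a sketch:

    import numpy as np

    def bape_utility(theta, y, gp, priorFn):
        theta = np.asarray(theta).reshape(1, -1)
        if not np.isfinite(priorFn(theta)):
            return np.inf
        mu, var = gp.predict(y, theta, return_var=True)
        mu, var = float(mu), float(var)
        # log[exp(2*mu + var) * (exp(var) - 1)] = 2*mu + var + logsubexp(var, 0),
        # negated so minimizing maximizes the utility.
        return -((2.0 * mu + var) + logsubexp(var, 0.0))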
-
approxposterior.utility.JonesUtility(theta, y, gp, priorFn, zeta=0.01)
Jones utility function: the Expected Improvement derived in Jones et al. (1998),
EI(theta) = E[max(f(theta) - f(thetaBest), 0)]
where thetaBest is the best design point found so far and f(thetaBest) is the best function value so far.
- Parameters
theta (array) – parameters to evaluate
y (array) – y values to condition the gp prediction on.
gp (george GP object)
priorFn (function) – Function that computes lnPrior probability for a given theta.
zeta (float, optional) – Exploration parameter. Larger zeta leads to more exploration. Defaults to 0.01.
- Returns
util – utility of theta under the gp
- Return type
float
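A sketch of the closed-form Expected Improvement for a Gaussian predictive distribution, negated as described above; the library's edge-case handling may differ:

    import numpy as np
    from scipy.stats import norm

    def jones_utility(theta, y, gp, priorFn, zeta=0.01):
        theta = np.asarray(theta).reshape(1, -1)
        if not np.isfinite(priorFn(theta)):
            return np.inf
        mu, var = gp.predict(y, theta, return_var=True)
        mu, std = float(mu), float(np.sqrt(var))
        if std <= 0.0:
            return 0.0  # no predictive uncertainty, so no expected improvement
        # Improvement over the best value seen so far; larger zeta biases
        # the search toward exploration.
        imp = mu - np.max(y) - zeta
        z = imp / std
        # Closed-form EI under a Gaussian: imp*Phi(z) + std*phi(z).
        return -(imp * norm.cdf(z) + std * norm.pdf(z))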
-
approxposterior.utility.minimizeObjective(fn, y, gp, sampleFn, priorFn, nRestarts=5, method='nelder-mead', options=None, bounds=None, theta0=None, args=None, maxIters=100)
Minimize some arbitrary function, fn. This function is most useful when evaluating fn requires a Gaussian process model, gp. For example, it can be used to find the point that minimizes a utility function for a gp conditioned on y, the data, subject to being allowed by the prior, priorFn.
priorFn is required because it helps select against points with non-finite likelihoods, e.g. NaNs or infs, which the GP cannot train on.
- Parameters
fn (function) – Function to minimize. It must accept x, y, and gp as arguments, i.e. fn has the form fn_name(x, y, gp). See the utility functions above for examples.
y (array) – y values to condition the gp prediction on.
gp (george GP object)
sampleFn (function) – Function to sample initial conditions from.
priorFn (function) – Function that computes lnPrior probability for a given theta.
nRestarts (int, optional) – Number of times to restart the minimization of the negative utility function when selecting the next point to improve GP performance. Defaults to 5. Increase this number if the point selection is not working well.
method (str, optional) – scipy.optimize.minimize method. Defaults to nelder-mead.
options (dict, optional) – kwargs for the scipy.optimize.minimize function. Defaults to None, except when method == "nelder-mead", in which case options = {"adaptive": True}.
theta0 (float/iterable, optional) – Initial guess for optimization. Defaults to None, which draws a sample from the prior function using sampleFn.
args (iterable, optional) – Arguments for user-specified function that this function will minimize. Defaults to None.
maxIters (int, optional) – Maximum number of times to retry the optimization if the solution is not finite and/or not allowed by the prior function. Defaults to 100.
- Returns
thetaBest (array, shape (1, n_dims)) – point that minimizes fn
fnBest (float) – fn(thetaBest)
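A condensed, hypothetical sketch of the restart logic described above; minimize_with_restarts and the sampleFn(1) calling convention are illustrative assumptions, not the library API. The args tuple matches the utility sketches above, which expect (theta, y, gp, priorFn):

    import numpy as np
    from scipy.optimize import minimize

    def minimize_with_restarts(fn, y, gp, sampleFn, priorFn, nRestarts=5):
        bestTheta, bestVal = None, np.inf
        for _ in range(nRestarts):
            # Draw an initial guess from the prior (assumes sampleFn(1)
            # returns a single sample; the real convention may differ).
            theta0 = np.asarray(sampleFn(1)).flatten()
            res = minimize(fn, theta0, args=(y, gp, priorFn),
                           method="nelder-mead", options={"adaptive": True})
            # Keep only finite solutions that the prior allows.
            if (np.isfinite(res.fun) and np.isfinite(priorFn(res.x))
                    and res.fun < bestVal):
                bestTheta, bestVal = res.x, res.fun
        return bestTheta, bestVal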
-
approxposterior.utility.klNumerical(x, p, q)
Estimate the KL divergence between pdfs p and q via Monte Carlo integration using x, samples drawn from p:
KL(p || q) ~ 1/n * sum_{i=1}^{n} log(p(x_i) / q(x_i))
For our purposes, q is the current estimate of the pdf while p is the previous estimate. This method is the only feasible one in high dimensions.
See Hershey and Olsen, "Approximating the Kullback Leibler Divergence Between Gaussian Mixture Models", for more information.
Note that this method can yield D_KL < 0, but it is the only method with guaranteed convergence properties as the number of samples (len(x)) grows. It is also shown to have the lowest error on average (see Hershey and Olsen).
- Parameters
x (array) – Samples drawn from p
p (function) – Callable previous estimate of the density
q (function) – Callable current estimate of the density
- Returns
kl – KL divergence
- Return type
float
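A minimal sketch of the estimator above, assuming p and q are vectorized callables that return probability densities:

    import numpy as np
    from scipy.stats import norm

    def kl_numerical(x, p, q):
        # KL(p || q) ~ (1/n) * sum_i log(p(x_i) / q(x_i)), x_i drawn from p
        return float(np.mean(np.log(p(x)) - np.log(q(x))))

    # Example: two unit-variance Gaussians one unit apart; the analytic
    # answer is (mu_p - mu_q)^2 / 2 = 0.5.
    x = norm.rvs(size=100000, random_state=42)
    print(kl_numerical(x, norm(0, 1).pdf, norm(1, 1).pdf))  # ~0.5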