botorch.utils¶
Constraints¶
Helpers for handling outcome constraints.
- 
botorch.utils.constraints.get_outcome_constraint_transforms(outcome_constraints)[source]¶
- Create outcome constraint callables from outcome constraint tensors.
Parameters
- outcome_constraints (Optional[Tuple[Tensor, Tensor]]) – A tuple of (A, b). For k outcome constraints and m outputs at f(x), A is k x m and b is k x 1 such that A f(x) <= b.
Return type
Optional[List[Callable[[Tensor], Tensor]]]
Returns
A list of callables, each mapping a Tensor of size b x q x m to a tensor of size b x q, where m is the number of outputs of the model. Negative values imply feasibility. The callables support broadcasting (e.g. for calling on a tensor of shape mc_samples x b x q x m).
Example
>>> # constrain `f(x)[0] <= 0`
>>> A = torch.tensor([[1., 0.]])
>>> b = torch.tensor([[0.]])
>>> outcome_constraints = get_outcome_constraint_transforms((A, b))
Containers¶
Containers to standardize inputs into models and acquisition functions.
- 
class botorch.utils.containers.TrainingData(X: torch.Tensor, Y: torch.Tensor, Yvar: Optional[torch.Tensor] = None)[source]¶
- Bases: tuple
Standardized struct of model training data for a single outcome.
Create new instance of TrainingData(X, Y, Yvar)
X: torch.Tensor¶
- Alias for field number 0 
 - 
Y: torch.Tensor¶
- Alias for field number 1 
 - 
Yvar: Optional[torch.Tensor]¶
- Alias for field number 2 
 
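Example (a minimal construction sketch; shapes are illustrative, and Yvar defaults to None)
>>> X = torch.rand(10, 2)
>>> Y = torch.rand(10, 1)
>>> training_data = TrainingData(X=X, Y=Y)
>>> training_data.Yvar is None
True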
- 
Objective¶
Helpers for handling objectives.
- 
botorch.utils.objective.get_objective_weights_transform(weights)[source]¶
- Create a linear objective callable from a set of weights.
Create a callable mapping a Tensor of size b x q x m and an (optional) Tensor of size b x q x d to a Tensor of size b x q, where m is the number of outputs of the model, using scalarization via the objective weights. This callable supports broadcasting (e.g. for calling on a tensor of shape mc_samples x b x q x m). For m = 1, the objective weight is used to determine the optimization direction.
Parameters
- weights (Optional[Tensor]) – a 1-dimensional Tensor containing a weight for each task. If not provided, the identity mapping is used.
Return type
Callable[[Tensor, Optional[Tensor]], Tensor]
Returns
Transform function using the objective weights.
Example
>>> weights = torch.tensor([0.75, 0.25])
>>> transform = get_objective_weights_transform(weights)
- 
botorch.utils.objective.apply_constraints_nonnegative_soft(obj, constraints, samples, eta)[source]¶
- Applies constraints to a non-negative objective.
This function uses a sigmoid approximation to an indicator function for each constraint.
Parameters
- obj (Tensor) – A n_samples x b x q Tensor of objective values.
- constraints (List[Callable[[Tensor], Tensor]]) – A list of callables, each mapping a Tensor of size b x q x m to a Tensor of size b x q, where negative values imply feasibility. This callable must support broadcasting. Only relevant for multi-output models (m > 1).
- samples (Tensor) – A b x q x m Tensor of samples drawn from the posterior.
- eta (float) – The temperature parameter for the sigmoid function.
 
- Return type
- Tensor
- Returns
- A n_samples x b x q-dim tensor of feasibility-weighted objectives. 
 
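Example (a minimal sketch; shapes and the toy constraint are illustrative)
>>> obj = torch.ones(4, 2, 1)            # n_samples x b x q objective values
>>> samples = torch.randn(2, 1, 2)       # b x q x m posterior samples
>>> constraints = [lambda Z: Z[..., 1]]  # feasible iff f(x)[1] <= 0
>>> weighted = apply_constraints_nonnegative_soft(obj, constraints, samples, eta=1e-3)
>>> weighted.shape
torch.Size([4, 2, 1])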
- 
botorch.utils.objective.soft_eval_constraint(lhs, eta=0.001)[source]¶
- Element-wise evaluation of a constraint in a 'soft' fashion:
value(x) = 1 / (1 + exp(x / eta))
Parameters
- lhs (Tensor) – The left hand side of the constraint lhs <= 0.
- eta (float) – The temperature parameter of the sigmoid function. As eta decreases toward zero, this approaches a (reversed) Heaviside step function.
 
- Return type
- Tensor
- Returns
- Element-wise ‘soft’ feasibility indicator of the same shape as lhs. For each element x, value(x) -> 0 as x becomes positive, and value(x) -> 1 as x becomes negative. 
 
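Example (a quick numeric illustration; with a small eta, clearly negative values map to ~1 and clearly positive ones to ~0)
>>> lhs = torch.tensor([-1.0, 0.0, 1.0])
>>> soft_eval_constraint(lhs, eta=0.01)  # approximately [1.0, 0.5, 0.0]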
- 
botorch.utils.objective.apply_constraints(obj, constraints, samples, infeasible_cost, eta=0.001)[source]¶
- Apply constraints using an infeasible_cost M for negative objectives.
This allows feasibility-weighting an objective for the case where the objective can be negative by using the following strategy: (1) add M to make obj non-negative; (2) apply constraints using the sigmoid approximation; (3) shift by -M.
Parameters
- obj (Tensor) – A n_samples x b x q Tensor of objective values.
- constraints (List[Callable[[Tensor], Tensor]]) – A list of callables, each mapping a Tensor of size b x q x m to a Tensor of size b x q, where negative values imply feasibility. This callable must support broadcasting. Only relevant for multi-output models (m > 1).
- samples (Tensor) – A b x q x m Tensor of samples drawn from the posterior.
- infeasible_cost (float) – The infeasible value.
- eta (float) – The temperature parameter of the sigmoid function.
 
- Return type
- Tensor
- Returns
- A n_samples x b x q-dim tensor of feasibility-weighted objectives. 
 
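Example (a minimal sketch; shapes and the toy constraint are illustrative)
>>> obj = torch.randn(4, 2, 1)           # n_samples x b x q, may be negative
>>> samples = torch.randn(2, 1, 2)       # b x q x m posterior samples
>>> constraints = [lambda Z: Z[..., 0]]  # feasible iff f(x)[0] <= 0
>>> weighted = apply_constraints(obj, constraints, samples, infeasible_cost=5.0)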
Rounding¶
- 
botorch.utils.rounding.approximate_round(X, tau=0.001)[source]¶
- Differentiable approximate rounding function.
This method is a piecewise approximation of a rounding function where each piece is a hyperbolic tangent function.
Parameters
- X (Tensor) – The tensor to round to the nearest integer (element-wise).
- tau (float) – A temperature hyperparameter.
 
- Return type
- Tensor
- Returns
- The approximately rounded input tensor. 
 
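Example (a minimal sketch; torch.round is not differentiable, while this approximation is)
>>> X = torch.tensor([0.1, 0.9, 1.2], requires_grad=True)
>>> X_rounded = approximate_round(X, tau=1e-3)  # approximately tensor([0., 1., 1.])
>>> X_rounded.sum().backward()  # gradients are defined (near zero away from the .5 boundary for small tau)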
Sampling¶
Utilities for MC and qMC sampling.
- 
botorch.utils.sampling.manual_seed(seed=None)[source]¶
- Context manager for manually setting the torch.random seed.
Parameters
- seed (Optional[int]) – The seed to set the random number generator to.
Return type
Generator[None, None, None]
Returns
Generator
Example
>>> with manual_seed(1234):
>>>     X = torch.rand(3)
- 
botorch.utils.sampling.construct_base_samples(batch_shape, output_shape, sample_shape, qmc=True, seed=None, device=None, dtype=None)[source]¶
- Construct base samples from a multi-variate standard normal N(0, I_qm).
Parameters
- batch_shape (Size) – The batch shape of the base samples to generate. Typically, this is used with each dimension of size 1, so as to eliminate sampling variance across batches.
- output_shape (Size) – The output shape (q x m) of the base samples to generate.
- sample_shape (Size) – The sample shape of the samples to draw.
- qmc (bool) – If True, use quasi-MC sampling (instead of iid draws).
- seed (Optional[int]) – If provided, use as a seed for the RNG.
- device (Optional[device]) – The torch device.
- dtype (Optional[dtype]) – The torch dtype.
 
- Return type
- Tensor
- Returns
- A sample_shape x batch_shape x output_shape dimensional tensor of base samples, drawn from a N(0, I_qm) distribution (using QMC if qmc=True). Here output_shape = q x m.
Example
>>> batch_shape = torch.Size([2])
>>> output_shape = torch.Size([3])
>>> sample_shape = torch.Size([10])
>>> samples = construct_base_samples(batch_shape, output_shape, sample_shape)
- 
botorch.utils.sampling.construct_base_samples_from_posterior(posterior, sample_shape, qmc=True, collapse_batch_dims=True, seed=None)[source]¶
- Construct a tensor of normally distributed base samples.
Parameters
- posterior (Posterior) – A Posterior object.
- sample_shape (Size) – The sample shape of the samples to draw.
- qmc (bool) – If True, use quasi-MC sampling (instead of iid draws).
- collapse_batch_dims (bool) – If True, collapse the t-batch dimensions of the base samples to size 1, so as to not introduce sampling variance across t-batches.
- seed (Optional[int]) – If provided, use as a seed for the RNG.
 
- Return type
- Tensor
- Returns
- A num_samples x 1 x q x m dimensional Tensor of base samples, drawn from a N(0, I_qm) distribution (using QMC if qmc=True). Here q and m are the same as in the posterior's event_shape b x q x m. Importantly, this only obtains a single t-batch of samples, so as to not introduce any sampling variance across t-batches.
Example
>>> sample_shape = torch.Size([10])
>>> samples = construct_base_samples_from_posterior(posterior, sample_shape)
- 
botorch.utils.sampling.draw_sobol_samples(bounds, n, q, batch_shape=None, seed=None)[source]¶
- Draw qMC samples from the box defined by bounds.
Parameters
- bounds – A 2 x d dimensional tensor specifying box constraints on a d-dimensional space, where bounds[0, :] and bounds[1, :] correspond to lower and upper bounds, respectively. 
- n – The number of (q-batch) samples. 
- q – The size of each q-batch. 
- batch_shape – The batch shape of the samples. If given, returns samples of shape n x batch_shape x q x d, where each batch is an n x q x d-dim tensor of qMC samples. 
- seed – The seed used for initializing Owen scrambling. If None (default), use a random seed. 
 
- Returns
- A n x batch_shape x q x d-dim tensor of qMC samples from the box defined by bounds. 
Example
>>> bounds = torch.stack([torch.zeros(3), torch.ones(3)])
>>> samples = draw_sobol_samples(bounds, 10, 2)
- 
botorch.utils.sampling.draw_sobol_normal_samples(d, n, device=None, dtype=None, seed=None)[source]¶
- Draw qMC samples from a multi-variate standard normal N(0, I_d).
A primary use-case for this functionality is to compute a QMC average of f(X) over X where each element of X is drawn N(0, 1).
Parameters
- d (int) – The dimension of the normal distribution.
- n (int) – The number of samples to return.
- device (Optional[device]) – The torch device.
- dtype (Optional[dtype]) – The torch dtype.
- seed (Optional[int]) – The seed used for initializing Owen scrambling. If None (default), use a random seed.
 
- Return type
- Tensor
- Returns
- A tensor of qMC standard normal samples with dimension n x d with device and dtype specified by the input. 
Example
>>> samples = draw_sobol_normal_samples(2, 10)
- 
botorch.utils.sampling.sample_hypersphere(d, n=1, qmc=False, seed=None, device=None, dtype=None)[source]¶
- Sample uniformly from a unit d-sphere.
Parameters
- d (int) – The dimension of the hypersphere.
- n (int) – The number of samples to return.
- qmc (bool) – If True, use QMC Sobol sampling (instead of i.i.d. uniform).
- seed (Optional[int]) – If provided, use as a seed for the RNG.
- device (Optional[device]) – The torch device.
- dtype (Optional[dtype]) – The torch dtype.
 
- Return type
- Tensor
- Returns
- An n x d tensor of uniform samples from the d-hypersphere.
Example
>>> sample_hypersphere(d=5, n=10)
- 
botorch.utils.sampling.sample_simplex(d, n=1, qmc=False, seed=None, device=None, dtype=None)[source]¶
- Sample uniformly from a d-simplex.
Parameters
- d (int) – The dimension of the simplex.
- n (int) – The number of samples to return.
- qmc (bool) – If True, use QMC Sobol sampling (instead of i.i.d. uniform).
- seed (Optional[int]) – If provided, use as a seed for the RNG.
- device (Optional[device]) – The torch device.
- dtype (Optional[dtype]) – The torch dtype.
 
- Return type
- Tensor
- Returns
- An n x d tensor of uniform samples from the d-simplex.
Example
>>> sample_simplex(d=3, n=10)
- 
botorch.utils.sampling.batched_multinomial(weights, num_samples, replacement=False, generator=None, out=None)[source]¶
- Sample from multinomial with an arbitrary number of batch dimensions.
Parameters
- weights (Tensor) – A batch_shape x num_categories tensor of weights. For each batch index i, j, …, this function samples from a multinomial with input weights[i, j, …, :]. Note that the weights need not sum to one, but must be non-negative, finite and have a non-zero sum.
- num_samples (int) – The number of samples to draw for each batch index. Must be smaller than num_categories if replacement=False.
- replacement (bool) – If True, samples are drawn with replacement.
- generator (Optional[Generator]) – A pseudorandom number generator for sampling.
- out (Optional[Tensor]) – The output tensor (optional). If provided, must be of size batch_shape x num_samples.
 
- Return type
- LongTensor
- Returns
- A batch_shape x num_samples tensor of samples. 
This is a thin wrapper around torch.multinomial that allows weight (input) tensors with an arbitrary number of batch dimensions (torch.multinomial only allows a single batch dimension). The calling signature is the same as for torch.multinomial.
Example
>>> weights = torch.rand(2, 3, 10)
>>> samples = batched_multinomial(weights, 4)  # shape is 2 x 3 x 4
- 
class botorch.utils.sampling.PolytopeSampler(inequality_constraints, n_burnin=0, equality_constraints=None, initial_point=None)[source]¶
- Bases: object
Samples points from a polytope described via a set of inequality and equality constraints.
Parameters
- inequality_constraints (Tuple[Tensor, Tensor]) – Tensors (A, b) describing inequality constraints: A*x <= b, where A is an (n_ineq_con, d_sample)-dim Tensor and b is an (n_ineq_con, 1)-dim Tensor, with n_ineq_con being the number of inequalities and d_sample the dimension of the sample space.
- n_burnin (int) – The number of burn-in samples.
- equality_constraints (Optional[Tuple[Tensor, Tensor]]) – Tensors (C, d) describing the equality constraints: C*x = d, where C is an (n_eq_con, d_sample)-dim Tensor and d is an (n_eq_con, 1)-dim Tensor, with n_eq_con being the number of equalities.
- initial_point (Optional[Tensor]) – A (d_sample, 1)-dim Tensor representing an initial point of the chain satisfying all the conditions. Determined automatically (by solving an LP) if not provided.
 
 
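Example (a usage sketch for the unit square; the draw method follows the BoTorch sampler API and is assumed here, as it is not listed above)
>>> A = torch.cat([-torch.eye(2), torch.eye(2)], dim=0)
>>> b = torch.tensor([[0.0], [0.0], [1.0], [1.0]])  # -x <= 0 and x <= 1
>>> sampler = PolytopeSampler(inequality_constraints=(A, b), n_burnin=100)
>>> samples = sampler.draw(n=10)  # 10 x 2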
- 
botorch.utils.sampling.sample_polytope(A, b, x0, n=10000, n0=100, seed=None)[source]¶
- Hit-and-run sampler for uniformly sampling points from a polytope, described via inequality constraints A*x <= b.
Parameters
- A (Tensor) – A Tensor describing the inequality constraints so that all samples satisfy A x <= b.
- b (Tensor) – A Tensor describing the inequality constraints so that all samples satisfy A x <= b.
- x0 (Tensor) – A d-dim Tensor representing a starting point of the chain satisfying the constraints.
- n (int) – The number of resulting samples kept in the output.
- n0 (int) – The number of burn-in samples. The chain will produce n + n0 samples, but the first n0 samples are not saved.
- seed (Optional[int]) – The seed for the sampler. If omitted, use a random seed.
 
- Return type
- Tensor
- Returns
- (n, d) dim Tensor containing the resulting samples. 
 
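Example (a usage sketch on the unit square; x0 is written as a (d, 1)-dim tensor here, which is an assumption about the expected layout)
>>> A = torch.cat([-torch.eye(2), torch.eye(2)], dim=0)
>>> b = torch.tensor([[0.0], [0.0], [1.0], [1.0]])  # -x <= 0 and x <= 1
>>> x0 = torch.full((2, 1), 0.5)  # strictly interior starting point
>>> samples = sample_polytope(A, b, x0, n=1000, n0=100)  # 1000 x 2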
Sampling from GP priors¶
- 
class botorch.utils.gp_sampling.GPDraw(model, seed=None)[source]¶
- Bases: torch.nn.modules.module.Module
Convenience wrapper for sampling a function from a GP prior.
This wrapper implicitly defines the GP sample as a self-updating function by keeping track of the evaluated points and respective base samples used during the evaluation.
This does not yet support multi-output models.
Construct a GP function sampler.
Parameters
- model (Model) – The Model defining the GP prior.
- seed (Optional[int]) – The seed for the random number generator.
 - 
property Xs¶
- A (batch_shape) x n_eval x d-dim tensor of locations at which the GP was evaluated (or None if the sample has never been evaluated). - Return type
- Tensor
 
 - 
property Ys¶
- A (batch_shape) x n_eval x 1-dim tensor of associated function values (or None if the sample has never been evaluated). - Return type
- Tensor
 
 - 
forward(X)[source]¶
- Evaluate the GP sample function at a set of points X.
Parameters
- X (Tensor) – A batch_shape x n x d-dim tensor of points.
- Return type
- Tensor
- Returns
- The value of the GP sample at the n points. 
 
 - 
training: bool¶
 
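Example (a usage sketch, assuming a fitted single-output GP; SingleTaskGP is used purely for illustration)
>>> from botorch.models import SingleTaskGP
>>> model = SingleTaskGP(torch.rand(8, 2), torch.rand(8, 1))
>>> gp_sample = GPDraw(model, seed=0)
>>> f_X = gp_sample(torch.rand(4, 2))  # evaluations of a single fixed sample path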
Testing¶
- 
class botorch.utils.testing.BotorchTestCase(methodName='runTest')[source]¶
- Bases: unittest.case.TestCase
Basic test case for Botorch. This:
- sets the default device to be torch.device("cpu")
- ensures that no warnings are suppressed by default.
 
 - Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name. - 
device = device(type='cpu')¶
 
- 
class botorch.utils.testing.BaseTestProblemBaseTestCase[source]¶
- Bases: - object- 
functions: List[botorch.test_functions.base.BaseTestProblem]¶
 
- 
- 
class botorch.utils.testing.SyntheticTestFunctionBaseTestCase[source]¶
- Bases: - botorch.utils.testing.BaseTestProblemBaseTestCase- 
functions: List[botorch.test_functions.base.BaseTestProblem]¶
 
- 
- 
class botorch.utils.testing.MockPosterior(mean=None, variance=None, samples=None)[source]¶
- Bases: botorch.posteriors.posterior.Posterior
Mock object that implements dummy methods and feeds through specified outputs.
property device¶
- The torch device of the posterior. - Return type
- device
 
 - 
property dtype¶
- The torch dtype of the posterior. - Return type
- dtype
 
 - 
property event_shape¶
- The event shape (i.e. the shape of a single sample). - Return type
- Size
 
 - 
property mean¶
- The mean of the posterior as a (b) x n x m-dim Tensor. 
 - 
property variance¶
- The variance of the posterior as a (b) x n x m-dim Tensor. 
 
- 
- 
class botorch.utils.testing.MockModel(posterior)[source]¶
- Bases: botorch.models.model.Model
Mock object that implements dummy methods and feeds through specified outputs.
Initializes internal Module state, shared by both nn.Module and ScriptModule.
posterior(X, output_indices=None, observation_noise=False)[source]¶
- Computes the posterior over model outputs at the provided points.
Parameters
- X (Tensor) – A b x q x d-dim Tensor, where d is the dimension of the feature space, q is the number of points considered jointly, and b is the batch dimension.
- output_indices (Optional[List[int]]) – A list of indices, corresponding to the outputs over which to compute the posterior (if the model is multi-output). Can be used to speed up computation if only a subset of the model's outputs are required for optimization. If omitted, computes the posterior over all model outputs.
- observation_noise (bool) – If True, add observation noise to the posterior.
Return type
Posterior
Returns
A Posterior object, representing a batch of b joint distributions over q points and m outputs each.
 
 - 
property num_outputs¶
- The number of outputs of the model. - Return type
- int
 
 - 
property batch_shape¶
- The batch shape of the model. - This is a batch shape from an I/O perspective, independent of the internal representation of the model (as e.g. in BatchedMultiOutputGPyTorchModel). For a model with m outputs, a test_batch_shape x q x d-shaped input X to the posterior method returns a Posterior object over an output of shape broadcast(test_batch_shape, model.batch_shape) x q x m. - Return type
- Size
 
 - 
state_dict()[source]¶
- Returns a dictionary containing a whole state of the module.
Both parameters and persistent buffers (e.g. running averages) are included. Keys are corresponding parameter and buffer names.
Returns
a dictionary containing a whole state of the module
Return type
dict
Example:
>>> module.state_dict().keys()
['bias', 'weight']
 - 
load_state_dict(state_dict=None, strict=False)[source]¶
- Copies parameters and buffers from state_dict into this module and its descendants. If strict is True, then the keys of state_dict must exactly match the keys returned by this module's state_dict() function.
Parameters
- state_dict (dict) – a dict containing parameters and persistent buffers.
- strict (bool, optional) – whether to strictly enforce that the keys in state_dict match the keys returned by this module's state_dict() function. Default: False
Returns
- missing_keys is a list of str containing the missing keys
- unexpected_keys is a list of str containing the unexpected keys
Return type
NamedTuple with missing_keys and unexpected_keys fields
 
 - 
training: bool¶
 
- 
- 
class botorch.utils.testing.MockAcquisitionFunction[source]¶
- Bases: - object- Mock acquisition function object that implements dummy methods. 
- 
class botorch.utils.testing.MultiObjectiveTestProblemBaseTestCase[source]¶
- Bases: - botorch.utils.testing.BaseTestProblemBaseTestCase- 
functions: List[botorch.test_functions.base.BaseTestProblem]¶
 
- 
- 
class botorch.utils.testing.ConstrainedMultiObjectiveTestProblemBaseTestCase[source]¶
- Bases: - botorch.utils.testing.MultiObjectiveTestProblemBaseTestCase- 
functions: List[botorch.test_functions.base.BaseTestProblem]¶
 
- 
Torch¶
- 
class botorch.utils.torch.BufferDict(buffers=None)[source]¶
- Bases: torch.nn.modules.module.Module
Holds buffers in a dictionary.
BufferDict can be indexed like a regular Python dictionary, but buffers it contains are properly registered, and will be visible by all Module methods.
BufferDict is an ordered dictionary that respects
- the order of insertion, and
- in update(), the order of the merged OrderedDict or another BufferDict (the argument to update()).
Note that update() with other unordered mapping types (e.g., Python's plain dict) does not preserve the order of the merged mapping.
Parameters
- buffers (iterable, optional) – a mapping (dictionary) of (string : Tensor) or an iterable of key-value pairs of type (string, Tensor)
Example:
class MyModule(nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()
        self.buffers = nn.BufferDict({
            'left': torch.randn(5, 10),
            'right': torch.randn(5, 10)
        })

    def forward(self, x, choice):
        x = self.buffers[choice].mm(x)
        return x

Initializes internal Module state, shared by both nn.Module and ScriptModule.
pop(key)[source]¶
- Remove key from the BufferDict and return its buffer. - Parameters
- key (string) – key to pop from the BufferDict 
 
 - 
update(buffers)[source]¶
- Update the BufferDict with the key-value pairs from a mapping or an iterable, overwriting existing keys.
Note
If buffers is an OrderedDict, a BufferDict, or an iterable of key-value pairs, the order of new elements in it is preserved.
Parameters
- buffers (iterable) – a mapping (dictionary) from string to Tensor, or an iterable of key-value pairs of type (string, Tensor)
 
 - 
extra_repr()[source]¶
- Set the extra representation of the module.
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
 - 
training: bool¶
 
Transformations¶
Some basic data transformation helpers.
- 
botorch.utils.transforms.squeeze_last_dim(Y)[source]¶
- Squeeze the last dimension of a Tensor.
Parameters
- Y (Tensor) – A … x d-dim Tensor.
- Return type
- Tensor
- Returns
- The input tensor with last dimension squeezed. 
Example
>>> Y = torch.rand(4, 3)
>>> Y_squeezed = squeeze_last_dim(Y)
- 
botorch.utils.transforms.standardize(Y)[source]¶
- Standardizes (zero mean, unit variance) a tensor by dim=-2.
If the tensor is single-dimensional, simply standardizes the tensor. If for some batch index all elements are equal (or if there is only a single data point), this function will return 0 for that batch index.
Parameters
- Y (Tensor) – A batch_shape x n x m-dim tensor.
- Return type
- Tensor
- Returns
- The standardized Y. 
Example
>>> Y = torch.rand(4, 3)
>>> Y_standardized = standardize(Y)
- 
botorch.utils.transforms.normalize(X, bounds)[source]¶
- Min-max normalize X w.r.t. the provided bounds.
Parameters
- X (Tensor) – … x d tensor of data
- bounds (Tensor) – 2 x d tensor of lower and upper bounds for each of X's d columns.
 
- Return type
- Tensor
- Returns
- A … x d-dim tensor of normalized data, given by
- (X - bounds[0]) / (bounds[1] - bounds[0]). If all elements of X are contained within bounds, the normalized values will be contained within [0, 1]^d. 
 
Example
>>> X = torch.rand(4, 3)
>>> bounds = torch.stack([torch.zeros(3), 0.5 * torch.ones(3)])
>>> X_normalized = normalize(X, bounds)
- 
botorch.utils.transforms.unnormalize(X, bounds)[source]¶
- Un-normalizes X w.r.t. the provided bounds.
Parameters
- X (Tensor) – … x d tensor of data
- bounds (Tensor) – 2 x d tensor of lower and upper bounds for each of X's d columns.
 
- Return type
- Tensor
- Returns
- A … x d-dim tensor of unnormalized data, given by
- X * (bounds[1] - bounds[0]) + bounds[0]. If all elements of X are contained in [0, 1]^d, the un-normalized values will be contained within bounds. 
 
Example
>>> X_normalized = torch.rand(4, 3)
>>> bounds = torch.stack([torch.zeros(3), 0.5 * torch.ones(3)])
>>> X = unnormalize(X_normalized, bounds)
- 
botorch.utils.transforms.normalize_indices(indices, d)[source]¶
- Normalize a list of indices to ensure that they are positive.
Parameters
- indices (Optional[List[int]]) – A list of indices (may contain negative indices for indexing "from the back").
- d (int) – The dimension of the tensor to index.
 
- Return type
- Optional[List[int]]
- Returns
- A normalized list of indices such that each index is between 0 and d-1, or None if indices is None. 
 
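Example (negative indices are wrapped to their positive counterparts)
>>> normalize_indices([0, -1], d=3)
[0, 2]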
- 
botorch.utils.transforms.t_batch_mode_transform(expected_q=None, assert_output_shape=True)[source]¶
- Factory for decorators taking a t-batched X tensor.
This method creates decorators for instance methods to transform an input tensor X to t-batch mode (i.e. with at least 3 dimensions). This assumes the tensor has a q-batch dimension. The decorator also checks the q-batch size if expected_q is provided, and the output shape if assert_output_shape is True.
Parameters
- expected_q (Optional[int]) – The expected q-batch size of X. If specified, this will raise an AssertionError if X's q-batch size does not equal expected_q.
- assert_output_shape (bool) – If True, this will raise an AssertionError if the output shape does not match either the t-batch shape of X, or the acqf.model.batch_shape for acquisition functions using batched models.
Return type
Callable[[Callable[[Any, Tensor], Any]], Callable[[Any, Tensor], Any]]
- Returns
- The decorated instance method. 
Example
>>> class ExampleClass:
>>>     @t_batch_mode_transform(expected_q=1)
>>>     def single_q_method(self, X):
>>>         ...
>>>
>>>     @t_batch_mode_transform()
>>>     def arbitrary_q_method(self, X):
>>>         ...
- 
botorch.utils.transforms.concatenate_pending_points(method)[source]¶
- Decorator concatenating X_pending into an acquisition function's argument.
This decorator works on the forward method of acquisition functions taking a tensor X as the argument. If the acquisition function has an X_pending attribute (that is not None), this is concatenated into the input X, appropriately expanding the pending points to match the batch shape of X.
Example
>>> class ExampleAcquisitionFunction:
>>>     @concatenate_pending_points
>>>     @t_batch_mode_transform()
>>>     def forward(self, X):
>>>         ...
Return type
Callable[[Any, Tensor], Any]
 
- 
botorch.utils.transforms.match_batch_shape(X, Y)[source]¶
- Matches the batch dimension of a tensor to that of another tensor.
Parameters
- X (Tensor) – A batch_shape_X x q x d tensor, whose batch dimensions that correspond to batch dimensions of Y are to be matched to those (if compatible).
- Y (Tensor) – A batch_shape_Y x q' x d tensor.
 
- Return type
- Tensor
- Returns
- A batch_shape_Y x q x d tensor containing the data of X expanded to the batch dimensions of Y (if compatible). For instance, if X is b’’ x b’ x q x d and Y is b x q x d, then the returned tensor is b’’ x b x q x d. 
Example
>>> X = torch.rand(2, 1, 5, 3)
>>> Y = torch.rand(2, 6, 4, 3)
>>> X_matched = match_batch_shape(X, Y)
>>> X_matched.shape
torch.Size([2, 6, 5, 3])
Feasible Volume¶
- 
botorch.utils.feasible_volume.get_feasible_samples(samples, inequality_constraints=None)[source]¶
- Checks which of the samples satisfy all of the inequality constraints.
Parameters
- samples (Tensor) – A sample_size x d-dim tensor of feature samples, where d is the feature dimension.
- inequality_constraints – A list of tuples (indices, coefficients, rhs), with each tuple encoding an inequality constraint of the form sum_i (X[indices[i]] * coefficients[i]) >= rhs.
 
- Return type
- Tuple[Tensor, float]
- Returns
- 2-element tuple containing - Samples satisfying the linear constraints. 
- Estimated proportion of samples satisfying the linear constraints. 
 
 
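Example (a minimal sketch; the constraint encodes x_0 + x_1 >= 1 in the tuple format described above)
>>> samples = torch.rand(100, 2)
>>> constraints = [(torch.tensor([0, 1]), torch.tensor([1.0, 1.0]), 1.0)]
>>> feasible_samples, p_feasible = get_feasible_samples(samples, constraints)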
- 
botorch.utils.feasible_volume.get_outcome_feasibility_probability(model, X, outcome_constraints, threshold=0.1, nsample_outcome=1000, seed=None)[source]¶
- Monte Carlo estimate of the feasible volume with respect to the outcome constraints.
Parameters
- model (Model) – The model used for sampling the posterior.
- X (Tensor) – A tensor of dimension batch-shape x 1 x d, where d is the feature dimension.
- outcome_constraints (List[Callable[[Tensor], Tensor]]) – A list of callables, each mapping a Tensor of dimension sample_shape x batch-shape x q x m to a Tensor of dimension sample_shape x batch-shape x q, where negative values imply feasibility.
- threshold (float) – A lower limit for the probability that posterior samples are feasible.
- nsample_outcome (int) – The number of samples from the model posterior.
- seed (Optional[int]) – The seed for the posterior sampler. If omitted, use a random seed.
 
- Return type
- float
- Returns
- Estimated proportion of features for which posterior samples satisfy given outcome constraints with probability above or equal to the given threshold. 
 
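Example (a usage sketch; model is assumed to be a fitted single-output BoTorch Model)
>>> X = torch.rand(5, 1, 2)              # batch-shape x 1 x d
>>> constraints = [lambda Z: Z[..., 0]]  # feasible iff the first output <= 0
>>> p = get_outcome_feasibility_probability(model, X, constraints, threshold=0.1)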
- 
botorch.utils.feasible_volume.estimate_feasible_volume(bounds, model, outcome_constraints, inequality_constraints=None, nsample_feature=1000, nsample_outcome=1000, threshold=0.1, verbose=False, seed=None, device=None, dtype=None)[source]¶
- Monte Carlo estimate of the feasible volume with respect to feature constraints and outcome constraints.
Parameters
- bounds (Tensor) – A 2 x d tensor of lower and upper bounds for each column of X.
- model (Model) – The model used for sampling the outcomes.
- outcome_constraints (List[Callable[[Tensor], Tensor]]) – A list of callables, each mapping a Tensor of dimension sample_shape x batch-shape x q x m to a Tensor of dimension sample_shape x batch-shape x q, where negative values imply feasibility.
- inequality_constraints – A list of tuples (indices, coefficients, rhs), with each tuple encoding an inequality constraint of the form sum_i (X[indices[i]] * coefficients[i]) >= rhs.
- nsample_feature (int) – The number of feature samples satisfying the bounds.
- nsample_outcome (int) – The number of outcome samples from the model posterior.
- threshold (float) – A lower limit for the probability of outcome feasibility.
- seed (Optional[int]) – The seed for both feature and outcome samplers. If omitted, use a random seed.
- verbose (bool) – An indicator for whether to log the results.
 
- Returns
2-element tuple containing
- Estimated proportion of volume in feature space that is feasible wrt the bounds and the (linear) inequality constraints.
- Estimated proportion of feasible features for which posterior samples (outcomes) satisfy the outcome constraints with probability above the given threshold.
 
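Example (a usage sketch; model is assumed to be a fitted BoTorch Model over a 2-d feature space)
>>> bounds = torch.stack([torch.zeros(2), torch.ones(2)])
>>> outcome_constraints = [lambda Z: Z[..., 0]]  # feasible iff the first output <= 0
>>> p_feature, p_outcome = estimate_feasible_volume(bounds, model, outcome_constraints)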
Multi-Objective Utilities¶
Abstract Box Decompositions¶
Box decomposition algorithms.
- 
class botorch.utils.multi_objective.box_decompositions.box_decomposition.BoxDecomposition(ref_point, sort, Y=None)[source]¶
- Bases: torch.nn.modules.module.Module, abc.ABC
An abstract class for box decompositions.
Note: Internally, we store the negative reference point (minimization).
Initialize BoxDecomposition.
Parameters
- ref_point (Tensor) – A m-dim tensor containing the reference point.
- sort (bool) – A boolean indicating whether to sort the Pareto frontier.
- Y (Optional[Tensor]) – A (batch_shape) x n x m-dim tensor of outcomes.
 
 - 
property pareto_Y¶
- This returns the non-dominated set. - Return type
- Tensor
- Returns
- A n_pareto x m-dim tensor of outcomes. 
 
 - 
property ref_point¶
- Get the reference point. - Return type
- Tensor
- Returns
- A m-dim tensor of outcomes. 
 
 - 
property Y¶
- Get the raw outcomes. - Return type
- Tensor
- Returns
- A n x m-dim tensor of outcomes. 
 
 - 
abstract get_hypercell_bounds()[source]¶
- Get the bounds of each hypercell in the decomposition. - Return type
- Tensor
- Returns
- A 2 x num_cells x num_outcomes-dim tensor containing the lower and upper vertices bounding each hypercell.
 
 
 - 
update(Y)[source]¶
- Update non-dominated front and decomposition.
Parameters
- Y (Tensor) – A (batch_shape) x n x m-dim tensor of outcomes.
- Return type
- None
 
 - 
training: bool¶
 
Box Decomposition Utilities¶
Utilities for box decomposition algorithms.
Box Decompositions [DEPRECATED - use botorch.utils.multi_objective.box_decompositions]¶
DEPRECATED - Box decomposition algorithms. Use botorch.utils.multi_objective.box_decompositions instead.
Hypervolume¶
Hypervolume Utilities.
References
- Fonseca2006
- C. M. Fonseca, L. Paquete, and M. Lopez-Ibanez. An improved dimension-sweep algorithm for the hypervolume indicator. In IEEE Congress on Evolutionary Computation, pages 1157-1163, Vancouver, Canada, July 2006. 
- 
class botorch.utils.multi_objective.hypervolume.Hypervolume(ref_point)[source]¶
- Bases: object
Hypervolume computation dimension sweep algorithm from [Fonseca2006].
Adapted from Simon Wessing's implementation of the algorithm (Variant 3, Version 1.2) in [Fonseca2006] in PyMOO: https://github.com/msu-coinlab/pymoo/blob/master/pymoo/vendor/hv.py
Maximization is assumed.
TODO: write this in C++ for faster looping.
Initialize hypervolume object.
Parameters
- ref_point (Tensor) – m-dim Tensor containing the reference point.
 - 
property ref_point¶
- Get reference point (for maximization). - Return type
- Tensor
- Returns
- A m-dim tensor containing the reference point. 
 
 
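Example (a usage sketch; the compute method follows the BoTorch API but is not listed above, so treat it as an assumption)
>>> ref_point = torch.tensor([0.0, 0.0])
>>> hv = Hypervolume(ref_point)
>>> pareto_Y = torch.tensor([[1.0, 2.0], [2.0, 1.0]])
>>> volume = hv.compute(pareto_Y)  # union of the two dominated boxes: 2 + 2 - 1 = 3.0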
- 
botorch.utils.multi_objective.hypervolume.sort_by_dimension(nodes, i)[source]¶
- Sorts the list of nodes in-place by the specified objective.
Parameters
- nodes (List[Node]) – A list of Nodes.
- i (int) – The index of the objective to sort by.
 
- Return type
- None
 
- 
class botorch.utils.multi_objective.hypervolume.Node(m, dtype, device, data=None)[source]¶
- Bases: object
Node in the MultiList data structure.
Initialize Node.
Parameters
- m (int) – The number of objectives.
- dtype (dtype) – The dtype.
- device (device) – The device.
- data (Optional[Tensor]) – The tensor data to be stored in this Node.
 
 
- 
class botorch.utils.multi_objective.hypervolume.MultiList(m, dtype, device)[source]¶
- Bases: object
A special data structure used in hypervolume computation.
It consists of several doubly linked lists that share common nodes. Every node has multiple predecessors and successors, one in every list.
Initialize m doubly linked lists.
Parameters
- m (int) – The number of doubly linked lists.
- dtype (dtype) – The dtype.
- device (device) – The device.
 
 - 
append(node, index)[source]¶
- Appends a node to the end of the list at the given index.
Parameters
- node (Node) – The new node.
- index (int) – The index where the node should be appended.
 
- Return type
- None
 
 - 
extend(nodes, index)[source]¶
- Extends the list at the given index with the nodes.
Parameters
- nodes (List[Node]) – A list of nodes to append at the given index.
- index (int) – The index where the nodes should be appended.
 
- Return type
- None
 
 - 
reinsert(node, index, bounds)[source]¶
- Re-inserts the node at its original position.
Re-inserts the node at its original position in all lists in [0, 'index'] before it was removed. This method assumes that the next and previous nodes of the node that is reinserted are in the list.
Parameters
- node (Node) – The node.
- index (int) – The upper bound on the range of indices.
- bounds (Tensor) – A 2 x m-dim tensor of bounds on the objectives.
 
- Return type
- None
 
 
Non-dominated Partitionings¶
Algorithms for partitioning the non-dominated space into rectangles.
References
- Couckuyt2012
- I. Couckuyt, D. Deschrijver and T. Dhaene, “Towards Efficient Multiobjective Optimization: Multiobjective statistical criterions,” 2012 IEEE Congress on Evolutionary Computation, Brisbane, QLD, 2012, pp. 1-8. 
- 
class botorch.utils.multi_objective.box_decompositions.non_dominated.NondominatedPartitioning(ref_point, Y=None, alpha=0.0)[source]¶
- Bases: botorch.utils.multi_objective.box_decompositions.box_decomposition.BoxDecomposition
A class for partitioning the non-dominated space into hyper-cells.
Note: this assumes maximization. Internally, it multiplies by -1 and performs the decomposition under minimization. TODO: use maximization internally as well.
Note: it is only feasible to use this algorithm to compute an exact decomposition of the non-dominated space for m < 5 objectives (alpha = 0.0).
The alpha parameter can be increased to obtain an approximate partitioning faster. The alpha is a fraction of the total hypervolume encapsulating the entire Pareto set. When a hypercell's volume divided by the total hypervolume is less than alpha, we discard the hypercell. See Figure 2 in [Couckuyt2012] for a visual representation.
This PyTorch implementation of the binary partitioning algorithm ([Couckuyt2012]) is adapted from the numpy/tensorflow implementation at: https://github.com/GPflow/GPflowOpt/blob/master/gpflowopt/pareto.py.
TODO: replace this with a more efficient decomposition. E.g. https://link.springer.com/content/pdf/10.1007/s10898-019-00798-7.pdf
Initialize NondominatedPartitioning.
Parameters
- ref_point (Tensor) – A m-dim tensor containing the reference point.
- Y (Optional[Tensor]) – A (batch_shape) x n x m-dim tensor.
- alpha (float) – A threshold fraction of total volume used in an approximate decomposition.
 
 - 
partition_space_2d()[source]¶
- Partition the non-dominated space into disjoint hypercells. - This direct method works for m=2 outcomes. - Return type
- None
 
 - 
get_hypercell_bounds()[source]¶
- Get the bounds of each hypercell in the decomposition.
- Return type
- Tensor
- Returns
- A 2 x num_cells x num_outcomes-dim tensor containing the lower and upper vertices bounding each hypercell.
 
 
 - 
compute_hypervolume()[source]¶
- Compute the hypervolume for the given reference point.
Note: This assumes minimization internally.
This method computes the hypervolume of the non-dominated space and subtracts it from the hypervolume of the region between the ideal point and the reference point, which yields the dominated hypervolume.
Note there are much more efficient alternatives for computing hypervolume when m > 2 (which do not require partitioning the non-dominated space). Given such a partitioning, this method is quite fast.
Return type
- Tensor
- Returns
- (batch_shape)-dim tensor containing the dominated hypervolume. 
 
 - 
training: bool¶
 
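Example (a usage sketch using the documented accessors; values are illustrative and maximization is assumed)
>>> ref_point = torch.tensor([0.0, 0.0])
>>> Y = torch.tensor([[1.0, 2.0], [2.0, 1.0], [0.5, 0.5]])  # last point is dominated
>>> partitioning = NondominatedPartitioning(ref_point=ref_point, Y=Y)
>>> cell_bounds = partitioning.get_hypercell_bounds()  # 2 x num_cells x 2
>>> volume = partitioning.compute_hypervolume()  # dominated hypervolume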
Pareto¶
- 
botorch.utils.multi_objective.pareto.is_non_dominated(Y, deduplicate=True)[source]¶
- Computes the non-dominated front.
Note: this assumes maximization.
Parameters
- Y (Tensor) – A (batch_shape) x n x m-dim tensor of outcomes.
- deduplicate (bool) – A boolean indicating whether to only return unique points on the Pareto frontier.
 
- Return type
- Tensor
- Returns
- A (batch_shape) x n-dim boolean tensor indicating whether each point is non-dominated. 
 
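Example (maximization; the last point is dominated by both of the others)
>>> Y = torch.tensor([[1.0, 2.0], [2.0, 1.0], [0.5, 0.5]])
>>> is_non_dominated(Y)
tensor([ True,  True, False])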
Scalarization¶
Helper utilities for constructing scalarizations.
References
- Knowles2005
- J. Knowles, “ParEGO: a hybrid algorithm with on-line landscape approximation for expensive multiobjective optimization problems,” in IEEE Transactions on Evolutionary Computation, vol. 10, no. 1, pp. 50-66, Feb. 2006. 
- 
botorch.utils.multi_objective.scalarization.get_chebyshev_scalarization(weights, Y, alpha=0.05)[source]¶
- Construct an augmented Chebyshev scalarization.
Outcomes are first normalized to [0, 1] and then an augmented Chebyshev scalarization is applied:
objective(y) = min(w * y) + alpha * sum(w * y)
Note: this assumes maximization.
See [Knowles2005] for details. This scalarization can be used with qExpectedImprovement to implement q-ParEGO as proposed in [Daulton2020qehvi].
Parameters
- weights (Tensor) – A m-dim tensor of weights.
- Y (Tensor) – A n x m-dim tensor of observed outcomes, which are used for scaling the outcomes to [0, 1].
- alpha (float) – Parameter governing the influence of the weighted sum term. The default value comes from [Knowles2005].
 
- Return type
- Callable[[Tensor, Optional[Tensor]], Tensor]
- Returns
- Transform function using the objective weights. 
Example
>>> weights = torch.tensor([0.75, 0.25])
>>> transform = get_chebyshev_scalarization(weights, Y)
