botorch.fit
Model fitting routines.
- botorch.fit.DEFAULT_WARNING_FILTER(w, logging_patterns={10: re.compile('TOTAL NO. of (ITERATIONS REACHED LIMIT|f AND g EVALUATIONS EXCEEDS LIMIT)')})
- Default warning resolution policy: retry upon encountering an OptimizationWarning that does not match any logging pattern.
- Parameters:
- w (WarningMessage) – Candidate for filtering.
- logging_patterns (Dict[int, Pattern]) – Dictionary mapping logging levels to regular expressions. Warning messages are compared against these expressions, and matches are awarded on a first-come-first-served basis when iterating through the dictionary.
 
- Returns:
- Boolean indicating whether the warning is unresolved. 
- Return type:
- bool 
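
The filtering logic can be sketched in plain Python. This is a simplified re-implementation for illustration, not the library's actual code; the `OptimizationWarning` class here is an assumed stand-in for BoTorch's warning type.

```python
import re
import warnings
from warnings import WarningMessage

# Stand-in for botorch's OptimizationWarning (assumption: a plain Warning subclass).
class OptimizationWarning(Warning):
    pass

# Default pattern: L-BFGS-B iteration/evaluation limit messages are mapped to
# logging level 10 (DEBUG) rather than triggering a retry.
DEFAULT_PATTERNS = {
    10: re.compile(
        "TOTAL NO. of (ITERATIONS REACHED LIMIT|f AND g EVALUATIONS EXCEEDS LIMIT)"
    ),
}

def default_warning_filter(w: WarningMessage, logging_patterns=DEFAULT_PATTERNS) -> bool:
    """Return True if the warning is unresolved (i.e. fitting should be retried)."""
    # First-come-first-served: the first matching pattern resolves the warning.
    for _level, pattern in logging_patterns.items():
        if pattern.search(str(w.message)):
            return False  # resolved: logged instead of retried
    # Unmatched OptimizationWarnings are unresolved and trigger a retry.
    return issubclass(w.category, OptimizationWarning)

# Build real WarningMessage instances by catching warnings.
with warnings.catch_warnings(record=True) as ws:
    warnings.simplefilter("always")
    warnings.warn("TOTAL NO. of ITERATIONS REACHED LIMIT", OptimizationWarning)
    warnings.warn("Optimization failed to converge", OptimizationWarning)
    warnings.warn("unrelated message", UserWarning)

results = [default_warning_filter(w) for w in ws]
print(results)  # [False, True, False]: only the unmatched OptimizationWarning is unresolved
```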
 
- botorch.fit.fit_gpytorch_mll(mll, optimizer=None, optimizer_kwargs=None, **kwargs)
- Clearing house for fitting models passed as GPyTorch MarginalLogLikelihoods.
- Parameters:
- mll (MarginalLogLikelihood) – A GPyTorch MarginalLogLikelihood instance.
- optimizer (Optional[Callable]) – User-specified optimization algorithm. When optimizer is None, this keyword argument is omitted when calling the dispatcher.
- optimizer_kwargs (Optional[dict]) – A dictionary of keyword arguments passed when calling optimizer.
- **kwargs (Any) – Keyword arguments passed down through the dispatcher to fit subroutines. Unexpected keywords are ignored.
 
- Returns:
- The mll instance. If fitting succeeded, then mll will be in evaluation mode, i.e. mll.training == False. Otherwise, mll will be in training mode. 
- Return type:
- MarginalLogLikelihood 
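
The documented contract — a successful fit returns the mll in evaluation mode, a failed fit leaves it in training mode — can be sketched with toy stand-ins. `ToyMLL` and `toy_fit` below are hypothetical illustrations of that contract, not BoTorch code.

```python
class ToyMLL:
    """Minimal stand-in for a GPyTorch MarginalLogLikelihood (assumption)."""
    def __init__(self):
        self.training = True
    def train(self):
        self.training = True
        return self
    def eval(self):
        self.training = False
        return self

def toy_fit(mll, optimizer=None, optimizer_kwargs=None, **kwargs):
    """Sketch of the fit_gpytorch_mll contract, not its implementation."""
    mll.train()  # fitting happens in training mode
    try:
        # A real implementation dispatches to an optimizer here;
        # optimizer=None means the keyword is omitted and a default is used.
        opt = optimizer if optimizer is not None else (lambda m, **kw: m)
        opt(mll, **(optimizer_kwargs or {}))
    except Exception:
        return mll  # failure: mll stays in training mode
    return mll.eval()  # success: mll is returned in evaluation mode

fitted = toy_fit(ToyMLL())
print(fitted.training)  # False: successful fit leaves mll in eval mode

def bad_optimizer(m, **kw):
    raise RuntimeError("optimizer diverged")

failed = toy_fit(ToyMLL(), optimizer=bad_optimizer)
print(failed.training)  # True: failed fit stays in training mode
```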
 
- botorch.fit.fit_gpytorch_model(mll, optimizer=None, optimizer_kwargs=None, exclude=None, max_retries=None, **kwargs)
- Convenience method for fitting GPyTorch models using the legacy API. For more details, see fit_gpytorch_mll.
- Parameters:
- mll (MarginalLogLikelihood) – A GPyTorch MarginalLogLikelihood instance.
- optimizer (Optional[Callable[[MarginalLogLikelihood], Tuple[MarginalLogLikelihood, Any]]]) – User-specified optimization algorithm. When optimizer is None, this keyword argument is omitted when calling the dispatcher from inside fit_gpytorch_mll.
- optimizer_kwargs (Optional[dict]) – A dictionary of keyword arguments passed when calling optimizer.
- exclude (Optional[Iterable[str]]) – Legacy argument for specifying parameters x that should be held fixed during optimization. Internally, used to temporarily set x.requires_grad to False.
- max_retries (Optional[int]) – Legacy name for max_attempts. When max_retries is None, this keyword argument is omitted when calling fit_gpytorch_mll.
- **kwargs (Any) – Keyword arguments passed through to fit_gpytorch_mll.
 
- Return type:
- MarginalLogLikelihood 
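
The `exclude` mechanism — temporarily setting `requires_grad` to False for the named parameters and restoring it afterwards — can be sketched as a context manager. The `Param` class and `frozen` helper below are illustrative assumptions, not BoTorch internals.

```python
from contextlib import contextmanager

class Param:
    """Toy stand-in for a torch parameter with a requires_grad flag (assumption)."""
    def __init__(self, requires_grad=True):
        self.requires_grad = requires_grad

@contextmanager
def frozen(named_params, exclude):
    """Temporarily set requires_grad=False for parameters named in `exclude`."""
    saved = {}
    for name in exclude:
        saved[name] = named_params[name].requires_grad
        named_params[name].requires_grad = False
    try:
        yield  # optimization would run here with the excluded parameters fixed
    finally:
        # Restore the original flags, whether or not optimization succeeded.
        for name, flag in saved.items():
            named_params[name].requires_grad = flag

params = {"likelihood.noise": Param(), "covar_module.lengthscale": Param()}
with frozen(params, exclude=["likelihood.noise"]):
    inside = {k: p.requires_grad for k, p in params.items()}
after = {k: p.requires_grad for k, p in params.items()}
print(inside)  # {'likelihood.noise': False, 'covar_module.lengthscale': True}
print(after)   # {'likelihood.noise': True, 'covar_module.lengthscale': True}
```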
 
- botorch.fit.fit_fully_bayesian_model_nuts(model, max_tree_depth=6, warmup_steps=512, num_samples=256, thinning=16, disable_progbar=False, jit_compile=False)
- Fit a fully Bayesian model using the No-U-Turn Sampler (NUTS).
- Parameters:
- model (Union[SaasFullyBayesianSingleTaskGP, SaasFullyBayesianMultiTaskGP]) – The fully Bayesian model to be fitted.
- max_tree_depth (int) – Maximum tree depth for NUTS.
- warmup_steps (int) – The number of burn-in steps for NUTS.
- num_samples (int) – The number of MCMC samples. Note that with thinning, num_samples / thinning samples are retained.
- thinning (int) – The amount of thinning. Every nth sample is retained.
- disable_progbar (bool) – A boolean indicating whether to disable the progress bar and diagnostics during MCMC.
- jit_compile (bool) – Whether to use JIT compilation. Using JIT may be ~2x faster (rough estimate), but it also increases memory usage and sometimes results in runtime errors, e.g. https://github.com/pyro-ppl/pyro/issues/3136.
 
- Return type:
- None 
- Example
>>> gp = SaasFullyBayesianSingleTaskGP(train_X, train_Y)
>>> fit_fully_bayesian_model_nuts(gp)
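
The interaction between num_samples and thinning can be checked with simple index arithmetic; this is plain Python bookkeeping, not the Pyro sampler itself.

```python
def retained_samples(num_samples: int, thinning: int) -> int:
    """Number of MCMC draws kept after thinning: every thinning-th draw is retained."""
    # Draws at indices 0, thinning, 2*thinning, ... survive thinning.
    return len(range(0, num_samples, thinning))

# With the defaults num_samples=256 and thinning=16, 256 / 16 = 16
# hyperparameter samples remain in the fitted model.
print(retained_samples(256, 16))  # 16
```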
