Struct sampling::wang_landau::WangLandauAdaptive
pub struct WangLandauAdaptive<Hist, R, E, S, Res, Energy> { /* private fields */ }
Adaptive WangLandau 1/t
- please cite
Yannick Feld and Alexander K. Hartmann, “Large-deviations of the basin stability of power grids,” Chaos 29:113113 (2019), DOI 10.1063/1.5121415
as this adaptive approach was first used and described in this paper. Also cite the following
- The 1/t Wang Landau approach comes from this paper
R. E. Belardinelli and V. D. Pereyra, “Fast algorithm to calculate density of states,” Phys. Rev. E 75: 046701 (2007), DOI 10.1103/PhysRevE.75.046701
- The original Wang Landau algorithm comes from this paper
F. Wang and D. P. Landau, “Efficient, multiple-range random walk algorithm to calculate the density of states,” Phys. Rev. Lett. 86, 2050–2053 (2001), DOI 10.1103/PhysRevLett.86.2050
Implementations
impl<Hist, R, E, S, Res, Energy> WangLandauAdaptive<Hist, R, E, S, Res, Energy>
where
    Hist: Histogram + HistogramVal<Energy>,
pub fn is_initialized(&self) -> bool
Check if self is initialized
- if this returns true, you can begin the Wang Landau simulation
- otherwise call one of the self.init* methods
impl<R, E, S, Res, Hist, Energy> WangLandauAdaptive<Hist, R, E, S, Res, Energy>

pub fn min_step_size(&self) -> usize

pub fn max_step_size(&self) -> usize

pub fn is_rebuilding_statistics(&self) -> bool
Is the simulation in the process of rebuilding the statistics, i.e., is it currently trying many different step sizes?
pub fn finished_rebuilding_statistics(&self) -> bool
Has the simulation finished the process of rebuilding the statistics, i.e., is it currently not trying many different step sizes?
pub fn fraction_of_statistics_gathered(&self) -> f64
Tracks progress
- tracks progress until self.is_rebuilding_statistics becomes false
- the returned value is always in the range 0 <= val <= 1.0
pub fn fraction_accepted_current(&self) -> f64
Fraction of steps accepted since the statistics were reset the last time
- (steps accepted since last reset) / (steps since last reset)
pub fn estimate_statistics(&self) -> Result<Vec<f64>, WangLandauErrors>
Estimate accept/reject statistics
- contains a list of estimated probabilities for accepting a step of the corresponding step size
- list[i] corresponds to step size i + self.min_step
- runtime: O(trial_step_max - trial_step_min)
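The index-to-step-size mapping described above can be sketched as follows. This is a toy illustration; step_size_of is a hypothetical helper, not part of the crate:

```rust
// Toy illustration of the documented indexing: entry i of the
// estimate_statistics result describes step size i + trial_step_min.
// `step_size_of` is a made-up helper, not part of the crate.
fn step_size_of(index: usize, trial_step_min: usize) -> usize {
    index + trial_step_min
}

fn main() {
    // with trial_step_min = 4, entry 0 describes step size 4
    // and entry 2 describes step size 6
    assert_eq!(step_size_of(0, 4), 4);
    assert_eq!(step_size_of(2, 4), 6);
    println!("ok");
}
```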
impl<R, E, S, Res, Hist, Energy> WangLandauAdaptive<Hist, R, E, S, Res, Energy>

pub fn samples_per_trial(&self) -> usize
- how often a specific step size should be tried before estimating the fraction of accepted steps resulting from that step size
- this number was used to create a trial list of appropriate length
impl<R, E, S, Res, Hist, Energy> WangLandauAdaptive<Hist, R, E, S, Res, Energy>

pub fn new(
    log_f_threshold: f64,
    ensemble: E,
    rng: R,
    samples_per_trial: usize,
    trial_step_min: usize,
    trial_step_max: usize,
    min_best_of_count: usize,
    best_of_threshold: f64,
    histogram: Hist,
    check_refine_every: usize
) -> Result<Self, WangLandauErrors>
New WangLandauAdaptive
- log_f_threshold: threshold for the simulation
- ensemble: ensemble used for the simulation
- rng: random number generator used
- samples_per_trial: how often a specific step size should be tried before estimating the fraction of accepted steps resulting from that step size
- trial_step_min and trial_step_max: the step sizes tried are [trial_step_min, trial_step_min + 1, …, trial_step_max]
- min_best_of_count: after estimating, use at least the best min_best_of_count step sizes found
- best_of_threshold: after estimating, use all step sizes for which abs(acceptance_rate - 0.5) <= best_of_threshold holds true
- histogram: how your energy will be binned etc.
- check_refine_every: how often to check if log_f can be refined

Important
- you need to call one of the self.init* methods before starting the Wang Landau simulation! You can check with self.is_initialized()
- returns Err if trial_step_max < trial_step_min
- returns Err if log_f_threshold <= 0.0
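The two documented error conditions can be sketched as a standalone check. The toy ParamError enum below stands in for the crate's WangLandauErrors and is invented for this illustration:

```rust
// Sketch of the documented validity checks on the constructor
// parameters. `ParamError` is a made-up stand-in for WangLandauErrors.
#[derive(Debug, PartialEq)]
enum ParamError {
    InvalidStepRange,
    InvalidLogFThreshold,
}

fn check_params(
    log_f_threshold: f64,
    trial_step_min: usize,
    trial_step_max: usize,
) -> Result<(), ParamError> {
    if trial_step_max < trial_step_min {
        // Err if trial_step_max < trial_step_min
        return Err(ParamError::InvalidStepRange);
    }
    if log_f_threshold <= 0.0 {
        // Err if log_f_threshold <= 0.0
        return Err(ParamError::InvalidLogFThreshold);
    }
    Ok(())
}

fn main() {
    assert!(check_params(1e-6, 1, 10).is_ok());
    assert_eq!(check_params(1e-6, 10, 1), Err(ParamError::InvalidStepRange));
    assert_eq!(check_params(0.0, 1, 10), Err(ParamError::InvalidLogFThreshold));
    println!("ok");
}
```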
pub fn init_mixed_heuristik<F, U>(
    &mut self,
    overlap: NonZeroUsize,
    mid: U,
    energy_fn: F,
    step_limit: Option<u64>
) -> Result<(), WangLandauErrors>
where
    F: Fn(&mut E) -> Option<Energy>,
    Hist: HistogramIntervalDistance<Energy>,
    U: One + Bounded + WrappingAdd + Eq + PartialOrd,
Find a valid starting point
- if the ensemble is already at a valid starting point, the ensemble is left unchanged (as long as your energy calculation does not change the ensemble)
- overlap: see trait HistogramIntervalDistance. Should be smaller than the number of bins in your histogram, e.g., overlap = 3 if you have 200 bins
- mid: should be something like 128u8, 0i8 or 0i16. It is very unlikely that using a type with more than 16 bits makes sense for mid
- step_limit: Some(val) -> val is the maximum number of steps tried; if no valid state is found, an error is returned. None -> will loop until either a valid state is found, or forever
- alternates between the greedy and the interval heuristic every time a wrapping counter passes mid or U::min_value()

Parameter
- energy_fn: function calculating Some(energy) of the system, or rather of the parameter of which you wish to obtain the probability distribution. Has to be the same function as used for the Wang Landau simulation later. If there are any states for which the calculation is invalid, None should be returned
- steps resulting in ensembles for which energy_fn(&mut ensemble) is None will always be rejected
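The alternation driven by the wrapping counter can be modeled like this. A toy with u8 and mid = 128u8; the counter logic and names are illustrative, not crate code:

```rust
// Toy model of the documented alternation: a wrapping counter toggles
// the active heuristic every time it passes `mid` or the type's
// minimum value. Illustrative only, not the crate's implementation.
fn heuristic_switches(mid: u8, steps: usize) -> usize {
    let mut counter: u8 = 0;
    let mut switches = 0;
    for _ in 0..steps {
        counter = counter.wrapping_add(1);
        if counter == mid || counter == u8::MIN {
            // here the real code would swap between the greedy
            // and the interval heuristic
            switches += 1;
        }
    }
    switches
}

fn main() {
    // with mid = 128u8, the counter passes 128 once and wraps past 0
    // once per 256 steps, so the heuristic is swapped twice
    assert_eq!(heuristic_switches(128, 256), 2);
    println!("ok");
}
```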
pub fn init_interval_heuristik<F>(
    &mut self,
    overlap: NonZeroUsize,
    energy_fn: F,
    step_limit: Option<u64>
) -> Result<(), WangLandauErrors>
Find a valid starting point
- if the ensemble is already at a valid starting point, the ensemble is left unchanged (as long as your energy calculation does not change the ensemble)
- uses overlapping intervals. Accepts a step if the resulting ensemble is in the same interval as before, or in an interval closer to the target interval

Parameter
- step_limit: Some(val) -> val is the maximum number of steps tried; if no valid state is found, an error is returned. None -> will loop until either a valid state is found, or forever
- energy_fn: function calculating Some(energy) of the system, or rather of the parameter of which you wish to obtain the probability distribution. Has to be the same function as used for the Wang Landau simulation later. If there are any states for which the calculation is invalid, None should be returned
- steps resulting in ensembles for which energy_fn(&mut ensemble) is None will always be rejected
pub fn init_greedy_heuristic<F>(
    &mut self,
    energy_fn: F,
    step_limit: Option<u64>
) -> Result<(), WangLandauErrors>
Find a valid starting point
- if the ensemble is already at a valid starting point, the ensemble is left unchanged (as long as your energy calculation does not change the ensemble)
- uses a greedy heuristic: performs Markov steps. If a step brought us closer to the target interval, it is accepted; otherwise it is rejected

Parameter
- step_limit: Some(val) -> val is the maximum number of steps tried; if no valid state is found, an error is returned. None -> will loop until either a valid state is found, or forever
- energy_fn: function calculating Some(energy) of the system, or rather of the parameter of which you wish to obtain the probability distribution. Has to be the same function as used for the Wang Landau simulation later. If there are any states for which the calculation is invalid, None should be returned
- steps resulting in ensembles for which energy_fn(&mut ensemble) is None will always be rejected
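The greedy accept/reject idea can be sketched with a plain integer "energy" walking toward a target interval. All names, types and the deterministic proposal list are invented for this illustration:

```rust
// Illustrative greedy search toward a target energy interval [low, high]:
// a proposed step is kept only if it brings the energy strictly closer.
// Not the crate's code; a deterministic toy instead of random Markov steps.
fn interval_distance(energy: i32, low: i32, high: i32) -> i32 {
    if energy < low {
        low - energy
    } else if energy > high {
        energy - high
    } else {
        0
    }
}

fn greedy_search(mut energy: i32, low: i32, high: i32, proposals: &[i32]) -> i32 {
    for &step in proposals {
        let trial = energy + step;
        // accept only if the trial state is closer to the target interval
        if interval_distance(trial, low, high) < interval_distance(energy, low, high) {
            energy = trial;
        }
        if interval_distance(energy, low, high) == 0 {
            break; // valid starting point found
        }
    }
    energy
}

fn main() {
    // starting at 0 with target [10, 20]: the +3 steps are accepted,
    // the -5 step would move away and is rejected
    assert_eq!(greedy_search(0, 10, 20, &[3, 3, -5, 3, 3]), 12);
    println!("ok");
}
```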
pub fn wang_landau_while<F, W>(&mut self, energy_fn: F, condition: W)
Wang Landau
- performs the Wang Landau simulation
- calls self.wang_landau_step(energy_fn) until self.is_finished() or condition(&self) is false

Important
- you have to call one of the self.init* functions before calling this one. You can check with self.is_initialized()
- will panic otherwise, at least in debug mode
pub fn wang_landau_while_acc<F, W>(&mut self, energy_fn: F, condition: W)
Wang Landau simulation
- similar to wang_landau_while

Difference
- uses accumulating Markov steps, i.e., it updates the energy during the Markov steps. This can be more efficient. Therefore energy_fn now gets the state of the ensemble after the Markov step (&E), the step that was performed (&S), as well as a mutable reference to the old energy (&mut Energy), which is to be changed
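The shape of such an accumulating closure can be illustrated with a made-up spin-flip step. The Vec<i8> ensemble, the SpinFlip step type and the spin-sum "energy" are all invented for this sketch:

```rust
// Made-up illustration of the accumulating closure shape
// Fn(&E, &S, &mut Energy): the old energy is updated in place using
// only the performed step. Everything here is invented for the sketch.
struct SpinFlip {
    index: usize,
    old_spin: i8,
}

fn main() {
    let spins: Vec<i8> = vec![1, 1, 1]; // &E: the state AFTER the flip
    let step = SpinFlip { index: 1, old_spin: -1 }; // &S: the performed step
    let mut energy: i32 = 1; // &mut Energy: the spin sum before the flip

    let energy_fn = |e: &Vec<i8>, s: &SpinFlip, en: &mut i32| {
        // add only the difference caused by the flipped spin instead of
        // recomputing the full sum over the whole ensemble
        *en += (e[s.index] - s.old_spin) as i32;
    };
    energy_fn(&spins, &step, &mut energy);
    assert_eq!(energy, 3); // 1 + (1 - (-1))
    println!("ok");
}
```

Updating in place like this is what makes the accumulating variants cheaper when a full energy recomputation would scan the entire ensemble.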
pub unsafe fn wang_landau_while_unsafe<F, W>(
    &mut self,
    energy_fn: F,
    condition: W
)
Wang Landau
- if possible, use self.wang_landau_while() instead - it is safer
- you have mutable access to your ensemble. If you do anything that changes the future outcome of the energy function, the results will be wrong!
- performs the Wang Landau simulation
- calls self.wang_landau_step(energy_fn) until self.is_finished() or condition(&self) is false
Safety
- you have to call one of the self.init* functions before calling this one. You can check with self.is_initialized()
- will panic otherwise, at least in debug mode
- be careful with the mutable access to your energy; it has the potential to break logical invariants of the Wang Landau (and similar) simulations
pub fn wang_landau_convergence<F>(&mut self, energy_fn: F)
Wang Landau
- performs the Wang Landau simulation
- calls self.wang_landau_step(energy_fn) until self.is_finished()

Important
- you have to call one of the self.init* functions before calling this one. You can check with self.is_initialized()
- will panic otherwise, at least in debug mode
pub fn wang_landau_convergence_acc<F>(&mut self, energy_fn: F)
Wang Landau simulation
- similar to wang_landau_convergence

Difference
- uses accumulating Markov steps, i.e., it updates the energy during the Markov steps. This can be more efficient. Therefore energy_fn now gets the state of the ensemble after the Markov step (&E), the step that was performed (&S), as well as a mutable reference to the old energy (&mut Energy), which is to be changed
pub unsafe fn wang_landau_convergence_unsafe<F>(&mut self, energy_fn: F)
Wang Landau
- if possible, use self.wang_landau_convergence() instead - it is safer
- you have mutable access to your ensemble. If you do anything that changes the future outcome of the energy function, the results will be wrong!
- performs the Wang Landau simulation
- calls self.wang_landau_step_unsafe(energy_fn) until self.is_finished()

Safety
- you have to call one of the self.init* functions before calling this one. You can check with self.is_initialized()
- will panic otherwise, at least in debug mode
pub fn wang_landau_step<F>(&mut self, energy_fn: F)
Wang Landau step
- performs a single Wang Landau step

Parameter
- energy_fn: function calculating Some(energy) of the system, or rather of the parameter of which you wish to obtain the probability distribution. If there are any states for which the calculation is invalid, None should be returned
- steps resulting in ensembles for which energy_fn(&mut ensemble) is None will always be rejected

Important
- you have to call one of the self.init* functions before calling this one. You can check with self.is_initialized()
- will panic otherwise, at least in debug mode
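For intuition, the acceptance rule of a Wang Landau step and the 1/t refinement schedule can be sketched generically. This is a textbook-style sketch following Belardinelli & Pereyra (2007), not the crate's internal code:

```rust
// Generic sketch of a Wang Landau acceptance test and the 1/t
// refinement schedule. Not the crate's implementation.

// Accept with probability min(1, g(E_old)/g(E_new)), computed in log
// space to avoid overflow; `uniform` is a random number in (0, 1).
fn wl_accept(ln_g_old: f64, ln_g_new: f64, uniform: f64) -> bool {
    uniform.ln() < ln_g_old - ln_g_new
}

// Halve log_f as in the original algorithm, but do not let it drop
// below the 1/t floor; once the floor dominates, log_f follows 1/t,
// which avoids the error saturation of plain halving.
fn refine_log_f(log_f: f64, time: f64) -> f64 {
    (log_f / 2.0).max(1.0 / time).min(log_f)
}

fn main() {
    // moving toward a less-visited energy (smaller ln g) is always accepted
    assert!(wl_accept(5.0, 3.0, 0.999));
    // early on, plain halving still dominates the 1/t floor
    assert_eq!(refine_log_f(1.0, 10.0), 0.5);
    println!("ok");
}
```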
pub unsafe fn wang_landau_step_unsafe<F>(&mut self, energy_fn: F)
Wang Landau step
- if possible, use self.wang_landau_step() instead - it is safer
- performs a single Wang Landau step

Parameter
- energy_fn: function calculating Some(energy) of the system, or rather of the parameter of which you wish to obtain the probability distribution. If there are any states for which the calculation is invalid, None should be returned
- steps resulting in ensembles for which energy_fn(&mut ensemble) is None will always be rejected

Safety
- you have to call one of the self.init* functions before calling this one. You can check with self.is_initialized()
- will panic otherwise, at least in debug mode
- unsafe, because you have to make sure that the energy_fn function does not change the state of the ensemble in such a way that the result of energy_fn changes when called again. Maybe do cleanup at the beginning of the energy function?
pub fn wang_landau_step_acc<F>(&mut self, energy_fn: F)
Accumulating Wang Landau step
- similar to wang_landau_step

Difference
- this uses accumulating Markov steps, i.e., it calculates the energy during each Markov step, which can be more efficient. This assumes that cloning the energy is cheap, which is true for primitive types like usize or f64
- parameters of energy_fn: &E is the ensemble after the Markov step &S was performed; &mut Energy is the old energy, which has to be changed to the new energy of the system
Trait Implementations
impl<Hist: Clone, R: Clone, E: Clone, S: Clone, Res: Clone, Energy: Clone> Clone for WangLandauAdaptive<Hist, R, E, S, Res, Energy>

fn clone(&self) -> WangLandauAdaptive<Hist, R, E, S, Res, Energy>

fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.

impl<Hist: Debug, R: Debug, E: Debug, S: Debug, Res: Debug, Energy: Debug> Debug for WangLandauAdaptive<Hist, R, E, S, Res, Energy>
impl<'de, Hist, R, E, S, Res, Energy> Deserialize<'de> for WangLandauAdaptive<Hist, R, E, S, Res, Energy>
where
    Hist: Deserialize<'de>,
    R: Deserialize<'de>,
    E: Deserialize<'de>,
    S: Deserialize<'de>,
    Energy: Deserialize<'de>,

fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>
where
    __D: Deserializer<'de>,
impl<Hist, R, E, S, Res, Energy> GlueAble<Hist> for WangLandauAdaptive<Hist, R, E, S, Res, Energy>
where
    Hist: Clone,

fn push_glue_entry_ignoring(&self, job: &mut GlueJob<Hist>, ignore_idx: &[usize])

fn push_glue_entry(&self, job: &mut GlueJob<H>)
impl<Hist, R, E, S, Res, Energy> Serialize for WangLandauAdaptive<Hist, R, E, S, Res, Energy>

impl<Hist, R, E, S, Res, T> TryFrom<WangLandauAdaptive<Hist, R, E, S, Res, T>> for EntropicSampling<Hist, R, E, S, Res, T>
fn try_from(wl: WangLandauAdaptive<Hist, R, E, S, Res, T>) -> Result<Self, Self::Error>
Uses as step size: the first entry of bestof. If bestof is empty, it uses
wl.min_step_size() + (wl.max_step_size() - wl.min_step_size()) / 2
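A small sketch of that fallback, assuming the midpoint of the allowed step-size range is what is intended; the function name is made up:

```rust
// Hypothetical helper mirroring the documented fallback: the midpoint
// of the allowed step-size range. Not part of the crate.
fn fallback_step_size(min_step: usize, max_step: usize) -> usize {
    min_step + (max_step - min_step) / 2
}

fn main() {
    assert_eq!(fallback_step_size(4, 10), 7);
    println!("ok");
}
```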