
Monte Carlo simulation of failure probability of a hydrogen tank (Rust kernel)

This notebook is an element of the free risk-engineering.org courseware. It can be distributed under the terms of the Creative Commons Attribution-ShareAlike licence.

Author: Eric Marsden eric.marsden@risk-engineering.org


This notebook contains an introduction to the use of evcxr, the Rust kernel for Jupyter. We run a Monte Carlo simulation applied to a mechanical design problem, to estimate a failure probability. Rust is less commonly used for this type of analysis than Python: it is slower to write and requires more programming knowledge, but delivers far greater runtime performance.

See the associated course materials for an introduction to the use of stochastic simulation methods and to download this content as a Jupyter/Rust notebook (you can also get a Python version of this code).

Acknowledgement. This problem is taken from UQWorld, which itself extracted it from the article by Bichon et al. (2011) cited below.

Problem statement

The 5-dimensional liquid hydrogen tank problem is a reliability analysis benchmark problem (Bichon et al., 2011). The problem consists in quantifying the failure probability of a liquid hydrogen fuel tank on a space launch vehicle. The structure of the tank is subjected to stresses caused by ullage pressure, head pressure, axial forces due to acceleration, and bending and shear stresses caused by the weight of the fuel.

Three different failure modes exist for the tank:

  • Von Mises stress failure ($P_{VM}$)
  • Isotropic strength failure ($P_{IS}$)
  • Honeycomb buckling failure ($P_{HB}$)

The limit state function of the overall system is then defined as:

$$g(x) = \min(P_{VM}, P_{IS}, P_{HB})$$

where:

$$P_{VM} = \frac{84000 \cdot t_{plate}}{\sqrt{N_x^2 + N_y^2 - N_x N_y + 3 N_{xy}^2}} - 1$$

$$P_{IS} = \frac{84000 \cdot t_{plate}}{|N_y|} - 1$$

$$P_{HB} = 0.847 + 0.96 \cdot x_1 + 0.986 \cdot x_2 - 0.216 \cdot x_3 + 0.077 \cdot x_1^2 + 0.11 \cdot x_2^2 + 0.007 \cdot x_3^2 + 0.378 \cdot x_1 x_2 - 0.106 \cdot x_1 x_3 - 0.11 \cdot x_2 x_3$$

and:

$x_1 = 4 \cdot (t_{plate} - 0.075)$

$x_2 = 20 \cdot (t_h - 0.1)$

$x_3 = -6000 \cdot \left(\frac{1}{N_{xy}} + 0.003\right)$

The failure event is defined as $g(x) \leq 0$.

Input variables

We will make the following assumptions concerning the distributions of our input variables, which are taken to be mutually independent:

Name | Distribution (mean, std. dev.) | Description
$t_{plate}$ | Normal(0.07433, 0.005) | Plate thickness
$t_h$ | Normal(0.1, 0.01) | Honeycomb thickness
$N_x$ | Normal(13, 60) | Load on tank, x-component
$N_y$ | Normal(4751, 48) | Load on tank, y-component
$N_{xy}$ | Normal(-648, 11) | Load on tank, xy-component

We want to use a Monte Carlo simulation to estimate the probability of failure, i.e. the probability that $g(x) \leq 0$.
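
The Monte Carlo estimate of this probability is simply the fraction of simulated runs that fail. Writing $\mathbf{1}$ for the indicator function and $x^{(i)}$ for independent draws from the input distributions above:

$$\hat{p}_f = \frac{1}{N} \sum_{i=1}^{N} \mathbf{1}\left[g(x^{(i)}) \leq 0\right]$$

The standard error of this estimator is $\sqrt{\hat{p}_f (1 - \hat{p}_f)/N}$, which is why a large number of runs is needed to estimate a small failure probability with reasonable relative accuracy.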

Source article: B. J. Bichon, J. M. McFarland, and S. Mahadevan, “Efficient surrogate models for reliability analysis of systems with multiple failure modes”, Reliability Engineering & System Safety, vol. 96, no. 10, pp. 1386-1395, 2011. DOI:10.1016/j.ress.2011.05.008

Infrastructure

We are using a Jupyter kernel implemented in the Rust programming language, which allows much higher performance than Python (see the risk-engineering.org website for a Jupyter/Python notebook that solves the same problem). To use this kernel, you will need to install a Rust toolchain (for instance with apt install cargo) as well as the Jupyter kernel from https://github.com/google/evcxr/. Install the Rust Jupyter kernel with

cargo install evcxr_jupyter
evcxr_jupyter --install

and for more details see https://github.com/google/evcxr/blob/master/evcxr_jupyter/samples/evcxr_jupyter_tour.ipynb

We start by enabling compiler optimization for the Rust kernel and loading some external crates (library packages). The rand and statrs crates provide functionality for random number generation from various probability distributions, including inverse CDF calculations. The ndarray crate implements sophisticated multi-dimensional arrays for Rust, with functionality similar to the ndarray type that is the backbone of Python's NumPy library. The halton crate implements the Halton low-discrepancy sequence.

In [2]:
// turn on compiler optimization for this notebook
:opt 2
Out[2]:
Optimization: 2
In [3]:
// some external crates we need to use
:dep rand = "*"
:dep statrs = "*"
:dep ndarray = "*"
:dep halton = "*"
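
Note that the version specification "*" asks cargo for the latest published release of each crate. Crate APIs evolve over time (in recent statrs releases, for instance, the inverse CDF functionality moved from the InverseCDF trait used below to ContinuousCDF), so to reproduce this notebook exactly it can be safer to pin versions, along these lines (the version numbers shown are illustrative, not a tested combination):

:dep rand = "0.8"
:dep statrs = "0.12"
:dep ndarray = "0.15"
:dep halton = "0.2"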

Method

Start by defining functions that calculate the value of $P_{VM}$, $P_{IS}$ and $P_{HB}$ for some input (note that the input space is 5-dimensional; here each point is represented by five f64 arguments).

In [4]:
// Von Mises stress failure margin
#[allow(unused_variables)]
fn simulated_pvm(t_plate: f64, t_h: f64, n_x: f64, n_y: f64, n_xy: f64) -> f64 {
    let div = n_x*n_x + n_y*n_y - n_x*n_y + 3.0*n_xy*n_xy;
    return -1.0 + 84000.0 * t_plate / div.sqrt();
}

// Isotropic strength failure margin
#[allow(unused_variables)]
fn simulated_pis(t_plate: f64, t_h: f64, n_x: f64, n_y: f64, n_xy: f64) -> f64 {
    return -1.0 + 84000.0 * t_plate / n_y.abs();
}

// Honeycomb buckling failure margin (quadratic response surface in x1, x2, x3)
#[allow(unused_variables)]
fn simulated_phb(t_plate: f64, t_h: f64, n_x: f64, n_y: f64, n_xy: f64) -> f64 {
    let x1 = 4.0 * (t_plate - 0.075);
    let x2 = 20.0 * (t_h - 0.1);
    let x3 = -6000.0 * (1.0/n_xy + 0.003);
    return 0.847 + 0.96 * x1 + 0.986 * x2 - 0.216 * x3 +
        0.077 * x1*x1 + 0.11 * x2*x2 + 0.007 * x3*x3 +
        0.378 * x1*x2 - 0.106 * x1*x3 - 0.11 * x2*x3;
}
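
Each of these functions returns the margin for one failure mode, so the limit state function $g$ defined above is simply their minimum. A small helper along these lines (an optional convenience; the simulation loops below test the three margins directly) makes the connection explicit:

fn limit_state(t_plate: f64, t_h: f64, n_x: f64, n_y: f64, n_xy: f64) -> f64 {
    // g(x) = min(P_VM, P_IS, P_HB); the failure event is g(x) <= 0
    simulated_pvm(t_plate, t_h, n_x, n_y, n_xy)
        .min(simulated_pis(t_plate, t_h, n_x, n_y, n_xy))
        .min(simulated_phb(t_plate, t_h, n_x, n_y, n_xy))
}

Since the minimum of the three margins is non-positive exactly when at least one of them is, testing limit_state(...) <= 0.0 is equivalent to the disjunction of the three tests used in the simulation loops below.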

Now we can estimate the failure probability using a naïve Monte Carlo approach.

In [5]:
extern crate rand;

use rand::distributions::Distribution;
use statrs::distribution::Normal;

fn do_naive_mc(iterations: i32) -> f64 {
    let mut rng = rand::thread_rng();
    let mut failures = 0;

    for _ in 0..iterations {
        // draw an independent sample of each input variable
        let t_plate = Normal::new(0.07433, 0.005).unwrap().sample(&mut rng);
        let t_h = Normal::new(0.1, 0.01).unwrap().sample(&mut rng);
        let n_x = Normal::new(13.0, 60.0).unwrap().sample(&mut rng);
        let n_y = Normal::new(4751.0, 48.0).unwrap().sample(&mut rng);
        let n_xy = Normal::new(-648.0, 11.0).unwrap().sample(&mut rng);
        if simulated_pvm(t_plate, t_h, n_x, n_y, n_xy) <= 0.0 ||
            simulated_pis(t_plate, t_h, n_x, n_y, n_xy) <= 0.0 ||
            simulated_phb(t_plate, t_h, n_x, n_y, n_xy) <= 0.0 {
                failures += 1;
        }
    }
    return failures as f64 / iterations as f64;
}
In [6]:
:timing

do_naive_mc(10_000_000)
Out[6]:
Timing: true
Out[6]:
Took 160ms
Out[6]:
0.0006187
Out[6]:
Took 2456ms
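
Since each simulation run is an independent Bernoulli trial, the binomial standard error gives a quick check on the sampling uncertainty of this estimate:

// approximate 95% confidence interval for the naive MC estimate above
let p = 0.0006187_f64;
let n = 10_000_000.0_f64;
let se = (p * (1.0 - p) / n).sqrt();
println!("{:.3e} +/- {:.1e}", p, 1.96 * se);

which gives roughly $6.19 \times 10^{-4} \pm 1.5 \times 10^{-5}$.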

It’s worth noting that on the author’s machine, this is around 194000 times faster than the corresponding Python code (running in the standard CPython implementation).

Latin Hypercube Sampling approach

The LHS method consists of dividing the input space into a number of equiprobable regions, then taking random samples from each region (see our lecture slides for some background information).

In [7]:
use ndarray::prelude::*;
use statrs::distribution::Uniform;
use statrs::distribution::InverseCDF;
use rand::seq::SliceRandom; // for shuffle

fn latin_hypercube_sample(dimensions: usize, samples: usize) -> Array2::<f64> {
    // number of equiprobable strata ("buckets") per dimension
    const BUCKETS: i32 = 10;
    let mut rng = rand::thread_rng();
    let mut a = Array2::zeros((samples, dimensions));

    for dim in 0..dimensions {
        let mut v: Vec<f64> = Vec::with_capacity(samples);
        for row in 0..samples {
            // assign this row to a bucket, then sample uniformly within the bucket
            let bucket = (BUCKETS as f64 * (row as f64 / samples as f64)).floor();
            v.push(Uniform::new(bucket, bucket+1.0).unwrap().sample(&mut rng) / BUCKETS as f64);
        }
        // shuffle so that bucket assignments are independent across dimensions
        v.shuffle(&mut rng);
        // copy into our ndarray
        for row in 0..samples {
            a[[row, dim]] = v[row];
        }
    }
    return a;
}

fn do_lhs(iterations: usize) -> f64 {
    let mut failures = 0;
    let a = latin_hypercube_sample(5, iterations);
    let dist_t_plate = Normal::new(0.07433, 0.005).unwrap();
    let dist_t_h = Normal::new(0.1, 0.01).unwrap();
    let dist_n_x = Normal::new(13.0, 60.0).unwrap();
    let dist_n_y = Normal::new(4751.0, 48.0).unwrap();
    let dist_n_xy = Normal::new(-648.0, 11.0).unwrap();
    
    for i in 0..iterations {
        // map the uniform samples into the input space via the inverse CDFs
        let t_plate = dist_t_plate.inverse_cdf(a[[i, 0]]);
        let t_h = dist_t_h.inverse_cdf(a[[i, 1]]);
        let n_x = dist_n_x.inverse_cdf(a[[i, 2]]);
        let n_y = dist_n_y.inverse_cdf(a[[i, 3]]);
        let n_xy = dist_n_xy.inverse_cdf(a[[i, 4]]);
        if simulated_pvm(t_plate, t_h, n_x, n_y, n_xy) <= 0.0 ||
            simulated_pis(t_plate, t_h, n_x, n_y, n_xy) <= 0.0 ||
            simulated_phb(t_plate, t_h, n_x, n_y, n_xy) <= 0.0 {
                failures += 1;
        }
    }
    return failures as f64 / iterations as f64;
}
Out[7]:
Took 298ms
In [8]:
do_lhs(10_000_000)
Out[8]:
0.0006224
Out[8]:
Took 9273ms

This is much slower than the “naïve” Monte Carlo approach implemented above, because of the memory initialization and the inverse CDF calculations. However, you can expect the result to be of better quality for the same number of runs than with naïve Monte Carlo (the variance across repeated executions of the simulation will be lower).
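
Note that this implementation stratifies each dimension into 10 fixed equiprobable buckets, whatever the sample count; a textbook Latin hypercube design uses one stratum per sample instead. A minimal sketch of that variant, reusing the crates loaded above:

use rand::Rng; // for gen()

fn full_latin_hypercube_sample(dimensions: usize, samples: usize) -> Array2::<f64> {
    let mut rng = rand::thread_rng();
    let mut a = Array2::zeros((samples, dimensions));
    for dim in 0..dimensions {
        // a random permutation assigns one stratum to each sample
        let mut perm: Vec<usize> = (0..samples).collect();
        perm.shuffle(&mut rng);
        for row in 0..samples {
            // jitter uniformly within the assigned stratum
            let u: f64 = rng.gen();
            a[[row, dim]] = (perm[row] as f64 + u) / samples as f64;
        }
    }
    return a;
}

This can be dropped in as a replacement for latin_hypercube_sample in do_lhs above.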

Halton’s low-discrepancy sequence approach

A low-discrepancy (or quasi-random) sequence is a deterministic mathematical sequence whose points avoid clustering and fill the space roughly uniformly. The Halton sequence is a low-discrepancy sequence that has useful properties for pseudo-stochastic sampling methods (also called “quasi-Monte Carlo” methods).
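
For intuition, the first elements of the base-2 Halton sequence are 1/2, 1/4, 3/4, 1/8, 5/8, 3/8, 7/8, and so on; with the halton crate used below, you can print them directly (the exact starting index depends on the crate):

for x in halton::Sequence::new(2).take(7) {
    println!("{:.4}", x);
}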

In [9]:
use halton;

fn halton_low_discrepancy_sample(dimensions: usize, samples: usize) -> Array2::<f64> {
    // important that these be coprime bases, see https://observablehq.com/@jrus/halton
    let mut coprimes = vec![2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31];
    assert!(dimensions < coprimes.len());
    let mut a = Array2::zeros((samples, dimensions));
    let mut generator = Vec::new();
    for _ in 0..dimensions {
        // skip the first few elements of each sequence, which are poorly distributed
        generator.push(halton::Sequence::new(coprimes.pop().unwrap()).skip(20));
    }
    for row in 0..samples {
        for dim in 0..dimensions {
            a[[row, dim]] = generator[dim].next().unwrap();
        }
    }
    return a;
}

fn do_halton(iterations: usize) -> f64 {
    let mut failures = 0;
    let a = halton_low_discrepancy_sample(5, iterations);
    let dist_t_plate = Normal::new(0.07433, 0.005).unwrap();
    let dist_t_h = Normal::new(0.1, 0.01).unwrap();
    let dist_n_x = Normal::new(13.0, 60.0).unwrap();
    let dist_n_y = Normal::new(4751.0, 48.0).unwrap();
    let dist_n_xy = Normal::new(-648.0, 11.0).unwrap();
    
    for i in 0..iterations {
        let t_plate = dist_t_plate.inverse_cdf(a[[i, 0]]);
        let t_h = dist_t_h.inverse_cdf(a[[i, 1]]);
        let n_x = dist_n_x.inverse_cdf(a[[i, 2]]);
        let n_y = dist_n_y.inverse_cdf(a[[i, 3]]);
        let n_xy = dist_n_xy.inverse_cdf(a[[i, 4]]);
        if simulated_pvm(t_plate, t_h, n_x, n_y, n_xy) <= 0.0 ||
            simulated_pis(t_plate, t_h, n_x, n_y, n_xy) <= 0.0 ||
            simulated_phb(t_plate, t_h, n_x, n_y, n_xy) <= 0.0 {
                failures += 1;
        }
    }
    return failures as f64 / iterations as f64;
}
Out[9]:
Took 307ms
In [10]:
do_halton(10_000_000)
Out[10]:
0.0006221
Out[10]:
Took 4776ms