{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Monte Carlo simulation of failure probability of a hydrogen tank"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"This notebook is an element of the free [risk-engineering.org courseware](https://risk-engineering.org/). It can be distributed under the terms of the [Creative Commons Attribution-ShareAlike licence](https://creativecommons.org/licenses/by-sa/4.0/).\n",
"\n",
"Author: Eric Marsden \n",
"\n",
"---\n",
"\n",
"This notebook contains an introduction to use of the [Rust](https://www.rust-lang.org/) kernel for Jupyter for Monte Carlo simulation applied to a mechanical design problem, used for estimating failure probability. See the [associated course materials](https://risk-engineering.org/monte-carlo-methods/) for an introduction to the use of stochastic simulation methods and to download this content as a Jupyter/Rust notebook.\n",
"\n",
"**Acknowledgement**. This problem is taken from [UQWorld](https://uqworld.org/t/liquid-hydrogen-tank-problem/58), which itself extracted the problem from the article by Bichon et al in 2011 cited below. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Problem statement"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The 5-dimensional liquid hydrogen tank problem is a reliability analysis benchmark problem (Bichon et al., 2011). The problem consists in quantifying the failure probability of a liquid hydrogen fuel tank on a space launch vehicle. The structure of the tank is subjected to stresses caused by ullage pressure, head pressure, axial forces due to acceleration, and bending and shear stresses caused by the weight of the fuel.\n",
"\n",
"\n",
"Three different failure modes exist for the tank:\n",
"\n",
"- Von Mises stress failure ($PVM$)\n",
"- Isotropic strength failure ($PIS$)\n",
"- Honeycomb buckling failure ($PHB$)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The limit state function of the overall system is then defined as:\n",
"\n",
"$$g(x) = \\min(PVM, PIS, PHB)$$\n",
"\n",
"where:\n",
"\n",
"$$PVM = \\frac{84000 \\cdot t_{plate}}{\\sqrt{N_x^2 + N_y^2 - N_xN_y + 3N_{xy}^2}} - 1$$\n",
"\n",
"$$PIS = \\frac{84000 \\cdot t_{plate}}{|N_y|} - 1$$\n",
"\n",
"$$PHB = 0.847 + 0.96 \\cdot x_1 + 0.986 \\cdot x_2 - 0.216 \\cdot x_3 + 0.077 \\cdot x_1^2 + 0.11 \\cdot x_2^2\n",
"+ 0.007 \\cdot x_3^2 + 0.378 \\cdot x_1x_2 - 0.106 \\cdot x_1x_3 - 0.11 \\cdot x_2x_3$$\n",
"\n",
"and:\n",
"\n",
"$x_1 = 4 \\cdot (t_{plate} - 0.075)$\n",
"\n",
"$x_2 = 20 \\cdot (t_h - 0.1)$\n",
"\n",
"$x_3 = -6000 \\cdot \\left(\\frac{1}{N_{xy}} + 0.003\\right)$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The failure event is defined as $g(x) ≤ 0$."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Input variables\n",
"\n",
"We will make the following **assumptions** concerning the distribution of our input variables, which we assume are mutually independent:\n",
"\n",
"| Name | Distribution | Description |\n",
"|-------------|------------------------|-----------------------------|\n",
"| $t_{plate}$ | Normal(0.07433, 0.005) | Plate thickness |\n",
"| $t_h$ | Normal(0.1, 0.01) | Honeycomb thickness |\n",
"| $N_x$ | Normal(13, 60) | Load on tank, x-component |\n",
"| $N_y$ | Normal(4751, 48) | Load on tank, y-component |\n",
"| $N_{xy}$ | Normal(-648, 11) | Load on tank, xy-component |\n",
"\n",
"\n",
"We want to use a Monte Carlo simulation to estimate the probability of failure, i.e. the probability that $g(x) ≤ 0$.\n",
"\n",
"Source article: B. J. Bichon, J. M. McFarland, and S. Mahadevan, “Efficient surrogate models for reliability analysis of systems with multiple failure modes”, Reliability Engineering & System Safety, vol. 96, no. 10, pp. 1386-1395, 2011. DOI:10.1016/j.ress.2011.05.008"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Infrastructure\n",
"\n",
"We are using a Jupyter kernel implemented in the [Rust programming language](https://www.rust-lang.org/), which allows much higher performance than Python (see the [risk-engineering.org website](https://risk-engineering.org/monte-carlo-methods/) for a Jupyter/Python notebook that solves the same problem). To use this kernel you will need to install a Rust toolchain (for instance with `apt install cargo`) as well as the Jupyter kernel from https://github.com/google/evcxr/. Install the Rust Jupyter kernel with \n",
"\n",
" cargo install evcxr_jupyter\n",
" evcxr_jupyter --install\n",
" \n",
"and for more details see https://github.com/google/evcxr/blob/master/evcxr_jupyter/samples/evcxr_jupyter_tour.ipynb"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We start by enabling optimization for the Rust kernel and loading some external crates (library packages). The `rand` and `statrs` crates provide functionality for random number generation from different probability distributions, including inverse CDF calculations. The `ndarray` crate implements sophisticated multi-dimensional arrays for Rust, with similar functionality to the `ndarray` type that is the backbone of Python's NumPy library. The `halton` crate implements Halton's low-discrepancy sequence. "
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Optimization: 2\n"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"// turn on compiler optimization for this notebook\n",
":opt 2"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"// some external crates we need to use\n",
":dep rand = \"*\"\n",
":dep statrs = \"*\"\n",
":dep ndarray = \"*\"\n",
":dep halton = \"*\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Method \n",
"\n",
"We start by defining three functions that calculate the values of PVM, PIS and PHB for a given input (note that the input space is 5-dimensional, here represented as five function arguments). "
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"#[allow(unused_variables)]\n",
"fn simulated_pvm(t_plate: f64, t_h: f64, n_x: f64, n_y: f64, n_xy: f64) -> f64 {\n",
" let div = n_x*n_x + n_y*n_y - n_x*n_y + 3.0*n_xy*n_xy;\n",
" return -1.0 + 84000.0 * t_plate / div.sqrt();\n",
"}\n",
"\n",
"#[allow(unused_variables)]\n",
"fn simulated_pis(t_plate: f64, t_h: f64, n_x: f64, n_y: f64, n_xy: f64) -> f64 {\n",
" return -1.0 + 84000.0 * t_plate / n_y.abs();\n",
"}\n",
"\n",
"#[allow(unused_variables)]\n",
"fn simulated_phb(t_plate: f64, t_h: f64, n_x: f64, n_y: f64, n_xy: f64) -> f64 {\n",
" let x1 = 4.0 * (t_plate - 0.075);\n",
" let x2 = 20.0 * (t_h - 0.1);\n",
" let x3 = -6000.0 * (1.0/n_xy + 0.003);\n",
" return 0.847 + 0.96 * x1 + 0.986 * x2 - 0.216 * x3 +\n",
" 0.077 * x1*x1 + 0.11 * x2*x2 + 0.007 * x3*x3 +\n",
" 0.378 * x1*x2 - 0.106 * x1*x3 - 0.11 * x2*x3;\n",
"}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we can estimate the failure probability using a naïve Monte Carlo approach. "
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"extern crate rand;\n",
"\n",
"use rand::distributions::Distribution;\n",
"use statrs::distribution::Normal;\n",
"\n",
"fn do_naive_mc(iterations: i32) -> f64 {\n",
" let mut rng = rand::thread_rng();\n",
" let mut failures = 0;\n",
"\n",
" for _ in 0..iterations {\n",
" let t_plate = Normal::new(0.07433, 0.005).unwrap().sample(&mut rng);\n",
" let t_h = Normal::new(0.1, 0.01).unwrap().sample(&mut rng);\n",
" let n_x = Normal::new(13.0, 60.0).unwrap().sample(&mut rng);\n",
" let n_y = Normal::new(4751.0, 48.0).unwrap().sample(&mut rng);\n",
" let n_xy = Normal::new(-648.0, 11.0).unwrap().sample(&mut rng);\n",
" if simulated_pvm(t_plate, t_h, n_x, n_y, n_xy) <= 0.0 ||\n",
" simulated_pis(t_plate, t_h, n_x, n_y, n_xy) <= 0.0 ||\n",
" simulated_phb(t_plate, t_h, n_x, n_y, n_xy) <= 0.0 {\n",
" failures += 1;\n",
" }\n",
" }\n",
" return failures as f64 / iterations as f64;\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Timing: true\n"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
},
{
"data": {
"text/html": [
"Took 269ms"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
},
{
"data": {
"text/plain": [
"0.0006119"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
},
{
"data": {
"text/html": [
"Took 2435ms"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
":timing\n",
"\n",
"do_naive_mc(10_000_000)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It's worth noting that on the author's machine, this is around 194000 times faster than the corresponding Python code (running in the standard CPython implementation). "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Latin Hypercube Sampling approach"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The LHS method consists of dividing the input space into a number of equiprobable regions, then taking random samples from each region (see our lecture slides for some background information). "
]
},
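{
"cell_type": "markdown",
"metadata": {},
"source": [
"To illustrate the stratification idea in isolation, here is a minimal sketch (not the implementation used below) of a one-dimensional stratified sample: divide $[0, 1)$ into $n$ equiprobable strata and draw one uniform point from each. To stay self-contained the sketch uses a basic linear congruential generator instead of the `rand` crate; a full Latin hypercube sample additionally shuffles each dimension's column independently before combining the columns."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"// One-dimensional illustration of the stratification idea behind LHS:\n",
"// split [0, 1) into n equiprobable strata and draw one point from each.\n",
"// A small linear congruential generator stands in for a real RNG here.\n",
"fn lcg_uniform(state: &mut u64) -> f64 {\n",
"    *state = state.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);\n",
"    (*state >> 11) as f64 / (1u64 << 53) as f64\n",
"}\n",
"\n",
"fn stratified_sample_1d(n: usize, seed: u64) -> Vec<f64> {\n",
"    let mut state = seed;\n",
"    // stratum i contributes one point in [i/n, (i+1)/n)\n",
"    (0..n).map(|i| (i as f64 + lcg_uniform(&mut state)) / n as f64).collect()\n",
"}"
]
},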
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"Took 450ms"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"use ndarray::prelude::*;\n",
"use statrs::distribution::Uniform;\n",
"use statrs::distribution::InverseCDF;\n",
"use rand::seq::SliceRandom; // for shuffle\n",
"\n",
"fn latin_hypercube_sample(dimensions: usize, samples: usize) -> Array2<f64> {\n",
" const BUCKETS: i32 = 10;\n",
" let mut rng = rand::thread_rng();\n",
" let mut a = Array2::zeros((samples, dimensions));\n",
"\n",
" for dim in 0..dimensions {\n",
"        let mut v: Vec<f64> = Vec::with_capacity(samples);\n",
" for row in 0..samples {\n",
" let bucket = (BUCKETS as f64 * (row as f64 / samples as f64)).floor();\n",
" v.push(Uniform::new(bucket, bucket+1.0).unwrap().sample(&mut rng) / BUCKETS as f64);\n",
" }\n",
" v.shuffle(&mut rng);\n",
" // copy into our ndarray\n",
" for row in 0..samples {\n",
" a[[row, dim]] = v[row];\n",
" }\n",
" }\n",
" return a;\n",
"}\n",
"\n",
"fn do_lhs(iterations: usize) -> f64 {\n",
" let mut failures = 0;\n",
" let a = latin_hypercube_sample(5, iterations);\n",
" let dist_t_plate = Normal::new(0.07433, 0.005).unwrap();\n",
" let dist_t_h = Normal::new(0.1, 0.01).unwrap();\n",
" let dist_n_x = Normal::new(13.0, 60.0).unwrap();\n",
" let dist_n_y = Normal::new(4751.0, 48.0).unwrap();\n",
" let dist_n_xy = Normal::new(-648.0, 11.0).unwrap();\n",
" \n",
" for i in 0..iterations {\n",
" let t_plate = dist_t_plate.inverse_cdf(a[[i, 0]]);\n",
" let t_h = dist_t_h.inverse_cdf(a[[i, 1]]);\n",
" let n_x = dist_n_x.inverse_cdf(a[[i, 2]]);\n",
" let n_y = dist_n_y.inverse_cdf(a[[i, 3]]);\n",
" let n_xy = dist_n_xy.inverse_cdf(a[[i, 4]]);\n",
" if simulated_pvm(t_plate, t_h, n_x, n_y, n_xy) <= 0.0 ||\n",
" simulated_pis(t_plate, t_h, n_x, n_y, n_xy) <= 0.0 ||\n",
" simulated_phb(t_plate, t_h, n_x, n_y, n_xy) <= 0.0 {\n",
" failures += 1;\n",
" }\n",
" }\n",
" return failures as f64 / iterations as f64;\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"0.0006303"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
},
{
"data": {
"text/html": [
"Took 8861ms"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"do_lhs(10_000_000)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is much slower than the \"naïve\" Monte Carlo approach implemented above, because of the extra memory initialization and the inverse CDF calculations. However, for the same number of runs you can expect the result to be of better quality than with the naïve Monte Carlo approach: if you execute the simulation many times, the variability of the estimates will be lower. "
]
},
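{
"cell_type": "markdown",
"metadata": {},
"source": [
"The claim about estimate quality can be quantified for the naïve approach: over $n$ independent runs the failure count is binomially distributed, so the standard error of the estimated probability $\\hat{p}$ is $\\sqrt{\\hat{p}(1-\\hat{p})/n}$. The sketch below (an illustration, not part of the original analysis) applies this formula; for $\\hat{p} \\approx 6 \\times 10^{-4}$ and $n = 10^7$ it gives a standard error of roughly $7.7 \\times 10^{-6}$, i.e. about 1.3% relative error."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"// Standard error of a naïve Monte Carlo estimate p_hat of a probability,\n",
"// based on n independent runs: the failure count is Binomial(n, p),\n",
"// so se = sqrt(p_hat * (1 - p_hat) / n).\n",
"fn standard_error(p_hat: f64, n: u64) -> f64 {\n",
"    (p_hat * (1.0 - p_hat) / n as f64).sqrt()\n",
"}"
]
},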
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Halton's low-discrepancy sequence approach"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A [low-discrepancy (or quasi-random) sequence](https://en.wikipedia.org/wiki/Low-discrepancy_sequence) is a deterministic mathematical sequence of numbers that has the property of low discrepancy: there are no clusters of points, and the sequence fills space roughly uniformly. The [Halton sequence](https://en.wikipedia.org/wiki/Halton_sequence) is a low-discrepancy sequence that has useful properties for pseudo-stochastic sampling methods (also called “quasi-Monte Carlo” methods). "
]
},
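{
"cell_type": "markdown",
"metadata": {},
"source": [
"As background, the Halton sequence is simple to implement from scratch: the $n$-th element in base $b$ is the radical inverse of $n$, obtained by mirroring the base-$b$ digits of $n$ around the radix point. The minimal sketch below is independent of the `halton` crate used in the next cell; in base 2 it yields $\\frac{1}{2}, \\frac{1}{4}, \\frac{3}{4}, \\frac{1}{8}, \\frac{5}{8}, \\ldots$"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"// n-th element of the Halton sequence in the given base, computed as the\n",
"// radical inverse of n: mirror the base-b digits of n around the radix point.\n",
"fn halton_element(mut n: u64, base: u64) -> f64 {\n",
"    let mut result = 0.0;\n",
"    let mut f = 1.0 / base as f64;\n",
"    while n > 0 {\n",
"        result += f * (n % base) as f64;\n",
"        n /= base;\n",
"        f /= base as f64;\n",
"    }\n",
"    result\n",
"}"
]
},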
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"Took 408ms"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"use halton;\n",
"\n",
"fn halton_low_discrepancy_sample(dimensions: usize, samples: usize) -> Array2<f64> {\n",
" // important that these be coprime bases, see https://observablehq.com/@jrus/halton\n",
" let mut coprimes = vec![2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31];\n",
" assert!(dimensions < coprimes.len());\n",
" let mut a = Array2::zeros((samples, dimensions));\n",
" let mut generator = Vec::new();\n",
" for _ in 0..dimensions {\n",
" generator.push(halton::Sequence::new(coprimes.pop().unwrap()).skip(20));\n",
" }\n",
" for row in 0..samples {\n",
" for dim in 0..dimensions {\n",
" a[[row, dim]] = generator[dim].next().unwrap();\n",
" }\n",
" }\n",
" return a;\n",
"}\n",
"\n",
"fn do_halton(iterations: usize) -> f64 {\n",
" let mut failures = 0;\n",
" let a = halton_low_discrepancy_sample(5, iterations);\n",
" let dist_t_plate = Normal::new(0.07433, 0.005).unwrap();\n",
" let dist_t_h = Normal::new(0.1, 0.01).unwrap();\n",
" let dist_n_x = Normal::new(13.0, 60.0).unwrap();\n",
" let dist_n_y = Normal::new(4751.0, 48.0).unwrap();\n",
" let dist_n_xy = Normal::new(-648.0, 11.0).unwrap();\n",
" \n",
" for i in 0..iterations {\n",
" let t_plate = dist_t_plate.inverse_cdf(a[[i, 0]]);\n",
" let t_h = dist_t_h.inverse_cdf(a[[i, 1]]);\n",
" let n_x = dist_n_x.inverse_cdf(a[[i, 2]]);\n",
" let n_y = dist_n_y.inverse_cdf(a[[i, 3]]);\n",
" let n_xy = dist_n_xy.inverse_cdf(a[[i, 4]]);\n",
" if simulated_pvm(t_plate, t_h, n_x, n_y, n_xy) <= 0.0 ||\n",
" simulated_pis(t_plate, t_h, n_x, n_y, n_xy) <= 0.0 ||\n",
" simulated_phb(t_plate, t_h, n_x, n_y, n_xy) <= 0.0 {\n",
" failures += 1;\n",
" }\n",
" }\n",
" return failures as f64 / iterations as f64;\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"0.0006221"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
},
{
"data": {
"text/html": [
"Took 4835ms"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"do_halton(10_000_000)"
]
}
],
"metadata": {
"anaconda-cloud": {},
"kernelspec": {
"display_name": "Rust",
"language": "rust",
"name": "rust"
},
"language_info": {
"codemirror_mode": "rust",
"file_extension": ".rs",
"mimetype": "text/rust",
"name": "Rust",
"pygment_lexer": "rust",
"version": ""
}
},
"nbformat": 4,
"nbformat_minor": 1
}