Simple descriptive statistics
This notebook is an element of the free risk-engineering.org courseware. It can be distributed under the terms of the Creative Commons Attribution-ShareAlike licence.
Author: Eric Marsden eric.marsden@risk-engineering.org.
This notebook contains an introduction to the use of Python and the NumPy library for simple descriptive statistics. See the associated course materials for background information and to download this content as a Jupyter/Python notebook.
import numpy
import pandas
import scipy.stats
import matplotlib.pyplot as plt
plt.style.use("bmh")
%config InlineBackend.figure_formats=["svg"]
import warnings
warnings.simplefilter(action="ignore", category=FutureWarning)
We will start by examining some data on the fatigue life of strips of aluminium sheeting. The data is expressed in thousands of cycles until rupture. It comes from the article Birnbaum, Z. W. and Saunders, S. C. (1958), A statistical model for life-length of materials, Journal of the American Statistical Association, 53(281).
cycles = numpy.array([370, 1016, 1235, 1419, 1567, 1820, 706, 1018, 1238, 1420,
1578, 1868, 716, 1020, 1252, 1420, 1594, 1881, 746,
1055, 1258, 1450, 1602, 1890, 785, 1085, 1262, 1452,
1604, 1893, 797, 1102, 1269, 1475, 1608, 1895, 844,
1102, 1270, 1478, 1630, 1910, 855, 1108, 1290, 1481,
1642, 1923, 858, 1115, 1293, 1485, 1674, 1940, 886,
1120, 1300, 1502, 1730, 1945, 886, 1134, 1310, 1505,
1750, 2023, 930, 1140, 1313, 1513, 1750, 2100, 960,
1199, 1315, 1522, 1763, 2130, 988, 1200, 1330, 1522,
1768, 2215, 990, 1200, 1355, 1530, 1781, 2268, 1000,
1203, 1390, 1540, 1782, 2440, 1010, 1222, 1416, 1560,
1792])
# arithmetic mean of the observations
cycles.mean()
# median (the middle value of the sorted observations)
numpy.median(cycles)
# standard deviation
numpy.std(cycles)
# variance (the square of the standard deviation)
numpy.var(cycles)
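Note that numpy.std and numpy.var compute the population (uncorrected) statistics by default. If we want the sample versions with Bessel's correction, we can pass ddof=1 (a quick sketch, assuming the sample statistics are what we need):
# sample standard deviation and variance, with Bessel's correction (ddof=1)
numpy.std(cycles, ddof=1)
numpy.var(cycles, ddof=1)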
Let’s generate a few different plots to examine the distribution of the data.
# start with a histogram
plt.hist(cycles, alpha=0.5)
plt.xlabel("Cycles until failure");
# now a box and whisker plot. Note that there is an outlier point (the circle)
plt.boxplot(cycles)
plt.ylabel("Cycles until failure");
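As a numeric check on the outlier, we can list the observations that fall outside the whiskers, which by default extend to 1.5 × IQR beyond the quartiles (the same rule matplotlib uses to flag points as outliers):
# observations beyond 1.5 × IQR from the quartiles (matplotlib's default whisker rule)
q1, q3 = numpy.percentile(cycles, [25, 75])
iqr = q3 - q1
cycles[(cycles < q1 - 1.5 * iqr) | (cycles > q3 + 1.5 * iqr)]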
import seaborn as sns
# a violin plot shows an estimate of the distribution of the data
sns.violinplot(x=cycles)
plt.xlabel("Cycles until failure");
# a barplot with confidence interval "error bars"
plt.figure(figsize=(8,2))
sns.barplot(x=cycles, errorbar=("ci", 95), capsize=0.1)
plt.xlabel("Cycles until failure (95% CI)");
Exploring insecticide data
Let’s explore data collected by Beall in the 1940s concerning the effect of different insecticides on insects. The dataset gives the count of insects in agricultural experimental units treated with different insecticides.
The data comes from the article Beall, G. (1942), The transformation of data from entomological field experiments, Biometrika, 29, 243–262. It's available via the statsmodels package.
import statsmodels.api as sm
insect = sm.datasets.get_rdataset("InsectSprays").data
insect.head()
# total number of observations
len(insect)
# how many observations for each spray? groupby is a function provided by the pandas library.
insect.groupby("spray").count()
We want to analyze the relative effectiveness of the different insecticide sprays tested. A first level of analysis is to compare their means.
# relative effectiveness of each spray
insect.groupby("spray").mean()
However, relying only on the mean can give a misleading impression of relative effectiveness; we also want to know the variability for each spray. A violin plot is a good way of showing the distribution of effectiveness for each insecticide on a single plot.
sns.violinplot(data=insect, x="spray", y="count")
plt.ylabel("Remaining insects");
Assuming that we really want to kill all insects (indiscriminately), spray C seems to be the most effective of the options tested.
Exploring material strength data
We will now examine a third dataset: material strength measurements collected by Vangel at the US NIST.
vangel = pandas.read_csv("https://www.itl.nist.gov/div898/software/dataplot/data/VANGEL5.DAT", header=None, skiprows=25)
vangel = vangel.squeeze("columns")
vangel.head()
plt.hist(vangel, density=True, alpha=0.5)
plt.title("Vangel material data (n={})".format(len(vangel)))
plt.xlabel("Specimen strength");
Let's try to fit a Weibull distribution to this data.
plt.hist(vangel, density=True, alpha=0.5)
shape, loc, scale = scipy.stats.weibull_min.fit(vangel, floc=0)
support = numpy.linspace(vangel.min(), vangel.max(), 100)
plt.plot(support, scipy.stats.weibull_min(shape, loc, scale).pdf(support), lw=3)
plt.title("Weibull fit on Vangel data")
plt.xlabel("Specimen strength");
Sometimes it’s clearer to plot the cumulative failure-intensity function (the empirical CDF of our dataset).
import statsmodels.distributions
ecdf = statsmodels.distributions.ECDF(vangel)
plt.plot(support, ecdf(support), label="Empirical CDF")
plt.plot(support, scipy.stats.weibull_min(shape,loc,scale).cdf(support), lw=3, label="Weibull fit")
plt.title("Vangel cumulative failure intensity")
plt.xlabel("Specimen strength")
plt.legend();
A good way of assessing goodness of fit, in particular in the distribution tails, is to generate a probability plot.
scipy.stats.probplot(vangel,
                     dist=scipy.stats.weibull_min(shape, loc, scale),
                     plot=plt.figure().add_subplot(111))
plt.title("Weibull prob-plot of Vangel data");
This data follows a Weibull distribution quite well.
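To complement the visual assessment with a numeric check, we can run a Kolmogorov–Smirnov test of the data against the fitted distribution (a sketch; note that the p-value is optimistic here, because the Weibull parameters were estimated from the same data):
# Kolmogorov–Smirnov test of the data against the fitted Weibull distribution
scipy.stats.kstest(vangel, scipy.stats.weibull_min(shape, loc, scale).cdf)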
If we wish to estimate a particular quantile measure for this data, we can do that on the fitted distribution.
# the first percentile
scipy.stats.weibull_min(shape,loc,scale).ppf(0.01)
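For comparison, we can also look at the empirical first percentile of the raw observations, which does not rely on the Weibull assumption:
# empirical first percentile of the observations, for comparison with the fitted quantile
numpy.percentile(vangel, 1)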