
This notebook is an element of the risk-engineering.org courseware. It can be distributed under the terms of the Creative Commons Attribution-ShareAlike licence.

Author: Eric Marsden eric.marsden@risk-engineering.org.


In this notebook, we illustrate NumPy features for working with correlated data. Check the associated lecture slides for background material and to download this content as a Jupyter/Python notebook.

Linear correlation

In [18]:
import numpy
import matplotlib.pyplot as plt
import scipy.stats
plt.style.use("bmh")
%config InlineBackend.figure_formats=["svg"]
In [19]:
X = numpy.random.normal(10, 1, 100)
X
Out[19]:
array([10.57635583,  9.86060064,  9.91975802, 11.38344076,  8.80308369,
        9.33568257,  9.4720971 , 11.14796989, 10.35396711,  8.7516979 ,
        9.56651293, 10.75566776,  8.34377207,  9.66489224,  8.98375427,
       11.12235917,  9.13982254, 11.02610938,  9.66631953, 10.35107813,
       10.78754526,  7.57843667, 10.27879021, 10.69068621,  9.16261403,
       11.15348961, 11.09954913,  9.30472372, 11.44155496, 11.55260205,
       10.70402657, 10.27066249, 10.3115234 , 10.03031101,  8.88494723,
        9.38592254,  8.77164954, 10.82886863, 11.37530892,  9.1790621 ,
       11.19819953, 11.20382339, 10.20277283,  9.77419393, 11.24854948,
       10.9498346 , 11.22160778,  8.6485527 , 10.16609828,  9.76959401,
        8.35792037,  9.6297995 , 10.02216447,  9.97970379,  9.94291787,
        9.79567875, 10.17703533, 11.94412466, 11.29280675, 10.2914609 ,
        9.85640353, 11.13548662, 10.39133325, 11.32445098,  9.61064548,
        7.88811534,  9.36270893,  8.79301229,  8.91786675,  8.07671701,
       12.04687547, 11.21459564,  9.08714086,  9.67493959, 10.24937442,
       10.85328485, 10.248183  , 10.42801348,  9.53728973,  9.88565909,
       10.13986078, 10.20269438, 10.01862539, 10.20989259,  9.33351597,
        9.30009102, 10.15755066, 10.64156186,  9.76782764,  9.52452952,
       10.43115129, 10.04918627, 10.06519455, 10.57577809, 10.31803869,
       11.07975128, 11.33344058, 11.14678658, 10.31499146,  9.22206854])
In [20]:
Y = -X + numpy.random.normal(0, 1, 100)
In [21]:
plt.scatter(X, Y);
[Figure: scatterplot of Y against X]

Looking at the scatterplot above, we can see that the random variables $X$ and $Y$ are correlated. There are various statistical measures that allow us to quantify the degree of linear correlation. The most commonly used is Pearson’s product-moment correlation coefficient. It is available in scipy.stats.

In [22]:
scipy.stats.pearsonr(X, Y)
Out[22]:
PearsonRResult(statistic=-0.672296408775144, pvalue=1.8770190117284626e-14)

The first return value is the linear correlation coefficient, a value between -1 and 1 which measures the strength of the linear correlation. A value greater than 0.9 indicates a strong positive linear correlation (when $X$ increases, $Y$ increases), and a value lower than -0.9 indicates a strong negative linear correlation (when $X$ increases, $Y$ decreases).
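For reference, Pearson's coefficient for a sample of $n$ pairs $(x_i, y_i)$ is the sample covariance of $X$ and $Y$ normalized by the product of their standard deviations:

$$r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\;\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}}$$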

(The second return value is a p-value, which measures the confidence that can be placed in the estimated correlation coefficient (smaller means more confidence). It is the probability that an uncorrelated system would produce datasets with a Pearson correlation at least as extreme as the one computed here. Since the p-value is extremely small, we can have high confidence in the estimated value of the correlation coefficient.)
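Since this notebook illustrates NumPy features, note that the same coefficient can also be computed with numpy.corrcoef, which returns the full matrix of pairwise correlation coefficients (a minimal sketch, reusing the X and Y defined above):

# corrcoef returns a 2x2 matrix; the off-diagonal entry [0, 1]
# is Pearson's r for the pair (X, Y)
numpy.corrcoef(X, Y)[0, 1]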

Exercises

Exercise: show that when the error in $Y$ decreases, the correlation coefficient increases.

Exercise: produce data and a plot with a negative correlation coefficient.
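As a starting point for the first of these exercises, here is a minimal sketch (the noise levels chosen are arbitrary): regenerating $Y$ with a progressively smaller error term shows $|r|$ approaching 1.

for noise in [2.0, 1.0, 0.5, 0.1]:
    # regenerate Y with a smaller error term each time
    Y_noisy = -X + numpy.random.normal(0, noise, 100)
    r, p = scipy.stats.pearsonr(X, Y_noisy)
    print(f"noise std = {noise}: r = {r:.3f}")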

Anscombe’s quartet

Let’s examine four datasets produced by the statistician Francis Anscombe to illustrate the importance of exploring your data qualitatively (for example by plotting the data), rather than relying only on summary statistics such as the linear correlation coefficient.

In [23]:
x =  numpy.array([10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5])
y1 = numpy.array([8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68])
y2 = numpy.array([9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74])
y3 = numpy.array([7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73])
x4 = numpy.array([8,8,8,8,8,8,8,19,8,8,8])
y4 = numpy.array([6.58,5.76,7.71,8.84,8.47,7.04,5.25,12.50,5.56,7.91,6.89])
In [24]:
plt.scatter(x, y1)
plt.title("Anscombe quartet n°1")
plt.margins(0.1)
[Figure: scatterplot, "Anscombe quartet n°1"]
In [25]:
scipy.stats.pearsonr(x, y1)
Out[25]:
PearsonRResult(statistic=0.81642051634484, pvalue=0.002169628873078783)
In [26]:
plt.scatter(x, y2)
plt.title("Anscombe quartet n°2")
plt.margins(0.1)
[Figure: scatterplot, "Anscombe quartet n°2"]
In [27]:
scipy.stats.pearsonr(x, y2)
Out[27]:
PearsonRResult(statistic=0.8162365060002427, pvalue=0.0021788162369108114)
In [28]:
plt.scatter(x, y3)
plt.title("Anscombe quartet n°3")
plt.margins(0.1)
[Figure: scatterplot, "Anscombe quartet n°3"]
In [29]:
scipy.stats.pearsonr(x, y3)
Out[29]:
PearsonRResult(statistic=0.8162867394895982, pvalue=0.0021763052792280213)
In [30]:
plt.scatter(x4, y4)
plt.title("Anscombe quartet n°4")
plt.margins(0.1)
[Figure: scatterplot, "Anscombe quartet n°4"]
In [31]:
scipy.stats.pearsonr(x4, y4)
Out[31]:
PearsonRResult(statistic=0.8165214368885029, pvalue=0.002164602347197214)

Notice that the linear correlation coefficient (Pearson's $r$) of the four datasets is identical to three decimal places, though the relationship between $X$ and $Y$ is clearly very different in each case! This illustrates the risk of depending only on quantitative descriptors to understand your datasets: you should also use different types of plots to obtain a better overview of the data.
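To make the comparison concrete, we can compute the same summary statistics for each dataset side by side. The sketch below uses scipy.stats.linregress, which returns the correlation coefficient along with the fitted least-squares line; for Anscombe's quartet, all four fits agree to two decimal places ($y \approx 3.00 + 0.50x$).

# All four datasets share (almost exactly) the same mean and variance
# of y, the same correlation coefficient, and the same regression line
for i, (xs, ys) in enumerate([(x, y1), (x, y2), (x, y3), (x4, y4)], start=1):
    fit = scipy.stats.linregress(xs, ys)
    print(f"n°{i}: mean(y) = {ys.mean():.2f}, var(y) = {ys.var(ddof=1):.2f}, "
          f"r = {fit.rvalue:.3f}, line: y = {fit.intercept:.2f} + {fit.slope:.2f}x")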

The Datasaurus

The Datasaurus provides another illustration of the importance of plotting your data to make sure it doesn't contain any surprises, rather than relying only on summary statistics.

In [32]:
import pandas

ds = pandas.read_csv("https://risk-engineering.org/static/data/datasaurus.csv", header=None)
ds.describe()
Out[32]:
                0           1
count  142.000000  142.000000
mean    54.263273   47.832253
std     16.765142   26.935403
min     22.307700    2.948700
25%     44.102600   25.288450
50%     53.333300   46.025600
75%     64.743600   68.525675
max     98.205100   99.487200
In [33]:
scipy.stats.pearsonr(ds[0], ds[1])
Out[33]:
PearsonRResult(statistic=-0.06447185270095164, pvalue=0.44589659802470283)

These summary statistics don't look too nasty, but check out the data once it's plotted!

In [34]:
plt.scatter(ds[0], ds[1]);
[Figure: scatterplot of the Datasaurus dataset]

The article titled Same Stats, Different Graphs: Generating Datasets with Varied Appearance and Identical Statistics through Simulated Annealing takes this concept much further.
