## BASIC BOOTSTRAP METHODS

Rand R. Wilcox, in Applying Contemporary Statistical Techniques, 2003

### 7.3.2 Testing for Zero Correlation

The modified percentile bootstrap method just described performs relatively well when the goal is to test the hypothesis of a zero correlation (Wilcox & Muska, 2001). You proceed exactly as already described in this section, except that for every bootstrap sample you compute Pearson's correlation r rather than the least squares estimate of the slope. So now we have B bootstrap values for r, which, when written in ascending order, we label r(1)* ≤ … ≤ r(B)*. Then a .95 confidence interval for ρ is

$\left(r{\left(a\right)}^{*},r{\left(c\right)}^{*}\right),$

where again for n < 40, a = 7 and c = 593; for 40 ≤ n < 80, a = 8 and c = 592; for 80 ≤ n < 180, a = 11 and c = 588; for 180 ≤ n < 250, a = 14 and c = 585; while for n ≥ 250, a = 15 and c = 584. As usual, if this interval does not contain zero, reject H0: ρ = 0.

We saw in Chapter 6 that heteroscedasticity causes Student's T test of H0: ρ = 0 to have undesirable properties. All indications are that the modified percentile bootstrap eliminates these problems. When ρ ≠ 0, the actual probability coverage remains fairly close to the .95 level provided ρ is not too large. But if, for example, ρ = .8, the actual probability coverage of the modified percentile bootstrap method can be unsatisfactory in some situations (Wilcox & Muska, 2001). There is no known method for correcting this problem.

URL:

https://www.sciencedirect.com/science/article/pii/B9780127515410500286

## Correlation and Tests of Independence

Rand Wilcox, in Introduction to Robust Estimation and Hypothesis Testing (Third Edition), 2012

### 9.3.14 R Functions corb, pcorb, and pcorhc4

The R function

`corb(x, y, corfun = pbcor, nboot = 599, ...)`

tests the hypothesis of a zero correlation using the heteroscedastic bootstrap method just described. By default, it uses the percentage bend correlation, but any correlation can be specified by the argument corfun. For example, the command `corb(x, y, corfun = wincor, tr = 0.25)` will use a 25% Winsorized correlation.

When working with Pearson's correlation, use the function

`pcorb(x, y)`,

which applies the modified percentile bootstrap method described in the previous section. The R function

`pcorhc4(x, y, alpha = 0.05)`

applies the HC4 method.

URL:

https://www.sciencedirect.com/science/article/pii/B9780123869838000093

## The Renormalization Group Operations

Jurgen Honig, Józef Spałek, in A Primer to the Theory of Critical Phenomena, 2018

### 9.1 Real Space Renormalization

At the outset, we need to consider some generalities. In executing the blocking methodology, two trivial cases are encountered: if we maintain the temperature of a system above its critical value ${T}_{c}$, then successive rescalings, as already explained, shrink lattice distances and correlation lengths ξ by a factor $b>1$, so that ${\xi }^{\prime }=\frac{\xi }{b}$; after many such operations, this eventually tends to a zero correlation length. This represents the high-T state. Conversely, if we start a system subject to some degree of magnetic order below ${T}_{c}$, then lowering the temperature increases the degree of order at the start of the blocking process. Concurrently, the correlation variables involve ever larger distances until they cover the entire system, indicative of complete order that is unchanged during rescaling. We have reached the low-T attractive critical point.
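The trivial high-temperature flow can be stated in one line: each blocking step divides ξ by b, so k steps leave ${\xi }/{b}^{k}$, which tends to zero. A minimal numeric sketch (the function name is ours, purely illustrative):

```python
def rescale_correlation_length(xi, b, steps):
    """Iterate the block-spin rescaling xi' = xi / b (with b > 1)."""
    for _ in range(steps):
        xi /= b
    return xi
```

After 20 steps with b = 2, an initial ξ = 100 has shrunk below 10⁻⁴, the zero-correlation-length fixed point of the high-T phase.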

Clearly, the intermediate case $T={T}_{c}$ is of particular interest; it divides the Hamiltonians for which the various coupling constants ${K}_{l}$ (for $T\ne {T}_{c}$) of Chapter 8 move the system toward the upper trivial point, from Hamiltonians for which the coupling constants lead the system toward the lower attractive point. This dichotomy is reflected in the existence of a dividing (hyper)plane, called a critical surface, spanned by coupling constants ${K}_{l}$ as described later. A point on that surface represents a Hamiltonian at a critical temperature, with the other parameters at values corresponding to the location of that point.

URL:

https://www.sciencedirect.com/science/article/pii/B9780128046852000097

## Regression and Correlation

R.H. Riffenburgh, in Statistics in Medicine (Third Edition), 2012

### Assumptions Underlying Correlation

Let us list assumptions about continuous-variable, or Pearson, correlation and compare them with the five regression assumptions from Section 21.2.

1.

Correlation and regression require the same assumption: the errors in data values are independent one from another.

2.

Correlation always requires the assumption of a straight-line relationship. A large correlation coefficient implies that there is a large linear component of relationship, but not that other components do not exist. In contrast, a zero correlation coefficient only implies that there is not a linear component; there may be curved relationships, as was illustrated in Figure 21.3.

3.

The assumption of exact readings on one axis is not required of correlation; both x and y may be measured with random variability, as was illustrated in Figure 21.4.

4.

and 5. Assumptions 4 and 5 take on a different form because x and y vary jointly, so what is assumed for y relative to x is also assumed for x relative to y. x and y are assumed to follow a bivariate normal distribution, that is, a sort of hill in three dimensions, where x and y are the width and length and the height is the probability (or relative frequency) of any joint value of x and y. The peak of the hill lies over the point specified by the two means, and the height of the hill diminishes in a normal shape in any direction radiating out from that point. If the bivariate normal assumption is badly violated, it is possible to calculate a correlation coefficient using rank methods.

URL:

https://www.sciencedirect.com/science/article/pii/B9780123848642000214

## Statistical Methods for Physical Science

John L. Stanford, Jerald R. Ziemke, in Methods in Experimental Physics, 1994

### 16.2.4 “Local” Critical Correlation Estimation

The first hypothesis-testing procedure concerns correlation between two time series, one at a given map grid point and the other a reference time series (as in Fig. 1). The null hypothesis (that correlations are due to chance data variations) is rejected if $|\stackrel{^}{\mathrm{\rho }}|>{\mathrm{\rho }}_{c}$, the local critical correlation value. For the examples in this chapter, we choose the “local test” of temporal correlation between two time series to be made at the 5% level of significance. (The local test level is the choice of the investigator.)

There are at least two methods for finding a suitable critical correlation value ${\mathrm{\rho }}_{c}$ for the local test:

1.

Computational method. One method is to use a Monte Carlo experiment, plotting percent area vs. correlation after combining a large number (hundreds or thousands) of randomly generated correlation maps (each map derived from a simulated series at the reference point). This single plot of percent area vs. correlation will generally have a symmetric bell shape about zero correlation, from which a value for ${\mathrm{\rho }}_{c}$ can easily be found, i.e., the value of $|\stackrel{^}{\mathrm{\rho }}|$ exceeded in only 5% of test cases.

2.

Bivariate normal model method. If one assumes that data time series taken from two fixed sites can be adequately modeled with a bivariate normal distribution, then it can be shown that the random variable $T=\stackrel{^}{\mathrm{\rho }}\sqrt{n-2}/\sqrt{1-{\stackrel{^}{\mathrm{\rho }}}^{2}}$ has a t distribution with $n-2$ (temporal) DOF under the null hypothesis that $\mathrm{\rho }=0$. The method used in this chapter for obtaining approximate temporal DOF assumes that the response of the filter used in data preprocessing (here a bandpass filter), plotted as response vs. frequency, is positive definite and normalized such that the maximum filter value is 1. Because of the bandpass filtering, the temporal DOF will not be equal to $n-2$. Temporal DOF are estimated by first multiplying $n-2$ by the area of the filter and then dividing by $n/2$; $n/2$ is the area of a full-spectrum response (no temporal filtering). This simple approach yields critical correlation values that are often quite close to those derived from method (1). Since the t probability density function is symmetric about $t=0$, a 5% local test becomes 2.5% on each tail, requiring the one-sided critical value ${T}_{c}$ such that the probability $T>{T}_{c}$ is 2.5% (that is, the t distribution function has value 0.975 for $T={T}_{c}$).

A main strength of method (2) is that ${\mathrm{\rho }}_{\mathrm{c}}$ can be calculated easily, particularly for small significance levels, for example, a 0.01% local level. At such a small significance level, method (1) would require much more computation, viz., tens of thousands of randomly generated maps. Method (2) is used exclusively in this chapter for determining ${\mathrm{\rho }}_{\mathrm{c}}$ because it is straightforward and requires minimal effort.

In later sections we will use global fields (latitude vs. longitude) of temperature and ozone data in separate correlation studies. The temperature (ozone) data sets use a 40–50 day (27-day) period bandpass filter with normalized area calculated to be 22.5 (22), which, according to method (2), results in 45 (44) temporal DOF. For either 44 or 45 DOF, the 5% test value for ${T}_{c}$ is found from tables to be approximately 2.02, and from the definition of the random variable T, the corresponding 5% critical correlation ${\mathrm{\rho }}_{c}$ is ${T}_{c}/\sqrt{{T}_{c}^{2}+\mathrm{DOF}}=0.29$ for both ozone and temperature analyses. The same critical correlation value for ozone and temperature studies is purely coincidental; two different filter responses will generally have two different computed values of ${\mathrm{\rho }}_{c}$ for the same chosen local significance level.
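The closing arithmetic is easy to check: inverting $T=\mathrm{\rho }\sqrt{\mathrm{DOF}}/\sqrt{1-{\mathrm{\rho }}^{2}}$ at the critical value gives ${\mathrm{\rho }}_{c}={T}_{c}/\sqrt{{T}_{c}^{2}+\mathrm{DOF}}$. A short Python check (the function name is ours; T_c = 2.02 is the table value quoted above):

```python
import math

def critical_correlation(t_c, dof):
    """rho_c = T_c / sqrt(T_c**2 + DOF), from T = rho*sqrt(DOF)/sqrt(1-rho^2)."""
    return t_c / math.sqrt(t_c ** 2 + dof)
```

For T_c = 2.02 and either 44 or 45 DOF this returns 0.29 to two decimals, matching the value used for both the ozone and temperature analyses.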

URL:

https://www.sciencedirect.com/science/article/pii/S0076695X08602665

## More Regression Methods

Rand Wilcox, in Introduction to Robust Estimation and Hypothesis Testing (Fourth Edition), 2017

### 11.1.12 Confidence Bands for the Typical Value of y Given x

This section deals with computing a confidence band, sometimes called a prediction band, for $m\left(x\right)={\beta }_{0}+{\beta }_{1}x$, the typical value of y given x, in a manner that allows heteroscedasticity. More precisely, if the parameters ${\beta }_{0}$ and ${\beta }_{1}$ are estimated based on the random sample $\left({x}_{1},{y}_{1}\right),\dots ,\left({x}_{n},{y}_{n}\right)$, the goal is to compute a confidence interval for $m\left({x}_{i}\right)$ ($i=1,\dots ,n$) such that the simultaneous probability coverage is approximately $1-\alpha$. And there is the related goal of testing the n hypotheses

(11.7)${H}_{0}:m\left({x}_{i}\right)={\theta }_{0},$

where ${\theta }_{0}$ is some specified constant. (For a review of methods based on the least squares estimator that assume normality and homoscedasticity, see Liu, Lin, & Piegorsch, 2008.)

The basic strategy mimics the approach used by the two-sample version of Students t test. Begin by assuming normality and homoscedasticity, determine an appropriate critical value based on the sample size and the regression estimator that is used in conjunction with an obvious test statistic, and then study the impact of non-normality and heteroscedasticity via simulations.

First consider a single value for the covariate, x. Let ${\tau }^{2}$ denote the squared standard error of $\stackrel{ˆ}{y}={b}_{0}+{b}_{1}x$, an estimate of $m\left(x\right)$, where ${b}_{0}$ and ${b}_{1}$ are estimates of ${\beta }_{0}$ and ${\beta }_{1}$, respectively, based on some regression estimator to be determined. A basic percentile bootstrap method is used to estimate ${\tau }^{2}$ (e.g., Efron & Tibshirani, 1993). More precisely, generate a bootstrap sample by randomly sampling with replacement n pairs of points from $\left({x}_{1},{y}_{1}\right),\dots ,\left({x}_{n},{y}_{n}\right)$ yielding $\left({x}_{1}^{⁎},{y}_{1}^{⁎}\right),\dots ,\left({x}_{n}^{⁎},{y}_{n}^{⁎}\right)$. Based on this bootstrap sample, estimate the intercept and slope and label the results ${b}_{0}^{⁎}$ and ${b}_{1}^{⁎}$, which yields ${\stackrel{ˆ}{y}}^{⁎}={b}_{0}^{⁎}+{b}_{1}^{⁎}x$. Repeat this B times yielding ${\stackrel{ˆ}{y}}_{1}^{⁎},\dots ,{\stackrel{ˆ}{y}}_{B}^{⁎}$, in which case an estimate of ${\tau }^{2}$ is

${\stackrel{ˆ}{\tau }}^{2}=\frac{1}{B-1}\sum {\left({\stackrel{ˆ}{y}}_{b}^{⁎}-{\overline{y}}^{⁎}\right)}^{2},$

where ${\overline{y}}^{⁎}=\sum {\stackrel{ˆ}{y}}_{b}^{⁎}/B$. (In terms of controlling the probability of a Type I error, $B=100$ appears to suffice.) Then the hypothesis given by (11.7) can be tested with

$W=\frac{\stackrel{ˆ}{y}-{\theta }_{0}}{\stackrel{ˆ}{\tau }}$

once an appropriate critical value has been determined.
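As a concrete illustration, here is a minimal Python sketch of the bootstrap estimate of τ and the statistic W, using OLS as the regression estimator (the text leaves the estimator open; function names and the seed are ours):

```python
import math
import random

def ols(xs, ys):
    """Least squares intercept and slope (any estimator could be used)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    b1 = sxy / sxx
    return my - b1 * mx, b1

def w_statistic(x, y, x0, theta0=0.0, B=100, seed=1):
    """Bootstrap estimate of tau (the standard error of y-hat at x0)
    and the test statistic W = (y-hat - theta0) / tau-hat."""
    n = len(x)
    rng = random.Random(seed)
    yhats = []
    for _ in range(B):
        # resample n (x, y) pairs with replacement, refit, predict at x0
        idx = [rng.randrange(n) for _ in range(n)]
        b0, b1 = ols([x[i] for i in idx], [y[i] for i in idx])
        yhats.append(b0 + b1 * x0)
    ybar = sum(yhats) / B
    tau = math.sqrt(sum((v - ybar) ** 2 for v in yhats) / (B - 1))
    b0, b1 = ols(x, y)
    return ((b0 + b1 * x0) - theta0) / tau
```

As the text notes, B = 100 appears to suffice for controlling the Type I error probability.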

Momentarily assume that W has a standard normal distribution, in which case a p-value can be determined for each ${x}_{i}$, $i=1,\dots ,n$. Denote the resulting p-values by ${p}_{1},\dots ,{p}_{n}$ and let ${p}_{m}=\mathrm{min}\left({p}_{1},\dots ,{p}_{n}\right)$. As is evident, if ${p}_{\alpha }$, the α quantile of ${p}_{m}$, can be determined, the probability of one or more Type I errors can be controlled simply by rejecting the ith hypothesis if and only if ${p}_{i}\le {p}_{\alpha }$. In addition, confidence intervals for each $m\left({x}_{i}\right)$ can be computed that have simultaneous probability coverage $1-\alpha$.

The distribution of ${p}_{m}$ is approximated in the following manner. Momentarily assume that both the error term ϵ and x have a standard normal distribution and consider the case ${\beta }_{0}={\beta }_{1}=0$. Then a simulation can be performed yielding an estimate of the α quantile of the distribution of ${p}_{m}$. In effect, generate n pairs of observations from a bivariate normal distribution having correlation zero yielding $\left({x}_{1},{y}_{1}\right),\dots ,\left({x}_{n},{y}_{n}\right)$. Compute ${p}_{m}$ and repeat this process A times yielding ${p}_{m1},\dots ,{p}_{mA}$. Put these A values in ascending order yielding ${p}_{m\left(1\right)}\le \dots \le {p}_{m\left(A\right)}$ and let $k=\alpha A$ rounded to the nearest integer. Then the α quantile of ${p}_{m}$, ${p}_{\alpha }$, is estimated with ${p}_{m\left(k\right)}$. Moreover, the simultaneous probability coverage among the n confidence intervals

(11.8)${\stackrel{ˆ}{y}}_{i}±z{\stackrel{ˆ}{\tau }}_{i}\phantom{\rule{0.25em}{0ex}}\left(i=1,\dots ,n\right)$

is approximately $1-\alpha$, where z is the $1-{p}_{\alpha }/2$ quantile of a standard normal distribution, ${\stackrel{ˆ}{y}}_{i}={b}_{0}+{b}_{1}{x}_{i}$, and ${\stackrel{ˆ}{\tau }}_{i}$ is the corresponding estimate of the standard error. Here are some estimates of ${p}_{\alpha }$ when $1-\alpha =0.95$ and when using the Theil–Sen (TS) estimator, the modification of the Theil–Sen estimator based on the Harrell–Davis estimator (TSHD), OLS, and the quantile regression estimator (QREG):

| n | TS | OLS | TSHD | QREG |
| --- | --- | --- | --- | --- |
| 10 | 0.011 | 0.001 | 0.009 | 0.011 |
| 20 | 0.010 | 0.004 | 0.008 | 0.009 |
| 50 | 0.010 | 0.008 | 0.009 | 0.009 |
| 100 | 0.010 | 0.008 | 0.009 | 0.008 |
| 400 | 0.011 | 0.011 | 0.012 | 0.009 |
| 600 | 0.010 | 0.011 | 0.010 | 0.010 |

As can be seen, the value depends on the sample size when using least squares regression, as expected. In contrast, when using the robust regression estimators, the estimated values suggest that there is little or no variation in the value of ${p}_{\alpha }$ as a function of the sample size, at least when $10\le n\le 600$.
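The simulation that produces such estimates of ${p}_{\alpha }$ can be sketched compactly. The fragment below is illustrative only (OLS only, our function names, and far smaller A and B than would be used in practice; the normal approximation for the p-values follows the description above):

```python
import math
import random

def p_value_normal(w):
    """Two-sided p-value for W under a standard normal approximation."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(w) / math.sqrt(2.0))))

def estimate_p_alpha(n, alpha=0.05, A=100, B=50, seed=3):
    """Estimate p_alpha, the alpha quantile of p_m = min_i p_i, by
    simulating (x, y) from independent standard normals (beta0 = beta1 = 0)."""
    rng = random.Random(seed)

    def ols(xs, ys):
        m = len(xs)
        mx = sum(xs) / m
        my = sum(ys) / m
        sxx = sum((a - mx) ** 2 for a in xs)
        sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        b1 = sxy / sxx
        return my - b1 * mx, b1

    pms = []
    for _ in range(A):
        x = [rng.gauss(0, 1) for _ in range(n)]
        y = [rng.gauss(0, 1) for _ in range(n)]
        b0, b1 = ols(x, y)
        # one set of B bootstrap fits serves every x_i
        boots = []
        for _ in range(B):
            idx = [rng.randrange(n) for _ in range(n)]
            boots.append(ols([x[i] for i in idx], [y[i] for i in idx]))
        pm = 1.0
        for xi in x:
            yh = [a + b * xi for a, b in boots]
            ybar = sum(yh) / B
            tau = math.sqrt(sum((v - ybar) ** 2 for v in yh) / (B - 1))
            pm = min(pm, p_value_normal((b0 + b1 * xi) / tau))
        pms.append(pm)
    pms.sort()
    k = max(1, round(alpha * A))  # alpha*A rounded, as in the text
    return pms[k - 1]
```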

Of course, a crucial issue is how well the method performs when dealing with non-normality and heteroscedasticity. Simulations indicate that it performs well when testing at the 0.05 level and $n=20$ (Wilcox, 2016c). Even OLS performed tolerably well, but generally using the Theil–Sen estimator or the quantile regression estimator provides better control over the Type I error probability. (When using least squares regression, Faraway & Sun, 1995, derived an alternative method that allows heteroscedasticity.)

URL:

https://www.sciencedirect.com/science/article/pii/B9780128047330000111

## CORRELATION FUNCTIONS

S. Braun, in Encyclopedia of Vibration, 2001

### Examples of Correlations and Spectra for Random Signals

We first note that for the case:

(13)$\begin{array}{l}\phantom{\rule{1em}{0ex}}R\left(\tau \right)={S}_{0}\delta \left(\tau \right)\\ \hfill {R}_{xx}\left(0\right)=\frac{1}{2\pi }{\int }_{-\mathrm{\infty }}^{\mathrm{\infty }}{S}_{xx}\left(\omega \right)\phantom{\rule{0.2em}{0ex}}\mathrm{d}\omega ={\int }_{-\mathrm{\infty }}^{\mathrm{\infty }}{S}_{\mathit{xx}}\left(f\right)\phantom{\rule{0.2em}{0ex}}\text{d}f\end{array}$

where ${R}_{xx}\left(0\right)$ is the total power (we assume zero mean), and the PSD is interpreted as the distribution of the total power in the frequency domain. It should be noted that for this case the power tends to infinity; the case is possible mathematically but will never exist exactly in practice.

Next, using eqn (9) we consider some examples involving some idealized situations. These can often help in defining general properties.

Example 1 This concerns a possible definition (and intuitive understanding) of white noise. Assuming a constant PSD of value S0 covering the infinite frequency range ±∞, we compute the autocorrelation as an impulse function. The autocorrelation being an impulse, there is zero correlation between two signal points separated by any incremental time, a completely memory-less phenomenon. White noise is obviously a mathematical notion.

Example 2 This concerns a constant PSD limited to fmax, of magnitude equal to S0. The autocorrelation is then:

(14)${R}_{xx}\left(\tau \right)=2{S}_{0}{f}_{\mathrm{max}}\frac{\mathrm{sin}\left(2\pi {f}_{\mathrm{max}}\tau \right)}{\left(2\pi {f}_{\mathrm{max}}\tau \right)}$

and is shown in Figure 2C. The first zero crossing of Rxx occurs at $\tau =1/\left(2{f}_{\mathrm{max}}\right)$, and this is roughly the memory of the process: the time interval over which there is still a correlation between the signal samples. The smaller the bandwidth fmax, the longer this memory will be.
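Equation (14) is straightforward to evaluate numerically; a small Python check of the limiting value at τ = 0 and of the first zero crossing (the function name is ours):

```python
import math

def r_xx(tau, s0, fmax):
    """Autocorrelation of band-limited white noise, eqn (14):
    R(tau) = 2*S0*fmax * sin(2*pi*fmax*tau) / (2*pi*fmax*tau)."""
    if tau == 0:
        return 2.0 * s0 * fmax  # sinc limit at tau = 0
    arg = 2.0 * math.pi * fmax * tau
    return 2.0 * s0 * fmax * math.sin(arg) / arg
```

At τ = 1/(2 fmax) the argument of the sine is π, so the autocorrelation crosses zero there, as stated.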

Example 3 Here we show a narrowband PSD, typical of a mass-spring-damper (SDOF) system excited by a white noise with a PSD equal to S0:

(15)${R}_{xx}\left(\tau \right)=\frac{\pi {S}_{0}{f}_{0}}{2\zeta }\mathrm{exp}\phantom{\rule{0.2em}{0ex}}\left(-2\pi \zeta {f}_{0}|\tau |\right)\phantom{\rule{0.2em}{0ex}}\mathrm{cos}\phantom{\rule{0.2em}{0ex}}\left(2\pi {f}_{0}\tau \right)$

where f0 is the natural frequency and ζ is the damping ratio. This is shown in Figure 2F. While the autocorrelation is oscillatory (dictated by f0), the duration in the correlation domain is inversely proportional to the bandwidth in the frequency domain.

URL:

https://www.sciencedirect.com/science/article/pii/B0122270851001703

## Correlation and Tests of Independence

Rand Wilcox, in Introduction to Robust Estimation and Hypothesis Testing (Fourth Edition), 2017

### 9.7 Exercises

1.

Generate 20 observations from a standard normal distribution and store them in the R variable ep. Repeat this and store the values in x. Compute y=x+ep and compute Kendall's tau. Generally, what happens if two pairs of points are added at $\left(2.1,\phantom{\rule{0.2em}{0ex}}-2.4\right)$? Does this have a large impact on tau? What would you expect to happen to the p-value when testing ${H}_{0}$: $\tau =0$?

2.

Repeat Exercise 1 with Spearman's rho, the percentage bend correlation, and the Winsorized correlation.

3.

Demonstrate that heteroscedasticity affects the probability of a Type I error when testing the hypothesis of a zero correlation based on any type M correlation and non-bootstrap method covered in this chapter.

4.

Use the function cov.mve(m,cor=T) to compute the MVE correlation for the star data in Figure 9.2. Compare the results to the Winsorized, percentage bend, skipped, and biweight correlations, as well as the M-estimate of correlation returned by the R function relfun.

5.

Using the Group 1 alcohol data in Section 8.6.2, compute the MVE estimate of correlation and compare the results to the biweight midcorrelation, the percentage bend correlation using $\beta =0.1$, 0.2, 0.3, 0.4, and 0.5, Winsorized correlation using $\gamma =0.1$ and 0.2, and the skipped correlation.

6.

Repeat the previous problem using the data for Group 2.

7.

The method for detecting outliers, described in Section 6.4.3, could be modified by replacing the MVE estimator with the Winsorized mean and covariance matrix. Discuss how this would be done and its relative merits.

8.

Using the data in the file read.dat, test for independence using the data in columns 2, 3, and 10 and the R function pball. Try $\beta =0.1$, 0.3, and 0.5. Comment on any discrepancies.

9.

Examine the variables in the last exercise using the R function mscor.

10.

For the data used in the last two exercises, test the hypothesis of independence using the function indt. Why might indt find an association not detected by any of the correlations covered in this chapter?

11.

For the data in the file read.dat, test for independence using the data in columns 4 and 5 and $\beta =0.1$.

12.

The definition of the percentage bend correlation coefficient, ${\rho }_{\mathrm{pb}}$, involves a measure of scale, ${\omega }_{x}$, that is estimated with $\stackrel{ˆ}{\omega }={W}_{\left(m\right)}$, where ${W}_{i}=|{X}_{i}-{M}_{x}|$ and $m=\left[\left(1-\beta \right)n\right]$, and $0\le \beta \le 0.5$. Note that this measure of scale is defined even when $0.5<\beta <1$ provided that $m>0$. Argue that the finite sample breakdown point of this estimator is maximized when $\beta =0.5$.

13.

If in the definition of the biweight midcovariance, the median is replaced by the biweight measure of location, the biweight midcovariance is equal to zero under independence. Describe some negative consequences of replacing the median with the biweight measure of location.

14.

Let X be a standard normal random variable, and suppose Y is a contaminated normal with probability density function given by Eq. (1.1). Let $Q=\rho X+\sqrt{1-{\rho }^{2}}Y$, $-1\le \rho \le 1$. Verify that the correlation between X and Q is

$\frac{\rho }{\sqrt{{\rho }^{2}+\left(1-{\rho }^{2}\right)\left(1-ϵ+ϵ{K}^{2}\right)}}.$

Examine how the correlation changes as K gets large with $ϵ=0.1$. What does this illustrate about the robustness of ρ?
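A quick numeric check of the last exercise (the function name is ours): evaluating the closed-form expression shows the correlation collapsing as K grows with ϵ fixed at 0.1.

```python
import math

def cor_xq(rho, eps, K):
    """Correlation between X and Q = rho*X + sqrt(1 - rho^2)*Y when Y is
    a contaminated normal with variance 1 - eps + eps*K**2."""
    return rho / math.sqrt(rho ** 2 + (1 - rho ** 2) * (1 - eps + eps * K ** 2))
```

With ρ = 0.5 and ϵ = 0.1, the correlation is 0.5 at K = 1 but only about 0.17 at K = 10: a small amount of contamination in Y drives the correlation toward zero even though ρ is held fixed, which is the point of the exercise.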

URL:

https://www.sciencedirect.com/science/article/pii/B9780128047330000093

## Partial Width Correlations and Common Doorway States

A.M. LANE, in Nuclear, Particle and Many Body Physics, 1972

### Common Doorway States for Two Channels

When two channels have the same doorway states d, then the above form γλc = Σd 〈λ | d〉 γdc applies to both. It is easy to show that, with the assumptions that 〈λ | d〉 are uncorrelated in phase and magnitude,

$\overline{{\gamma }_{\lambda c}{\gamma }_{\lambda {c}^{\prime }}}=\sum _{d}\overline{{〈\lambda |d〉}^{2}}{\gamma }_{dc}{\gamma }_{d{c}^{\prime }}$

and

$\overline{{\gamma }_{\lambda c}^{2}{\gamma }_{\lambda {c}^{\prime }}^{2}}-\overline{{\gamma }_{\lambda c}^{2}}\overline{{\gamma }_{\lambda {c}^{\prime }}^{2}}=2{\left(\sum _{d}\overline{{〈\lambda |d〉}^{2}}{\gamma }_{dc}{\gamma }_{d{c}^{\prime }}\right)}^{2}.$

The similarity of the right sides is a reflection of the previously observed fact that [ρ(γλc, γλc′)]2 = ρ(γλc2, γλc′2) in the case of linear correlations. In fact

$\rho \left({\gamma }_{\lambda c}{\gamma }_{\lambda {c}^{\prime }}\right)=\frac{\sum {}_{d}\overline{{〈\lambda |d〉}^{2}}{\gamma }_{dc}{\gamma }_{d{c}^{\prime }}}{{\left[\left(\sum {}_{d}\overline{{〈\lambda |d〉}^{2}}{\gamma }_{dc}^{2}\right)\left(\sum {}_{d}\overline{{〈\lambda |d〉}^{2}}{\gamma }_{d{c}^{\prime }}^{2}\right)\right]}^{1/2}}.$

We see that an isolated doorway implies maximum correlation ρ = 1. In the opposite extreme, when all doorways overlap completely so that $\overline{{〈\text{λ}|d〉}^{2}}$ is independent of d for all d, the relation Σd γdc γdc′ = 0 implies zero correlation. Let us now consider the intermediate case where a number n of doorways occurs, but not a large fraction of the total (so that the relation Σd γdc γdc′ = 0 does not operate). Taking γdc to have random signs, the average value of ρ is zero and the rms value is n−1/2. Since [ρ(γλc, γλc′)]2 = ρ(γλc2, γλc′2), this means that the mean value of ρ(γλc2, γλc′2) is n−1. The spread in values arising from the random signs of γdc (ignoring variation in magnitude) is [var ρ(γλc2, γλc′2)]1/2 = mean ρ(γλc2, γλc′2), i.e., like an exponential distribution. When variation in magnitude is allowed, the spread will be larger still. As an example, with the exponential form the observed typical value ρ(γλn2, γλf2) = 0.27 corresponds to the mean value for 4 doorways but is within the bounds of reasonable probability (≳ 10% chance) for up to 9 doorways. The highest observed value, 0.76, in the same view corresponds to ≤ 3 doorways. Both numbers will be even larger when variation in the size of γdc and $\overline{{〈\text{λ}|d〉}^{2}}$ is included. Notice, however, that the spread in values of ρ falls when the situation of completely overlapping doorways is approached. In that case, as noted, we have Σd γdc γdc′ ≈ 0, since the sum on d is now a complete set. This means that one cannot estimate a range of values for $\sum \overline{{〈\text{λ}|d〉}^{2}}{\gamma }_{dc}{\gamma }_{d{c}^{\prime }}$ arising from random signs of γdc, since these signs are not random. So far we have assumed that the doorways d have similar features (i.e., values of γdc γdc′). This will not be the case in general. An example of a different case is when d is a common doorway, while the d′ are doorways for c′ but not c. In this case,

$\rho ={\left(1+\left(\sum _{{d}^{\prime }}{\gamma }_{{d}^{\prime }{c}^{\prime }}^{2}\right)/{\gamma }_{dc}^{2}\right)}^{-1/2}.$

Finally, we note that the case of an isolated common doorway predicts that γλc γλc′ should have the same phase for all levels λ (viz., that of γdc γdc′) instead of fluctuating randomly. This means a tendency for destructive interference between levels. The only nonrandom effect reported is in 197Au(n, γ), where interference between the 4.9 eV and 60 eV levels is constructive for each of 24 final states, implying systematically opposite phases of γλn γλf for the two levels. This is hard to understand with the doorway picture.

URL:

https://www.sciencedirect.com/science/article/pii/B9780125082013500144

## Correlated Chronometric and Psychometric Variables

Arthur R. Jensen, in Clocking the Mind, 2006

### Task complexity and the RT–IQ correlation

It has long seemed paradoxical that RT has low to moderate correlations with IQ, the correlation increasing as a function of task complexity (or length of RT) while the time taken per item on conventional PTs is much less correlated with total score (or with IQ on another test) based on the number of items being scored as correct. The times taken per Raven matrices item, for example, show near-zero correlations with RT. The true-score variance of test scores depends almost entirely on the number right (or conversely, the number of error responses). The relationship between RT and task complexity or cognitive load of the items to be responded to (i.e., the RS) has been a subject of frequent discussion and dispute in the RT–IQ literature (e.g., Larson & Saccuzzo, 1989). I have examined virtually the entire literature on this seeming paradox, but rather than giving a detailed account of all these empirical studies, I will simply summarize the main conclusions that clearly emerge from a wide range of studies. These findings can be illustrated by a couple of studies that were specifically directed at analyzing the relationship of task complexity to the RT–IQ correlation.

But first, a few words about the definitions of complexity in this context. One or another of five clear operational criteria of task complexity is generally used: (1) the average of the subjective ratings made by N judges of various RT tasks' “complexity”; (2) the amount of uncertainty as an increasing function of the number of choices (response alternatives) that are associated with the n different RS, such as the difference between SRT and CRTs based on two or more stimulus-response alternatives; (3) the theoretically presumed number of distinct mental operations that are required for a correct response, such as the difference between adding two digits and adding three digits; (4) the difference between (a) single tasks that make a minimal demand on memory and (b) dual tasks requiring that one item of information RS1 be held in memory while performing the interposed task RS2-RT2, then performing RT1; and (5) the various tasks' mean RTs used as a measure of complexity. All of the above conditions except 1 and 5 can be experimentally manipulated as independent variables while RT is the dependent variable.

Subjective judgment (condition 1) is probably the most questionable measure, although, as predicted by the Spearman-Brown formula, the mean ranking of tasks for “complexity” would gain in validity by aggregating the rankings of an increasing number of judges. A study of the SVT (described on page) in which a group of 25 college students were asked to rank the 14 item types of the SVT for “complexity” showed that subjective judgments of item complexity do have a fair degree of objective validity (Paul, 1984). The raters were naive concerning the SVT and its use in RT research. The mean ratings of “complexity” of the 14 SVT items (from least complex = 1 to most complex = 14) had a rank-order correlation of +.61 with the items' mean RTs obtained in another group of students (N = 50).

The hypothesized relationship of the RT–IQ correlation to task complexity is shown in Figure 9.13. The level of complexity at the peak of the curve is not constant for groups of different ability levels. Although the relative levels of complexity on different tasks can be ranked with fair consistency, the absolute complexity level varies across different ability levels. The peak of the curve in Figure 9.13 occurs at a shorter RT for adults than for children and for high IQ than for low IQ groups of the same age. The peak level of task complexity for correlation with IQ in college students, for example, is marked by a mean RT of about 1 s; and for elementary school children it is between 2 and 3 s. But there has not been enough systematic parametric research on this point to permit statements that go beyond these tentative generalizations.

A direct test of the hypothesis depicted in Figure 9.13 was based on eight novel speed-of-processing tasks devised to differ systematically in difficulty or complexity (Lindley, Wilson, Smith, & Bathurst, 1995). They were administered to a total of 195 undergraduate college students. IQ was measured by the Wonderlic Personnel Test. The results are summarized in Figure 9.14. This study affords a clue to what is probably the major cause of the very wide range of RT–IQ correlations reported in various studies. The correlation is influenced by two conditions: (1) task complexity and (2) the mean and range of IQ in the subject sample, as the peak of the complexity function shifts to longer RTs as the mean IQ declines. Therefore, the significant RT–IQ correlations fall within a relatively narrow range of task complexity for various groups selected from different regions of the whole spectrum of ability in the population. Hence, when it comes to measuring general intelligence by means of RT, there is probably no possibility of finding any single RT task with a level of task complexity that is optimally applicable to different samples that range widely in ability. The average RT–IQ correlation in the general population on any single task, therefore, represents an average of mostly suboptimal complexity levels (hence lower RT–IQ correlations) for most of the ability strata within the whole population.

The optimum level of task complexity for the IQ–RT correlation is hypothesized to occur near the RT threshold between error-free responses and error responses. This is the point on the complexity continuum beyond which RT becomes less correlated (negatively) with IQ and errors become increasingly correlated (negatively) with IQ.

This hypothesis of a trade-off between RT and errors in the RT–IQ correlation and the Errors–IQ correlation was tested in a study expressly designed for this purpose (Schweizer, 1998). In order to study the relationships between errors, RT, and the RT–IQ correlation, the RTs and the number of errors had to be measured entirely on the high side of the complexity function shown in Figure 9.13, resulting in mean RTs ranging between 3 and 7 s; and even then the error rates averaged only 16 percent. Three sets of different RT tasks were used (numbers ordering, figures ordering, mental arithmetic). In each set, the task complexity was experimentally controlled as the independent variable to produce three distinct levels of complexity, determined by the number of homogeneous mental operations required to make a correct response. IQ was measured as the averaged scores on the Wechsler test (WAIS-R); subjects were 76 university students (mean IQ = 120.4, SD = 9.6).

Figure 9.15 shows the results (averaged over the three different RT tasks) for the hypothesized functional relationships between the key variables. The consistent linearity of the relationships shows that it is possible to devise cognitive tasks that vary unidimensionally in complexity.

Unfortunately, a graph of the relation between complexity and the Errors–IQ correlation is not possible with the given data. The Errors–IQ correlations were said to be very small, and only the two largest of the nine possible correlations were reported, both significant (−.24 and −.28, each at p < .05); but they evinced no systematic relationship to task complexity. It would probably require a considerably greater range of complexity and error rate to adequately test the relation between task complexity and the Errors–IQ correlation. In typical PTs it is so problematic to measure item complexity that the term is usually used synonymously with item difficulty, measured as the error rate (or percent passing) when all item responses are scored as either right or wrong. Then, of course, the relationship between item difficulty and the Errors–IQ correlation is a foregone conclusion. The correlation between item response times and IQ based on right-wrong scoring is typically very low, but this is mainly because there are so many different causes of error responses to test items, except in item sets that have been specially constructed to differ in difficulty along some unitary dimension of complexity. The meaning of complexity in chronometric tasks is discussed further in Chapter 11.