BASIC BOOTSTRAP METHODS

Rand R. Wilcox, in Applying Contemporary Statistical Techniques, 2003

7.3.2 Testing for Zero Correlation

The modified percentile bootstrap method just described performs relatively well when the goal is to test the hypothesis of a zero correlation (Wilcox & Muska, 2001). You proceed exactly as already described in this section, except that for every bootstrap sample you compute Pearson's correlation r rather than the least squares estimate of the slope. So now we have B bootstrap values for r, which, when written in ascending order, we label r(1)* ≤ … ≤ r(B)*. Then a .95 confidence interval for ρ is

(r(a)*,r(c)*),

where again for n < 40, a = 7 and c = 593; for 40 ≤ n < 80, a = 8 and c = 592; for 80 ≤ n < 180, a = 11 and c = 588; for 180 ≤ n < 250, a = 14 and c = 585; while for n ≥ 250, a = 15 and c = 584. As usual, if this interval does not contain zero, reject H0: ρ = 0.

We saw in Chapter 6 that heteroscedasticity causes Student's T test of H0: ρ = 0 to have undesirable properties. All indications are that the modified percentile bootstrap eliminates these problems. When ρ ≠ 0, the actual probability coverage remains fairly close to the .95 level provided ρ is not too large. But if, for example, ρ = .8, the actual probability coverage of the modified percentile bootstrap method can be unsatisfactory in some situations (Wilcox & Muska, 2001). There is no known method for correcting this problem.
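As a minimal sketch (not the book's own code), the procedure just described can be written in base R as follows; the function name boot.pcor is hypothetical, and the default B = 599 is assumed because the indices a and c quoted above are defined for 599 bootstrap samples:

boot.pcor <- function(x, y, B = 599) {
  # Modified percentile bootstrap test of H0: rho = 0 (sketch; a and c assume B = 599)
  n <- length(x)
  rstar <- numeric(B)
  for (b in 1:B) {
    i <- sample(n, n, replace = TRUE)   # resample (x, y) pairs together
    rstar[b] <- cor(x[i], y[i])         # Pearson's r for this bootstrap sample
  }
  rstar <- sort(rstar)
  if (n < 40) {
    a <- 7; c <- 593
  } else if (n < 80) {
    a <- 8; c <- 592
  } else if (n < 180) {
    a <- 11; c <- 588
  } else if (n < 250) {
    a <- 14; c <- 585
  } else {
    a <- 15; c <- 584
  }
  ci <- c(rstar[a], rstar[c])                        # .95 confidence interval for rho
  list(ci = ci, reject = ci[1] > 0 || ci[2] < 0)     # reject H0 if 0 lies outside the interval
}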

Read full chapter

URL: 

https://www.sciencedirect.com/science/article/pii/B9780127515410500286

Correlation and Tests of Independence

Rand Wilcox, in Introduction to Robust Estimation and Hypothesis Testing (Third Edition), 2012

9.3.14 R Functions corb, pcorb, and pcorhc4

The R function

corb(x,y,corfun=pbcor,nboot=599,…)

tests the hypothesis of a zero correlation using the heteroscedastic bootstrap method just described. By default, it uses the percentage bend correlation, but any correlation can be specified by the argument corfun. For example, the command corb(x,y,corfun=wincor,tr=0.25) will use a 25% Winsorized correlation.

When working with Pearson's correlation, use the function

pcorb(x,y),

which applies the modified percentile bootstrap method described in the previous section. The R function

pcorhc4(x,y,alpha=0.05)

applies the HC4 method.
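For illustration, assuming these functions have already been sourced into R (e.g., from the Rallfun file that accompanies the book), a typical call sequence might look as follows; the simulated data are not from the book:

# Illustrative data; corb, pcorb, pcorhc4 (and pbcor, wincor) must already be sourced.
set.seed(1)
x <- rnorm(50)
y <- 0.5 * x + rnorm(50)

corb(x, y)                                # percentage bend correlation (default corfun = pbcor)
corb(x, y, corfun = wincor, tr = 0.25)    # 25% Winsorized correlation
pcorb(x, y)                               # Pearson's r via the modified percentile bootstrap
pcorhc4(x, y, alpha = 0.05)               # Pearson's r via the HC4 method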

Read full chapter

URL: 

https://www.sciencedirect.com/science/article/pii/B9780123869838000093

The Renormalization Group Operations

Jurgen Honig, Józef Spałek, in A Primer to the Theory of Critical Phenomena, 2018

9.1 Real Space Renormalization

At the outset, we need to consider some generalities. In executing the blocking methodology, two trivial cases are encountered: if we maintain the temperature of a system above its critical value Tc, then successive rescalings, as already explained, shrink lattice distances and correlation lengths ξ by a factor b > 1, so that ξ′ = ξ/b; after many such operations, the correlation length eventually tends to zero. This represents the high-T state. Conversely, if we start with a system subject to some degree of magnetic order below Tc, then lowering the temperature increases the degree of order at the start of the blocking process. Concurrently, the correlation variables involve ever larger distances until they cover the entire system, indicative of complete order that is unchanged during rescaling. We have reached the low-T attractive critical point.
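In symbols (a restatement of the rescaling just described, with each blocking step dividing lengths by b):

\[ \xi^{(k)} = \frac{\xi}{b^{k}} \longrightarrow 0 \quad \text{as } k \to \infty \qquad (T > T_c,\ b > 1), \]

so repeated blocking drives any finite correlation length toward the zero-correlation-length, high-T fixed point, while below Tc the flow is instead toward the fully ordered low-T fixed point.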

Clearly, the intermediate case T = Tc is of particular interest; it divides the Hamiltonians for which the various coupling constants Kl (for T > Tc) of Chapter 8 move the system toward the upper trivial point from the Hamiltonians for which the coupling constants lead the system toward the lower attractive point. This dichotomy is reflected in the existence of a dividing (hyper)plane, called a critical surface, spanned by the coupling constants Kl as described later. A point on that surface represents a Hamiltonian at a critical temperature, with the other parameters at values corresponding to the location of that point.

Read full chapter

URL: 

https://www.sciencedirect.com/science/article/pii/B9780128046852000097

Regression and Correlation

R.H. Riffenburgh, in Statistics in Medicine (Third Edition), 2012

Assumptions Underlying Correlation

Let us list assumptions about continuous-variable, or Pearson, correlation and compare them with the five regression assumptions from Section 21.2.

1.

Correlation and regression require the same assumption: the errors in data values are independent one from another.

2.

Correlation always requires the assumption of a straight-line relationship. A large correlation coefficient implies that there is a large linear component of relationship, but not that other components do not exist. In contrast, a zero correlation coefficient only implies that there is not a linear component; there may be curved relationships, as was illustrated in Figure 21.3.

3.

The assumption of exact readings on one axis is not required of correlation; both x and y may be measured with random variability, as was illustrated in Figure 21.4.

4.

and 5. These take on a different form because x and y vary jointly, so what is assumed for y relative to x is also assumed for x relative to y. x and y are assumed to follow a bivariate normal distribution, that is, a sort of hill in three dimensions, where x and y are the width and length and the height is the probability (which could be read as relative frequency) of any joint value of x and y. The peak of the hill lies over the point specified by the two means, and the height of the hill diminishes in a normal shape in any direction radiating out from that point. If the bivariate normal assumption is badly violated, it is possible to calculate a correlation coefficient using rank methods.
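A minimal base-R sketch of this rank-based fallback; the skewed simulated data and the sample size are assumptions for illustration only:

# Pearson vs. rank (Spearman) correlation when bivariate normality is doubtful.
set.seed(2)
x <- rexp(40)                          # clearly non-normal, skewed data
y <- x + rexp(40)

cor(x, y)                              # Pearson correlation coefficient
cor(x, y, method = "spearman")         # rank-based coefficient
cor.test(x, y, method = "spearman")    # rank-based test of association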

Read full chapter

URL: 

https://www.sciencedirect.com/science/article/pii/B9780123848642000214

Statistical Methods for Physical Science

John L. Stanford, Jerald R. Ziemke, in Methods in Experimental Physics, 1994

16.2.4 “Local” Critical Correlation Estimation

The first hypothesis-testing procedure concerns correlation between two time series, one at a given map grid point and the other a reference time series (as in Fig. 1). The test is to reject the null hypothesis (that correlations are due to chance data variations) if |ρ̂| > ρc, the local critical correlation value. For the examples in this chapter, we choose the local test of temporal correlation between two time series to be made at the 5% level of significance. (The local test level is the choice of the investigator.)

There are at least two methods for finding a suitable critical correlation value ρc for the local test:

1.

Computational method. One method is to use a Monte Carlo experiment, plotting percent area vs. correlation after combining a large number (hundreds or thousands) of randomly generated correlation maps (each map derived from a simulated series at the reference point). This single plot of percent area vs. correlation will generally have a symmetric bell shape about zero correlation, from which a value for ρc can easily be found, i.e., that value of |ρ̂| exceeded in only 5% of test cases.

2.

Bivariate normal model method. If one assumes that data time series taken from two fixed sites can be adequately modeled with a bivariate normal distribution, then it can be shown that the random variable T = ρ̂√(n − 2)/√(1 − ρ̂²) has a t distribution with n − 2 (temporal) DOF under the null hypothesis that ρ = 0. The method used in this chapter for obtaining approximate temporal DOF assumes that the response of the filter used in data preprocessing (here a bandpass filter), plotted as response vs. frequency, is positive definite and normalized such that the maximum filter value is 1. Because of the bandpass filtering, the temporal DOF will not be equal to n − 2. Temporal DOF are estimated by first multiplying n − 2 by the area of the filter and then dividing by n/2; n/2 is the area of a full spectrum response (no temporal filtering). This simple approach yields critical correlation values that are often quite close to those derived from method (1) [3]. Since the t probability density function is symmetric about t = 0, a 5% local test becomes 2.5% on each tail, requiring the one-sided critical value Tc such that the probability T > Tc is 2.5% (that is, the t distribution function has value 0.975 at T = Tc).

A main strength of method (2) is that ρc can be calculated easily, particularly for small significance levels, for example, a 0.01% local level. At such a small significance level, method (1) would require much more computation, viz., tens of thousands of randomly generated maps. Method (2) is used exclusively in this chapter for determining ρc because it is straightforward and requires minimal effort.

In later sections we will use global fields (latitude vs. longitude) of temperature and ozone data in separate correlation studies. The temperature (ozone) data sets use a 40–50 day (27-day) period bandpass filter with normalized area calculated to be 22.5 (22), which, according to method (2), results in 45 (44) temporal DOF. For either 44 or 45 DOF, the 5% test value for Tc is found from tables to be approximately 2.02, and from the definition of the random variable T, the corresponding 5% critical correlation is ρc = Tc/√(Tc² + DOF) = 0.29 for both the ozone and temperature analyses. The same critical correlation value for the ozone and temperature studies is purely coincidental; two different filter responses will generally have two different computed values of ρc for the same chosen local significance level.
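As a quick check of this arithmetic, the method (2) computation can be reproduced in R with the base function qt(); the value DOF = 45 is the one quoted above for the temperature data:

# Method (2): critical correlation from the t distribution (DOF = 45 as in the text).
dof   <- 45
Tc    <- qt(0.975, df = dof)       # one-sided 2.5% point for a 5% two-tailed local test
rho_c <- Tc / sqrt(Tc^2 + dof)     # rho_c = Tc / sqrt(Tc^2 + DOF)
c(Tc = Tc, rho_c = rho_c)          # roughly 2.01 and 0.29, matching the values quoted above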

Read full chapter

URL: 

https://www.sciencedirect.com/science/article/pii/S0076695X08602665

More Regression Methods

Rand Wilcox, in Introduction to Robust Estimation and Hypothesis Testing (Fourth Edition), 2017

11.1.12 Confidence Bands for the Typical Value of y Given x

This section deals with computing a confidence band, sometimes called a prediction band, for m(x) = β0 + β1x, the typical value of y given x, that allows heteroscedasticity. More precisely, if the parameters β0 and β1 are estimated based on the random sample (x1, y1), …, (xn, yn), the goal is to compute a confidence interval for m(xi) (i = 1, …, n) such that the simultaneous probability coverage is approximately 1 − α. And there is the related goal of testing the n hypotheses

(11.7) H0: m(xi) = θ0,

where θ0 is some specified constant. (For a review of methods based on the least squares estimator that assume normality and homoscedasticity, see Liu, Lin, & Piegorsch, 2008.)

The basic strategy mimics the approach used by the two-sample version of Student's t test. Begin by assuming normality and homoscedasticity, determine an appropriate critical value based on the sample size and the regression estimator that is used in conjunction with an obvious test statistic, and then study the impact of non-normality and heteroscedasticity via simulations.

First consider a single value for the covariate, x. Let τ² denote the squared standard error of ŷ = b0 + b1x, an estimate of m(x), where b0 and b1 are estimates of β0 and β1, respectively, based on some regression estimator to be determined. A basic percentile bootstrap method is used to estimate τ² (e.g., Efron & Tibshirani, 1993). More precisely, generate a bootstrap sample by randomly sampling with replacement n pairs of points from (x1, y1), …, (xn, yn), yielding (x1*, y1*), …, (xn*, yn*). Based on this bootstrap sample, estimate the intercept and slope and label the results b0* and b1*, which yields ŷ* = b0* + b1*x. Repeat this B times, yielding ŷ1*, …, ŷB*, in which case an estimate of τ² is

\(\hat{\tau}^2 = \frac{1}{B-1}\sum_{b=1}^{B}\left(\hat{y}_b^{*} - \bar{y}^{*}\right)^2,\)

where ȳ* = Σ ŷb*/B. (In terms of controlling the probability of a Type I error, B = 100 appears to suffice.) Then the hypothesis given by (11.7) can be tested with

\(W = \frac{\hat{y} - \theta_0}{\hat{\tau}}\)

once an appropriate critical value has been determined.
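Here is a minimal R sketch of the bootstrap estimate of τ and of the statistic W for one covariate value; the function name boot.se.yhat is hypothetical, and lm() (least squares) merely stands in for whichever regression estimator is chosen:

boot.se.yhat <- function(x, y, xval, theta0 = 0, B = 100) {
  # Percentile bootstrap estimate of the squared standard error of yhat = b0 + b1*xval
  n <- length(x)
  yhat.star <- numeric(B)
  for (b in 1:B) {
    i <- sample(n, n, replace = TRUE)          # resample (x, y) pairs with replacement
    cf <- coef(lm(y[i] ~ x[i]))                # bootstrap intercept and slope
    yhat.star[b] <- cf[1] + cf[2] * xval
  }
  tau2 <- var(yhat.star)                       # equals sum((yhat* - mean)^2) / (B - 1)
  cf0  <- coef(lm(y ~ x))
  yhat <- cf0[1] + cf0[2] * xval
  W <- (yhat - theta0) / sqrt(tau2)            # compare with an appropriate critical value
  list(yhat = unname(yhat), tau = sqrt(tau2), W = unname(W))
}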

Momentarily assume that W has a standard normal distribution, in which case a p-value can be determined for each xi, i = 1, …, n. Denote the resulting p-values by p1, …, pn and let pm = min(p1, …, pn). As is evident, if pα, the α quantile of pm, can be determined, the probability of one or more Type I errors can be controlled simply by rejecting the ith hypothesis if and only if pi ≤ pα. And in addition, confidence intervals for each m(xi) can be computed that have simultaneous probability coverage 1 − α.

The distribution of pm is approximated in the following manner. Momentarily assume that both the error term ϵ and x have a standard normal distribution and consider the case β0 = β1 = 0. Then a simulation can be performed yielding an estimate of the α quantile of the distribution of pm. In effect, generate n pairs of observations from a bivariate normal distribution having correlation zero, yielding (x1, y1), …, (xn, yn). Compute pm and repeat this process A times, yielding pm1, …, pmA. Put these A values in ascending order, yielding pm(1) ≤ … ≤ pm(A), and let k = αA rounded to the nearest integer. Then the α quantile of pm, pα, is estimated with pm(k). Moreover, the simultaneous probability coverage among the n confidence intervals

(11.8) ŷi ± z τ̂i, i = 1, …, n,

is approximately 1 − α, where z is the 1 − pα/2 quantile of a standard normal distribution, ŷi = b0 + b1xi, and τ̂i is the corresponding estimate of the standard error. Here are some estimates of pα when 1 − α = 0.95 and when using the Theil–Sen (TS) estimator, the modification of the Theil–Sen estimator based on the Harrell–Davis estimator (TSHD), OLS, and the quantile regression estimator (QREG):

n      TS      OLS     TSHD    QREG
10     0.011   0.001   0.009   0.011
20     0.010   0.004   0.008   0.009
50     0.010   0.008   0.009   0.009
100    0.010   0.008   0.009   0.008
400    0.011   0.011   0.012   0.009
600    0.010   0.011   0.010   0.010

As can be seen, the value depends on the sample size when using least squares regression, as expected. In contrast, when using the robust regression estimators, the estimated values suggest that there is little or no variation in the value of pα as a function of the sample size, at least when 10 ≤ n ≤ 600.

Of course, a crucial issue is how well the method performs when dealing with non-normality and heteroscedasticity. Simulations indicate that it performs well when testing at the 0.05 level and n=20 (Wilcox, 2016c). Even OLS performed tolerably well, but generally using the Theil–Sen estimator or the quantile regression estimator provides better control over the Type I error probability. (When using least squares regression, Faraway & Sun, 1995, derived an alternative method that allows heteroscedasticity.)

Read full chapter

URL: 

https://www.sciencedirect.com/science/article/pii/B9780128047330000111

CORRELATION FUNCTIONS

S. Braun, in Encyclopedia of Vibration, 2001

Examples of Correlations and Spectra for Random Signals

We first note that for the case:

(13) \(R(\tau) = S_0\,\delta(\tau), \qquad R_{xx}(0) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_{xx}(\omega)\,d\omega = \int_{-\infty}^{\infty} S_{xx}(f)\,df\)

where Rxx(0) is the total power (we assume zero mean), and the PSD is interpreted as the distribution of the total power over the frequency domain. It should be noted that the power tends to infinity, and this case, which is possible mathematically, will never exist exactly in practice.

Next, using eqn (9), we consider some examples involving idealized situations. These can often help in defining general properties.

Example 1 This concerns a possible definition (and intuitive understanding) of white noise. Assuming a constant PSD of value S0 covering the infinite frequency range ±∞, we compute the autocorrelation as an impulse function. The autocorrelation being an impulse, there is zero correlation between two signal points separated by any incremental time, a completely memory-less phenomenon. White noise is obviously a mathematical notion.

Example 2 This concerns a constant PSD limited to fmax, of magnitude equal to S0. The autocorrelation is then:

(14) \(R_{xx}(\tau) = 2S_0 f_{\max}\,\frac{\sin(2\pi f_{\max}\tau)}{2\pi f_{\max}\tau}\)

and is shown in Figure 2C. The first zero crossing of Rxx occurs at τ = 1/(2fmax), and this is roughly the memory of the process: the time interval over which there is still correlation between the signal samples. The smaller the bandwidth fmax, the longer this memory will be.
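A small R sketch evaluating eqn (14) numerically and locating the first zero crossing; the values of S0 and fmax are arbitrary choices for illustration:

# Autocorrelation of band-limited noise, eqn (14); S0 and fmax are illustrative values.
S0   <- 1
fmax <- 100                                   # bandwidth in Hz (assumed for illustration)
tau  <- seq(1e-5, 0.05, by = 1e-5)            # skip tau = 0, where Rxx(0) = 2*S0*fmax
Rxx  <- 2 * S0 * fmax * sin(2 * pi * fmax * tau) / (2 * pi * fmax * tau)
tau[which(diff(sign(Rxx)) != 0)[1]]           # first zero crossing, about 1/(2*fmax) = 0.005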


Figure 2. PSD vs autocorrelation of random signals. (A) Wideband noise, time domain; (B) wideband noise, PSD; (C) wideband noise, autocorrelation; (D) narrowband noise, time domain; (E) narrowband noise, PSD; (F) narrowband noise, autocorrelation.

Example 3 Here we show a narrowband PSD, typical of a mass-spring-damper (SDOF) system excited by a white noise with a PSD equal to S0:

(15) \(R_{xx}(\tau) = \frac{\pi S_0 f_0}{2\zeta}\, e^{-2\pi\zeta f_0|\tau|}\cos(2\pi f_0\tau)\)

where f0 is the natural frequency and ζ is the damping ratio. This is shown in Figure 2F. While the autocorrelation is oscillatory (dictated by f0), the duration in the correlation domain is inversely proportional to the bandwidth in the frequency domain.

Read full chapter

URL: 

https://www.sciencedirect.com/science/article/pii/B0122270851001703

Correlation and Tests of Independence

Rand Wilcox, in Introduction to Robust Estimation and Hypothesis Testing (Fourth Edition), 2017

9.7 Exercises

1.

Generate 20 observations from a standard normal distribution and store them in the R variable ep. Repeat this and store the values in x. Compute y = x + ep and compute Kendall's tau. Generally, what happens if two pairs of points are added at (2.1, 2.4)? Does this have a large impact on tau? What would you expect to happen to the p-value when testing H0: τ = 0? (A starting sketch in base R appears after these exercises.)

2.

Repeat Exercise 1 with Spearman's rho, the percentage bend correlation, and the Winsorized correlation.

3.

Demonstrate that heteroscedasticity affects the probability of a Type I error when testing the hypothesis of a zero correlation based on any type M correlation and non-bootstrap method covered in this chapter.

4.

Use the function cov.mve(m,cor=T) to compute the MVE correlation for the star data in Figure 9.2. Compare the results to the Winsorized, percentage bend, skipped, and biweight correlations, as well as the M-estimate of correlation returned by the R function relfun.

5.

Using the Group 1 alcohol data in Section 8.6.2, compute the MVE estimate of correlation and compare the results to the biweight midcorrelation, the percentage bend correlation using β=0.1, 0.2, 0.3, 0.4, and 0.5, Winsorized correlation using γ=0.1 and 0.2, and the skipped correlation.

6.

Repeat the previous problem using the data for Group 2.

7.

The method for detecting outliers, described in Section 6.4.3, could be modified by replacing the MVE estimator with the Winsorized mean and covariance matrix. Discuss how this would be done and its relative merits.

8.

Using the data in the file read.dat, test for independence using the data in columns 2, 3, and 10 and the R function pball. Try β=0.1, 0.3, and 0.5. Comment on any discrepancies.

9.

Examine the variables in the last exercise using the R function mscor.

10.

For the data used in the last two exercises, test the hypothesis of independence using the function indt. Why might indt find an association not detected by any of the correlations covered in this chapter?

11.

For the data in the file read.dat, test for independence using the data in columns 4 and 5 and β=0.1.

12.

The definition of the percentage bend correlation coefficient, ρpb, involves a measure of scale, ωx, that is estimated with ω̂ = W(m), where Wi = |Xi − Mx|, m = [(1 − β)n], and 0 ≤ β ≤ 0.5. Note that this measure of scale is defined even when 0.5 < β < 1, provided that m > 0. Argue that the finite sample breakdown point of this estimator is maximized when β = 0.5.

13.

If in the definition of the biweight midcovariance, the median is replaced by the biweight measure of location, the biweight midcovariance is equal to zero under independence. Describe some negative consequences of replacing the median with the biweight measure of location.

14.

Let X be a standard normal random variable, and suppose Y is a contaminated normal with probability density function given by Eq. (1.1). Let Q = ρX + √(1 − ρ²) Y, −1 ≤ ρ ≤ 1. Verify that the correlation between X and Q is

\(\frac{\rho}{\sqrt{\rho^{2} + (1-\rho^{2})(1-\epsilon+\epsilon K^{2})}}.\)

Examine how the correlation changes as K gets large with ϵ=0.1. What does this illustrate about the robustness of ρ?
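For Exercise 1, here is a base-R starting sketch; the random seed and the use of cor() and cor.test() are illustrative choices, not prescribed by the book:

# Starting sketch for Exercise 1: effect of two added points at (2.1, 2.4) on Kendall's tau.
set.seed(3)
ep <- rnorm(20)
x  <- rnorm(20)
y  <- x + ep

cor(x, y, method = "kendall")                 # Kendall's tau for the original data
cor.test(x, y, method = "kendall")            # p-value for H0: tau = 0

x2 <- c(x, 2.1, 2.1)                          # add two pairs of points at (2.1, 2.4)
y2 <- c(y, 2.4, 2.4)
cor(x2, y2, method = "kendall")
cor.test(x2, y2, method = "kendall")          # ties trigger a normal-approximation p-value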

Read full chapter

URL: 

https://www.sciencedirect.com/science/article/pii/B9780128047330000093

Partial Width Correlations and Common Doorway States

A.M. LANE, in Nuclear, Particle and Many Body Physics, 1972

Common Doorway States for Two Channels

When two channels have the same doorway states d, then the above form γλc = Σd 〈λ | d〉 γdc applies to both. It is easy to show that, with the assumption that the 〈λ | d〉 are uncorrelated in phase and magnitude,

\(\overline{\gamma_{\lambda c}\gamma_{\lambda c'}} = \sum_{d}\overline{|\langle\lambda|d\rangle|^{2}}\;\gamma_{dc}\gamma_{dc'}\)

and

\(\overline{\gamma_{\lambda c}^{2}\gamma_{\lambda c'}^{2}} - \overline{\gamma_{\lambda c}^{2}}\,\overline{\gamma_{\lambda c'}^{2}} = 2\left(\sum_{d}\overline{|\langle\lambda|d\rangle|^{2}}\;\gamma_{dc}\gamma_{dc'}\right)^{2}.\)

The similarity of the right sides is a reflection of the previously observed fact that [ρ(γλc, γλc′)]² = ρ(γλc², γλc′²) in the case of linear correlations. In fact

\(\rho(\gamma_{\lambda c},\gamma_{\lambda c'}) = \frac{\sum_{d}\overline{|\langle\lambda|d\rangle|^{2}}\;\gamma_{dc}\gamma_{dc'}}{\left[\left(\sum_{d}\overline{|\langle\lambda|d\rangle|^{2}}\;\gamma_{dc}^{2}\right)\left(\sum_{d}\overline{|\langle\lambda|d\rangle|^{2}}\;\gamma_{dc'}^{2}\right)\right]^{1/2}}.\)

We see that an isolated doorway implies maximum correlation ρ = 1. In the opposite extreme, when all doorways overlap completely so that the averaged |⟨λ|d⟩|² is independent of d for all d, the relation Σd γdcγdc′ = 0 implies zero correlation. Let us now consider the intermediate case where a number n of doorways occurs, but not a large fraction of the total (so that the relation Σd γdcγdc′ = 0 does not operate). Taking γdc to have random signs, the average value of ρ is zero and the rms value is n^{−1/2}. Since [ρ(γλc, γλc′)]² = ρ(γλc², γλc′²), this means that the mean value of ρ(γλc², γλc′²) is n^{−1}. The spread in values arising from the random signs of γdc (ignoring variation in magnitude) is [var ρ(γλc², γλc′²)]^{1/2} = mean ρ(γλc², γλc′²), i.e., like an exponential distribution. When variation in magnitude is allowed, the spread will be larger still. As an example, with the exponential form the observed typical value ρ(γλn², γλf²) = 0.27 corresponds to the mean value for 4 doorways but is within the bounds of reasonable probability (≳ 10% chance) for up to 9 doorways. The highest observed value, 0.76, in the same view corresponds to ≤ 3 doorways. Both numbers will be even larger when variation in the sizes of γdc and of the averaged |⟨λ|d⟩|² is included. Notice, however, that the spread in values of ρ falls when the situation of completely overlapping doorways is approached. In that case, as noted, we have Σd γdcγdc′ ≈ 0, since the sum on d is now over a complete set. This means that one cannot estimate a range of values for Σd |⟨λ|d⟩|² γdcγdc′ arising from random signs of γdc, since these signs are not random. So far we have assumed that the doorways d have similar features (i.e., values of γdcγdc′). This will not be the case in general. An example of a different case is when d is a common doorway, while the d′ are doorways for c′ but not c. In this case,

\(\rho = \left(1 + \frac{\sum_{d'}\gamma_{d'c'}^{2}}{\gamma_{dc'}^{2}}\right)^{-1/2}.\)

Finally, we note that the case of an isolated common doorway predicts that γλcγλc′ should have the same phase for all levels λ (viz., that of γdcγdc′) instead of fluctuating randomly. This means a tendency for destructive interference between levels. The only nonrandom effect reported [11] is in 197Au(n, γ) where interference between the 4.9 eV and 60 eV levels is constructive for each of 24 final states, implying systematically opposite phases of γλnγλf for the two levels. This is hard to understand with the doorway picture.

Read full chapter

URL: 

https://www.sciencedirect.com/science/article/pii/B9780125082013500144

Correlated Chronometric and Psychometric Variables

Arthur R. Jensen, in Clocking the Mind, 2006

Task complexity and the RT–IQ correlation

It has long seemed paradoxical that RT has low to moderate correlations with IQ, the correlation increasing as a function of task complexity (or length of RT) while the time taken per item on conventional PTs is much less correlated with total score (or with IQ on another test) based on the number of items being scored as correct. The times taken per Raven matrices item, for example, show near-zero correlations with RT. The true-score variance of test scores depends almost entirely on the number right (or conversely, the number of error responses). The relationship between RT and task complexity or cognitive load of the items to be responded to (i.e., the RS) has been a subject of frequent discussion and dispute in the RT–IQ literature (e.g., Larson & Saccuzzo, 1989). I have examined virtually the entire literature on this seeming paradox, but rather than giving a detailed account of all these empirical studies, I will simply summarize the main conclusions that clearly emerge from a wide range of studies. These findings can be illustrated by a couple of studies that were specifically directed at analyzing the relationship of task complexity to the RT–IQ correlation.

But first, a few words about the definitions of complexity in this context. One or another of five clear operational criteria of task complexity is generally used: (1) the average of the subjective ratings of various RT tasks' “complexity” made by N judges; (2) the amount of uncertainty as an increasing function of the number of choices (response alternatives) associated with the n different RS, such as the difference between SRT and CRTs based on two or more stimulus-response alternatives; (3) the theoretically presumed number of distinct mental operations required for a correct response, such as the difference between adding two digits and adding three digits; (4) the difference between (a) single tasks that make a minimal demand on memory and (b) dual tasks requiring that one item of information, RS1, be held in memory while performing the interposed task RS2-RT2 and then performing RT1; and (5) the various tasks' mean RTs used as a measure of complexity. All of the above conditions except 1 and 5 can be experimentally manipulated as independent variables while RT is the dependent variable.

Subjective judgment (condition 1) is probably the most questionable measure, although, as predicted by the Spearman-Brown formula, the mean ranking of tasks for “complexity” would gain in validity by aggregating the rankings of an increasing number of judges. A study of the SVT (described on page) in which a group of 25 college students were asked to rank the 14 item types of the SVT for “complexity” showed that subjective judgments of item complexity do have a fair degree of objective validity (Paul, 1984). The raters were naive concerning the SVT and its use in RT research. The mean ratings on “complexity” of the 14 SVT items (from least complex = 1 to most complex = 14) had a rank-order correlation of +.61 with the items' mean RTs obtained in another group of students (N = 50).

The hypothesized relationship of the RT–IQ correlation to task complexity is shown in Figure 9.13. The level of complexity at the peak of the curve is not constant for groups of different ability levels. Although the relative levels of complexity on different tasks can be ranked with fair consistency, the absolute complexity level varies across different ability levels. The peak of the curve in Figure 9.13 occurs at a shorter RT for adults than for children and for high IQ than for low IQ groups of the same age. The peak level of task complexity for correlation with IQ in college students, for example, is marked by a mean RT of about 1 s; and for elementary school children it is between 2 and 3 s. But there has not been enough systematic parametric research on this point to permit statements that go beyond these tentative generalizations.


Figure 9.13. The generalized relationship of the RT–IQ correlation to task complexity. The absolute value of the correlation coefficient |r| is represented here for graphic clarity, although the empirical RT–IQ correlation is always a negative value, with the very rare exceptions being attributable to sampling error.

A direct test of the hypothesis depicted in Figure 9.13 was based on eight novel speed-of-processing tasks devised to differ systematically in difficulty or complexity (Lindley, Wilson, Smith, & Bathurst, 1995). They were administered to a total of 195 undergraduate college students. IQ was measured by the Wonderlic Personnel Test. The results are summarized in Figure 9.14. This study affords a clue to what is probably the major cause of the very wide range of RT–IQ correlations reported in various studies. The correlation is influenced by two conditions: (1) test complexity and (2) the mean and range of IQ in the subject sample, as the peak of the complexity function shifts to longer RTs as the mean IQ declines. Therefore, the significant RT–IQ correlations fall within a relatively narrow range of task complexity for various groups selected from different regions of the whole spectrum of ability in the population. Hence, when it comes to measuring general intelligence by means of RT, there is probably no possibility of finding any single RT task with a level of task complexity that is optimally applicable to different samples that range widely in ability. The average RT–IQ correlation in the general population on any single task, therefore, represents an average of mostly suboptimal complexity levels (hence lower RT–IQ correlations) for most of the ability strata within the whole population.


Figure 9.14. The RT–IQ correlation plotted as a function of mean RT for 105 undergraduates.

(From data in Lindley et al., 1995.) Copyright © 1995

The optimum level of task complexity for the IQ–RT correlation is hypothesized to occur near the RT threshold between error-free responses and error responses. This is the point on the complexity continuum beyond which RT becomes less correlated (negatively) with IQ and errors become increasingly correlated (negatively) with IQ.

This hypothesis of a trade-off between RT and errors in the RT–IQ correlation and the Errors–IQ correlation was tested in a study expressly designed for this purpose (Schweizer, 1998). In order to study the relationships between errors, RT, and the RT–IQ correlation, the RTs and the number of errors had to be measured entirely on the high side of the complexity function shown in Figure 9.13, resulting in mean RTs ranging between 3 and 7 s; and even then the error rates averaged only 16 percent. Three sets of different RT tasks were used (numbers ordering, figures ordering, mental arithmetic). In each set, the task complexity was experimentally controlled as the independent variable to produce three distinct levels of complexity, determined by the number of homogeneous mental operations required to make a correct response. IQ was measured as the averaged scores on the Wechsler test (WAIS-R); subjects were 76 university students (mean IQ = 120.4, SD = 9.6).

Figure 9.15 shows the results (averaged over the three different RT tasks) for the hypothesized functional relationships between the key variables. The consistent linearity of the relationships shows that it is possible to devise cognitive tasks that vary unidimensionally in complexity.


Figure 9.15. The relationships between RT, Response Errors, task complexity, and the RT–IQ correlation for homogeneous RT tasks at three levels of experimentally controlled task complexity.

(From data in Schweizer, 1998.) Copyright © 1998

Unfortunately, a graph of the relation between complexity and the Error–IQ correlation is not possible with the given data. The Error–IQ correlations were said to be very small, and only the two largest of the nine possible correlations were reported, both significant (−.24 and −.28, each at p < .05); but they evinced no systematic relationship to task complexity. It would probably require a considerably greater range of complexity and error rate to adequately test the relation between task complexity and the Error–IQ correlation. In typical PTs it is so problematic to measure item complexity that the term is usually used synonymously with item difficulty, measured as the error rate (or percent passing) when all item responses are scored as either right or wrong. Then, of course, the relationship between item difficulty and the Error–IQ correlation is a foregone conclusion. The correlation between item response times and IQ based on right-wrong scoring is typically very low, but this is mainly because there are so many different causes of error responses to test items, except in item sets that have been specially constructed to differ in difficulty along some unitary dimension of complexity. The meaning of complexity in chronometric tasks is discussed further in Chapter 11.

Read full chapter

URL: 

https://www.sciencedirect.com/science/article/pii/B9780080449395500100