# Chapter 17 Area Data and Spatial Autocorrelation

Areal units of observation are very often used when simultaneous observations are aggregated within non-overlapping boundaries. The boundaries may be those of administrative entities, and may be related to underlying spatial processes, such as commuting flows, but are usually arbitrary. If they do not match the underlying and unobserved spatial processes in one or more variables of interest, proximate areal units will contain parts of the underlying processes, engendering spatial autocorrelation. This is at least in part because the aggregated observations are driven by autocorrelated factors which may or may not themselves have been observed.

By the support of data we mean the physical size (length, area, volume) associated with an individual observational unit (measurement). It is possible to represent the support of areal data by a point, despite the fact that the data have polygonal support. The centroid of the polygon may be taken as a representative point, or the centroid of the largest polygon in a multi-polygon object. When data with intrinsic point support are treated as areal data, the change of support goes the other way, from the known point to a non-overlapping tessellation such as a Voronoi diagram (also known as a Dirichlet tessellation or Thiessen polygons), often derived through a Delaunay triangulation on the Euclidean plane (projected coordinates). Here, different metrics may also be chosen, or distances measured on a network rather than on the plane. There is also a literature using weighted Voronoi diagrams in local spatial analysis (see for example Boots and Okabe 2007; Okabe et al. 2008; She et al. 2015).
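The change of support from points to a tessellation can be sketched using the GEOS interface in **sf**; the points here are invented purely for illustration:

```r
library(sf)
# hypothetical toy points in projected coordinates
set.seed(1)
pts <- st_as_sf(data.frame(x = runif(5, 0, 100), y = runif(5, 0, 100)),
                coords = c("x", "y"))
# st_voronoi() takes a single MULTIPOINT geometry and returns a
# GEOMETRYCOLLECTION of Voronoi (Dirichlet/Thiessen) polygons
vor <- st_collection_extract(st_voronoi(st_union(st_geometry(pts))),
                             "POLYGON")
length(vor)  # one polygon per point
```

In practice the polygons would be clipped to the study area boundary before use.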

When the intrinsic support of the data is as points, but the underlying process is between proximate observations rather than driven chiefly by distance, however measured, between observations, the data may be aggregate counts or totals (polling stations, retail turnover) or represent a directly observed characteristic of the observation (opening hours of the polling station). Obviously, the risk of mis-representing the footprint of the underlying spatial processes remains in all of these cases, not least because the observations are taken as encompassing the entirety of the underlying process in the case of tessellation of the whole area of interest. This is distinct from the geostatistical setting, in which observations are rather samples taken using some scheme within the area of interest. It is also partly distinct from the practice of taking areal sample plots within the area of interest but covering only a small proportion of the area, typically used in ecological and environmental research.

This chapter then considers a subset of the methods potentially available for exploring spatial autocorrelation in areal data, or data being handled as areal, where the spatial processes are considered as working through proximity understood in the first instance as contiguity, as a graph linking observations taken as neighbours. This graph is typically undirected and unweighted, but may be directed and/or weighted in certain settings, which then leads to further issues with regard to symmetry. In principle, proximity would be expected to operate symmetrically in space, that is that the influence of \(i\) on \(j\) and of \(j\) on \(i\) based on their relative positions should be equivalent. Edge effects are not considered in standard treatments.

## 17.1 Spatial autocorrelation

When analysing areal data, it has long been recognised that, if present, spatial autocorrelation changes how we may infer, relative to the default position of independent observations. In the presence of spatial autocorrelation, we can predict the values of observation \(i\) from the values observed at \(j \in N_i\), the set of its proximate neighbours. Early results (Moran 1948; Geary 1954) entered research practice gradually, for example in the social sciences (Duncan, Cuzzort, and Duncan 1961). These results were then collated and extended to yield a set of basic tools of analysis (Cliff and Ord 1973; Cliff and Ord 1981).

Cliff and Ord (1973) generalised and extended the expression of the spatial weights matrix representation as part of the framework for establishing the distribution theory for join count, Moran’s \(I\) and Geary’s \(C\) statistics. This development of what have become known as global measures, returning a single value of autocorrelation for the total study area, has been supplemented by local measures returning values for each areal unit (Getis and Ord 1992; Anselin 1995).

## 17.2 Spatial weight matrices

Handling spatial autocorrelation using relationships to neighbours on a graph takes the graph as given, chosen by the analyst. This differs from the geostatistical approach, in which the analyst chooses the binning of the empirical variogram and the function used, and then the way the variogram model is fitted. Both involve a priori choices, but represent the underlying correlation in different ways (Wall 2004). In Bavaud (1998) and work citing his contribution, attempts have been made to place graph-based neighbours in a broader context.

One issue arising in the creation of objects representing neighbourhood relationships is that of no-neighbour areal units (Bivand and Portnov 2004). Islands or units separated by rivers may not be recognised as neighbours when the units have areal support and when using topological relationships such as shared boundaries. In some settings, for example `mrf` (Markov Random Field) terms in `mgcv::gam()` and similar model fitting functions that require undirected connected graphs, a requirement is violated when there are disconnected subgraphs.

No-neighbour observations can also occur when a distance threshold is used between points, where the threshold is smaller than the maximum nearest neighbour distance. Shared boundary contiguities are not affected by using geographical, unprojected coordinates, but all point-based approaches use distance in one way or another, and need to calculate distances in an appropriate way.

The **spdep** package provides an `nb` class for neighbours, a list of length equal to the number of observations, with integer vector components. No-neighbours are encoded as an integer vector with a single element `0L`, and observations with neighbours as sorted integer vectors containing values in `1L:n` pointing to the neighbouring observations. This is a typical row-oriented sparse representation of neighbours. **spdep** provides many ways of constructing `nb` objects, and the representation and construction functions are widely used in other packages.

**spdep** builds on the `nb` representation (undirected or directed graphs) with the `listw` object, a list with three components: an `nb` object, a matching list of numerical weights, and a single element character vector containing the single letter name of the way in which the weights were calculated. The most frequently used approach in the social sciences is calculating weights by row standardization, so that all the non-zero weights for one observation will be the inverse of the cardinality of its set of neighbours (`1/card(nb[[i]])`).
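A minimal sketch of row standardization; the `nb` object here is built by hand for a three-unit chain, purely for illustration:

```r
library(spdep)
# hand-built nb object: units 1-2-3 in a chain (1-2 and 2-3 are neighbours)
nb <- structure(list(2L, c(1L, 3L), 2L), class = "nb",
                region.id = as.character(1:3))
lw <- nb2listw(nb, style = "W")
# unit 2 has two neighbours, so each of its weights is 1/2;
# units 1 and 3 have one neighbour each, with weight 1
lw$weights
```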

We will be using election data from the 2015 Polish Presidential election in this chapter, with 2495 municipalities and Warsaw boroughs, and complete count data from polling stations aggregated to these areal units. The data are an **sf** `sf` object:

```
library(sf)
data(pol_pres15, package="spDataLarge")
head(pol_pres15[, c(1, 4, 6)])
#> Simple feature collection with 6 features and 3 fields
#> geometry type: MULTIPOLYGON
#> dimension: XY
#> bbox: xmin: 235000 ymin: 367000 xmax: 281000 ymax: 413000
#> CRS: +proj=tmerc +lat_0=0 +lon_0=18.99999999999998 +k=0.9993 +x_0=500000 +y_0=-5300000 +ellps=GRS80 +towgs84=0,0,0 +units=m +no_defs
#> TERYT name types geometry
#> 1 020101 BOLESŁAWIEC Urban MULTIPOLYGON (((261089 3855...
#> 2 020102 BOLESŁAWIEC Rural MULTIPOLYGON (((254150 3837...
#> 3 020103 GROMADKA Rural MULTIPOLYGON (((275346 3846...
#> 4 020104 NOWOGRODZIEC Urban/rural MULTIPOLYGON (((251770 3770...
#> 5 020105 OSIECZNICA Rural MULTIPOLYGON (((263424 4060...
#> 6 020106 WARTA BOLESŁAWIECKA Rural MULTIPOLYGON (((267031 3870...
```

```
library(tmap)
#> Warning: replacing previous import 'sf::st_make_valid' by
#> 'lwgeom::st_make_valid' when loading 'tmap'
tm_shape(pol_pres15) + tm_fill("types")
#> Warning: The shape pol_pres15 is invalid. See sf::st_is_valid
```

Between early 2002 and April 2019, **spdep** contained functions for constructing and handling neighbour and spatial weights objects, tests for spatial autocorrelation, and model fitting functions. The latter have been split out into **spatialreg**, and will be discussed in the next chapter. **spdep** now accommodates objects represented using **sf** classes and **sp** classes directly, going beyond the explorations made in this vignette.

```
library(spdep)
#> Loading required package: sp
#> Loading required package: spData
#>
#> Attaching package: 'spData'
#> The following objects are masked _by_ '.GlobalEnv':
#>
#> x, y
```

### 17.2.1 Contiguous neighbours

The `poly2nb()` function in **spdep** takes the boundary points making up the polygon boundaries in the object passed as the `pl=` argument, and for each observation checks whether at least one (`queen=TRUE`, default), or at least two (rook, `queen=FALSE`) points are within `snap=` distance units of each other. The distances are planar in the raw coordinate units, ignoring geographical projections. Once the required number of sufficiently close points is found, the search is stopped.

```
args(poly2nb)
#> function (pl, row.names = NULL, snap = sqrt(.Machine$double.eps),
#> queen = TRUE, useC = TRUE, foundInBox = NULL)
#> NULL
```

The geometry column should be either of class `"sfc_MULTIPOLYGON"` or `"sfc_POLYGON"`, not `"sfc_GEOMETRY"`; if need be, cast to `"MULTIPOLYGON"` if there are mixed `"POLYGON"` and `"MULTIPOLYGON"` objects.

```
class(st_geometry(pol_pres15))
#> [1] "sfc_MULTIPOLYGON" "sfc"
```

```
table(sapply(st_geometry(pol_pres15), function(x) class(x)[2]))
#>
#> MULTIPOLYGON
#> 2495
```

```
system.time(nb_q <- poly2nb(pol_pres15, queen=TRUE))
#> user system elapsed
#> 1.43 0.00 1.44
```

```
nb_q
#> Neighbour list object:
#> Number of regions: 2495
#> Number of nonzero links: 14242
#> Percentage nonzero weights: 0.229
#> Average number of links: 5.71
```

The **rgeos** `gUnarySTRtreeQuery()` function also reduces polygons to their boundary points before searching for overlapping “envelopes”, the bounding boxes of the derived multi-point objects. Here, pre-finding candidate contiguous neighbours only shaves off a little run time, but may be helpful with larger objects.

```
system.time({
fB <- rgeos::gUnarySTRtreeQuery(as(pol_pres15, "Spatial"))
nb_q1 <- poly2nb(pol_pres15, queen=TRUE, foundInBox=fB)
})
#> user system elapsed
#> 1.07 0.00 1.07
```

```
all.equal(nb_q, nb_q1, check.attributes=FALSE)
#> [1] TRUE
```

We might consider using the contiguity of the polygon boundaries, requiring the fifth (boundary-boundary) element of the DE-9IM pattern to be `T` while setting the first (interior-interior) element to `F`, to find Queen neighbours using GEOS functions in **sf**, but it takes longer than simply treating the boundaries as points:

```
st_queen <- function(a, b = a) st_relate(a, b, pattern = "F***T****")
as.nb.sgbp <- function(x, ...) {
attrs <- attributes(x)
x <- lapply(x, function(i) { if(length(i) == 0L) 0L else i } )
attributes(x) <- attrs
class(x) <- "nb"
x
}
system.time(nb_sf_q <- as.nb.sgbp(st_queen(pol_pres15)))
#> user system elapsed
#> 2.49 0.00 2.49
```

```
all.equal(nb_q, nb_sf_q, check.attributes=FALSE)
#> [1] TRUE
```

The same effect as using `rgeos::gUnarySTRtreeQuery()` can be obtained in **sf**, without the overhead of converting to the equivalent **sp** class and making the equivalent tree search with **rgeos**, by finding intersecting bounding boxes, then removing self-intersections and duplicate intersections (contiguities are by definition symmetric: if `i` is a neighbour of `j`, then `j` is a neighbour of `i`).

```
system.time({
fB1 <- st_intersects(st_as_sfc(lapply(st_geometry(pol_pres15), function(x) {
st_as_sfc(st_bbox(x))[[1]]
})))
fB1a <- lapply(seq_along(fB1), function(i) fB1[[i]][fB1[[i]] > i])
fB1a <- fB1a[-length(fB1a)]
nb_sf_q1 <- poly2nb(pol_pres15, queen=TRUE, foundInBox=fB1a)
})
#> user system elapsed
#> 1 0 1
```

```
all.equal(nb_q, nb_sf_q1, check.attributes=FALSE)
#> [1] TRUE
```

Much of the work involved in finding contiguous neighbours is spent on finding candidate neighbours with intersecting bounding boxes. Note that `nb` objects record both directions of each symmetric neighbour relationship, because these objects admit asymmetric relationships as well, but these duplications are not needed during object construction.
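That both directions of each link are stored can be checked with `is.symmetric.nb()`; a small toy grid from `cell2nb()` serves as illustration:

```r
library(spdep)
# rook contiguities on a 3x3 grid: i appears in nb[[j]] iff j appears in nb[[i]]
nb <- cell2nb(3, 3)
is.symmetric.nb(nb)
```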

Most of the **spdep** functions for constructing neighbour objects take a `row.names=` argument, the value of which is stored as a `region.id` attribute. If not given, the values are taken from `row.names()` of the first argument. These can be used to check that the neighbours object is in the same order as the data. If `nb` objects are subsetted, the indices change to continue to be within `1:length(subsetted_nb)`, but the `region.id` attribute values point back to the object from which it was constructed.
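A sketch of this renumbering, using a toy grid from `cell2nb()` and the `subset()` method for `nb` objects:

```r
library(spdep)
nb <- cell2nb(3, 3)
# keep only the first six cells of the 3x3 grid
nb_sub <- subset(nb, rep(c(TRUE, FALSE), c(6, 3)))
length(nb_sub)             # neighbour indices now run within 1:6
attr(nb_sub, "region.id")  # but the region.id values still name the original cells
```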

We can also check that this undirected graph is connected using the `n.comp.nb()` function:

```
n.comp.nb(nb_q)$nc
#> [1] 1
```

Neighbour objects may be exported and imported in GAL format for exchange with other software, using `write.nb.gal()` and `read.gal()`:

```
tf <- tempfile(fileext=".gal")
write.nb.gal(nb_q, tf)
```

Using **reticulate**, it is possible to interoperate with the PySAL family of Python packages, first **libpysal** providing the basic weights handling infrastructure. As we can see, the percentage of non-zero neighbours is the same in both software systems.

```
library(reticulate)
use_python(python='/usr/bin/python3')
np <- import("numpy")
libpysal <- import("libpysal")
nb_gal_ps <- libpysal$io$open(tf)$read()
nb_gal_ps$pct_nonzero
#> [1] 0.229
```

### 17.2.2 Graph-based neighbours

If areal units are an appropriate representation, but only points have been observed, contiguity relationships may be approximated using graph-based neighbours. In this case, the imputed boundaries tessellate the plane such that points closer to one observation than any other fall within its polygon. The simplest form is by using triangulation, here using the `deldir()` function in the **deldir** package. Because the function returns from and to identifiers, it is easy to construct a long representation of a `listw` object, as used in the S-Plus SpatialStats module, and the `sn2listw()` function is used internally to construct an `nb` object (ragged wide representation). Alternatives often fail to return sufficient information to permit the neighbours to be identified.

```
args(tri2nb)
#> function (coords, row.names = NULL)
#> NULL
```

The `soi.graph()` function takes triangulated neighbours and prunes off neighbour relationships represented by unusually long edges, especially around the convex hull, but may render the output object asymmetric. Other graph-based approaches include `relativeneigh()` and `gabrielneigh()`.

```
args(soi.graph)
#> function (tri.nb, coords, quadsegs = 10)
#> NULL
```

The output of these functions is then converted to the `nb` representation using `graph2nb()`, with the possible use of the `sym=` argument to coerce to symmetry.

```
args(graph2nb)
#> function (gob, row.names = NULL, sym = FALSE)
#> NULL
```

We take the centroids of the largest component polygon for each observation as the point representation; population-weighted centroids might have been a better choice if they were available:

```
coords <- st_centroid(st_geometry(pol_pres15), of_largest_polygon=TRUE)
suppressMessages(nb_tri <- tri2nb(coords))
nb_tri
#> Neighbour list object:
#> Number of regions: 2495
#> Number of nonzero links: 14930
#> Percentage nonzero weights: 0.24
#> Average number of links: 5.98
```

The average number of neighbours is similar to the Queen boundary contiguity case, but if we look at the distribution of edge lengths using `nbdists()`, we can see that although the upper quartile is about 15 km, the maximum is almost 300 km, an edge along much of one side of the convex hull. The short minimum distance is also of interest, as many centroids of urban municipalities are very close to the centroids of their surrounding rural counterparts.

```
summary(unlist(nbdists(nb_tri, coords)))
#> Min. 1st Qu. Median Mean 3rd Qu. Max.
#> 247 9847 12151 13485 14994 296974
```

Using the `card()` function to return a vector of neighbour counts by observation, we see that there are relatively many such surrounded urban municipalities with only one neighbour.

```
table(pol_pres15$types, card(nb_q))
#>
#> 1 2 3 4 5 6 7 8 9 10 11 12 13
#> Rural 0 5 41 183 413 465 271 105 55 16 8 0 1
#> Urban 70 58 55 34 22 25 15 13 6 0 3 1 1
#> Urban/rural 2 2 15 36 109 173 162 80 23 8 1 0 0
#> Warsaw Borough 0 0 0 2 3 6 4 3 0 0 0 0 0
```

For obvious reasons, triangulated units seldom have few neighbours, not only because they are located on the convex hull:

```
table(pol_pres15$types, card(nb_tri))
#>
#> 3 4 5 6 7 8 9 10 11
#> Rural 2 57 455 632 328 75 12 2 0
#> Urban 1 41 118 93 33 11 5 1 0
#> Urban/rural 0 22 116 251 158 52 10 0 2
#> Warsaw Borough 0 1 5 6 5 1 0 0 0
```

Triangulated neighbours also yield a connected graph:

```
n.comp.nb(nb_tri)$nc
#> [1] 1
```

The sphere of influence graph trims a neighbour object such as `nb_tri` to remove edges that seem long in relation to typical neighbours (Avis and Horton 1985).

```
nb_soi <- graph2nb(soi.graph(nb_tri, coords))
nb_soi
#> Neighbour list object:
#> Number of regions: 2495
#> Number of nonzero links: 12792
#> Percentage nonzero weights: 0.205
#> Average number of links: 5.13
```

Unpicking the triangulated neighbours does however remove the connected character of the underlying graph:

```
n_comp <- n.comp.nb(nb_soi)
n_comp$nc
#> [1] 16
```

The SoI algorithm has stripped out longer edges leading to urban and rural municipalities where their centroids are very close to each other, giving 15 pairs of neighbours unconnected to the main graph:

```
table(n_comp$comp.id)
#>
#> 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
#> 2465 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
```

The largest length edges along the convex hull have been removed, but “holes” have appeared where the unconnected pairs of neighbours are found. The differences between `nb_tri` and `nb_soi` are shown in orange in the figure.

```
summary(unlist(nbdists(nb_soi, coords)))
#> Min. 1st Qu. Median Mean 3rd Qu. Max.
#> 247 9507 11651 11880 14142 30005
```

```
opar <- par(mar=c(0,0,0,0)+0.5)
plot(st_geometry(pol_pres15), border="grey", lwd=0.5)
plot(nb_soi, coords=st_coordinates(coords), add=TRUE, points=FALSE, lwd=0.5)
plot(diffnb(nb_tri, nb_soi), coords=st_coordinates(coords), col="orange",
add=TRUE, points=FALSE, lwd=0.5)
par(opar)
```

### 17.2.3 Distance-based neighbours

Distance-based neighbours can be constructed using `dnearneigh()`, with a distance band with lower `d1=` and upper `d2=` bounds controlled by the `bounds=` argument. If unprojected coordinates are used, either specified in the coordinates object `x` or with `x` as a two column matrix and `longlat=TRUE`, great circle distances in km will be calculated assuming the WGS84 reference ellipsoid.

```
args(dnearneigh)
#> function (x, d1, d2, row.names = NULL, longlat = NULL, bounds = c("GT",
#> "LE"))
#> NULL
```

The `knearneigh()` function for \(k\)-nearest neighbours returns a `knn` object, converted to an `nb` object using `knn2nb()`. It can also use great circle distances, not least because nearest neighbours may differ when unprojected coordinates are treated as planar. `k=` should be a small number. For projected coordinates, the **RANN** package is used to compute nearest neighbours more efficiently. Note that `nb` objects constructed in this way are most unlikely to be symmetric, hence `knn2nb()` has a `sym=` argument to permit the imposition of symmetry, which will mean that all units have at least `k=` neighbours, not that all units will have exactly `k=` neighbours.

```
args(knearneigh)
#> function (x, k = 1, longlat = NULL, RANN = TRUE)
#> NULL
```

```
args(knn2nb)
#> function (knn, row.names = NULL, sym = FALSE)
#> NULL
```

The `nbdists()` function returns the length of neighbour relationship edges in the units of the coordinates if the coordinates are projected, and in km otherwise.

```
args(nbdists)
#> function (nb, coords, longlat = NULL)
#> NULL
```

In order to set the upper limit for distance bands, one may first find the maximum first nearest neighbour distance, using `unlist()` to remove the list structure of the returned object.

```
k1 <- knn2nb(knearneigh(coords))
k1dists <- unlist(nbdists(k1, coords))
summary(k1dists)
#> Min. 1st Qu. Median Mean 3rd Qu. Max.
#> 247 6663 8538 8275 10124 17979
```

Here the largest first nearest neighbour distance is just under 18 km, so using this as the upper threshold gives certainty that all units will have at least one neighbour:

```
nb_d18 <- dnearneigh(coords, 0, 18000)
nb_d18
#> Neighbour list object:
#> Number of regions: 2495
#> Number of nonzero links: 20358
#> Percentage nonzero weights: 0.327
#> Average number of links: 8.16
```

However, even though there are no no-neighbour observations (their presence is reported by the print method for `nb` objects), the graph is not connected, as a pair of observations are each other's only neighbours.

```
n_comp <- n.comp.nb(nb_d18)
n_comp$nc
#> [1] 2
```

```
table(n_comp$comp.id)
#>
#> 1 2
#> 2493 2
```

Adding 300 m to the threshold gives us a neighbour object with no no-neighbour units, and all units can be reached from all others across the graph.

```
nb_d183 <- dnearneigh(coords, 0, 18300)
nb_d183
#> Neighbour list object:
#> Number of regions: 2495
#> Number of nonzero links: 21086
#> Percentage nonzero weights: 0.339
#> Average number of links: 8.45
```

```
n_comp <- n.comp.nb(nb_d183)
n_comp$nc
#> [1] 1
```

One characteristic of distance-based neighbours is that units in more densely settled areas, which are smaller in area, have more neighbours (Warsaw boroughs are much smaller on average, but have almost 30 neighbours). Having many neighbours smoothes the neighbour relationship across more of them:

```
table(pol_pres15$types, card(nb_d183))
#>
#> 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
#> Rural 5 18 51 87 117 156 193 227 222 166 114 62 43 29 21
#> Urban 3 6 9 12 28 27 29 21 35 30 20 12 17 11 5
#> Urban/rural 10 30 28 47 55 62 94 89 60 45 35 10 15 7 6
#> Warsaw Borough 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
#>
#> 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30
#> Rural 12 13 9 7 7 0 1 0 1 2 0 0 0 0 0
#> Urban 3 8 5 12 5 1 0 0 2 0 1 1 0 0 0
#> Urban/rural 3 7 3 1 1 1 0 2 0 0 0 0 0 0 0
#> Warsaw Borough 0 0 0 0 0 0 0 0 1 2 2 5 3 1 2
#>
#> 31
#> Rural 0
#> Urban 0
#> Urban/rural 0
#> Warsaw Borough 2
```

```
arha <- units::set_units(st_area(pol_pres15), hectare)
aggregate(arha, list(pol_pres15$types), mean)
#> Group.1 x
#> 1 Rural 12500 []
#> 2 Urban 4497 []
#> 3 Urban/rural 16850 []
#> 4 Warsaw Borough 2886 []
```

For use later, we also construct a neighbour object with no-neighbour units, using a threshold of 16 km:

```
nb_d16 <- dnearneigh(coords, 0, 16000)
nb_d16
#> Neighbour list object:
#> Number of regions: 2495
#> Number of nonzero links: 15850
#> Percentage nonzero weights: 0.255
#> Average number of links: 6.35
#> 7 regions with no links:
#> 569 1371 1522 2374 2385 2473 2474
```

It is possible to control the numbers of neighbours directly using \(k\)-nearest neighbours, either accepting asymmetric neighbours or imposing symmetry:

```
knn_k6 <- knearneigh(coords, k=6)
knn2nb(knn_k6)
#> Neighbour list object:
#> Number of regions: 2495
#> Number of nonzero links: 14970
#> Percentage nonzero weights: 0.24
#> Average number of links: 6
#> Non-symmetric neighbours list
```

```
nb_k6s <- knn2nb(knn_k6, sym=TRUE)
nb_k6s
#> Neighbour list object:
#> Number of regions: 2495
#> Number of nonzero links: 16810
#> Percentage nonzero weights: 0.27
#> Average number of links: 6.74
```

Here the size of `k=` is sufficient to ensure connectedness, although the graph is not planar, as edges cross at locations other than nodes, which is not the case for contiguous or graph-based neighbours.

```
n_comp <- n.comp.nb(nb_k6s)
n_comp$nc
#> [1] 1
```

### 17.2.4 Weights specification

Once neighbour objects are available, further choices need to be made in specifying the weights objects. The `nb2listw()` function is used to create a `listw` weights object containing an `nb` object, a matching list of weights vectors, and a style specification. Because handling no-neighbour observations now begins to matter, the `zero.policy=` argument is introduced. By default, this is `FALSE`, indicating that no-neighbour observations will cause an error, as the spatially lagged value for an observation with no neighbours is not available. By convention, zero is substituted for the lagged value, as the cross product of a vector of zero-valued weights and a data vector, hence the name of `zero.policy`.
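The zero-substitution convention can be sketched with a hand-built `nb` object in which the third unit has no neighbours (encoded as `0L`); the data values are invented:

```r
library(spdep)
# unit 3 is an 'island': units 1 and 2 are each other's only neighbours
nb <- structure(list(2L, 1L, 0L), class = "nb",
                region.id = as.character(1:3))
lw <- nb2listw(nb, style = "B", zero.policy = TRUE)
x <- c(10, 20, 30)
# the lagged value for the no-neighbour unit is zero by convention
lag.listw(lw, x, zero.policy = TRUE)
```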

```
args(nb2listw)
#> function (neighbours, glist = NULL, style = "W", zero.policy = NULL)
#> NULL
```

We will be using the helper function `spweights.constants()` below to show some consequences of varying style choices. It returns constants for a `listw` object: \(n\) is the number of observations, `n1` to `n3` are \(n-1, n-2, n-3\), `nn` is \(n^2\), and \(S_0\), \(S_1\) and \(S_2\) are constants, \(S_0\) being the sum of the weights. There is a full discussion of the constants in Bivand and Wong (2018).

```
args(spweights.constants)
#> function (listw, zero.policy = NULL, adjust.n = TRUE)
#> NULL
```

The `"B"` binary style gives a weight of unity to each neighbour relationship, and typically upweights units with no boundaries on the edge of the study area.

```
lw_q_B <- nb2listw(nb_q, style="B")
unlist(spweights.constants(lw_q_B))
#> n n1 n2 n3 nn S0 S1 S2
#> 2495 2494 2493 2492 6225025 14242 28484 357280
```

The `"W"` row-standardized style upweights units around the edge of the study area that necessarily have fewer neighbours. This style first gives a weight of unity to each neighbour relationship, then divides these weights by the per unit sums of weights. Naturally this leads to division by zero where there are no neighbours, a not-a-number result, unless the chosen policy is to permit no-neighbour observations. We can see that \(S_0\) is now equal to \(n\).

```
lw_q_W <- nb2listw(nb_q, style="W")
unlist(spweights.constants(lw_q_W))[c(1,6:8)]
#> n S0 S1 S2
#> 2495 2495 958 10406
```

The `"S"` style attempts to balance the tendencies of the `"B"` and `"W"` styles to downweight or upweight units on the edge of the study area (Tiefelsdorf, Griffith, and Boots 1999). Other styles are variants of `"B"`: `"C"` sets the weights such that they sum (\(S_0\)) to \(n\), and `"U"` sets them to sum to unity. The main change in the `"S"` style is to \(S_1\) for this configuration of contiguities.

```
lw_q_S <- nb2listw(nb_q, style="S")
unlist(spweights.constants(lw_q_S))[c(1,6:8)]
#> n S0 S1 S2
#> 2495 2495 886 10646
```

Inverse distance weights are used in a number of scientific fields. Some use dense inverse distance matrices, but many of the inverse distances are close to zero, so have little practical contribution, especially as the spatial process matrix is itself dense. Inverse distance weights may be constructed by taking the lengths of edges, changing units to avoid most weights being too large or small (here from m to km), taking the inverse, and passing the result through the `glist=` argument to `nb2listw()`:

```
gwts <- lapply(nbdists(nb_d183, coords), function(x) 1/(x/1000))
lw_d183_idw_B <- nb2listw(nb_d183, glist=gwts, style="B")
unlist(spweights.constants(lw_d183_idw_B))[c(1,6:8)]
#> n S0 S1 S2
#> 2495 1841 534 7265
```

No-neighbour handling is by default to prevent the construction of a weights object, making the analyst take a position on how to proceed.

```
try(lw_d16_B <- nb2listw(nb_d16, style="B"))
#> Error in nb2listw(nb_d16, style = "B") : Empty neighbour sets found
```

Use can be made of the `zero.policy=` argument to many functions used with `nb` and `listw` objects.

```
lw_d16_B <- nb2listw(nb_d16, style="B", zero.policy=TRUE)
unlist(spweights.constants(lw_d16_B, zero.policy=TRUE))[c(1,6:8)]
#> n S0 S1 S2
#> 2488 15850 31700 506480
```

It is also possible to store the chosen zero policy for the running session using `set.ZeroPolicyOption()`; the function returns the value it held before being set. If the `zero.policy=` argument takes its default value of `NULL`, the stored option value is used, initialized to `FALSE` at the beginning of the session.

```
zO <- set.ZeroPolicyOption(TRUE)
lw_d16_B <- nb2listw(nb_d16, style="B")
unlist(spweights.constants(lw_d16_B))[c(1,6:8)]
#> n S0 S1 S2
#> 2488 15850 31700 506480
invisible(set.ZeroPolicyOption(zO))
```

Note that the `adjust.n=` argument is set to `TRUE` by default, subtracting the count of no-neighbour observations from the observation count, so \(n\) is smaller, with possible consequences for inference. The complete count can be retrieved by changing the argument.

```
unlist(spweights.constants(lw_d16_B, zero.policy=TRUE, adjust.n=FALSE))[c(1,6:8)]
#> n S0 S1 S2
#> 2495 15850 31700 506480
```

## 17.3 Measures of spatial autocorrelation

Measures of spatial autocorrelation unfortunately pick up other mis-specifications in the way that we model data (Waller and Gotway 2004; McMillen 2003). For reference, Moran’s \(I\) is given as (Cliff and Ord 1981, 17):

\[
I = \frac{n \sum_{(2)} w_{ij} z_i z_j}{S_0 \sum_{i=1}^{n} z_i^2}
\] where \(x_i, i=1, \ldots, n\) are \(n\) observations on the numeric variable of interest, \(z_i = x_i - \bar{x}\), \(\bar{x} = \sum_{i=1}^{n} x_i / n\), \(\sum_{(2)} = \sum_{i=1}^{n} \sum_{j=1, j \neq i}^{n}\), \(w_{ij}\) are the spatial weights, and \(S_0 = \sum_{(2)} w_{ij}\). First we test a random variable using the Moran test, here under the normality assumption (argument `randomisation=FALSE`, default `TRUE`):
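The formula can be checked directly against `spdep::moran()` on a toy grid; the data vector here is invented:

```r
library(spdep)
nb <- cell2nb(3, 3)                 # rook contiguities on a 3x3 grid
lw <- nb2listw(nb, style = "B")
W <- listw2mat(lw)                  # dense weights matrix, zero diagonal
x <- c(1, 2, 1, 3, 5, 3, 2, 4, 2)
z <- x - mean(x)
n <- length(x); S0 <- sum(W)
# Moran's I computed term by term from the definition
I_formula <- (n / S0) * sum(W * outer(z, z)) / sum(z^2)
I_spdep <- moran(x, lw, n, S0)$I
all.equal(I_formula, I_spdep)
```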

```
set.seed(1)
x <- rnorm(nrow(pol_pres15))
mt <- moran.test(x, lw_q_B, randomisation=FALSE)
```

Inference is made on the statistic \(Z(I) = \frac{I - E(I)}{\sqrt{\mathrm{Var}(I)}}\), the z-value compared with the Normal distribution for \(E(I)\) and \(\mathrm{Var}(I)\) under the chosen assumptions; this `x` does not show spatial autocorrelation with these spatial weights:

```
glance_htest <- function(ht) c(ht$estimate, "Std deviate"=unname(ht$statistic), "p.value="=unname(ht$p.value))
broom_ok <- FALSE
if (names(broom::tidy(mt))[1] == "Moran I statistic") broom_ok <- TRUE
if (broom_ok) broom::tidy(mt)[,1:5] else glance_htest(mt)
#> Moran I statistic Expectation Variance Std deviate
#> -0.004772 -0.000401 0.000140 -0.369320
#> p.value=
#> 0.644056
```

The test however fails to detect a missing trend in the data as a missing variable problem, finding spatial autocorrelation instead:

```
beta <- 0.5e-02
t <- st_coordinates(coords)[,1]/1000
x_t <- x + beta*t
mt <- moran.test(x_t, lw_q_B, randomisation=FALSE)
if (broom_ok) broom::tidy(mt)[,1:5] else glance_htest(mt)
#> Moran I statistic Expectation Variance Std deviate
#> 3.56e-01 -4.01e-04 1.40e-04 3.01e+01
#> p.value=
#> 2.63e-199
```

If we test the residuals of a linear model including the trend, the apparent spatial autocorrelation disappears:

```
lmt <- lm.morantest(lm(x_t ~ t), lw_q_B)
if (broom_ok) broom::tidy(lmt)[,1:5] else glance_htest(lmt)
#> Observed Moran I Expectation Variance Std deviate
#> -0.004777 -0.000789 0.000140 -0.337306
#> p.value=
#> 0.632057
```

A comparison of implementations of measures of spatial autocorrelation shows that a wide range of measures is available in R in a number of packages, chiefly in the **spdep** package, and that differences from other implementations can be attributed to design decisions (Bivand and Wong 2018). The **spdep** package also includes the only implementations of exact and Saddlepoint approximations to global and local Moran’s I for regression residuals (Tiefelsdorf 2002; Bivand, Müller, and Reder 2009).

### 17.3.1 Global measures

We will begin by examining join count statistics, where `joincount.test()` takes a `factor` vector of values `fx=` and a `listw` object, and returns a list of `htest` (hypothesis test) objects defined in the **stats** package, one `htest` object for each level of the `fx=` argument. The observed counts are of neighbours with the same factor levels, known as same-colour joins.

```
args(joincount.test)
#> function (fx, listw, zero.policy = NULL, alternative = "greater",
#> sampling = "nonfree", spChk = NULL, adjust.n = TRUE)
#> NULL
```

The function takes an `alternative=` argument for hypothesis testing, a `sampling=` argument showing the basis for the construction of the variance of the measure, where the default `"nonfree"` choice corresponds to analytical permutation; the `spChk=` argument is retained for backward compatibility. For reference, the counts of factor levels for the type of municipality or Warsaw borough are:

```
table(pol_pres15$types)
#>
#> Rural Urban Urban/rural Warsaw Borough
#> 1563 303 611 18
```

Since there are four levels, we re-arrange the list of `htest` objects to give a matrix of estimated results. The observed same-colour join counts are tabulated with their expectations based on the counts of levels of the input factor, so that few joins would be expected between, for example, Warsaw boroughs, because there are very few of them. The variance calculation uses the underlying constants of the chosen `listw` object and the counts of levels of the input factor. The z-value is obtained in the usual way by dividing the difference between the observed and expected join counts by the square root of the variance.
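The same-colour join count and its z-value can be sketched with NumPy on an invented six-unit ring with two clustered colours; here the reference distribution is simulated by permutation, whereas `joincount.test()` uses analytical nonfree-sampling moments:

```python
import numpy as np

def same_colour_joins(fx, W):
    # same-colour joins: (1/2) * sum_{i!=j} w_ij * 1[fx_i == fx_j]
    same = (fx[:, None] == fx[None, :]) & ~np.eye(len(fx), dtype=bool)
    return 0.5 * (W * same).sum()

# invented example: ring of six units, binary contiguity, two clustered colours
n = 6
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0
fx = np.array([0, 0, 0, 1, 1, 1])

obs = same_colour_joins(fx, W)               # 4 of the 6 joins are same-colour
rng = np.random.default_rng(1)
sims = np.array([same_colour_joins(rng.permutation(fx), W) for _ in range(999)])
z = (obs - sims.mean()) / sims.std()         # observed minus expected over std. dev.
print(obs, z)
```

With the colours clustered on the ring, the observed count exceeds the permutation expectation, giving a positive z-value.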

The global tests in **spdep** return `htest` objects with a `print()` method in the **stats** package. To save space here, we’ll use a glance function if the `broom::tidy()` method has not been updated to use estimate names.

```
jcl <- joincount.test(pol_pres15$types, listw=lw_q_B)
if (broom_ok) {
nm <- tibble::enframe(sapply(jcl, function(t) {
nm <- substring(names(t$statistic), 18)
paste0(nm, ":", nm)
}))
mat <- cbind(nm, do.call("rbind", (lapply(jcl, broom::tidy))))
names(mat)[3] <- "Joincount"
names(mat)[6] <- "Std.deviate"
names(mat)[2] <- ""
mat[,2:6]
} else {
mat <- t(sapply(jcl, glance_htest))
colnames(mat)[1] <- "Joincount"
rownames(mat) <- sapply(jcl, function(t) {
nm <- substring(names(t$statistic), 18)
paste0(nm, ":", nm)
})
mat
}
#> Joincount Expectation Variance Std deviate
#> Rural:Rural 3087 2793.92 1126.534 8.732
#> Urban:Urban 110 104.72 93.299 0.547
#> Urban/rural:Urban/rural 656 426.53 331.759 12.599
#> Warsaw Borough:Warsaw Borough 41 0.35 0.347 68.965
#> p.value=
#> Rural:Rural 1.25e-18
#> Urban:Urban 2.92e-01
#> Urban/rural:Urban/rural 1.07e-36
#> Warsaw Borough:Warsaw Borough 0.00e+00
```

The join count test was subsequently adapted for multi-colour join counts (Upton and Fingleton 1985). The implementation as `joincount.multi()` in **spdep** returns a table based on nonfree sampling, and does not report p-values.

```
args(joincount.multi)
#> function (fx, listw, zero.policy = FALSE, spChk = NULL, adjust.n = TRUE)
#> NULL
```

The first four lines of results for same-colour join counts are the same as those from `joincount.test()` above, also under nonfree sampling. The remaining lines report results for multi-colour join counts on the same basis, and finally the “Jtot” statistic summarising the totality of join counts for the variable and `listw` object chosen.

```
joincount.multi(pol_pres15$types, listw=lw_q_B)
#> Joincount Expected Variance z-value
#> Rural:Rural 3087.000 2793.920 1126.534 8.73
#> Urban:Urban 110.000 104.719 93.299 0.55
#> Urban/rural:Urban/rural 656.000 426.526 331.759 12.60
#> Warsaw Borough:Warsaw Borough 41.000 0.350 0.347 68.96
#> Urban:Rural 668.000 1083.941 708.209 -15.63
#> Urban/rural:Rural 2359.000 2185.769 1267.131 4.87
#> Urban/rural:Urban 171.000 423.729 352.190 -13.47
#> Warsaw Borough:Rural 12.000 64.393 46.460 -7.69
#> Warsaw Borough:Urban 9.000 12.483 11.758 -1.02
#> Warsaw Borough:Urban/rural 8.000 25.172 22.354 -3.63
#> Jtot 3227.000 3795.486 1496.398 -14.70
```

So far, we have used binary weights, so the sum of join counts multiplied by the weight on that join remains integer. If we change to row standardised weights, where the weights are not unity in all cases, the counts, expectations and variances change, but there are few major changes in the z-values.

```
joincount.multi(pol_pres15$types, listw=lw_q_W)
#> Joincount Expected Variance z-value
#> Rural:Rural 521.6476 489.4559 22.8856 6.73
#> Urban:Urban 20.9023 18.3452 2.8822 1.51
#> Urban/rural:Urban/rural 106.1765 74.7213 9.3498 10.29
#> Warsaw Borough:Warsaw Borough 6.7363 0.0613 0.0116 61.89
#> Urban:Rural 165.2283 189.8913 18.2987 -5.77
#> Urban/rural:Rural 389.6002 382.9162 36.1501 1.11
#> Urban/rural:Urban 32.5048 74.2314 10.6416 -12.79
#> Warsaw Borough:Rural 1.8554 11.2807 1.1075 -8.96
#> Warsaw Borough:Urban 1.6171 2.1868 0.3775 -0.93
#> Warsaw Borough:Urban/rural 1.2315 4.4098 0.6810 -3.85
#> Jtot 592.0373 664.9162 43.1245 -11.10
```

Using an inverse distance based `listw` object does, however, change the z-values markedly, because closer centroids are upweighted relatively strongly:

```
joincount.multi(pol_pres15$types, listw=lw_d183_idw_B)
#> Joincount Expected Variance z-value
#> Rural:Rural 3.46e+02 3.61e+02 4.93e+01 -2.10
#> Urban:Urban 2.90e+01 1.35e+01 2.23e+00 10.39
#> Urban/rural:Urban/rural 4.65e+01 5.51e+01 9.61e+00 -2.79
#> Warsaw Borough:Warsaw Borough 1.68e+01 4.53e-02 6.61e-03 206.38
#> Urban:Rural 2.02e+02 1.40e+02 2.36e+01 12.73
#> Urban/rural:Rural 2.25e+02 2.83e+02 3.59e+01 -9.59
#> Urban/rural:Urban 3.65e+01 5.48e+01 8.86e+00 -6.14
#> Warsaw Borough:Rural 5.65e+00 8.33e+00 1.73e+00 -2.04
#> Warsaw Borough:Urban 9.18e+00 1.61e+00 2.54e-01 15.01
#> Warsaw Borough:Urban/rural 3.27e+00 3.25e+00 5.52e-01 0.02
#> Jtot 4.82e+02 4.91e+02 4.16e+01 -1.38
```

The implementation of Moran’s \(I\) in **spdep** in the `moran.test()` function has similar arguments to those of `joincount.test()`, but `sampling=` is replaced by `randomisation=` to indicate the underlying analytical approach used for calculating the variance of the measure. It is also possible to use ranks rather than numerical values (Cliff and Ord 1981, 46). The `drop.EI2=` argument may be used to reproduce results where the final component of the variance term is omitted.

```
args(moran.test)
#> function (x, listw, randomisation = TRUE, zero.policy = NULL,
#> alternative = "greater", rank = FALSE, na.action = na.fail,
#> spChk = NULL, adjust.n = TRUE, drop.EI2 = FALSE)
#> NULL
```

The default for the `randomisation=` argument is `TRUE`, but here we will simply show that the test under normality is the same as a test of least squares residuals with only the intercept used in the mean model. The spelling of randomisation is that of Cliff and Ord (1973).

```
mt <- moran.test(pol_pres15$I_turnout, listw=lw_q_B, randomisation=FALSE)
if (broom_ok) broom::tidy(mt)[,1:5] else glance_htest(mt)
#> Moran I statistic Expectation Variance Std deviate
#> 0.691434 -0.000401 0.000140 58.461349
#> p.value=
#> 0.000000
```

The `lm.morantest()` function also takes a `resfun=` argument to set the function used to extract the residuals used for testing, and clearly lets us model other salient features of the response variable (Cliff and Ord 1981, 203).

```
args(lm.morantest)
#> function (model, listw, zero.policy = NULL, alternative = "greater",
#> spChk = NULL, resfun = weighted.residuals, naSubset = TRUE)
#> NULL
```

To compare with the standard test, we are only using the intercept here, and as can be seen, the results are the same.

```
ols <- lm(I_turnout ~ 1, pol_pres15)
lmt <- lm.morantest(ols, listw=lw_q_B)
if (broom_ok) broom::tidy(lmt)[,1:5] else glance_htest(lmt)
#> Observed Moran I Expectation Variance Std deviate
#> 0.691434 -0.000401 0.000140 58.461349
#> p.value=
#> 0.000000
```

Following the elaboration of the global Moran’s I test for least squares residuals, further work was done to develop an exact test (Tiefelsdorf and Boots 1995; Tiefelsdorf and Boots 1997; Hepple 1998). The `lm.morantest.sad()` function implements a Saddlepoint approximation, and `lm.morantest.exact()` the exact test, using dense matrix calculations unsuitable for large \(n\) (Tiefelsdorf 2002; Bivand, Müller, and Reder 2009). The additional arguments are used in the computation of the exact standard deviate.

```
args(lm.morantest.exact)
#> function (model, listw, zero.policy = NULL, alternative = "greater",
#> spChk = NULL, resfun = weighted.residuals, zero.tol = 1e-07,
#> Omega = NULL, save.M = NULL, save.U = NULL, useTP = FALSE,
#> truncErr = 1e-06, zeroTreat = 0.1)
#> NULL
```

We can easily see that the exact standard deviate is an order of magnitude smaller for this response, mean model and spatial weights specification, compared to the standard deviate based on the variance calculated using the normality assumption.

```
suppressWarnings(lmte <- lm.morantest.exact(ols, listw=lw_q_B))
names(lmte$estimate) <- "Moran I statistic"
class(lmte) <- c(class(lmte), "htest")
```

```
if (broom_ok) broom::tidy(lmte)[,1:3] else glance_htest(lmte)
#> Moran I statistic Std deviate p.value=
#> 6.91e-01 5.83e+00 2.74e-09
```

The only difference between tests under normality and randomisation is that an extra term is added if the kurtosis of the variable of interest indicates a flatter or more peaked distribution, where the measure used is the classical measure of kurtosis.

```
all.equal(3+e1071::kurtosis(pol_pres15$I_turnout, type=1),
moran(pol_pres15$I_turnout, listw=lw_q_B, n=nrow(pol_pres15),
S0=Szero(lw_q_B))$K)
#> [1] TRUE
```

Under the default assumption of analytical randomisation, the results are largely unchanged.

```
mtr <- moran.test(pol_pres15$I_turnout, listw=lw_q_B)
if (broom_ok) (tmtr <- broom::tidy(mtr)[,1:5]) else (tmtr <- glance_htest(mtr))
#> Moran I statistic Expectation Variance Std deviate
#> 0.691434 -0.000401 0.000140 58.459835
#> p.value=
#> 0.000000
```

The PySAL **esda** package contains the `Moran` function reporting the same results, here under randomisation. The function returns results under normality, randomisation and by permutation simulation. Similar comparisons may be made for other global measures; for details see Bivand and Wong (2018).

```
esda <- import("esda")
np$random$seed(1L)
mi <- esda$Moran(pol_pres15$I_turnout, nb_gal_ps, transformation="B",
permutations=999L, two_tailed=FALSE)
all.equal(c(mi$I, mi$EI, mi$VI_rand, mi$z_rand, mi$p_rand), unname(unlist(c(tmtr))))
#> [1] TRUE
```

Of course, from the very beginning, interest was shown in Monte Carlo testing, also known as a Hope-type test and as a permutation bootstrap. By default, `moran.mc()` returns an `"htest"` object, but may simply use `boot::boot()` internally and return a `"boot"` object when `return_boot=TRUE`.

```
args(moran.mc)
#> function (x, listw, nsim, zero.policy = NULL, alternative = "greater",
#> na.action = na.fail, spChk = NULL, return_boot = FALSE, adjust.n = TRUE)
#> NULL
```

In addition, the number of simulations of the variable of interest by permutation, that is shuffling the values across the observations at random, needs to be given as `nsim=`.

```
set.seed(1)
mmc <- moran.mc(pol_pres15$I_turnout, listw=lw_q_B, nsim=999, return_boot = TRUE)
```

The bootstrap permutation retains the outcomes of each of the random permutations, reporting the observed value of the statistic, here Moran’s \(I\), the difference between this value and the mean of the simulations under randomisation (equivalent to \(E(I)\)), and the standard deviation of the simulations under randomisation.

```
glance_boot <- function(bt) c(original=bt$t0, bias=mean(bt$t)-bt$t0,
std.error=sd(bt$t),
zvalue=(bt$t0-mean(bt$t))/sd(bt$t))
if (broom_ok) broom::tidy(mmc) else glance_boot(mmc)
#> original bias std.error zvalue
#> 0.691 -0.691 0.012 57.592
```
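The permutation mechanics can be sketched with NumPy on invented toy data, shuffling the values across observations and summarising the observed Moran’s \(I\) against the simulated distribution in the same original, bias and standard error layout (this is a sketch, not the `boot::boot()` internals):

```python
import numpy as np

def morans_i(x, W):
    # Moran's I from its definition; the diagonal of W is zero
    z = x - x.mean()
    return (len(x) / W.sum()) * (z @ W @ z) / (z @ z)

# invented example: six units on a line with a smooth trend
n = 6
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
x = np.arange(n, dtype=float)

rng = np.random.default_rng(1)
obs = morans_i(x, W)
# shuffle the values across observations, recomputing the statistic each time;
# the simulated mean approximates E(I) = -1/(n-1)
sims = np.array([morans_i(rng.permutation(x), W) for _ in range(999)])
print({"original": obs, "bias": sims.mean() - obs, "std.error": sims.std()})
```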

We can extract the equivalent results from the object returned by the PySAL `Moran()`

object, which uses the NumPy random number generator:

```
c(statistic=mi$I, bias=mi$EI_sim-mi$I, std.error=mi$seI_sim)
#> statistic bias std.error
#> 0.6914 -0.6922 0.0123
```

If we compare the Monte Carlo and analytical variances of \(I\) under randomisation, we typically see few differences, arguably rendering Monte Carlo testing unnecessary.

```
c("Permutation bootstrap"=var(mmc$t),
"Analytical randomisation"=unname(mtr$estimate[3]))
#> Permutation bootstrap Analytical randomisation
#> 0.000144 0.000140
```

There is also a permutation bootstrap approach to the approximate profile likelihood estimator (APLE) measure (Li, Calder, and Cressie 2007; Li, Calder, and Cressie 2012) using row-standardised weights, in which the variable of interest must be centred on zero first:

```
apmc <- aple.mc(c(scale(pol_pres15$I_turnout, scale=FALSE)), listw=lw_q_W,
nsim=999)
if (broom_ok) broom::tidy(apmc) else glance_boot(apmc)
#> original bias std.error zvalue
#> 0.7852 -0.7847 0.0326 24.0760
```

Geary’s global \(C\) is implemented in `geary.test()`, largely following the same argument structure as `moran.test()`. Geary’s global \(C\) measure is given as (Cliff and Ord 1981, 17):

\[ C = \left( \frac{(n-1)}{2 S_0} \right) \frac{\sum_{(2)} w_{ij}(x_i-x_j)^2}{\sum_{i=1}^{n} z_i^2} \] using the same definitions as for Moran’s \(I\).
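The formula can be checked term by term with NumPy on an invented four-unit line graph (a sketch, not the `geary.test()` implementation):

```python
import numpy as np

def gearys_c(x, W):
    # C = ((n - 1) / (2 * S0)) * sum_{i!=j} w_ij (x_i - x_j)^2 / sum_i z_i^2
    n = len(x)
    z = x - x.mean()
    diff2 = (x[:, None] - x[None, :]) ** 2   # squared differences for all pairs
    return ((n - 1) / (2.0 * W.sum())) * (W * diff2).sum() / (z @ z)

# invented example: four units on a line, binary contiguity weights
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(gearys_c(np.array([1.0, 2.0, 3.0, 4.0]), W))  # similar neighbours: C < 1
print(gearys_c(np.array([1.0, 4.0, 1.0, 4.0]), W))  # dissimilar neighbours: C > 1
```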

```
args(geary.test)
#> function (x, listw, randomisation = TRUE, zero.policy = NULL,
#> alternative = "greater", spChk = NULL, adjust.n = TRUE)
#> NULL
```

Because \(C\) is based on the similarity of neighbouring values, small values of \(C\) indicate small differences, here much smaller than the expected standardised difference of unity.

```
gt <- geary.test(pol_pres15$I_turnout, listw=lw_q_B)
if (broom_ok) broom::tidy(gt)[,1:5] else glance_htest(gt)
#> Geary C statistic Expectation Variance Std deviate
#> 3.04e-01 1.00e+00 2.14e-04 4.76e+01
#> p.value=
#> 0.00e+00
```

The Getis-Ord \(G\) test includes extra arguments to accommodate differences between implementations, as Bivand and Wong (2018) found multiple divergences from the original definitions, often in the handling of no-neighbour observations generated when using distance band neighbours. It is given as (Getis and Ord 1992, 194):

\[ G = \frac{\sum_{(2)} w_{ij} x_i x_j}{\sum_{(2)} x_i x_j} \] using the same definitions as for Moran’s \(I\). For \(G^*\), the \(\sum_{(2)}\) constraint is relaxed by including \(i\) as a neighbour of itself (thereby also removing the no-neighbour problem, because all observations have at least one neighbour).
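A NumPy sketch of global \(G\) on an invented four-unit line graph (not the `globalG.test()` implementation) illustrates that, unlike Moran’s \(I\), \(G\) works with the raw non-negative values, so clustering of high values raises the statistic:

```python
import numpy as np

def global_g(x, W):
    # G = sum_{i!=j} w_ij x_i x_j / sum_{i!=j} x_i x_j, for non-negative x
    off = ~np.eye(len(x), dtype=bool)   # enforce the i != j constraint
    xx = np.outer(x, x)
    return (W * xx)[off].sum() / xx[off].sum()

# invented example: four units on a line, binary contiguity weights
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x_clustered = np.array([1.0, 1.0, 3.0, 3.0])  # high values adjacent
x_dispersed = np.array([3.0, 1.0, 3.0, 1.0])  # high values spread out
print(global_g(x_clustered, W) > global_g(x_dispersed, W))  # True
```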

```
args(globalG.test)
#> function (x, listw, zero.policy = NULL, alternative = "greater",
#> spChk = NULL, adjust.n = TRUE, B1correct = TRUE, adjust.x = TRUE,
#> Arc_all_x = FALSE)
#> NULL
```

With contiguity neighbours, global \(G\) yields a much smaller standard deviate than when using a 16 km distance band neighbour definition:

```
ggt <- globalG.test(pol_pres15$I_turnout, listw=lw_q_B)
if (broom_ok) broom::tidy(ggt)[,1:5] else glance_htest(ggt)
#> Global G statistic Expectation Variance Std deviate
#> 2.31e-03 2.29e-03 1.73e-11 5.08e+00
#> p.value=
#> 1.88e-07
```

Because this neighbour definition leaves a few observations without neighbours, we need to use the `zero.policy=` argument.

```
ggt <- globalG.test(pol_pres15$I_turnout, listw=lw_d16_B, zero.policy=TRUE)
if (broom_ok) broom::tidy(ggt)[,1:5] else glance_htest(ggt)
#> Global G statistic Expectation Variance Std deviate
#> 2.79e-03 2.56e-03 5.51e-11 3.05e+01
#> p.value=
#> 2.31e-204
```

The Mantel test is implemented without its analytical variance as `sp.mantel.mc()`, rather using permutation bootstrap like `moran.mc()`. It takes a `type=` argument, which can be set to `"moran"`, `"geary"` or `"sokal"`. The output is scaled differently, but gives similar inferences.

```
set.seed(1)
mamc <- sp.mantel.mc(pol_pres15$I_turnout, listw=lw_q_B, nsim=999,
type="moran", return_boot = TRUE)
if (broom_ok) broom::tidy(mamc) else glance_boot(mamc)
#> original bias std.error zvalue
#> 9843.5 -9844.3 170.9 57.6
```

Finally, the empirical Bayes Moran’s \(I\) takes account of the denominator in assessing spatial autocorrelation in rates data (Assunção and Reis 1999). Until now, we have considered the proportion of valid votes cast in relation to the numbers entitled to vote by spatial entity, but using `EBImoran.mc()` we can try to accommodate uncertainty in extreme rates in entities with small numbers entitled to vote. There is, however, little impact on the outcome in this case.

```
set.seed(1)
suppressMessages(ebimc <- EBImoran.mc(n=pol_pres15$I_valid_votes,
x=pol_pres15$I_entitled_to_vote,
listw=lw_q_B, nsim=999, return_boot=TRUE))
if (broom_ok) broom::tidy(ebimc) else glance_boot(ebimc)
#> original bias std.error zvalue
#> 0.693 -0.693 0.012 57.742
```

A similar measure may also be achieved using global empirical Bayes estimators with `EBest()` to calculate a numerical vector of empirical Bayes estimates, but both are only limited ways of handling mis-specification in the mean model.

Global measures of spatial autocorrelation using spatial weights objects based on graphs of neighbours are, as we have seen, rather blunt tools, which for interpretation depend critically on a reasoned mean model of the variable in question. If the mean model is just the intercept, the global measures will respond to all kinds of mis-specification, not only spatial autocorrelation. A key source of mis-specification will typically also include the choice of entities for aggregation of data.

### 17.3.2 Local measures

Building on insights from the weaknesses of global measures, local indicators of spatial association began to appear in the first half of the 1990s (Anselin 1995; Getis and Ord 1992; Getis and Ord 1996). In addition, the Moran plot was introduced, plotting the values of the variable of interest against their spatially lagged values, typically using row-standardised weights to make the axes more directly comparable (Anselin 1996). The `moran.plot()` function also returns an influence measures object used to label observations exerting more than proportional influence on the slope of the line representing global Moran’s \(I\). In this case, we can see that there are many spatial entities exerting such influence. These pairs of observed and lagged observed values make up in aggregate the global measure, but can also be explored in detail. The quadrants of the Moran plot also show low-low pairs in the lower left quadrant, high-high in the upper right quadrant, and fewer low-high and high-low pairs in the upper left and lower right quadrants.

```
infl_W <- moran.plot(pol_pres15$I_turnout, listw=lw_q_W,
labels=pol_pres15$TERYT, cex=1, pch=".",
xlab="I round turnout", ylab="lagged turnout")
```
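The quadrant classification underlying the plot can be sketched with NumPy using row-standardised weights on invented toy data (not the `moran.plot()` internals):

```python
import numpy as np

# invented example: four units on a line with row-standardised weights
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W_row = W / W.sum(axis=1, keepdims=True)   # row-standardise the binary weights
x = np.array([1.0, 2.0, 3.0, 4.0])
z = x - x.mean()                           # centred values (x axis of the plot)
lag_z = W_row @ z                          # spatially lagged centred values (y axis)
quad = np.where(z >= 0,
                np.where(lag_z >= 0, "high-high", "high-low"),
                np.where(lag_z >= 0, "low-high", "low-low"))
print(list(zip(z, lag_z, quad)))
```

With a smooth trend, only the low-low and high-high quadrants are occupied, matching positive global autocorrelation.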

If we extract the hat value influence measure from the returned object, a map suggests that some edge entities exert more than proportional influence, as do entities in or near larger urban areas.

```
pol_pres15$hat_value <- infl_W$infmat[,6]
tm_shape(pol_pres15) + tm_fill("hat_value")
#> Warning: The shape pol_pres15 is invalid. See sf::st_is_valid
```

Bivand and Wong (2018) discuss issues impacting the use of local indicators, such as local Moran’s \(I\) and local Getis-Ord \(G\). Some issues affect the calculation of the local indicators, others inference from their values. Because \(n\) statistics may be being calculated from the same number of observations, there are multiple comparison problems that need to be addressed. Although the apparent detection of hotspots from values of local indicators has been quite widely adopted, it remains fraught with difficulty because adjustment of the inferential basis to accommodate multiple comparisons is not often chosen, and as in the global case, mis-specification also remains a source of confusion. Further, interpreting local spatial autocorrelation in the presence of global spatial autocorrelation is challenging (Ord and Getis 2001; Tiefelsdorf 2002; Bivand, Müller, and Reder 2009). The `mlvar=` and `adjust.x=` arguments to `localmoran()` are discussed in Bivand and Wong (2018), and permit matching with other implementations. The `p.adjust.method=` argument uses an untested speculation that adjustment should only take into account the cardinality of the neighbour set of each observation when adjusting for multiple comparisons; using `stats::p.adjust()` is preferable.

```
args(localmoran)
#> function (x, listw, zero.policy = NULL, na.action = na.fail,
#> alternative = "greater", p.adjust.method = "none", mlvar = TRUE,
#> spChk = NULL, adjust.x = FALSE)
#> NULL
```

Taking `"two.sided"` p-values, because these local indicators, when summed and divided by the sum of the spatial weights, constitute global Moran’s \(I\), and thus both positive and negative local spatial autocorrelation may be present, we obtain:

```
locm <- localmoran(pol_pres15$I_turnout, listw=lw_q_B, alternative="two.sided")
```

```
all.equal(sum(locm[,1])/Szero(lw_q_B),
unname(moran.test(pol_pres15$I_turnout, lw_q_B)$estimate[1]))
#> [1] TRUE
```

Using `stats::p.adjust()` to adjust for multiple comparisons, we see that almost 29% of the local measures have p-values < 0.05 if no adjustment is applied, but only 12% using Bonferroni correction.

```
pvs <- cbind(locm[,5], p.adjust(locm[,5], "bonferroni"),
p.adjust(locm[,5],"fdr"), p.adjust(locm[,5], "BY"))
colnames(pvs) <- c("none", "bonferroni", "fdr", "BY")
```

```
apply(pvs, 2, function(x) sum(x < 0.05))
#> none bonferroni fdr BY
#> 715 297 576 424
```

The `localmoran.sad()` and `localmoran.exact()` functions take a fitted linear model, a `select=` argument to specify which observations to test (choosing only one removes the need to adjust for multiple comparisons), arguments like `nb2listw()` to build single observation star graphs of neighbours and their weights, and arguments needed for numerical estimation internally. They return lists of objects like `"htest"` objects, which may be coerced to `data.frame` objects.

```
args(localmoran.sad)
#> function (model, select, nb, glist = NULL, style = "W", zero.policy = NULL,
#> alternative = "greater", spChk = NULL, resfun = weighted.residuals,
#> save.Vi = FALSE, tol = .Machine$double.eps^0.5, maxiter = 1000,
#> tol.bounds = 1e-04, save.M = FALSE, Omega = NULL)
#> NULL
```

If we use `localmoran.sad()` to make saddlepoint approximations of the standard deviates of the local measures, and make the same corrections, we drop from 26% of the p-values < 0.05 with no correction to under 2% using Bonferroni correction:

```
locms <- localmoran.sad(ols, nb=nb_q, style="B", alternative="two.sided")
locms <- as.data.frame(locms)
pvss <- cbind(locms[,5], p.adjust(locms[,5], "bonferroni"),
p.adjust(locms[,5], "fdr"), p.adjust(locms[,5], "BY"))
colnames(pvss) <- c("none", "bonferroni", "fdr", "BY")
```

```
apply(pvss, 2, function(x) sum(x < 0.05))
#> none bonferroni fdr BY
#> 646 44 314 87
```

Should we think it appropriate to take possible global spatial autocorrelation into account, we may fit a spatial regression model first. We can find the interval for the line search for the spatial coefficient \(\lambda\) from the inverse of the extreme eigenvalues of the spatial weights, here using `spatialreg::lextrB()`:

```
ints <- 1/c(spatialreg::lextrB(lw_q_B))
ints
#> lambda_n lambda_1
#> -0.293 0.157
```

From the output of a simultaneous autoregressive spatial error model, we see that the coefficient is close to its upper bound:

```
SEM <- spatialreg::errorsarlm(I_turnout ~ 1, data=pol_pres15, listw=lw_q_B,
method="Matrix", interval=ints)
```

```
spatialreg::coef.sarlm(SEM)
#> lambda (Intercept)
#> 0.139 0.459
```

The spatial error model is clearly a better description of the data than a mean model with just the intercept using a likelihood ratio test for comparison:

```
spatialreg::LR1.sarlm(SEM)
#>
#> Likelihood Ratio diagnostics for spatial dependence
#>
#> data:
#> Likelihood ratio = 2111, df = 1, p-value <2e-16
#> sample estimates:
#> Log likelihood of spatial error model Log likelihood of OLS fit y
#> 4386 3330
```

Reconstructing a linear model after filtering out the global spatial autocorrelation \((I - \lambda W)\) as `lm.target`, we can also construct \((I - \lambda W)^{-1}\) representing the global spatial process as input to the `Omega=` argument. This is used internally to re-balance the matrix products of the per-observation star weights matrices, so involves substantial amounts of numerical linear algebra, including solving a dense matrix eigenproblem, for each chosen observation. So if we take global autocorrelation into account, in this case the no-adjustment count of observations with p-values < 0.05 is under 7%, and using Bonferroni correction, we drop to 0.2%.

```
lm.target <- lm(SEM$tary ~ SEM$tarX - 1)
Omega <- invIrW(lw_q_B, rho=SEM$lambda)
locmsO <- localmoran.sad(lm.target, nb=nb_q, style="B", alternative="two.sided",
Omega=Omega)
locmsO <- as.data.frame(locmsO)
pvssO <- cbind(locmsO[,5], p.adjust(locmsO[,5], "bonferroni"),
p.adjust(locmsO[,5], "fdr"), p.adjust(locmsO[,5], "BY"))
colnames(pvssO) <- c("none", "bonferroni", "fdr", "BY")
```

```
apply(pvssO, 2, function(x) sum(x < 0.05))
#> none bonferroni fdr BY
#> 167 5 13 4
```

In the global measure case, bootstrap permutations could be used as an alternative to analytical methods for possible inference. In the local case, conditional permutation may be used, retaining the value at observation \(i\) and randomly sampling from the remaining \(n-1\) values to find randomised values at neighbours.

```
x <- pol_pres15$I_turnout
lw <- lw_q_B
xx <- mean(x)
z <- x - xx
s2 <- sum(z^2)/length(x)
crd <- card(lw$neighbours)
nsim <- 999
res_p <- numeric(nsim)
mns <- numeric(length(x))
sds <- numeric(length(x))
set.seed(1)
for (i in seq(along=x)) {
wtsi <- lw$weights[[i]]
zi <- z[i]
z_i <- z[-i]
crdi <- crd[i]
if (crdi > 0) {
for (j in 1:nsim) {
sz_i <- sample(z_i, size=crdi)
lz_i <- sum(sz_i*wtsi)
res_p[j] <- (zi/s2)*lz_i
}
mns[i] <- mean(res_p)
sds[i] <- sd(res_p)
} else {
mns[i] <- as.numeric(NA)
sds[i] <- as.numeric(NA)
}
}
```

This approach is not provided as a function in **spdep**. The outcome is that almost 32% of observations have two-sided p-values < 0.05 without multiple comparison correction, and under 3% with Bonferroni correction. Since it is not advisable to use conditional permutation with regression residuals in the presence of spatial autocorrelation, `localmoran.sad()` and `localmoran.exact()` provide ways of calculating more robust standard deviates with the added flexibility of a possibly richer mean model.

```
perm_Zi <- (locm[,1] - mns)/sds
pv <- 2 * pnorm(abs(perm_Zi), lower.tail = FALSE)
pvsp <- cbind(pv, p.adjust(pv, "bonferroni"), p.adjust(pv, "fdr"),
p.adjust(pv, "BY"))
colnames(pvsp) <- c("none", "bonferroni", "fdr", "BY")
apply(pvsp, 2, function(x) sum(x < 0.05))
#> none bonferroni fdr BY
#> 789 68 477 160
```

In order to compare the results from `localmoran()` with the PySAL function in **esda**, we need first to re-run, dividing by \(n-1\) instead of \(n\) in parts of the calculation, by setting the argument `mlvar=FALSE`.

```
locm_nml <- localmoran(pol_pres15$I_turnout, listw=lw_q_B,
alternative="two.sided", mlvar=FALSE)
```

Once this is done, the local estimates of Moran’s \(I\) agree within machine precision.

```
np$random$seed(1L)
loc_I_ps <- esda$Moran_Local(pol_pres15$I_turnout, nb_gal_ps, transformation="B", permutations=999L)
all.equal(unname(locm_nml[,1]), c(loc_I_ps$Is))
#> [1] TRUE
```

The PySAL outcomes after adjusting for multiple comparisons are also similar to those shown above:

```
pv <- 2 * pnorm(abs(loc_I_ps$z_sim), lower.tail = FALSE)
pvsp <- cbind(pv, p.adjust(pv, "bonferroni"), p.adjust(pv, "fdr"),
p.adjust(pv, "BY"))
colnames(pvsp) <- c("none", "bonferroni", "fdr", "BY")
apply(pvsp, 2, function(x) sum(x < 0.05))
#> none bonferroni fdr BY
#> 787 73 480 157
```

The figure shows the breadth of the density distribution of the standard deviates calculated using the analytical randomisation approach, compared with those of Saddlepoint approximation with and without the accommodation of global spatial autocorrelation, and with conditional permutation.

```
df <- data.frame(std_deviate=c(locm[,4], locms[,4], locmsO[,4], perm_Zi,
loc_I_ps$z_sim), method=rep(c("Analytical randomisation",
"Saddlepoint approximation", "Saddlepoint approximation (Omega)",
"Conditional permutation", "Conditional permutation (PySAL)"), each=2495))
library(ggplot2)
ggplot(df) + geom_density(aes(x=std_deviate, fill=method), alpha=0.2) +
xlim(c(-6, 12))
#> Warning: Removed 26 rows containing non-finite values (stat_density).
```

The figure and the tabulated summaries of the standard deviates by method show that in this case the conditional permutations shift the medians and third quartiles of the distributions rightwards compared to the analytical standard deviates, yielding larger numbers of apparently “significantly” locally autocorrelated areal units than, say, the Saddlepoint approximation.

```
print(do.call("rbind", tapply(df$std_deviate, list(df$method), summary)),
digits=3)
#> Min. 1st Qu. Median Mean 3rd Qu. Max.
#> Analytical randomisation -3.90 0.0701 0.691 1.629 2.3183 22.47
#> Conditional permutation -3.07 0.2656 1.235 1.305 2.2613 7.52
#> Conditional permutation (PySAL) -3.11 0.2652 1.244 1.307 2.2635 7.86
#> Saddlepoint approximation -2.66 0.1213 0.950 1.159 2.0047 6.67
#> Saddlepoint approximation (Omega) -5.66 -0.8934 -0.379 -0.462 0.0361 2.89
```

Taking global spatial autocorrelation into account leaves most of the areal units with “significant” local residual spatial autocorrelation below zero, indicating negative dependencies probably associated with urban/rural contrasts. The figure shows the dominance of positive local autocorrelation before the removal of global autocorrelation, and the speckled appearance of the local indicators after its removal using Omega, the global process model.

```
pol_pres15$loc_I_SA <- locms[,4]
pol_pres15$loc_I_Omega <- locmsO[,4]
tm_shape(pol_pres15) + tm_fill(c("loc_I_SA", "loc_I_Omega"), midpoint=0, title="Std. deviate") + tm_facets(free.scales=FALSE) + tm_borders(lwd=0.3, alpha=0.4) + tm_layout(panel.labels=c("Saddlepoint approximation", "Omega"))
#> Warning: The shape pol_pres15 is invalid. See sf::st_is_valid
```

The local Getis-Ord \(G\) measure is reported as a standard deviate, and may also take the \(G^*\) form where self-neighbours are inserted into the neighbour object using `include.self()`. The observed and expected values of local \(G\) with their analytical variances may also be returned if `return_internals=TRUE`. The `GeoDa=` argument changes summation behaviour when no-neighbour observations are present, dropping values of `x` from sums.

```
args(localG)
#> function (x, listw, zero.policy = NULL, spChk = NULL, return_internals = FALSE,
#> GeoDa = FALSE)
#> NULL
```

```
locG <- localG(pol_pres15$I_turnout, lw_q_B, return_internals=TRUE)
```

Once again we face the problem of multiple comparisons, with the count of areal unit p-values < 0.05 being reduced by an order of magnitude when employing Bonferroni correction:

```
pv <- 2 * pnorm(abs(c(locG)), lower.tail = FALSE)
pvsp <- cbind(pv, p.adjust(pv, "bonferroni"), p.adjust(pv, "fdr"),
p.adjust(pv, "BY"))
colnames(pvsp) <- c("none", "bonferroni", "fdr", "BY")
apply(pvsp, 2, function(x) sum(x < 0.05))
#> none bonferroni fdr BY
#> 789 69 468 156
```
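The adjustment methods used by `p.adjust()` above are easy to write down explicitly. A NumPy sketch of the Bonferroni and Benjamini-Hochberg (`"fdr"`) step-up adjustments (hypothetical helper functions for illustration, not the R internals):

```python
import numpy as np

def p_adjust_bonferroni(p):
    """Bonferroni: multiply each p-value by the number of tests, cap at 1."""
    p = np.asarray(p, dtype=float)
    return np.minimum(p * len(p), 1.0)

def p_adjust_bh(p):
    """Benjamini-Hochberg FDR: step-up adjustment on the sorted p-values."""
    p = np.asarray(p, dtype=float)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order] * m / np.arange(1, m + 1)
    # enforce monotonicity from the largest p-value downwards
    adj = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty(m)
    out[order] = np.minimum(adj, 1.0)
    return out

p = np.array([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205])
print(p_adjust_bonferroni(p))
print(p_adjust_bh(p))
```

Bonferroni controls the family-wise error rate and is the most conservative; the FDR adjustments give up strict family-wise control in exchange for retaining more discoveries, which is why the counts above fall off less sharply for `"fdr"` than for `"bonferroni"`.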

The PySAL **esda** `G_Local()` function returns the same basic values, resulting in the same local standard deviates:

```
np$random$seed(1L)
loc_G_ps <- esda$G_Local(pol_pres15$I_turnout, nb_gal_ps, transform="B", permutations=999L)
all.equal(c(locG), c(loc_G_ps$Zs))
#> [1] TRUE
```

Code for conditional permutation is not included in **spdep**, but is not difficult to construct:

```
x <- pol_pres15$I_turnout
lw <- lw_q_B
crd <- card(lw$neighbours)
nsim <- 999
res <- matrix(nrow=length(x), ncol=nsim)
set.seed(1)
x_star <- sum(x[crd > 0])
for (i in seq(along=x)) {
    wtsi <- lw$weights[[i]]
    xi <- x[i]
    x_i <- x[-i]
    crdi <- crd[i]
    if (crdi > 0) {
        for (j in 1:nsim) {
            sx_i <- sample(x_i, size=crdi)
            lx_i <- sum(sx_i*wtsi)
            res[i, j] <- lx_i/(x_star-xi)
        }
    }
}
```

If we follow `esda$G_Local()` and construct the permutation standard deviates from the average and standard deviation of the complete set of simulated local \(G\) values, it seems that the distribution of values is much closer to zero; the R case:

```
G_obs <- attr(locG, "internals")[,1]
G_Zi <- (G_obs - mean(res))/sd(res) # code follows PySAL esda$G_Local()
pv <- 2 * pnorm(abs(c(G_Zi)), lower.tail = FALSE)
pvsp <- cbind(pv, p.adjust(pv, "bonferroni"), p.adjust(pv, "fdr"),
p.adjust(pv, "BY"))
colnames(pvsp) <- c("none", "bonferroni", "fdr", "BY")
apply(pvsp, 2, function(x) sum(x < 0.05))
#> none bonferroni fdr BY
#> 210 5 8 4
```

and the PySAL case:

```
pv <- 2 * pnorm(abs(c(loc_G_ps$z_sim)), lower.tail = FALSE)
pvsp <- cbind(pv, p.adjust(pv, "bonferroni"), p.adjust(pv, "fdr"),
p.adjust(pv, "BY"))
colnames(pvsp) <- c("none", "bonferroni", "fdr", "BY")
apply(pvsp, 2, function(x) sum(x < 0.05))
#> none bonferroni fdr BY
#> 210 5 8 4
```

If, however, we take the per-entity local permutation outputs, computing averages and standard deviations by row of the simulation matrix, we get back to the relative proportions seen in the analytical standard deviates in the R case:

```
G_Zi_1 <- (G_obs - apply(res, 1, mean))/apply(res, 1, sd)
pv <- 2 * pnorm(abs(c(G_Zi_1)), lower.tail = FALSE)
pvsp <- cbind(pv, p.adjust(pv, "bonferroni"), p.adjust(pv, "fdr"),
p.adjust(pv, "BY"))
colnames(pvsp) <- c("none", "bonferroni", "fdr", "BY")
apply(pvsp, 2, function(x) sum(x < 0.05))
#> none bonferroni fdr BY
#> 789 68 477 160
```

and the PySAL case:

```
mns_1 <- apply(loc_G_ps$rGs, 1, mean)
sds_1 <- apply(loc_G_ps$rGs, 1, sd)
z_sim_1 <- (loc_G_ps$Gs - mns_1) / sds_1
pv <- 2 * pnorm(abs(z_sim_1), lower.tail = FALSE)
pvsp <- cbind(pv, p.adjust(pv, "bonferroni"), p.adjust(pv, "fdr"),
p.adjust(pv, "BY"))
colnames(pvsp) <- c("none", "bonferroni", "fdr", "BY")
apply(pvsp, 2, function(x) sum(x < 0.05))
#> none bonferroni fdr BY
#> 787 73 478 157
```

This difference between pooled and per-entity permutation standard deviates is discussed further at https://github.com/pysal/esda/issues/53.
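The effect of pooling can be reproduced with a toy simulation matrix (an illustrative NumPy sketch, not the **esda** or **spdep** code). Three units share the same observed value and the same conditional mean, but have very different conditional spreads, as happens when neighbour counts differ:

```python
import numpy as np

rng = np.random.default_rng(1)
# 3 x 999 matrix of conditionally simulated local statistics: same mean,
# but sharply differing spreads across the three units
sims = rng.normal(0.5, [[0.01], [0.05], [0.25]], size=(3, 999))
obs = np.array([0.52, 0.52, 0.52])

# pooled standardisation: one mean and sd over the whole matrix
z_pooled = (obs - sims.mean()) / sims.std()
# per-entity standardisation: mean and sd by row
z_rowwise = (obs - sims.mean(axis=1)) / sims.std(axis=1)
print(z_pooled)
print(z_rowwise)
```

Pooling flattens all three deviates towards zero because the single pooled standard deviation is dominated by the most variable unit, while the row-wise construction recovers the unit whose own null distribution is tight.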

```
pol_pres15$loc_G_Zi_perm <- G_Zi_1
pol_pres15$loc_G_Zi <- c(locG)
tm_shape(pol_pres15) + tm_fill(c("loc_G_Zi", "loc_G_Zi_perm"), midpoint=0, title="Std. deviate") + tm_facets(free.scales=FALSE) + tm_borders(lwd=0.3, alpha=0.4) + tm_layout(panel.labels=c("Analytical", "Permutation"))
#> Warning: The shape pol_pres15 is invalid. See sf::st_is_valid
```

## 17.4 Spatial heterogeneity

Over and above local and global spatial autocorrelation, areal data are affected by heterogeneity. In some cases, this heterogeneity may be modelled in the mean model through included covariates, or possibly using unstructured random effects, which may also reflect the differing scales of spatial process footprints. Here we will only mention two specifically spatial approaches, the local spatial heteroscedasticity (LOSH) statistic and Moran eigenvectors.

### 17.4.1 Local spatial heteroscedasticity (LOSH) statistic

Local spatial heteroscedasticity (LOSH) statistics were introduced fairly recently (Ord and Getis 2012), and an implementation was contributed to **spdep** more recently still, so there is as yet little experience with the approach. It has been extended to provide bootstrap p-values for the measures of heterogeneity (Xu, Mei, and Yan 2014). The `a=` argument takes a default value of \(2\), giving a Chi-squared interpretation to the output.

```
args(LOSH)
#> function (x, listw, a = 2, var_hi = TRUE, zero.policy = NULL,
#> na.action = na.fail, spChk = NULL)
#> NULL
```
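A minimal NumPy sketch of the statistic, assuming the Ord and Getis (2012) formulation \(H_i = \sum_j w_{ij} |e_j|^a / (h_1 \sum_j w_{ij})\), with \(e_j\) the residual from the local weighted mean at \(j\) and \(h_1\) the mean of \(|e_j|^a\) (a hypothetical `losh` helper, not the **spdep** implementation):

```python
import numpy as np

def losh(x, W, a=2.0):
    """LOSH H_i, assuming the Ord-Getis (2012) formulation:
    e_j = x_j - local weighted mean at j; h1 = mean(|e|^a);
    H_i = sum_j w_ij |e_j|^a / (h1 * sum_j w_ij)."""
    x = np.asarray(x, dtype=float)
    rs = W.sum(axis=1)           # row sums of the weights
    xbar = (W @ x) / rs          # local weighted means
    e = np.abs(x - xbar) ** a    # powered residuals from local means
    h1 = e.mean()
    return (W @ e) / (h1 * rs)

W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.array([1.0, 2.0, 10.0, 2.0])
print(losh(x, W))
```

Units whose neighbourhoods contain the outlying value \(10\) show elevated \(H_i\), picking out local instability in variance rather than in level.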

```
lh <- LOSH(pol_pres15$I_turnout, listw=lw_q_B)
```

It is also possible to map the local spatially weighted mean values derived from the local measures, showing a smoothing effect:

```
pol_pres15$x_bar_i <- lh[,5]
tm_shape(pol_pres15) + tm_fill(c("I_turnout", "x_bar_i"), title="Turnout", n=6, style="fisher") + tm_facets(free.scales=FALSE) + tm_borders(lwd=0.3, alpha=0.4) + tm_layout(panel.labels=c("Turnout", "Weighted means"))
#> Warning: The shape pol_pres15 is invalid. See sf::st_is_valid
```

These approaches have been further extended but as yet without implementations in R (Westerholt, Resch, and Zipf 2015; Westerholt et al. 2018).

### 17.4.2 Moran eigenvectors

There is a close connection between Moran’s \(I\) and the eigenproblem of the doubly centred weights matrix, and again between the spatial pattern of a variable of interest and combinations of eigenvectors of the same matrix (Griffith 2003; Griffith 2010; Chun and Griffith 2013). There is a literature in numerical ecology using the term principal coordinates of neighbour matrices for the same approach. The value of global Moran’s \(I\) can be calculated using the `hat` matrix to centre the variable of interest in taking the cross-products which yield Moran’s \(I\).

```
n <- nrow(pol_pres15)
ones <- rep(1, n)
hat <- (diag(n) - tcrossprod(ones) / n)
Cmat <- hat %*% listw2mat(lw_q_B) %*% hat
x <- pol_pres15$I_turnout
I_MCM <- c((n/Szero(lw_q_B) * crossprod(x, Cmat %*% x)/crossprod(x, hat %*% x)))
```

```
I_MCM
#> [1] 0.691
```
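That the doubly centred cross-product form coincides with the classical definition using \(z = x - \bar{x}\) can be checked on a toy weights matrix; a NumPy sketch (illustrative, not the **spdep** code):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 6
# symmetric binary weights for a ring of six units
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0
x = rng.normal(size=n)

M = np.eye(n) - np.ones((n, n)) / n   # centring ("hat") matrix
S0 = W.sum()
I_MCM = (n / S0) * (x @ M @ W @ M @ x) / (x @ M @ x)

# classical form with the centred variable z = x - mean(x)
z = x - x.mean()
I_classic = (n / S0) * (z @ W @ z) / (z @ z)
print(I_MCM, I_classic)
```

Because `M` is symmetric and idempotent, `M @ x` is exactly the centred variable, so the two expressions agree term by term.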

The solution of the eigenproblem of this matrix product gives us the eigenvectors, providing key information about the linkage structure of the graph of neighbours.

```
EV <- eigen(Cmat)$vectors
```
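On a toy ring graph, a NumPy sketch (illustrative, using `numpy.linalg.eigh` on the symmetric doubly centred matrix) shows that the eigenvectors associated with non-zero eigenvalues are orthogonal to the constant vector, each summing to zero, which is what makes them usable as map-pattern components:

```python
import numpy as np

n = 6
# symmetric binary weights for a ring of six units
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0

M = np.eye(n) - np.ones((n, n)) / n
C = M @ W @ M                        # doubly centred weights matrix

evals, EV = np.linalg.eigh(C)        # eigh: symmetric input, ascending order
evals, EV = evals[::-1], EV[:, ::-1] # reorder to descending eigenvalues
print(evals)
print(EV[:, 0].sum())                # leading eigenvector sums to ~0
```

The centring annihilates the constant vector (its eigenvalue becomes zero), so the remaining eigenvectors, ordered by eigenvalue, run from the most positively to the most negatively autocorrelated map patterns the graph can support.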

```
pol_pres15$EV1 <- EV[,1]
pol_pres15$EV2 <- EV[,2]
pol_pres15$EV3 <- EV[,3]
pol_pres15$EV4 <- EV[,4]
tm_shape(pol_pres15) + tm_fill(c("EV1", "EV2", "EV3", "EV4"), midpoint=0, title="Eigenvectors", n=6, style="fisher") + tm_facets(free.scales=FALSE) + tm_borders(lwd=0.3, alpha=0.4) + tm_layout(panel.labels=c("EV 1", "EV 2", "EV 3", "EV 4"))
#> Warning: The shape pol_pres15 is invalid. See sf::st_is_valid
```

### References

Anselin, L. 1995. “Local indicators of spatial association - LISA.” *Geographical Analysis* 27 (2): 93–115.

Anselin, L. 1996. “The Moran Scatterplot as an ESDA Tool to Assess Local Instability in Spatial Association.” In *Spatial Analytical Perspectives on GIS*, edited by M. M. Fischer, H. J. Scholten, and D. Unwin, 111–25. London: Taylor & Francis.

Assunção, R.M., and E. A. Reis. 1999. “A New Proposal to Adjust Moran’s \(I\) for Population Density.” *Statistics in Medicine* 18: 2147–62.

Avis, D., and J. Horton. 1985. “Remarks on the Sphere of Influence Graph.” In *Discrete Geometry and Convexity*, edited by J. E. Goodman, 323–27. New York: New York Academy of Sciences.

Bavaud, F. 1998. “Models for Spatial Weights: A Systematic Look.” *Geographical Analysis* 30: 153–71. doi:10.1111/j.1538-4632.1998.tb00394.x.

Bivand, R. S., and B. A. Portnov. 2004. “Exploring Spatial Data Analysis Techniques Using R: The Case of Observations with No Neighbours.” In *Advances in Spatial Econometrics: Methodology, Tools, Applications*, edited by L. Anselin, R. J. G. M. Florax, and S. J. Rey, 121–42. Berlin: Springer.

Bivand, R. S., W. Müller, and M. Reder. 2009. “Power Calculations for Global and Local Moran’s \(I\).” *Computational Statistics and Data Analysis* 53: 2859–72.

Bivand, Roger S., and David W. S. Wong. 2018. “Comparing Implementations of Global and Local Indicators of Spatial Association.” *TEST* 27 (3): 716–48. doi:10.1007/s11749-018-0599-x.

Boots, B., and A. Okabe. 2007. “Local Statistical Spatial Analysis: Inventory and Prospect.” *International Journal of Geographical Information Science* 21 (4). Taylor & Francis: 355–75. doi:10.1080/13658810601034267.

Chun, Yongwan, and Daniel A. Griffith. 2013. *Spatial Statistics and Geostatistics: Theory and Applications for Geographic Information Science and Technology*. Thousand Oaks, CA: Sage.

Cliff, A. D., and J. K. Ord. 1973. *Spatial Autocorrelation*. London: Pion.

Cliff, A. D., and J. K. Ord. 1981. *Spatial Processes: Models and Applications*. London: Pion.

Duncan, O. D., R. P. Cuzzort, and B. Duncan. 1961. *Statistical Geography: Problems in Analyzing Areal Data*. Glencoe, IL: Free Press.

Geary, R. C. 1954. “The Contiguity Ratio and Statistical Mapping.” *The Incorporated Statistician* 5: 115–45.

Getis, A., and J. K. Ord. 1992. “The Analysis of Spatial Association by the Use of Distance Statistics.” *Geographical Analysis* 24 (2): 189–206.

Getis, A., and J. K. Ord. 1996. “Local Spatial Statistics: An Overview.” In *Spatial Analysis: Modelling in a GIS Environment*, edited by P. Longley and M. Batty, 261–77. Cambridge: GeoInformation International.

Griffith, D. A. 2010. “Spatial Filtering.” In *Handbook of Applied Spatial Analysis*, edited by Manfred Fischer and Arthur Getis, 301–18. Heidelberg: Springer.

Griffith, Daniel A. 2003. *Spatial Autocorrelation and Spatial Filtering*. New York: Springer.

Hepple, L. W. 1998. “Exact Testing for Spatial Autocorrelation Among Regression Residuals.” *Environment and Planning A* 30: 85–108.

Li, Hongfei, Catherine A. Calder, and Noel Cressie. 2007. “Beyond Moran’s I: Testing for Spatial Dependence Based on the Spatial Autoregressive Model.” *Geographical Analysis* 39: 357–75.

Li, Hongfei, Catherine A. Calder, and Noel Cressie. 2012. “One-Step Estimation of Spatial Dependence Parameters: Properties and Extensions of the APLE Statistic.” *Journal of Multivariate Analysis* 105: 68–84.

McMillen, Daniel P. 2003. “Spatial Autocorrelation or Model Misspecification?” *International Regional Science Review* 26: 208–17.

Moran, P. A. P. 1948. “The Interpretation of Statistical Maps.” *Journal of the Royal Statistical Society, Series B (Methodological)* 10 (2): 243–51.

Okabe, A., T. Satoh, T. Furuta, A. Suzuki, and K. Okano. 2008. “Generalized Network Voronoi Diagrams: Concepts, Computational Methods, and Applications.” *International Journal of Geographical Information Science* 22 (9). Taylor & Francis: 965–94. doi:10.1080/13658810701587891.

Ord, J. K., and A. Getis. 2001. “Testing for Local Spatial Autocorrelation in the Presence of Global Autocorrelation.” *Journal of Regional Science* 41 (3): 411–32.

Ord, J. K., and A. Getis. 2012. “Local Spatial Heteroscedasticity (LOSH).” *Annals of Regional Science* 48 (2): 529–39.

She, Bing, Xinyan Zhu, Xinyue Ye, Wei Guo, Kehua Su, and Jay Lee. 2015. “Weighted Network Voronoi Diagrams for Local Spatial Analysis.” *Computers, Environment and Urban Systems* 52: 70–80. doi:10.1016/j.compenvurbsys.2015.03.005.

Tiefelsdorf, M. 2002. “The Saddlepoint Approximation of Moran’s I and Local Moran’s \({I}_i\) Reference Distributions and Their Numerical Evaluation.” *Geographical Analysis* 34: 187–206.

Tiefelsdorf, M., and B. N. Boots. 1995. “The Exact Distribution of Moran’s I.” *Environment and Planning A* 27: 985–99.

Tiefelsdorf, M., and B. N. Boots. 1997. “A Note on the Extremities of Local Moran’s \({I}_i\)s and Their Impact on Global Moran’s \(I\).” *Geographical Analysis* 29: 248–57.

Tiefelsdorf, M., D. A. Griffith, and B. Boots. 1999. “A Variance-Stabilizing Coding Scheme for Spatial Link Matrices.” *Environment and Planning A* 31: 165–80.

Upton, G., and B. Fingleton. 1985. *Spatial Data Analysis by Example: Point Pattern and Qualitative Data*. New York: Wiley.

Wall, M. M. 2004. “A Close Look at the Spatial Structure Implied by the CAR and SAR Models.” *Journal of Statistical Planning and Inference* 121: 311–24.

Waller, Lance A., and Carol A. Gotway. 2004. *Applied Spatial Statistics for Public Health Data*. Hoboken, NJ: John Wiley & Sons.

Westerholt, Rene, Bernd Resch, and Alexander Zipf. 2015. “A Local Scale-Sensitive Indicator of Spatial Autocorrelation for Assessing High- and Low-Value Clusters in Multiscale Datasets.” *International Journal of Geographical Information Science* 29 (5): 868–87. doi:10.1080/13658816.2014.1002499.

Westerholt, Rene, Bernd Resch, Franz-Benjamin Mocnik, and Dirk Hoffmeister. 2018. “A Statistical Test on the Local Effects of Spatially Structured Variance.” *International Journal of Geographical Information Science* 32 (3): 571–600. doi:10.1080/13658816.2017.1402914.

Xu, M., C. L. Mei, and N. Yan. 2014. “A Note on the Null Distribution of the Local Spatial Heteroscedasticity (LOSH) Statistic.” *Annals of Regional Science* 52 (3): 697–710.