# Chapter 7 Introduction to sf and stars

This chapter introduces R packages sf and stars. sf provides a table format for simple features, where feature geometries are carried in a list-column. R package stars was written to support raster and vector datacubes (Chapter 6), and has raster data stacks and feature time series as special cases. sf first appeared on CRAN in 2016, stars in 2018. Development of both packages received support from the R Consortium as well as strong community engagement. The packages were designed to work together.

All functions operating on sf or stars objects start with st_, making it easy to recognize them or to search for them when using command line completion.

## 7.1 Package sf

Intended to succeed and replace R packages sp, rgeos and the vector parts of rgdal, R package sf (Pebesma 2018) was developed to move spatial data analysis in R closer to standards-based approaches seen in the industry and open source projects, to build upon more modern versions of the open source geospatial software stack (figure 1.6), and to allow for integration of R spatial software with the tidyverse if desired.

To do so, R package sf provides simple features access (J. Herring and others 2011), natively, to R. It provides an interface to several tidyverse packages, in particular to ggplot2, dplyr and tidyr. It can read and write data through GDAL, execute geometrical operations using GEOS (for projected coordinates) or s2geometry (for ellipsoidal coordinates), and carry out coordinate transformations or conversions using PROJ. External C++ libraries are interfaced using Rcpp (Eddelbuettel 2013).

Package sf represents sets of simple features in sf objects, a sub-class of a data.frame or tibble. sf objects contain at least one geometry list-column of class sfc, which for each element contains the geometry as an R object of class sfg. A geometry list-column acts as a variable in a data.frame or tibble, but has a more complex structure than e.g. numeric or character variables. Following the convention of PostGIS, all operations (functions, methods) that operate on sf objects or related objects start with st_.
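As a minimal sketch of these three classes:

library(sf)
(pt = st_point(c(7, 52)))      # sfg: a single geometry
(sfc = st_sfc(pt, crs = 4326)) # sfc: a geometry list-column
(x = st_sf(a = 1, geom = sfc)) # sf: a data.frame carrying the sfc as geometry column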

An sf object has the following meta-data:

• the name of the (active) geometry column, held in attribute sf_column
• for each non-geometry variable, the attribute-geometry relationship (section 5.1), held in attribute agr

An sfc geometry list-column has the following meta-data:

• the coordinate reference system held in attribute crs
• the bounding box held in attribute bbox
• the precision held in attribute precision
• the number of empty geometries held in attribute n_empty

These attributes may best be accessed or set by using functions like st_bbox, st_crs, st_set_crs, st_agr, st_set_agr, st_precision, and st_set_precision.
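For example, on a tiny geometry set (a sketch):

library(sf)
pts = st_sfc(st_point(c(7, 52)), st_point(c(8, 53)), crs = 'OGC:CRS84')
st_bbox(pts)      # the bbox attribute
st_crs(pts)       # the crs attribute
st_precision(pts) # the precision attribute; 0 means full precision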

### 7.1.1 Creation

One could create an sf object from scratch e.g. by

library(sf)
p1 = st_point(c(7.35, 52.42))
p2 = st_point(c(7.22, 52.18))
p3 = st_point(c(7.44, 52.19))
sfc = st_sfc(list(p1, p2, p3), crs = 'OGC:CRS84')
st_sf(elev = c(33.2, 52.1, 81.2), marker = c("Id01", "Id02", "Id03"),
    geom = sfc)
# Simple feature collection with 3 features and 2 fields
# Geometry type: POINT
# Dimension:     XY
# Bounding box:  xmin: 7.22 ymin: 52.2 xmax: 7.44 ymax: 52.4
# Geodetic CRS:  WGS 84
#   elev marker              geom
# 1 33.2   Id01 POINT (7.35 52.4)
# 2 52.1   Id02 POINT (7.22 52.2)
# 3 81.2   Id03 POINT (7.44 52.2)

Figure 7.1 gives an explanation of the components printed. Rather than creating objects from scratch, spatial data in R are typically read from an external source, which can be:

• an external file
• a request to a web service
• a dataset held in some form in another R package

The next section introduces reading from files; section 8.1 discusses handling of datasets too large to fit into working memory.

Reading datasets from an external “data source” (file, web service, or even string) is done using st_read:

library(sf)
(file = system.file("gpkg/nc.gpkg", package = "sf"))
# [1] "/home/edzer/R/x86_64-pc-linux-gnu-library/4.0/sf/gpkg/nc.gpkg"
nc = st_read(file)
# Reading layer `nc.gpkg' from data source
#   `/home/edzer/R/x86_64-pc-linux-gnu-library/4.0/sf/gpkg/nc.gpkg'
#   using driver `GPKG'
# Simple feature collection with 100 features and 14 fields
# Geometry type: MULTIPOLYGON
# Dimension:     XY
# Bounding box:  xmin: -84.3 ymin: 33.9 xmax: -75.5 ymax: 36.6
# Geodetic CRS:  NAD27

Here, the file name and path file is read from the sf package, which will have a different path on every machine, but is guaranteed to be present on every sf installation.

Command st_read has two arguments: the data source name (dsn) and the layer. In the example above, the GeoPackage (GPKG) file contains only a single layer that is being read. If it had contained multiple layers, the first layer would have been read and a warning would have been emitted. The available layers of a data set can be queried by

st_layers(file)
# Driver: GPKG
# Available layers:
#   layer_name geometry_type features fields
# 1    nc.gpkg Multi Polygon      100     14

Simple feature objects can be written with st_write, as in

(file = tempfile(fileext = ".gpkg"))
# [1] "/tmp/RtmpGEBwjt/file71264b572713.gpkg"
st_write(nc, file, layer = "layer_nc")
# Writing layer `layer_nc' to data source
#   `/tmp/RtmpGEBwjt/file71264b572713.gpkg' using driver `GPKG'
# Writing 100 features with 14 fields and geometry type Multi Polygon.

where the file format (GPKG) is derived from the file name extension.
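When a layer was written under a non-default name, as above, it can be read back by giving st_read (or read_sf) the layer name as its second argument; a small sketch, reusing the file just written:

nc2 = st_read(file, layer = "layer_nc", quiet = TRUE)
all.equal(nc$NAME, nc2$NAME)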

### 7.1.3 Subsetting

A very common operation is to subset objects; base R can use [ for this. The rules that apply to data.frame objects also apply to sf objects, e.g. that records 2-5 and columns 3-7 are selected by

nc[2:5, 3:7]

but with a few additional features, in particular:

• the drop argument is by default FALSE, meaning that the geometry column is always selected and an sf object is returned; when it is set to TRUE and the geometry column is not selected, the geometry is dropped and a data.frame is returned
• selection with a spatial (sf, sfc or sfg) object as first argument leads to selection of the features that spatially intersect with that object (see next section); other predicates than intersects can be chosen by setting parameter op to a function such as st_covers or any other binary predicate function listed in section 3.2.2; a small sketch follows this list
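A minimal sketch of such spatial selection, assuming nc as read above; the rectangle is an arbitrary helper geometry:

sq = st_as_sfc(st_bbox(nc[1, ])) # arbitrary rectangle: bbox of the first county
nc[sq, ]                         # features intersecting the rectangle
nc[sq, op = st_covered_by]       # features entirely covered by it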

### 7.1.4 Binary predicates

Binary predicates like st_intersects, st_covers, etc. (section 3.2.2) take two sets of features or feature geometries and return for all pairs whether the predicate is TRUE or FALSE. For large sets this would potentially result in a huge matrix, typically filled mostly with FALSE values, and for that reason a sparse representation is returned by default:

nc5 = nc[1:5, ]
nc7 = nc[1:7, ]
(i = st_intersects(nc5, nc7))
# Sparse geometry binary predicate list of length 5, where the predicate
# was `intersects'
#  1: 1, 2
#  2: 1, 2, 3
#  3: 2, 3
#  4: 4, 7
#  5: 5, 6

Figure 7.2 shows how the intersections of the first five with the first seven counties can be understood. We can transform the sparse logical matrix into a dense matrix by

as.matrix(i)
#       [,1]  [,2]  [,3]  [,4]  [,5]  [,6]  [,7]
# [1,]  TRUE  TRUE FALSE FALSE FALSE FALSE FALSE
# [2,]  TRUE  TRUE  TRUE FALSE FALSE FALSE FALSE
# [3,] FALSE  TRUE  TRUE FALSE FALSE FALSE FALSE
# [4,] FALSE FALSE FALSE  TRUE FALSE FALSE  TRUE
# [5,] FALSE FALSE FALSE FALSE  TRUE  TRUE FALSE

The number of counties that each county in nc5 intersects with is

lengths(i)
# [1] 2 3 2 2 2

and the other way around, the number of counties in nc5 that intersect with each of the counties in nc7 is

lengths(t(i))
# [1] 2 3 2 1 1 1 1

The object i is of class sgbp (sparse geometrical binary predicate), and is a list of integer vectors, with each element representing a row in the logical predicate matrix holding the column indices of the TRUE values for that row. It further holds some metadata like the predicate used, and the total number of columns. Methods available for sgbp objects include

methods(class = "sgbp")
#  [1] as.data.frame as.matrix     coerce        dim           initialize
#  [6] Ops           print         show          slotsFromS3   t
# see '?methods' for accessing help and source code

where the only Ops method available is !, the negation operation.
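For example (a sketch continuing the example above):

ni = !i     # sgbp holding the complement: pairs that do not intersect
lengths(ni) # per nc5 county: the number of nc7 counties not intersected
dim(i)      # dimensions of the corresponding dense matrix: 5 7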

### 7.1.5 tidyverse

The tidyverse is a collection of data science packages that work together, described e.g. in (Wickham and Grolemund 2017; Wickham et al. 2019). Package sf has tidyverse-style read and write functions, read_sf and write_sf, which return a tibble rather than a data.frame, do not print any output, and overwrite existing data by default.

Further tidyverse generics with methods for sf objects include filter, select, group_by, ungroup, mutate, transmute, rowwise, rename, slice, summarise, distinct, gather, pivot_longer, spread, nest, unnest, unite, separate, separate_rows, sample_n, and sample_frac. Most of these methods simply manage the metadata of sf objects, and make sure the geometry remains present. In case a user wants the geometry to be removed, one can use st_drop_geometry() or simply coerce to a tibble or data.frame before selecting:

library(tidyverse)
nc %>% as_tibble() %>% select(BIR74) %>% head(3)
# # A tibble: 3 × 1
#   BIR74
#   <dbl>
# 1  1091
# 2   487
# 3  3188
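Equivalently, using st_drop_geometry (a sketch):

nc %>% st_drop_geometry() %>% select(BIR74) %>% head(3)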

The summarise method for sf objects has two special arguments (a sketch follows this list):

• do_union (default TRUE) determines whether grouped geometries are unioned on return, so that they form a valid geometry
• is_coverage (default FALSE) in case the geometries grouped form a coverage (do not have overlaps), setting this to TRUE speeds up the unioning
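For example, a sketch that dissolves the county polygons into two groups; the grouping variable (AREA above 0.15) is an arbitrary choice for illustration:

library(dplyr)
nc %>%
    group_by(large = AREA > .15) %>%
    summarise(SID74 = sum(SID74)) # do_union = TRUE: each group's counties
                                  # are unioned into a single geometry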

The distinct method selects distinct records, where st_equals is used to evaluate distinctness of geometries.

filter can be used with the usual predicates; when one wants to use it with a spatial predicate, e.g. to select all counties less than 50 km away from Orange county, one could use

orange <- nc %>% filter(NAME == "Orange")
wd = st_is_within_distance(nc, orange, units::set_units(50, km))
o50 <- nc %>% filter(lengths(wd) > 0)
nrow(o50)
# [1] 17

Figure 7.3 shows the results of this analysis, and in addition a buffer around the county borders; note that this buffer serves for illustration, it was not used to select the counties.
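A sketch of how such an illustrative buffer could be drawn; st_buffer accepts a units distance for geodetic coordinates when s2 is used:

b = st_buffer(st_geometry(orange), units::set_units(50, km))
plot(st_geometry(nc), border = 'grey')
plot(b, add = TRUE, border = 'red')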

## 7.2 Spatial joins

In regular (left, right or inner) joins, joined records from a pair of tables are reported when one or more selected attributes match (are identical) in both tables. A spatial join is similar, but the criterion to join records is not equality of attributes but a spatial predicate. This leaves a wide variety of options to define spatially matching records, using the binary predicates listed in section 3.2.2. The concepts of “left”, “right”, “inner” or “full” joins remain identical to those of non-spatial joins, as do the options for handling records that have no (spatial) match.
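As a minimal sketch (assuming nc as read above), a spatial inner self-join that pairs every county with the counties it intersects:

j = st_join(nc["NAME"], nc["NAME"], join = st_intersects, left = FALSE)
nrow(j) # one record per intersecting (NAME.x, NAME.y) pair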

When using spatial joins, each record may have several matched records, yielding a large result table. A way to reduce this complexity may be to select from the matching records the one with the largest overlap with the target geometry. An example of this is shown (visually) in figure 7.4; this is done using st_join with argument largest = TRUE.

# example of largest = TRUE:
gr = st_sf(
    label = apply(expand.grid(1:10, LETTERS[10:1])[,2:1], 1, paste0, collapse = " "),
    geom = st_make_grid(nc))
gr$col = sf.colors(10, categorical = TRUE, alpha = .3)
# cut, to check, NA's work out:
gr = gr[-(1:30),]
suppressWarnings(nc_j <- st_join(nc, gr, largest = TRUE))
# the two datasets:
opar = par(mfrow = c(2,1), mar = rep(0,4))
plot(st_geometry(nc_j))
plot(st_geometry(gr), add = TRUE, col = gr$col)
text(st_coordinates(st_centroid(st_geometry(gr))), labels = gr$label)
# the joined dataset:
plot(st_geometry(nc_j), border = 'black', col = nc_j$col)

### 7.4.2 Example: Bristol origin-destination datacube

The data used for this example come from Lovelace, Nowosad, and Muenchow (2019), and concern origin-destination (OD) counts: the number of persons going from region A to region B, by transportation mode. We have feature geometries for the 102 origin and destination regions, shown in figure 7.14.

library(spDataLarge)
plot(st_geometry(bristol_zones), axes = TRUE, graticule = TRUE)
plot(st_geometry(bristol_zones)[33], col = 'red', add = TRUE)

and the OD counts come in a table with OD pairs as records, and transportation mode as variables:

head(bristol_od)
# # A tibble: 6 × 7
#   o         d           all bicycle  foot car_driver train
#   <chr>     <chr>     <dbl>   <dbl> <dbl>      <dbl> <dbl>
# 1 E02002985 E02002985   209       5   127         59     0
# 2 E02002985 E02002987   121       7    35         62     0
# 3 E02002985 E02003036    32       2     1         10     1
# 4 E02002985 E02003043   141       1     2         56    17
# 5 E02002985 E02003049    56       2     4         36     0
# 6 E02002985 E02003054    42       4     0         21     0

We see that many combinations of origin and destination are implicit zeroes; otherwise, these two numbers would have been equal:

nrow(bristol_zones)^2 # all combinations
# [1] 10404
nrow(bristol_od) # non-zero combinations
# [1] 2910

We will form a three-dimensional vector datacube with origin, destination and transportation mode as dimensions. For this, we first “tidy” the bristol_od table to have origin (o), destination (d), transportation mode (mode), and count (n) as variables, using pivot_longer:

# create O-D-mode array:
bristol_tidy <- bristol_od %>%
    select(-all) %>%
    pivot_longer(3:6, names_to = "mode", values_to = "n")
head(bristol_tidy)
# # A tibble: 6 × 4
#   o         d         mode           n
#   <chr>     <chr>     <chr>      <dbl>
# 1 E02002985 E02002985 bicycle        5
# 2 E02002985 E02002985 foot         127
# 3 E02002985 E02002985 car_driver    59
# 4 E02002985 E02002985 train          0
# 5 E02002985 E02002987 bicycle        7
# 6 E02002985 E02002987 foot          35

Next, we form the three-dimensional array a, filled with zeroes:

od = bristol_tidy %>% pull("o") %>% unique()
nod = length(od)
mode = bristol_tidy %>% pull("mode") %>% unique()
nmode = length(mode)
a = array(0L, c(nod, nod, nmode),
    dimnames = list(o = od, d = od, mode = mode))
dim(a)
# [1] 102 102   4

We see that the dimensions are named with the zone names (o, d) and the transportation mode name (mode). Every row of bristol_tidy denotes an array entry, and we can use this to fill the non-zero entries of the array with their appropriate value (n):

a[as.matrix(bristol_tidy[c("o", "d", "mode")])] = bristol_tidy$n

To be sure that there is not an order mismatch between the zones in bristol_zones and the zone names in bristol_tidy, we can get the right set of zones by:

order = match(od, bristol_zones$geo_code) # it happens this equals 1:102
zones = st_geometry(bristol_zones)[order]

(It happens that the order is already correct, but it is good practice to not assume this).

Next, with zones and modes we can create a stars dimensions object:

library(stars)
(d = st_dimensions(o = zones, d = zones, mode = mode))
#      from  to offset delta refsys point
# o       1 102     NA    NA WGS 84 FALSE
# d       1 102     NA    NA WGS 84 FALSE
# mode    1   4     NA    NA     NA FALSE
#                                                                 values
# o    MULTIPOLYGON (((-2.51 51.4,...,...,MULTIPOLYGON (((-2.55 51.5,...
# d    MULTIPOLYGON (((-2.51 51.4,...,...,MULTIPOLYGON (((-2.55 51.5,...
# mode                                                 bicycle,...,train

and finally build our stars object from a and d:

(odm = st_as_stars(list(N = a), dimensions = d))
# stars object with 3 dimensions and 1 attribute
# attribute(s):
#    Min. 1st Qu. Median Mean 3rd Qu. Max.
# N     0       0      0  4.8       0 1296
# dimension(s):
#      from  to offset delta refsys point
# o       1 102     NA    NA WGS 84 FALSE
# d       1 102     NA    NA WGS 84 FALSE
# mode    1   4     NA    NA     NA FALSE
#                                                                 values
# o    MULTIPOLYGON (((-2.51 51.4,...,...,MULTIPOLYGON (((-2.55 51.5,...
# d    MULTIPOLYGON (((-2.51 51.4,...,...,MULTIPOLYGON (((-2.55 51.5,...
# mode                                                 bicycle,...,train

We can take a single slice through this three-dimensional array, e.g. for zone 33 (figure 7.14), by odm[,,33], and plot it with

plot(adrop(odm[,,33]) + 1, logz = TRUE)

the result of which is shown in figure 7.15. Subsetting this way, we take all attributes (there is only one: N) since the first argument is empty, all origin regions (second argument empty), destination zone 33 (third argument), and all transportation modes (fourth argument empty, or missing).
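An equivalent slice can be taken with the dplyr-style slice method for stars objects (a sketch; along and index are the arguments of stars' slice method):

library(dplyr)
odm %>% slice(along = "d", index = 33) # drops the destination dimension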

We plotted this particular zone because it has the largest number of travelers as its destination. We can find this out by summing all origins and travel modes by destination:

d = st_apply(odm, 2, sum)
which.max(d[[1]])
# [1] 33

Other aggregations we can carry out include: total transportation by OD (102 x 102):

st_apply(odm, 1:2, sum)
# stars object with 2 dimensions and 1 attribute
# attribute(s):
#      Min. 1st Qu. Median Mean 3rd Qu. Max.
# sum     0       0      0 19.2      19 1434
# dimension(s):
#   from  to offset delta refsys point
# o    1 102     NA    NA WGS 84 FALSE
# d    1 102     NA    NA WGS 84 FALSE
#                                                              values
# o MULTIPOLYGON (((-2.51 51.4,...,...,MULTIPOLYGON (((-2.55 51.5,...
# d MULTIPOLYGON (((-2.51 51.4,...,...,MULTIPOLYGON (((-2.55 51.5,...

Origin totals, by mode:

st_apply(odm, c(1,3), sum)
# stars object with 2 dimensions and 1 attribute
# attribute(s):
#      Min. 1st Qu. Median Mean 3rd Qu. Max.
# sum     1    57.5    214  490     771 2903
# dimension(s):
#      from  to offset delta refsys point
# o       1 102     NA    NA WGS 84 FALSE
# mode    1   4     NA    NA     NA FALSE
#                                                                 values
# o    MULTIPOLYGON (((-2.51 51.4,...,...,MULTIPOLYGON (((-2.55 51.5,...
# mode                                                 bicycle,...,train

Destination totals, by mode:

st_apply(odm, c(2,3), sum)
# stars object with 2 dimensions and 1 attribute
# attribute(s):
#      Min. 1st Qu. Median Mean 3rd Qu.  Max.
# sum     0      13    104  490     408 12948
# dimension(s):
#      from  to offset delta refsys point
# d       1 102     NA    NA WGS 84 FALSE
# mode    1   4     NA    NA     NA FALSE
#                                                                 values
# d    MULTIPOLYGON (((-2.51 51.4,...,...,MULTIPOLYGON (((-2.55 51.5,...
# mode                                                 bicycle,...,train

Origin totals, summed over modes:

o = st_apply(odm, 1, sum)

Destination totals, summed over modes (we had this):

d = st_apply(odm, 2, sum)

We plot o and d together after joining them by

x = (c(o, d, along = list(od = c("origin", "destination"))))
plot(x, logz = TRUE)

the result of which is shown in figure 7.16.

There is something to say for the argument that such maps give the wrong message, as both the colour and the size of a polygon give an impression of amount. To take the amount out of the counts, we can compute densities (count / km$$^2$$) by

library(units)
a = set_units(st_area(st_as_sf(o)), km^2)
o$sum_km = o$sum / a
d$sum_km = d$sum / a
od = c(o["sum_km"], d["sum_km"], along = list(od = c("origin", "destination")))
plot(od, logz = TRUE)

shown in figure 7.17. Another way to normalize these totals would be to divide them by population size.

### 7.4.3 Tidy array data

The tidy data paper (Wickham 2014b) may suggest that such array data should be processed not as an array, but in a long table where each row holds (region, class, year, value), and it is always good to be able to do this. For primary handling and storage however, this is often not an option, because:

• a lot of array data are collected or generated as array data, e.g. by imagery or other sensory devices, or e.g. by climate models
• it is easier to derive the long table form from the array than vice versa (see the sketch after this list)
• the long table form requires much more memory, since the space occupied by dimension values is $$O(nmp)$$, rather than $$O(n+m+p)$$
• when missing-valued cells are dropped, the long table form loses the implicit indexing of the array form
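To illustrate the second and third points, a base R sketch deriving the long form from the dense array a of section 7.4.2:

long = as.data.frame(as.table(a), responseName = "n")
head(long, 3) # columns o, d, mode, n: one row per array cell
nrow(long)    # 102 * 102 * 4 = 41616 rows, versus the 102 + 102 + 4
              # dimension values that the array form stores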

To put this argument to the extreme, consider for instance that all image, video, and sound data are stored in array form; few people would make a real case for storing them in a long table form instead. Nevertheless, R packages like tsibble take this approach, and have to deal with the ambiguous ordering of multiple records with identical time steps for different spatial features, and with indexing them; both problems are solved automatically by the array form, at the cost of using dense arrays, the approach taken by package stars.

Package stars tries to follow the tidy manifesto to handle array sets, and has particularly developed support for the case where one or more of the dimensions refer to space, and/or time.

## 7.5 raster-to-vector, vector-to-raster

Section 1.3 already showed some examples of raster-to-vector and vector-to-raster conversions; this section adds some code details and examples.

### 7.5.1 vector-to-raster

st_as_stars is meant as a method to transform objects into stars objects. However, not all stars objects are raster objects, and the method for sf objects creates a vector data cube with the geometry as its spatial (vector) dimension, and attributes as attributes. When given a feature geometry (sfc) object, st_as_stars will rasterize it, as shown in section 7.7, and in figure 7.18.

(file = system.file("gpkg/nc.gpkg", package="sf"))
# [1] "/home/edzer/R/x86_64-pc-linux-gnu-library/4.0/sf/gpkg/nc.gpkg"
read_sf(file) %>%
    st_geometry() %>%
    st_as_stars() %>%
    plot()

Here, st_as_stars can be parameterized to control cell size, number of cells, and/or extent. The cell values returned are 0 for cells with center point outside the geometry and 1 for cells with center point inside or on the border of the geometry.
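A parameterized call might look like the following sketch; it assumes that the nx and ny arguments are passed on to the underlying bounding box method (the 200 x 100 grid is an arbitrary choice):

read_sf(file) %>%
    st_geometry() %>%
    st_as_stars(nx = 200, ny = 100) %>% # assumption: 200 x 100 cell target grid
    plot()

Rasterizing existing features is done using st_rasterize, as also shown in figure 1.4: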

library(dplyr)
read_sf(file) %>%
    mutate(name = as.factor(NAME)) %>%
    select(SID74, SID79, name) %>%
    st_rasterize()
# stars object with 2 dimensions and 3 attributes
# attribute(s):
#      SID74           SID79            name
#  Min.   : 0      Min.   : 0      Sampson :  655
#  1st Qu.: 3      1st Qu.: 3      Columbus:  648
#  Median : 5      Median : 6      Robeson :  648
#  Mean   : 8      Mean   :10      Bladen  :  604
#  3rd Qu.:10      3rd Qu.:13      Wake    :  590
#  Max.   :44      Max.   :57      (Other) :30952
#  NA's   :30904   NA's   :30904   NA's    :30904
# dimension(s):
#   from  to   offset      delta refsys point values x/y
# x    1 461 -84.3239  0.0192484  NAD27 FALSE   NULL [x]
# y    1 141  36.5896 -0.0192484  NAD27 FALSE   NULL [y]

Similarly, line and point geometries can be rasterized, as shown in figure 7.19.

read_sf(file) %>%
    st_cast("MULTILINESTRING") %>%
    select(CNTY_ID) %>%
    st_rasterize() %>%
    plot()

## 7.6 Coordinate transformations and conversions

### 7.6.1 st_crs

Spatial objects of class sf or stars contain a coordinate reference system that can be retrieved or replaced with st_crs, or set or replaced in a pipe with st_set_crs. Coordinate reference systems can be set with an EPSG code, like st_crs(4326), which will be converted to st_crs('EPSG:4326'), with a PROJ.4 string like "+proj=utm +zone=25 +south", with a name like “WGS84”, or with a name preceded by an authority, like “OGC:CRS84”; alternatives include a coordinate reference system definition in WKT, WKT-2 (section 2.5) or PROJJSON.

The object returned contains two fields:

• wkt with the WKT-2 representation
• input with the user input, if any, or a human readable description of the coordinate reference system, if available
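A small sketch of these fields; the comparison uses the == method for crs objects:

crs = st_crs("OGC:CRS84")
crs$input                           # the user input
st_crs(4326) == st_crs("EPSG:4326") # TRUE: both resolve to the same CRS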

Note that PROJ.4 strings can be used to define some coordinate reference systems, but they cannot be used to represent coordinate reference systems. Conversion of the WKT-2 in a crs object to a proj4string using the $proj4string method, as in

x = st_crs("OGC:CRS84")
x$proj4string
# [1] "+proj=longlat +datum=WGS84 +no_defs"

may succeed but is not in general lossless or invertible. Using PROJ.4 strings, for instance to define a parameterized, projected coordinate reference system, is fine as long as it is associated with the WGS84 datum.

### 7.6.2 st_transform, sf_project

Coordinate transformations or conversions (section 2.4) for sf or stars objects are carried out with st_transform, which takes as its first argument a spatial object of class sf or stars that has a coordinate reference system set, and as its second argument a crs object (or something that can be converted to one with st_crs). When PROJ finds more than one possibility to transform or convert from the source crs to the target crs, it chooses the one with the highest declared accuracy. More fine-grained control over these options is explained in section 7.6.5.

A lower-level function to transform or convert coordinates not in sf or stars objects is sf_project: it takes a matrix with coordinates and a source and target crs, and returns transformed or converted coordinates.
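A minimal sketch of sf_project; the target CRS here (UTM zone 32N) is an arbitrary choice:

pts = rbind(c(7.35, 52.42), c(7.22, 52.18)) # (longitude, latitude) pairs
sf_project("OGC:CRS84", "EPSG:32632", pts)  # matrix of projected coordinates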

### 7.6.3 sf_proj_info

Function sf_proj_info can be used to query the projections, ellipsoids, units, and prime meridians available in the PROJ software (a sketch follows this list). It takes a single parameter, type, which can have the following values:

• type = "proj" lists the short and long names of available projections; short names can be used in a “+proj=name” string
• type = "ellps" lists available ellipses, with name, long name, and ellipsoidal parameters
• type = "units" lists the available length units, with conversion constant to meters
• type = "prime_meridians" lists the prime meridians with their position with respect to the Greenwich meridian

### 7.6.4 proj.db, datum grids, cdn.proj.org, local cache

Datum grids (section 2.4) can be installed locally, or be read from the PROJ datum grid CDN at https://cdn.proj.org/. If installed locally, they are read from the PROJ search path, which is shown by

sf_proj_search_paths()
# [1] "/home/edzer/.local/share/proj" "/usr/share/proj"

The main PROJ database is proj.db, an sqlite3 database typically found at

paste0(tail(sf_proj_search_paths(), 1), .Platform$file.sep, "proj.db") # [1] "/usr/share/proj/proj.db" which can be read. The version of the snapshot of the EPSG database included in each PROJ release is stated in the "metadata" table of proj.db; the version of the PROJ runtime used by sf is shown by sf_extSoftVersion()["PROJ"] # PROJ # "7.2.1" If for a particular coordinate transformation datum grids are not locally found, PROJ will search for online datum grids in the PROJ CDN when sf_proj_network() # [1] FALSE returns TRUE. By default it is set to FALSE, but sf_proj_network(TRUE) # [1] "https://cdn.proj.org" sets it to TRUE and returns the URL of the network resource used; this resource can also be set to another resource, that may be faster or less limited. After querying a datum grid on the CDN, PROJ writes the portion of the grid queried (not, by default, the entire grid) to a local cache, which is another sqlite3 database found locally in a user directory, e.g. at list.files(sf_proj_search_paths()[1], full.names = TRUE) # [1] "/home/edzer/.local/share/proj/cache.db" that will be searched first in subsequent datum grid queries. ### 7.6.5 Transformation pipelines Internally, PROJ uses a so-called coordinate operation pipeline, to represent the sequence of operations to get from a source CRS to a target CRS. Given multiple options to go from source to target, st_transform chooses the one with highest accuracy. We can query the options available by (p = sf_proj_pipelines("EPSG:4326", "EPSG:22525")) # Candidate coordinate operations found: 5 # Strict containment: FALSE # Axis order auth compl: FALSE # Source: EPSG:4326 # Target: EPSG:22525 # Best instantiable operation has accuracy: 2 m # Description: axis order change (2D) + Inverse of Corrego Alegre 1970-72 to # WGS 84 (2) + UTM zone 25S # Definition: +proj=pipeline +step +proj=unitconvert +xy_in=deg +xy_out=rad # +step +inv +proj=hgridshift # +grids=br_ibge_CA7072_003.tif +step +proj=utm # +zone=25 +south +ellps=intl and see that pipeline with the highest accuracy is summarised; we can see that it specifies use of a datum grid. Had we not switched on the network search, we would have obtained a different result: sf_proj_network(FALSE) # character(0) sf_proj_pipelines("EPSG:4326", "EPSG:22525") # Candidate coordinate operations found: 5 # Strict containment: FALSE # Axis order auth compl: FALSE # Source: EPSG:4326 # Target: EPSG:22525 # Best instantiable operation has accuracy: 5 m # Description: axis order change (2D) + Inverse of Corrego Alegre 1970-72 to # WGS 84 (4) + UTM zone 25S # Definition: +proj=pipeline +step +proj=unitconvert +xy_in=deg +xy_out=rad # +step +proj=push +v_3 +step +proj=cart # +ellps=WGS84 +step +proj=helmert +x=206.05 # +y=-168.28 +z=3.82 +step +inv +proj=cart # +ellps=intl +step +proj=pop +v_3 +step +proj=utm # +zone=25 +south +ellps=intl # Operation 4 is lacking 1 grid with accuracy 2 m # Missing grid: br_ibge_CA7072_003.tif # URL: https://cdn.proj.org/br_ibge_CA7072_003.tif and a report that a datum grid is missing. The object returned by sf_proj_pipelines is a sub-classed data.frame, with columns names(p) # [1] "id" "description" "definition" "has_inverse" "accuracy" # [6] "axis_order" "grid_count" "instantiable" "containment" and we can list for instance the accuracies by p$accuracy
# [1]  5  5  8  2 NA

Here, NA refers to “ballpark accuracy”, which may be anything in the 30-120 m range:

p[is.na(p$accuracy),]
# Candidate coordinate operations found: 1
# Strict containment: FALSE
# Axis order auth compl: FALSE
# Source: EPSG:4326
# Target: EPSG:22525
# Best instantiable operation has only ballpark accuracy
# Description: axis order change (2D) + Ballpark geographic offset from WGS 84
#   to Corrego Alegre 1970-72 + UTM zone 25S
# Definition: +proj=pipeline +step +proj=unitconvert +xy_in=deg +xy_out=rad
#   +step +proj=utm +zone=25 +south +ellps=intl

The default, most accurate pipeline chosen by st_transform can be overridden by specifying the pipeline argument, selected from the set of options in p$definition.

### 7.6.6 Axis order

As mentioned in section 2.5, the definition of EPSG:4326,

# GEOGCRS["WGS 84",
#     DATUM["World Geodetic System 1984",
#         ELLIPSOID["WGS 84",6378137,298.257223563,
#             LENGTHUNIT["metre",1]]],
#     PRIMEM["Greenwich",0,
#         ANGLEUNIT["degree",0.0174532925199433]],
#     CS[ellipsoidal,2],
#         AXIS["geodetic latitude (Lat)",north,
#             ORDER[1],
#             ANGLEUNIT["degree",0.0174532925199433]],
#         AXIS["geodetic longitude (Lon)",east,
#             ORDER[2],
#             ANGLEUNIT["degree",0.0174532925199433]],
#     USAGE[
#         SCOPE["Horizontal component of 3D system."],
#         AREA["World."],
#         BBOX[-90,-180,90,180]],
#     ID["EPSG",4326]]

indicates that the first axis is associated with latitude and the second with longitude; this is also the case for a number of other ellipsoidal coordinate reference systems. Although this is what the authority (EPSG) prescribes, it is not how most datasets are currently stored. Like most other software, package sf by default ignores this, and interprets ellipsoidal coordinates as (longitude, latitude). If however data needs to be read e.g. from a WFS service that wants to be compliant with the authority, one can set

st_axis_order(TRUE)

to globally instruct sf, when calling GDAL and PROJ routines, that authority compliance (latitude, longitude order) is assumed. Authority compliance may however lead to problems, e.g. when plotting data. At the time of writing, the plot method for sf objects respects the axis order flag and will swap coordinates using the transformation pipeline "+proj=pipeline +step +proj=axisswap +order=2,1" before plotting them, but e.g. geom_sf() in ggplot2 has not been modified to do this. As mentioned earlier, the ambiguity of EPSG:4326 is resolved by replacing it with OGC:CRS84.
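A sketch of swapping axis order explicitly with that pipeline, using the pipeline argument of st_transform discussed in section 7.6.5:

pt = st_sfc(st_point(c(52.1, 7.3)), crs = 'EPSG:4326') # here read as (lat, lon)
st_axis_order(TRUE)                                    # authority-compliant order
st_transform(pt, pipeline = "+proj=pipeline +step +proj=axisswap +order=2,1")
st_axis_order(FALSE)                                   # back to the default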

## 7.7 Transforming and warping rasters

When using st_transform on a raster data set, as e.g. in

tif = system.file("tif/L7_ETMs.tif", package = "stars")
read_stars(tif) %>%
    st_transform(4326)
# stars object with 3 dimensions and 1 attribute
# attribute(s):
#              Min. 1st Qu. Median Mean 3rd Qu. Max.
# L7_ETMs.tif     1      54     69 68.9      86  255
# dimension(s):
#      from  to offset delta refsys point                          values x/y
# x       1 349     NA    NA WGS 84 FALSE [349x352] -34.9165,...,-34.8261 [x]
# y       1 352     NA    NA WGS 84 FALSE  [349x352] -8.0408,...,-7.94995 [y]
# band    1   6     NA    NA     NA    NA                            NULL
# curvilinear grid

we see that a curvilinear grid is created: for every grid cell the coordinates are computed in the new CRS, and these no longer form a regular grid. Plotting such data is extremely slow, as a small polygon is computed for every grid cell and then plotted. The advantage is that no information is lost: grid cell values remain identical after the projection.

When we start with a raster on a regular grid and want to obtain a regular grid in a new coordinate reference system, we need to warp the grid: we need to recreate a grid at new locations, and use some rule to assign values to new grid cells. Rules can involve using the nearest value, or using some form of interpolation. This operation is not lossless and not invertible.

The best option for warping is to specify the target grid as a stars object. When only a target CRS is specified, default options for the target grid are picked that may be completely inappropriate for the problem at hand. An example workflow that uses only a target CRS is

read_stars(tif) %>%
    st_warp(crs = st_crs(4326)) %>%
    st_dimensions()
#      from  to   offset        delta refsys point values x/y
# x       1 350 -34.9166  0.000259243 WGS 84    NA   NULL [x]
# y       1 352 -7.94982 -0.000259243 WGS 84    NA   NULL [y]
# band    1   6       NA           NA     NA    NA   NULL

which creates a raster pretty close to the original, but then the transformation is also relatively modest. For a workflow that creates a target raster first, here with exactly the same number of rows and columns as the original raster, one could use:

r = read_stars(tif)
grd = st_bbox(r) %>%
st_as_sfc() %>%
st_transform(4326) %>%
st_bbox() %>%
st_as_stars(nx = dim(r)["x"], ny = dim(r)["y"])
st_warp(r, grd)
# stars object with 3 dimensions and 1 attribute
# attribute(s):
#              Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
# L7_ETMs.tif     1      54     69 68.9      86  255 6180
# dimension(s):
#      from  to   offset        delta refsys point values x/y
# x       1 349 -34.9166  0.000259666 WGS 84    NA   NULL [x]
# y       1 352 -7.94982 -0.000258821 WGS 84    NA   NULL [y]
# band    1   6       NA           NA     NA    NA   NULL

## 7.8 Exercises

Use R to solve the following exercises.

1. Find the names of the nc counties that intersect LINESTRING(-84 35,-78 35); use [ for this, and as an alternative use st_join().
2. Repeat this after setting sf_use_s2(FALSE); compute the difference (hint: use setdiff()), and color the counties of the difference using color ‘#88000088’.
3. Plot the two different lines in a single plot; note that R will always plot a straight line as straight in the projection currently used; st_segmentize can be used to add points on a straight line, or on a great circle for ellipsoidal coordinates.
4. NDVI, the normalized difference vegetation index, is computed as (NIR - R)/(NIR + R), with NIR the near infrared and R the red band. Read the L7_ETMs.tif file into object x, and distribute the band dimension over attributes by split(x, "band"). Then, add attribute NDVI to this object by using an expression that uses the NIR (band 4) and R (band 3) attributes directly.
5. Compute NDVI for the L7_ETMs.tif image by reducing the band dimension, using st_apply and a function ndvi = function(x) { (x[4] - x[3]) / (x[4] + x[3]) }. Plot the result, and write the result to a GeoTIFF.
6. Use st_transform to transform the stars object read from L7_ETMs.tif to EPSG:4326. Print the object. Is this a regular grid? Plot the first band using arguments axes=TRUE and border=NA, and explain why this takes such a long time.
7. Use st_warp to warp the L7_ETMs.tif object to EPSG:4326, and plot the resulting object with axes=TRUE. Why is the plot created much faster than after st_transform?
8. Using a vector representation of the raster L7_ETMs, plot the intersection with a circular area around POINT(293716 9113692) with radius 75 m, and compute the area-weighted mean pixel values for this circle. Compare the area-weighted values with those obtained by aggregate using the vector data, and by aggregate using the raster data, using exact = FALSE (default) and exact = TRUE. Explain the differences.

### References

Eddelbuettel, Dirk. 2013. Seamless R and C++ Integration with Rcpp. Springer.

Greenberg, Jonathan Asher, and Matteo Mattiuzzi. 2020. gdalUtils: Wrappers for the Geospatial Data Abstraction Library (GDAL) Utilities. https://CRAN.R-project.org/package=gdalUtils.

Herring, John, and others. 2011. “OpenGIS Implementation Standard for Geographic Information - Simple Feature Access - Part 1: Common Architecture [Corrigendum].”

Hijmans, Robert J. 2021a. raster: Geographic Data Analysis and Modeling. https://rspatial.org/raster.

Hijmans, Robert J. 2021b. terra: Spatial Data Analysis. https://rspatial.org/terra/.

Lovelace, Robin, Jakub Nowosad, and Jannes Muenchow. 2019. Geocomputation with R. Chapman; Hall/CRC. https://geocompr.robinlovelace.net/.

Pebesma, Edzer. 2012. “spacetime: Spatio-Temporal Data in R.” Journal of Statistical Software 51 (7): 1–30. https://www.jstatsoft.org/v51/i07/.

Pebesma, Edzer. 2018. “Simple Features for R: Standardized Support for Spatial Vector Data.” The R Journal 10 (1): 439–46. https://doi.org/10.32614/RJ-2018-009.

Plate, Tony, and Richard Heiberger. 2016. Abind: Combine Multidimensional Arrays. https://CRAN.R-project.org/package=abind.

Wickham, Hadley. 2014b. “Tidy Data.” Journal of Statistical Software 59 (1). https://www.jstatsoft.org/article/view/v059i10.

Wickham, Hadley, Mara Averick, Jennifer Bryan, Winston Chang, Lucy D’Agostino McGowan, Romain François, Garrett Grolemund, et al. 2019. “Welcome to the Tidyverse.” Journal of Open Source Software 4 (43): 1686. https://joss.theoj.org/papers/10.21105/joss.01686.

Wickham, Hadley, and Garrett Grolemund. 2017. R for Data Science. O’Reilly. http://r4ds.had.co.nz/.