Chapter 8 Large data sets
This chapter describes how large spatial and spatiotemporal datasets can be handled with R, with a focus on the packages sf and stars.
For practical use, we classify large data sets as either:
- too large to fit in working memory,
- also too large to fit on the local hard drive, or
- also too large to download to locally managed compute infrastructure (such as network-attached storage)
These three categories correspond very roughly to Gigabyte-, Terabyte- and Petabyte-sized data sets.
8.1 Vector data: sf
8.1.1 Reading from disk
Function st_read reads vector data from disk using GDAL, and keeps the data read in working memory. In case the file is too large to be read into working memory, several options exist to read parts of the file. The first is to set the argument wkt_filter to a WKT text string containing a geometry: only geometries from the target file that intersect with this geometry will be returned. An example is
# [1] "/home/edzer/R/x86_64-pc-linux-gnu-library/4.0/sf/gpkg/nc.gpkg"
bb = "POLYGON ((-81.7 36.2, -80.4 36.2, -80.4 36.5, -81.7 36.5, -81.7 36.2))"
nc.1 = st_read(file, wkt_filter = bb)
# Reading layer `nc.gpkg' from data source
# `/home/edzer/R/x86_64-pc-linux-gnu-library/4.0/sf/gpkg/nc.gpkg'
# using driver `GPKG'
# Simple feature collection with 8 features and 14 fields
# Geometry type: MULTIPOLYGON
# Dimension: XY
# Bounding box: xmin: -81.9 ymin: 36 xmax: -80 ymax: 36.6
# Geodetic CRS: NAD27
The second option is to use the query argument of st_read, which takes any query in the “OGR SQL” dialect; this can be used to select features from a layer and to limit the fields returned. An example is:
q = paste("select BIR74,SID74,geom from 'nc.gpkg' where BIR74 > 1500")
nc.2 = st_read(file, query = q)
# Reading query `select BIR74,SID74,geom from 'nc.gpkg' where BIR74 > 1500' from data source `/home/edzer/R/x86_64-pc-linux-gnu-library/4.0/sf/gpkg/nc.gpkg'
# using driver `GPKG'
# Simple feature collection with 61 features and 2 fields
# Geometry type: MULTIPOLYGON
# Dimension: XY
# Bounding box: xmin: -83.3 ymin: 33.9 xmax: -76.1 ymax: 36.6
# Geodetic CRS: NAD27
Note that nc.gpkg is the layer name, which can be obtained from file using st_layers. Sequences of records can be read using LIMIT and OFFSET; to read records 51-60, use
q = paste("select BIR74,SID74,geom from 'nc.gpkg' LIMIT 10 OFFSET 50")
nc.2 = st_read(file, query = q)
# Reading query `select BIR74,SID74,geom from 'nc.gpkg' LIMIT 10 OFFSET 50' from data source `/home/edzer/R/x86_64-pc-linux-gnu-library/4.0/sf/gpkg/nc.gpkg'
# using driver `GPKG'
# Simple feature collection with 10 features and 2 fields
# Geometry type: MULTIPOLYGON
# Dimension: XY
# Bounding box: xmin: -84 ymin: 35.2 xmax: -75.5 ymax: 36.2
# Geodetic CRS: NAD27
Further query options include selection on geometry type or polygon area. When the dataset queried is a spatial database, the query is passed on to the database and not interpreted by GDAL; this means that more powerful features will be available. Further information is found in the GDAL documentation under “OGR SQL dialect”.
Very large files or directories that are zipped can be read without the need to unzip them, using the /vsizip (for zip), /vsigzip (for gzip), or /vsitar (for tar files) prefix; this is followed by the path to the archive, followed by the path of the file inside this archive. Reading files this way may come at some computational cost.
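As a sketch, assuming an archive nc.zip in the working directory that contains nc.shp (both file names hypothetical):
# read a shapefile directly from inside the zip archive, without unzipping:
nc_zip = st_read("/vsizip/nc.zip/nc.shp")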
8.1.2 Reading from databases, dbplyr
Although GDAL has support for several spatial databases, and as mentioned above passes on SQL in the query argument to the database, it is sometimes beneficial to directly read from and write to a spatial database using the R database drivers. An example of this is:
pg <- DBI::dbConnect(
RPostgres::Postgres(),
host = "localhost",
dbname = "postgis")
st_read(pg, query = "select BIR74,wkb_geometry from nc limit 3")
# Simple feature collection with 3 features and 1 field
# Geometry type: MULTIPOLYGON
# Dimension: XY
# Bounding box: xmin: -81.7 ymin: 36.2 xmax: -80.4 ymax: 36.6
# Geodetic CRS: NAD27
# bir74 wkb_geometry
# 1 1091 MULTIPOLYGON (((-81.5 36.2,...
# 2 487 MULTIPOLYGON (((-81.2 36.4,...
# 3 3188 MULTIPOLYGON (((-80.5 36.2,...
A spatial query might look like
q = "SELECT BIR74,wkb_geometry FROM nc WHERE \
ST_Intersects(wkb_geometry, 'SRID=4267;POINT (-81.49826 36.4314)');"
st_read(pg, query = q)
# Simple feature collection with 1 feature and 1 field
# Geometry type: MULTIPOLYGON
# Dimension: XY
# Bounding box: xmin: -81.7 ymin: 36.2 xmax: -81.2 ymax: 36.6
# Geodetic CRS: NAD27
# bir74 wkb_geometry
# 1 1091 MULTIPOLYGON (((-81.5 36.2,...
Here, the intersection is done in the database, and uses the spatial index typically present.
The same mechanism works when using dplyr with a database backend: spatial queries can be formulated in filter expressions and are passed on to the database.
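A sketch of such a pipeline, assuming the connection pg and the table nc from above (dplyr translates the verbs to SQL; ST_Intersects is evaluated by PostGIS, not in R):
library(dplyr, warn.conflicts = FALSE)
nc_db = tbl(pg, "nc") # lazy table reference; no data are read yet
nc_db |>
    filter(ST_Intersects(wkb_geometry,
        'SRID=4267;POINT (-81.49826 36.4314)')) |>
    collect() # executes the query and fetches the result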
# # A tibble: 1 × 16
# ogc_fid area perimeter cnty_ cnty_id name fips fipsno cress_id bir74 sid74
# <int> <dbl> <dbl> <dbl> <dbl> <chr> <chr> <dbl> <int> <dbl> <dbl>
# 1 1 0.114 1.44 1825 1825 Ashe 37009 37009 5 1091 1
# # … with 5 more variables: nwbir74 <dbl>, bir79 <dbl>, sid79 <dbl>,
# # nwbir79 <dbl>, wkb_geometry <pq_gmtry>
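Without collect(), the result remains a lazy query, for which only the first records are fetched when printing; a second sketch, filtering on area:
nc_db |> filter(ST_Area(wkb_geometry) > 0.1) |> head(3)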
# # Source: lazy query [?? x 16]
# # Database: postgres [edzer@localhost:5432/postgis]
# ogc_fid area perimeter cnty_ cnty_id name fips fipsno cress_id bir74 sid74
# <int> <dbl> <dbl> <dbl> <dbl> <chr> <chr> <dbl> <int> <dbl> <dbl>
# 1 1 0.114 1.44 1825 1825 Ashe 37009 37009 5 1091 1
# 2 3 0.143 1.63 1828 1828 Surry 37171 37171 86 3188 5
# 3 5 0.153 2.21 1832 1832 North… 37131 37131 66 1421 9
# # … with 5 more variables: nwbir74 <dbl>, bir79 <dbl>, sid79 <dbl>,
# # nwbir79 <dbl>, wkb_geometry <pq_gmtry>
It should be noted that PostGIS’ ST_Area computes the same area as the AREA field in nc, which is the meaningless value obtained by assuming the coordinates are projected, although they are ellipsoidal.
8.1.3 Reading from online resources or web services
GDAL drivers support reading from online resources by prepending /vsicurl/ to the URL, which starts with e.g. https://. A number of similar drivers specialized for particular clouds include /vsis3 for Amazon S3, /vsigs for Google Cloud Storage, /vsiaz for Azure, /vsioss for Alibaba Cloud, and /vsiswift for OpenStack Swift Object Storage. These prefixes can be combined, e.g. with /vsizip/, to read a zipped online resource. Depending on the file format used, reading information this way may involve reading the entire file, or reading it multiple times, and may not always be the most efficient way of handling resources. A format like “cloud-optimized GeoTIFF” (COG) has been specially designed to be efficient and resource-friendly in many cases, e.g. for reading only the metadata, or for reading only overviews (low-resolution versions of the full imagery) or spatial segments. COGs can also be created using the GeoTIFF driver of GDAL, by setting the right dataset creation options in a write_stars call.
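A sketch of both directions; the URL is hypothetical, and the creation options follow GDAL's documented recipe for COGs (recent GDAL versions also provide a dedicated “COG” driver):
library(stars)
# lazily reference a remote GeoTIFF without downloading it first:
r = read_stars("/vsicurl/https://example.com/imagery/scene.tif",
    proxy = TRUE)
# write a small example dataset as a cloud-optimized GeoTIFF:
tif = system.file("tif/L7_ETMs.tif", package = "stars")
write_stars(read_stars(tif), "L7_cog.tif", driver = "GTiff",
    options = c("TILED=YES", "COPY_SRC_OVERVIEWS=YES", "COMPRESS=DEFLATE"))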
8.1.4 APIs, OpenStreetMap
Although online resources do not have to be stored files but can be created server-side on the fly, typical web services for geospatial data create data on the fly and give access to it through an API. As an example, data from OpenStreetMap can be bulk downloaded and read locally, e.g. using the GDAL vector driver, but more typically a user wants to obtain a small subset of the data or use the data for a small query. Several R packages exist that query OpenStreetMap data:
- Package OpenStreetMap downloads data as raster tiles, typically used as backdrop or reference for plotting other features
- Package osmdata downloads vector data as points, lines, or polygons in sf or sp format
- Package osmar returns vector data, but in addition the network topology (as an igraph object) that describes how road elements form a network, and has functions that compute the shortest route
When provided with a correctly formulated API call in the URL, the highly configurable GDAL OSM driver (in st_read) can read an “.osm” file (xml) and return a dataset with five layers: points that have significant tags, lines with non-area “way” features, multilinestrings with “relation” features, multipolygons with “relation” features, and other_relations. A simple and very small bounding box query to OpenStreetMap could look like
download.file(
"https://openstreetmap.org/api/0.6/map?bbox=7.595,51.969,7.598,51.970",
"data/ms.osm", method = "auto")
and from this file we can read the layer lines, and plot its first attribute by
o = read_sf("data/ms.osm", "lines")
p = read_sf("data/ms.osm", "multipolygons")
bb = st_bbox(c(xmin=7.595, ymin = 51.969, xmax = 7.598, ymax = 51.970),
crs = 4326)
plot(st_as_sfc(bb), axes = TRUE, lwd = 2, lty = 2, cex.axis = .5)
plot(o[,1], lwd = 2, add = TRUE)
plot(st_geometry(p), border = NA, col = '#88888888', add = TRUE)

Figure 8.1: OpenStreetMap vector data
the result of which is shown in figure 8.1. The Overpass API provides more generic and powerful query functionality for OpenStreetMap data.
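A sketch of a similar bounding-box query through package osmdata, which uses the Overpass API (the highway key is chosen purely as an illustration):
library(osmdata)
osm = opq(bbox = c(7.595, 51.969, 7.598, 51.970)) |>
    add_osm_feature(key = "highway") |>
    osmdata_sf()
osm$osm_lines # an sf object with the matching line features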
8.2 Raster data: stars
A common challenge with raster datasets is not only that they come in large files (single Sentinel-2 tiles are around 1 GB), but that many of these files, potentially thousands, are needed to address the area and time period of interest. At the time of writing, the Copernicus programme that runs all Sentinel satellites publishes 160 TB of images per day. This means that a classic pattern in using R, consisting of:
- downloading data to local disc,
- loading the data in memory,
- analysing it
is not going to work.
Cloud-based Earth Observation processing platforms like Google Earth Engine (Gorelick et al. 2017) or Sentinel Hub recognize this and let users work with datasets up to the petabyte range rather easily and with a great deal of interactivity. They share the following properties:
- computations are postponed as long as possible (lazy evaluation)
- only the data you ask for are being computed and returned, and nothing more
- storing intermediate results is avoided in favour of on-the-fly computations
- maps with useful results are generated and shown quickly to allow for interactive model development
This is similar to the dbplyr interface to databases and cloud-based analytics environments, but differs in the aspect of what we want to see quickly: rather than the first \(n\) records of a dbplyr table, we want a quick overview of the results, in the form of a map covering the whole area, or part of it, but at screen resolution rather than native (observation) resolution.
If for instance we want to “see” results for the United States on screen with 1000 x 1000 pixels, we only need to compute results for this many pixels, which corresponds roughly to data on a grid with 3000 m x 3000 m grid cells. For Sentinel-2 data with 10 m resolution, this means we can subsample with a factor 300, giving 3 km x 3 km resolution. Processing, storage and network requirements then drop a factor \(300^2 \approx 10^5\), compared to working on the native 10 m x 10 m resolution. On the platforms mentioned, zooming in the map triggers further computations on a finer resolution and smaller extent.
A simple optimisation that follows these lines is how stars’ plot method works: when plotting large rasters, it subsamples the array before plotting, drastically saving time. The degree of subsampling is derived from the plotting region size and the plotting resolution (pixel density). For vector devices, such as pdf, R sets the plot resolution to 75 dpi, corresponding to 0.3 mm per pixel. Enlarging such plots may reveal the subsampling, but replotting to an enlarged device will create a new plot at the target density.
8.2.1 stars proxy objects
To handle datasets that are too large to fit in memory, stars provides stars_proxy objects. To demonstrate their use, we will use the starsdata package, an R data package with larger datasets (around 1 GB in total). It can be installed by
options(timeout = 600) # or larger, in case of slow network
install.packages("starsdata", repos = "http://pebesma.staff.ifgi.de",
type = "source")
We can “load” a Sentinel-2 image from it by
f = "sentinel/S2A_MSIL1C_20180220T105051_N0206_R051_T32ULE_20180221T134037.zip"
granule = system.file(file = f, package = "starsdata")
file.size(granule)
# [1] 7.69e+08
base_name = strsplit(basename(granule), ".zip")[[1]]
s2 = paste0("SENTINEL2_L1C:/vsizip/", granule, "/", base_name,
".SAFE/MTD_MSIL1C.xml:10m:EPSG_32632")
(p = read_stars(s2, proxy = TRUE))
# stars_proxy object with 1 attribute in 1 file(s):
# $EPSG_32632
# [1] "[...]/MTD_MSIL1C.xml:10m:EPSG_32632"
#
# dimension(s):
# from to offset delta refsys point values x/y
# x 1 10980 3e+05 10 WGS 84 / UTM z... NA NULL [x]
# y 1 10980 6e+06 -10 WGS 84 / UTM z... NA NULL [y]
# band 1 4 NA NA NA NA B4,...,B8
# 11808 bytes
and we see that this does not actually load any of the pixel values, but keeps the reference to the dataset and fills the dimensions table. (The convoluted s2 name is needed to point GDAL to the right file inside the .zip file, which contains 115 files in total.)
The idea of a proxy object is that we can build expressions like
p2 = p * 2
but that the computations for this are postponed. Only when we really need the data, e.g. because we want to plot it, is p * 2 evaluated. We need data when we:
- want to plot data,
- want to write an object to disk, with write_stars, or
- want to explicitly load an object in memory, with st_as_stars.
In case the entire object does not fit in memory, plot and write_stars choose different strategies to deal with this:
- plot fetches only the pixels that can be seen, rather than all pixels available
- write_stars reads, processes, and writes data chunk by chunk
Downsampling and chunking are implemented for spatially dense images, not e.g. for dense time series or other dense dimensions.
As an example, the output of plot(p)
, shown in figure 8.2

Figure 8.2: downsampled 10 m bands of a Sentinel-2 scene
only fetches the pixels that can be seen on the plot device, rather than the 10980 x 10980 pixels available in each band. The downsampling ratio taken is
# [1] 19
meaning that for every 19 \(\times\) 19 sub-image in the original image, only one pixel is read, and plotted. This value is still a bit too low as it ignores the white space and space for the key on the plotting device.
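The downsampling rate can also be set manually; a sketch (a larger value reads fewer pixels, giving a coarser but faster plot):
plot(p, downsample = 30) # read roughly every 30th pixel in x and y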
8.2.2 Operations on proxy objects
Several dedicated methods are available for stars_proxy objects; they can be listed by
methods(class = "stars_proxy")
# [1] [ [[<- [<- adrop
# [5] aggregate aperm as.data.frame c
# [9] coerce dim droplevels filter
# [13] hist initialize is.na mapView
# [17] Math merge mutate Ops
# [21] plot predict print pull
# [25] rename replace_na select show
# [29] slice slotsFromS3 split st_apply
# [33] st_as_sf st_as_stars st_crop st_dimensions<-
# [37] st_downsample st_mosaic st_redimension st_sample
# [41] st_set_bbox transmute write_stars
# see '?methods' for accessing help and source code
We have seen plot and print in action; dim reads out the dimension from the dimensions metadata table.
The three methods that actually fetch data are st_as_stars, plot, and write_stars. st_as_stars reads the actual data into a stars object; its argument downsample controls the downsampling rate. plot does this too, choosing an appropriate downsample value from the device resolution, and plots the object. write_stars writes a stars_proxy object to disk.
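For instance, a sketch of explicitly loading a strongly downsampled version of p into memory (a larger downsample value reads fewer pixels; here roughly every 10th pixel in each spatial dimension):
p_small = st_as_stars(p, downsample = 10)
dim(p_small)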
All other methods for stars_proxy objects do not actually operate on the raster data but add the operations to a to-do list attached to the object. Only when actual raster data are fetched, e.g. by calling plot or st_as_stars, are the commands in this list executed.
st_crop limits the extent (area) of the raster that will be read. c combines stars_proxy objects, but still doesn’t read any data. adrop drops empty dimensions, and aperm changes dimension order.
write_stars reads and processes its input chunk-wise; it has an argument chunk_size that lets users control the size of spatial chunks.
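As a sketch of how the lazy mechanism combines with chunked writing (the bounding box coordinates are arbitrary values inside the scene, in its UTM coordinates), the crop below is only carried out when write_stars pulls the data through, chunk by chunk:
bb = st_bbox(c(xmin = 300000, ymin = 5990000,
               xmax = 310000, ymax = 6000000), crs = st_crs(p))
p |> st_crop(bb) |> write_stars("s2_subset.tif")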
8.3 Very large data cubes
At some stage, data sets need to be analysed that are so large that downloading them is no longer feasible; even when local storage would be sufficient, network bandwidth may become limiting. Examples are satellite image archives such as those from Landsat and Copernicus (Sentinel-x), or model computations such as the ERA5 (Hersbach et al. 2020), a model reanalysis of the global atmosphere, land surface and ocean waves from 1950 onwards. In such cases it may be most helpful to gain access to virtual machines in a cloud that has these data available, or to use a system that lets the user carry out computations without having to worry about virtual machines and storage. Both options will be discussed.
8.3.1 Finding and processing assets
When working on a virtual machine on a cloud, a first task is usually to find the assets (files) to work on. It may look attractive to obtain a file listing and then parse file names such as
S2A_MSIL1C_20180220T105051_N0206_R051_T32ULE_20180221T134037.zip
for their metadata, including the date of acquisition and the code of the spatial tile covered. Obtaining such a file listing, however, is usually computationally very demanding, as is processing the result when the number of tiles runs into the many millions.
A solution to this is to use a catalogue. The recently developed and increasingly deployed STAC, short for SpatioTemporal Asset Catalog, provides an API that can be used to query image collections by properties like bounding box, date, band, and cloud coverage. The R package rstac (Brazil Data Cube Team 2021) provides an R interface to create queries and to manage the information returned.
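A sketch of such a query with rstac; the endpoint URL and collection name refer to a public Sentinel-2 catalogue and are illustrative, not the only option:
library(rstac)
s = stac("https://earth-search.aws.element84.com/v0")
items = s |>
    stac_search(collections = "sentinel-s2-l2a-cogs",
        bbox = c(7.1, 51.8, 7.2, 52.8), # xmin, ymin, xmax, ymax (WGS84)
        datetime = "2020-06-01/2020-06-30") |>
    post_request()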
Processing the resulting files may involve creating a data cube at a lower spatial and/or temporal resolution, from images that may span a range of coordinate reference systems (e.g., several UTM zones). An R package that can do that is gdalcubes (Appel 2021; Appel and Pebesma 2019), which can also directly use STAC output (Appel, Pebesma, and Mohr 2021).
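A sketch of this STAC-to-cube route, continuing with the items object from the rstac example above (the cube geometry values are illustrative; the extent coordinates are in the target CRS):
library(gdalcubes)
col = stac_image_collection(items$features)
v = cube_view(srs = "EPSG:32632", dx = 100, dy = 100, dt = "P1M",
    extent = list(left = 370000, right = 380000,
        bottom = 5740000, top = 5750000,
        t0 = "2020-06-01", t1 = "2020-06-30"))
cube = raster_cube(col, v) # lazy; pixels are computed on demand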
8.3.2 Processing data: GEE, openEO
Platforms that do not require the management and programming of virtual machines in the cloud but provide direct access to the imagery managed include GEE, openEO, and the climate data store.
Google Earth Engine (GEE) is a cloud platform that allows users to compute on large amounts of Earth Observation data as well as modelling products (Gorelick et al. 2017). It has powerful analysis capabilities, including most of the data cube operations explained in section 6.3. It has an IDE where scripts can be written in JavaScript, and a Python interface to the same functionality. The code of GEE is not open source, and cannot be extended by arbitrary user-defined functions in languages like Python or R. R package rgee (Aybar 2021) provides an R client interface to GEE.
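A minimal sketch of the rgee workflow (the collection id is illustrative; ee_Initialize requires a Google Earth Engine account):
library(rgee)
ee_Initialize() # authenticate and start the Earth Engine session
ic = ee$ImageCollection("COPERNICUS/S2")$
    filterDate("2020-06-01", "2020-06-30")$
    filterBounds(ee$Geometry$Point(c(7.6, 51.97)))
ic$size()$getInfo() # number of matching scenes, computed server-side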
Cloud-based data cube processing platforms built entirely around open-source software are emerging, several of which use the openEO API (Schramm et al. 2021). This API allows for user-defined functions (UDFs) written in Python or R that are passed on through the API and executed at the pixel level, e.g. to aggregate or reduce dimensions. UDFs in R represent the data chunk to be processed as a stars object; in Python, xarray objects are used.
Other platforms include the Copernicus Climate Data Store (Raoult et al. 2017) and Atmosphere Data Store, which allow processing of atmospheric or climate data from ECMWF, including ERA5. An R package with an interface to both data stores is ecmwfr (Hufkens 2020).
8.4 Exercises
Use R to solve the following exercises.
- For the S2 image (above), find out in which order the bands are by using st_get_dimension_values(), and try to find out (e.g. by internet search) which spectral bands / colors they correspond to.
- Compute NDVI for the S2 image, using st_apply and an appropriate ndvi function. Plot the result to screen, and then write the result to a GeoTIFF. Explain the difference in runtime between plotting and writing.
- Plot an RGB composite of the S2 image, using the rgb argument to plot(), and then by using st_rgb() first.
- Select five random points from the bounding box of S2, and extract the band values at these points; convert the object returned to an sf object.
- For the 10 km radius circle around POINT(390000 5940000), use aggregate to compute the mean pixel values of the S2 image when downsampling the images with factor 30, and on the original resolution. Compute the relative difference between the results.
- Use hist to compute the histogram on the downsampled S2 image. Also do this for each of the bands. Use ggplot2 to compute a single plot with all four histograms.
- Use st_crop to crop the S2 image to the area covered by the 10 km circle. Plot the results. Explore the effect of setting argument crop = FALSE.
- With the downsampled image, compute the logical layer where all four bands have pixel values higher than 1000. Use a raster algebra expression on the four bands (use split first), or use st_apply for this.
References
Appel, Marius. 2021. Gdalcubes: Earth Observation Data Cubes from Satellite Image Collections. https://github.com/appelmar/gdalcubes_R.
Appel, Marius, and Edzer Pebesma. 2019. “On-Demand Processing of Data Cubes from Satellite Image Collections with the Gdalcubes Library.” Data 4 (3): 92. https://www.mdpi.com/2306-5729/4/3/92.
Appel, Marius, Edzer Pebesma, and Matthias Mohr. 2021. Cloud-Based Processing of Satellite Image Collections in R Using STAC, COGs, and On-Demand Data Cubes. https://r-spatial.org/r/2021/04/23/cloud-based-cubes.html.
Aybar, Cesar. 2021. Rgee: R Bindings for Calling the Earth Engine API. https://CRAN.R-project.org/package=rgee.
Brazil Data Cube Team. 2021. Rstac: Client Library for SpatioTemporal Asset Catalog. https://github.com/brazil-data-cube/rstac.
Gorelick, Noel, Matt Hancher, Mike Dixon, Simon Ilyushchenko, David Thau, and Rebecca Moore. 2017. “Google Earth Engine: Planetary-Scale Geospatial Analysis for Everyone.” Remote Sensing of Environment 202: 18–27. https://doi.org/10.1016/j.rse.2017.06.031.
Hersbach, Hans, Bill Bell, Paul Berrisford, Shoji Hirahara, András Horányi, Joaquín Muñoz-Sabater, Julien Nicolas, et al. 2020. “The ERA5 Global Reanalysis.” Quarterly Journal of the Royal Meteorological Society 146 (730): 1999–2049. https://doi.org/10.1002/qj.3803.
Hufkens, Koen. 2020. Ecmwfr: Interface to ECMWF and CDS Data Web Services. https://github.com/bluegreen-labs/ecmwfr.
Raoult, Baudouin, Cedric Bergeron, Angel López Alós, Jean-Noël Thépaut, and Dick Dee. 2017. “Climate Service Develops User-Friendly Data Store.” ECMWF Newsletter 151: 22–27.
Schramm, Matthias, Edzer Pebesma, Milutin Milenković, Luca Foresta, Jeroen Dries, Alexander Jacob, Wolfgang Wagner, et al. 2021. “The openEO API–Harmonising the Use of Earth Observation Cloud Services Using Virtual Data Cube Functionalities.” Remote Sensing 13 (6). https://doi.org/10.3390/rs13061125.