# Load packages
library(HSItools)
library(terra)
library(tidyterra)
library(dplyr)
library(tidyr)
library(ggplot2)
library(signal)
library(sf)
library(prospectr)
library(patchwork)
library(raster)
1 Initial state
HSItools offers an easy way to preprocess Specim data. However, the workflow can be generalized to any data that follow the same structure.
1.1 Data structure
Data should be structured as follows:
– NAME <directory>
–– capture <directory>
––– DARKREF_NAME.hdr
––– DARKREF_NAME.log
––– DARKREF_NAME.raw
––– NAME.hdr
––– NAME.log
––– NAME.raw
––– WHITEREF_NAME.hdr
––– WHITEREF_NAME.log
––– WHITEREF_NAME.raw
However, if necessary, you can select the appropriate files yourself. Files with the .raw extension are the data files, while files with the .hdr extension are header files containing the essential information, such as the number of pixels, needed to read the data.
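If you want to inspect a scan outside of {HSItools}, {terra} can read ENVI .raw/.hdr pairs directly through GDAL. The following is a minimal sketch, assuming the directory layout shown above; NAME is a placeholder for your scan name.
# Read the raw scan as a multi-layer SpatRaster (one layer per band);
# terra/GDAL locates the accompanying .hdr file automatically
scan <- terra::rast("NAME/capture/NAME.raw")
scan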
1.2 Startup
Start by loading all necessary packages. {HSItools} will load essential functions from its namespace, but to be safe, attach all packages explicitly.
Hyperspectral data gets quite large. It is good practice to process data stored on an SSD drive separate from your operating system (OS) drive. While the data is stored on a separate drive, it is also beneficial to instruct {terra} to store its temporary files on a separate drive. Adjust this according to your OS; here, we are using a non-system drive on Windows 11. We set the maximum allowed RAM to a high value of 0.9; decide this according to your available memory and OS requirements.
# Set tempdir
terra::terraOptions(tempdir = "D:/", memmax = 0.9)
First, note that this step is specific to the workflow where you would like to calculate reflectance from the Shiny output: we must set the working directory (an approach we otherwise discourage in R). More on this in Chapter 3.
Here we are using the example dataset provided by Zahajská et al. (2024), which is available at https://zenodo.org/records/13925618.
# Set working directory
setwd("C:/GitHub/data/cake/")
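As a quick, optional check, you can list the contents of the capture directory to confirm it matches the layout described in Section 1.1. This is only a sketch; NAME is again a placeholder for the scan directory inside the working directory.
# Optional: confirm the DARKREF, WHITEREF, and scan .hdr/.raw/.log files are present
list.files("NAME/capture", full.names = TRUE)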