I'd say it depends a lot on what information you'll want to extract at the end, and why you want to downsample.
With unordered data, it's common to take a subset of the rows with
sample() to see how an analysis behaves on a smaller sample; to me that's the most common meaning of "downsampling". But that seems very inappropriate for spatial data: you would randomly select/drop pixels and completely change the properties of the image.
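As a minimal sketch of that sample()-based subsetting on tabular data (the data frame here is made up purely for illustration):

```r
# Random downsampling of unordered data: keep 100 of 1000 rows.
set.seed(1)  # for reproducibility
df <- data.frame(x = rnorm(1000), y = rnorm(1000))
df_small <- df[sample(nrow(df), size = 100), ]
nrow(df_small)  # 100

# Doing the same to an image would drop random pixels and
# destroy the spatial structure.
```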
If your goal is just to crop the image to reduce storage space, you could apply a threshold and drop the rows and columns that contain no information, for example:
threshold <- EBImage::otsu(volcano, range = c(min(volcano), max(volcano)))
thresholded_volcano <- volcano >= threshold
image(EBImage::erode(thresholded_volcano))
cropped_volcano <- volcano[rowSums(thresholded_volcano) > 0,
                           colSums(thresholded_volcano) > 0]
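To see what the cropping step buys you without pulling in EBImage, here is a self-contained sketch using an arbitrary cutoff of 150 instead of Otsu's threshold (volcano ships with base R):

```r
# Fixed-threshold version of the crop (150 is illustrative, not Otsu's value).
thresholded <- volcano >= 150
cropped <- volcano[rowSums(thresholded) > 0, colSums(thresholded) > 0]

dim(volcano)  # 87 x 61
dim(cropped)  # smaller: rows/columns with no pixel >= 150 are gone
# No above-threshold pixel is lost by the crop:
sum(volcano >= 150) == sum(cropped >= 150)  # TRUE
```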
If you're specifically interested in the values over 170, you can threshold your matrix at 170, then use functions from
EBImage to build a clean mask and apply it:
# make binary matrix (mask)
mask_thres <- volcano >= 170
# fill holes
mask_filled <- EBImage::closing(mask_thres)
mask_filled <- EBImage::fillHull(mask_filled)
# increase a bit the size of the mask to capture the surroundings
mask_dilated <- EBImage::dilate(mask_filled)
# Keep the original values inside the mask (everything outside becomes 0)
thresholded_volcano <- volcano * mask_dilated
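The mask-and-multiply step at the end can be checked on its own; this sketch skips the morphology (so the mask is just the raw threshold) to stay independent of EBImage:

```r
# Raw threshold mask, no closing/fillHull/dilate.
mask <- volcano >= 170
kept <- volcano * mask        # values below 170 are zeroed, not dropped

mean(mask)                    # fraction of pixels that survive
all(kept[kept > 0] >= 170)    # TRUE: only above-threshold values remain
```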
Working with images is an art form of its own; the right approach really depends on what your goal is.