# Resizing 3D volumetric images

If I have 3D image data with dimensions 176 x 256 x 256, how can I resize it to, say, 160 x 256 x 256, or any other dimensions? Does anyone know of any packages that can achieve this?

Kind regards.

If you have read the image into `R` as a 3D `array` with those dimensions, and you just want to "slice off" some of the cells, you can just use `R` subsetting. For example:

```r
img <- array(runif(176 * 256 * 256), dim = c(176, 256, 256))

dim(img[1:160, , ])
```

Of course, if you want to resize to a new grid, you would need to resample and interpolate for voxels which do not have 1:1 relations with the original grid. There are several packages that can do this, if you are interested.
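As a sketch of that resample-and-interpolate approach (shown in Python with `scipy.ndimage.zoom`, since the discussion in this thread moves to Python; the random volume and target shape here are just illustrative assumptions):

```python
import numpy as np
from scipy import ndimage

# Hypothetical example volume with the dimensions from the question
img = np.random.rand(176, 256, 256)

# Interpolate onto a 160 x 256 x 256 grid (order=1 -> trilinear interpolation)
resized = ndimage.zoom(img, zoom=(160 / 176, 1, 1), order=1)
print(resized.shape)  # (160, 256, 256)
```

Voxels on the new grid that fall between original voxels are filled by interpolation rather than copied one-to-one.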

That's okay, and yes, I've found a solution in Python. Good to know, though. I was using the neurobase package in R, but I've since shifted to Python for speed, and nibabel works fine; it's also much faster.

Regards.


Slicing off was the first idea that came to mind, but my data come in varying dimensions, so slicing won't always work; I'd need a generic function. You mentioned some packages that do this in R. Would you mind sharing them?

Regards

@mattwarkentin I’ve found some packages in python and matlab that can do this, but haven’t found any for R... Maybe I’m missing something...

Update:

I've found a way to do this in Python for the time being, and I'm using it to create new files with the resized data. Here is the code if anyone needs it:

```python
import itertools

import numpy as np
import nibabel as nib


def resize_data(data):
    # Original grid dimensions
    initial_size_x = data.shape[0]
    initial_size_y = data.shape[1]
    initial_size_z = data.shape[2]

    # Target grid dimensions
    new_size_x = 176
    new_size_y = 256
    new_size_z = 256

    # Sampling step along each axis
    delta_x = initial_size_x / new_size_x
    delta_y = initial_size_y / new_size_y
    delta_z = initial_size_z / new_size_z

    new_data = np.zeros((new_size_x, new_size_y, new_size_z))

    # Nearest-neighbour resampling: map each voxel of the new grid
    # back to the nearest voxel of the original grid
    for x, y, z in itertools.product(range(new_size_x),
                                     range(new_size_y),
                                     range(new_size_z)):
        new_data[x][y][z] = data[int(x * delta_x)][int(y * delta_y)][int(z * delta_z)]

    return new_data


# initial_data is assumed to have been loaded earlier,
# e.g. initial_data = nib.load(path).get_fdata()
if initial_data.shape != (176, 256, 256):
    resized_data = resize_data(initial_data)
    img = nib.Nifti1Image(resized_data, np.eye(4))
```

The keras package has several image processing functions that allow you to resize images. `image_load()` accepts a `target_size` parameter to resize an image upon loading. `image_array_resize()` allows you to reshape images that have already been vectorized as an array.

Sorry I was away for the weekend without internet access. Did you find a sufficient solution now? I primarily work with medical images, so I use the Python `SimpleITK` library (also available in `R`, but I find it easier to call the Python library via `reticulate`).