Hello everyone, I’m working on an object recognition project using PyTorch in Python, and I’m encountering an issue with model inference. After successfully training my model, I’m getting an error during inference:
Error: RuntimeError: Expected 4-dimensional input for 4-dimensional weight [32, 3, 3, 3], but got 3-dimensional input of size [3, 224, 224] instead.
I’m using a pre-trained ResNet model and passing in images of size 224×224 pixels. Any ideas on why this error is occurring and how I can resolve it? Your insights would be much appreciated!
Hi @wafa_ath, here’s what’s happening: your image is coming in as a single 3-D tensor of shape [3, 224, 224] (channels, height, width), but the model’s conv layers expect a 4-D input with a batch dimension at the front. Add a batch dimension of size 1 (or reshape the tensor) before passing it to the model, as in the sketch below.
Oh, now I can see it.
Fixed? It works now?
Thanks!