
I am performing human-awareness detection and have trained my model using transfer learning with MobileNetV2. The model expects an input tensor of shape [null, 224, 224, 3].

I have applied face detection to the input video stream using BlazeFace, which takes a [128, 128, 3] input, and cropped the detected faces so I can send them to my custom model. However, the cropped images are all of varying sizes, and all smaller than what my model expects, so I am not sure how to proceed.

Example of a cropped face tensor's shape:

Array [1, 43, 111, 3]
yudhiesh
  • Possible duplicate https://datascience.stackexchange.com/questions/30819/image-resizing-and-padding-for-cnn – YuseqYaseq Sep 16 '20 at 15:08

1 Answer


The issue was fixed by resizing the tensor to the model's expected input size. I had been reshaping the tensors instead of resizing them.
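To illustrate the distinction, here is a minimal NumPy sketch, assuming a nearest-neighbor resize stands in for whatever resize op your framework provides (e.g. `tf.image.resize` in TensorFlow, or `tf.image.resizeBilinear` in TensorFlow.js); the `nearest_resize` helper and the random `crop` are illustrative, not part of the original pipeline:

```python
import numpy as np

def nearest_resize(img, out_h, out_w):
    """Nearest-neighbor resize: re-samples pixels to the target size,
    preserving image content (a stand-in for a framework resize op)."""
    in_h, in_w, _ = img.shape
    rows = np.arange(out_h) * in_h // out_h  # source row for each output row
    cols = np.arange(out_w) * in_w // out_w  # source col for each output col
    return img[rows[:, None], cols]

# A cropped face of shape [43, 111, 3], smaller than the model's 224x224 input.
crop = np.random.rand(43, 111, 3).astype(np.float32)

# Resizing interpolates/re-samples the image up to the target size.
resized = nearest_resize(crop, 224, 224)
print(resized.shape)  # (224, 224, 3)

# Reshaping, by contrast, only rearranges the existing values: it cannot
# change the element count (43*111*3 != 224*224*3), so this raises an error.
try:
    crop.reshape(224, 224, 3)
except ValueError as e:
    print("reshape fails:", e)

# Finally, add the batch dimension the model expects: [1, 224, 224, 3].
batched = resized[None, ...]
print(batched.shape)  # (1, 224, 224, 3)
```

In short, reshape changes only how the same values are indexed, while resize produces a new image of the required spatial dimensions, which is what a fixed-input CNN needs.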

yudhiesh