I am building a human-awareness detector and have trained my model using transfer learning on MobileNetV2. The model expects an input tensor of shape [null, 224, 224, 3].
I run face detection on the input video stream with BlazeFace, which takes a [128, 128, 3] input, and crop each detected face so it can be fed to my custom model. The problem is that the cropped faces come out at varying sizes, all smaller than what my model expects, and I am not sure how to handle this.
Example shape of a cropped face tensor:
[1, 43, 111, 3]
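For reference, this is roughly what my crop step looks like (a minimal sketch assuming TensorFlow.js in the browser with the @tensorflow-models/blazeface package; the function and variable names are illustrative, not my exact code):

```ts
import * as tf from '@tensorflow/tfjs';
import * as blazeface from '@tensorflow-models/blazeface';

// Detect faces in the current video frame and return one cropped
// tensor per face. The crops come out at whatever size BlazeFace
// detected, e.g. [1, 43, 111, 3], not the [1, 224, 224, 3] my
// MobileNetV2-based model expects.
async function cropFaces(video: HTMLVideoElement): Promise<tf.Tensor4D[]> {
  const detector = await blazeface.load();
  // returnTensors = false -> plain [x, y] coordinate arrays
  const predictions = await detector.estimateFaces(video, false);

  const frame = tf.browser.fromPixels(video); // [videoHeight, videoWidth, 3]

  return predictions.map((p) => {
    const [x1, y1] = p.topLeft as [number, number];
    const [x2, y2] = p.bottomRight as [number, number];
    // Crop the bounding box; height and width vary per detection.
    const face = tf.slice(
      frame,
      [Math.round(y1), Math.round(x1), 0],
      [Math.round(y2 - y1), Math.round(x2 - x1), 3],
    );
    return face.expandDims(0) as tf.Tensor4D; // e.g. [1, 43, 111, 3]
  });
}
```

What should I do with these variably sized crops before passing them to my model?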