How do I optimize and deploy a deep learning model on an ESP32? This is still for my project: an image recognition system that analyzes images of tissue samples, identifies malignancies, and predicts possible symptoms and causes. I'm currently trying to deploy the trained model on the ESP32 for real-time inference.
But I keep hitting this error:

MemoryError: Model size exceeds available memory

How do I go about resolving this? Here is my conversion script:
import tensorflow as tf
from tensorflow.keras.models import load_model

# Load the trained Keras model
model = load_model('malignant_tissue_model.h5')

# Convert to TensorFlow Lite (no optimizations applied yet)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Write the converted model to disk
with open('malignant_tissue_model.tflite', 'wb') as f:
    f.write(tflite_model)
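
One common way to shrink a model for a microcontroller target like the ESP32 is post-training quantization at conversion time. Below is a minimal sketch of full integer (int8) quantization using the TFLite converter options; the representative_data_gen helper, its sample count, and the 96x96x3 input shape are assumptions here and would need to match your own data and preprocessing pipeline.

import numpy as np
import tensorflow as tf
from tensorflow.keras.models import load_model

model = load_model('malignant_tissue_model.h5')

# Assumed helper: yields a small set of preprocessed sample images so the
# converter can calibrate quantization ranges. Replace the hypothetical
# 96x96x3 shape and the random data with real samples from your dataset.
def representative_data_gen():
    for _ in range(100):
        sample = np.random.rand(1, 96, 96, 3).astype(np.float32)
        yield [sample]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Enable post-training quantization; with a representative dataset this
# produces a full int8 model, roughly 4x smaller than the float32 version.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open('malignant_tissue_model_int8.tflite', 'wb') as f:
    f.write(tflite_model)

# Report the size so you can compare it against the ESP32's available flash/RAM
print(f"Quantized model size: {len(tflite_model) / 1024:.1f} KB")

If the int8 model is still too large for the ESP32's memory, quantization alone won't save it; the architecture itself usually has to shrink (a smaller input resolution, fewer or narrower layers, or a lightweight backbone) before conversion.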