This post explores how Barbara's MLOps capabilities make deploying AI models like ResNet18 to edge devices straightforward, with everything managed remotely from a single, intuitive console.
Unlocking the full potential of AI in industry requires deploying machine learning models at the edge, where data is generated. Barbara makes this seamless, offering efficient deployment, scalability, and reliable edge data processing.
Barbara's MLOps management capabilities empower users to:
1. Load Trained Models: Seamlessly integrate trained models into the platform, supporting a variety of formats including TensorFlow SavedModel, PyTorch TorchScript, and ONNX.
2. Deploy Models to Edge Nodes: Deploy models to one or multiple edge nodes with a single click.
3. Choose the Right Serving Engine: Select between TensorFlow Serving (TFX) and NVIDIA's Triton Inference Server to serve the deployed models on the node.
4. Harness GPU Power: Utilize the GPU capabilities of edge devices to accelerate model inference and enhance real-time performance.
Barbara’s MLOps capabilities eliminate the challenges of deploying and managing AI at the edge, enabling organizations to unlock the full potential of their models. By simplifying the deployment process and offering flexible serving options, Barbara helps industrial operations stay agile, efficient, and ahead of the curve.
The ResNet18 model, a popular convolutional neural network (CNN), is specifically designed for image classification tasks. It excels at recognizing objects such as animals, equipment, or components in images, making it highly valuable in industries like manufacturing, healthcare, and logistics. Deploying ResNet18 on an edge device enables faster inference and minimizes dependence on cloud connectivity.
With the Barbara Edge AI Platform, the deployment process breaks down into three key steps: uploading the model to the Panel’s library, deploying it to an Edge Node with the right serving engine, and running inference against the deployed model.
Before the model can be deployed, it must be uploaded to the Panel’s library in a compatible format. Remember, the options are TensorFlow SavedModel, PyTorch TorchScript, and ONNX.
In this case we will use PyTorch to download the pretrained ResNet18 model and save it locally in TorchScript format. The following script demonstrates how to download the model, convert it to TorchScript, and save it as resnet18_traced.pt.
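A minimal version of that script, assuming a recent torchvision release (0.13+) that exposes the ResNet18_Weights enum, could look like this:

```python
import torch
from torchvision import models

# Download the pretrained ResNet18 weights from torchvision
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Trace the model with a dummy input of the shape ResNet18 expects
# (a batch of one 3x224x224 RGB image)
example_input = torch.rand(1, 3, 224, 224)
traced_model = torch.jit.trace(model, example_input)

# Save the traced model in TorchScript format
traced_model.save("resnet18_traced.pt")
```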
Once we have the resnet18_traced.pt file, we just need to compress it into a zip file and upload it to our Panel’s library.
The TorchScript format ensures compatibility with NVIDIA Triton Inference Server, one of the serving engines available in Barbara, so we will use that inference server on our Edge Node.
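For reference, Triton describes a model's inputs and outputs through a config.pbtxt file. Whether Barbara generates this configuration automatically or expects it packaged alongside the .pt file is platform-specific, so treat the snippet below purely as an illustration of how Triton sees a TorchScript ResNet18 (the input__0 / output__0 tensor names follow Triton's convention for TorchScript models):

```
name: "resnet18"
platform: "pytorch_libtorch"
max_batch_size: 0
input [
  {
    name: "input__0"
    data_type: TYPE_FP32
    dims: [ 1, 3, 224, 224 ]
  }
]
output [
  {
    name: "output__0"
    data_type: TYPE_FP32
    dims: [ 1, 1000 ]
  }
]
```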
Once uploaded, the model will be available in our library, ready to be deployed to any Edge Node.
Learn how to upload your model directly from Jupyter Notebook in the following "Talk to the Expert" video.
Inference involves sending an image to the model via a REST API and receiving the classification results. We will use a Jupyter Notebook to perform the inference request against our node. The notebook loads a sample image, preprocesses it into the tensor shape ResNet18 expects, sends the request to the node's inference endpoint, and parses the response, as sketched below.
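As an illustration, a notebook cell along these lines performs the request using NVIDIA's tritonclient package. The node address, port, model name, and image file below are assumptions; the real values come from your Barbara deployment:

```python
import numpy as np
import tritonclient.http as httpclient
from PIL import Image

# Connect to the Triton HTTP endpoint exposed by the edge node (address/port are placeholders)
client = httpclient.InferenceServerClient(url="<EDGE_NODE_IP>:8000")

# Load a sample image and preprocess it the way ResNet18 expects:
# resize to 224x224, scale to [0, 1], normalize with ImageNet statistics, reorder to NCHW
img = Image.open("sample.jpg").convert("RGB").resize((224, 224))
x = np.asarray(img, dtype=np.float32) / 255.0
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
x = (x - mean) / std
x = np.ascontiguousarray(np.transpose(x, (2, 0, 1))[np.newaxis, :])

# Build and send the inference request (tensor names follow Triton's TorchScript convention)
infer_input = httpclient.InferInput("input__0", list(x.shape), "FP32")
infer_input.set_data_from_numpy(x)
infer_output = httpclient.InferRequestedOutput("output__0")
response = client.infer(model_name="resnet18", inputs=[infer_input], outputs=[infer_output])

# The response contains one logit per ImageNet class
logits = response.as_numpy("output__0")
print(logits.shape)  # (1, 1000)
```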
Finally, the output returned by the model is interpreted to identify the predicted classes.
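One way to map the returned logits to human-readable labels is with the ImageNet class names bundled in torchvision's weight metadata; the sketch below reuses the logits array from the previous cell and is illustrative rather than the exact notebook shown in the video:

```python
import numpy as np
from torchvision.models import ResNet18_Weights

# ImageNet class names shipped with the torchvision weights metadata
categories = ResNet18_Weights.DEFAULT.meta["categories"]

# Convert raw logits to probabilities (numerically stable softmax) and print the top-5 predictions
scores = np.exp(logits[0] - logits[0].max())
probs = scores / scores.sum()
for idx in probs.argsort()[-5:][::-1]:
    print(f"{categories[idx]}: {probs[idx]:.3f}")
```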
Deploying AI models like ResNet18 to edge devices is made simple and efficient with Barbara's Edge Orchestration Tool. By combining the power of PyTorch, NVIDIA Triton, and Barbara’s platform, organizations can unlock real-time AI capabilities at the edge.
Ready to take your AI models to the edge? Start exploring Barbara and book a free trial today.