Note: Verizon Media is now known as Edgecast.
By Arjun Ramamurthy, Head of Product, Consumer Video Platform & Edge AI, Verizon Media, and Debashis Mondal, Principal Product Manager, Verizon Media
Welcome to the second blog in this three-part series.
Real-time enterprise AI applications across industrial sectors require a platform that delivers real-time, actionable data and ML insights with cloud-agnostic interconnect. Verizon Media’s Edge AI is a purpose-built platform focused on helping customers at the intersection of data management and machine learning operations (MLOps), so they can operate on heterogeneous infrastructure at global scale. Moving artificial intelligence to the network edge enables decisions and actions to be taken in near real time. This opens up a range of exciting and transformative applications in both industrial and consumer segments, which we outlined in the first blog post in this series.
In many ways, as we will explain, the purpose of Edge AI is to connect all the elements needed to design, develop and deploy commercial AI applications on the edge to enable real-time enterprise use cases. This includes our content delivery network (CDN), with just 10-25 milliseconds of latency for virtually every internet user worldwide, our on-premises 5G technology, an extensible application platform as a service (aPaaS) layer, cloud data management, comprehensive security and in-depth monitoring and analytics.
From the outset of the Edge AI development process, our vision was to create an infrastructure-agnostic, lightweight, containerized platform with cloud-agnostic interconnect to deliver real-time, actionable data and machine-learning insights on the edge. This vision, in turn, shaped the goals and technology decisions for the platform, as outlined in the figure below.
These nine elements play an essential role in making the Edge AI platform possible and are critical to its success as commercial solutions are deployed into production. Let’s take a closer look at these elements, working from the bottom up.
Platform reference architecture
Now that you have an overview of the technologies at play in the Edge AI platform, let’s take a look at how they fit together. As shown in the figure below, the Edge AI platform architecture has three major parts.
Models are trained in the cloud and served on the edge for real-time use cases. Batch inferencing, which is not time-sensitive, takes place in the cloud.
Unlike traditional applications, which are implemented, deployed, and occasionally updated, AI/ML applications constantly learn and improve. Three main workflows within the platform support this continuous cycle:
Edge AI ingestion, processing and storage
One of the most important aspects of an AI/ML solution is the ability to capture and store data with speed and efficiency. For some applications, such as those involving IoT sensors, data volumes can be massive. To give you some idea of the scale, IDC predicts that IoT devices alone will generate nearly 80 zettabytes of data by 2025.
To support even the most massive data volumes, the Edge AI platform, as shown below, supports multiple ingestion sources (IoT, video, location and sensors), protocols and ingestion providers. It also delivers high throughput at low latency (millions of events per second with 10 ms latency).
As incoming video, IoT, or sensor data arrives, the ingestion layer uses built-in throttling to guarantee data delivery and prevent overflow conditions. A message broker delivers the incoming data to the stream/event engine, where it’s transformed, enriched, or cleansed before moving to the memory store. Once the data is in the memory store, it’s periodically synced with the distributed cloud store. Visualization tools provide real-time analytics and operational dashboards using data in the memory store.
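To make this flow concrete, here is a minimal sketch in Python. It assumes Kafka as the message broker and Redis as the memory store; the topic name, endpoints, and enrichment step are illustrative placeholders, not the platform’s actual implementation.

```python
import json

import redis
from confluent_kafka import Consumer

# Hypothetical broker and memory-store endpoints -- placeholders only.
consumer = Consumer({
    "bootstrap.servers": "broker.edge.local:9092",
    "group.id": "edge-ingest",
    "auto.offset.reset": "latest",
})
consumer.subscribe(["sensor-events"])  # assumed topic name

memory_store = redis.Redis(host="memstore.edge.local", port=6379)

def enrich(event: dict) -> dict:
    """Example transform/enrich step: tag each event with its ingest site."""
    event["site"] = "edge-pop-01"  # hypothetical site label
    return event

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None or msg.error():
            continue
        event = enrich(json.loads(msg.value()))
        # Land the cleaned event in the memory store; a separate job
        # (not shown) would periodically sync it to the cloud store.
        memory_store.set(f"event:{msg.offset()}", json.dumps(event))
finally:
    consumer.close()
```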
Machine learning pipeline
Machine learning relies on algorithms, and unless you’re a data scientist or ML expert, these algorithms are complicated to understand and work with. That’s where a machine learning framework comes in, making it possible to develop ML models without a deep understanding of the underlying algorithms. While TensorFlow, PyTorch, and scikit-learn are arguably the most popular ML frameworks today, that may not be the case in the future, so it’s important to choose the best framework for the intended application.
To this end, the Edge AI platform supports a full range of ML frameworks for model training, feature engineering and serving. As shown in the figure below, Edge AI supports complete model lifecycle management, including training, tracking, packaging and serving.
Let’s take a look at the typical machine learning workflow on the Edge AI platform. First, you use the ML framework of choice to create a model in a local environment. Once the model is assembled, testing begins with small data sets, and experiments are captured using model lifecycle tools like MLflow and SageMaker. After initial testing, the model is ready to be trained in the cloud on larger data sets, along with hyperparameter tuning. Model versions are stored in model repositories in the cloud.
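As a concrete illustration of the experiment-capture step, here is a minimal sketch using MLflow’s tracking API. The tracking server URI, experiment name, parameters, and scikit-learn model are illustrative assumptions, not the platform’s actual training code.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Assumed tracking server and experiment name -- placeholders.
mlflow.set_tracking_uri("http://mlflow.cloud.local:5000")
mlflow.set_experiment("edge-ai-demo")

# Small local data set for the initial experiments described above.
X, y = make_classification(n_samples=500, n_features=20, random_state=42)

with mlflow.start_run():
    params = {"n_estimators": 50, "max_depth": 8}
    model = RandomForestClassifier(**params).fit(X, y)

    # Capture the experiment: parameters, a metric, and the model artifact.
    mlflow.log_params(params)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, artifact_path="model",
                             registered_model_name="edge-ai-demo-model")
```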
Once the model has been fully trained in the cloud, the next step is initial deployment on the edge for further testing. The model then undergoes final testing and packaging; based on deployment triggers, it is pulled from the cloud and deployed seamlessly on the edge platform. Model metrics are gathered continuously and sent to the cloud for further model tuning and evolution.
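On the edge side, the pull step might look like the following sketch, which assumes the trained versions live in an MLflow model registry; the registry URI, model name, and stage label are hypothetical.

```python
import mlflow
import mlflow.pyfunc
import numpy as np

# Assumed cloud registry URI -- placeholder for illustration.
mlflow.set_tracking_uri("http://mlflow.cloud.local:5000")

# A deployment trigger on the edge would run this to pull the latest
# production-stage version from the cloud registry and serve it locally.
model = mlflow.pyfunc.load_model("models:/edge-ai-demo-model/Production")

# Score a dummy input with 20 features, matching the training sketch above.
prediction = model.predict(np.zeros((1, 20)))
print(prediction)
```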
Platform serving and monitoring
For maximum flexibility in ML framework selection and support, the Edge AI platform uses REST or gRPC endpoints to serve models in real time. An overview of the serving and monitoring architecture is shown below.
With our platform, continuous integration tools like Jenkins X enable models to be pushed to the model store at the edge using deployment triggers. A continuous deployment tool like Argo CD is used to pull the model image from the repository and deploy each model as a self-contained pod.
Deployed models are served using Seldon with a REST/gRPC interface and load balanced behind an API gateway. Clients send REST/gRPC calls to the API gateway to generate predictions. Model management and metrics are provided using Seldon, and logging and monitoring are done using ELK Stack and/or Prometheus.
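A prediction call against such an endpoint is straightforward. The sketch below uses Seldon Core’s standard REST protocol; the gateway host, namespace, and deployment name are assumptions for illustration.

```python
import requests

# Hypothetical API-gateway URL for a Seldon deployment named "edge-model"
# in the "default" namespace.
url = ("http://gateway.edge.local/seldon/default/edge-model"
       "/api/v1.0/predictions")

# Seldon's REST protocol wraps the input tensor in a "data" envelope.
payload = {"data": {"ndarray": [[0.0] * 20]}}

response = requests.post(url, json=payload, timeout=5)
response.raise_for_status()

# The prediction comes back in the same envelope format.
print(response.json()["data"]["ndarray"])
```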
Bringing AI and compute capacity, together with cloud services, directly to the network’s edge enables organizations to take increasingly sophisticated and transformative real-time enterprise use cases to market. As described in this post, the Edge AI platform helps operationalize real-time enterprise AI at scale and significantly reduces the hurdles involved in bringing a wide range of real-time ML applications to life. This enables customers to accelerate the implementation of pilots and scale effectively from pilots to production.
In the upcoming final installment of this three-part blog series, we will explore the process involved with designing and deploying solutions based on the Edge AI platform and provide customer examples of Edge AI solutions in predictive analytics, smart manufacturing and logistics.
Contact us to learn more about how your application could benefit from our Edge AI platform.
To read the first blog in this series, click here.