As the market moves from proofs of concept to large multi-application deployments that require scalability, different technology alternatives emerge at the Edge. In this article, we explore the foundations of a successful Edge Computing project.
Edge Computing is becoming a key technology for the digitisation and automation of industry. The ability to run digital applications close to production processes enables multiple use cases, such as real-time remote monitoring, predictive maintenance, process performance optimisation, and the creation of new business models based on information exchange.
However, companies still face several challenges when introducing Edge Computing into their IT/OT architectures.
As the market moves from proofs of concept at the edge to large multi-application deployments that require scalability, different technology alternatives emerge and decisions need to be made. The industrial world is not fault-tolerant, and one bad decision can mean the failure of the entire deployment.
In this article we cover key considerations for making an Edge Computing project successful.
Stratus' recent study on Edge trends identifies a general lack of awareness of the Edge, and more specifically of the Internet of Things (IoT), as the biggest barrier to corporate Edge Computing deployments.
Many companies choose their CIO or COO to lead Edge Computing deployments. In our opinion, this is not optimal: it diverts them from their more traditional objectives, and they lack experience in this area. Ideally, the organisation's Chief Data Officer (CDO) should lead the deployment.
The Chief Data Officer is responsible for managing data as an asset throughout the company. Their main objective is to reduce costs or increase revenue through the advanced processing or commercialisation of data from production processes. This profile combines mathematical-scientific and business knowledge, and understands the benefits of the Edge, such as speed and scalability in data processing, and data security. More importantly, operations at the Edge directly impact the CDO's objectives, so their sponsorship is natural.
Therefore, our first recommendation to any company wishing to implement Edge Computing is to hire a CDO and build a team composed of data scientists, systems engineers and cybersecurity experts.
The deployment, led by the CDO, should be phased in an agile but structured process. Each phase has its own objectives and indicators of success. Failure to do so can lead to a spiral of interconnected errors resulting in an unstable and inefficient system. We recommend four stages when approaching an Edge Computing project. The first question is always how long it takes: with the right team and products, it should not take more than three months.
A successful implementation starts with understanding exactly what your goal is and what you hope to achieve. Before you contact the first vendor, install the first piece of equipment, or write the first line of code, you need to select a killer application to spearhead your Edge Computing deployment. To do this, it is best to perform an analysis to identify the processes that meet the largest number of the following conditions:
By reviewing the critical business processes and ranking them in a matrix along these five axes, we can find the process(es) whose high scores make them undisputed candidates to benefit from Edge Computing.
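The ranking exercise above can be sketched as a simple scoring matrix. This is a minimal illustration only: the axis names, candidate processes and 1–5 ratings below are hypothetical placeholders, not the article's actual criteria.

```python
# Hypothetical use-case scoring matrix. Axis names and ratings are
# illustrative placeholders, not the article's actual five conditions.
AXES = ["latency_sensitivity", "data_volume", "connectivity_constraints",
        "business_impact", "deployment_feasibility"]

def score(process_ratings: dict) -> int:
    """Sum the 1-5 rating a process receives on each axis."""
    return sum(process_ratings[axis] for axis in AXES)

candidates = {
    "predictive_maintenance": dict(zip(AXES, [5, 4, 3, 5, 4])),
    "remote_monitoring":      dict(zip(AXES, [3, 2, 4, 3, 5])),
}

# The highest-scoring process becomes the killer application.
best = max(candidates, key=lambda name: score(candidates[name]))
```

Weighted variants (multiplying each axis by its business priority) work the same way; the point is simply to make the selection explicit and comparable.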
Once the use case has been selected, a phase of collecting, cleaning, tagging and storing the data handled by the process begins. To do this, the first Edge Nodes are deployed, and through Software Connectors we can collect data from sensors, actuators, industrial equipment and internal or external servers.
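As a rough sketch of the Software Connector idea, the snippet below defines a uniform polling interface over heterogeneous sources. The sources here are simulated stand-ins; a real connector would speak industrial protocols such as Modbus, OPC UA or MQTT.

```python
# Illustrative connector sketch: one polling interface over many
# sources. The read functions below are simulated stand-ins for
# real protocol drivers (Modbus, OPC UA, MQTT, REST, ...).

class Connector:
    def __init__(self, name, read_fn):
        self.name = name
        self.read_fn = read_fn

    def poll(self):
        """Return one reading from the underlying source."""
        return {"source": self.name, "value": self.read_fn()}

connectors = [
    Connector("temperature_sensor", lambda: 21.7),
    Connector("flow_meter", lambda: 3.2),
]

# One collection cycle across all configured sources.
snapshot = [c.poll() for c in connectors]
```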
The collected data can be cleaned to remove inconsistencies and labelled to improve further processing. Sometimes simple data processing can be included in this phase, such as generating alarms for anomalous data or producing simple indicator reports on the process. This is not the ultimate aim of the deployment, but it helps to debug errors and reach better conclusions.
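A minimal sketch of this clean-and-label step, assuming a single temperature stream; the valid range, the alarm threshold and the sample values are illustrative assumptions, not figures from the article.

```python
# Sketch of the clean-and-label step: drop bad readings, then tag
# anomalies for a simple alarm report. Thresholds are assumptions.
VALID_RANGE = (-40.0, 125.0)   # plausible sensor range, degrees C
ALARM_ABOVE = 90.0             # anomaly threshold for the alarm report

def clean(readings):
    """Drop missing or physically impossible values."""
    return [r for r in readings
            if r is not None and VALID_RANGE[0] <= r <= VALID_RANGE[1]]

def label(readings):
    """Tag each reading as 'normal' or 'anomalous' for later processing."""
    return [(r, "anomalous" if r > ALARM_ABOVE else "normal")
            for r in readings]

raw = [21.5, None, 999.0, 95.2, 22.1]   # None and 999.0 are bad samples
labelled = label(clean(raw))
```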
Using the data that is being continuously collected, the model training phase begins. Key aspects in this phase include standardisation, the right selection of tools, and design for model interoperability, as described in the Harvard Business Review article on how to scale AI. In this process, led by data scientists, Edge Computing platforms help with MLOps functionality, allowing models to be generated, tested and executed in an agile and secure way.
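In the spirit of the train-test-promote loop that MLOps tooling automates, the sketch below fits a trivial threshold "model" on labelled readings and gates its promotion on hold-out accuracy. All data is made up, and the model is deliberately simplistic; real deployments would use proper ML frameworks and the platform's own MLOps pipeline.

```python
# Toy train-validate-promote loop. The "model" is a single threshold
# fitted on labelled readings; the promotion gate mirrors the idea of
# only shipping a model once it is sufficiently trained.

def fit_threshold(samples):
    """Place the threshold halfway between normal and anomalous values."""
    normal = [v for v, lbl in samples if lbl == "normal"]
    anomalous = [v for v, lbl in samples if lbl == "anomalous"]
    return (max(normal) + min(anomalous)) / 2

def accuracy(threshold, samples):
    """Fraction of samples the threshold classifies correctly."""
    return sum((v > threshold) == (lbl == "anomalous")
               for v, lbl in samples) / len(samples)

train = [(20.1, "normal"), (22.4, "normal"), (95.0, "anomalous")]
holdout = [(21.0, "normal"), (97.3, "anomalous")]

model = fit_threshold(train)
ready_to_deploy = accuracy(model, holdout) >= 0.9   # promotion gate
```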
Once the data scientists decide that the models are sufficiently trained, systems come into play, like the one Barbara provides, that allow you to send, start, monitor, stop, or update applications and models on thousands of distributed Edge Nodes. Depending on the volume of the deployment, it may be worthwhile to make progressive roll-outs, location by location, until the entire territory is covered. At this point, once the distributed applications are controlling the process with the trained models, we will have reached the maximum potential of Edge Computing and can contrast the improvements obtained against the expectations defined in the first phase of the project.
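The progressive roll-out described above can be sketched as a staged loop over locations. The `deploy` function below is a hypothetical stand-in for whatever node-management API the orchestration platform exposes; it is not a real Barbara API call.

```python
# Sketch of a progressive roll-out by location. deploy() is a
# placeholder for a real orchestration API call and always succeeds
# in this illustration.

def deploy(node, model):
    """Pretend to push a model to a node; always succeeds here."""
    return True

def rollout(locations, model):
    """Deploy site by site, halting the roll-out if any site fails."""
    completed = []
    for site, nodes in locations.items():
        if all(deploy(node, model) for node in nodes):
            completed.append(site)
        else:
            break  # stop and investigate before covering more territory
    return completed

sites = {"plant-north": ["node-1", "node-2"], "plant-south": ["node-3"]}
done = rollout(sites, "quality-model-v2")
```

Halting on the first failing site keeps a bad model from spreading across the whole territory, which matters precisely because the industrial world is not fault-tolerant.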
Edge Computing is a generic type of architecture. When applied to a specific industry or project, it is important to be aware of its different layers in order to use the one that best suits the project. It is increasingly common to differentiate between two types of Edge, Thick Edge and Thin Edge, a distinction that refers to where data processing occurs.
This is processing that takes place at nodes located in the backbone network operator's infrastructure, but close to the client devices. In a mobile cellular network, this may be the antenna to which the devices connect; in a fixed network infrastructure, it may be a server located in the data centre closest to our location.
It is called "Thick" because these nodes usually have high processing capacity: firstly because they are located in places where power consumption and space are not an issue, and secondly because the network operator has to be able to process data from multiple end customers. When telecoms operators talk about Edge Computing, they are referring to this kind of scenario.
It implies that the processing is done on nodes owned by the end customer, located on their local network, and therefore even closer to their devices. The adjective "Thin" is apt here, as these nodes are usually more limited in processing capacity and consume fewer resources than Thick Edge devices.
While it is not possible to draw a perfect line between Thick Edge and Thin Edge, as there are cases where both models could work, it is worth understanding their differences from several points of view in order to choose the one that best suits our project:
This makes the Thin Edge much more suitable for environments that require low response latencies, that want to be isolated as much as possible from the network operator for security or privacy reasons, and where CAPEX budgets apply, such as in the Energy and Water industries.
The Thick Edge, however, is more suitable for systems where latency is not absolutely critical, which require processing data flows of significant bandwidth, and which tend to operate with operational costs (OPEX) rather than investment. This places this type of technology closer to lighter industries and to the consumer, such as Fintech, Media or Retail.
Barbara Industrial Edge Platform helps organizations simplify and accelerate their Edge AI deployments, making it easy to build, orchestrate and maintain container-based or native applications across thousands of distributed edge nodes: