Leveraging Edge Computing opportunities

As the market moves from proofs of concept to large multi-application deployments that require scalability, different technological alternatives emerge at the Edge. In this article, we explore the foundations of a successful Edge Computing project.


Edge Computing is becoming a key technology for the digitisation and automation of industry. The ability to run digital applications close to production processes enables multiple use cases, such as real-time remote monitoring, predictive maintenance, process performance optimisation, or the creation of new business models based on information exchange.

However, companies still face several challenges when introducing Edge Computing into their IT/OT architectures.

As the market moves from proofs of concept at the edge to large multi-application deployments that require scalability, different technology alternatives emerge and decisions need to be made. The industrial world is not fault-tolerant, and one bad decision can mean the failure of the entire deployment.

In this article we cover key considerations for making an Edge Computing project successful.

The basis for a successful Edge Computing project

1. The Team and the growing relevance of the CDO

Stratus' recent study on Edge trends identifies a general lack of awareness of the Edge, and more specifically of the Internet of Things (IoT), as the biggest barrier to corporate Edge Computing deployments.

Many companies choose their CIOs or COOs to lead Edge Computing deployments. In our opinion, this is not optimal: it diverts them from their more traditional objectives, and they often lack experience in this area. Ideally, it should be the Chief Data Officer (CDO) of the organisation who leads the deployment.

The Chief Data Officer is responsible for managing data as an asset throughout the company. Their main objective is to reduce costs or increase revenue through the advanced processing or commercialisation of data from production processes. This profile combines mathematical-scientific knowledge with business knowledge, and understands the benefits of the Edge, such as speed and scalability in data processing, or data security. More importantly, operations at the Edge directly impact the CDO's objectives, so their sponsorship is natural.

Therefore, our first recommendation to any company wishing to implement Edge Computing is to hire a CDO and a team composed of data scientists, systems engineers and cybersecurity experts.

2. The phases of Edge Computing: divide and conquer

The deployment, led by the CDO, should follow an agile but structured process in which each phase has its own objectives and indicators of success. Failing to do so can lead to a spiral of interconnected errors that results in an unstable and inefficient system. We recommend four stages when approaching an Edge Computing project. The first question is always how long it takes: with the right equipment and products, it should not take more than three months.

Phase 1: selection of the use case

A successful implementation starts with understanding exactly what your goal is and what you hope to achieve. Before you contact the first vendor, install the first piece of equipment, or write the first line of code, you need to be able to select a killer application to spearhead your edge computing deployment. To do this, it is best to perform an analysis to identify those processes that meet the maximum number of the following conditions:

  1. They are not optimised: decision-making carries a heavy burden with little information.
  2. They handle a large amount of data.
  3. Data security is important.
  4. They require rapid, near real-time decision making.
  5. They contain distributed assets whose connectivity can be a challenge.

By reviewing the critical business processes and ranking them in a matrix along these five axes, we can identify the process (or processes) with the highest score: the undisputed candidate to benefit from Edge Computing.
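This ranking can be as simple as a scored table. A minimal sketch, where the process names and 1-5 scores per axis are purely illustrative:

```python
# Hypothetical scoring matrix: each candidate process is rated 1-5
# on the five axes above; the highest total is the lead use case.
AXES = ["unoptimised", "data_volume", "security", "real_time", "distributed"]

processes = {
    "pump_station_monitoring": [5, 4, 4, 5, 5],
    "warehouse_lighting":      [2, 1, 2, 1, 2],
    "turbine_maintenance":     [4, 5, 3, 4, 4],
}

def rank(candidates):
    """Return candidate processes sorted by total score, best first."""
    return sorted(candidates.items(), key=lambda kv: sum(kv[1]), reverse=True)

for name, scores in rank(processes):
    print(f"{name}: {sum(scores)}")
```

In practice each axis could also carry a business-specific weight, but even an unweighted sum is usually enough to make the leading candidate obvious.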

Phase 2: data collection

Once the use case has been selected, a phase of collecting, cleaning, tagging and storing the data handled by the process begins. To do this, the first Edge Nodes are deployed and, through the use of software connectors, data is collected from sensors, actuators, industrial equipment and internal or external servers.

The collected data can be cleaned to remove inconsistencies and labelled to improve further processing. Simple data processing can also be included in this phase, such as generating alarms for anomalous data or producing basic indicator reports on the process. This is not the ultimate aim of the deployment, but it helps to debug errors and reach better conclusions.
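The cleaning and alarming steps described above can be sketched in a few lines. The field names, sensor ranges and alarm threshold below are illustrative assumptions, not part of any specific connector:

```python
# Minimal sketch of Phase 2 post-processing: drop inconsistent
# readings, then label anomalous values so downstream models and
# simple alarm reports can use the label.
def clean(readings, low=-40.0, high=125.0):
    """Discard readings outside the sensor's plausible physical range."""
    return [r for r in readings if low <= r["value"] <= high]

def label_alarms(readings, threshold=90.0):
    """Tag each reading with an alarm flag for later processing."""
    for r in readings:
        r["alarm"] = r["value"] > threshold
    return readings

raw = [
    {"sensor": "temp-01", "value": 72.5},
    {"sensor": "temp-01", "value": 999.0},   # transmission glitch
    {"sensor": "temp-01", "value": 93.1},
]
processed = label_alarms(clean(raw))
```

Even this level of processing at the node already pays off: glitches are caught close to the source instead of polluting the training data collected later.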

Phase 3: training of models

Using the data that is being continuously collected, the model training phase begins. In this phase there are key aspects such as standardisation, the right selection of tools, and design for model interoperability, which are well described in the Harvard Business Review article on how to scale AI. In this process, led by data scientists, Edge Computing platforms help with MLOps functionalities, which allow models to be generated, tested and executed in an agile and secure way.
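The packaging side of MLOps is worth illustrating: whatever the model is, it should leave the training step as a versioned artifact that edge nodes can download and execute. A deliberately simplified sketch, using a mean/std anomaly baseline as a stand-in for a real model:

```python
import json
import statistics

# Illustrative stand-in for the training step: fit a trivial mean/std
# anomaly baseline on collected data and export it as a versioned
# artifact. A real project would use a proper ML toolchain; the
# packaging idea (model + metadata, versioned together) is the point.
def train(values):
    return {"mean": statistics.mean(values), "std": statistics.stdev(values)}

def package(model, version):
    """Serialise the model with its version so deployments are traceable."""
    return json.dumps({"version": version, "model": model})

data = [70.1, 71.4, 69.8, 72.0, 70.6]   # readings from Phase 2
artifact = package(train(data), version="1.0.0")
```

Versioning every artifact is what later makes it safe to roll models forward and backward across thousands of nodes in Phase 4.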

Phase 4: deployment, operation and governance of distributed AI

Once the data scientists decide that the models are sufficiently trained, systems come into play, like the one Barbara provides, that allow you to send, start, monitor, stop or update applications and models on thousands of distributed Edge Nodes. Depending on the volume of the deployment, it may be worth making progressive roll-outs by location until the entire territory is covered. At this point, once the distributed applications are controlling the process with the trained models, we will have reached the maximum potential of Edge Computing and can contrast the improvements obtained with the expectations defined in the first phase of the process.
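The logic of a progressive roll-out can be sketched independently of any particular platform. Here `deploy` and `healthy` are placeholders for whatever the orchestration system actually provides; the point is the gate between locations:

```python
# Sketch of a progressive roll-out: deploy one location at a time and
# halt if a health check fails, so a bad build never spreads further.
def rollout(nodes_by_location, deploy, healthy):
    completed = []
    for location, nodes in nodes_by_location.items():
        for node in nodes:
            deploy(node)
        if not all(healthy(n) for n in nodes):
            return completed, location     # stop here, report the failure
        completed.append(location)
    return completed, None

nodes = {"north": ["n1", "n2"], "south": ["s1"], "east": ["e1", "e2"]}
deployed = []
# Simulated run where node "s1" fails its health check.
completed, failed_at = rollout(nodes, deployed.append, lambda n: n != "s1")
```

In this simulated run the roll-out completes the first location, halts at the second, and never touches the third, which is exactly the blast-radius containment a staged deployment is for.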

Choosing the right type of Edge Computing

Edge Computing is a generic type of architecture. When applied to a specific industry or project, it is important to be aware of its different layers in order to use the one that best suits the project. It is increasingly common to differentiate between two types of Edge, Thick Edge and Thin Edge, which refer to where data processing occurs.

Thick Edge

This is the processing that takes place at nodes located in the network operator's backbone infrastructure, but close to the client devices. In a mobile cellular network, this may be the antenna to which the devices connect; in a fixed network infrastructure, it may be a server located in the data centre closest to our location.

It is called "Thick" because these nodes usually have high processing capacity: firstly, because they are located in places where power consumption or space is not an issue, and secondly, because the network operator has to be able to process data from multiple end customers. When telecoms operators talk about Edge Computing, they are referring to this type of Edge.

Thin Edge

Thin Edge implies that the processing is done on nodes owned by the end customer, located on their local network, and therefore even closer to their devices. The adjective "Thin" is appropriate here, as these nodes are usually more limited in processing capacity and consume fewer resources than Thick Edge devices.

While it is not possible to draw a perfect line between Thick Edge and Thin Edge, as there are cases where both models could work, it is worth understanding their differences from several points of view in order to choose the one that best suits our project:

  • From a latency point of view, the Thin Edge can process data much faster than the Thick Edge. The great promise of network operators for eliminating latency is 5G, but even with the new standard it is difficult to go below 20 ms in practice, whereas if the processing is done on the local network we can reach "near real time", with latencies close to 1 ms.
  • In terms of security and privacy, the Thin Edge preserves data privacy better, as the data never leaves the customer's local network. With the Thick Edge, the data travels one step further, to the operator's infrastructure, which is beyond our control.
  • In terms of cost, a Thin Edge deployment usually requires a higher initial investment for the purchase and installation of nodes, while the Thick Edge, involving shared nodes from the network operator, usually follows "IaaS" (Infrastructure as a Service) pay-as-you-go structures that do not require an initial investment.
  • Finally, as mentioned above, Thick Edge Nodes can process a larger volume of data because they are endowed with more resources than Thin Edge Nodes.

This makes the Thin Edge much more suitable for environments that require fast response latencies, that want to isolate themselves as much as possible from the network operator for security or privacy reasons, and that work with CAPEX budgets, such as the Energy and Water industries.

The Thick Edge, by contrast, is more suitable for systems where latency is not absolutely critical, which need to process data flows of significant bandwidth and tend to operate with operational costs (OPEX) rather than investment. This places the technology closer to lighter, more consumer-facing industries such as Fintech, Media or Retail.
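The latency trade-off above can be sanity-checked with a back-of-the-envelope calculation. The 1 ms and 20 ms figures come from the comparison above; the processing time and control deadline are illustrative assumptions:

```python
# Does a control loop's round trip (two network hops plus inference)
# fit its deadline under each edge model?
def fits_deadline(network_ms, processing_ms, deadline_ms):
    return 2 * network_ms + processing_ms <= deadline_ms

PROCESSING_MS = 5    # assumed inference time at the node
DEADLINE_MS = 10     # assumed hard real-time control deadline

thin_ok = fits_deadline(1, PROCESSING_MS, DEADLINE_MS)    # local network
thick_ok = fits_deadline(20, PROCESSING_MS, DEADLINE_MS)  # operator edge
```

Under these assumptions, only the Thin Edge meets the deadline, which is why fast-response control use cases gravitate towards it.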

Barbara, The Cybersecure Edge Platform for Industries

Barbara Industrial Edge Platform helps organizations simplify and accelerate their Edge AI deployments, making it easy to build, orchestrate and maintain container-based or native applications across thousands of distributed edge nodes:

  • Real-time data processing: Barbara allows for real-time data processing at the edge, which can lead to improved operational efficiency and cost savings. By processing data at the edge, organizations can reduce the amount of data that needs to be transmitted to the cloud, resulting in faster response times and reduced latency.
  • Improved scalability: Barbara provides the ability to scale up or down depending on the organization's needs, which can be beneficial for industrial processes with varying levels of demand.
  • Enhanced security: Barbara offers robust security features to ensure that data is protected at all times. This is especially important for industrial processes that deal with sensitive information.
  • Flexibility: Barbara is a flexible platform that can be customized to meet the specific needs of an organization. This allows organizations to tailor the platform to their specific use case, which can lead to improved efficiency and cost savings.
  • Remote management: Barbara allows for remote management and control of edge devices, applications and data, enabling organizations to manage their infrastructure from a centralized location.
  • Integration: Barbara can integrate with existing systems and platforms, allowing organizations to leverage their existing investments and improve efficiency.