Why is OSS data consolidation so important for enhancing service performance?

08 June 2021

OSS data consolidation is required to deliver a complete, holistic view of network assets, their performance and the ways in which they relate to one another. In turn, this view provides the basis for both efficient operations and effective planning that considers all affected elements, as well as gaps that need to be closed.

OSS data provides real-time and historic information to understand performance

An operator’s OSS is composed of a number of (often complex) systems that perform both discrete and related tasks. Each such system produces a continual output of data, reflecting status changes, health reports, statistics and the like. This is used for a variety of purposes, such as:

  • Service assurance
  • Service availability checks
  • Service delivery validation
  • Service planning
  • Service optimization

And so on. Some of this data is produced in broadly consistent, standardized formats, but much of it is proprietary. Moreover, it sits in separate silos, so related information ends up scattered across systems.

Since a complete overview of service performance and optimization opportunities requires a holistic view of this data, operators have long been challenged to aggregate and consolidate it – both to obtain a single view and to feed downstream systems that can consume the data to make better decisions.
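To make the idea concrete, here is a minimal sketch of that consolidation step: records from two hypothetical OSS silos (a fault-management system and a performance system, with made-up field names) are normalized into a common schema and merged into a single per-asset view. All system names, keys and record shapes are illustrative assumptions, not a real OSS API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AssetRecord:
    """Common schema shared by all sources after normalization."""
    asset_id: str
    source: str
    status: str
    utilization_pct: Optional[float] = None

def from_fault_mgmt(raw: dict) -> AssetRecord:
    # The (hypothetical) fault system uses its own keys; map them across.
    return AssetRecord(asset_id=raw["ne_id"], source="fault", status=raw["oper_state"])

def from_performance(row: list) -> AssetRecord:
    # The (hypothetical) performance export: element-id, state, utilization
    return AssetRecord(asset_id=row[0], source="perf",
                       status=row[1], utilization_pct=float(row[2]))

def consolidate(records: list) -> dict:
    """Merge per-source records into one view keyed by asset ID."""
    view: dict = {}
    for r in records:
        view.setdefault(r.asset_id, {})[r.source] = r
    return view

records = [
    from_fault_mgmt({"ne_id": "CELL-0421", "oper_state": "up"}),
    from_performance(["CELL-0421", "up", "87.5"]),
]
view = consolidate(records)
```

Once both sources are keyed by the same asset identifier, `view["CELL-0421"]` holds the fault status and the utilization figure side by side – the "single view" the paragraph above describes.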

OSS data is key for effective planning but correlation between sources is required

For example, if we want to decide where and how to increase local mobile coverage, we need to fully understand both coverage gaps and demand, so that the right balance can be struck. If processing of OSS data identifies a location as needing enhancement or upgrade, any related activities also need to be understood.

Since a local cell site must be connected to the macro network (and, with 5G, potentially to any required local processing solutions – for example, as operators build out mobile edge cloud capabilities for new high performance applications that have low latency requirements), all such dependencies need to be clearly understood too.

These dependencies need to be correlated with any existing resources and capabilities, so that they can be assessed as part of the complete project, and the relevant processes triggered – requesting an upgrade for the fiber backhaul to enhance capacity, planning a new connection to any related service enabling solutions, and so on.
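The dependency check described above can be sketched in a few lines: before committing to a site upgrade, the planner inspects a correlated resource (here, fiber backhaul headroom) and triggers the follow-on work orders only when they are actually needed. Site names, link IDs and the capacity figures are assumptions for illustration only.

```python
# Hypothetical correlated inventory: which backhaul link serves which site,
# and how much of its capacity is already in use.
BACKHAUL = {"SITE-17": {"link": "FIBER-A12", "capacity_gbps": 1.0, "used_gbps": 0.8}}

def plan_site_upgrade(site_id: str, extra_demand_gbps: float) -> list:
    """Return the list of work orders a site upgrade implies."""
    orders = ["upgrade-radio:" + site_id]
    bh = BACKHAUL[site_id]
    headroom = bh["capacity_gbps"] - bh["used_gbps"]
    if extra_demand_gbps > headroom:
        # The existing backhaul cannot absorb the new demand,
        # so a capacity upgrade for the fiber link is requested too.
        orders.append("upgrade-backhaul:" + bh["link"])
    return orders
```

With 0.5 Gbps of extra demand against 0.2 Gbps of headroom, `plan_site_upgrade("SITE-17", 0.5)` yields both the radio upgrade and a backhaul upgrade order; with only 0.1 Gbps of new demand, the radio upgrade alone suffices. The point is that this decision is only possible when the site and its backhaul are held in one correlated view.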

OSS data consolidation is a prerequisite for driving efficiency gains

This requires all of the OSS data to be consolidated and aggregated, so that a holistic view can be built, analyzed and interpreted. A key input to this view is the state (capacity, usage and more) of the resources required to deliver the specific change – in this case, the upgrade of a cellular site. Given the overall number of sites in a national cellular network, data from the entire network must be consolidated in this way.

The problem has been that, for many operators, this data is maintained separately, within the specific OSS systems concerned, so consolidating it has often been a manual process. Given the extent of operators’ networks and their growing complexity (5G brings massively more cell sites, for example, as well as a more fragmented, distributed processing fabric), a manual approach is neither aligned with transformation goals nor, more practically speaking, compatible with automation.

As such, operators need to consider OSS data consolidation as a key step towards more efficient operations, with greater control of costs – and also cost reduction to help offset the costs of rolling out denser, higher performing networks.

Network inventory data is fundamental to all network enhancement goals

There is a lot of data to consolidate. One key input is network inventory – what solutions and resources are available, together with their status and capacity. In addition, the logical relationship between resources and the services they support and enable also needs to be considered. It’s not just where and what something is (a fiber route); it’s also the role it plays in delivering, say, a 1 Gbps path connecting a radio cell site to the other solutions on which it depends. Connecting physical and logical assets is thus key to understanding how to manage and enhance the network.
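The physical/logical link above can be modelled very simply: a physical asset record, a logical service record that lists the physical assets carrying it, and a query that answers "which services depend on this asset?". Class names, IDs and fields here are illustrative assumptions, not a real inventory schema.

```python
from dataclasses import dataclass, field

@dataclass
class PhysicalAsset:
    asset_id: str
    kind: str       # e.g. "fiber-route"
    location: str

@dataclass
class LogicalService:
    service_id: str
    bandwidth_mbps: int
    endpoints: tuple
    carried_by: list = field(default_factory=list)  # physical asset IDs

# A fiber route and the 1 Gbps path it carries to a cell site.
fiber = PhysicalAsset("FIB-204", "fiber-route", "duct 7, north ring")
path = LogicalService("PATH-1G-CELL-0421", 1000,
                      ("CELL-0421", "AGG-NODE-3"), carried_by=["FIB-204"])

def impacted_services(asset_id: str, services: list) -> list:
    """Which logical services depend on a given physical asset?"""
    return [s.service_id for s in services if asset_id in s.carried_by]
```

With this linkage in place, `impacted_services("FIB-204", [path])` immediately reveals that the 1 Gbps path to the cell site rides on that fiber route – exactly the physical-to-logical correlation the inventory view needs to support.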

This information is contained within OSS data and needs to be brought together, so that it can be properly considered and the right operational decisions taken. That’s why operators need a consolidated view of the relevant OSS data, correlated so that dependencies and interconnections can be clearly understood.

Collecting, consolidating and correlating different OSS data inputs is essential

That’s the role of CROSS. It collects and organizes OSS data from any source, so that the right holistic view can be obtained in a single place. It provides a platform that delivers a correlated view across physical, logical, virtual and service layers, allowing operators to easily process and interpret network topology and to manage their networks.

In turn, this data can be shared with other processes, fueling automation and driving greater interconnection between operational and service workflows and entities. It presents the holistic view necessary to drive efficiency and to automate key tasks – and to eliminate the dependency on manual processing of data.