Category Archives: BigData

Network intelligence from chaos

Building a single network view – Part 2

In part two of our series, following the recent blog post in which we described five ways to reduce network downtime, we focus on the importance of unified, or single-view, reporting.

Monitoring and reporting of network performance, with multiple vendors and domains involved, is a tremendous challenge for telco operators, both from a resource and an intelligence point of view. The increase in data-driven digital content is placing ever-greater demands on mobile network infrastructure. To assure customer experience, mobile operators are adopting various methods for collecting information from multiple sources in the network, to improve monitoring of network performance and service quality. As well as identifying problem areas on the network so that prompt corrective action can be taken, this information is also used for network optimisation and capacity planning.

An essential requirement is for the results of this data collection process to be reported on, and visualised in a simple and intuitive fashion in near real time; including historical data at the granular level. Additionally, there is a growing demand for huge volumes of data to be retained for extensive periods of time, without degradation of reporting performance.
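To make the single-view idea concrete, unified reporting essentially means correlating KPI records from multiple vendor feeds onto a shared key such as cell site and time slot. The sketch below is purely illustrative; the feed shapes and KPI names are assumptions, not any real vendor or ZenPM interface:

```python
# Minimal sketch of a "single view": merge KPI records from several
# vendor feeds into one structure keyed by (cell, time slot).
# The data shapes here are hypothetical, for illustration only.

def unify(*feeds):
    """Merge per-vendor KPI dicts into a single unified view."""
    view = {}
    for feed in feeds:
        for (cell, slot), kpis in feed.items():
            view.setdefault((cell, slot), {}).update(kpis)
    return view

# Two hypothetical vendor feeds reporting different KPIs for the same cell.
vendor_a = {("cell-42", "10:00"): {"dropped_call_rate": 1.2}}
vendor_b = {("cell-42", "10:00"): {"throughput_mbps": 38.5}}

unified = unify(vendor_a, vendor_b)
print(unified[("cell-42", "10:00")])
# {'dropped_call_rate': 1.2, 'throughput_mbps': 38.5}
```

A reporting dashboard would then read from the unified view alone, rather than querying each vendor's system separately.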

An example of this is Vodafone UK, which implemented the SysMech ZenPM™ real-time Performance Management solution to measure and compare network quality and coverage from a customer experience perspective. The operator wanted to monitor the test results for each location against network performance statistics, and to provide results in a format suitable for senior management as well as network planning and optimisation engineers. Vodafone’s network performance and optimisation staff used the ZenPM desktop user interface to create dashboards and reports to monitor performance before and during the drive-by tests for each city – something unheard of before ZenPM.

The advanced features of ZenPM enabled Vodafone to address performance issues in affected locations, prior to the drive-by tests taking place, and to prioritise corrective actions for problems occurring during the tests.

The benefits that Vodafone and other organisations can realise by reporting as one across multiple platforms, domains, systems, companies and affiliations are numerous, and can have a significant impact on both performance and efficiency. To name a few:

  • Intuitive reporting can identify issues before they occur, enabling corrective action to be taken
  • Operators gain a near real-time view of their networks; the correlated picture identifies hot-spot issues, which they can drill into at any point to resolve
  • An automated reporting dashboard can save internal teams hours per day otherwise spent compiling reports and data from different sources
  • Up-to-date, high-level dashboards are immediately available to senior management

In our next blog we will be discussing the importance of predictive analytics as part of resolving network pain points.


Finding the needle in a big data haystack

Predicting and preventing network pain points – Part I

In a recent blog post we highlighted the five ways in which network downtime can be significantly reduced in order to improve customer service and brand loyalty. In this post we aim to address the first point, ensuring visibility, and provide some guidance on how this can be achieved.

As many companies in the telco sector will know, big data is nothing new; the volumes of data available have always been extensive, and for regulatory reasons some classes of data have to be stored for many years. However, frustratingly for many in the industry, the big data hype is drowning out the potential deliverables.

A common misconception is that ‘big data’ equals ‘unstructured data’ or ‘useless data’, which unfortunately, for some is true due to a lack of strategy or even a business case for using the data. Purely storing data will not make it useful, but that in itself is no justification for discarding it.

To make data useful there must be realisable benefits to the business, and an effective business strategy that ensures visibility of that data to the relevant business groups. However, this is where the cost implications arise and problems can start: if the wrong tools are used for the job, the costs can be considerable and the task can very quickly become like looking for a very expensive ‘needle in a haystack’!

A real ‘needle in a haystack’ case 

The pain: A mobile operator suffered a significant increase in the number of dropped calls on their network, equal to 2% of total calls. The operations department was using legacy fault management reporting and didn’t know what the problem was until the following day, when a retrospective report was pulled.

The diagnosis: A fault with the equipment at one of their major cell sites, which could have been picked up in minutes instead of hours or days. Unfortunately, the customer care department’s data wasn’t integrated with the legacy fault management software, meaning that the cause and effect couldn’t be identified and resolved in real-time.

The remedy: Ensure your operations provide a vertical (top down) view, so you can simultaneously view network and customer problems, and prioritise accordingly.
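The gap between the retrospective report and a real-time view can be sketched as a sliding-window alert: track the dropped-call rate over the most recent calls and flag the moment it crosses the 2% mark from the case above. This is an illustrative sketch only; the window sizes are assumptions, not parameters of any real fault management product:

```python
from collections import deque

class DropRateMonitor:
    """Alert as soon as the dropped-call rate over recent calls exceeds
    a threshold. Sketch only: 2% is the figure from the case above; the
    window and minimum-sample sizes are illustrative assumptions."""

    def __init__(self, window: int = 1000, min_calls: int = 100,
                 threshold: float = 0.02):
        self.calls = deque(maxlen=window)   # True = dropped call
        self.min_calls = min_calls
        self.threshold = threshold

    def record(self, dropped: bool) -> bool:
        """Record one call outcome; return True when the alert fires."""
        self.calls.append(dropped)
        if len(self.calls) < self.min_calls:
            return False  # not enough data to judge yet
        rate = sum(self.calls) / len(self.calls)
        return rate > self.threshold

# Simulate a fault causing a 5% drop rate: the alert fires after the
# first hundred calls, not the following day.
monitor = DropRateMonitor(window=200)
alerted_at = None
for i in range(300):
    dropped = (i % 20 == 0)  # every 20th call dropped
    if monitor.record(dropped):
        alerted_at = i
        break
```

The same stream fed into a daily batch report would surface the fault hours later, which is exactly the delay the operator in this case suffered.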

For a preventive cure that can collect and analyse terabytes of raw network and service platform performance data in real time, contact us.

How to manage what you can’t see

Preparing for the anonymisation of individuals’ data

Our CTO, Chris Mathews, considers the European Commission’s proposal to give consumers the right to be forgotten online, meaning they will have the choice to become ‘invisible’ online and erase their digital footprint. In this post we highlight the key points on how businesses can continue to manage their data should the proposal come into force.

The end of the data industry?

The proposal, put forward by Viviane Reding, the EC justice commissioner, would allow individuals to demand that data about them is deleted from companies’ data stores.

There will of course be some organisations that will feel the negative effect of the proposal, and indeed companies such as Google have been lobbying against the motion, arguing that it will leave gaps in their data. The changes could take up to two years to come into effect, and those companies that prepare now will cope with them far better than those that do nothing.

Should the anonymisation of individuals’ data be required, here are a few ways organisations can prepare:

  • Find the value in data where the individual may benefit, e.g. reward schemes
  • Update your existing terms and conditions to confirm the agreement of customers to allow you to use their data
  • Show test cases to demonstrate that data can be anonymised
  • Educate your customers on the potential benefits of releasing personal information
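A common first step towards the ‘test cases’ point above is pseudonymisation: replacing a direct identifier with a keyed hash, so records can still be grouped per customer without revealing who the customer is. The sketch below is illustrative only and not legal advice; whether a keyed hash qualifies as anonymisation under the final regulation is a question for your data protection officer:

```python
import hashlib
import hmac
import secrets

# The key must be stored separately from the data it protects;
# destroying the key makes the tokens irreversible.
SECRET_KEY = secrets.token_bytes(32)

def pseudonymise(customer_id: str) -> str:
    """Map a customer identifier to a stable, keyed token."""
    return hmac.new(SECRET_KEY, customer_id.encode(), hashlib.sha256).hexdigest()

# Hypothetical record: analytics can still count events per customer,
# but the stored token no longer identifies the person directly.
record = {
    "customer": pseudonymise("alice@example.com"),  # hypothetical identifier
    "dropped_calls": 3,
}
```

Because the same input always yields the same token, aggregate analytics (churn, usage, fault impact per customer) keep working after the direct identifiers are removed.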

For further information on the topic, read Chris’s Q&A interview with Tineka Smith at Computer Business Review here.


Is a Streaming Analytics Correlation Engine Powering Your Big Data?

Hype or no hype, big data is not a new phenomenon and can provide businesses incredible insight. That is, if businesses know what to look for, how to use the data and have the right tools to hand. In this … Continue reading


Big Data Never Sleeps

Takeaways from the TM Forum Big Data Analytics Summit. SysMech Sales Director, Terry Harding, reflects on his attendance at the recent TM Forum Big Data Analytics Summit in Amsterdam and the key points emerging. Big data is in itself … Continue reading


Big Data’s big week – from mainstream to monetisation

This week saw the topic of ‘big data’ discussed on Radio 4’s Today programme with Dr Shirley-Ann Jackson, President Obama’s advisor. Reaching beyond the technical audience, the programme described a future where tremendous amounts of data could be mined and … Continue reading