Artificial Intelligence | News | Insights | AiThority

Wait, You Can Do That? – An AI Update for T&D Utilities

Artificial intelligence (AI) never seems far from the headlines, but OpenAI’s most recent incarnation of ChatGPT has certainly (re)captured imaginations in the here and now. While columnists fret over the death of the undergraduate essay (and, perhaps, over their own jobs), an equally exciting, albeit less flashy, AI revolution has been underway in the transmission and distribution (T&D) utilities space.

So, now seems a good time to check in on the state of play for practical applications of AI in the T&D sector and to give a glimpse of what we can do today, what we’ll be able to do tomorrow, and what will follow shortly thereafter.

Today

In the last few years, we have made some impressive advances in AI with regard to creating living digital twins.

For background, we build our 3D digital twins of powerline infrastructure using a variety of sensor data types. We use LiDAR (light detection and ranging) data to create point clouds, overlaid with high-definition RGB (red, green, blue) imagery, with the capacity to incorporate hyperspectral imagery, satellite images, and other data.

The AI algorithms then make sense of this to construct an intelligible, navigable 3D digital twin of the network – but doing so is anything but simple.

Instance identification

For example, something seemingly straightforward, such as identifying an individual utility pole, is actually rather complicated. First, there is the identification itself: how can the software tell a pole from another vertical object, such as a tree? Here, we can collect sufficient examples of manually labelled point clouds and fine-tune our deep neural models, adapting algorithms already developed to identify trees and teaching them that ‘trees’ that are exceptionally straight and have no branches are actually poles. However, you then face the challenge of edge cases that deviate significantly from the norm. In the real world, for example, a tree may temporarily be used to prop up wires in a remote location; the neural model will then get confused, and no wonder, since the object is essentially both a pole and a tree. Another example is a tree with no branches remaining; even to a human looking at point-cloud data, it may look like a pole.
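The pole-versus-tree distinction described above can be illustrated with a deliberately simple geometric heuristic: a pole-like object is tall, narrow, and straight, with no branch-like lateral spread. This is a toy stand-in for the fine-tuned deep models discussed in the text; the thresholds and feature choices are invented for illustration.

```python
import math

def classify_vertical_object(points):
    """Classify one clustered point set as 'pole' or 'tree' using simple
    geometric heuristics: poles are tall, narrow, and uniformly straight.
    points: list of (x, y, z) tuples for a single vertical object.
    Thresholds are illustrative, not production values."""
    zs = [p[2] for p in points]
    height = max(zs) - min(zs)
    # Horizontal radius of the cluster around its vertical centroid axis.
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    max_radius = max(math.hypot(p[0] - cx, p[1] - cy) for p in points)
    # A pole-like object is much taller than it is wide and shows no
    # branch-like lateral spread.
    if height > 5.0 and max_radius < 0.5:
        return "pole"
    return "tree"

# Synthetic examples: a straight ~10 m shaft vs. the same shaft with a crown.
pole = [(0.05 * math.cos(i), 0.05 * math.sin(i), i * 0.1) for i in range(100)]
tree = pole + [(2.0 * math.cos(i), 2.0 * math.sin(i), 8.0) for i in range(20)]
print(classify_vertical_object(pole))  # pole
print(classify_vertical_object(tree))  # tree
```

It also shows why the edge cases in the text are hard: a branchless tree trunk passes the same geometric test as a pole, so a purely geometric rule cannot resolve it.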

Then there is instance individualization. Imagine you are standing in a field, looking directly down the length of a transmission line. You will see several poles stretching into the distance and – from a two-dimensional perspective – they will overlap. How can AI count and delineate between individual instances of powerline components in such an image? 

Here, the utility sector can enjoy a degree of second mover advantage, because our colleagues working on autonomous vehicles as well as human pose estimation have already done some of the hard work for us. A self-driving car in a lane of traffic needs to know whether there is one, two or several cars in front of it in the lane. Like our utility poles, the car’s actual 2D view will be a smudge of overlapping car components from which it must identify individual instances of vehicles – and given the safety constraints that sector is working under, you can trust that they have put some serious resources into getting this right.
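The individualization problem becomes tractable once you work in 3D rather than in the overlapping 2D view: objects that smudge together in a photograph separate cleanly on the ground plane. The following is a minimal clustering sketch of that idea (a small union-find grouping of ground-plane coordinates); real pipelines use learned 3D instance-segmentation networks, and the `eps` threshold here is an assumed illustrative value.

```python
def count_instances(points_xy, eps=1.0):
    """Count distinct object instances by clustering ground-plane (x, y)
    coordinates: points closer than `eps` metres belong to one instance.
    A minimal union-find sketch of what instance segmentation achieves."""
    n = len(points_xy)
    parent = list(range(n))

    def find(i):
        # Find cluster root with path compression.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Union any two points within eps of each other.
    for i in range(n):
        for j in range(i + 1, n):
            dx = points_xy[i][0] - points_xy[j][0]
            dy = points_xy[i][1] - points_xy[j][1]
            if dx * dx + dy * dy <= eps * eps:
                parent[find(i)] = find(j)

    return len({find(i) for i in range(n)})

# Three poles along a line: they overlap in a head-on 2D photo,
# but their ground-plane positions separate cleanly.
poles = [(0.0, 0.0), (0.1, 0.0), (50.0, 0.2), (50.1, 0.1), (100.0, 0.0)]
print(count_instances(poles))  # 3
```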


Furthermore, if you take as an example the complexity of trying to separate individuals out of a crowd and determine their pose while keeping track of every limb, the fairly isolated powerline infrastructure is much easier to digitize accurately – hence we can pick up some rather powerful and ingenious AI model architectures as a starting point. 

This is, however, just the start. We must then modify the models to fit our specific use cases and work on collecting accurately annotated data. Then we must solve the aforementioned edge cases: the last 5% is particularly difficult. After all, we strive to approach 100% accuracy, as safe and consistent power delivery can mean the difference between life and death.

The digital twin paradigm is not all about automated algorithms and menial labelling. Whenever we speak to utility engineers, they immediately see the value in this apparently innocuous AI application. This is because for many utilities – especially older ones – their assets across the network have never been properly mapped. Sending out line crews to survey would be prohibitively expensive, so instead they make do with approximate knowledge. However, the advent of AI paired with easy drone and helicopter based data collection suddenly makes this feasible. Utilities can then better plan maintenance, assess risk, and therefore enjoy more accurate financial forecasting, among other things.


On woods, trees, etc.

Utilities must not only model their own assets, such as poles; they also need to contextualize their digital twins with what surrounds the infrastructure. Vegetation management is a prime example. Utilities must stay on top of trees encroaching on powerlines, which risk damaging assets or even starting fires. Millions are spent every year on walking the line, spotting problem trees, and sending crews to cut them back.

AI can help. Just as with the poles, we must be able to identify trees and map them in the point cloud using LiDAR. Instance identification is also critical – you must be able to distinguish between individual trees and even branches to assess risk. However, we must go one step further in this case, and identify which species of tree we are looking at. Different species have different growth rates and propensity to break in adverse weather, and therefore pose different risks. 
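Why species matters can be shown with a back-of-the-envelope risk calculation: a fast-growing species closes the gap to a conductor far sooner than a slow-growing one, so it should be trimmed on a shorter cycle. The growth rates below are assumed values purely for illustration, not real forestry data.

```python
# Illustrative (made-up) crown growth rates in metres per year.
GROWTH_M_PER_YEAR = {"poplar": 1.5, "oak": 0.5, "pine": 0.8}

def years_to_encroachment(species, gap_to_conductor_m):
    """Estimate years until crown growth closes the current gap
    between a tree and the conductor, given its species."""
    return gap_to_conductor_m / GROWTH_M_PER_YEAR[species]

# Same 3 m gap, very different urgency depending on species.
for species in ("poplar", "oak"):
    print(species, round(years_to_encroachment(species, 3.0), 1))
```

Under these assumed rates, a poplar 3 m from the line needs attention within two years, while an oak at the same distance can wait six: the same geometry, a very different maintenance schedule.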

If you read the research literature on this, you will find various claims of >99% species detection accuracy. However, often these results can only be replicated during specific seasons, at a specific time of day, etc. – everything has to be just right. In fact, most researchers work with open source data which may only comprise a single short data collection flight. The real world is not so obliging. However, today we can overlay hyperspectral image data onto a 3D twin created with LiDAR and RGB imagery and use extensively-trained AI algorithms to achieve >95% species identification accuracy. This is attained using a number of techniques invented in-house or borrowed from tangential domains, combined with extensive domain knowledge from our team.     
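The multimodal overlay described above can be sketched as feature fusion: geometric features from the LiDAR crown are concatenated with per-band reflectance statistics from the hyperspectral pixels covering the same tree, and a classifier operates on the combined vector. The feature names, centroid values, and the nearest-centroid classifier below are all invented for the demonstration; production systems use trained deep models.

```python
def fuse_features(lidar_feats, hyperspectral_bands):
    """Concatenate LiDAR geometry (e.g. [height_m, crown_radius_m]) with
    mean reflectances per hyperspectral band into one feature vector."""
    return list(lidar_feats) + list(hyperspectral_bands)

def nearest_centroid(feature_vec, centroids):
    """Return the species whose centroid is closest in feature space."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda sp: dist2(feature_vec, centroids[sp]))

centroids = {  # per-species centroids; values invented for the demo
    "birch": [18.0, 3.0, 0.42, 0.61],
    "spruce": [22.0, 2.0, 0.30, 0.48],
}
sample = fuse_features([17.5, 3.2], [0.44, 0.60])
print(nearest_centroid(sample, centroids))  # birch
```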

Today, our partners at utilities are often excited by these specific use cases, and we are excited too. However, the bigger prizes are yet to come and as an industry we need to get more used to looking at the bigger picture regarding AI. It is notable that we need humans to step back and see the wood for the trees, just as we are teaching the AI to see the trees for the woods. 


Tomorrow

With that in mind, what’s next? In short, more detailed and accurate living digital twins made more affordably and quickly using advances in AI. Consider: today we combine enormous volumes of LiDAR and RGB data. Over time, we can train AI to recognize common patterns across the two. In effect, this means that we will be able to build reasonably accurate realistic 3D models with simulated physical properties which resemble the real world using only RGB imagery – multimodal models built from monomodal data. In practice, this might mean that a utility flies an initial helicopter flight to capture a baseline of LiDAR data (plus others), then ‘tops-up’ that data using drone-mounted RGB cameras every six months, thereby keeping the digital twin up to date more easily and cost-effectively.

We are also very close to a world where utility maintenance programs are predictive and preventative rather than reactive and ad hoc, based on AI models of deterioration and risk prediction. In fact, we can largely do this today, but few utilities yet have enough data-rich living digital twins to do so.
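What a predictive maintenance program does with a data-rich digital twin can be reduced to a toy scoring sketch: combine asset age, inspection findings, and environment into a priority score, then work the queue from the top down. The weights and the wildfire multiplier below are illustrative assumptions, not a real deterioration model.

```python
def maintenance_priority(age_years, defect_count, wildfire_zone):
    """Toy priority score for scheduling preventative maintenance.
    Weights are illustrative only."""
    score = 0.02 * age_years + 0.3 * defect_count
    if wildfire_zone:
        score *= 1.5  # escalate anything that could ignite a fire
    return round(score, 2)

assets = [
    ("pole-17", maintenance_priority(40, 1, wildfire_zone=True)),
    ("pole-18", maintenance_priority(5, 0, wildfire_zone=False)),
]
# Highest score first: inspect and repair in that order.
for name, score in sorted(assets, key=lambda a: -a[1]):
    print(name, score)
```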

Part of that is flowing in the wealth of data currently collected by line-workers. Today, this takes a lot of manual data cleansing, as humans tend to record information in esoteric ways that machines cannot easily parse. However, this work is necessary: the average age of a line-worker in the US is 41, and they commonly retire young, between 55 and 60. An ageing workforce means we must do what we can to capture that knowledge before it leaves. Fortunately, AI can help do that, and the younger workers who do come through should be comfortable using digital tools to multiply their work throughput, thus increasing the consistency of power delivery and potentially saving lives in the process.
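A first, rule-based pass at that cleansing job can be sketched as abbreviation normalization: expanding field shorthand into canonical terms before anything downstream sees it. The abbreviation map and note format here are invented for illustration; real pipelines pair rules like these with learned language models.

```python
import re

# Invented shorthand map for the demo: field abbreviation -> canonical term.
ABBREV = {"xarm": "crossarm", "ins": "insulator", "dmgd": "damaged"}

def normalize_note(note):
    """Lower-case a free-text field note, split on whitespace and common
    punctuation, and expand known abbreviations into canonical terms."""
    tokens = re.split(r"[\s,;]+", note.lower().strip())
    return " ".join(ABBREV.get(t, t) for t in tokens if t)

print(normalize_note("Xarm dmgd; ins ok"))  # crossarm damaged insulator ok
```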

Shortly thereafter

And what about after that – the figurative day after tomorrow? In short, we will see digital twins that boast immense levels of detail and predictive modelling with superhuman accuracy. This will create efficiencies in functions we see today – streamlining maintenance regimes and equipment purchasing, modelling risk more accurately for better rates on insurance and financing, better prevention of outages and fires, etc. No doubt we will also see new operations emerge as bright minds experiment with tomorrow’s tools.

We will also see the general trend continue that we can do more with less data. Today, we employ an in-house team of expert data labelers who polish the data so that it can be understood by the AI algorithms. These algorithms are then trained over time on vast quantities of data, and their predictive and analytical powers improve as they do so. That in turn means they require less data as input to achieve the same outcome.

Oh, and those data labelers: they might just be a model for the line-worker of the future, who operates predominantly in the living digital twin rather than the real world. Imagine that: a world where even the line-workers can w*************. Anything is possible with AI.

