Digital twins are one of the most important and frequently discussed aspects of IoT, yet many digital twins suffer from poor reliability and resiliency. To ensure high reliability and resiliency in a digital twin, there are three critical IoT performance tests to run:
- Pushing data to the digital twin
- Pulling data from the digital twin
- Ensuring the digital twin has the correct, most recent device status
By testing these three aspects of a digital twin, you can ensure high levels of performance for data entering a digital twin, data leaving a digital twin, and the data accuracy of the digital twin.
Read on to learn more from the IoT performance testing experts at MachNation.
Introduction
A digital twin, the digital representation of a physical device, is one of the most critical parts of an IoT solution and sits at the heart of almost all connected solutions. Without a digital twin, there are few practical or cost-effective ways to implement a scalable, secure IoT solution.
Digital twins are so important that IoT industry analysts routinely write about the concept in detail. Check out materials from IoT Analytics in its Digital Twin Insights Report 2020, Transforma on the transition from digital twin to digital master, and Gartner with its various digital-twin survey analyses.
Missing from all of these conversations is the seemingly simple question: Is an enterprise’s IoT digital twin reliable?
It’s a question MachNation routinely addresses during IoT performance testing with Tempest, our IoT simulation software. Enterprises often suffer from reliability and resiliency problems caused by poorly implemented digital twins. Poor platform and IoT solution reliability leads to
- Low customer satisfaction
- High customer churn
- Decreased revenues
- Increased IoT solution costs
IoT solutions that suffer from poorly implemented digital twins experience long message latencies, missed device alarms, and incorrect data presented in an IoT application, dashboard, or analytics tool. All of these problems reduce the value of an IoT solution, negatively impacting an IoT solution’s return-on-investment (ROI).
Improving digital twin reliability and resiliency
In the most recent release of Tempest, we have implemented a way to test the reliability, scalability, and performance of an IoT digital twin. The ability to test a digital twin is one of the unique characteristics of Tempest, a characteristic not found in legacy performance testing tools.
If you want to improve reliability of a digital twin, here are the three most important IoT performance tests to run, based on performance testing MachNation has completed for various Tempest enterprise clients.
1. Pushing data to the digital twin. A typical IoT device sends data to an IoT cloud platform to be used for workflows like visualization, condition monitoring, analytics, storage, and more. This information updates the digital twin on the IoT platform with the latest device status. Making sure that device status data can update the digital twin efficiently is one of the most critical performance characteristics of an IoT solution.
When we see problems with digital twin performance, they are often the result of poorly written APIs or poorly implemented digital twin models for an IoT solution. We encourage enterprises to build a configurable test template in Tempest to performance-test the digital twin's ability to accept device status update messages. Once configured, the Tempest worker engine sends messages into the digital twin at rates appropriate for a production-scale IoT solution. Then, Tempest's reporting pipeline provides data on message latency (the number of milliseconds it takes to update the digital twin) as well as message failure rates.
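The push test above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration: the `DigitalTwin` class, the message shape, and the `push_test` driver are stand-ins for a real IoT platform's twin API and for Tempest's worker engine and reporting pipeline, not any actual product interface.

```python
import statistics
import time


class DigitalTwin:
    """Hypothetical in-memory stand-in for a platform's digital-twin store."""

    def __init__(self, device_id):
        self.device_id = device_id
        self.state = {}

    def update(self, reported_state):
        # A real platform would validate, persist, and version this update.
        self.state.update(reported_state)


def push_test(twin, num_messages=1000):
    """Push status updates into the twin; report latency and failure rate."""
    latencies_ms, failures = [], 0
    for seq in range(num_messages):
        message = {"seq": seq, "temperature_c": 20.0 + (seq % 10)}
        start = time.perf_counter()
        try:
            twin.update(message)
        except Exception:
            failures += 1
            continue
        latencies_ms.append((time.perf_counter() - start) * 1000)
    latencies_ms.sort()
    return {
        "median_latency_ms": statistics.median(latencies_ms),
        "p99_latency_ms": latencies_ms[int(len(latencies_ms) * 0.99) - 1],
        "failure_rate": failures / num_messages,
    }


report = push_test(DigitalTwin("pump-017"))
print(report)
```

A production-scale test would replace the in-process loop with many concurrent senders over the platform's real ingestion protocol (MQTT, AMQP, or HTTPS), but the metrics of interest, latency percentiles and failure rate, stay the same.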
2. Pulling data from the digital twin. Typical IoT solutions allow users (or apps) to query data from the digital twin. For example, a factory floor technician wants to see which sensors are malfunctioning on a piece of equipment. He uses his IoT app to query the equipment’s digital twin to visualize the latest alarm status. Or, you leave your home and aren’t sure if you closed your IoT-enabled garage door. So you use your mobile IoT app to query the garage door’s digital twin to see if it’s open or closed. In both cases, the user is checking device status by querying the digital twin, not the actual device itself. These are some of the most common IoT workflows across all IoT use cases.
When we see problems with digital twin performance, they sometimes stem from extremely long latencies on queries against the digital twin. Pulling data from a digital twin should flow unencumbered and accommodate periodic heavy query volumes. If you can't efficiently query data from a digital twin, IoT solution reliability will be low, because the performance of IoT analytics, visualization, and IoT apps will be very poor.
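The pull test can be sketched the same way. Here the `QueryableTwin` class and its `query` method are again hypothetical stand-ins for a platform's twin-read API, and a thread pool simulates a burst of concurrent dashboard or app queries, the "periodic heavy querying volumes" described above.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor


class QueryableTwin:
    """Hypothetical in-memory twin holding the last reported device state."""

    def __init__(self, state):
        self.state = dict(state)

    def query(self, field):
        # A real platform would serve this from its twin store or a cache.
        return self.state.get(field)


def pull_test(twin, field="door_open", queries=500, workers=20):
    """Fire a burst of concurrent queries and report latency percentiles."""

    def timed_query(_):
        start = time.perf_counter()
        twin.query(field)
        return (time.perf_counter() - start) * 1000

    with ThreadPoolExecutor(max_workers=workers) as pool:
        latencies_ms = sorted(pool.map(timed_query, range(queries)))
    return {
        "median_latency_ms": statistics.median(latencies_ms),
        "p95_latency_ms": latencies_ms[int(len(latencies_ms) * 0.95) - 1],
    }


twin = QueryableTwin({"door_open": False, "alarm": None})
print(pull_test(twin))
```

Note that both garage-door examples in the text query the twin, never the device: the read path under test is twin-to-app, so its latency can be measured independently of device connectivity.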
3. Ensuring the digital twin has the correct, most recent device status. Even if data flows into and out of the digital twin with high performance, there are still cases where the data flowing into and out of the digital twin aren't the same. For example, an IoT device reports to its digital twin that an alarm was triggered at 6:23 am with a temperature reading of 52C; the digital twin update, however, records the 6:23 am alarm but not the temperature reading. If the digital twin updates inaccurately, any user (or IoT app) that queries it will receive inaccurate data. There are myriad reasons why a digital twin might not be accurately updated with the latest device status.
When we see this type of problem with a digital twin, it becomes very important to test that the data flowing into and queried out of each digital twin are identical. By comparing data and metadata associated with all or a fraction of digital twin updates during performance testing, we can identify the nature of the digital twin inaccuracy and tune the IoT solution to correct the problem. This alleviates the real-world issue of poor IoT solution reliability.
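The core of this comparison can be sketched as a simple field-by-field diff between what the device reported and what the twin stored. The `diff_twin_state` helper below is a hypothetical illustration of the idea, using the alarm-without-temperature example from the text; a real test harness would run this comparison over all or a sampled fraction of updates, including metadata such as timestamps.

```python
def diff_twin_state(pushed, queried):
    """Compare the device's reported update with the twin's stored state.

    Returns fields the twin dropped entirely ("missing") and fields it
    stored with a different value ("mismatched").
    """
    missing = [key for key in pushed if key not in queried]
    mismatched = {
        key: (pushed[key], queried[key])
        for key in pushed
        if key in queried and queried[key] != pushed[key]
    }
    return {"missing": missing, "mismatched": mismatched}


# The device reported an alarm with a temperature, but the (hypothetical)
# twin recorded the alarm without the temperature field.
device_update = {"alarm": "over_temp", "time": "6:23 am", "temperature_c": 52}
twin_state = {"alarm": "over_temp", "time": "6:23 am"}
print(diff_twin_state(device_update, twin_state))
# → {'missing': ['temperature_c'], 'mismatched': {}}
```

An empty diff on every sampled update is the pass condition; any non-empty result pinpoints which fields the twin pipeline is dropping or corrupting, which is exactly the information needed to tune the solution.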
Conclusion
By performance testing a digital twin, you can ensure high levels of performance associated with pushing data to a digital twin, pulling data from the digital twin, and ensuring the digital twin has the correct, most recent device status. This type of ongoing IoT performance testing leads to a highly reliable and resilient digital twin and end-to-end IoT solution.
MachNation Tempest and our team of testing engineers can help ensure the quality of your IoT digital twin and end-to-end IoT solution.
Want to learn more about high-performance digital twins?
Talk with us about your IoT digital twin challenges.
Find out more about MachNation Tempest.
Find out more about MachNation IoT platform testing and benchmarking.