Data LOSS at client side

Questions regarding the use of the .NET SDK 2.0 for Server or Client development or integration into customer products ...

abhijit.bhopale
Full Member
Posts: 6
Joined: 25 Sep 2021, 14:46

Data LOSS at client side

Post by abhijit.bhopale »

I have created a client-server application using UaModeler: the server has only a single node of type Int64, and I have written two clients.

Client 1 - continuously updates the node value every 100 milliseconds
Client 2 - only subscribes to the node that Client 1 is updating
The subscription is created with the following settings:
subscription.PublishingInterval = 0;
subscription.Lifetime = 600000;

Client 1 updates the node value every 100 milliseconds, but Client 2 receives only ~4800 of the 5000 updates.
If I update the data every 10 milliseconds, Client 2 receives fewer than 800 of the 5000 records.

Is this a known issue, or am I missing something?

Support Team
Hero Member
Posts: 3064
Joined: 18 Mar 2011, 15:09

Re: Data LOSS at client side

Post by Support Team »

Hi,

oh yes, you are missing quite a bit here.

1) Have you measured on Client 1 how accurate the interval you are writing at actually is? Is it 95 ms to 105 ms, or is it 75 ms to 115 ms? Please measure the timer accuracy of your write interval using a high-precision clock.

2) What "revised sampling interval" do you get on Client 2 when creating the monitored item? Have you considered that the sampling interval and the publishing interval are independent, and that the monitored item's data queue allows sampling much faster than publishing? (See the second picture here: https://documentation.unified-automation.com/uasdknet/3.1.0/html/L2UaSubscription.html, and the sketch after this list.)

3) What do you think is the best choice of sampling rate in order to not miss any change of data (considering the Nyquist-Shannon sampling theorem), assuming the queue size is 1 and the publishing interval is faster than the sampling interval?
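As a rough sketch of what 2) and 3) mean in client code, Client 2 could request a sampling interval faster than the 100 ms write rate plus a queue that bridges the publish cycle, and then check the revised values. Please treat the names below (DataMonitoredItem, SamplingInterval, QueueSize, CurrentPublishingInterval, CurrentSamplingInterval, etc.) as an approximation of the SDK API and verify them against the documentation of your SDK version:

using System;
using System.Collections.Generic;
using UnifiedAutomation.UaBase;
using UnifiedAutomation.UaClient;

// Sketch only: "session" is an already connected Session, "nodeId" is the
// Int64 variable that Client 1 writes.
var subscription = new Subscription(session)
{
    PublishingInterval = 100,   // requested publish rate in ms
    Lifetime = 600000
};
subscription.DataChanged += (sender, e) =>
{
    // with a queue > 1, every sampled value arrives here, not only the latest one
    Console.WriteLine("received {0} data changes", e.DataChanges.Count);
};
subscription.Create();

var item = new DataMonitoredItem(nodeId)
{
    SamplingInterval = 50,   // sample faster than the 100 ms writes (Nyquist)
    QueueSize = 10,          // the queue bridges the gap between sampling and publishing
    DiscardOldest = true
};
subscription.CreateMonitoredItems(new List<MonitoredItem> { item });

// The server may revise the requested values; always check what was granted.
Console.WriteLine("revised publishing interval: {0} ms", subscription.CurrentPublishingInterval);
Console.WriteLine("revised sampling interval:   {0} ms", item.CurrentSamplingInterval);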

Generally speaking: I am pretty sure that there is NO loss of data when the correct values are configured/requested for the intervals mentioned above. Secondly, I am pretty sure that OPC UA technology provides some "ingenious" features to prevent any loss of data (even across connection interruptions); however, this requires some basic understanding of the OPC UA subscription mechanism.
Best regards
Unified Automation Support Team

abhijit.bhopale
Full Member
Posts: 6
Joined: 25 Sep 2021, 14:46

Re: Data LOSS at client side

Post by abhijit.bhopale »

Hi,

Thank you for your response and the explanation. I am able to receive all the updates when the sampling is done faster than the publishing interval.

Still, I am not able to set the publishing interval to less than 50 ms using the .NET-based SDK, whereas with the C++ demo server a publishing interval below 50 ms is possible.
The revised publishing interval is always 50 ms, even if I request a value below 50 ms.

How can I set a publishing interval of less than 50 ms with the .NET OPC UA server? We need it for our performance evaluation.

I even tried updating MinPublishingInterval in the server code, as shown below.
// SubscriptionSettings
var subscription = new UnifiedAutomation.UaSchema.SubscriptionSettings()
{
    MinPublishingInterval = 10,
    PublishingIntervalResolution = 10,
    MaxSubscriptionCount = 500,
    MaxSubscriptionsPerSession = 100,
    MaxNotificationsPerPublish = 25000,
    MaxMessageQueueSize = 10000
};
application.SubscriptionSettings = subscription;

application.TransportSettings = new TransportSettings()
{
    SecurityTokenLifetime = 60000,
    InactiveChannelLifetime = 60000
};
ApplicationInstanceBase.Default.SetApplicationSettings(application);

Support Team
Hero Member
Posts: 3064
Joined: 18 Mar 2011, 15:09

Re: Data LOSS at client side

Post by Support Team »

Hi,

please carefully read the documentation:
https://documentation.unified-automation.com/uasdknet/3.1.0/html/L2BaseLibConfigSchema.html

It says:
1) MinPublishingInterval = (The minimum publishing interval in milliseconds. The minimum and default is PublishingIntervalResolution.)
2) PublishingIntervalResolution = (The minimum publish interval in milliseconds. The default is 100. The minimum is 50.)
With that you can see that the effective minimum is 50 ms, independent of whatever you have configured.
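For illustration (with the same caveat about exact property names as in the earlier sketch), you can request a smaller value from the client and print what the server grants, which makes the floor visible:

// Sketch only: request 10 ms, then check the value the .NET SDK server grants;
// the revised publishing interval will not go below the 50 ms resolution floor.
var subscription = new Subscription(session)
{
    PublishingInterval = 10   // requested, ms
};
subscription.Create();
Console.WriteLine("requested 10 ms, revised to {0} ms", subscription.CurrentPublishingInterval);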

Generally: the "performance measurement" you are trying to make does not make sense. At best you will measure the "accuracy" of the Windows timer implementation, which is approximately plus/minus 15 ms (no need to measure it, because it is already known). The timer resolution is given by the system heartbeat, which typically defaults to 64 beats/s, i.e. 15.625 ms. Depending on the Windows and .NET version it can be better, but that requires a different implementation, which in turn will not be available on all legacy systems. In any case (even with an accuracy of 1 ms) you will not be able to interpret the results, because they have nothing to do with "throughput", "bandwidth" or "speed" in general.

The OPC UA subscription mechanism was invented to protect the server from being overloaded: the server must "delay" (not deliver faster than) the configured rate. The client can request a slower rate to protect itself from being overloaded by fast delivery (e.g. when the client is logging data to a slow database). In most use cases (e.g. HMI and SCADA) the data is "displayed" for a human operator, in which case 500 ms to 2000 ms is more than enough. That said, the given limit of 50 ms is far better than required.

To make this point very clear: what you measure will always be a multiple of the publish rate, 50 ms in this case (independent of the number of tags in your subscription). You cannot measure throughput, you cannot measure speed, you cannot measure reaction time, and you cannot measure "jitter" (except the jitter of the Windows system timer accuracy, which is known to be about 15 ms), so where is the sense in that?
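If you want to see that granularity yourself (this also answers point 1 of the first reply), a plain .NET loop is enough, no OPC UA involved; on a default Windows configuration a requested 10 ms sleep typically comes back as roughly 15-16 ms:

using System;
using System.Diagnostics;
using System.Threading;

// Measure the effective timer granularity: ask for 10 ms and print what you get.
var sw = Stopwatch.StartNew();
long previous = sw.ElapsedTicks;
for (int i = 0; i < 20; i++)
{
    Thread.Sleep(10);                                   // request 10 ms
    long now = sw.ElapsedTicks;
    double elapsedMs = (now - previous) * 1000.0 / Stopwatch.Frequency;
    Console.WriteLine("iteration {0}: {1:F2} ms", i, elapsedMs);
    previous = now;
}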

If you want to measure the maximum speed/throughput the server can handle at 100% CPU load, you could measure "Read (registered)" with the maximum number of tags in a high-speed loop. The tags should be "registered" at the server once, before being used in the measurement, because that is what a (good) client implementation will do when aiming for performance, throughput and reaction/response time.
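A rough sketch of such a measurement loop is shown below; the RegisterNodes and Read calls correspond to the standard OPC UA services, but please check the exact method signatures and overloads in your SDK version:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using UnifiedAutomation.UaBase;
using UnifiedAutomation.UaClient;

// Sketch only: "session" is an already connected Session, "nodeIds" is the list
// of variables to read.
List<NodeId> registered = session.RegisterNodes(nodeIds);   // register once, up front

var nodesToRead = new List<ReadValueId>();
foreach (NodeId id in registered)
{
    nodesToRead.Add(new ReadValueId { NodeId = id, AttributeId = Attributes.Value });
}

int iterations = 1000;
var sw = Stopwatch.StartNew();
for (int i = 0; i < iterations; i++)
{
    session.Read(nodesToRead);                               // synchronous high-speed loop
}
sw.Stop();

double readsPerSecond = (double)iterations * nodesToRead.Count / sw.Elapsed.TotalSeconds;
Console.WriteLine("{0:F0} attribute reads per second", readsPerSecond);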

Tip: if you want to avoid measuring implementation glitches in your own measurement client, you can simply use UaExpert (PerformanceView) for the measurement; that will give you a reference performance figure.
Best regards
Unified Automation Support Team
