increase the granularity of Timestamps in data log

General topics regarding OPC Foundation and communication technology in general.

Moderator: Support Team

aak
Jr. Member
Posts: 4
Joined: 16 Apr 2025, 13:48

increase the granularity of Timestamps in data log

Post by aak »

Hi
I want to measure some latencies, so I was wondering: is there a way to increase the granularity of the timestamps, i.e., to get microsecond or nanosecond resolution instead of millisecond in the UaExpert data logs?

Thanks in advance

Support Team
Hero Member
Posts: 3277
Joined: 18 Mar 2011, 15:09

Re: increase the granularity of Timestamps in data log

Post by Support Team »

Hi,

you cannot measure latency with UaExpert from the log or trace file, because that would include the time spent writing to disk. In fact, you should measure WITHOUT any trace or log files being activated; logging and tracing can cost up to 30% of the overall performance.

You can measure the (round trip) performance with the Performance View plugin in UaExpert, which you can find under "Documents" in UaExpert.

Alternatively, you can measure the latency to some degree with "Wireshark", which shows high-resolution timestamps at the TCP message level.
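
If you need finer resolution than the millisecond timestamps in the UaExpert logs, you can also time the Read round trip yourself from a small client script using a high-resolution clock. The following is only a rough sketch, assuming the third-party python-opcua package and placeholder endpoint/node ids, not our SDK:

Code: Select all

import time
from opcua import Client  # third-party package: pip install opcua

ENDPOINT = "opc.tcp://localhost:4840"  # placeholder endpoint
NODE_ID = "ns=2;i=2"                   # placeholder node id

client = Client(ENDPOINT)
client.connect()
try:
    node = client.get_node(NODE_ID)
    samples_us = []
    for _ in range(100):
        t0 = time.perf_counter_ns()    # nanosecond-resolution monotonic clock
        node.get_value()               # one OPC UA Read service round trip
        t1 = time.perf_counter_ns()
        samples_us.append((t1 - t0) / 1000.0)
    samples_us.sort()
    print("min %.1f us, median %.1f us, max %.1f us"
          % (samples_us[0], samples_us[len(samples_us) // 2], samples_us[-1]))
finally:
    client.disconnect()

Note that this measures the client-observed round trip (including the client stack), not the one-way network latency, and it should also be run without logging enabled.
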
Best regards
Unified Automation Support Team

aak
Jr. Member
Posts: 4
Joined: 16 Apr 2025, 13:48

Re: increase the granularity of Timestamps in data log

Post by aak »

Thank you so much for the reply. Can you please tell me how the Performance View plugin measures the performance of the system? Which network performance metrics does it consider, and how is one cycle measured? For example, if I have periodic data generated every 4 seconds, how will the Performance View handle it on the receiver's side?
Thanks in advance

Support Team
Hero Member
Posts: 3277
Joined: 18 Mar 2011, 15:09

Re: increase the granularity of Timestamps in data log

Post by Support Team »

Hi,

if data is generated every 4 seconds at the source, and the server is asked to monitor the data with a 500ms sampling rate, then it depends on the implementation of the server (the so-called sampling engine) when the change of data is detected, however at the latest within 515ms after the change (depending on the accuracy of the timer implementation used by the sampling engine). The typical average change detection will be faster, and as long as you sample the data faster than it changes, you will not lose any change of data (Nyquist-Shannon and Whittaker).

Depending on the implementation of the data source and the server's sampling engine, it may even run without any sampling and instead fire the data directly on change, e.g. an event-driven source and/or pushing data into a queue. However, this is an implementation detail inside the server; the OPC UA specification essentially just says that data is not sent faster than the Publish Interval.
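
To see when the changes of your 4-second data actually arrive at the client, you could also subscribe from a small script and compare the SourceTimestamp of each DataChange notification with the local receive time. Again only a rough sketch, assuming the third-party python-opcua package, a placeholder endpoint/node id, and server and client clocks that are reasonably synchronized:

Code: Select all

import time
import datetime
from opcua import Client  # third-party package: pip install opcua

class Handler:
    # called by the client library for every DataChange notification
    def datachange_notification(self, node, val, data):
        received = datetime.datetime.utcnow()
        source_ts = data.monitored_item.Value.SourceTimestamp
        if source_ts is not None:
            delay_ms = (received - source_ts).total_seconds() * 1000.0
            print("value %s, source-to-receive delay ~%.3f ms" % (val, delay_ms))

client = Client("opc.tcp://localhost:4840")  # placeholder endpoint
client.connect()
try:
    node = client.get_node("ns=2;i=2")       # placeholder node id
    sub = client.create_subscription(500, Handler())  # 500 ms publishing interval
    handle = sub.subscribe_data_change(node)          # sampling settings are the
                                                      # package defaults here
    time.sleep(20)   # let a few of the 4-second changes arrive
    sub.unsubscribe(handle)
    sub.delete()
finally:
    client.disconnect()

The delay you see this way includes the sampling delay, the publish interval and the clock offset between server and client, so treat it as an estimate, not a precise latency measurement.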

Note: when measuring the "accuracy of the Publish Interval", you will probably not get the maximum throughput (performance) of the server.
Best regards
Unified Automation Support Team
