There are several motivations for creating a distributed visualization of technical processes.
Vision supports a distributed model, which allows multiple visualization stations to monitor the same process controller. All participating stations run a copy of the same executable and are distinguished only by local configuration information. Therefore, although the initial presentation of process data may differ between stations, each station is able to display the entire range of available instruments. Moreover, multiple stations may simultaneously display control instruments and passive instruments linked to the same process value.
The distributed version of Vision uses the same basic concept as the single-station version: there exists only one instance of the post in the entire system at a time. The way instruments are created and registered at the post, and the way they communicate with the post, did not have to be altered. The major extension needed is a transparent extension of event handling that allows events concerning process data to be addressed and sent across the network. This has been done by supplementing the event address of instruments with a network address. When events are dispatched from a system's event queue, the network address is compared with the address of the system itself; if it is not identical, the event is serialized and sent across the network.
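The dispatch step just described can be sketched as follows. This is a minimal illustration, not Vision's actual implementation; the names `EventAddress`, `dispatch`, and the use of `pickle` for serialization are our assumptions.

```python
import pickle
from dataclasses import dataclass

@dataclass(frozen=True)
class EventAddress:
    network_address: str   # identifies the station (the added network address)
    instrument_id: int     # stands in for the memory location of the instrument

LOCAL_ADDRESS = "station-1"   # this station's own network address (assumed name)

def dispatch(event_queue, deliver_locally, send_over_network):
    """Drain the event queue, routing each event by its network address."""
    while event_queue:
        dest, payload = event_queue.pop(0)
        if dest.network_address == LOCAL_ADDRESS:
            # Destination is on this station: deliver as in the single-station version.
            deliver_locally(dest, payload)
        else:
            # Destination is remote: serialize the event and ship it across the network.
            send_over_network(dest.network_address, pickle.dumps((dest, payload)))
```

A local event is handed to the local delivery routine unchanged, while an event addressed to another station is serialized before transmission, keeping the extension transparent to the instruments themselves.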
The logical network connections are arranged as follows:
Figure 5: logical connections
However, because of the inherent asynchronous parallelism of the distributed model, an invariant of the single-station version concerning the existence of event destinations no longer holds. Since event handling across machine boundaries has been limited to events concerning process data, and since the post is either sender or receiver of every event sent across the network, it is sufficient to consider the protocol of event exchanges implemented by the post. The post assumes that all instruments registered with it exist, and thus that the event address formed by the network address and memory location of a registered instrument is valid. Based on that assumption, the post generates update events for all registered instruments when necessary. In parallel, an instrument may be closed, but the event notifying the post of this may be processed only after an update event for the no-longer-existing instrument has already been sent.
Figure 6: synchronization problem
To solve this problem, at each station the mechanism that receives events from the network maintains a private list of locally valid instruments and processes only events with valid destinations. On the other end of the connection, the post is an object that is never deleted and thus always exists. Considerations concerning the total failure of a station, and especially of the station running the post, are presented later. System distribution introduces no other synchronization problems that might compromise system integrity and stability.
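The receiver-side filter described above can be sketched as follows. The class and method names are illustrative assumptions; the point is that stale update events for already-closed instruments are silently dropped rather than delivered to a non-existing destination.

```python
class NetworkReceiver:
    """Per-station receiver that validates event destinations (a sketch)."""

    def __init__(self):
        self._valid = set()   # instrument ids currently alive on this station

    def register(self, instrument_id):
        self._valid.add(instrument_id)

    def close(self, instrument_id):
        # Called when the instrument is closed locally. The event notifying
        # the post may still be in flight, so stale updates can arrive later.
        self._valid.discard(instrument_id)

    def on_event(self, instrument_id, payload, deliver):
        """Deliver the event only if its destination still exists."""
        if instrument_id in self._valid:
            deliver(instrument_id, payload)
            return True
        return False   # stale update for a closed instrument: dropped
```

Because the post itself is never deleted, no such check is needed in the opposite direction.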
To ensure that no data is lost in transit, we use an error-tolerant, connection-oriented network protocol. When many visualization stations are connected, network traffic can increase to the limit set by the protocol and by the hardware. For this reason, distributed visualization can never fulfill hard real-time requirements, but as the visualization is fully separated from controlling, there is no need for it. Vision's performance has been tested on a simple Ethernet LAN using the TCP/IP protocol. Because of the intelligent buffering mechanism employed by Vision, which is described later, network congestion does not cause immediate system failure. Even then, the worst case measured for an update was below one second. Because all critical automated responses are part of the controller software, and in relation to the reaction time of a human operator, this is tolerable.
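One practical consequence of using TCP is that it delivers a reliable byte stream without message boundaries, so serialized events need framing on the wire. The length-prefix scheme below is our assumption for illustration; the text does not specify Vision's wire format.

```python
import struct

def frame(event_bytes):
    """Prefix a serialized event with its 4-byte big-endian length."""
    return struct.pack(">I", len(event_bytes)) + event_bytes

def unframe(stream):
    """Split a received byte stream back into complete events.

    Returns the list of complete events and any trailing partial bytes,
    which the caller keeps until more data arrives on the connection.
    """
    events, offset = [], 0
    while offset + 4 <= len(stream):
        (length,) = struct.unpack_from(">I", stream, offset)
        if offset + 4 + length > len(stream):
            break   # partial event: wait for more bytes
        events.append(stream[offset + 4 : offset + 4 + length])
        offset += 4 + length
    return events, stream[offset:]
```

Since TCP already guarantees ordered, loss-free delivery, this framing is all that is needed to recover event boundaries at the receiving station.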