Xiao Liu from Swinburne University of Technology presented his paper on A Probabilistic Strategy for Setting Temporal Constraints in Scientific Workflows, co-authored by Jinjun Chen and Yun Yang. The work is motivated by the problem of setting only a few overall, user-specified temporal constraints on a process without considering system performance or local fine-grained control: this can result in frequent temporal violations and huge exception-handling costs.
They established two basic requirements: temporal constraints must allow for both coarse-grained and fine-grained control, and they must consider both user requirements and system performance. They made some probabilistic assumptions, such as normally distributed activity durations. They derived a weighted joint normal distribution that estimates the overall completion time of the entire workflow based on the time required for each activity, the probability of iterations and the probability of different choice paths: given the normality assumption stated earlier, this allows maximum and minimum durations to be calculated from the mean by assuming that almost all process instance durations will fall within ±3σ of it. After aggregating to set the coarse-grained temporal constraints, they can propagate those to set the fine-grained temporal constraints on each activity. There are modifications to the models if, for example, it's known that activity durations are not normally distributed.
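To make the aggregation and propagation steps a bit more concrete, here's a minimal sketch (not the authors' implementation) under the assumptions above: each activity has a known mean and variance, and a weight representing its expected number of executions (choice probability times iteration count). The overall mean and variance are taken as the weighted sums, the coarse-grained bounds come from the mean ±3σ, and the fine-grained deadline for each activity is set by splitting the overall deadline in proportion to that activity's expected contribution. All names and numbers here are hypothetical.

```python
from math import sqrt

def aggregate(activities):
    """Rough weighted-normal estimate of overall workflow duration.

    activities: list of (mean, variance, weight), where weight is the
    expected number of executions (choice probability x iterations).
    Assumes independent, normally distributed activity durations.
    """
    mu = sum(w * m for m, v, w in activities)
    var = sum(w * v for m, v, w in activities)  # simplified weighting
    sigma = sqrt(var)
    # Coarse-grained bounds: ~99.7% of instances within +/- 3 sigma
    return mu, sigma, mu - 3 * sigma, mu + 3 * sigma

def propagate(activities, overall_deadline):
    """Fine-grained constraints: split the overall deadline across
    activities in proportion to each one's expected duration share."""
    mu = sum(w * m for m, v, w in activities)
    return [overall_deadline * (w * m) / mu for m, v, w in activities]

# Hypothetical three-activity workflow: (mean, variance, weight)
acts = [(10.0, 4.0, 1.0), (20.0, 9.0, 0.7), (15.0, 6.25, 1.2)]
mu, sigma, lo, hi = aggregate(acts)
print(f"mean={mu:.1f}, sigma={sigma:.2f}, bounds=[{lo:.1f}, {hi:.1f}]")
print("fine-grained deadlines:", [round(d, 1) for d in propagate(acts, hi)])
```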
This becomes relevant in practice when you consider setting service level agreements (SLAs) for processes: if you don't have a good idea of how long a process is going to take and its variability around the mean, then you can't set a reasonable SLA for that process. In cases where a violation of an SLA impacts a company financially, either immediately through compliance penalties or in the longer term through loss of revenue, this is particularly important.
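As a hedged illustration of that point, under the same normality assumption the upper bound from the sketch above can serve as an SLA candidate, and the probability of breaching any proposed SLA can be read off the normal tail (again, hypothetical figures):

```python
from math import erf, sqrt

def breach_probability(sla, mu, sigma):
    """P(duration > sla) for a normally distributed overall duration."""
    z = (sla - mu) / sigma
    return 1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Using the figures from the sketch above: mean 42, sigma ~4.22
print(breach_probability(54.66, 42.0, 4.22))  # ~0.001, the 3-sigma bound
print(breach_probability(46.0, 42.0, 4.22))   # a tighter SLA: ~17% breach rate
```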