Markov-Based Probabilistic State Estimation for Deadlock Prediction in Distributed Systems

Deadlocks are a persistent problem in distributed queueing networks due to dynamic workloads, shared resource contention, and nondeterministic execution. Conventional deadlock detection techniques are predominantly reactive and rely on deterministic rules, which limits their effectiveness in highly dynamic environments. This study proposes a Markov-based proactive deadlock detection system that estimates deadlock risk through probabilistic state modeling. The system is modeled as a stochastic process under the Markov assumption, in which future behavior depends only on the current state. System metrics such as resource utilization, traffic intensity, and queue contention are mapped to probabilistic indicators that approximate state transition likelihoods, and the probability of transitioning from a safe state to a deadlock-prone state is evaluated continuously. These probabilistic outputs are combined into a unified deadlock risk score and integrated with machine learning classifiers to improve detection accuracy. Simulation experiments under varying contention and workload conditions show that the proposed approach achieves an average detection accuracy of 91.8%, precision of 89.4%, and recall of 93.1%, while identifying deadlock-prone states 22–30% earlier than traditional reactive methods. The results demonstrate that Markov-based probabilistic state modeling effectively captures uncertainty and dynamic system behavior, enabling proactive deadlock detection and improving reliability in distributed queueing networks.
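
To make the core idea concrete, the sketch below illustrates the kind of Markov-based risk estimation the abstract describes: fitting a first-order transition matrix from an observed sequence of discretized system states, then computing the probability of entering a deadlock-prone state within a short horizon. The state labels, smoothing parameter, and horizon are illustrative assumptions, not the paper's actual state space or implementation.

```python
import numpy as np

# Hypothetical discretized system states; the paper's real state space
# would be derived from resource utilization, traffic intensity, and
# queue contention metrics.
STATES = ["safe", "contended", "deadlock_prone"]
IDX = {s: i for i, s in enumerate(STATES)}

def estimate_transition_matrix(state_sequence, alpha=1.0):
    """Estimate a first-order Markov transition matrix from an observed
    sequence of state labels. Laplace smoothing (alpha) gives unseen
    transitions a small nonzero probability."""
    n = len(STATES)
    counts = np.full((n, n), alpha)
    for cur, nxt in zip(state_sequence, state_sequence[1:]):
        counts[IDX[cur], IDX[nxt]] += 1.0
    return counts / counts.sum(axis=1, keepdims=True)

def deadlock_risk(P, current_state, horizon=5):
    """Probability of entering the deadlock-prone state within `horizon`
    steps. The risky state is made absorbing in a copy of P, so after
    `horizon` steps the mass in that state equals the hitting probability."""
    Q = P.copy()
    d = IDX["deadlock_prone"]
    Q[d, :] = 0.0
    Q[d, d] = 1.0  # absorbing: once deadlock-prone, mass stays there
    v = np.zeros(len(STATES))
    v[IDX[current_state]] = 1.0
    for _ in range(horizon):
        v = v @ Q
    return v[d]

# Usage: fit on a logged state trace, then score the live state.
trace = ["safe", "safe", "contended", "safe", "contended",
         "deadlock_prone", "contended", "safe", "contended"]
P = estimate_transition_matrix(trace)
print(f"5-step deadlock risk from 'contended': {deadlock_risk(P, 'contended'):.3f}")
```

In the full system described by the abstract, a score like this would be one probabilistic indicator among several, combined into the unified deadlock risk score and fed to machine learning classifiers rather than thresholded directly.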
