Adaptive Action Pruning: Scaling Index Selection for Unseen Workloads
Table of Links

- Related Works
- Methodology
  - 4.1 Formulation of the DRL Problem
  - 4.2 Instance-Aware Deep Reinforcement Learning for Efficient Index Selection
- Experiments
6.4 Key Insights
Our extensive experiments show that IA2 represents a significant advance in index selection, outperforming existing methods in three key areas:
- **Rapid Training Efficiency:** IA2 trains substantially faster than competing methods by leveraging a what-if cost model and pre-trained models for quick adaptation. This drastically reduces training time relative to competitors, making IA2 well suited to environments where tuning speed is critical.
- **Advanced Workload Modeling:** Unlike static or exhaustive methods, IA2 employs dynamic workload modeling, allowing it to adapt seamlessly to changing queries and database structures. This flexibility enables effective index selection across diverse scenarios, including previously unseen workloads.
- **Effective Action Space Exploration:** IA2 introduces a novel approach to pruning and navigating the action space, identifying meaningful actions early in training. This strategy contrasts with the more resource-intensive techniques of SWIRL [6] and the rigid rules of Lan et al. [7], offering a balanced path to optimized index configurations without exhaustive search or oversimplification.
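To make the pruning idea above concrete, here is a minimal, hypothetical sketch (not the authors' implementation): candidate indexes whose what-if cost estimate shows negligible benefit over the no-index baseline are removed from the action space before RL training begins. The `estimate_cost` function and the `min_benefit` threshold are illustrative stand-ins; a real system would query the DBMS optimizer's hypothetical-index interface instead.

```python
def estimate_cost(workload, config):
    # Placeholder what-if model: each index on a column a query references
    # halves that query's cost. A real what-if model would ask the
    # optimizer to cost queries against hypothetical indexes.
    total = 0.0
    for query_cols, base_cost in workload:
        speedup = 1.0
        for idx in config:
            if idx in query_cols:
                speedup *= 0.5
        total += base_cost * speedup
    return total

def prune_actions(workload, candidates, min_benefit=0.05):
    """Keep only candidate indexes whose estimated relative benefit over
    the no-index baseline exceeds `min_benefit`."""
    baseline = estimate_cost(workload, frozenset())
    kept = []
    for idx in candidates:
        improved = estimate_cost(workload, frozenset([idx]))
        if (baseline - improved) / baseline >= min_benefit:
            kept.append(idx)
    return kept

# Toy workload: (columns referenced by the query, baseline cost).
workload = [({"a", "b"}, 100.0), ({"c"}, 40.0)]
candidates = ["a", "b", "c", "d"]  # "d" is never referenced by any query
print(prune_actions(workload, candidates))  # → ['a', 'b', 'c']
```

The RL agent then explores only the surviving actions, which shrinks the search space early without the exhaustive enumeration or hand-written rules the cited alternatives rely on.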
:::info
Authors:
(1) Taiyi Wang, University of Cambridge, Cambridge, United Kingdom (Taiyi.Wang@cl.cam.ac.uk);
(2) Eiko Yoneki, University of Cambridge, Cambridge, United Kingdom (eiko.yoneki@cl.cam.ac.uk).
:::
:::info
This paper is available on arxiv under CC BY-NC-SA 4.0 Deed (Attribution-NonCommercial-ShareAlike 4.0 International) license.
:::