Using RL with a Transformer that outputs structured actions (index + complex object) — architecture advice?
Hi everyone,
I’m working on a research project where my advisor suggested combining reinforcement learning with a transformer model, and I’m trying to figure out what the best architecture might look like. I unfortunately can’t share too many details about the actual project (sorry!), but I’ll try to explain the technical structure as clearly as possible using simplified examples.
Problem setup (simplified example)
Imagine we have a sequence where each element is represented by a super-token containing many attributes. Something like:
token = {
    feature_1,
    feature_2,
    feature_3,
    …
    feature_k
}
So the transformer input is something like:
[token_1, token_2, token_3, …, token_N]
Each token is basically a bundle of multiple parameters (not just a simple discrete token).
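To make the structure concrete, each super-token could be sketched as a small record (field names here are hypothetical, just mirroring the shape above):

```python
from dataclasses import dataclass

# Hypothetical fields — stand-ins for the real attributes.
@dataclass
class SuperToken:
    feature_1: float   # e.g. a continuous attribute
    feature_2: int     # e.g. a categorical id
    feature_3: str     # e.g. a symbolic label

# The transformer input is then a sequence of these bundles:
state = [SuperToken(0.5, 3, "A"), SuperToken(1.2, 7, "B")]
```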
The model then needs to decide an action that is structured, for example:
action = (index_to_modify, new_object)
Example dummy scenario:
state: [A, B, C, D, E]
action:
index_to_modify = 2
new_object = X
The reward is determined by a set of rules that evaluate whether the modification improves the state.
Importantly:
• There is no single correct answer
• Multiple outputs may be valid
• I also want the agent to sometimes explore outside the rule set
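A toy sketch of the transition and reward, just to pin down the interface (the rule itself is made up — any rule set scoring the modified state would slot in the same way):

```python
def apply_action(state, action):
    # action = (index_to_modify, new_object)
    index_to_modify, new_object = action
    next_state = list(state)
    next_state[index_to_modify] = new_object
    return next_state

def rule_based_reward(next_state):
    # Placeholder rule (made up): reward states with no duplicate elements.
    # Multiple actions can satisfy it, so there is no single correct answer.
    return 1.0 if len(set(next_state)) == len(next_state) else -1.0

next_state = apply_action(["A", "B", "C", "D", "E"], (2, "X"))
# next_state == ["A", "B", "X", "D", "E"]; rule_based_reward(next_state) == 1.0
```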
My questions
- Transformer output structure
Is it reasonable to design the transformer with multiple heads, for example:
• head 1 → probability distribution over indices
• head 2 → distribution over possible object replacements, conditioned on the sampled index
So effectively the policy becomes:
π(a | s) = π(index | s) * π(object | s, index)
Is this a common design pattern for RL with transformers?
Or would it be better to treat each (index, object) pair as a single action in a large discrete action space?
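What I currently have in mind would look roughly like this in PyTorch (a sketch only — it assumes the super-tokens are already embedded into d_model vectors, and all sizes are placeholders):

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

class FactorizedPolicy(nn.Module):
    """Sketch of pi(a|s) = pi(index|s) * pi(object|s, index)."""
    def __init__(self, d_model=64, n_objects=10, n_heads=4, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.index_head = nn.Linear(d_model, 1)           # one logit per position
        self.object_head = nn.Linear(d_model, n_objects)  # object logits at the chosen position

    def forward(self, x):
        # x: (batch, seq_len, d_model) embedded super-tokens
        h = self.encoder(x)                                   # (B, N, d)
        index_logits = self.index_head(h).squeeze(-1)         # (B, N)
        index_dist = Categorical(logits=index_logits)
        idx = index_dist.sample()                             # (B,)
        # Condition the object head on the hidden state of the chosen index
        chosen = h[torch.arange(h.size(0)), idx]              # (B, d)
        object_dist = Categorical(logits=self.object_head(chosen))
        obj = object_dist.sample()                            # (B,)
        # The joint log-prob factorizes as a sum
        log_prob = index_dist.log_prob(idx) + object_dist.log_prob(obj)
        return idx, obj, log_prob
```

One thing I like about the factorized version is that it avoids the N × |objects| action space you'd get from enumerating every (index, object) pair.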
⸻
- RL algorithm choice
For a setup like this, would something like PPO / actor-critic be the most reasonable starting point?
Or are there RL approaches that are particularly well suited for structured / factorized action spaces?
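My understanding is that PPO works with a factorized action unchanged, since the joint log-prob is just the sum of the two factor log-probs — the clipped objective then only needs that single joint log-prob (sketch):

```python
import torch

def ppo_clip_loss(new_logp, old_logp, advantage, clip_eps=0.2):
    # new_logp / old_logp are the *joint* action log-probs:
    #   log pi(index|s) + log pi(object|s, index)
    # so the factorized action still yields a single importance ratio.
    ratio = torch.exp(new_logp - old_logp)
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantage
    return -torch.min(unclipped, clipped).mean()
```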
⸻
- Exploration outside rule-based rewards
The reward function is mostly based on domain rules, but I don’t want the agent to only learn those rules rigidly.
I want it to:
• get reward when following good rule-based decisions
• occasionally explore other possibilities that might still work
What’s the best way to do this?
I’m not sure what works best when the policy is produced by a transformer.
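One simple idea I'm considering is an entropy bonus on both heads (sketch; beta is a coefficient I'd have to tune or anneal):

```python
import torch
from torch.distributions import Categorical

def entropy_bonus(index_logits, object_logits, beta=0.01):
    # Reward high entropy in both factor distributions so the policy
    # keeps trying actions beyond what the rules already favour.
    h = Categorical(logits=index_logits).entropy() \
        + Categorical(logits=object_logits).entropy()
    return beta * h.mean()
```

This would be added to the PPO objective; annealing beta lets the agent explore broadly early on and sharpen later. Sampling with a temperature on the logits at rollout time would be another option.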
⸻
- Super-token inputs
Because each input token contains many parameters, I’m currently thinking of embedding them separately and summing/concatenating them before feeding them into the transformer.
Is this the usual approach, or are there better ways to handle multi-field tokens in transformers?
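Concretely, what I mean is something like this (sketch, assuming all fields are categorical; a continuous field would instead get a small linear projection):

```python
import torch
import torch.nn as nn

class SuperTokenEmbedder(nn.Module):
    # One embedding table per field, summed into a single d_model vector
    # per super-token. field_sizes is a hypothetical list of vocab sizes.
    def __init__(self, field_sizes, d_model=64):
        super().__init__()
        self.embs = nn.ModuleList([nn.Embedding(n, d_model) for n in field_sizes])

    def forward(self, fields):
        # fields: (batch, seq_len, n_fields) integer ids, one column per field
        return sum(emb(fields[..., i]) for i, emb in enumerate(self.embs))
```

Summing keeps d_model fixed regardless of how many fields a token has; concatenating the per-field embeddings and projecting back down with a linear layer is the alternative I'm weighing against it.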
submitted by /u/Unique_Simple_1383