[D] Advice on journal for work between ML, data infrastructures, and robotics
I’m looking for guidance on where to submit a paper that sits between disciplines: ML, robotics, and research data infrastructures. I’d really appreciate your perspective.
Context: We recently received an editorial reject from an IEEE journal after a long review process. The decision was frustrating mainly because the reviewer feedback was largely positive, and from our side it felt like one more revision round would have been sufficient. Before blindly resubmitting elsewhere, I’m trying to get a sense of where this kind of work might fit.
tl;dr: We built dynamic, semantic “data-to-knowledge” pipelines across organisational boundaries and demonstrated their benefits by training a more robust base model for inverse kinematics in robot control.
Concretely:
- We deployed identical robotic systems (Franka Emika robots) across multiple research institutes and locations.
- Their motion data was independently collected, then centrally stored and published via a research data infrastructure, making these datasets FAIR and discoverable.
- A separate, independent process semantically queries for suitable datasets, trains an ML-based foundation model for robot trajectories on demand, and publishes the trained model openly in turn (a simplified sketch follows this list).
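To make the pipeline concrete, here is a heavily simplified sketch of that query-train-publish step. Everything in it is a placeholder: the SPARQL endpoint URL, the DCAT keyword, the .npz file layout, and the toy network stand in for our actual connectors and training code.

```python
# Minimal sketch of the query -> train -> publish step, assuming a
# DCAT-annotated SPARQL endpoint. Endpoint, keyword, file layout, and
# the tiny model are placeholders, not our actual pipeline code.
import io

import numpy as np
import requests
import torch
from torch import nn

SPARQL_ENDPOINT = "https://rdi.example.org/sparql"  # placeholder endpoint

QUERY = """
PREFIX dcat: <http://www.w3.org/ns/dcat#>
SELECT ?url WHERE {
  ?ds a dcat:Dataset ;
      dcat:keyword "franka-joint-trajectories" ;
      dcat:distribution/dcat:downloadURL ?url .
}
"""

def discover() -> list[str]:
    """Semantic discovery: ask the infrastructure for matching FAIR datasets."""
    r = requests.get(
        SPARQL_ENDPOINT,
        params={"query": QUERY},
        headers={"Accept": "application/sparql-results+json"},
        timeout=30,
    )
    r.raise_for_status()
    return [b["url"]["value"] for b in r.json()["results"]["bindings"]]

def load(urls: list[str]) -> tuple[torch.Tensor, torch.Tensor]:
    """Download trajectories; assumed to be .npz files with 'pose' (N, 7)
    end-effector poses and 'joints' (N, 7) joint configurations."""
    poses, joints = [], []
    for url in urls:
        with np.load(io.BytesIO(requests.get(url, timeout=60).content)) as f:
            poses.append(f["pose"])
            joints.append(f["joints"])
    return (torch.tensor(np.concatenate(poses), dtype=torch.float32),
            torch.tensor(np.concatenate(joints), dtype=torch.float32))

def train(X: torch.Tensor, y: torch.Tensor, epochs: int = 100) -> nn.Module:
    """Stand-in for the actual base-model training (pose -> joints)."""
    model = nn.Sequential(nn.Linear(7, 256), nn.ReLU(), nn.Linear(256, 7))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.mse_loss(model(X), y).backward()
        opt.step()
    return model

if __name__ == "__main__":
    model = train(*load(discover()))
    torch.save(model.state_dict(), "ik_base_model.pt")  # then published openly
```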
We think the results show a few important things:
- Organizational feasibility: This kind of loosely coupled, cross-institutional pipeline actually works in practice.
- Clear technical value: through sharing, larger datasets become available much faster (in academic research, this is often proposed but, at least in my experience, rarely done).
- Despite identical robot models, small systematic differences between the setups improve the robustness of the final base model (our benchmarks contrast this more heterogeneously trained base model against single-site baselines; see the sketch after this list).
- Thus the resulting model transfers better to new contexts than models trained on single-site data.
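For the transfer claim, the benchmark idea is a leave-one-site-out protocol, sketched below with placeholder helpers (`train_fn` and `eval_fn` are illustrative, not our evaluation code):

```python
# Illustrative leave-one-site-out protocol; names are placeholders,
# not our benchmark code.
import numpy as np

def leave_one_site_out(site_data, train_fn, eval_fn):
    """site_data maps a site name to its (X, y) arrays. For each site,
    train on the pooled data of all other sites and score on the
    held-out site; lower held-out error means better transfer."""
    scores = {}
    for held_out, (X_test, y_test) in site_data.items():
        X_pool = np.concatenate([X for s, (X, _) in site_data.items() if s != held_out])
        y_pool = np.concatenate([y for s, (_, y) in site_data.items() if s != held_out])
        scores[held_out] = eval_fn(train_fn(X_pool, y_pool), X_test, y_test)
    return scores
```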
Why this feels “between the disciplines”: We can absolutely debate:
- which technologies could have been integrated, whether smarter semantic annotations, tools, and frameworks would have been better, etc. The modelling/semantic-web community will probably judge this work as too hands-on.
- whether the abstraction level is “high” or “low” enough, or whether more, and more diverse, machines would have needed to be integrated into this demonstrator. People working with other machines may well dislike our use case (which was hard enough to find in a university context),
- or whether it’s more systems, ML, or infrastructure work.
Our approach is intentionally pragmatic:
- we loosely couple existing heterogeneous systems,
- avoid vendor- or technology lock-in,
- and focus on actually running code instead of purely conceptual integration papers.
Everything is open: connectors, training pipeline, datasets, and the source code.
In that sense, the work goes beyond many conceptual papers that propose integration but don’t implement it end-to-end. On the other hand, it’s not a new algorithm, not a new tool fulfilling a narrowly defined goal, not a new infrastructure, and not a new base model that works for all robots.
Where would you see or submit a paper like this? Most communities I know are either/or and have trouble accepting work that combines elements from different disciplinary perspectives. Which communities “tolerate” integration, openness, and empirical feasibility over algorithmic or modelling novelty? Thanks a lot!