MLOS now supports universal ONNX conversion and multi-type tensor inference across all major ML repositories
Multi-framework ONNX conversion with repository-specific strategies
Enhanced ONNX plugin with comprehensive tensor type support
Complete E2E workflow from any repository to kernel-level inference
Axon now intelligently routes models to the best converter for their source repository:
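As a rough sketch of what repository-aware routing could look like: the repository names and converter identifiers below are illustrative assumptions, not Axon's actual API.

```python
# Hypothetical sketch of repository-specific converter routing.
# Repository names and converter strategy names are illustrative only.
CONVERTERS = {
    "huggingface": "optimum_exporter",    # transformers-style export path
    "torchvision": "torch_onnx_export",   # torch.onnx.export-style tracing
    "timm": "torch_onnx_export",
    "sklearn": "skl2onnx_converter",
}

def route_model(repo: str) -> str:
    """Pick the ONNX conversion strategy for a model's source repository."""
    try:
        return CONVERTERS[repo]
    except KeyError:
        raise ValueError(f"No ONNX conversion strategy for repository: {repo}")
```

A lookup table like this keeps the routing policy declarative, so supporting a new repository is a one-line addition rather than new branching logic.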
MLOS Core plugin now parses JSON inputs with full type support:
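A minimal sketch of what typed JSON tensor parsing might look like on the way to the plugin; the field names (`dtype`, `shape`, `data`) and supported types are assumptions for illustration, not the plugin's actual schema.

```python
import json
import struct

# Hypothetical JSON tensor schema: {"dtype": ..., "shape": [...], "data": [...]}
# Map each supported dtype to its struct pack format (little-endian).
PACK_FORMATS = {"float32": "f", "int64": "q", "int32": "i", "uint8": "B"}

def parse_tensor(payload: str) -> tuple[list[int], bytes]:
    """Return (shape, raw little-endian bytes) for the tensor in `payload`."""
    spec = json.loads(payload)
    fmt = PACK_FORMATS[spec["dtype"]]   # rejects unsupported dtypes early
    data = spec["data"]
    return spec["shape"], struct.pack(f"<{len(data)}{fmt}", *data)
```

For example, `parse_tensor('{"dtype": "int32", "shape": [2, 2], "data": [1, 2, 3, 4]}')` yields the shape `[2, 2]` and a 16-byte buffer ready to hand off to the inference API.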
All enhancements build on the existing generic `void*` API, which is exactly what that design was meant to enable. No breaking changes, just more capabilities!
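The reason a generic `void*` API absorbs new tensor types without breaking changes is that every typed tensor is just a byte buffer by the time it reaches the plugin. A small ctypes illustration of that idea (the buffer handoff here is hypothetical, not the plugin's real entry point):

```python
import ctypes

# Four float32 values, as a typed C array.
raw = (ctypes.c_float * 4)(1.0, 2.0, 3.0, 4.0)

# What a void*-based plugin API actually receives: an untyped pointer
# plus a byte count. The element type lives in metadata, not the signature.
buf = ctypes.cast(raw, ctypes.c_void_p)
nbytes = ctypes.sizeof(raw)   # 4 floats * 4 bytes = 16
```

Because the signature never names an element type, adding int64 or uint8 tensors changes only the metadata alongside the buffer, not the API itself.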
Status: ✅ Passing
Status: ⏳ Ready (not tested)
Status: ⏳ Ready (not tested)
Start running models from any repository with kernel-level performance today