SOL represents a leap in AI development – a full-stack AI compiler and optimizer engineered to unleash the full capabilities of your AI hardware.
SOL provides a standardized AI hardware-software interface, removing hardware and software bottlenecks while optimizing and accelerating neural networks. It preserves mathematical accuracy, helps you streamline your AI workloads and enhances AI capabilities for your tasks.
Why SOL?
SOL bridges the gap between advanced hardware and AI frameworks, offering smooth integration with existing AI infrastructure and tools to optimize a wide range of neural networks.
It helps boost performance and reduces memory and compute overhead without requiring hardware or software changes.
How SOL works
SOL integrates smoothly into the AI software and hardware stack, optimizing AI workloads.
Input AI model: Kick-start model development with SOL's one-line integration into Python and your AI frameworks of choice.
SOL then automatically optimizes your AI model in three steps.
Step 1.
High-level transformations. SOL optimizes the AI model using intelligent optimization including:
- Dead code elimination: Removes unused operations for efficiency
- Deduplication: Merges duplicate operations to save memory
- Mathematical transformations: Reconfigures calculations for optimal performance, keeping the mathematical integrity and accuracy of your model intact
- Device-specific transformations: Tailors operations for specific hardware capabilities when required
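To make the first two transformations above concrete, here is a minimal sketch (not SOL's actual implementation) of dead code elimination and deduplication applied to a toy computation graph. The graph format, op names and node names are all illustrative.

```python
# Toy graph: name -> (op, input names); "x" is an external input.
# Hypothetical example, not SOL's internal representation.

def eliminate_dead_code(graph, outputs):
    """Keep only nodes that the graph outputs (transitively) depend on."""
    live = set()
    stack = list(outputs)
    while stack:
        name = stack.pop()
        if name in live or name not in graph:
            continue
        live.add(name)
        stack.extend(graph[name][1])  # enqueue the node's inputs
    return {name: node for name, node in graph.items() if name in live}

def deduplicate(graph):
    """Merge nodes that apply the same op to the same inputs."""
    seen = {}    # (op, inputs) -> canonical node name
    alias = {}   # duplicate node name -> canonical name
    result = {}
    for name, (op, inputs) in graph.items():
        inputs = tuple(alias.get(i, i) for i in inputs)
        key = (op, inputs)
        if key in seen:
            alias[name] = seen[key]   # reuse the earlier, identical node
        else:
            seen[key] = name
            result[name] = (op, inputs)
    return result

graph = {
    "a": ("relu", ("x",)),
    "b": ("relu", ("x",)),     # duplicate of "a" -> merged away
    "c": ("add",  ("a", "b")),
    "d": ("mul",  ("a", "b")), # never reaches the output -> removed
}
g = deduplicate(eliminate_dead_code(graph, outputs=["c"]))
```

After both passes, only two nodes remain: the shared `relu` and the `add` that now reads it twice.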
Step 2.
Greedy auto-tuning.
SOL's heuristic-based auto-tuning swiftly pinpoints the best optimization paths for your AI model, conducting localized tests that enhance each model layer, without the long wait times typical of global tuning methods.
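The contrast with global tuning can be sketched as follows. This is not SOL's tuner; the layers, kernel variants and timings are made up to show why timing each layer in isolation needs far fewer measurements than searching all combinations.

```python
# Mocked benchmark results: (layer, variant) -> measured runtime in ms.
# All names and numbers are hypothetical.
runtime_ms = {
    ("conv1", "im2col"): 4.0, ("conv1", "winograd"): 2.5, ("conv1", "direct"): 3.1,
    ("conv2", "im2col"): 6.2, ("conv2", "winograd"): 7.0, ("conv2", "direct"): 5.8,
    ("fc",    "gemm"):   1.2, ("fc",    "tiled_gemm"): 0.9,
}
variants = {
    "conv1": ["im2col", "winograd", "direct"],
    "conv2": ["im2col", "winograd", "direct"],
    "fc":    ["gemm", "tiled_gemm"],
}

def greedy_tune(variants, bench):
    """Pick each layer's fastest variant from localized tests only."""
    return {layer: min(opts, key=lambda v: bench[(layer, v)])
            for layer, opts in variants.items()}

plan = greedy_tune(variants, runtime_ms)
# Greedy cost: 3 + 3 + 2 = 8 local measurements.
# An exhaustive global search would run 3 * 3 * 2 = 18 full-model configurations.
```

The measurement count grows additively per layer rather than multiplicatively across layers, which is what avoids the long wait times of global tuning.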
Step 3.
Code generation.
SOL generates code optimized for the target hardware.
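As a rough illustration of the idea (not SOL's code generator), a compiler can specialize the same operation for different targets by filling a template with device parameters. The targets table, vector widths and kernel template below are hypothetical.

```python
# Hypothetical target descriptions; not SOL's hardware database.
targets = {
    "wide_vector_dev": {"vector_width": 256},
    "x86_avx2":        {"vector_width": 8},
}

def generate_add_kernel(target):
    """Emit a C-style elementwise kernel specialized for one target."""
    vw = targets[target]["vector_width"]
    return (
        f"void add(const float* a, const float* b, float* out, int n) {{\n"
        f"  // specialized for {target}: unroll/vectorize by {vw}\n"
        f"  #pragma unroll({vw})\n"
        f"  for (int i = 0; i < n; ++i) out[i] = a[i] + b[i];\n"
        f"}}\n"
    )

src = generate_add_kernel("x86_avx2")
```

The same graph thus yields different generated code per device, without any change to the model itself.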
Which AI frameworks is SOL compatible with?
SOL supports major AI frameworks such as PyTorch, TensorFlow and NumPy, and can load models trained in one framework (such as TensorFlow) into a different one (such as PyTorch).
What hardware does SOL support?
SOL works with a variety of hardware platforms, offering native, hybrid and offload AI execution modes.
New hardware can be supported by adding hardware plug-ins for SOL.
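One common way such a plug-in mechanism works (a generic sketch, not the SOL SDK) is a backend registry: each plug-in registers itself under a device name, and the compiler dispatches to it without changes to its core. All class and device names here are hypothetical.

```python
# Generic backend-registry pattern; not NEC's actual plug-in API.
_backends = {}

def register_backend(name):
    """Class decorator: make a backend discoverable by device name."""
    def wrap(cls):
        _backends[name] = cls()
        return cls
    return wrap

@register_backend("cpu")
class CPUBackend:
    def compile(self, model):
        return f"{model} compiled for cpu"

@register_backend("my_accelerator")  # supporting new hardware = adding a plug-in
class AcceleratorBackend:
    def compile(self, model):
        return f"{model} compiled for my_accelerator"

def compile_model(model, device):
    """Dispatch compilation to whichever backend registered this device."""
    return _backends[device].compile(model)
```

Adding a device then means shipping one new plug-in class; existing code paths are untouched.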
How can I get started using SOL?
If you’re an AI engineer, you can start by adding SOL as a package in your Python code. If you’re a hardware producer, you can start by substituting SOL for your current compiler.
SOL AI performance gains
NEC customer voices
SOL AI optimization as a service for 6G networks
NEC is a key partner in the European project DESIRE6G, which will design and develop deep programmability and secure distributed intelligence for real-time, end-to-end 6G networks.
Implementing NEC SOL as a service in the DESIRE6G project has led to a 2.5x reduction in memory usage and up to 50x performance improvement for nonstandard recurrent neural networks, demonstrating its efficiency and scalability.

Supercharge your AI
Easy to implement, SOL helps teams develop and deploy AI products faster – retaining model accuracy while reducing energy needed for AI development.
Excellent flexibility
Run AI in your framework and on your hardware of choice
Approval from experts
Developed by compiler experts and validated by customers throughout the AI technology stack
Preserved precision
Enhanced performance while retaining the mathematical equivalence of AI models
Enhanced performance
Automatic optimizations throughout the AI technology stack, with enhanced model training, inference and deployment
Simplified use
Easy integration with Python, custom integrations via the SOL SDK, and full-service implementation by NEC
Measuring SOL’s breakthrough performance
Benchmarking SOL against other compilers highlights its efficiency and speed. With an auto-tuning mechanism that outperforms other compilers, SOL helps your AI models run faster, consume less memory, and operate with needed precision. Dive into the metrics that matter and witness the performance gains that SOL consistently delivers.