
The Most Efficient Vision Models
Jump into our cutting-edge research on high-performance computer vision.

Object Detection
fai-detr
Fast, accurate model family trained on COCO & Objects365. Ideal for smart cameras, surveillance, and robotics.

Semantic Segmentation
fai-mf
Pixel-precise model family trained on ADE20K. Ideal for smart cities, self-driving, and aerospace.

Instance Segmentation
fai-mf
Instance segmentation model family pretrained on ADE20K. Ideal for inspection, robotics, and logistics.

Semantic Segmentation
bisenetformer
Real-time semantic segmentation model family for high accuracy and efficiency, powered by our research.
Ready to use, also available in our open library
embrace frugality
Our models outperform on every hardware platform, for every task.
Instead of bolting a generic AI model onto your tech infrastructure, we scrupulously test your edge devices or cloud performance requirements to compose the model you deserve.
Breaking vision boundaries
At Focoos AI, our research team is dedicated to keeping Anyma at the cutting edge of accuracy and efficiency. By continuously advancing neural network architecture, we ensure your models are not only powerful but also optimized for real-world applications. Our commitment to pioneering research is demonstrated in our published work and deep technical expertise, allowing Anyma to stay ahead with the most precise solutions across a growing range of computer vision tasks tailored to your unique needs.
Benchmark performance
Segmentation and detection, what are the differences?
Task: Object Detection
Output: Bounding boxes around objects with class labels.
Goal: Localize and classify objects.

Task: Semantic Segmentation
Output: Pixel-wise classification, but all objects of the same class are merged.
Goal: Classify each pixel into a category.

Task: Instance Segmentation
Output: Pixel-wise classification, with differentiation between instances.
Goal: Classify pixels and distinguish objects.
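
To make these differences concrete, here is a minimal sketch (plain NumPy, not tied to any Focoos API) of what the three kinds of outputs typically look like as arrays; all variable names and values are illustrative only.

```python
import numpy as np

H, W = 480, 640  # image size used for this illustration

# Object detection: N boxes (x1, y1, x2, y2) plus one class label per box.
det_boxes = np.array([[34, 50, 210, 300], [250, 80, 400, 310]], dtype=np.float32)
det_labels = np.array([0, 2])  # e.g. 0 = person, 2 = car

# Semantic segmentation: one class id per pixel; two people standing
# side by side end up merged into the same "person" region.
semantic_mask = np.zeros((H, W), dtype=np.int64)  # shape (H, W)

# Instance segmentation: one binary mask per object, so two people
# produce two separate masks even though they share a class.
instance_masks = np.zeros((2, H, W), dtype=bool)  # shape (N, H, W)
instance_labels = np.array([0, 0])  # both instances are "person"
```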
Deploy on
Our technology is designed to generate the best neural network architecture for any device, ensuring optimal performance and efficiency. We’ve already optimized models for NVIDIA GPUs and Intel CPUs, but if you’re using different hardware, reach out to us to get a model perfectly tailored to your setup.
Our secret? Anyma search engine
Our Neural Architecture Search engine finds, among over 50 thousand possible alternatives, the most efficient and accurate architecture for any hardware device and computer vision task.
Anyma
It’s all completely automated
Anyma collects hardware statistics from the target device and then evaluates application-oriented metrics such as accuracy, latency, and energy consumption to automatically identify the optimal solution.
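
As an illustrative sketch of the general idea (not Anyma's actual implementation), the selection step can be pictured as measuring each candidate architecture on the target device and keeping the most accurate one that fits the latency and energy budget; every name and number below is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CandidateMetrics:
    accuracy: float    # task metric on a validation set (e.g. mAP or mIoU)
    latency_ms: float  # inference time measured on the target device
    energy_mj: float   # energy per inference measured on the target device

def score(m: CandidateMetrics, max_latency_ms: float, max_energy_mj: float) -> float:
    """Reject candidates that exceed the device budget, otherwise rank by accuracy."""
    if m.latency_ms > max_latency_ms or m.energy_mj > max_energy_mj:
        return float("-inf")
    return m.accuracy

def select_best(candidates: dict, max_latency_ms: float, max_energy_mj: float) -> str:
    """Return the name of the best architecture under the hardware constraints."""
    return max(candidates, key=lambda name: score(candidates[name], max_latency_ms, max_energy_mj))

# Hypothetical metrics gathered on the target device for three candidates.
candidates = {
    "arch_a": CandidateMetrics(accuracy=0.78, latency_ms=12.0, energy_mj=35.0),
    "arch_b": CandidateMetrics(accuracy=0.81, latency_ms=25.0, energy_mj=60.0),
    "arch_c": CandidateMetrics(accuracy=0.76, latency_ms=8.0, energy_mj=22.0),
}
print(select_best(candidates, max_latency_ms=15.0, max_energy_mj=40.0))  # -> arch_a
```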
Ready to use
Thanks to Anyma's hardware-aware NAS, the best neural network architecture identified is deployable on the target device without requiring complex optimization and compilation steps.
Within reach through our library
Platform continuity
Train locally, deploy to the cloud, or switch back seamlessly; the platform stays in sync to support uninterrupted workflows across cloud, on-prem, or edge environments.
Community
We built an open-source library for building and optimizing computer vision models together; a shared space for developers to experiment, improve, and deploy with control.
Interoperability
Our library is modular and designed to fit into any ML stack. You can easily integrate it, or even use it to build and plug in your own custom models.
Data Security
Training and running models on your hardware means full control over data; no cloud dependency, no lock-in; maximum privacy, no compromise on performance.
Easy Access
Clear documentation and ready-to-use examples make it easy to get started; low entry barriers and smooth integration even for non-experts.
FAQ
What advantages do Focoos models offer in terms of speed and responsiveness?
Focoos AI’s Anyma-powered models are optimized for real-time performance, enabling ultra-fast inference even in resource-constrained environments. This allows for rapid decision-making in time-critical applications, helping businesses boost operational efficiency and stay competitive in high-speed industries.
What kind of models are available?
Focoos provides a range of high-performance models, including object detection, semantic segmentation, and instance segmentation. You can use them out of the box, fine-tune them on your own data, or integrate your custom models using the Focoos Library. The model catalog includes both standard models and Pro models, which are proprietary, highly optimized, and available with the Premium Plan.
What’s the difference between Standard and Pro models?
Both Standard and Pro models are built using Focoos’ proprietary Anyma technology, ensuring high efficiency and fast deployment. Standard models are open-source, pre-trained, and ready to use or fine-tune on your own data. Pro models, available with the Premium Plan, are further optimized for specific hardware and low-power environments. They offer faster inference, lower memory usage, and are ideal for real-time, embedded, and edge applications.
How does Focoos support sustainability?
Focoos AI’s computer vision models, both standard and Pro, are optimized for efficiency, requiring minimal computational resources. This reduces energy consumption, extends hardware lifespan, and minimizes environmental impact. By helping businesses do more with less, Focoos contributes to lower carbon footprints and supports long-term sustainability goals.
Are Focoos models compatible with existing hardware and infrastructure?
Absolutely. Focoos AI is built to run efficiently across a wide range of hardware, from high-performance servers to edge and embedded devices, without requiring costly upgrades. Our models are optimized for low-power, resource-constrained environments, delivering real-time performance even on limited hardware. This allows businesses to meet demanding requirements with affordable infrastructure, supporting both short-term deployment and long-term cost-effective planning.
Can I use models without training them?
Yes. All Focoos models are ready to use out of the box. You can upload an image or video and run inference directly from the platform or through the Focoos Library, with no training required. If needed, you can fine-tune the models on your own data using the built-in training tools.
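
As a minimal sketch of that workflow, the snippet below loads a pretrained model and runs inference on one image; the entry point and method names (ModelManager, get, infer) are assumptions about the Focoos Library API, so check the documentation for the exact calls and model names.

```python
# Illustrative only: the names below are assumed, not guaranteed to match the real API.
from focoos import ModelManager

model = ModelManager.get("fai-detr")           # load a pretrained detection model (name assumed)
detections = model.infer("factory_floor.jpg")  # run inference on a single image, no training step
print(detections)                              # boxes, labels, and confidence scores
```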
Can I create my own training pipeline or add custom components like new losses?
Yes. The Focoos Library is built for flexibility and modularity. You can define your own training pipeline and easily integrate custom components such as losses, metrics, or model architectures. This allows you to experiment freely.
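
For instance, a custom loss can be written as an ordinary PyTorch module and handed to your pipeline wherever a loss callable is expected; the sketch below is illustrative and does not describe how the Focoos Library registers components internally.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FocalLoss(nn.Module):
    """Illustrative custom loss: cross-entropy re-weighted to focus on hard samples."""

    def __init__(self, gamma: float = 2.0):
        super().__init__()
        self.gamma = gamma

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        ce = F.cross_entropy(logits, targets, reduction="none")  # per-sample cross-entropy
        pt = torch.exp(-ce)                                      # probability of the true class
        return ((1.0 - pt) ** self.gamma * ce).mean()            # down-weight easy samples

# Drop-in usage wherever the training pipeline expects a loss callable.
loss_fn = FocalLoss(gamma=2.0)
loss = loss_fn(torch.randn(8, 10), torch.randint(0, 10, (8,)))
```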
How can I interface with the platform through the Focoos Library?
The Focoos Library connects your local environment with the Focoos platform. It lets you manage Focoos models, run inference remotely, collaborate with your team, and monitor performance, all directly from your code. By default it connects to your platform workspace, syncing training runs and model artifacts with the platform. You can enable real-time experiment tracking by setting sync_to_hub=True during training, allowing you to monitor progress and access results directly from the web interface, even when working locally. For full details, check the documentation.
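
A minimal sketch of enabling that tracking during a training run follows; only the sync_to_hub flag comes from the description above, while the surrounding names (ModelManager, get, train, dataset) are assumptions, so refer to the documentation for the exact API.

```python
# Illustrative only: apart from sync_to_hub, the names below are assumed.
from focoos import ModelManager

model = ModelManager.get("fai-mf")  # load a pretrained model to fine-tune (name assumed)
model.train(
    dataset="my-dataset",           # placeholder dataset reference
    sync_to_hub=True,               # stream metrics and artifacts to your platform workspace
)
```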
Is the Focoos Library free to use?
Yes, the Focoos Library is open-source and free. You can use it to train and test models locally, integrate it into your workflows, and sync with the Focoos platform. While advanced platform features and access to Pro Models require a Premium Plan, the core library is fully accessible to everyone.
How can I contribute to the Focoos community?
You can contribute by sharing your custom models, writing tutorials, reporting bugs, or suggesting improvements. If you’re working with the Focoos Library, you’re welcome to open issues or submit pull requests on GitHub. We also encourage sharing use cases and feedback. Your input helps us improve the ecosystem for everyone.