How do we run real-time AI applications at the edge: fast, secure, and with minimal latency?

SMARTY’s answer: an enhanced serverless engine, deeply integrated with In-band Network Telemetry (INT), developed with our partner TKI.

In simple terms, this innovation lets us deploy AI apps without waiting for a full server to spin up, monitor performance in real time, route traffic intelligently across edge and cloud, scale instantly, and heal automatically. Edge-based systems (like those in smart cars or smart cities) don’t have the luxury of lag. They need to:

  • Start instantly 
  • Understand network traffic on the fly 
  • Use compute resources efficiently 
  • Adapt in real time 
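
Adapting in real time typically means steering each request to whichever node currently meets its latency budget. The sketch below is purely illustrative — the node names, thresholds, and the idea of a per-node telemetry record are assumptions, not SMARTY’s actual API — but it shows the kind of decision live INT data enables:

```python
# Hypothetical sketch: choosing an execution site from live telemetry.
# NodeTelemetry and pick_node are illustrative names, not part of SMARTY.
from dataclasses import dataclass

@dataclass
class NodeTelemetry:
    name: str
    rtt_ms: float    # round-trip latency reported by in-band telemetry
    cpu_load: float  # 0.0–1.0 utilization of the compute node

def pick_node(nodes: list[NodeTelemetry], max_rtt_ms: float = 20.0) -> NodeTelemetry:
    """Prefer the least-loaded node that meets the latency deadline;
    fall back to the lowest-latency node if none qualifies."""
    eligible = [n for n in nodes if n.rtt_ms <= max_rtt_ms]
    if eligible:
        return min(eligible, key=lambda n: n.cpu_load)
    return min(nodes, key=lambda n: n.rtt_ms)
```

With fresh telemetry, a busy nearby edge node can lose to a slightly farther but idle one, and the distant cloud is used only when no edge node can meet the deadline.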

🔬 What’s unique about our solution? 

  • Serverless computing with function fusion and zero-cold-start warmup 
  • Real-time telemetry data from network + compute nodes (via P4) 
  • Smart orchestration across cloud and edge 
  • Built on Amazon Web Services now, with migration to Kubernetes-based open-source solutions coming soon 
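
To make the first two ideas concrete, here is a minimal sketch of a warm pool (instances pre-initialized so invocations skip startup latency) and function fusion (chained functions run in one instance instead of hopping over the network). The class and helper names are hypothetical illustrations of the general techniques, not SMARTY’s implementation:

```python
# Illustrative sketch of zero-cold-start warmup and function fusion.
import queue

class WarmPool:
    """Keeps pre-initialized function instances ready, so a request
    never pays startup latency while the pool has capacity."""
    def __init__(self, factory, size: int = 2):
        self._factory = factory            # creates a ready-to-run instance
        self._pool = queue.SimpleQueue()
        for _ in range(size):              # pay the cold start up front
            self._pool.put(factory())

    def invoke(self, payload):
        try:
            fn = self._pool.get_nowait()   # warm instance: no startup cost
        except queue.Empty:
            fn = self._factory()           # pool drained: cold-start fallback
        try:
            return fn(payload)
        finally:
            self._pool.put(fn)             # recycle the instance, keeping it warm

def fuse(*fns):
    """Function fusion: run a chain of functions inside one instance,
    avoiding per-hop network and invocation overhead."""
    def fused(payload):
        for fn in fns:
            payload = fn(payload)
        return payload
    return fused
```

A pipeline like preprocess → infer could then be fused into a single warm function, so one invocation covers the whole chain.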

🎯 SMARTY’s solution is already being tested and used in secure automotive applications, where self-driving features rely on collective perception and ultra-fast response times. Everything is built to serve AI: deployment, monitoring, resource scaling, and even on-the-fly AI training and learning tasks. Our first open-source-ready versions are coming soon!

Read more: https://www.smarty-project.eu
