SLIM™ Timeline

Leaner, Stronger AI



AI at Speed got even faster with our HAI IO Format!

With this release, we have made major changes to our API:

  • The HAI IO data format can now be accessed by calling the convert_HAI_format() function, accelerating the training and inference process.
  • An embedded visualiser for plotting and profiling the training and inference process.
  • When training or carrying out inference, parameters for the core functions, fit() and play(), can be configured using config files.
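The config-file pattern above can be illustrated with a minimal, self-contained sketch. Note that the config keys and the fit()/play() signatures below are assumptions for illustration only, not the documented SLIM™ API:

```python
import json
import tempfile

# Hypothetical config file mirroring the pattern described above:
# parameters for fit() and play() live in a config file, not in code.
config_text = json.dumps({
    "fit": {"epochs": 20, "batch_size": 64},
    "play": {"top_k": 5},
})

with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    f.write(config_text)
    config_path = f.name

def load_config(path):
    """Load training/inference parameters from a JSON config file."""
    with open(path) as f:
        return json.load(f)

cfg = load_config(config_path)

# Stub stand-ins for the core functions named in the release notes;
# the real SLIM(tm) signatures may differ.
def fit(*, epochs, batch_size):
    return f"training for {epochs} epochs, batch size {batch_size}"

def play(*, top_k):
    return f"inference returning top {top_k} predictions"

print(fit(**cfg["fit"]))    # parameters come from the config file
print(play(**cfg["play"]))
```

Keeping run parameters in a config file rather than in code makes experiments reproducible and easy to compare, which is presumably the motivation here as well.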

SLIM™ vs Conventional DL on CIFAR10
SLIM™ (green) vs Conventional (blue) DL on CIFAR10

SLIM™ vs Conventional DL on CIFAR100
SLIM™ (green) vs Conventional (blue) DL on CIFAR100

Please note: the above results were obtained using the Keras default CNN architecture for both CIFAR10 and CIFAR100. For demonstration purposes, we chose a simple neural network architecture to exhibit SLIM™'s training and inference speed, especially with a limited number of epochs and GPU resources. These results are encouraging, and we are now excited to work on integrating SLIM™ with more advanced architectures (e.g. GPipe) on multi-card systems to further increase accuracy (e.g. >80%).


Check out the demo with a summary of the results.

Release features

  • API functions for HAI IO Format
  • Embedded Visualisers
  • Docker Container
  • Jupyter Interface


SLIM™ Demo as Docker Container

Try SLIM™ demo on your own GPU machine.

Our demos are available on allocated AWS machines; now, however, you can try SLIM™ on your own machine and let us know how it performs!
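As a rough sketch, a GPU-enabled demo container would typically be started like this. The image name is a placeholder, and the published port assumes the Jupyter interface listed in the release features; see the demo link for the actual command:

```shell
# Run the demo image with GPU access (requires the NVIDIA Container
# Toolkit) and publish port 8888 for the Jupyter interface.
docker run --gpus all -p 8888:8888 <slim-demo-image>
```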

Stay tuned for instructions on integrating SLIM™ into your own machine learning platform.

Link to the demo.

Release features

  • Docker Container
  • Jupyter Interface


AI at Speed. SLIM™ FPGA Demo.

SLIM™ on FPGA enables robots and machines to see less and infer more with a better energy profile and exceptional inference speed!

Visit the link below and try it for yourself today!

Link to the demo.

Release features

  • AWS EC2 (FPGA) Instances
  • Jupyter Interface
  • Real-time inference


AI at Speed. SLIM™ GPU Demo.

SLIM™ for GPUs illustrates how our intelligent data formatting schemes enable users to reduce their training and inference time significantly.

Visit the link below and try it for yourself today!

Link to the demo.

Release features

  • AWS EC2 (GPU) Instances
  • Jupyter Interface
  • 5x Faster Training
  • <10% Data Required
   


👏Headlight AI launches SLIM™🍾

In comparison to traditional Deep Learning, which needs tens of thousands of data samples, SLIM™ technology generates neural networks using just the minimum data (at times <10%), chosen intelligently by our algorithms.


Link to the demo.

Release features

  • AWS EC2 (GPU) Instances
  • Jupyter Interface
  • 5x Faster Training
  • <10% Data Required
   


SLIM™ is patent-pending

After months of research and development, involving careful comparison with the current state of the art, the team at Headlight AI filed a patent with the UK patent office to protect the core idea behind SLIM™.

SLIM™ (see less, infer more) enables machines, especially those operating in harsh environments with limited power, to make faster decisions using minimum data.

Patent coverage

  • UK