Leaner, Stronger AI
The new API provides:
- convert_HAI_format(), a function to accelerate the training and inference process.
- play(), an entry point that can be configured using configs.
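The calls above can be sketched as follows. This is a hypothetical illustration only: the import path, function signatures, and config keys are assumptions, not SLIM™'s documented API; the stub bodies exist just to make the sketch runnable.

```python
# Hypothetical sketch of the API surface described above.
# Signatures and config keys are assumptions, not SLIM's real API.

def convert_HAI_format(model_path: str) -> str:
    """Stand-in for SLIM's converter: maps a model file to HAI format."""
    return model_path.rsplit(".", 1)[0] + ".hai"

def play(model_path: str, configs: dict) -> str:
    """Stand-in for SLIM's configurable inference entry point."""
    return f"running {model_path} with {configs}"

# Hypothetical usage: convert a model, then run it with a config.
config = {"device": "gpu", "batch_size": 32}   # illustrative keys only
hai_model = convert_HAI_format("resnet50.onnx")
result = play(hai_model, configs=config)
```

In this sketch, conversion happens once up front and the resulting HAI-format artifact is what `play()` consumes, matching the description of `convert_HAI_format()` as the acceleration step.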
Try the SLIM™ demo on your own GPU machine! Our demos run on
allocated AWS machines, but we now provide the SLIM™ demo Docker container
publicly so you can try it on your own machine and let us know how it
performs. Stay tuned for instructions on integrating SLIM™ into your machine learning platform.
SLIM™ on FPGA enables robots and machines to see less and infer more with a better energy profile and exceptional inference speed.
SLIM™ for GPUs illustrates how our intelligent encoding schemes enable users to reduce their training and inference time significantly.