
Researchers Reduce Size of AI Models to Increase Speed

Gadgets like security cameras, smartphones, and smart speakers are now ready to run artificial intelligence software on-device, with the goal of speeding up speech- and image-processing tasks. To cut energy and computation costs, smartphone brands and other device manufacturers are adopting AI in compressed form. The compression technique, quantization, shrinks deep learning models by representing their parameters with fewer bits. The complication is that these smaller models are more vulnerable to malicious attacks, a serious concern given how easily AI can be tricked into misbehaving.
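To make the idea concrete, here is a minimal sketch of uniform quantization of a weight array. This is a generic post-training scheme, not the researchers' exact method, and the example values are illustrative:

```python
import numpy as np

def quantize(weights, num_bits=8):
    """Round float weights to num_bits signed integer codes, then map back.

    Generic uniform quantization sketch; the paper's exact scheme is
    not described in this article.
    """
    qmax = 2 ** (num_bits - 1) - 1               # e.g. 127 for 8 bits
    scale = np.max(np.abs(weights)) / qmax       # map the extremes onto the int range
    codes = np.clip(np.round(weights / scale), -qmax, qmax)  # integer codes
    return codes * scale                         # dequantized approximation

w = np.array([0.52, -1.27, 0.03, 0.91])
w8 = quantize(w, num_bits=8)   # nearly identical to the original values
w2 = quantize(w, num_bits=2)   # coarse grid: large rounding error
```

Fewer bits means a coarser grid of representable values, which is where the rounding error that attackers can exploit comes from.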

Researchers at tech giant IBM and MIT have measured just how susceptible compressed AI models are to adversarial attacks. To address the challenge, they propose a fix: adding a mathematical constraint during quantization, with the aim of reducing the odds that a compressed model misclassifies subtly modified images. The researchers also traced the cause of the misclassification to an error amplification effect of compression, in which small input perturbations grow as they pass through the quantized network.
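The amplification effect can be illustrated with a toy network. In this sketch (not the paper's setup), each layer's weights have norm 2, so a small input error, like quantization noise, roughly doubles at every layer:

```python
import numpy as np

def forward(x, layers):
    """Run x through stacked linear + ReLU layers."""
    for W in layers:
        x = np.maximum(W @ x, 0.0)
    return x

# Hypothetical 3-layer net whose weight matrices each scale inputs by 2,
# so a perturbation grows about 2**3 = 8x end to end.
layers = [2.0 * np.eye(4)] * 3
x = np.ones(4)
delta = 0.01 * np.ones(4)          # small perturbation, e.g. rounding noise
drift = forward(x + delta, layers) - forward(x, layers)
# each coordinate of drift is 0.08: the 0.01 error amplified 8x
```

When layer norms exceed 1, small errors compound multiplicatively with depth, which is why quantization noise that looks negligible per weight can still flip a model's prediction.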

Speed and Security Are Both Critical for AI Models in Connected Devices

Further experiments show that models quantized to 8 bits or fewer are more vulnerable to adversarial attacks: accuracy dropped from 30-40 percent to less than 10 percent as bit width declined. However, the models regained much of their resilience when the researchers enforced a Lipschitz constraint during quantization.
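One simple way to enforce such a constraint is to cap each layer's spectral norm, which bounds how much that layer can amplify a perturbation. This is a minimal projection sketch under that assumption; the researchers' training-time formulation is not detailed in this article:

```python
import numpy as np

def constrain_lipschitz(W, limit=1.0):
    """Rescale W so its spectral norm (the Lipschitz constant of the
    linear map) does not exceed `limit`.

    Simple post-hoc projection sketch, not the paper's exact method.
    """
    sigma = np.linalg.norm(W, ord=2)   # largest singular value
    if sigma > limit:
        W = W * (limit / sigma)        # shrink the whole matrix uniformly
    return W

W = np.array([[3.0, 0.0],
              [0.0, 0.5]])
Wc = constrain_lipschitz(W, limit=1.0)
# Wc has spectral norm 1.0, so this layer can no longer amplify
# a perturbation
```

Stacking layers whose Lipschitz constants are at most 1 keeps the end-to-end amplification bounded, which matches the intuition behind the reported recovery in robustness.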

The MIT and IBM researchers also noticed a marked change in image recognition after adding their mathematical constraint during compression: the constrained models classified images correctly, at times even more accurately than a full-precision 32-bit model.
Song Han, an assistant professor at MIT, states that the technique mitigates error amplification and asserts that it can make compressed deep learning models more robust than full-precision AI models. The team is now focused on improving the technique further and applying it to a broader range of models.

