Azmyin Md. Kamal’s Post

Graduate Research and Teaching Assistant @ iCORE, MIE, LSU, U.S.A. Roboticist specializing in collaborative VSLAM, 3D object detection and model predictive motion planning of autonomous agents.

From what I have observed since getting into robotics and computer vision, when neural networks gained popularity in computer vision tasks around 2013-2014, there was a trend of using bigger and bigger networks for performance gains. Then came the years when models were optimized (and still are) in terms of size, data type, compression, and deployment on specialized hardware for very impressive real-time performance. I think Transformers are probably heading in the same direction: in the next few years, we might see LLMs and their derived variants start to decrease in size, use multimodal data more efficiently, and generalize across multiple domains, all while increasing throughput by leveraging modern GPGPUs and other acceleration hardware.

Sebastian Raschka, PhD

Machine learning and AI researcher • author of the "Build a Large Language Model From Scratch" book (amzn.to/4fqvn0D) • research engineer at Lightning AI • ex-statistics professor at University of Wisconsin-Madison

"What Matters In Transformers?" is an interesting paper (https://lnkd.in/g_Zqwf9M) that finds you can actually remove half of the attention layers in LLMs like Llama without noticeably reducing modeling performance. The concept is relatively simple. The authors delete attention layers, MLP layers, or entire transformer blocks: - Removing entire transformer blocks leads to significant performance degradation. - Removing MLP layers results in significant performance degradation. - Removing attention layers causes almost no performance degradation! In Llama 2 70B, even if half of the attention layers are deleted (which results in a 48% speed-up), there's only a 2.4% decrease in the model benchmarks. The author also recently added Llama 3 results to the paper, which are similar. The attention layers were not removed randomly but based on a cosine-based similarity score: If the input and output are very similar, the layer is redundant and can be removed. This is a super intriguing result and could potentially be combined with various model compression techniques (like pruning and quantization) for compounding effects. Furthermore, the layers are removed in a one-shot fashion (versus iterative fashion), and no (re)training is required after the removal. However, retraining the model after the removal could potentially even recover some of the lost performance. Overall, a very simple but very interesting study. It appears there might be lots of computational redundancy in larger architectures. One big caveat of this study, though, is that the focus is mostly on academic benchmarks (HellaSwag, MMLU, etc.). It's unclear how well the models perform on benchmarks measuring conversational performance.
