The Heads Hypothesis: A Unifying Statistical Approach Towards Understanding Multi-Headed Attention in BERT
BERT has been a top contender in the space of NLP models. With its success, a parallel stream of research, named BERTology, has emerged that tries to understand why BERT works so well. With a similar objective, this blog explains a novel technique for analysing the behaviour of BERT’s attention heads. Read on for interesting insights that unravel the inner workings of BERT!
5 min read
Levelling up NLP for Indian Languages
We are working towards building a better ecosystem for Indian languages while also keeping up with recent advancements in NLP. To this end, we are releasing IndicNLPSuite, a collection of resources and models for Indian languages.
6 min read