Times are given in Irish local time: UTC+0 (GMT) in winter and UTC+1 (Irish Standard Time) during daylight saving.
Cardamom Seminar Series #13 – Dr Hassan Sajjad (Qatar Computing Research Institute)
June 27, 2022 @ 5:00 pm – 6:00 pm IST
Analyzing Latent Concepts in Deep Neural Network Models of NLP
The Unit for Linguistic Data at the Insight SFI Research Centre for Data Analytics / Data Science Institute, National University of Ireland Galway is delighted to welcome Dr Hassan Sajjad, a senior scientist at the Qatar Computing Research Institute, as the next speaker in our seminar series. He will talk about unsupervised methods for analyzing the latent spaces of deep neural network models. Register here.
Abstract:
Despite the benefits of deep neural network models, their opaqueness is a major cause of concern. Deep neural network models work as a black box, and it is often impossible to understand what, and how much, a model learns about language in order to solve a task. In this talk, I will present a novel unsupervised method for analyzing the latent spaces of deep neural network models. I will seek answers to the following questions: i) Is the linguistic hierarchy, such as morphology and semantics, learned in the model? ii) How is such knowledge structured in the model? iii) What novel concepts are learned by the model? Some notable findings suggest that the models learn n-gram-based and semantically related concepts in the lower layers, core linguistic concepts in the middle to higher layers, and task-specific concepts in the latent spaces of the last layers. Moreover, the models learn novel multi-faceted concepts comprising more than one linguistic concept, such as a concept based on a mix of syntactic and morphological properties.
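For readers unfamiliar with this line of analysis, the sketch below illustrates one common recipe for discovering latent concepts: extract per-token representations from a chosen layer of a pretrained model and cluster them, then inspect each cluster as a candidate concept. This is a minimal illustration of the general idea, not necessarily the pipeline presented in the talk; the model name, layer choice, and cluster count are assumptions made for the example.

```python
# Minimal sketch of layer-wise latent concept analysis (illustrative only).
# Assumptions: bert-base-uncased, layer 6, 4 clusters on a tiny toy corpus.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.cluster import AgglomerativeClustering

sentences = ["The cats are sleeping .", "She quickly ran home ."]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

tokens, vectors = [], []
with torch.no_grad():
    for sent in sentences:
        enc = tokenizer(sent, return_tensors="pt")
        # hidden_states[6]: representations from an intermediate layer
        hidden = model(**enc).hidden_states[6]
        for tok, vec in zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist()),
                            hidden[0]):
            if tok not in ("[CLS]", "[SEP]"):
                tokens.append(tok)
                vectors.append(vec.numpy())

# Group token representations into clusters; each cluster is a candidate
# "latent concept". With a real corpus, clusters would be labeled by human
# inspection or aligned with linguistic annotations (POS, morphology, etc.).
clusters = AgglomerativeClustering(n_clusters=4).fit_predict(vectors)
for cid in range(4):
    members = [t for t, c in zip(tokens, clusters) if c == cid]
    print(f"concept {cid}: {members}")
```

Repeating this extraction for different layers is what allows the kind of layer-wise comparison described in the abstract, e.g. checking whether lower-layer clusters group tokens by surface or lexical similarity while higher-layer clusters group them by task-relevant properties.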
About the Speaker:
Dr Hassan Sajjad is a Senior Scientist at the Qatar Computing Research Institute. He leads and manages the machine translation project and the explainable AI project, NeuroX. His research interests include the interpretation of deep neural networks, machine translation, domain adaptation, and natural language processing for low-resource and morphologically rich languages. His work has been published in several prestigious venues such as CL, ICLR, ACL and AAAI. His work on the interpretation of deep models, in collaboration with MIT and Harvard, has also been featured in several tech blogs. Dr Sajjad regularly serves as an area chair and as a reviewer for various machine learning and computational linguistics conferences and journals. He co-organized the BlackboxNLP workshop 2020/21 and the shared task on MT Robustness 2019/20. In addition, he has taught courses on deep learning at several international spring and summer schools. Before joining QCRI, Dr Sajjad was at the University of Stuttgart, Germany, where he completed his PhD in Computer Science under the supervision of Prof. Dr Hinrich Schütze.
Host:
The seminar series is led by the Cardamom project team. The Cardamom project aims to close the resource gap for minority and under-resourced languages by using deep-learning-based natural language processing (NLP) and exploiting similarities between closely related languages. The project further extends this idea to historical languages, which can be considered closely related to their modern forms. It aims to provide NLP through both space and time for languages that current approaches have ignored.