
Statistics Seminar: Yonatan Belinkov

Date:
Monday, 01/01/2018, 15:30 to 16:30

Location:
Hevra 4412

Title: Understanding Internal Representations in Deep Learning Models for Language and Speech Processing 


Abstract:

Language technology has become pervasive in everyday life, powering applications like Apple’s Siri or Google’s Assistant. Neural networks are a key component in these systems thanks to their ability to model large amounts of data. Unlike traditional systems, models based on deep neural networks (a.k.a. deep learning) can be trained in an end-to-end fashion on input-output pairs, such as a sentence in one language and its translation in another language, or a speech utterance and its transcription. The end-to-end training paradigm simplifies the engineering process while giving the model flexibility to optimize for the desired task. This, however, often comes at the expense of model interpretability: understanding the role of different parts of the deep neural network is difficult, and such models are often perceived as “black boxes”. In this work, we study deep learning models for two core language technology tasks: machine translation and speech recognition. We advocate an approach that attempts to decode the information encoded in such models while they are being trained. We perform a range of experiments comparing different modules, layers, and representations in the end-to-end models. Our analyses illuminate the inner workings of end-to-end machine translation and speech recognition systems, explain how they capture different language properties, and suggest potential directions for improving them. The methodology is also applicable to other tasks in the language domain and beyond.
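The decoding approach the abstract describes is commonly implemented as a probing (diagnostic) classifier: a simple model trained to predict a linguistic property from an encoder's frozen hidden states, where held-out accuracy above chance indicates the property is encoded. A minimal sketch, using synthetic random vectors as a stand-in for a real model's hidden states and a hypothetical binary property (e.g., noun vs. verb); all names and dimensions here are illustrative assumptions, not the speaker's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for an encoder's hidden states: 200 "tokens", 16-dim each.
# A hypothetical binary property (e.g., noun vs. verb) is linearly encoded
# by shifting each vector along a fixed direction according to its label.
n, d = 200, 16
labels = rng.integers(0, 2, size=n)
direction = rng.normal(size=d)
hidden = rng.normal(size=(n, d)) + np.outer(2.0 * labels - 1.0, direction)

# Hold out the last 50 examples to measure how decodable the property is.
X_tr, y_tr = hidden[:150], labels[:150]
X_te, y_te = hidden[150:], labels[150:]

# Probe: logistic regression trained by gradient descent on the frozen states.
w, b = np.zeros(d), 0.0
for _ in range(500):
    z = np.clip(X_tr @ w + b, -30.0, 30.0)      # clip to avoid exp overflow
    p = 1.0 / (1.0 + np.exp(-z))                # predicted probabilities
    w -= 0.5 * (X_tr.T @ (p - y_tr)) / len(y_tr)
    b -= 0.5 * np.mean(p - y_tr)

accuracy = np.mean(((X_te @ w + b) > 0) == y_te)
print(f"probe accuracy: {accuracy:.2f}")
```

In an actual analysis, the probe would be trained on hidden states extracted from a trained translation or speech-recognition model, and comparing accuracies across layers and modules reveals where each language property is represented.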


Bio:

Yonatan Belinkov is a PhD candidate at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), working on speech and language processing. His recent research interests focus on representations of language in neural network models. His research has been published at ACL, EMNLP, TACL, ICLR, and NIPS. He received an SM degree from MIT in 2014 and prior to that a BSc in Mathematics and an MA in Arabic Studies, both from Tel Aviv University.


Speaker:
Yonatan Belinkov, MIT