Mitesh M. Khapra

Also published as: Mitesh Khapra, Mitesh M Khapra, Mitesh M. Khapra


2020

Towards Transparent and Explainable Attention Models
Akash Kumar Mohankumar | Preksha Nema | Sharan Narasimhan | Mitesh M. Khapra | Balaji Vasan Srinivasan | Balaraman Ravindran
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Recent studies on interpretability of attention distributions have led to notions of faithful and plausible explanations for a model’s predictions. Attention distributions can be considered a faithful explanation if a higher attention weight implies a greater impact on the model’s prediction. They can be considered a plausible explanation if they provide a human-understandable justification for the model’s predictions. In this work, we first explain why current attention mechanisms in LSTM-based encoders can provide neither a faithful nor a plausible explanation of the model’s predictions. We observe that in LSTM-based encoders, the hidden representations at different time steps are very similar to each other (high conicity), and attention weights in these situations do not carry much meaning because even a random permutation of the attention weights does not affect the model’s predictions. Based on experiments on a wide variety of tasks and datasets, we observe that attention distributions often attribute the model’s predictions to unimportant words such as punctuation and fail to offer a plausible explanation for the predictions. To make attention mechanisms more faithful and plausible, we propose a modified LSTM cell with a diversity-driven training objective that ensures that the hidden representations learned at different time steps are diverse. We show that the resulting attention distributions offer more transparency as they (i) provide a more precise importance ranking of the hidden states, (ii) are better indicative of words important for the model’s predictions, and (iii) correlate better with gradient-based attribution methods. Human evaluations indicate that the attention distributions learned by our model offer a plausible explanation of the model’s predictions. Our code has been made publicly available at https://github.com/akashkm99/Interpretable-Attention
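
The conicity measure at the centre of this abstract is compact enough to state in code. Below is a minimal sketch (not the authors' released implementation): conicity is the mean cosine similarity between each hidden state and the mean of all hidden states, and a diversity-driven objective adds it, scaled by an illustrative weight `lam`, to the task loss so that training pushes the hidden states apart.

```python
# Minimal sketch of conicity and a diversity-driven loss, assuming PyTorch.
import torch
import torch.nn.functional as F

def conicity(hidden_states: torch.Tensor) -> torch.Tensor:
    """hidden_states: (seq_len, hidden_dim) hidden vectors of one sequence."""
    mean_vec = hidden_states.mean(dim=0, keepdim=True)           # (1, d)
    # "alignment to mean": cosine similarity of each state with the mean
    atm = F.cosine_similarity(hidden_states, mean_vec, dim=-1)   # (seq_len,)
    return atm.mean()

def diversity_driven_loss(nll: torch.Tensor, hidden_states: torch.Tensor,
                          lam: float = 0.1) -> torch.Tensor:
    # Penalizing high conicity encourages diverse hidden representations.
    return nll + lam * conicity(hidden_states)
```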

IndicNLPSuite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for Indian Languages
Divyanshu Kakwani | Anoop Kunchukuttan | Satish Golla | Gokul N.C. | Avik Bhattacharyya | Mitesh M. Khapra | Pratyush Kumar
Findings of the Association for Computational Linguistics: EMNLP 2020

In this paper, we introduce NLP resources for 11 major Indian languages from two major language families. These resources include: (a) large-scale sentence-level monolingual corpora, (b) pre-trained word embeddings, (c) pre-trained language models, and (d) multiple NLU evaluation datasets (the IndicGLUE benchmark). The monolingual corpora contain a total of 8.8 billion tokens across all 11 languages and Indian English, primarily sourced from news crawls. The word embeddings are based on FastText and are hence suitable for handling the morphological complexity of Indian languages. The pre-trained language models are based on the compact ALBERT model. Lastly, we compile the IndicGLUE benchmark for Indian language NLU. To this end, we create datasets for the following tasks: Article Genre Classification, Headline Prediction, Wikipedia Section-Title Prediction, Cloze-style Multiple-choice QA, Winograd NLI and COPA. We also include publicly available datasets for some Indic languages for tasks like Named Entity Recognition, Cross-lingual Sentence Retrieval, Paraphrase Detection, etc. Our embeddings are competitive with or better than existing pre-trained embeddings on multiple tasks. We hope that the availability of the dataset will accelerate Indic NLP research, which has the potential to impact more than a billion people. It can also help the community in evaluating advances in NLP over a more diverse pool of languages. The data and models are available at https://indicnlp.ai4bharat.org.
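
As a consumer-side illustration only, here is a plausible way to load such resources from Python. The HuggingFace model id `ai4bharat/indic-bert` and the fastText file name below are assumptions based on the project page, not details stated in this abstract.

```python
# Hedged sketch: loading the ALBERT-based language model and fastText
# embeddings. The model id and the .bin file name are assumptions.
import fasttext
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ai4bharat/indic-bert")
model = AutoModel.from_pretrained("ai4bharat/indic-bert")

# fastText vectors for one of the 11 languages (file name illustrative)
embeddings = fasttext.load_model("indicnlp.v1.hi.bin")
vector = embeddings.get_word_vector("भारत")
```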

On Incorporating Structural Information to improve Dialogue Response Generation
Nikita Moghe | Priyesh Vijayan | Balaraman Ravindran | Mitesh M. Khapra
Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI

We consider the task of generating dialogue responses from background knowledge comprising domain-specific resources. Specifically, given a conversation around a movie, the task is to generate the next response based on background knowledge about the movie, such as the plot, reviews, Reddit comments, etc. This requires capturing structural, sequential and semantic information from the conversation context and the background resources. We propose a new architecture that uses the ability of BERT to capture deep contextualized representations in conjunction with explicit structure and sequence information. More specifically, we use (i) Graph Convolutional Networks (GCNs) to capture structural information, (ii) LSTMs to capture sequential information and (iii) BERT for the deep contextualized representations that capture semantic information. We analyze the proposed architecture extensively. To this end, we propose a plug-and-play Semantics-Sequences-Structures (SSS) framework which allows us to effectively combine such linguistic information. Through a series of experiments, we make some interesting observations. First, we observe that the popular adaptation of the GCN model for NLP tasks, where structural information (GCNs) is added on top of sequential information (LSTMs), performs poorly on our task. This leads us to explore interesting ways of combining semantic and structural information to improve the performance. Second, we observe that while BERT already outperforms other deep contextualized representations such as ELMo, it still benefits from the additional structural information explicitly added using GCNs. This is a bit surprising given the recent claims that BERT already captures structural information. Lastly, the proposed SSS framework gives an improvement of 7.95% in BLEU score over the baseline.
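
The structural component described above reduces to graph convolution over token representations. Below is a minimal sketch, not the paper's SSS implementation: one graph-convolution step where the adjacency matrix would come from a dependency parse and the input features from BERT or an LSTM; all dimensions are illustrative.

```python
# One GCN layer over token features: H' = ReLU(W * mean-over-neighbours(H)).
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # feats: (num_tokens, in_dim); adj: (num_tokens, num_tokens), 0/1
        adj = adj + torch.eye(adj.size(0))      # add self-loops
        deg = adj.sum(dim=1, keepdim=True)      # node degrees
        h = adj @ feats / deg                   # average over neighbours
        return torch.relu(self.linear(h))

# e.g., structurally refining 10 contextual token vectors of size 768
gcn = GCNLayer(768, 256)
out = gcn(torch.randn(10, 768), torch.eye(10))
```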

Joint Transformer/RNN Architecture for Gesture Typing in Indic Languages
Emil Biju | Anirudh Sriram | Mitesh M. Khapra | Pratyush Kumar
Proceedings of the 28th International Conference on Computational Linguistics

Gesture typing is a method of typing words on a touch-based keyboard by creating a continuous trace passing through the relevant keys. This work is aimed at developing a keyboard that supports gesture typing in Indic languages. We begin by noting that when dealing with Indic languages, one needs to cater to two different sets of users: (i) users who prefer to type in the native Indic script (Devanagari, Bengali, etc.) and (ii) users who prefer to type in the English script but want the transliterated output in the native script. In both cases, we need a model that takes a trace as input and maps it to the intended word. To enable the development of these models, we create and release two datasets. First, we create a dataset containing keyboard traces for 193,658 words from 7 Indic languages. Second, we curate 104,412 English-Indic transliteration pairs from Wikidata across these languages. Using these datasets, we build a model that performs path decoding, transliteration and transliteration correction. Unlike prior approaches, our proposed model does not make co-character independence assumptions during decoding. The overall accuracy of our model across the 7 languages varies from 70% to 95%.
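
To make the "trace as input" setup concrete, here is an illustrative preprocessing step (not from the paper): snapping each point of a touch trace to its nearest key centre yields the key sequence a decoder would consume. The keyboard layout is a toy assumption.

```python
# Toy sketch: convert an (x, y) gesture trace into a key sequence.
import math

KEY_CENTRES = {"k": (0.0, 0.0), "e": (1.0, 0.0), "y": (2.0, 0.0)}  # toy layout

def trace_to_keys(trace):
    keys = []
    for x, y in trace:
        nearest = min(KEY_CENTRES, key=lambda k: math.dist((x, y), KEY_CENTRES[k]))
        if not keys or keys[-1] != nearest:   # collapse repeats along the path
            keys.append(nearest)
    return keys

print(trace_to_keys([(0.1, 0.0), (0.4, 0.1), (1.1, 0.0), (1.9, 0.1)]))
# -> ['k', 'e', 'y']
```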

On the weak link between importance and prunability of attention heads
Aakriti Budhraja | Madhura Pande | Preksha Nema | Pratyush Kumar | Mitesh M. Khapra
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Given the success of Transformer-based models, two directions of study have emerged: interpreting the role of individual attention heads and down-sizing the models for efficiency. Our work straddles these two streams: we analyse the importance of basing pruning strategies on the interpreted role of the attention heads. We evaluate this on Transformer and BERT models on multiple NLP tasks. Firstly, we find that a large fraction of the attention heads can be randomly pruned with limited effect on accuracy. Secondly, for Transformers, we find no advantage in pruning attention heads identified to be important based on existing studies that relate importance to the location of a head. On the BERT model, too, we find no preference for top or bottom layers, though the latter are reported to have higher importance. However, strategies that avoid pruning middle layers and consecutive layers perform better. Finally, during fine-tuning, the compensation for pruned attention heads is roughly equally distributed across the un-pruned heads. Our results thus suggest that interpretation of attention heads does not strongly inform pruning.
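
The random-pruning baseline the abstract describes is easy to reproduce with the head-pruning utility in HuggingFace Transformers. This is a generic sketch of the setup, not the authors' experiment code; the pruning fraction is illustrative.

```python
# Randomly prune a fraction of BERT's attention heads, then evaluate.
import random
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")
num_layers = model.config.num_hidden_layers     # 12 for bert-base
num_heads = model.config.num_attention_heads    # 12 for bert-base

prune_fraction = 0.5
all_heads = [(l, h) for l in range(num_layers) for h in range(num_heads)]
to_prune = random.sample(all_heads, int(prune_fraction * len(all_heads)))

heads_per_layer = {}
for layer, head in to_prune:
    heads_per_layer.setdefault(layer, []).append(head)
model.prune_heads(heads_per_layer)  # fine-tune/evaluate the pruned model downstream
```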

Towards Interpreting BERT for Reading Comprehension Based QA
Sahana Ramnath | Preksha Nema | Deep Sahni | Mitesh M. Khapra
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

BERT and its variants have achieved state-of-the-art performance in various NLP tasks. Since then, various works have been proposed to analyze the linguistic information being captured in BERT. However, current works do not provide insight into how BERT is able to achieve near-human-level performance on the task of Reading Comprehension based Question Answering. In this work, we attempt to interpret BERT for RCQA. Since BERT layers do not have predefined roles, we define a layer’s role or functionality using Integrated Gradients. Based on the defined roles, we perform a preliminary analysis across all layers. We observe that the initial layers focus on query-passage interaction, whereas later layers focus more on contextual understanding and enhancing the answer prediction. Specifically, for quantifier questions (how much/how many), we notice that BERT focuses on confusing words (i.e., on other numerical quantities in the passage) in the later layers, but still manages to predict the answer correctly. The fine-tuning and analysis scripts will be publicly available at https://github.com/iitmnlp/BERT-Analysis-RCQA.
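
Integrated Gradients (Sundararajan et al., 2017), the attribution method used here to assign roles to layers, is a Riemann approximation of the path integral of gradients from a baseline to the input. A schematic version, assuming `model` maps input embeddings to a scalar score (e.g., the predicted span's logit); this is not the authors' script.

```python
# Schematic Integrated Gradients over input embeddings.
import torch

def integrated_gradients(model, emb: torch.Tensor, steps: int = 50) -> torch.Tensor:
    baseline = torch.zeros_like(emb)          # all-zero embedding baseline
    total_grads = torch.zeros_like(emb)
    for k in range(1, steps + 1):
        # interpolate between baseline and the real embedding
        point = baseline + (k / steps) * (emb - baseline)
        point.requires_grad_(True)
        model(point).backward()               # model(point) must be a scalar
        total_grads += point.grad
    # (x - x') * average gradient along the straight-line path
    return (emb - baseline) * total_grads / steps
```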

2019

Graph Convolutional Network with Sequential Attention for Goal-Oriented Dialogue Systems
Suman Banerjee | Mitesh M. Khapra
Transactions of the Association for Computational Linguistics, Volume 7

Domain-specific goal-oriented dialogue systems typically require modeling three types of inputs, namely, (i) the knowledge base associated with the domain, (ii) the history of the conversation, which is a sequence of utterances, and (iii) the current utterance for which the response needs to be generated. While modeling these inputs, current state-of-the-art models such as Mem2Seq typically ignore the rich structure inherent in the knowledge graph and in the sentences of the conversation context. Inspired by the recent success of structure-aware Graph Convolutional Networks (GCNs) for various NLP tasks such as machine translation, semantic role labeling, and document dating, we propose a memory-augmented GCN for goal-oriented dialogues. Our model exploits (i) the entity relation graph in a knowledge base and (ii) the dependency graph associated with an utterance to compute richer representations for words and entities. Further, we take cognizance of the fact that in certain situations, such as when the conversation is in a code-mixed language, dependency parsers may not be available. We show that in such situations we can use the global word co-occurrence graph to enrich the representations of utterances. We experiment with four datasets: (i) the modified DSTC2 dataset, (ii) recently released code-mixed versions of the DSTC2 dataset in four languages, (iii) the Wizard-of-Oz style CAM676 dataset, and (iv) the Wizard-of-Oz style MultiWOZ dataset. On all four datasets, our method outperforms existing methods on a wide range of evaluation metrics.
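
The parser-free fallback mentioned above, a global word co-occurrence graph, can be built with a simple sliding window. A small sketch follows; the window size and frequency threshold are illustrative choices, not values from the paper.

```python
# Build co-occurrence edges to use as the utterance graph when no parser exists.
from collections import Counter

def cooccurrence_graph(sentences, window: int = 3, min_count: int = 2):
    counts = Counter()
    for tokens in sentences:
        for i in range(len(tokens)):
            for j in range(i + 1, min(i + window, len(tokens))):
                counts[frozenset((tokens[i], tokens[j]))] += 1
    # keep edges seen often enough; this adjacency feeds the GCN encoder
    return {pair for pair, c in counts.items() if c >= min_count}

edges = cooccurrence_graph([["book", "a", "table"], ["book", "a", "cab"]])
# {'book', 'a'} survives the threshold; rarer pairs are dropped
```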

Let’s Ask Again: Refine Network for Automatic Question Generation
Preksha Nema | Akash Kumar Mohankumar | Mitesh M. Khapra | Balaji Vasan Srinivasan | Balaraman Ravindran
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

In this work, we focus on the task of Automatic Question Generation (AQG) where, given a passage and an answer, the task is to generate the corresponding question. It is desired that the generated question should be (i) grammatically correct, (ii) answerable from the passage, and (iii) specific to the given answer. An analysis of existing AQG models shows that they produce questions which do not adhere to one or more of the above-mentioned qualities. In particular, the generated questions look like an incomplete draft of the desired question with a clear scope for refinement. To alleviate this shortcoming, we propose a method which tries to mimic the human process of generating questions by first creating an initial draft and then refining it. More specifically, we propose Refine Network (RefNet), which contains two decoders. The second decoder uses a dual attention network which pays attention to both (i) the original passage and (ii) the question (initial draft) generated by the first decoder. In effect, it refines the question generated by the first decoder, thereby making it more correct and complete. We evaluate RefNet on three datasets, viz., SQuAD, HOTPOT-QA, and DROP, and show that it outperforms existing state-of-the-art methods by 7-16% on all of these datasets. Lastly, we show that we can improve the quality of the second decoder on specific metrics, such as fluency and answerability, by explicitly rewarding revisions that improve on the corresponding metric during training. The code has been made publicly available.
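
The dual attention idea can be sketched as two attention reads per decoding step, one over passage states and one over the first decoder's draft-question states, whose context vectors jointly condition the second decoder. The dot-product attention and concatenation fusion below are illustrative simplifications, not the exact RefNet architecture.

```python
# Minimal sketch of a dual-attention decoding step, assuming PyTorch.
import torch
import torch.nn.functional as F

def attend(query: torch.Tensor, keys: torch.Tensor) -> torch.Tensor:
    # query: (d,); keys: (n, d) -> context vector (d,)
    scores = keys @ query                    # dot-product attention scores
    return F.softmax(scores, dim=0) @ keys

def dual_attention_step(dec_state, passage_states, draft_states):
    c_passage = attend(dec_state, passage_states)   # attend to the passage
    c_draft = attend(dec_state, draft_states)       # attend to the draft question
    # concatenated vector would feed the second decoder's RNN/output layer
    return torch.cat([dec_state, c_passage, c_draft], dim=-1)
```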

On Knowledge distillation from complex networks for response prediction
Siddhartha Arora | Mitesh M. Khapra | Harish G. Ramaswamy
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)

Recent advances in Question Answering have led to the development of very complex models which compute rich representations for query and documents by capturing all pairwise interactions between query and document words. This makes these models expensive in space and time, and in practice one has to restrict the length of the documents that can be fed to these models. Such models have also been recently employed for the task of predicting dialog responses from available background documents (e.g., the Holl-E dataset). However, here the documents are longer, thereby rendering these complex models infeasible except in select restricted settings. In order to overcome this, we use standard simple models which do not capture all pairwise interactions, but learn to emulate certain characteristics of a complex teacher network. Specifically, we first investigate the conicity of representations learned by a complex model and observe that it is significantly lower than that of simpler models. Based on this insight, we modify the simple architecture to mimic this characteristic. We go further by using knowledge distillation approaches, where the simple model acts as a student and learns to match the output from the complex teacher network. We experiment with the Holl-E dialog dataset and show that by mimicking characteristics and matching outputs from a teacher, even a simple network can give improved performance.
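
The "student matches teacher outputs" step is typically realized with the standard distillation loss of Hinton et al.; the sketch below shows that generic recipe rather than the paper's exact setup, with illustrative temperature and mixing weight.

```python
# Standard knowledge-distillation loss: soft teacher targets + hard labels.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T: float = 2.0, alpha: float = 0.5):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                                  # rescale gradients for temperature
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```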

2018

Generating Descriptions from Structured Data Using a Bifocal Attention Mechanism and Gated Orthogonalization
Preksha Nema | Shreyas Shetty | Parag Jain | Anirban Laha | Karthik Sankaranarayanan | Mitesh M. Khapra
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)

In this work, we focus on the task of generating natural language descriptions from a structured table of facts containing fields (such as nationality, occupation, etc.) and values (such as Indian, actor, director, etc.). One simple choice is to treat the table as a sequence of fields and values and then use a standard seq2seq model for this task. However, such a model is too generic and does not exploit task-specific characteristics. For example, while generating descriptions from a table, a human would attend to information at two levels: (i) the fields (macro level) and (ii) the values within the field (micro level). Further, a human would continue attending to a field for a few time steps until all the information from that field has been rendered, and then never return to this field (because there is nothing left to say about it). To capture this behavior we use (i) a fused bifocal attention mechanism which exploits and combines this micro- and macro-level information and (ii) a gated orthogonalization mechanism which tries to ensure that a field is remembered for a few time steps and then forgotten. We experiment with a recently released dataset which contains fact tables about people and their corresponding one-line biographical descriptions in English. In addition, we also introduce two similar datasets for French and German. Our experiments show that the proposed model gives a 21% relative improvement over a recently proposed state-of-the-art method and a 10% relative improvement over basic seq2seq models. The code and the datasets developed as a part of this work are publicly available on https://github.com/PrekshaNema25/StructuredData_To_Descriptions
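
A rough sketch of the two attention levels described above: micro attention over value tokens and macro attention over fields, fused into one context vector by re-weighting each value with its field's attention. The shapes and the multiplicative fusion rule are my illustrative simplifications, not the paper's exact formulation.

```python
# Toy bifocal attention: combine value-level and field-level attention.
import torch
import torch.nn.functional as F

def bifocal_context(dec_state, value_states, field_states, value_to_field):
    # dec_state: (d,); value_states: (n_vals, d); field_states: (n_fields, d)
    micro = F.softmax(value_states @ dec_state, dim=0)   # attention over values
    macro = F.softmax(field_states @ dec_state, dim=0)   # attention over fields
    # re-weight each value by the attention on the field it belongs to
    fused = micro * macro[value_to_field]
    fused = fused / fused.sum()
    return fused @ value_states                          # fused context vector

ctx = bifocal_context(torch.randn(8), torch.randn(5, 8), torch.randn(2, 8),
                      torch.tensor([0, 0, 1, 1, 1]))
```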

A Mixed Hierarchical Attention Based Encoder-Decoder Approach for Standard Table Summarization
Parag Jain | Anirban Laha | Karthik Sankaranarayanan | Preksha Nema | Mitesh M. Khapra | Shreyas Shetty
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)

Structured data summarization involves the generation of natural language summaries from structured input data. In this work, we consider summarizing structured data occurring in the form of tables, as they are prevalent across a wide variety of domains. We formulate the standard table summarization problem, which deals with tables conforming to a single predefined schema. To this end, we propose a mixed hierarchical attention-based encoder-decoder model which is able to leverage the structure in addition to the content of the tables. Our experiments on the publicly available weathergov dataset show an improvement of around 18 BLEU points (around 30%) over the current state of the art.

Towards Exploiting Background Knowledge for Building Conversation Systems
Nikita Moghe | Siddhartha Arora | Suman Banerjee | Mitesh M. Khapra
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

Existing dialog datasets contain a sequence of utterances and responses without any explicit background knowledge associated with them. This has resulted in the development of models which treat conversation as a sequence-to-sequence generation task (i.e., given a sequence of utterances, generate the response sequence). This is not only an overly simplistic view of conversation but is also emphatically different from the way humans converse: humans rely heavily on their background knowledge about the topic (as opposed to simply relying on the previous sequence of utterances). For example, it is common for humans to (involuntarily) produce utterances which are copied or suitably modified from background articles they have read about the topic. To facilitate the development of such natural conversation models which mimic the human process of conversing, we create a new dataset containing movie chats wherein each response is explicitly generated by copying and/or modifying sentences from unstructured background knowledge such as plots, comments and reviews about the movie. We establish baseline results on this dataset (90K utterances from 9K conversations) using three different models: (i) pure generation-based models which ignore the background knowledge, (ii) generation-based models which learn to copy information from the background knowledge when required, and (iii) span-prediction-based models which predict the appropriate response span in the background knowledge.

Towards a Better Metric for Evaluating Question Generation Systems
Preksha Nema | Mitesh M. Khapra
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

There has always been criticism of using n-gram-based similarity metrics, such as BLEU, NIST, etc., for evaluating the performance of NLG systems. However, these metrics continue to remain popular and have recently been used for evaluating the performance of systems which automatically generate questions from documents, knowledge graphs, images, etc. Given the rising interest in such automatic question generation (AQG) systems, it is important to objectively examine whether these metrics are suitable for this task. In particular, it is important to verify whether such metrics used for evaluating AQG systems focus on answerability of the generated question, by preferring questions which contain all relevant information such as question type (Wh-types), entities, relations, etc. In this work, we show that current automatic evaluation metrics based on n-gram similarity do not always correlate well with human judgments about answerability of a question. To alleviate this problem, and as a first step towards better evaluation metrics for AQG, we introduce a scoring function to capture answerability and show that when this scoring function is integrated with existing metrics, they correlate significantly better with human judgments. The scripts and data developed as a part of this work are made publicly available.
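
A schematic version of the proposed idea: score overlap separately on answerability-relevant elements (question words, named entities, content words) and interpolate with an existing n-gram metric. The weights and element lists below are illustrative placeholders; the paper tunes these against human judgments.

```python
# Toy answerability score mixed with an existing metric score.
WH_WORDS = {"what", "who", "when", "where", "why", "which", "how"}

def overlap(pred_tokens, ref_tokens):
    ref = set(ref_tokens)
    return sum(1 for t in pred_tokens if t in ref) / max(len(pred_tokens), 1)

def answerability(pred, ref, entities, w_qt=0.3, w_ent=0.4, w_content=0.3):
    p, r = pred.lower().split(), ref.lower().split()
    qt = overlap([t for t in p if t in WH_WORDS], [t for t in r if t in WH_WORDS])
    ent = overlap([t for t in p if t in entities], [t for t in r if t in entities])
    content = overlap(p, r)
    return w_qt * qt + w_ent * ent + w_content * content

def answerability_adjusted(metric_score, pred, ref, entities, delta=0.6):
    # interpolate the answerability term with, e.g., a BLEU score
    return delta * answerability(pred, ref, entities) + (1 - delta) * metric_score
```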

A Dataset for Building Code-Mixed Goal Oriented Conversation Systems
Suman Banerjee | Nikita Moghe | Siddhartha Arora | Mitesh M. Khapra
Proceedings of the 27th International Conference on Computational Linguistics

There is an increasing demand for goal-oriented conversation systems which can assist users in various day-to-day activities such as booking tickets, restaurant reservations, shopping, etc. Most of the existing datasets for building such conversation systems focus on monolingual conversations, and there is hardly any work on multilingual and/or code-mixed conversations. Such datasets and systems thus do not cater to the multilingual regions of the world, such as India, where it is very common for people to speak more than one language and seamlessly switch between them, resulting in code-mixed conversations. For example, a Hindi-speaking user looking to book a restaurant would typically ask, “Kya tum is restaurant mein ek table book karne mein meri help karoge?” (“Can you help me in booking a table at this restaurant?”). To facilitate the development of such code-mixed conversation models, we build a goal-oriented dialog dataset containing code-mixed conversations. Specifically, we take the text from the DSTC2 restaurant reservation dataset and create code-mixed versions of it in Hindi-English, Bengali-English, Gujarati-English and Tamil-English. We also establish initial baselines on this dataset using existing state-of-the-art models. This dataset, along with our baseline implementations, will be made publicly available for research purposes.

Leveraging Orthographic Similarity for Multilingual Neural Transliteration
Anoop Kunchukuttan | Mitesh Khapra | Gurneet Singh | Pushpak Bhattacharyya
Transactions of the Association for Computational Linguistics, Volume 6

We address the task of joint training of transliteration models for multiple language pairs (multilingual transliteration). This is an instance of multitask learning, where individual tasks (language pairs) benefit from sharing knowledge with related tasks. We focus on transliteration involving related tasks, i.e., languages sharing writing systems and phonetic properties (orthographically similar languages). We propose a modified neural encoder-decoder model that maximizes parameter sharing across language pairs in order to effectively leverage orthographic similarity. We show that multilingual transliteration significantly outperforms bilingual transliteration in different scenarios (an average increase of 58% across the variety of languages we experimented with). We also show that multilingual transliteration models can generalize well to languages/language pairs not encountered during training and hence perform well on the zero-shot transliteration task. We show that further improvements can be achieved by using phonetic feature input.
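
One common way to realize such parameter sharing is to train a single character-level encoder-decoder on all language pairs at once, prepending a target-language tag to each source word so the shared parameters know which script to emit. The tagging convention below is a widely used assumption, not taken from the paper.

```python
# Illustrative data preparation for a shared multilingual transliteration model.
def make_multilingual_example(src_word, tgt_word, tgt_lang):
    src = [f"<2{tgt_lang}>"] + list(src_word)   # e.g. <2kn> requests Kannada output
    tgt = list(tgt_word)
    return src, tgt

pairs = [("khapra", "खपरा", "hi"), ("khapra", "ಖಪರಾ", "kn")]
corpus = [make_multilingual_example(s, t, l) for s, t, l in pairs]
# one encoder-decoder is then trained on the pooled, tagged corpus
```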

DuoRC: Towards Complex Language Understanding with Paraphrased Reading Comprehension
Amrita Saha | Rahul Aralikatte | Mitesh M. Khapra | Karthik Sankaranarayanan
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

We propose DuoRC, a novel dataset for Reading Comprehension (RC) that motivates several new challenges for neural approaches in language understanding beyond those offered by existing RC datasets. DuoRC contains 186,089 unique question-answer pairs created from a collection of 7680 pairs of movie plots, where each pair in the collection reflects two versions of the same movie - one from Wikipedia and the other from IMDb - written by two different authors. We asked crowdsourced workers to create questions from one version of the plot and a different set of workers to extract or synthesize answers from the other version. This unique characteristic of DuoRC, where questions and answers are created from different versions of a document narrating the same underlying story, ensures by design that there is very little lexical overlap between the questions created from one version and the segments containing the answer in the other version. Further, since the two versions have different levels of plot detail, narration style, vocabulary, etc., answering questions from the second version requires deeper language understanding and the incorporation of external background knowledge. Additionally, the narrative style of passages arising from movie plots (as opposed to the typical descriptive passages in existing datasets) exhibits the need to perform complex reasoning over events across multiple sentences. Indeed, we observe that state-of-the-art neural RC models which have achieved near-human performance on the SQuAD dataset exhibit very poor performance on DuoRC, even when coupled with traditional NLP techniques to address the challenges it presents (F1 score of 37.42% on DuoRC vs. 86% on SQuAD). This opens up several interesting research avenues wherein DuoRC could complement other RC datasets to explore novel neural approaches for studying language understanding.

2017

Diversity driven attention model for query-based abstractive summarization
Preksha Nema | Mitesh M. Khapra | Anirban Laha | Balaraman Ravindran
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Abstractive summarization aims to generate a shorter version of a document covering all the salient points in a compact and coherent fashion. Query-based summarization, on the other hand, highlights those points that are relevant in the context of a given query. The encode-attend-decode paradigm has achieved notable success in machine translation, extractive summarization, dialog systems, etc., but it suffers from the drawback of generating repeated phrases. In this work, we propose a model for the query-based summarization task based on the encode-attend-decode paradigm with two key additions: (i) a query attention model (in addition to the document attention model) which learns to focus on different portions of the query at different time steps (instead of using a static representation for the query) and (ii) a new diversity-based attention model which aims to alleviate the problem of repeating phrases in the summary. To enable the testing of this model, we introduce a new query-based summarization dataset built on Debatepedia. Our experiments show that with these two additions the proposed model clearly outperforms vanilla encode-attend-decode models, with a gain of 28% (absolute) in ROUGE-L scores.
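
One flavour of diversity-based attention makes each new context vector orthogonal to the previous one, so the decoder cannot keep receiving (and therefore repeating) the same information. A minimal sketch of that orthogonalization step, assuming a non-zero previous context vector:

```python
# Subtract from c_t its component along the previous context vector.
import torch

def diversify(c_t: torch.Tensor, c_prev: torch.Tensor) -> torch.Tensor:
    # projection of c_t onto c_prev, then removed
    proj = (c_t @ c_prev) / (c_prev @ c_prev) * c_prev
    return c_t - proj   # orthogonal to c_prev, fed to the decoder instead of c_t
```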

Generating Natural Language Question-Answer Pairs from a Knowledge Graph Using a RNN Based Question Generation Model
Sathish Reddy | Dinesh Raghu | Mitesh M. Khapra | Sachindra Joshi
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers

In recent years, knowledge graphs such as Freebase, which capture facts about entities and the relationships between them, have been used actively for answering factoid questions. In this paper, we explore the problem of automatically generating question-answer pairs from a given knowledge graph. The generated question-answer (QA) pairs can be used in several downstream applications; for example, they could be used for training better QA systems. To generate such QA pairs, we first extract a set of keywords from the entities and relationships expressed in a triple stored in the knowledge graph. From each such set, we use a subset of keywords to generate a natural language question that has a unique answer. We treat this subset of keywords as a sequence and propose a sequence-to-sequence model using an RNN to generate a natural language question from it. Our RNN-based model generates QA pairs with an accuracy of 33.61 percent and performs 110.47 percent (relative) better than a state-of-the-art template-based method for generating natural language questions from keywords. We also perform an extrinsic evaluation by using the generated QA pairs to train a QA system and observe that the F1-score of the QA system improves by 5.5 percent (relative) when using automatically generated QA pairs in addition to the manually generated QA pairs available for training.

2016

Bridge Correlational Neural Networks for Multilingual Multimodal Representation Learning
Janarthanan Rajendran | Mitesh M. Khapra | Sarath Chandar | Balaraman Ravindran
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Multilingual Multimodal Language Processing Using Neural Networks
Mitesh M Khapra | Sarath Chandar
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Tutorial Abstracts

Statistical Machine Translation between Related Languages
Pushpak Bhattacharyya | Mitesh M. Khapra | Anoop Kunchukuttan
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Tutorial Abstracts

Substring-based unsupervised transliteration with phonetic and contextual knowledge
Anoop Kunchukuttan | Pushpak Bhattacharyya | Mitesh M. Khapra
Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning

A Correlational Encoder Decoder Architecture for Pivot Based Sequence Generation
Amrita Saha | Mitesh M. Khapra | Sarath Chandar | Janarthanan Rajendran | Kyunghyun Cho
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

Interlingua-based Machine Translation (MT) aims to encode multiple languages into a common linguistic representation and then decode sentences in multiple target languages from this representation. In this work we explore this idea in the context of neural encoder-decoder architectures, albeit on a smaller scale and without MT as the end goal. Specifically, we consider the case of three languages or modalities X, Z and Y wherein we are interested in generating sequences in Y starting from information available in X. However, there is no parallel training data available between X and Y, but training data is available between X & Z and Z & Y (as is often the case in many real-world applications). Z thus acts as a pivot/bridge. An obvious solution, which is perhaps less elegant but works very well in practice, is to train a two-stage model which first converts from X to Z and then from Z to Y. Instead, we explore an interlingua-inspired solution which jointly learns to (i) encode X and Z to a common representation and (ii) decode Y from this common representation. We evaluate our model on two tasks: (i) bridge transliteration and (ii) bridge captioning. We report promising results in both these applications and believe that this is a step in the right direction towards truly interlingua-inspired encoder-decoder architectures.
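
The "encode X and Z to a common representation" part is typically enforced with a correlation term, as in Correlational Neural Networks, that rewards the X- and Z-encodings of aligned pairs for being highly correlated while the decoder loss for Y is trained jointly. A sketch of such a correlation loss, with the usual centering and epsilon:

```python
# Negative per-dimension correlation between aligned X and Z encodings.
import torch

def correlation_loss(hx: torch.Tensor, hz: torch.Tensor, eps: float = 1e-8):
    # hx, hz: (batch, dim) encodings of aligned X/Z inputs
    hx = hx - hx.mean(dim=0)
    hz = hz - hz.mean(dim=0)
    corr = (hx * hz).sum(dim=0) / (hx.norm(dim=0) * hz.norm(dim=0) + eps)
    return -corr.sum()   # minimizing this maximizes correlation
```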

2015

Show Me Your Evidence - an Automatic Method for Context Dependent Evidence Detection
Ruty Rinott | Lena Dankin | Carlos Alzate Perez | Mitesh M. Khapra | Ehud Aharoni | Noam Slonim
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

2014

Claims on demand – an initial demonstration of a system for automatic detection and polarity identification of context dependent claims in massive corpora
Noam Slonim | Ehud Aharoni | Carlos Alzate | Roy Bar-Haim | Yonatan Bilu | Lena Dankin | Iris Eiron | Daniel Hershcovich | Shay Hummel | Mitesh Khapra | Tamar Lavee | Ran Levy | Paul Matchen | Anatoly Polnarov | Vikas Raykar | Ruty Rinott | Amrita Saha | Naama Zwerdling | David Konopnicki | Dan Gutfreund
Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: System Demonstrations

When Transliteration Met Crowdsourcing : An Empirical Study of Transliteration via Crowdsourcing using Efficient, Non-redundant and Fair Quality Control
Mitesh M. Khapra | Ananthakrishnan Ramanathan | Anoop Kunchukuttan | Karthik Visweswariah | Pushpak Bhattacharyya
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

Sufficient parallel transliteration pairs are needed for training state-of-the-art transliteration engines. Given the cost involved, it is often infeasible to collect such data using experts. Crowdsourcing could be a cheaper alternative, provided that a good quality control (QC) mechanism can be devised for this task. Most QC mechanisms employed in crowdsourcing are aggressive (unfair to workers) and expensive (unfair to requesters). In contrast, we propose a low-cost QC mechanism which is fair to both workers and requesters. At the heart of our approach lies a rule-based Transliteration Equivalence approach which takes as input a list of vowels in the two languages and a mapping of the consonants in the two languages. We empirically show that our approach outperforms other popular QC mechanisms (viz., consensus and sampling) on two vital parameters: (i) fairness to requesters (lower cost per correct transliteration) and (ii) fairness to workers (lower rate of rejecting correct answers). Further, as an extrinsic evaluation, we use the standard NEWS 2010 test set and show that such quality-controlled crowdsourced data compares well to expert data when used for training a transliteration engine.
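
A toy rendering of the rule-based Transliteration Equivalence check described above: strip vowels in each language, map source consonants to target consonants, and accept the worker's answer if the consonant skeletons match. The vowel and consonant tables here are tiny illustrative fragments, not the paper's resources.

```python
# Toy transliteration-equivalence QC check (Latin source, Tamil answer).
LATIN_VOWELS = set("aeiou")
TAMIL_VOWELS = {"அ", "இ", "உ", "ா", "ி"}          # illustrative fragment
CONSONANT_MAP = {"k": "க", "p": "ப", "r": "ர"}    # Latin -> Tamil, fragment

def skeleton(word, vowels):
    # drop vowels, keeping the consonant skeleton
    return [ch for ch in word if ch not in vowels]

def equivalent(src_word, worker_answer):
    mapped = [CONSONANT_MAP.get(c) for c in skeleton(src_word.lower(), LATIN_VOWELS)]
    return mapped == skeleton(worker_answer, TAMIL_VOWELS)

print(equivalent("kapra", "கபரா"))  # True under this toy table
```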

2013

Cut the noise: Mutually reinforcing reordering and alignments for improved machine translation
Karthik Visweswariah | Mitesh M. Khapra | Ananthakrishnan Ramanathan
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Improving reordering performance using higher order and structural features
Mitesh M. Khapra | Ananthakrishnan Ramanathan | Karthik Visweswariah
Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

2012

Experiences in Resource Generation for Machine Translation through Crowdsourcing
Anoop Kunchukuttan | Shourya Roy | Pratik Patel | Kushal Ladha | Somya Gupta | Mitesh M. Khapra | Pushpak Bhattacharyya
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

The logistics of collecting resources for Machine Translation (MT) has always been a cause of concern for some of the resource-deprived languages of the world. The recent advent of crowdsourcing platforms provides an opportunity to explore the large-scale generation of resources for MT. However, before venturing into this mode of resource collection, it is important to understand the various factors, such as task design, crowd motivation, quality control, etc., which can influence the success of such a crowdsourcing venture. In this paper, we present our experiences based on a series of experiments performed. This is an attempt to provide a holistic view of the different facets of translation crowdsourcing and to identify key challenges which need to be addressed for building a practical crowdsourcing solution for MT.

Proceedings of the Workshop on Reordering for Statistical Machine Translation
Karthik Visweswariah | Ananthakrishnan Ramanathan | Mitesh M. Khapra
Proceedings of the Workshop on Reordering for Statistical Machine Translation

Whitepaper for Shared Task on Learning Reordering from Word Alignments at RSMT 2012
Mitesh M. Khapra | Ananthakrishnan Ramanathan | Karthik Visweswariah
Proceedings of the Workshop on Reordering for Statistical Machine Translation

Report of the Shared Task on Learning Reordering from Word Alignments at RSMT 2012
Mitesh M. Khapra | Ananthakrishnan Ramanathan | Karthik Visweswariah
Proceedings of the Workshop on Reordering for Statistical Machine Translation

I Can Sense It: a Comprehensive Online System for WSD
Salil Joshi | Mitesh M Khapra | Pushpak Bhattacharyya
Proceedings of COLING 2012: Demonstration Papers

2011

It Takes Two to Tango: A Bilingual Unsupervised Approach for Estimating Sense Distributions using Expectation Maximization
Mitesh M. Khapra | Salil Joshi | Pushpak Bhattacharyya
Proceedings of 5th International Joint Conference on Natural Language Processing

Together We Can: Bilingual Bootstrapping for WSD
Mitesh M. Khapra | Salil Joshi | Arindam Chatterjee | Pushpak Bhattacharyya
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

2010

Everybody loves a rich cousin: An empirical study of transliteration through bridge languages
Mitesh M. Khapra | A Kumaran | Pushpak Bhattacharyya
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics

Improving the Multilingual User Experience of Wikipedia Using Cross-Language Name Search
Raghavendra Udupa | Mitesh M. Khapra
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics

Report of NEWS 2010 Transliteration Mining Shared Task
A Kumaran | Mitesh M. Khapra | Haizhou Li
Proceedings of the 2010 Named Entities Workshop

Whitepaper of NEWS 2010 Shared Task on Transliteration Mining
A Kumaran | Mitesh M. Khapra | Haizhou Li
Proceedings of the 2010 Named Entities Workshop

More Languages, More MAP?: A Study of Multiple Assisting Languages in Multilingual PRF
Vishal Vachhani | Manoj Chinnakotla | Mitesh Khapra | Pushpak Bhattacharyya
Proceedings of the 4th Workshop on Cross Lingual Information Access

Value for Money: Balancing Annotation Effort, Lexicon Building and Accuracy for Multilingual WSD
Mitesh Khapra | Saurabh Sohoney | Anup Kulkarni | Pushpak Bhattacharyya
Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010)

Verbs are where all the action lies: Experiences of Shallow Parsing of a Morphologically Rich Language
Harshada Gune | Mugdha Bapat | Mitesh M. Khapra | Pushpak Bhattacharyya
Coling 2010: Posters

All Words Domain Adapted WSD: Finding a Middle Ground between Supervision and Unsupervision
Mitesh Khapra | Anup Kulkarni | Saurabh Sohoney | Pushpak Bhattacharyya
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics

OWNS: Cross-lingual Word Sense Disambiguation Using Weighted Overlap Counts and Wordnet Based Similarity Measures
Lipta Mahapatra | Meera Mohan | Mitesh Khapra | Pushpak Bhattacharyya
Proceedings of the 5th International Workshop on Semantic Evaluation

CFILT: Resource Conscious Approaches for All-Words Domain Specific WSD
Anup Kulkarni | Mitesh Khapra | Saurabh Sohoney | Pushpak Bhattacharyya
Proceedings of the 5th International Workshop on Semantic Evaluation

2009

Projecting Parameters for Multilingual Word Sense Disambiguation
Mitesh M. Khapra | Sapan Shah | Piyush Kedia | Pushpak Bhattacharyya
Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing

Improving Transliteration Accuracy Using Word-Origin Detection and Lexicon Lookup
Mitesh Khapra | Pushpak Bhattacharyya
Proceedings of the 2009 Named Entities Workshop: Shared Task on Transliteration (NEWS 2009)