Animesh Nighojkar

Ph.D. Student (Computer Science)

Mutual Implication as a Measure of Textual Equivalence


Journal article


Animesh Nighojkar, John Licato
The Florida AI Research Society, 2021

Semantic Scholar DBLP DOI
Cite

APA
Nighojkar, A., & Licato, J. (2021). Mutual Implication as a Measure of Textual Equivalence. The Florida AI Research Society.


Chicago/Turabian
Nighojkar, Animesh, and John Licato. “Mutual Implication as a Measure of Textual Equivalence.” The Florida AI Research Society (2021).


MLA
Nighojkar, Animesh, and John Licato. “Mutual Implication as a Measure of Textual Equivalence.” The Florida AI Research Society, 2021.


BibTeX

@article{animesh2021a,
  title = {Mutual Implication as a Measure of Textual Equivalence},
  year = {2021},
  journal = {The Florida AI Research Society},
  author = {Nighojkar, Animesh and Licato, John}
}

Abstract

Semantic Textual Similarity (STS) and paraphrase detection are two NLP tasks that have a high focus on the meaning of sentences, and current research in both relies heavily on comparing fragments of text. Little to no work has been done in studying inference-centric approaches to solve these tasks. We study the relation between existing work and what we call mutual implication (MI), a binary relationship between two sentences that holds when they textually entail each other. MI thus shifts the focus of STS and paraphrase detection to understanding the meaning of a sentence in terms of its inferential properties. We study the comparison between MI, paraphrasing, and STS work. We then argue that MI should be considered a complementary evaluation metric for advancing work in areas as diverse as machine translation, natural language inference, etc. Finally, we study the limitations of MI and discuss possibilities for overcoming them.
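
The abstract defines MI as a binary relation that holds when two sentences textually entail each other. A minimal sketch of checking this with an off-the-shelf NLI model is shown below; the specific model (roberta-large-mnli) and the 0.5 probability threshold are illustrative assumptions, not details taken from the paper.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Illustrative choice of NLI model; the paper does not prescribe this one.
MODEL = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

def entails(premise: str, hypothesis: str, threshold: float = 0.5) -> bool:
    """True if the model's entailment probability exceeds the (assumed) threshold."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1).squeeze(0)
    entail_id = model.config.label2id["ENTAILMENT"]
    return probs[entail_id].item() >= threshold

def mutually_implies(a: str, b: str) -> bool:
    """Mutual implication: each sentence textually entails the other."""
    return entails(a, b) and entails(b, a)

# Example usage
print(mutually_implies("A man is playing a guitar.", "Someone plays a guitar."))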