"Encode-Attend-Refine Decode: Enriching Encoder-Decoder Models with Better Content Representation for Natural Language Generation tasks."

Date: 30th Sep 2020

Time: 03:00 PM

Venue: meet.google.com/rvy-aing-pfy

PAST EVENT

Details

Thesis Abstract: Sequence-to-sequence (encode-attend-decode) models have become ubiquitous across a wide variety of Natural Language Generation (NLG) tasks. Although generation quality has improved significantly across tasks, several limitations remain. For instance: i) task-agnostic problems, such as repeating phrases and hallucinations, still persist; ii) task-specific characteristics, such as conciseness in abstractive summarization, answerability in question generation, and better alignment in machine translation, are not explicitly modeled or rewarded in vanilla sequence-to-sequence models. In this thesis, we integrate a refine module into the vanilla sequence-to-sequence model, yielding an encode-attend-refine-decode architecture. The refine module improves upon the context representations learnt by the attend module, i) to mitigate the task-agnostic repeating-phrase problem, and ii) to enrich task-specific characteristics in the model.
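To make the pipeline concrete, here is a minimal NumPy sketch of where a refine step sits between attention and decoding. It assumes dot-product attention and a hypothetical learned gate that down-weights content already covered by the previous context vector; the thesis's actual refine formulation may differ, and all weight names (`W`, `gate_W`) are illustrative.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(enc_states, dec_state, W):
    # Standard dot-product attention over encoder states.
    scores = enc_states @ (W @ dec_state)   # one score per encoder state
    alphas = softmax(scores)                # attention distribution
    return alphas @ enc_states              # context vector (weighted sum)

def refine(context, prev_context, gate_W):
    # Hypothetical refine step (illustrative, not the thesis's exact
    # formulation): a sigmoid gate compares the new context with the
    # previous one and subtracts already-covered content, discouraging
    # the decoder from attending to the same phrases repeatedly.
    g = 1.0 / (1.0 + np.exp(-(gate_W @ np.concatenate([context, prev_context]))))
    return g * context - (1 - g) * prev_context

rng = np.random.default_rng(0)
enc = rng.normal(size=(5, 8))       # 5 encoder states, hidden dim 8
dec = rng.normal(size=8)            # current decoder state
W = rng.normal(size=(8, 8))         # attention projection (illustrative)
gate_W = rng.normal(size=(8, 16))   # refine gate weights (illustrative)

c = attend(enc, dec, W)
c_ref = refine(c, np.zeros(8), gate_W)  # refined context fed to the decoder
print(c_ref.shape)
```

In a full model the weights would be trained end to end and `prev_context` would carry the running history of attended content across decoding steps.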

Speakers

Ms. Preksha Nema (CS15D201)

Computer Science & Engineering