Generative AI has invaded our places of work and learning with the promise of increasing productivity.
However, many generative AI systems are built on large language models (LLMs), which act as next-word predictors
based on probabilistic modeling. This leads to numerous challenges, especially ambiguity.
This proposal addresses the research question: How can we reduce ambiguity in AI-generated text?
Specifically, it seeks to 1) identify ways to algorithmically detect and flag ambiguity, 2)
explore how to grade levels of ambiguity, and 3) explore ways in which ambiguity could be reduced or
managed.
Once ambiguity is identified, we intend to use an LLM application to generate improved alternatives. This
project will help improve the quality of human interactions with AI applications such as chatbots.
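To make the intended pipeline concrete, the sketch below shows one minimal, illustrative form it could take: a toy heuristic that flags sentences containing vague referents or quantifiers, followed by construction of a prompt asking an LLM for less ambiguous alternatives. The word list, scoring rule, and prompt wording are assumptions made for this sketch, not the proposal's actual method.

import re

# Illustrative assumption: a small list of words that often signal vague
# reference or quantity. The real project would study more principled detectors.
VAGUE_TERMS = {"it", "this", "that", "they", "some", "several", "soon", "various"}


def flag_ambiguity(text: str) -> list[str]:
    """Return sentences that contain a potentially vague referent or quantifier."""
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        tokens = {t.lower() for t in re.findall(r"[A-Za-z']+", sentence)}
        if tokens & VAGUE_TERMS:
            flagged.append(sentence)
    return flagged


def rewrite_prompt(sentence: str) -> str:
    """Build a prompt asking an LLM for clearer alternatives (hypothetical wording)."""
    return (
        "Rewrite the following sentence so that every pronoun and quantity has an "
        f"explicit referent, and offer two alternative phrasings:\n{sentence}"
    )


if __name__ == "__main__":
    sample = "The model updates it when several users complain. This improves results."
    for s in flag_ambiguity(sample):
        print("FLAGGED:", s)
        print("PROMPT:", rewrite_prompt(s))

A rule-based flagger of this kind is only a starting point; the proposal's first aim is precisely to investigate which algorithmic signals of ambiguity are reliable enough to drive the LLM rewriting step.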