Docugami Science Team to Present Research on Agentic Workflows at BayLearn 2024
Docugami’s Science Team is proud to be presenting at BayLearn 2024, a distinguished machine learning symposium, on our research into alternative approaches for agentic workflows. Our novel approach uses small language models with limited human supervision to achieve high-quality reasoning over complex documents, without the need for costly proprietary LLMs or heavy prompt engineering, achieving results comparable to state-of-the-art LLMs like ChatGPT.
As the scientific community works to make AI agents more capable and sophisticated, there are increasing concerns over inherent bottlenecks, such as reliance on costly proprietary LLMs, the privacy and security risks of sharing confidential data with LLM providers like OpenAI, and the challenges in scaling human efforts for finely tuned prompt engineering.
BayLearn is one of the premier AI science gatherings in the Bay Area, bringing together hundreds of top AI scientists from Silicon Valley, other US technology centers, and around the world to exchange ideas. At the BayLearn 2024 conference on October 10 in Cupertino, members of Docugami’s Science Team will present our research showing that small language models can achieve high levels of reasoning on complex documents using weak supervision, without the need for expensive large model training or detailed human prompt intervention.
Our research findings have important implications.
- If open-source and low-cost small language models can achieve high levels of accuracy on complex tasks using self-annotation and limited human supervision, it could reduce costs and reliance on proprietary LLM providers like OpenAI, and make AI capabilities available to a much broader cross-section of potential users.
- Furthermore, by avoiding proprietary LLMs from providers like OpenAI, our approach protects the security and privacy of user data.
- In addition, small language models can be run locally, allowing for greater speed and lower cost, as well as enhanced privacy and security.
Our approach relies on a progressive refinement learning paradigm. Through self-annotation and fine-tuning, smaller language models can adapt their own behavior without requiring either an expensive proprietary large language model teacher or significant human intervention through hand-crafted prompt engineering.
We start with a small number of demonstrations and use the model’s in-context learning abilities to annotate tool-augmented reasoning trajectories. Then, in a weakly supervised manner, the model is iteratively updated with those trajectories, retaining the most accurate reasoning.
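To make the loop concrete, the following is a minimal Python sketch of this progressive refinement idea. The helper callables (`annotate`, `weak_label`, `fine_tune`) and the `Trajectory` structure are illustrative assumptions, not Docugami’s actual implementation; they stand in for whatever annotation, answer-checking, and fine-tuning machinery a practitioner would plug in.

```python
# Minimal sketch of a progressive-refinement loop: self-annotation with
# in-context demonstrations, weak-supervision filtering, and iterative
# fine-tuning of a small language model. All helpers are hypothetical.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Trajectory:
    question: str
    reasoning_steps: List[str]  # tool-augmented reasoning (e.g., table lookups, arithmetic calls)
    answer: str


def progressive_refinement(
    model,                                    # small language model (e.g., ~3B parameters)
    seed_demonstrations: List[Trajectory],    # handful of human-written demonstrations
    questions: List[str],
    annotate: Callable,                       # uses in-context learning to produce a trajectory
    weak_label: Callable[[str, str], bool],   # weak supervision signal, e.g., final-answer check
    fine_tune: Callable,                      # updates the model on accepted trajectories
    num_rounds: int = 3,
):
    demonstrations = list(seed_demonstrations)
    for _ in range(num_rounds):
        accepted: List[Trajectory] = []
        for q in questions:
            # Self-annotate: the model writes its own tool-augmented reasoning,
            # conditioned on the current demonstration pool (in-context learning).
            traj = annotate(model, demonstrations, q)
            # Weak supervision: keep only trajectories whose final answer checks out.
            if weak_label(q, traj.answer):
                accepted.append(traj)
        # Fine-tune the small model on its own accepted trajectories, then let
        # the improved model re-annotate in the next round.
        model = fine_tune(model, accepted)
        demonstrations = seed_demonstrations + accepted
    return model
```

The key design choice this sketch illustrates is that the only supervision is the small seed set of demonstrations plus a weak correctness signal; no proprietary LLM teacher or per-example prompt engineering appears anywhere in the loop.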
In mathematical tabular reasoning experiments with two large datasets, we demonstrated that our small language model with weak supervision significantly improves reasoning performance. Indeed, on one dataset, our small-model approach attains results comparable to ChatGPT, with less than a 0.4% difference, using only a 3-billion-parameter model.
We look forward to presenting our research on AI agents to the distinguished scientific community at BayLearn 2024. Our research indicates that there is enormous potential for using small language models with limited human supervision to achieve high-quality reasoning over complex documents, comparable to state-of-the-art LLMs like ChatGPT. We will continue to research ways to achieve greater precision without reliance on heavy supervision and costly proprietary LLMs.