Short paper · Open access

Building Reactive Large Language Model Pipelines with Motion

Published: 09 June 2024

Abstract

Large language models (LLMs) rely on prompts with detailed and informative context to produce high-quality responses at scale. One way to develop such prompts is through reactive LLM pipelines, which incorporate new information (e.g., end-user feedback and summaries of historical inputs and outputs) back into prompts to improve future response quality. We present Motion, a Python framework to build and execute reactive LLM pipelines. Motion uses a weak consistency model to maintain prompt versions, trading off freshness for end-user latency. We demonstrate Motion with an e-commerce application that suggests apparel to wear for any event, allowing attendees to indirectly influence prompts with their queries. Attendees can interact with the demo as end-users or modify the application as developers, adding new information sources for reactive prompts.
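
To make the idea concrete, the sketch below illustrates the reactive-prompt pattern the abstract describes, in Python (the framework's language). It is an illustrative assumption, not Motion's actual API: the names ReactivePrompt, call_llm, and suggest_outfit are hypothetical, and the model call is stubbed out. End-user requests are served with whatever summary of past queries is currently materialized (a weakly consistent read), while each new query is folded back into the prompt asynchronously, off the request path.

import threading
from collections import deque


def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; a deployment would issue an API or local-model request here."""
    return f"[model response to: {prompt[:60]}...]"


class ReactivePrompt:
    """Maintains a prompt 'view' that is refreshed asynchronously from new
    information (here, recent end-user queries)."""

    def __init__(self, base_instruction: str, max_history: int = 50):
        self.base_instruction = base_instruction
        self.history = deque(maxlen=max_history)  # raw queries awaiting summarization
        self.context_summary = ""                 # the maintained, possibly stale, view
        self._lock = threading.Lock()

    def render(self, user_query: str) -> str:
        # Reads use whatever summary is currently materialized (weakly consistent).
        with self._lock:
            summary = self.context_summary
        return (
            f"{self.base_instruction}\n"
            f"Trends from recent user queries: {summary or 'none yet'}\n"
            f"User request: {user_query}"
        )

    def record(self, user_query: str) -> None:
        # Writes are cheap: enqueue the query and refresh the summary off the request path.
        self.history.append(user_query)
        threading.Thread(target=self._refresh_summary, daemon=True).start()

    def _refresh_summary(self) -> None:
        snapshot = list(self.history)
        new_summary = call_llm("Summarize these apparel queries: " + "; ".join(snapshot))
        with self._lock:
            self.context_summary = new_summary


prompt = ReactivePrompt("You suggest apparel to wear for an event.")


def suggest_outfit(user_query: str) -> str:
    response = call_llm(prompt.render(user_query))  # served with the current (stale-OK) prompt
    prompt.record(user_query)                       # feeds future prompt versions
    return response


print(suggest_outfit("What should I wear to a June wedding in Lisbon?"))

The design choice mirrors the trade-off stated in the abstract: serving a slightly stale summary keeps end-user latency low, at the cost of prompt freshness.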

      Published In

SIGMOD/PODS '24: Companion of the 2024 International Conference on Management of Data
June 2024, 694 pages
ISBN: 9798400704222
DOI: 10.1145/3626246
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery, New York, NY, United States

      Publication History

      Published: 09 June 2024

      Author Tags

      1. incremental view maintenance
      2. large language models

      Qualifiers

• Short paper

      Conference

      SIGMOD/PODS '24
      Acceptance Rates

      Overall Acceptance Rate 785 of 4,003 submissions, 20%
