Making Your LLM Yours: Enhancing LLMs with External Signals


Big foundational models like GPT-4, Gemini, Claude, and Llama (aka large language models, or “LLMs”) are awesome, but they are not experts in your business. They are also available to all of your competitors, so creating a competitive advantage requires you to train your subject matter experts to get the most out of the AI and to augment your LLMs so they are as relevant to your business as possible. One of the most effective ways to augment an LLM is with the addition of external signals – the various types of real-world data and information that can be used to influence and improve the performance and relevance of LLMs. Let’s take a high-level look at how external signals can transform LLMs, making them more relevant, more responsive, and ultimately more useful.

Can’t I Just Put All My Data Into The LLM?

How great would it be if you could just throw all of your data into one big database and have the LLM sort it out? That may happen one day, but it’s not in the realm of current computer science. For now, before we can use signals to make our LLMs uniquely ours, we need to go through a painstaking process that starts with identifying what signals we need (and which signals we need to purchase or partner for). Then we’ll need to classify and weight each signal (this is about as hard as it sounds). And after all of that is done, we will need to technically integrate the signals into our LLM workflows.

Sourcing External Signals

External signals can be sourced from various channels, depending on the application of the LLM:

  • Internal Data Repositories: For businesses, internal data such as customer behavior analytics, sales data, and support tickets can serve as valuable signals.
  • APIs: Many services offer APIs that provide real-time data streams, such as news updates, financial reports, social media posts, and more (a quick sketch of pulling such a feed follows this list).
  • Public Datasets: For broader applications, publicly available datasets that update regularly, such as data available from data.gov, can also be tapped as a source of signals.
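
To make this concrete, here is a minimal sketch of what pulling signals from an API might look like and normalizing them for the steps that follow. The endpoint, parameter names, and response fields are placeholders rather than any particular provider’s real interface; your own sources will differ.

```python
import datetime

import requests  # third-party HTTP client (pip install requests)


def fetch_news_signals(api_url: str, api_key: str) -> list:
    """Pull recent headlines from a hypothetical news API and normalize
    them into a common 'signal' shape for classification and weighting."""
    response = requests.get(
        api_url,
        params={"apiKey": api_key, "pageSize": 20},  # placeholder parameter names
        timeout=10,
    )
    response.raise_for_status()
    signals = []
    for article in response.json().get("articles", []):  # assumed response shape
        signals.append({
            "content": f"{article.get('title', '')} {article.get('description') or ''}".strip(),
            "source": article.get("source", {}).get("name", "unknown"),
            "origin": "news_api",
            "retrieved_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })
    return signals
```

Internal repositories and public datasets would feed the same normalized shape, just through a database query or file download instead of an HTTP call.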

Classification of Signals

Once external signals are collected, the next step is to classify them. This process involves sorting and tagging each signal with relevant metadata that accurately describes its content, origin, urgency, and contextual relevance. The metadata serves as a foundation for strategically integrating these signals into the LLM, ensuring that their influence on the model is both meaningful and appropriate.

Content Analysis

  • What’s in the signal? The system will use natural language processing (NLP) techniques to analyze the content of each signal. This includes extracting key themes, sentiments, and facts from the data.
  • Categories and Tags: Signals are categorized into clearly defined types such as political events, economic data, technological updates, or cultural news. Each category is then tagged with keywords that help in further refining the model’s understanding and response strategy (see the sketch after this list).
  • Assessment of Source Credibility: The reliability of each signal’s source is assessed to ensure its trustworthiness. This involves checking the source’s historical accuracy, reputation, and expertise in the subject matter.
  • Impact on Weighting: The credibility of the source significantly influences how much weight a signal is given in the decision-making process of the LLM. Highly credible sources lead to a stronger influence on the model’s outputs, while signals from less reliable sources are down-weighted to minimize their impact.
  • Contextual Relevance: The relevance of a signal is determined based on the current context in which the LLM is operating. For instance, a signal containing the latest technological advancements is highly relevant for a tech-focused discussion.
  • Timeliness: Signals are evaluated on their freshness and immediacy. Recent signals are usually more pertinent and are thus prioritized over older information, unless historical context is specifically relevant.
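
Here is a small sketch of what a classified signal might look like in practice. The keyword lists and credibility scores are illustrative placeholders; a real pipeline would use proper NLP models for content analysis and your own vetting process for source credibility.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative numbers only -- real credibility scores come from your own vetting process.
SOURCE_CREDIBILITY = {
    "major-newswire.example": 0.95,
    "internal-wiki": 0.90,
    "anonymous-forum.example": 0.40,
}

# Keyword lists stand in for a real NLP classifier (topic models, embeddings, etc.).
CATEGORY_KEYWORDS = {
    "economic": ["inflation", "earnings", "gdp", "interest rate"],
    "technological": ["chip", "model release", "open source", "api"],
    "political": ["election", "regulation", "policy"],
}


@dataclass
class ClassifiedSignal:
    content: str
    source: str
    category: str
    tags: list
    credibility: float
    received_at: datetime


def classify_signal(content: str, source: str) -> ClassifiedSignal:
    """Tag a raw signal with a category, keyword tags, and a source-credibility score."""
    text = content.lower()
    category, tags = "general", []
    for cat, keywords in CATEGORY_KEYWORDS.items():
        hits = [kw for kw in keywords if kw in text]
        if hits:
            category, tags = cat, hits
            break
    return ClassifiedSignal(
        content=content,
        source=source,
        category=category,
        tags=tags,
        credibility=SOURCE_CREDIBILITY.get(source, 0.5),  # unvetted sources get a neutral score
        received_at=datetime.now(timezone.utc),
    )
```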

Weighting of Signals

Weighting involves assigning importance to different signals based on their expected impact on the model’s performance:

  • Relevance Scoring: Signals are scored based on their relevance to the current context or conversation. For instance, breaking news might be given higher weight during a discussion about current events.
  • Decay Factors: Signals might lose relevance over time, so a decay factor can be applied to reduce their influence. For example, yesterday’s stock market prices are less relevant than today’s.
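
One simple way to combine these ideas is to multiply a relevance score and a credibility score by an exponential decay term. The sketch below assumes those scores come from the classification step; the 24-hour half-life is an arbitrary starting point that you would tune per signal category.

```python
import math


def signal_weight(relevance: float, credibility: float, age_hours: float,
                  half_life_hours: float = 24.0) -> float:
    """Combine relevance, source credibility, and an exponential time decay
    into a single weight between 0 and 1."""
    decay = math.exp(-math.log(2) * age_hours / half_life_hours)  # halves every half_life_hours
    return max(0.0, min(1.0, relevance * credibility * decay))


# A highly relevant signal from a credible source, 12 hours old:
print(round(signal_weight(relevance=0.9, credibility=0.95, age_hours=12), 3))  # ~0.605
```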

Technical Integration into LLMs

If you thought that aggregating, classifying, and weighting signals was the hard part, think again. After we’ve figured out how to find the signals we need to help us make better use of our LLMs, we still have to integrate those signals into our LLM workflows. There are several ways to accomplish this, including:

Direct Integration Methods

  • Dynamic Prompt Engineering: Modify the prompts fed into the LLM by appending or prepending relevant signal information, allowing the model to generate responses influenced by the latest data (see the sketch after this list).
  • Embedding Vectors: Convert signals into dense vector representations that can be directly fed into the LLM, enabling it to process the signals as part of its input layer.
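
Dynamic prompt engineering is the easiest of these to picture. The sketch below assumes the weighted signals from the previous step and simply prepends the top few to the user’s question before it is sent to whichever model you use; the dictionary keys and wording are illustrative.

```python
def build_prompt(user_question: str, signals: list, max_signals: int = 3) -> str:
    """Prepend the highest-weighted signals to the user's question so the model
    answers with fresh context. Assumes each signal is a dict with 'content'
    and 'weight' keys produced by the classification and weighting steps."""
    top = sorted(signals, key=lambda s: s["weight"], reverse=True)[:max_signals]
    context = "\n".join(f"- {s['content']}" for s in top)
    return (
        "Use the following recent signals as context when answering.\n"
        f"{context}\n\n"
        f"Question: {user_question}"
    )


prompt = build_prompt(
    "How should we adjust our Q3 forecast?",
    [
        {"content": "Central bank held interest rates steady this morning", "weight": 0.82},
        {"content": "A key competitor announced layoffs", "weight": 0.61},
    ],
)
# 'prompt' is then sent to whichever LLM API you use.
```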

Indirect Integration Methods

  • Contextual Adjustment Layers: Implement additional neural network layers that take both the primary input and the external signals, merging them to adjust the model’s outputs based on the weighted signals.
  • Model Retraining and Fine-Tuning: Periodically retrain or fine-tune the LLM on data enhanced with external signals, thereby integrating an understanding of these signals into the model’s core functionality.
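
For fine-tuning, the external signals typically end up baked into the training examples themselves. The sketch below writes signal-enhanced examples to a JSONL file in a common chat-style shape; the exact schema depends on the provider you fine-tune with, so treat the field names as illustrative.

```python
import json


def write_finetune_examples(examples: list, path: str) -> None:
    """Write signal-enhanced training examples to a JSONL file, one chat-style
    record per line. Check your fine-tuning API's documentation for the exact
    schema it expects; this shape is only illustrative."""
    with open(path, "w", encoding="utf-8") as f:
        for ex in examples:
            record = {
                "messages": [
                    {"role": "system", "content": f"Context signals: {ex['signals']}"},
                    {"role": "user", "content": ex["question"]},
                    {"role": "assistant", "content": ex["ideal_answer"]},
                ]
            }
            f.write(json.dumps(record) + "\n")
```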

Practical Considerations and Challenges

  • Latency and Performance: Integrating real-time data can introduce computational overhead. Efficient processing pipelines and optimization strategies are necessary to balance latency and performance.
  • Signal Noise and Overfitting: Care must be taken to avoid incorporating noisy signals that could lead to overfitting or degrade the model’s performance.
  • Ethical and Privacy Concerns: Ensuring that the use of external data complies with privacy laws and ethical guidelines is crucial, especially when handling personal or sensitive information.

Knowledge Is Power

Incorporating classification and weighting of external signals into LLMs requires careful consideration of the source, relevance, and integration method to enhance the model’s responsiveness and accuracy effectively. This approach not only makes LLMs more adaptive but also more attuned to the nuances of real-world applications. In practice, the only way to get the most out of your LLM-based AI systems is to become a signals integration powerhouse. For LLMs as for humans, knowledge is power.

Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it. This work was created with the assistance of various generative AI models.


