
Interpret Epidemiological Data or Visualisations using LLMs
Source: R/llm_interpret.R

This function interprets a given data frame or ggplot visualisation by sending it to a language model API via the ellmer package. It supports multiple LLM providers, allowing users to specify the desired provider and model through environment variables.
Arguments
- input
An input object, either a data frame or a ggplot object, representing the data or visualisation to be interpreted.
- word_limit
Integer. The desired maximum number of words in the response. Defaults to 100.
- prompt_extension
Character. Optional additional instructions to extend the standard prompt. Defaults to NULL.
Value
A character string containing the narrative or interpretation of the input object as generated by the LLM.
Details
Supported LLM Providers and Models:
OpenAI: Utilises OpenAI's models via
chat_openai(). Requires setting the OPENAI_API_KEY environment variable. Applicable models include: "gpt-4.1-nano"
Google Gemini: Utilises Google's Gemini models via
chat_gemini(). Requires setting the GOOGLE_API_KEY environment variable. Applicable models include: "gemini-2.5-flash-lite"
Anthropic Claude: Utilises Anthropic's Claude models via
chat_anthropic(). Requires setting the CLAUDE_API_KEY environment variable. Applicable models include: "claude-sonnet-4-20250514"
Environment Variables:
- LLM_PROVIDER: Specifies the LLM provider ("openai", "gemini", "anthropic").
- LLM_API_KEY: The API key corresponding to the chosen provider.
- LLM_MODEL: The model identifier to use.
Note: Ensure that the appropriate environment variables are set before invoking this function. The function will throw an error if the specified provider is unsupported or if required environment variables are missing.
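The configuration step described above can be sketched as follows (the values shown are illustrative placeholders, not real credentials):

```r
# Configure the provider before calling the function.
Sys.setenv(
  LLM_PROVIDER = "openai",        # one of "openai", "gemini", "anthropic"
  LLM_API_KEY  = "sk-...",        # the API key for the chosen provider
  LLM_MODEL    = "gpt-4.1-nano"   # a model identifier supported by that provider
)
```

Setting these variables in an .Renviron file avoids storing keys in scripts.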
Tested Models
As of October 2025, this function has been tested and verified to work with the following models:
OpenAI: gpt-4.1-nano
Anthropic: claude-sonnet-4-20250514
Google Gemini: gemini-2.5-flash-lite
Additional models may be tested in the future. Users can provide custom instructions
through the prompt_extension parameter for specialised analysis requirements.
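A usage sketch, assuming the exported function is named llm_interpret() (per the source file above); running it requires a valid API key and network access:

```r
library(ggplot2)

# Interpret a data frame of weekly case counts (illustrative data)
cases <- data.frame(week = 1:6, cases = c(4, 7, 13, 21, 34, 55))
summary_text <- llm_interpret(cases, word_limit = 80)

# Interpret a ggplot, extending the standard prompt with extra instructions
p <- ggplot(cases, aes(week, cases)) + geom_line()
plot_text <- llm_interpret(
  p,
  word_limit = 120,
  prompt_extension = "Comment on the direction and rate of change of the trend."
)
```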