DSI's Web Log

Deterministic Precision with Real-Time Intelligence - An LLM Story

Written by Jeff Steele | Aug 26, 2025 3:04:04 PM

In modern manufacturing, the bedrock of operational excellence is robust, deterministic scheduling—where every batch, every machine, and every resource is accounted for in meticulous, hour‑by‑hour detail. But even with advanced planning tools, the future remains uncertain: How will costs shift if we run ten extra batches on the second shift? What if we defer Saturday’s load to Monday—will capacity be exceeded or resource bottlenecks emerge?

Enter SchedDSI—a powerful, intuitive, and highly configurable platform for finite and production scheduling. Designed for both simplicity (via drag‑and‑drop interfaces) and depth (with embedded Python scripting for complex constraints), SchedDSI delivers real‑time visibility into workflow, resource modeling, dependencies, and batch logic. But what truly elevates the platform is its integration of a generative Large Language Model (LLM) layered atop your scheduling engine.

This LLM doesn’t replace your deterministic core—instead, it augments it. By tapping into the “tools layer,” the LLM becomes an interactive assistant: you can ask, “what will the material cost be if we run 10 batches on second shift?” or “if we delay Saturday’s batches to Monday, will any equipment bottlenecks crop up next week?” The LLM translates natural-language queries into operations on live schedule data and returns precise, actionable insights. You get the best of both worlds: the reliability of finite scheduling and the flexibility of intuitive “what-if” scenario modeling.

With the right tooling, a deterministic scheduler can be enhanced and augmented!

Turning Natural-Language “What-Ifs” into Deterministic, Auditable Answers

Most “AI-enabled” scheduling products stop at chat. SchedDSI goes further with a dedicated Tools Layer that connects natural-language prompts to precise database queries and domain-specific logic—safe, repeatable, and fast. This is how planners can ask free-form questions like “What will the material cost be if we run 10 batches on second shift?” or “If we push Saturday’s batches to Monday, will we bottleneck next week?” and get answers grounded in the live scheduling data—not guesses.

What the Tools Layer Actually Is

At its core, the Tools Layer is a lightweight Flask service that exposes two primary behaviors:

Load & Index Context (POST /load_workflow)

We convert workspace metadata (areas, workflows, activities, batches) into concise, structured prompts—atomic facts the model can use.

We embed those prompts with FAISS (using mxbai-embed-large) and build a vector index so the system can instantly retrieve the most relevant facts for any question.
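To make the retrieval step concrete, here is a minimal sketch of the index-and-search pattern. The production service embeds with mxbai-embed-large and stores vectors in FAISS; since neither is reproducible here, the sketch swaps in a hashed bag-of-words stand-in embedder and a plain NumPy inner-product index that mirrors how a FAISS `IndexFlatIP` over normalized vectors behaves. The facts and the `embed`/`PromptIndex` names are illustrative.

```python
import zlib
import numpy as np

def embed(text: str, dim: int = 512) -> np.ndarray:
    """Stand-in embedder: hashed bag-of-words, L2-normalized.
    The real service uses mxbai-embed-large; the interface is the same."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        tok = tok.strip("?.,'\"")
        if tok:
            v[zlib.crc32(tok.encode()) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

class PromptIndex:
    """Minimal inner-product index mirroring how a FAISS IndexFlatIP
    over normalized vectors is used for top-k fact retrieval."""
    def __init__(self) -> None:
        self.facts: list[str] = []
        self.vectors: list[np.ndarray] = []

    def add(self, fact: str) -> None:
        self.facts.append(fact)
        self.vectors.append(embed(fact))

    def search(self, query: str, k: int = 3) -> list[str]:
        scores = np.stack(self.vectors) @ embed(query)
        top = np.argsort(-scores)[:k]
        return [self.facts[i] for i in top]

index = PromptIndex()
index.add("Area 'Purification' contains the workflow 'Chromatography'.")
index.add("Batch B-101 runs activity 'Elution' for 4 hours.")
index.add("Workspace 7 has 12 scheduled batches this week.")
print(index.search("Which workflows exist in Purification?", k=1))
```

Because the vectors are normalized, inner product equals cosine similarity, so the most lexically related fact surfaces first—the same property the real embedding model provides semantically.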

Query with Live Data + Tools (POST /query)

We parse the user’s natural-language prompt to detect intent and then call purpose-built tools (functions) that hit your read-only DB API (POST /query) for authoritative data.

We augment the LLM’s context with both the retrieved prompts and the real-time database results, then ask the model to synthesize an answer—prioritizing the fresh numbers.

This structure keeps the LLM honest: it must reason with the facts we provide, not hallucinate around them.
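A minimal sketch of how that augmented context might be assembled: retrieved facts go in first, live tool outputs go in a clearly labeled block the model is told to prioritize. The block headings and instruction wording are illustrative, not the service's exact prompt.

```python
def build_context(question: str, retrieved_facts: list[str],
                  tool_results: dict[str, list[dict]]) -> str:
    """Merge retrieved prompt facts with live tool outputs into one
    context string; the real-time block is labeled and placed last so
    the model prioritizes fresh numbers over static context."""
    lines = ["WORKSPACE CONTEXT:"]
    lines += [f"- {fact}" for fact in retrieved_facts]
    lines.append("")
    lines.append("REAL-TIME DATABASE RESULTS:")
    for tool_name, rows in tool_results.items():
        lines.append(f"[{tool_name}] {len(rows)} row(s):")
        lines += [f"  {row}" for row in rows]
    lines.append("")
    lines.append(f"QUESTION: {question}")
    lines.append("Answer from the real-time results above; use the "
                 "workspace context only for structure and terminology.")
    return "\n".join(lines)

context = build_context(
    "How many batches are in progress?",
    ["Area 'Purification' contains the workflow 'Chromatography'."],
    {"query_batches_with_status": [{"id": "B-101", "status": "In Progress"}]},
)
print(context)
```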

Why Prompts + Database Access Matters

  •  Prompts (from /load_workflow) give the model map-level knowledge: “which areas exist,” “what workflows look like,” “how batches relate to activities,” and so on. Think of this as the schema + static context that improves language understanding and reduces misunderstanding.
  •  Database access (inside /query) provides the ground truth: batch statuses, equipment/personnel availability, alerts, activity durations, variable values, etc. The Tools Layer fetches exactly what’s needed and injects it into the model’s working set.

Together, they deliver answers that are both explanatory (thanks to the prompts) and numerically correct (thanks to the database).

How a Question Flows Through the Tools Layer

1. The user asks a question in plain English.

   Example: “Show me all in-progress batches in Purification, and highlight any active alerts.”

2. Intent is detected in detect_and_execute_tools(...).

   Keywords such as batches, in progress, and Purification trigger query_batches_with_status(...) with the right filters (status="In Progress", area_name="Purification").

3. Tool calls hit the DB API via _api_query(sql).

   The DB API returns either {columns, rows} or list[dict]; the code normalizes both to a list of dicts so everything downstream is predictable.

4. Results are summarized and merged into context.

   The service embeds the user’s question, does a FAISS lookup for the top prompt snippets, and then appends a REAL-TIME DATABASE RESULTS block containing the tool outputs (e.g., a list of batches with statuses, alerts, and equipment assignments).

5. A domain-tuned system prompt instructs the model to prioritize real-time results over static context and to answer with specificity.

6. The LLM reasons and responds, citing the operational consequences (e.g., which batches have alerts, what to watch next, where capacity is tight).
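The intent-detection and result-normalization machinery described above can be sketched as follows. The function names match the post, but the keyword rules, the SQL, and the table/column names are simplified placeholders, and a fake DB API stands in for the real `_api_query(sql)` call.

```python
def normalize_rows(payload):
    """The DB API returns either {'columns': [...], 'rows': [...]} or a
    list of dicts already; normalize both shapes to list[dict]."""
    if isinstance(payload, dict) and "columns" in payload:
        cols = payload["columns"]
        return [dict(zip(cols, row)) for row in payload["rows"]]
    return list(payload)

def query_batches_with_status(api_query, status=None, area_name=None):
    """Assemble a focused SELECT and normalize whatever comes back.
    Table and column names here are illustrative."""
    where = []
    if status:
        where.append(f"status = '{status}'")
    if area_name:
        where.append(f"area = '{area_name}'")
    sql = "SELECT id, status, area FROM batches"
    if where:
        sql += " WHERE " + " AND ".join(where)
    return normalize_rows(api_query(sql))

def detect_and_execute_tools(question, api_query):
    """Simplified keyword-based intent detection: route batch questions
    to query_batches_with_status with the filters the wording implies."""
    q = question.lower()
    results = {}
    if "batch" in q:
        filters = {}
        if "in progress" in q or "in-progress" in q:
            filters["status"] = "In Progress"
        if "purification" in q:
            filters["area_name"] = "Purification"
        results["query_batches_with_status"] = query_batches_with_status(
            api_query, **filters)
    return results

# Fake DB API standing in for _api_query(sql) in the real service.
def fake_api_query(sql):
    return {"columns": ["id", "status", "area"],
            "rows": [["B-101", "In Progress", "Purification"]]}

out = detect_and_execute_tools(
    "Show me all in-progress batches in Purification", fake_api_query)
```

Note that `normalize_rows` is what makes the downstream context assembly predictable: the LLM always sees a list of dicts, regardless of which shape the DB API chose to return.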

The Tools: Deterministic Functions for Common Planner Tasks

Each tool is a small, deterministic function that assembles a focused SQL query, calls the DB API, and post-processes the response into a planner-friendly JSON structure. The registry looks like this:

  •  query_activities_by_area(area_name, workspace_id)

    Returns activity definitions, durations (also normalized to hours), equipment/personnel requirements.

  •  query_workflows_by_area(area_name, workspace_id)

    Returns workflows, including activity sequences and total/step durations—perfect for critical-path style reasoning.

  •  query_batches_with_status(workspace_id, status, area_name)

    Returns batch counts, progress, and any active alerts—a direct lens on current shop-floor reality.

  •  query_equipment_availability(workspace_id, equipment_type) and

    query_personnel_availability(workspace_id, personnel_type)

    Return availability snapshots and current assignments—vital for feasibility checks.

  •  query_batch_activity_details(batch_id, workspace_id)

    Drill-down on a single batch: assigned equipment/personnel, step timing, and sequence position.

  •  query_workspace_overview(workspace_id)

    Summarizes the entire workspace: counts, averages, total durations, alert tallies, and more—great for morning stand-ups.

  •  query_batch_alerts(workspace_id, batch_id, severity)

    Surfaces issues that planners should resolve before they become schedule breakers.

There’s also a variable-centric tool, GetBatchesByActivityVar, exposed as an HTTP endpoint (POST /tools/batches/by-activity-var). It lets the model (or a UI) filter batches by activity variable with operators like >, <, !=, and ILIKE. That’s how questions like “Which batches have an ActualEndTime > 18:00?” or “Show me batches where temperature setpoint contains ‘36C’” become one concise call.
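A sketch of how such a filter might be turned into a safe WHERE predicate: the operator is checked against a whitelist, and string values are quoted with embedded quotes doubled. The `var_name`/`var_value` column names are invented for illustration; the real schema will differ.

```python
ALLOWED_OPS = {">", "<", ">=", "<=", "=", "!=", "ILIKE"}

def activity_var_predicate(var_name: str, op: str, value) -> str:
    """Build the WHERE predicate for one activity-variable filter.
    Only whitelisted operators pass; string values are single-quoted
    with embedded quotes doubled. Column names are illustrative."""
    op = op.upper()
    if op not in ALLOWED_OPS:
        raise ValueError(f"operator not allowed: {op}")
    if isinstance(value, bool) or not isinstance(value, (int, float)):
        lit = "'" + str(value).replace("'", "''") + "'"
    else:
        lit = str(value)
    name = var_name.replace("'", "''")
    return f"var_name = '{name}' AND var_value {op} {lit}"

print(activity_var_predicate("ActualEndTime", ">", "18:00"))
print(activity_var_predicate("TempSetpoint", "ILIKE", "%36C%"))
```

Rejecting unknown operators up front is what lets the endpoint accept operator strings from a model or UI without ever interpolating arbitrary SQL.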

Safety by Design: Guardrails Without Friction

  •  Read-only access: The DB API only accepts SELECT-style queries from this layer. No writes.

  •  Value sanitization: _sql_literal(...) carefully inlines values (numbers vs strings, escaping quotes) to avoid malformed queries.

  •  Time-boxed requests: API_TIMEOUT_SECONDS prevents a slow endpoint from stalling the LLM experience.

  •  Strict tool registry: Only whitelisted functions can run. Unknown tools are rejected up front.

  •  Predictable shapes: The Tools Layer normalizes API responses so the LLM always sees consistent structures.

Why This Beats a “Chatbot on a Database”

Traditional chat-over-data tools force the model to synthesize SQL from scratch—brittle in production, risky for safety, and hard to maintain. SchedDSI flips that:

  •  We author the SQL once inside clean, reusable functions.

  •  We expose only what matters to planners, with pre-tested joins and calculations.

  •  We keep the LLM focused on reasoning, trade-offs, and “what-if” exploration—not on inventing query syntax.

The result: answers with deterministic provenance (you can always see which tool was called and with what parameters) and narrative clarity (the LLM explains the implications in planner-friendly language).

Concrete “What-If” Examples (Powered by the Tools Layer)

“What will the material cost be if we run 10 batches on second shift?”

  •  The model triggers query_batches_with_status(...) and cost-related tools (or a cost view if exposed), filters by shift window, sums material consumption by BOM mappings, and returns the projected delta plus any alerts (e.g., material shortages, overlapping setups).

“What if we delay Saturday’s batches to Monday—will it bottleneck next week?”

  •  The model asks for in-progress and scheduled batches (query_batches_with_status(...)), pulls equipment/personnel availability, and surfaces parallel-group/sequence implications from query_workflows_by_area(...).

  •  Output: the specific steps that collide, which resources saturate, and the earliest conflict-free start recommendations.

“List batches in Purification with any critical alerts.”

  •  A combined call to query_batches_with_status(... area_name="Purification") and query_batch_alerts(... severity="critical") yields an actionable list the planner can triage immediately.

Implementation Notes You’ll Appreciate

  •  Embeddings + FAISS ensure the model’s context is laser-focused on the relevant parts of the workspace—no need to stuff the entire database into the prompt.

  •  The system prompt explicitly instructs the model to trust real-time results over static context, which keeps answers aligned to what’s actually happening now.

  •  Because tools are just Python functions behind a registry, adding a new planner capability (e.g., setup-time impact, overtime forecasts, changeover risk) is as simple as writing one more query function and whitelisting it.

Bottom line: The Tools Layer is where SchedDSI’s deterministic scheduling engine and its LLM co-pilot truly meet. Prompts make the model smart about your factory’s structure; targeted, read-only database calls make it correct about your factory’s reality. Together, they turn “what-if?” into an everyday planning superpower—without compromising safety, speed, or control.