From Dining Stress to Decision Maker: Duke GPT Builder in Action

ashley-park-profile
Ashley Park
Oct. 16, 2025 15 min

Summary: 

Deciding what to eat every day at Duke used to feel like a chore—but with Duke’s new GPT Builder, I turned that problem into an AI-powered assistant. This article walks through how I built the Duke Menu Decision Maker, from defining its purpose and loading it with menus from 17 campus eateries to shaping its personality with system prompts and decision trees. Along the way, I’ll share what worked, what didn’t, and how GPT Builder makes customizing AI surprisingly accessible for everyone.

Rethinking the Role of AI

How often do you use AI tools in a day, and what do you mainly use them for? I’m guessing most of you are asking them to answer questions, much like you would a search engine—just better. But have you ever thought that ChatGPT could be more than a magic black box, maybe even your personal assistant?

That’s what I’ve been experimenting with recently. I’ll continue to share ways to use ChatGPT more creatively—in the most unpredictable but surprisingly useful ways. Today, though, I’m starting with one of the brand-new features offered by Duke’s AI Suite: GPT Builder. Using Retrieval Augmented Generation (RAG), it lets you “train” an OpenAI model on a set of knowledge tailored to your needs (you’re not technically retraining the model, but from your end the effect is much the same) and customize how it interacts with you. No rocket science required—no prior coding at all. To prove it, I’ll walk you through how I built my own GPT: the Duke Menu Decision Maker.

Believe it or not, my biggest daily concern was deciding what to eat. Even with limited on-campus options, I often struggled to make decisions and used to text my roommate three times a day for her food recs—until even she got tired of it. Worse, one “wrong” choice could throw off my mood for the entire day. So I thought: why not build my own assistant to decide for me? That’s how the Menu Decision Maker was born. You can test it yourself while I share the step-by-step process—or just for fun; who knows, you might actually start using it every day.

GPT Builder

GPT Builder is Duke’s no-code tool for creating customized GPTs. Instead of coding models from scratch, you shape their behavior with three main components:

  1. System Prompts: rule and instruction sets that define the bot’s goals and personality.
  2. Knowledge Files: documents, datasets, or resources you upload that the GPT can reference.
  3. Conversation Flows: optional step-by-step structures that guide how users interact.

Key Term Glossary: 

  • System Prompt: Core instructions that tell the GPT what role to play and what rules to follow.
  • Knowledge Base: Files you upload (menus, notes, datasets) that the GPT retrieves from instead of “remembering” everything at once.
  • Conversation Flow: Scripted or structured back-and-forth (like multiple-choice steps) that keeps input consistent.
  • Tokens: Small text chunks that the GPT reads and generates; token limits affect how much context can fit into one interaction.
  • Parameters: Learned weights inside the model; in GPT Builder you don’t change them—you guide how the model uses them.
  • Inference: The act of generating an answer from your input; each response the model produces is one inference.
  • Custom Actions: Add-ons like APIs or live data fetchers that expand the bot beyond chat.

Duke’s AI Suite provides access to MyGPTBuilder, a platform for creating custom AI environments within a secure, sandboxed system. This allows you to design and experiment with specialized GPTs—whether for coursework, campus life, or personal projects—without needing to worry about data privacy or technical setup. For those getting started, Duke offers ready-made guidelines and best practices for using MyGPTBuilder, which you can access here.

Methodology: From Idea to Prototype

As outlined in the guidelines, the first step is to access your AI workspace. If you’re new, start by creating a workspace under the MyGPTBuilder tab. Once that’s set up, go to the Action dropdown menu and turn your model on. You’ll see in the example below that my Menu Decision Maker is now active. Just a heads-up—it may take up to 10 minutes for the model to power on, so don’t worry if it doesn’t start right away!

gpt-builder-img-1

With your workspace running, the real starting point of GPT building is defining your purpose. The clearer and more specific you are, the easier it is to create a bot that works well. Mine was simple but practical: a decision-making assistant that suggests what to eat depending on the time of day and mood. That answered both the “what” (help pick menus) and the “how” (based on mood + time).

Next came model selection. GPT Builder lets you pick from several base models, each suited for different purposes. I chose GPT-5 because it’s the most advanced and reliable option, capable of handling complex reasoning while still keeping conversations natural and human-like. That combination was perfect for a decision bot that has to juggle multiple factors (cuisine, cravings, dietary needs) while still sounding like a friendly peer making suggestions rather than a stiff algorithm.

Available GPT Models and When to Use Them:

  • GPT-5: Best all-around; handles complex reasoning and nuanced conversation. Great for structured assistants like recommendations, scheduling, or study support.
  • GPT-5-chat: Optimized for dialogue flow; ideal for tutoring or interactive Q&A.
  • GPT-5-mini: Lightweight, cheaper, and faster; best for small utilities or quick answers.
  • GPT-5-nano: Ultra-light; used in embedded tools or simple tasks.
  • GPT-4.1 / Nano / Mini: Previous generation; good balance of cost and performance.
  • Llama 3.3 / 4 Scout / 4 Maverick: Open-source Meta models; transparent and customizable.
  • o4 Mini: Smaller reasoning model; decent for structured but lighter tasks.
  • text-embedding-3-small: For search or clustering, not conversation.
  • GPT-oss-120b: Large open-source option; strong performance with sandbox flexibility.

gpt-builder-img-2

Once your base model is set, you need to feed it relevant knowledge. I scraped the menus of 17 Duke eateries (Bella Union, It’s Thyme, Ginger & Soy, and more) from Duke Nutrition. Using GPT, I converted each one into structured JSON with categories like dining style, hours, customizable options, and dietary tags. These structured files became the knowledge base the model now pulls from.
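
To make that concrete, here’s a minimal sketch of what one of these knowledge files could look like. The field names follow the categories I just mentioned (dining style, hours, customizable options, dietary tags), but the exact schema, hours, and menu items below are illustrative placeholders rather than the real Duke Nutrition data.

    import json

    # Illustrative structure for one eatery's knowledge file; the hours and
    # menu items here are placeholders, not actual Duke Nutrition data.
    bella_union = {
        "name": "Bella Union",
        "dining_style": "cafe",
        "hours": {"mon-fri": "08:00-22:00", "sat-sun": "10:00-20:00"},
        "menu": [
            {
                "item": "Latte",
                "category": "drinks",
                "customizable": ["oat milk", "extra shot", "sugar-free syrup"],
                "dietary_tags": ["vegetarian"],
            },
            {
                "item": "Caprese Panini",
                "category": "meal",
                "customizable": [],
                "dietary_tags": ["vegetarian"],
            },
        ],
    }

    # One JSON file per eatery keeps each knowledge file small and easy to update.
    with open("bella_union.json", "w") as f:
        json.dump(bella_union, f, indent=2)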

gpt-builder-img-3

The key design step was the system prompt, which is basically where you tell your GPT who it is, what it knows, and exactly how it should behave with users. I didn’t want it to be just a search box where students type in random requests; that gets messy and inconsistent fast. Instead, I designed the prompt to define a clear role:

“You are a Duke Campus Dining Decision Assistant. You help Duke students decide what to eat across campus eateries.”

From there, I specified its knowledge scope: it only pulls menu options from the JSON files I compiled. That way, the bot stays grounded in actual dining options rather than inventing things.

I also used the prompt to customize interaction style by forcing a decision tree instead of free text. For example, Step 1 is always: “What is the mood: 1) Light refreshments 2) Meal”, and the GPT won’t continue unless the user picks a number. Each step narrows the choices—mood → craving → cuisine → ingredients → dietary restrictions—so by the time it recommends food, the options feel curated and specific.
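
GPT Builder enforces this flow purely through the system prompt, but if it helps to see the logic spelled out, here’s a rough Python equivalent. Step 1 mirrors the wording above; the option lists for the later steps are placeholders I’ve invented for illustration, since the real ones live in my prompt and menu files.

    # A code-level sketch of the decision tree the system prompt enforces:
    # numbered options at every step, and no advancing without a valid number.
    STEPS = [
        ("mood", ["Light refreshments", "Meal"]),
        ("craving", ["Sweet", "Savory", "Something warm"]),
        ("cuisine", ["American", "Asian", "Mediterranean", "No preference"]),
        ("ingredients", ["Chicken", "Beef", "Tofu", "Veggies only"]),
        ("dietary restrictions", ["None", "Vegetarian", "Vegan", "Gluten-free"]),
    ]

    def run_decision_tree() -> dict:
        """Walk through each step, refusing to move on without a valid number."""
        answers = {}
        for name, options in STEPS:
            numbered = " ".join(f"{i}) {opt}" for i, opt in enumerate(options, start=1))
            while True:
                choice = input(f"What is the {name}: {numbered} > ")
                if choice.isdigit() and 1 <= int(choice) <= len(options):
                    answers[name] = options[int(choice) - 1]
                    break  # a valid number moves the flow to the next step
                print("Please answer with one of the numbers above.")
        return answers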

To make the bot practical, I added rules: it should never dump entire menus, only suggest full items or sides, and keep add-ons tied to customizable dishes (like ramen, pasta, tacos, or bowls). It also needs to respect time-based restrictions, like showing It’s Thyme “Dinner Thyme” entrées only after 4 PM.
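
The time rule, in particular, is easy to picture as code, even though in my build it lives entirely in the system prompt. Here’s a minimal sketch under that assumption; the TIME_RULES table and the is_available helper are hypothetical, not part of GPT Builder.

    from datetime import datetime, time

    # Hypothetical availability windows keyed by menu section.
    TIME_RULES = {
        "Dinner Thyme": time(16, 0),  # It's Thyme dinner entrees: only after 4 PM
    }

    def is_available(section: str, now: datetime | None = None) -> bool:
        """Return True if a menu section may be suggested at the current time."""
        now = now or datetime.now()
        earliest = TIME_RULES.get(section)
        return earliest is None or now.time() >= earliest

    # "Dinner Thyme" entrees are hidden at 2 PM but fair game at 6 PM.
    print(is_available("Dinner Thyme", datetime(2025, 10, 16, 14, 0)))  # False
    print(is_available("Dinner Thyme", datetime(2025, 10, 16, 18, 0)))  # True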

Finally, I shaped the tone in the prompt: friendly, concise, and peer-like, almost as if another Duke student is giving a casual food rec. For example, instead of saying “You should consider Bella Union’s latte because it’s highly rated,” it phrases it as “Bella Union’s latte could be a great pick if you’re looking for something sweet and caffeinated—like a cozy study drink.”

In short, the system prompt works as the GPT’s personality plus its boundaries: it defines what knowledge it uses, how it guides conversations (structured decision tree, numbered lists), and what it avoids. It’s both the general instruction (what role the bot plays) and the customization layer (the exact steps and guardrails you add to keep the interaction consistent and friendly).

gpt-builder-img-4

Additionally, you can set Advanced Params, which are like shortcuts that help users get started. For example, you can pre-load prompt suggestions such as “Help me pick dinner” or “Find me vegan-friendly options” so students don’t feel stuck figuring out what to type. These suggestions act as a soft guide into the decision tree you’ve already built, while keeping the interaction natural.

gpt-builder-img-5

Now you’re almost set! The final step is deciding how you want your model to be shared. If it’s just for personal use, keep it private—but if you want others to benefit, you can make it public. Give your model a clear, descriptive name that helps users understand its purpose at a glance, and don’t forget you can upload a custom thumbnail image to make it stand out. (Pro tip: you can even create one yourself using another great CO-Lab AI resource!)

Additional Touch-Ups

gpt-builder-img-6

After all the customizations, I ran into one issue: GPT Builder automatically suggested follow-up questions, which didn’t fit the way I designed my system. My model relies on users selecting from the options it provides at each step, not typing open-ended responses. To fix this, I went into the Admin Panel → Interface settings and turned follow-up questions off. If you don’t want certain built-in features running by default, you can adjust them the same way. I also changed the Authentication setting so the default user role is set to user, ensuring the model is accessible and usable by anyone at Duke.

Outcome

To test my model, I first logged in from a friend’s device to make sure it worked smoothly from the user side. The results showed exactly what I had built: a clean flow of decision questions, one step at a time, just as defined in the system prompt. Because I had disabled follow-up questions, users stuck to the structured choices rather than wandering off with free-text responses.

Here’s an example of how it looked in action:

gpt-builder-img-7

At the end, the GPT asked:

“Do you want to refresh with 5 new chocolatey options (like drinks or milkshakes), or start over?”

When I deliberately tried to “break” the model—for example, by ignoring the instruction and typing something random instead of choosing a number—the GPT still attempted to move forward by pulling from the next level of its knowledge hierarchy. While this wasn’t ideal, it was useful for spotting where the system prompt might need tightening. Overall, though, the model consistently produced structured, curated menu suggestions that felt natural and practical for real use.

Challenges and Limitations

Not everything was smooth sailing. The biggest technical hiccup I ran into was token usage. At first, I tried embedding all 17 dining hall menus directly into the system prompt so the model would always “know” them. The problem? Every single user interaction—no matter how short—forced the model to process the entire block of menu data again. Even a one-word reply like “hi” or “sweet” triggered a request that ballooned to nearly 20,000 tokens, which immediately hit LiteLLM’s rate limits and made the bot unusable.

The fix came from rethinking how the system prompt should be structured. Instead of hardcoding all menus into the prompt, I trimmed the system message down to just the interaction rules and decision tree framework. Then, I shifted the menu data into the Knowledge base. This way, the GPT only pulls from the menus when necessary, without having to reprocess them at every turn. In practice, this dropped requests from tens of thousands of tokens to just a few hundred—keeping the bot fast, efficient, and well within limits.
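
If you want to sanity-check your own prompt size before you hit a rate limit, you can count tokens locally. The sketch below uses tiktoken, OpenAI’s tokenizer library; the encoding name and the file paths are assumptions standing in for my actual system prompt and the 17 menu files.

    from pathlib import Path

    import tiktoken

    # cl100k_base is a common OpenAI encoding; the gateway's exact tokenizer may differ.
    enc = tiktoken.get_encoding("cl100k_base")

    rules_prompt = Path("system_prompt.txt").read_text()  # decision tree + rules only
    menu_text = "".join(p.read_text() for p in Path("menus").glob("*.json"))

    print("rules only:", len(enc.encode(rules_prompt)))                     # a few hundred
    print("rules + all menus:", len(enc.encode(rules_prompt + menu_text)))  # ~20,000

    # With the menus in the Knowledge base instead, each turn re-sends only the short
    # rules prompt, and relevant menu chunks are retrieved only when they're needed.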

Overall

The experiment was a success: I now have a working Duke Menu Decision Maker GPT that actually reduces the mental load of campus dining. Is it perfect? No—the follow-up UI and occasional over-eager suggestions still need polish. But the ability to consolidate 17 menus, enforce dietary filters, and make personalized recommendations makes this GPT more practical than any static dining website or PDF.

In short: if you’re considering building your own GPT, DO IT NOW. Start with a real problem (like mine!), break it into structured prompts and knowledge, and don’t be afraid to iterate. For me, this project confirmed that GPT Builder isn’t just for demos—it can actually solve daily problems.
