INSPIRED & Informed
RBCLabs.org
01
JUL
2025
The Entrepreneurial Need for RAG.
TL;DR
- RAG stands for Retrieval Augmented Generation.
- Entrepreneurs are increasingly looking to custom AI to solve their business needs.
- Generative AI has several shortcomings.
- RAG helps solve these issues through a few methods.
What is RAG and why do you need it?
RAG stands for Retrieval Augmented Generation and it can significantly improve Large Language Models (LLMs), what we call AI. This is especially helpful for those looking beyond the big AI platforms such as Google Gemini, OpenAI's ChatGPT, and X's Grok. Sure, you could just subscribe to the tools, which is a great solution for many in today's increasingly competitive business environment, but where's the fun in that? What about the entrepreneurs looking to create custom solutions? Today, there are several LLMs that are both open-source and readily available. However, these come with limitations. In this blog post we'll explore the common limitations of LLMs and how RAG can help overcome them.
LLMs & Generative AI
First, a quick note on LLMs and Generative AI; they are not the same. LLMs are a type of Generative AI. Generative AI is a broader category of AI that encompasses various technologies capable of producing new content, including text, images, audio, and more.
- LLMs. These are a specific type of generative AI focused on generating human-like text. LLMs are AI models designed to understand and generate human language. They are trained on massive datasets of text and code, enabling them to perform tasks like text generation, translation, and question answering.
- Generative AI. This is a broad term for AI systems that can create new content, like images, music, or even synthetic data.
In short, while all LLMs are generative AI, not all generative AI is an LLM. LLMs are a subset focused on language-based tasks, while generative AI encompasses a wider range of creative capabilities.
What are Generative AI's shortcomings?
Generative AI has several shortcomings. These issues stem in large part from LLMs' original purpose: predicting the next word in a sequence. This can lead to:
- Hallucinations: Making things up.
- Knowledge Attribution Issues: Not automatically attributing sources when making factual claims.
- Knowledge Cutoff: Leaving them outdated due to static training data.
- Context Window Size Limitations: Difficulty processing large documents within a single prompt.
Even OpenAI's CEO Sam Altman recently said on OpenAI's new podcast that he was shocked people trust AI as much as they do. According to Yahoo! Finance, he said, "People have a very high degree of trust in ChatGPT, which is interesting, because AI hallucinates. It should be the tech that you don't trust that much."
How does RAG address LLMs' common issues?
RAG addresses the main issues posed by LLMs by granting them access to external knowledge sources. This approach:
- Reduces hallucinations.
- Enables source attribution, since answers can be tied back to retrieved documents.
- Solves the knowledge cutoff problem by keeping LLMs up to date.
RAG achieves this by relying on:
- Embeddings: Rich numerical representations of meaning, usually derived from text chunks, that capture relationships between different parts of text. These can also be made from other modalities like image or audio.
- Semantic Search: When a user asks a question, RAG converts the question into an embedding, compares it to the document's embeddings, and retrieves the most relevant information. This retrieved information is then injected into a prompt for the LLM, allowing it to reason, summarize, and share the answer with the user.
Instead of storing information about the world in the LLM's weights, RAG stores information outside the model (e.g., in documents, databases, or live internet searches), freeing up the model to focus on reasoning and cognition. The retrieved knowledge sources function more like a database in traditional software.
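The retrieval flow above can be sketched in a few lines of Python. Note this is a toy illustration: the `embed` function here is a bag-of-words stand-in for a real embedding model (such as a sentence-transformer), and all the chunks and names are made up for the example.

```python
import math

# Toy stand-in for a real embedding model; it hashes words into a
# fixed-size bag-of-words vector purely for illustration.
def embed(text, dim=64):
    vec = [0.0] * dim
    for word in text.lower().split():
        vec[hash(word.strip(".,?!")) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec))
    return [v / norm for v in vec] if norm else vec

# Cosine similarity of two unit-normalized vectors is just a dot product.
def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

# 1. Index: embed each document chunk once and store the vectors.
chunks = [
    "Our store is open 9am to 5pm on weekdays.",
    "Returns are accepted within 30 days with a receipt.",
    "We ship internationally to over 40 countries.",
]
index = [(chunk, embed(chunk)) for chunk in chunks]

# 2. Retrieve: embed the question and find the most similar chunk.
question = "Are returns accepted without a receipt?"
q_vec = embed(question)
best_chunk, _ = max(index, key=lambda pair: cosine(q_vec, pair[1]))

# 3. Augment: inject the retrieved chunk into the prompt sent to the LLM.
prompt = (
    "Answer using only this context:\n"
    f"{best_chunk}\n\n"
    f"Question: {question}"
)
```

In a production system, the index would live in a vector database and the embeddings would come from a trained model, but the three steps (index, retrieve, augment) stay the same.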
Summary.
RAG helps solve the common issues presented by LLMs including hallucinations, knowledge attribution and cutoff, as well as context window size limitations. RAG's applications are vast, including handling non-public information, creating long-term memory for chatbots, facilitating verifiable research through source attribution, and unearthing specific information not well-represented in the LLM's training data.
References.
Google Gemini, used July 2025.
Codecademy, Creating AI Applications using Retrieval-Augmented Generation, used July 2025.
Yahoo!Finance, OpenAI’s Sam Altman Shocked ‘People Have a High Degree of Trust in ChatGPT’ Because ‘It Should Be the Tech That You Don't Trust’, Caleb Naysmith, June 22, 2025.




01
JUN
2025
Prompt Frameworks for AI
TL;DR
Prompt frameworks are beneficial when using AI. They provide structure and clarity to your instructions, which in turn enhances the quality and relevance of AI outputs.
There are currently several prompt frameworks. This article focuses on:
- RTF (Role, Task, Format): Defines the AI's persona, the action it should take, and the output format.
- CLEAR (Concise, Logical, Explicit, Action-oriented, Refined): Emphasizes clarity and precision in prompts.
- PGTC (Persona, Goal, Task, Context): A four-sentence framework that outlines the AI's role, objective, action, and background.
Briefly covered are PAR, CARE, RACE, and TAG. General tips for effective prompting are also provided.
Disclaimer: This article was assisted by artificial intelligence language models. While AI was used as a tool for generating content, it does not imply endorsement or sponsorship by the AI model's creators or any other entity referenced herein. Prompt responses have been modified. At the time of writing, June 1, 2025, this article is for a non-commercial blog. See fair use provisions in copyright law (17 U.S.C. § 107).
What is an AI Prompt Framework and why should I use it?
A prompt framework provides structure and clarity to your instructions when interacting with AI models, leading to more focused and relevant responses. The following are some popular and effective prompt frameworks.
Role, Task, Format (RTF)
This prompt framework consists of three parts that serve to define the AI's persona, state the desired action, and specify the output structure.
- Role: Define the persona you want the AI to adopt. This will assist in setting the tone and expertise level of the response.
- Task: Clearly state what you want the AI to do. Be specific about the desired action.
- Format: Specify how you want the output to be structured. This could be a paragraph, a list, a table, a specific writing style, etc.
Sample Prompt: "You are a legal expert on websites, copyright law, and AI use. Your task is to explain how one can use prompt responses from LLMs like Gemini or ChatGPT that assisted in writing a blog post for a website, which is not currently commercial, but rather my own blog for an online project. Present your explanation in short paragraphs."
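If you are scripting prompts rather than typing them, the three RTF parts map naturally to a small helper function. This is a hypothetical sketch, not any particular library's API:

```python
def rtf_prompt(role: str, task: str, fmt: str) -> str:
    """Assemble a Role-Task-Format prompt from its three parts."""
    return (
        f"You are {role}. "
        f"Your task is to {task}. "
        f"{fmt}"
    )

# Build the sample prompt from the article out of its RTF parts.
prompt = rtf_prompt(
    role="a legal expert on websites, copyright law, and AI use",
    task="explain how one can use LLM prompt responses in a blog post",
    fmt="Present your explanation in short paragraphs.",
)
```

Keeping the parts separate like this makes it easy to swap out the role or format while reusing the same task across prompts.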
Concise, Logical, Explicit, Action-oriented, Refined (CLEAR)
The CLEAR framework for AI prompting emphasizes creating clear and effective instructions. The following provides an explanation of each part, along with both good and poor examples.
Concise (also Clear): Use only necessary words and focus on the key concepts.
- Poor: "I'm trying to understand this whole thing about giving everyone money without them working, can you tell me what people think is good and bad about it?"
- Good: "Summarize the main arguments for and against universal basic income."
Logical: Present concepts in a natural and understandable order. Ensure the relationships between ideas are clear.
- Poor: "What happens with carbon dioxide and oxygen in plants and how does light fit into this whole energy thing?"
- Good: "Explain the process of photosynthesis in plants, starting with the absorption of sunlight."
Explicit: Clearly state what you want the AI to produce. Provide clear output directions.
- Poor: "Tell me something interesting."
- Good: "Write a short story about a robot who learns to feel emotions."
Action-oriented: Use verbs that clearly define the desired action from the AI.
- Poor: "Tell me about Lincoln and King."
- Good: "Compare and contrast the leadership styles and social contributions of Abraham Lincoln and Martin Luther King Jr."
Refined: Review and iterate on your prompts to improve clarity and results as part of an ongoing process.
The 4-Sentence Framework: Persona, Goal, Task, Context (PGTC)
The PGTC prompt framework uses four key elements for effective AI instructions:
- Persona: Define the AI's role or expertise.
- Goal: State the overall objective you want the AI to achieve.
- Task: Specify the specific action the AI needs to perform.
- Context: Provide any necessary background information, constraints, or specific requirements.
Sample Prompt
"You are a veteran tour guide with detailed knowledge of Chile. Your goal is to help a user plan a three-day trip to Santiago. Your task is to provide an itinerary which should include popular landmarks, attractions, estimated cost for each (including a total budget), along with the travel time to each site from downtown Santiago. Your traveler is interested in national landmarks and Chilean cuisine, has a budget of no more than $10,000 USD, and will be in Santiago for three days."
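Because PGTC has four named parts, it also lends itself to being stored as structured data. The sketch below uses a Python dataclass with illustrative field values; the class and its `render` method are hypothetical, not a published API:

```python
from dataclasses import dataclass

@dataclass
class PGTCPrompt:
    """Holds the four PGTC elements and renders them as one prompt."""
    persona: str
    goal: str
    task: str
    context: str

    def render(self) -> str:
        return (
            f"You are {self.persona}. "
            f"Your goal is to {self.goal}. "
            f"Your task is to {self.task}. "
            f"{self.context}"
        )

# Condensed version of the article's sample prompt as structured data.
prompt = PGTCPrompt(
    persona="a veteran tour guide with detailed knowledge of Chile",
    goal="help a user plan a three-day trip to Santiago",
    task="provide an itinerary with landmarks, costs, and travel times",
    context="The traveler has a budget of no more than $10,000 USD.",
).render()
```

Storing prompts this way makes it easy to validate that every element is present before sending anything to the model.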
PAR (Problem, Action, Result)
Useful for problem-solving prompts. To leverage this prompt framework, simply define the:
- problem
- action to take
- desired result
Sample Prompt: "Problem: Visits to my social media page have decreased by 13% since last quarter. Action: Suggest four actions I can take to increase page visits. Result: Provide the potential benefits of each action you recommend."
CARE (Context, Action, Result, Example)
This prompt framework can be useful for contextual tasks where specific examples are important. It allows the AI to understand nuances, learn from specific instances, and improve performance in context-dependent situations.
Sample Prompt: "Context: We are launching a new social media campaign targeting young adults (18-28 years of age). Action: Write a sample social media post announcing the campaign. Result: The post should be engaging and encourage interaction. Example: [insert a social media post that you'd like the AI to emulate]."
RACE (Role, Action, Context, Expectation)
This prompt framework is similar to RTF, but adds explicit context and expectation of the output.
Sample Prompt:
"Role: You are a helpful customer service chatbot for a major online vendor. Action: Respond to a customer inquiry about a delayed order. Context: Order number 0x24i335a7 missed the expected delivery date (yesterday). Expectation: Provide an apology email explaining the reason for the delay and a new estimated time of arrival."
TAG (Task, Action, Goal)
This prompt framework focuses on clearly defining the task, the specific actions involved, and the ultimate goal.
Example:
"Task: Write a blog post for a website. Action: Explain the benefits of a healthy diet and lowering stress. Goal: Inform and encourage readers on how to create a customized diet to support a healthy lifestyle and lower their stress."
General Tips for Effective Prompting
- Be specific: Provide detailed instructions by clearly defining the task, desired outcome, relevant keywords, and providing examples.
- Be succinct: Use succinct, clear, and concise language, avoiding ambiguity.
- Specify format: Explain the desired output format, such as a bulleted list, essay, paragraph, table, code, etc.
- Provide context: Provide relevant background for context, explain the purpose behind your request, reference external information as needed.
- Define the audience: Describe their knowledge, interests, and the tone/style to match.
- Set constraints: Specify length and content limitations.
- Iterate and refine: You don't have to settle for the first output. Feel free to experiment using different frameworks or customize them to get the results you really want.
- Ask for clarification: If the AI response is unclear, you can ask for specific clarification or further details. Treat the AI like a partner or consultant for better results.
Summary
Mastering prompt crafting unlocks the full potential of AI language models. By using a structured prompt framework, you can significantly improve the quality and relevance of the AI's output and make your interactions more efficient. Remember to choose the framework that best suits the specific task you have in mind and be sure to iterate for the best results.
References
Google Assistant. (2025, May 31). Response to prompt: "Give a brief definition of a 'prompt framework', concerning using AI"
Google Gemini. (2025, May 31). Response to prompts to shorten, summarize.
Google Cloud. "Prompt Engineering for AI Guide | Google Cloud." cloud.google.com
The following are references provided by Google Assistant:
Emeritus. "The Best Prompt Frameworks to Level Up Your Prompting Game." emeritus.org
Butter CMS. "11 ChatGPT Prompt Frameworks Every Marketer Should Know - Butter CMS." buttercms.com
Power BI Training Australia. "Master Prompt Engineering with Persona Patterns - Power BI Training." powerbitraining.com.au
Harvard University Information Technology. "Getting started with prompts for text-based Generative AI tools | Harvard University Informat…" www.huit.harvard.edu
Georgetown University. "How to Craft Prompts - Artificial Intelligence (Generative) Resources - Research - Guides." guides.library.georgetown.edu
Miquido. "AI Prompt Frameworks: Unlock Efficiency & Creativity | Miquido." www.miquido.com
Massachusetts Institute of Technology. "Effective Prompts for AI: The Essentials - MIT Sloan Teaching & Learning Technologies." mitsloantech.mit.edu
Learn Prompting. "Understanding Prompt Structure: Key Parts of a Prompt." learnprompting.org
Learn Prompting. "How to Write Better Prompts: Basic Recommendations and Tips." learnprompting.org
Prompt Engineering Guide. "Elements of a Prompt - Prompt Engineering Guide." www.promptengineeringguide.ai
Atlassian. "The ultimate guide to writing effective AI prompts - Work Life by Atlassian." www.atlassian.com
Research Guides - University of Calgary. "Prompting 101 - Artificial Intelligence - Library at University of Calgary - Research Guides." libguides.ucalgary.ca

28
APR
2025
Building a Home AI Server
I wanted to learn all I could about AI. Specifically, I wanted to know how to leverage it both in-house and offline. To do that, I needed a powerful computer to act as my home AI server. So I built one.
To build a home AI server, I first had to learn how to build a PC. I curated a few libraries of YouTube tutorials, learned some needed skills (like bash scripting and Linux) through sites like Codecademy, ordered the components, and read through the technical documentation as the parts arrived. During this time, I prepared by establishing a scaled-down version of the offline LLM setup on two different laptops (one Mac, one Windows).
Once all the working components arrived, it was time to build the PC! I set up the case, installed the motherboard, then the CPU, the RAM, the 4TB SSD, the AIO, and the first GPU. I later added a second GPU and some additional fans for better airflow. I lowered the motherboard using the case's adjustable settings to make room for some much-needed intake fans and that second GPU.
Now that the build was complete, it was time for an OS! I decided a Linux distro would be the way to go. Realizing that as great as Linux is, some software just isn't built for it, I later decided to try "dual booting," where one can house different operating systems on the same PC. I decided that installing Windows on a separate partition was the way to go, so I partitioned half of the 4TB SSD and got to work. Now I have a fully functioning PC that can boot into either Linux or Windows. No more relying on Windows Subsystem for Linux! Over time I came to prefer the Linux distro Pop!_OS, though I may move to another soon. A quick adjustment to the boot order and now it boots into Linux by default, with the option to boot into Windows at startup instead.
Almost immediately I dove head-first into learning how to generate AI images and videos. Learning and leveraging open-source tools like Open WebUI, Stable Diffusion, ComfyUI, and more has been a game-changer.
Stay tuned for more!