April 2, 2026 · Brian McClain · 7 min read

Prompt Engineering with JSON and Jinja in Python Labs

Master AI Integration with Structured Data Engineering

Lab-Based Learning Approach

This lesson follows a hands-on lab format where students attempt the exercise first, then watch the solution. Even partial completion of reading documentation and following instructions builds valuable technical skills.

Lab 3 Implementation Process

1. File Setup: Create server3-lab.py from server03.py using Save As functionality

2. Prompt Engineering: Design specific prompts requesting structured movie data with multiple components

3. JSON Integration: Configure AI responses in JSON format for structured data handling

4. Jinja Template Creation: Build HTML templates with multiple variable placeholders for dynamic content

This lesson is a preview from our Python for AI Course Online (includes software) and Python Certification Course Online (includes software & exam). Enroll in a course for detailed lessons, live instructor support, and project-based training.

Welcome to Lab 3: Prompt Engineering, JSON, and Jinja—a comprehensive dive into advanced AI integration techniques. This isn't just another tutorial; it's hands-on training in the skills that separate competent developers from true AI engineers. Take your time with this lab, read the documentation carefully, and don't worry about completing everything in one session. Even mastering the Save As operation while parsing technical documentation builds critical skills you'll use throughout your career.

The beauty of these labs lies in their dual nature: tackle them as hands-on challenges to test your skills, or follow along as structured lessons to build your understanding. Both approaches deliver value, and you can always revisit them as your expertise grows. The key is engaging with the material at whatever level feels appropriate for your current skill set.

In this comprehensive lab, you'll elevate your prompt engineering capabilities by instructing AI systems to deliver sophisticated, multi-part responses formatted as JSON. You'll then parse these structured responses and dynamically populate variables within Jinja HTML templates. This represents a significant step beyond basic single-variable templating—you're building the foundation for complex, data-driven applications.

Rather than working with simple single-variable outputs wrapped in double curly braces, you'll orchestrate multiple data streams flowing seamlessly from AI response to user interface. Begin by opening your server03.py file, execute a Save As operation, and save the new file as server3-lab.py. This establishes your working environment for the advanced techniques we're about to implement.

Now we dive into the heart of Lab 3's solution architecture. Your first task involves sophisticated prompt modification—engineering your chat question to request specific examples of exceptional 1990s cinema. Through extensive testing, we've observed that AI models consistently favor certain titles: Pulp Fiction emerges as the top choice with remarkable frequency, though this predictable pattern actually serves our learning objectives well.

The prompt engineering process requires strategic specificity. We're not seeking generic movie recommendations; we want comprehensive data packages including memorable quotations—think "Big Kahuna Burger" from Pulp Fiction or iconic lines from Goodfellas, another frequent AI selection from the decade. Your engineered prompt should explicitly request the movie title, release year, cast information, director credentials, and a curated collection of quotable dialogue.

This specificity extends beyond content to role definition. In your system content configuration, establish the AI as a specialized film expert rather than a generic chatbot. While some documentation suggests maintaining the standard "helpful assistant" designation, modern best practices support role-specific assignments when targeting specialized knowledge domains. You're not facilitating casual conversation—you're accessing expert-level film analysis.

Here's where our implementation strategy becomes more nuanced. Initially, we'll request a simple text blob response—no JSON formatting, no structured data, just raw content dumped directly to the browser without even basic HTML paragraph tags. This baseline approach helps you understand the fundamental data flow before adding complexity layers.

Once you've mastered basic text handling, we'll evolve the system to request JSON-formatted responses. This structured approach enables precise data extraction: year as a number, title as a string, stars as an array, quotable quotes as a list, director information as a string, and potentially Academy Award data. The goal isn't overwhelming complexity—we're building systematic understanding through progressive enhancement.
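To make the target concrete, here is a hypothetical example of the structured shape we're steering the AI toward (the specific movie and values are illustrative, since the model's picks vary):

```python
# Hypothetical example of the structured response we want the AI to return.
# The actual movie, of course, depends on the model's answer.
movie = {
    "title": "Pulp Fiction",
    "year": 1994,                        # year as a number
    "director": "Quentin Tarantino",     # director as a string
    "stars": ["John Travolta", "Samuel L. Jackson", "Uma Thurman"],  # array
    "quotes": ["Royale with Cheese."],   # list of quotable dialogue
    "genre": "Crime",
    "description": "Interwoven stories of crime in Los Angeles.",
}
```

Each key maps to a distinct placeholder in the Jinja template later, which is why we insist on these types up front.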


The final implementation involves template rendering with multiple data capture points. Your Jinja template will feature dedicated elements for each data type, creating a professional presentation layer for your structured content. While ambitious developers might consider looping through multiple movies, this lab maintains focus on single-movie mastery. Complex loops belong in advanced implementations, not foundational learning exercises.

Let's implement this step-by-step approach. Starting with server03.py, we'll create server3-lab.py through a standard Save As operation. Your AI role definition evolves from generic assistant to specialized expert: "You are a film historian and critic, classic movie buff, and overall cinema expert." Notice how we incorporate multiple authority markers—film, movie, cinema paired with critic, buff, and expert—to reinforce the specialized role.

Your initial prompt should read: "Please provide an example of a great Hollywood movie from the 1990s. Please describe the movie and include important facts such as the title, year, stars, director, genre, and a few quotable or memorable quotes." The response format remains simple text initially—no JSON complexity yet.
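As a rough sketch, the role definition and prompt above might be wired into a chat messages list like this (the model name and client setup are assumptions about your environment, so the actual API call is left commented out):

```python
# System role: multiple authority markers reinforce the specialized persona.
system_content = ("You are a film historian and critic, classic movie buff, "
                  "and overall cinema expert.")

# User prompt: explicitly lists every fact we want back.
user_prompt = ("Please provide an example of a great Hollywood movie from the "
               "1990s. Please describe the movie and include important facts "
               "such as the title, year, stars, director, genre, and a few "
               "quotable or memorable quotes.")

messages = [
    {"role": "system", "content": system_content},
    {"role": "user", "content": user_prompt},
]

# Requires the openai package and an OPENAI_API_KEY in your environment:
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
# text_blob = response.choices[0].message.content
```

The commented lines mirror the structure of the server03.py starter code; adapt the model name to whatever your course materials use.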

For this first iteration, we're establishing baseline functionality. The system returns raw AI responses directly to the browser, creating a clean text dump that demonstrates core communication pathways. This approach strips away formatting complexity, letting you focus on prompt engineering and response handling fundamentals.

Execute your initial implementation and observe the results. You'll likely see responses featuring classics like "The Shawshank Redemption"—a frequent AI selection that demonstrates the model's training patterns. The response includes all requested elements: title, year, director, starring actors, and memorable quotes like "Hope is a good thing, maybe the best of things, and no good thing ever dies" and "Get busy living or get busy dying."

This text blob approach provides valuable insights, but it represents only the opening phase. Real applications demand structured data handling, which brings us to JSON implementation. Import Python's built-in json module, then modify your response format specification to request JSON objects. Your prompt evolution should explicitly state: "Answers should be in JSON with the following keys: title, year, director, stars, quotes, description, genre."

The stars and quotes values require special attention—specify that both should return as lists containing multiple items rather than single strings. This ensures consistent data structures for your parsing operations. Professional prompt engineering means anticipating data type requirements and communicating them clearly to the AI system.
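Putting those two requirements together, the evolved prompt might look like the following sketch. (The response_format option shown in the comment is an OpenAI API feature available on supported models; whether you use it in addition to the prompt instruction is up to you.)

```python
# Evolved prompt: names every key AND pins down the list-valued fields.
json_prompt = (
    "Please provide an example of a great Hollywood movie from the 1990s. "
    "Answers should be in JSON with the following keys: "
    "title, year, director, stars, quotes, description, genre. "
    "The stars and quotes values should each be a list with multiple items."
)

# On models that support it, JSON mode can also be enforced at the API level:
# response = client.chat.completions.create(
#     model="gpt-4o-mini",
#     messages=[{"role": "user", "content": json_prompt}],
#     response_format={"type": "json_object"},
# )
```

Stating the data types in plain language inside the prompt is the portable approach; API-level JSON mode is a belt-and-suspenders addition.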

Your server-side implementation now handles JSON parsing, converting the AI's string response into a Python dictionary through a standard json.loads() call. This parsed data flows to your Jinja template as a comprehensive dictionary, enabling sophisticated templating operations that access individual data elements through standard dictionary key syntax.
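A minimal sketch of the parsing step, using a simulated AI reply so it runs without an API key (the sample values are illustrative):

```python
import json

# Simulated AI reply; a real response arrives as a JSON-formatted string.
ai_response = '''{
    "title": "The Shawshank Redemption",
    "year": 1994,
    "director": "Frank Darabont",
    "stars": ["Tim Robbins", "Morgan Freeman"],
    "quotes": ["Get busy living or get busy dying."],
    "description": "Two imprisoned men bond over a number of years.",
    "genre": "Drama"
}'''

ai_dict = json.loads(ai_response)  # JSON string -> Python dictionary

# In the Flask route, the whole dictionary is handed to the template:
# return render_template("movie.html", ai_dict=ai_dict)
```

Once parsed, every field is reachable with ordinary dictionary syntax, e.g. ai_dict["stars"][1].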


The HTML template construction requires systematic organization. Create semantic elements for each data type: title headers, paragraph tags for descriptions, list elements for stars and quotes, and appropriate formatting for year and director information. Your Jinja variables access dictionary keys directly: {{ai_dict['title']}}, {{ai_dict['year']}}, and so forth.

For array data like stars and quotes, implement index-based access to avoid potential errors: {{ai_dict['stars'][0]}} and {{ai_dict['stars'][1]}} for the first two cast members, with similar patterns for quotes. This conservative approach ensures template stability even when AI responses contain varying array lengths.
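A trimmed-down version of such a template, rendered inline here for demonstration (Flask installs the jinja2 package, so it's already available in your project; in the real lab this markup lives in an HTML file under templates/ and is rendered via render_template):

```python
from jinja2 import Template

# Inline stand-in for the lab's movie.html template.
page = Template("""
<h1>{{ ai_dict['title'] }} ({{ ai_dict['year'] }})</h1>
<p>Directed by {{ ai_dict['director'] }}</p>
<ul>
  <li>{{ ai_dict['stars'][0] }}</li>
  <li>{{ ai_dict['stars'][1] }}</li>
</ul>
<blockquote>{{ ai_dict['quotes'][0] }}</blockquote>
""")

ai_dict = {"title": "Goodfellas", "year": 1990, "director": "Martin Scorsese",
           "stars": ["Robert De Niro", "Ray Liotta"],
           "quotes": ["Funny how?"]}

html = page.render(ai_dict=ai_dict)
```

Note the fixed indexes [0] and [1]: the template assumes at least two stars and one quote, which is exactly why the prompt insisted on lists with multiple items.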

The final implementation brings together Flask's render_template function, OpenAI integration, and JSON parsing into a cohesive system. Your route handler manages the complete data flow: prompt engineering, AI communication, response parsing, and template rendering. Each component serves a specific purpose in the larger application architecture.

Common implementation challenges often involve character encoding issues, particularly with quotation marks. Modern text editors sometimes introduce "smart quotes" or curly quotes during copy-paste operations, creating JSON parsing errors. When encountering unexpected character errors, systematically examine your prompt strings for non-standard quotation marks and replace them with standard ASCII quotes.
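One defensive fix, sketched with a deliberately broken sample string: normalize the four common smart-quote characters to ASCII before parsing.

```python
import json

# Sample text pasted from a word processor, with curly "smart quotes"
# (U+201C/U+201D) that break json.loads().
raw = '{\u201ctitle\u201d: \u201cPulp Fiction\u201d, \u201cyear\u201d: 1994}'

# Replace each smart-quote character with its ASCII equivalent.
fixed = (raw.replace("\u201c", '"').replace("\u201d", '"')
            .replace("\u2018", "'").replace("\u2019", "'"))

data = json.loads(fixed)  # now parses cleanly
```

The same substitution applies to prompt strings in your source file: retyping the quotes in your code editor is usually the quickest cure.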

This lab represents a significant milestone in AI application development. You've progressed from simple text handling to sophisticated structured data processing, implemented professional prompt engineering techniques, and created dynamic templates that transform AI responses into polished user interfaces. These skills form the foundation for advanced AI-driven applications across numerous domains.

The techniques you've mastered here—role-specific AI configuration, structured response formatting, JSON parsing, and dynamic templating—represent core competencies in modern AI development. As you continue building more complex applications, these foundational skills will enable increasingly sophisticated implementations that deliver real business value through intelligent automation and enhanced user experiences.

Key Takeaways

1. Effective prompt engineering requires specific role definition beyond generic assistant prompts, using domain expertise like film historian and cinema expert

2. JSON response format enables structured data extraction with defined keys for title, year, director, stars, and quotes as separate accessible elements

3. Two-phase implementation approach first validates basic text responses before advancing to complex JSON parsing and template integration

4. Jinja templating supports multiple variable substitution using dictionary keys and list indexing for dynamic HTML content generation

5. Character encoding issues from copy-paste operations can cause JSON parsing errors requiring careful validation of quote characters and formatting

6. Flask render_template function facilitates passing entire dictionaries to HTML templates for comprehensive data presentation

7. List data types in JSON responses require specific indexing syntax like stars[0] and quotes[1] for individual element access

8. Structured prompt engineering specifies exact data requirements including multiple list items and response format constraints for consistent AI output
