Applied Generative AI for Enterprises
A master's-level course on Applied Generative Artificial Intelligence for Enterprises at USC.
Course Description
This course provides a comprehensive overview of Generative Artificial Intelligence and its applications in enterprise settings.
Topics Covered
- Large Language Models (LLMs)
- Retrieval-Augmented Generation (RAG)
- Prompt Engineering
- AI Agents and Tool Use
- Enterprise Deployment and Safety
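To give a flavor of the retrieval step behind Retrieval-Augmented Generation (one of the topics above), here is a minimal sketch. It substitutes a toy bag-of-words cosine similarity for the learned embeddings a real system would use; the function names (`cosine`, `retrieve`) and the example documents are illustrative, not course material:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = Counter(query.lower().split())
    ranked = sorted(docs,
                    key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

docs = [
    "RAG augments a frozen language model with retrieved documents.",
    "Embeddings map words into a continuous vector space.",
]
# Retrieve context, then prepend it to the question for the LLM.
context = retrieve("how does RAG augment a model?", docs)[0]
prompt = f"Context: {context}\n\nQuestion: how does RAG augment a model?"
```

A production pipeline would replace the word-count vectors with an embedding model and a vector store, but the shape is the same: embed, rank by similarity, and stuff the top hits into the prompt.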
Course Information
- Term: Spring 2026
- Section: 31512
Related Posts
Project Updates
- Retrieval-Augmented Generation (RAG)
  Understanding RAG: connecting frozen LLMs to external, dynamic knowledge.
- Prompting for Vibe Coding
  Strategies for effective AI-assisted coding: structure, iteration, and meta-prompting.
- Vibe Coding Report 1: Word Frequency Analyzer
  A report on generating a Word Frequency Analyzer using AI prompts and iterative refinement.
- Vibe Coding Report: Palindrome Checker
  A report on generating a Palindrome Checker using AI prompts and iterative refinement.
- Vibe Coding Report: Group Anagrams
  A report on generating a Group Anagrams script using AI prompts and iterative refinement.
- Vibe Coding Report: Recursive Maze Solver
  A report on generating a Recursive Maze Solver using AI prompts and iterative refinement.
- Vibe Coding Report: Advanced Recursive Maze Solver
  A report on generating an Advanced Recursive Maze Solver (BFS with teleports) using AI prompts.
- Just What Is a Language Model?
  An introduction to language models: prediction, generation, and probabilities.
- Embeddings
  Understanding embeddings: representing language in vector space.
- The Illustrated Transformer
  A visual explanation of the Transformer model, its components, and how it works.
- Evaluating LLM Hallucination with TruthfulQA
  HW2 for USC ISE-547, Spring 2026.
- Homework 3: Synthetic Data Generation and Classical Machine Learning
  HW3 for USC ISE-547, Spring 2026.
- Comparative Analysis of Chain-of-Thought Reasoning in Large Language Models
  An assignment examining different Chain-of-Thought (CoT) prompting strategies in LLMs.
- Fine-tuning LLMs on Google Colab with QLoRA and Unsloth
  A quick guide to fine-tuning lightweight LLMs on a Tesla T4 GPU in under an hour for ISE-547.
- LLM Evaluation: Frameworks, Benchmarks, and Best Practices
  A deep dive into how we measure the intelligence, reliability, and safety of Large Language Models.
- Understanding Chain-of-Thought Prompting: Why LLMs Need to 'Think' Before They Speak
  A pedagogical look at Chain-of-Thought (CoT) prompting, its theoretical foundations, and its impact on LLM reasoning.
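Several of the vibe coding reports above revolve around small, self-contained algorithms. As one illustration, here is a minimal sketch of the Group Anagrams task; this is an assumed solution using a sorted-letter signature, not the code from the report itself:

```python
from collections import defaultdict

def group_anagrams(words: list[str]) -> list[list[str]]:
    """Group words that are anagrams of each other.

    Two words are anagrams iff their letters sorted alphabetically
    are identical, so that sorted string serves as a group key.
    """
    groups: dict[str, list[str]] = defaultdict(list)
    for w in words:
        groups["".join(sorted(w))].append(w)
    return list(groups.values())

# Groups appear in first-seen order, e.g.:
# group_anagrams(["eat", "tea", "tan", "ate", "nat", "bat"])
# -> [["eat", "tea", "ate"], ["tan", "nat"], ["bat"]]
```

Sorting each word costs O(k log k) per word of length k; a character-count tuple key would bring that to O(k) at the cost of a slightly bulkier key.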