
Building an AI Agent API for Property Search with NestJS and LangGraph

8 min read · November 10, 2025
NestJS · LangGraph · AI · LangChain · TypeScript · API Development

How I built a conversational property search system that understands natural language and provides accurate property recommendations using NestJS, LangGraph, and LangChain.

The Problem

Traditional property search interfaces require users to fill out multiple forms with specific filters. This creates friction and doesn't match how people naturally describe what they're looking for. Users want to say:

"I need a 2-bedroom apartment under $2000 in downtown"

And get relevant results immediately, not navigate through dropdown menus and checkboxes.

The Solution

I built an AI-powered property search agent using **NestJS**, **LangGraph**, and **LangChain** that transforms natural language into structured property searches.

Tech Stack

  • **NestJS** - Enterprise-grade Node.js framework exposing the search API (see the controller sketch after this list)
  • **LangGraph** - State machine framework for agent workflows
  • **LangChain** - Framework for LLM-powered applications
  • **PostgreSQL** - Conversation memory and checkpointing
  • **Redis** - Real-time response streaming
  • **Google Gemini** - LLM for natural language understanding
  • **Zod** - Type-safe schema validation
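
To make the stack concrete, here is a minimal sketch of what the NestJS entry point might look like. The `SearchController`, `SearchRequestDto`, `PropertyAgentService`, and the `/search` route are hypothetical names for illustration, not the project's actual module layout.

```typescript
// search.controller.ts -- hypothetical sketch of the HTTP layer; names are illustrative
import { Body, Controller, Injectable, Post } from '@nestjs/common';

export class SearchRequestDto {
  query!: string;          // natural-language request, e.g. "2-bedroom apartment under $2000 in downtown"
  conversationId!: string; // ties the request to one persistent agent conversation
}

// Thin wrapper around the LangGraph agent (the graph itself is sketched under "Implementation").
@Injectable()
export class PropertyAgentService {
  async ask(query: string, conversationId: string): Promise<string> {
    // ...invoke the compiled graph with { thread_id: conversationId } and return its reply
    return 'not implemented in this sketch';
  }
}

@Controller('search')
export class SearchController {
  constructor(private readonly agent: PropertyAgentService) {}

  // POST /search -- hand the raw sentence to the agent and return its recommendation text
  @Post()
  async search(@Body() dto: SearchRequestDto): Promise<string> {
    return this.agent.ask(dto.query, dto.conversationId);
  }
}
```

Keeping the controller thin and pushing all LangGraph logic into an injectable service keeps the agent testable and leaves the HTTP layer purely framework-idiomatic.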

Key Features

  • Natural language property search
  • Conversation context maintenance
  • Accurate property recommendations
  • Real-time response streaming
  • Persistent conversation memory

Implementation

The agent uses LangGraph's state machine pattern to handle complex conversational flows, with PostgreSQL for persistent memory and Redis for real-time streaming.
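
As a rough sketch of that wiring (not the post's actual code): the graph below uses LangGraph's `MessagesAnnotation` state, a Gemini chat model, and the Postgres checkpointer so that each `thread_id` keeps its own persistent conversation history. The node name, system prompt, model id, and the `buildAgentGraph` helper are assumptions for illustration.

```typescript
// agent.graph.ts -- minimal LangGraph wiring with Postgres-backed memory (illustrative sketch)
import { StateGraph, MessagesAnnotation, START, END } from '@langchain/langgraph';
import { PostgresSaver } from '@langchain/langgraph-checkpoint-postgres';
import { ChatGoogleGenerativeAI } from '@langchain/google-genai';
import { SystemMessage } from '@langchain/core/messages';

// Model id and prompt are assumptions; the post only specifies "Google Gemini".
const model = new ChatGoogleGenerativeAI({ model: 'gemini-1.5-flash', temperature: 0 });

// Single node: send the running message history to the LLM and append its reply to state.
async function callModel(state: typeof MessagesAnnotation.State) {
  const response = await model.invoke([
    new SystemMessage('You are a property search assistant. Extract the search criteria and answer concisely.'),
    ...state.messages,
  ]);
  return { messages: [response] };
}

export async function buildAgentGraph(connectionString: string) {
  // PostgresSaver checkpoints the graph state, so conversations survive restarts.
  const checkpointer = PostgresSaver.fromConnString(connectionString);
  await checkpointer.setup(); // creates the checkpoint tables on first run

  return new StateGraph(MessagesAnnotation)
    .addNode('agent', callModel)
    .addEdge(START, 'agent')
    .addEdge('agent', END)
    .compile({ checkpointer });
}

// Usage: the thread_id selects which persistent conversation the message belongs to.
// const graph = await buildAgentGraph(process.env.DATABASE_URL!);
// const result = await graph.invoke(
//   { messages: [{ role: 'user', content: 'I need a 2-bedroom apartment under $2000 in downtown' }] },
//   { configurable: { thread_id: 'conversation-123' } },
// );
```

Because the checkpointer keys state by `thread_id`, a follow-up message in the same conversation is answered with the earlier messages already in context.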

Results

The agent converts natural-language queries into structured property searches, maintains context across conversation turns, and returns recommendations that match the user's stated criteria.

Key Learnings

  • Structured output ensures reliable LLM responses (see the sketch after this list)
  • Separate agents for different use cases improve performance
  • Persistent memory enables meaningful conversations
  • Error handling is crucial for production systems
  • Streaming improves user experience
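
To illustrate the first learning, here is one way structured output can be enforced with Zod and LangChain's `withStructuredOutput`. The `SearchFilters` schema, its fields, and the `extractFilters` helper are hypothetical; the project's real filter model is not shown in this post.

```typescript
// search-filters.ts -- one way to enforce structured output with Zod (hypothetical schema)
import { z } from 'zod';
import { ChatGoogleGenerativeAI } from '@langchain/google-genai';

// The model is constrained to this shape instead of replying with free-form text.
const SearchFilters = z.object({
  bedrooms: z.number().int().optional().describe('Number of bedrooms, if the user specified one'),
  maxPrice: z.number().optional().describe('Maximum price in USD, if the user specified one'),
  location: z.string().optional().describe('Neighborhood or area, if the user specified one'),
  propertyType: z.enum(['apartment', 'house', 'condo']).optional(),
});

const model = new ChatGoogleGenerativeAI({ model: 'gemini-1.5-flash', temperature: 0 });
const extractor = model.withStructuredOutput(SearchFilters);

// "I need a 2-bedroom apartment under $2000 in downtown" should yield something like
// { bedrooms: 2, maxPrice: 2000, location: 'downtown', propertyType: 'apartment' }.
export async function extractFilters(query: string) {
  return extractor.invoke(`Extract property search filters from this request: ${query}`);
}
```

With the output typed and validated, the search layer can build its database query from concrete fields instead of parsing prose, which is what makes the responses reliable enough for production use.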