
Transforming B2B Pricing Support with AI & ChatGPT Enterprise
Zilliant is a private, Austin‑based technology company founded in 1998 (some sources cite 1999) by data‑science pioneer Peter Zandan. It provides cloud‑native, AI‑driven software that optimizes pricing and sales for B2B companies. Its platform offers price management and optimization, configure‑price‑quote (CPQ), and revenue intelligence solutions, enabling businesses, particularly in manufacturing, distribution, and industrial sectors, to turn pricing into a strategic advantage. Zilliant is recognized for helping firms reduce “pricing anxiety” by automating pricing decisions, protecting margins, and accelerating deal execution across channels with a single source of pricing truth.

Key Business Challenges
The support team’s effectiveness was severely constrained by the need to navigate multiple siloed systems (SharePoint, Confluence, Salesforce, email, and Teams chats) to locate relevant documentation. This fragmented environment made it difficult to retrieve accurate information promptly, slowing down support operations.
Historical and ongoing documentation gaps exacerbated support challenges and disrupted project execution. Without a centralized, reliable knowledge base, critical insights were often lost when individuals transitioned or left. This created single points of failure, reducing efficiency, delaying issue resolution, and increasing risk for ongoing project work.
Customer-facing staff lacked a streamlined, intelligent interface that could surface the right documentation in context. Without AI-driven search and natural-language access, engineers relied on manual searching and guesswork, leading to wasted time and potential errors.
Outcomes Delivered

- Increased efficiency in customer support and development workflows.
- Reduction in operational costs and mitigation of knowledge loss risks.
- Scalable, secure AI-driven solutions tailored to the Customer’s unique requirements.
- A well-enabled workforce proficient in leveraging AI tools.
- Measurable adoption of Generative AI technologies across the organization.
- Improved understanding of AI applications across various business units.
- Increased employee satisfaction and confidence in using AI technologies.
- Reduced dependency on individual knowledge holders.
- Faster onboarding for new team members.
- Enhanced continuity and resilience in project execution.
- Improved ability to release new product features without breaking current implementations.

The Solution Roadmap
We embedded a custom GPT nicknamed ZAC inside ChatGPT Enterprise as the unified place to ask questions. When ZAC needed authoritative context beyond the chat window, it triggered a secure retrieval workflow on AWS (via API Gateway → Lambda). This was combined with per-security-group API keys, ensuring that access remained isolated, tightly scoped, and auditable while keeping information available on demand.
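
To make the entry point concrete, here is a minimal sketch of how an API Gateway–fronted Lambda could validate a per-security-group API key before invoking retrieval. The header name, environment variable, and `run_retrieval` helper are illustrative assumptions, not Zilliant’s actual code.

```python
# Hypothetical sketch: the Lambda behind API Gateway that serves ZAC's retrieval calls.
import json
import os

# Assumed mapping from API key to the security group it is scoped to.
# In practice this could live in API Gateway usage plans or Secrets Manager.
API_KEY_GROUPS = json.loads(os.environ.get("API_KEY_GROUPS", "{}"))


def handler(event, context):
    """Validate the caller's per-security-group API key, then run retrieval."""
    headers = event.get("headers") or {}
    api_key = headers.get("x-api-key")

    group = API_KEY_GROUPS.get(api_key)
    if group is None:
        # Unknown key: reject before touching any internal systems.
        return {"statusCode": 403, "body": json.dumps({"error": "forbidden"})}

    body = json.loads(event.get("body") or "{}")
    question = body.get("question", "")

    # Downstream retrieval is scoped to the caller's security group so results
    # never cross access boundaries (the RBAC filter is applied in the agent).
    answer = run_retrieval(question, security_group=group)
    return {"statusCode": 200, "body": json.dumps({"answer": answer})}


def run_retrieval(question: str, security_group: str) -> str:
    # Placeholder for the LangGraph agent invocation described below.
    return f"[{security_group}] retrieval result for: {question}"
```

Rejecting unknown keys before any internal lookup keeps the audit trail simple: every downstream call is already attributed to a known security group.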
When a user asks ZAC something non-trivial, an AWS Lambda function hosts a LangGraph agent that interprets the request. It looks up secrets in AWS Secrets Manager, then retrieves short-lived conversation history from Amazon DynamoDB before running the retrieval plan. Inside the graph, nodes handle authentication, routing, and retrieval steps: expanding queries, decomposing complex ones, and applying RBAC filters. Results are compressed if needed, then passed back to ChatGPT, ensuring fast, reliable answers even for vague asks.
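
A stripped-down LangGraph sketch of that flow might look like the following, assuming a simple linear graph of authenticate → plan → retrieve → compress; the node bodies are placeholders for the real prompts, vector search, and RBAC logic.

```python
# Illustrative LangGraph agent skeleton; node logic is placeholder, not the production graph.
from typing import List, TypedDict

from langgraph.graph import END, StateGraph


class AgentState(TypedDict):
    question: str
    security_group: str
    sub_queries: List[str]
    documents: List[str]
    answer: str


def authenticate(state: AgentState) -> dict:
    # Placeholder: verify the security group resolved from the API key.
    return {}


def plan_queries(state: AgentState) -> dict:
    # Placeholder for query expansion / decomposition of vague or compound asks.
    return {"sub_queries": [state["question"]]}


def retrieve(state: AgentState) -> dict:
    # Placeholder vector search with an RBAC metadata filter per security group.
    docs = [f"doc matching '{q}' for {state['security_group']}" for q in state["sub_queries"]]
    return {"documents": docs}


def compress(state: AgentState) -> dict:
    # Placeholder: trim retrieved context to fit the response budget.
    return {"answer": " | ".join(state["documents"])[:2000]}


graph = StateGraph(AgentState)
graph.add_node("authenticate", authenticate)
graph.add_node("plan_queries", plan_queries)
graph.add_node("retrieve", retrieve)
graph.add_node("compress", compress)
graph.set_entry_point("authenticate")
graph.add_edge("authenticate", "plan_queries")
graph.add_edge("plan_queries", "retrieve")
graph.add_edge("retrieve", "compress")
graph.add_edge("compress", END)
agent = graph.compile()

if __name__ == "__main__":
    result = agent.invoke({
        "question": "How do we configure price lists per region?",
        "security_group": "support-emea",
        "sub_queries": [],
        "documents": [],
        "answer": "",
    })
    print(result["answer"])
```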
An AWS-native ingestion pipeline continuously refreshes the vector store. A scheduled trigger fans out work through Amazon SQS queues, while Lambda functions download raw items into Amazon S3. A Step Functions workflow then extracts, chunks, embeds, and upserts the content into Pinecone, with automated handling of deletions and errors. This keeps information current, relevant, and accessible to support teams, avoiding stale documentation and improving the overall reliability of responses.
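
As a rough sketch, one Step Functions task in that pipeline could pull a raw document from S3, chunk it, embed the chunks, and upsert them into Pinecone. The event shape, index name, and embedding model below are assumptions for illustration, not the production configuration.

```python
# Hypothetical sketch of a single ingestion task: S3 -> chunk -> embed -> Pinecone upsert.
import os

import boto3
from openai import OpenAI
from pinecone import Pinecone

s3 = boto3.client("s3")
openai_client = OpenAI()
index = Pinecone(api_key=os.environ["PINECONE_API_KEY"]).Index("zac-knowledge")  # assumed index name


def chunk(text: str, size: int = 1000, overlap: int = 200) -> list[str]:
    """Naive fixed-size chunking with overlap; the real pipeline may be smarter."""
    return [text[i : i + size] for i in range(0, len(text), size - overlap)]


def handler(event, context):
    # Bucket and key are assumed to be passed in by the Step Functions state.
    bucket, key = event["bucket"], event["key"]
    text = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

    chunks = chunk(text)
    embeddings = openai_client.embeddings.create(
        model="text-embedding-3-small",  # assumed embedding model
        input=chunks,
    )

    # Upsert one vector per chunk, keyed by source document and position so
    # re-ingestion overwrites stale entries instead of duplicating them.
    index.upsert(
        vectors=[
            {
                "id": f"{key}#{i}",
                "values": item.embedding,
                "metadata": {"source": key, "chunk": i},
            }
            for i, item in enumerate(embeddings.data)
        ]
    )
    return {"chunks": len(chunks)}
```

Keying vectors by source document and chunk position means re-running the same document overwrites its earlier vectors, which is what keeps the index free of stale duplicates.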
By turning ChatGPT Enterprise into a smart front door, backed by an AWS retrieval and ingestion backbone, Zilliant reduced time spent hunting for answers, lowered dependence on single knowledge holders, and sped up onboarding and day-to-day support work. These outcomes map directly to its goals around operational efficiency, enablement, and quality (e.g., faster onboarding and better continuity).