AI & Technology · February 22, 2026 · 10 min read

What I learned building an AI company before GPT existed

In 2016, we started building conversational AI and ML products when most of the industry was still obsessed with mobile apps. Here is what that journey taught me about AI, product delivery, and what actually matters.

Conversational AI · Product Delivery · Voicebots · Computer Vision · GenAI

Mahroof K

Entrepreneur turned Program & Product Leader with 12+ years of experience building and scaling AI, SaaS, web, and mobile products. Former Founder & CEO of Cedex Technologies (acquired).

In 2016, Facebook opened the Messenger platform to bots. There was no GPT, no Claude, no readily available large language model to call. If you wanted a system to understand natural language, you had to orchestrate the understanding yourself.

At Cedex Technologies, we saw an immediate opportunity, but it wasn't just about the technology—it was about user behavior.

We had observed a growing friction in the digital ecosystem: "app fatigue." Users were hesitating to download new mobile apps, and mobile website engagement was dropping. People were spending the vast majority of their screen time inside chat applications. Our thesis was simple: stop forcing users to come to your platform; meet them where they already are. That thesis launched our Conversational AI practice, and it taught me more about product delivery than any course or framework ever could.

Omnichannel Architecture & Engine-Agnostic NLP

To meet users where they were, we couldn't just build for one platform. We developed omnichannel conversational AI applications connected to WhatsApp, Facebook Messenger, SMS, the Web, mobile apps, and Telegram.

Building this in the pre-LLM era required a highly modular architecture. We utilized frameworks like Botkit and the Microsoft Bot Framework for our core bot development and orchestration layers.
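
To make this concrete, here is a minimal sketch of what such an orchestration layer looks like in Botkit's modern (4.x) API. The endpoint, trigger phrases, and replies are illustrative, not our production code; in our stack, anything the patterns missed fell through to the NLU layer described next.

```typescript
import { Botkit } from 'botkit';

// One controller fronts every channel; platform adapters plug into it,
// which is what made the omnichannel setup tractable.
const controller = new Botkit({
  webhook_uri: '/api/messages', // endpoint the channel adapters post to
});

// Simple pattern-matched replies; unmatched messages are handed off
// to the NLU engine for intent detection.
controller.hears(['hello', 'hi'], 'message', async (bot, message) => {
  await bot.reply(message, 'Hi! How can I help you today?');
});
```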

More importantly, we learned early on that no single NLP engine could solve every problem. To serve our international client base effectively, we had to remain engine-agnostic and map the technology to the specific business requirement (a sketch of the adapter pattern follows the list below).

  • We leaned on Dialogflow (formerly API.ai) for robust, general-purpose conversational flows.
  • When a global client required specialized Arabic language support, we integrated IBM Watson.
  • For domains with strict data privacy and compliance laws—like our clients in the banking sector—we deployed Rasa so we could host the open-source NLU pipeline entirely on-premise.
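
The pattern that made this possible is simple: hide every engine behind one interface, so that swapping Dialogflow for Watson or Rasa is a configuration decision, not a rewrite. A rough sketch, with hypothetical names:

```typescript
// Hypothetical adapter interface; each vendor SDK is wrapped behind it.
interface NluResult {
  intent: string;
  confidence: number;
  entities: Record<string, string>;
}

interface NluEngine {
  detectIntent(utterance: string, language: string): Promise<NluResult>;
}

// Stubs standing in for the real vendor integrations.
class DialogflowEngine implements NluEngine {
  async detectIntent(utterance: string, language: string): Promise<NluResult> {
    throw new Error('Dialogflow call omitted in this sketch');
  }
}
class WatsonEngine implements NluEngine {
  async detectIntent(utterance: string, language: string): Promise<NluResult> {
    throw new Error('Watson call omitted in this sketch');
  }
}
class RasaEngine implements NluEngine {
  async detectIntent(utterance: string, language: string): Promise<NluResult> {
    throw new Error('self-hosted Rasa call omitted in this sketch');
  }
}

// Engine selection maps directly to the business requirement.
function pickEngine(req: { language: string; onPremOnly: boolean }): NluEngine {
  if (req.onPremOnly) return new RasaEngine();          // e.g. banking compliance
  if (req.language === 'ar') return new WatsonEngine(); // specialized Arabic support
  return new DialogflowEngine();                        // general-purpose default
}
```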

But assembling these tools was only half the battle.

Pushing Boundaries: Building Custom NLP

While our engine-agnostic approach allowed us to deliver international projects efficiently, we occasionally hit hard linguistic constraints. On one major project targeting the Indian digital audience, off-the-shelf engines fell apart completely.

The audience for this project spoke Hinglish—a fluid, informal hybrid of Hindi and English written in Latin script. "Yaar kal kaunsi movie hai?" (roughly, "Hey, which movie is on tomorrow?") was a perfectly normal query, but no off-the-shelf NLP model in 2016 could parse that intent.

Because there were no Hinglish NLP models available, we had to build our own. We spent weeks collecting training data, running surveys, and hand-labeling thousands of utterances to train custom classifiers. It was a rigorous, highly manual process, but it resulted in a proprietary capability that allowed us to serve millions.
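
In spirit, those classifiers were straightforward supervised models trained on hand-labeled utterances. Here is a toy multinomial Naive Bayes version of the idea; the training rows are invented examples, not our dataset, and the production models were considerably more involved.

```typescript
type Example = { text: string; intent: string };

const tokenize = (s: string): string[] =>
  s.toLowerCase().replace(/[^a-z0-9\s]/g, ' ').split(/\s+/).filter(Boolean);

class NaiveBayesIntentClassifier {
  private intentCounts = new Map<string, number>();
  private wordCounts = new Map<string, Map<string, number>>();
  private vocab = new Set<string>();
  private total = 0;

  train(examples: Example[]): void {
    for (const { text, intent } of examples) {
      this.total++;
      this.intentCounts.set(intent, (this.intentCounts.get(intent) ?? 0) + 1);
      const counts = this.wordCounts.get(intent) ?? new Map<string, number>();
      for (const w of tokenize(text)) {
        this.vocab.add(w);
        counts.set(w, (counts.get(w) ?? 0) + 1);
      }
      this.wordCounts.set(intent, counts);
    }
  }

  classify(text: string): { intent: string; logProb: number } {
    let best = { intent: 'fallback', logProb: -Infinity };
    for (const [intent, count] of this.intentCounts) {
      const counts = this.wordCounts.get(intent)!;
      let wordsInIntent = 0;
      for (const c of counts.values()) wordsInIntent += c;
      // log P(intent) + sum of log P(word | intent), Laplace-smoothed
      let logProb = Math.log(count / this.total);
      for (const w of tokenize(text)) {
        logProb += Math.log(((counts.get(w) ?? 0) + 1) / (wordsInIntent + this.vocab.size));
      }
      if (logProb > best.logProb) best = { intent, logProb };
    }
    return best;
  }
}

// Usage with invented Hinglish utterances:
const nb = new NaiveBayesIntentClassifier();
nb.train([
  { text: 'yaar kal kaunsi movie hai', intent: 'movie_schedule' },
  { text: 'kal kya release ho rahi hai', intent: 'movie_schedule' },
  { text: 'quiz start karo', intent: 'start_quiz' },
]);
console.log(nb.classify('kaunsi movie aa rahi hai kal').intent); // movie_schedule
```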

The results validated the effort. For this specific market, our Quizmaster chatbot scaled to 1.5 million unique users, and our Filmykaat bot crossed the 2 million user mark.

Beyond Text: Catching the Voice Wave

Through 2017 and 2018, the conversational frontier expanded to voice, and we moved aggressively into developing Alexa Skills and Google Actions.

This wasn't an experimental R&D wing; it was a profitable business line. Designing for voice is notoriously difficult because you lack the visual safety nets (like buttons or menus) that exist in text chats. If the conversational design fails, the user simply drops off.

We mastered these UX challenges to the point where many of our Alexa Skills were highly successful and officially monetized by Amazon. Just as YouTube creators earn revenue for viewership, we received monthly payouts based on the high user engagement our voice applications generated.
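
For a sense of what the voice work looked like, here is a minimal Alexa handler using the ASK SDK for Node.js (ask-sdk-core). The intent name and copy are illustrative; the key design point is the reprompt, which plays the role that buttons and menus play in text chat.

```typescript
import * as Alexa from 'ask-sdk-core';

const MovieScheduleIntentHandler: Alexa.RequestHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'MovieScheduleIntent';
  },
  handle(handlerInput) {
    // In voice there is no visual safety net: if the user goes silent,
    // the reprompt re-asks instead of letting the session drop.
    return handlerInput.responseBuilder
      .speak('Three movies release tomorrow. Want to hear the list?')
      .reprompt("Should I read out tomorrow's releases?")
      .getResponse();
  },
};

export const handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(MovieScheduleIntentHandler)
  .lambda();
```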

Multimodal Before It Was a Buzzword

Our AI practice didn't stop at conversation. As client demands grew more complex, we expanded into broader Machine Learning and Computer Vision projects.

We integrated facial recognition using the Face API, deployed Google Vision APIs for complex image analysis and visual similarity search, and delivered robust ML analytics dashboards. We were building multimodal AI solutions—combining text, voice, and vision—years before the industry coined the term.
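
As one small example of the vision side, label detection with the official Google Cloud Vision client looks roughly like this (the file path is a placeholder):

```typescript
import { ImageAnnotatorClient } from '@google-cloud/vision';

// Returns the labels Google Vision assigns to an image on disk.
async function labelImage(path: string): Promise<string[]> {
  const client = new ImageAnnotatorClient();
  const [result] = await client.labelDetection(path);
  return (result.labelAnnotations ?? []).map((l) => l.description ?? '');
}

labelImage('./product-photo.jpg').then((labels) => console.log(labels));
```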

What Changed with GPT and Modern LLMs

When OpenAI released the GPT models we use today, I watched with a mix of awe and recognition. The capability jump was extraordinary, but the core challenges of delivering a product had not changed. They had simply shifted.

In the pre-LLM era, the hard problem was understanding language and intent.

In the GenAI era, the hard problems are:

  • Reliability: How do you get a non-deterministic model to consistently generate the right output?
  • Grounding: How do you prevent confident hallucinations?
  • Cost at Scale: How do you serve millions of queries without destroying your margins?
  • Evaluation: How do you measure if your AI product is actually solving the user's problem?

These are product and engineering problems, not AI problems. They require the exact same rigor, discipline, and user-centricity that we applied in 2016.
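
The reliability problem in particular looks a lot like classic defensive engineering. One common pattern, sketched here with a hypothetical model client, is to wrap the non-deterministic call in validation and bounded retries:

```typescript
// `callModel` stands in for any LLM client; it is not a specific vendor API.
type Validator<T> = (raw: string) => T | null;

async function generateWithRetries<T>(
  callModel: (prompt: string) => Promise<string>,
  prompt: string,
  validate: Validator<T>,
  maxAttempts = 3,
): Promise<T> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const raw = await callModel(prompt);
    const parsed = validate(raw); // reject malformed or ungrounded output
    if (parsed !== null) return parsed;
  }
  throw new Error(`Model output failed validation after ${maxAttempts} attempts`);
}
```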

What I Learned After a Decade in AI

Looking back at over a decade of product delivery, the lessons that emerged from those early days remain my guiding principles today:

1. The model is not the product. The model is merely a component. The product is the entire system: the UX, the fallback handling, the data pipeline, the human escalation path, and the analytics. Teams that treat the model as the product ship demos, not products.

2. Measure what matters to users. We spent a lot of time optimizing intent classification accuracy for our custom NLP. But users didn't care about our F1 score. They cared about the resolution rate—did they get the answer they needed? (A rough sketch of that metric follows this list.)

3. Start with the user, not the tech. We succeeded because we identified a user behavior (app fatigue) and used AI to solve it.
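
Resolution rate, as we tracked it, is a deliberately simple metric. A rough sketch, with illustrative field names:

```typescript
interface Conversation {
  resolved: boolean;       // user confirmed the answer or completed the task
  escalatedToHuman: boolean;
}

// Share of conversations the bot resolved without a human taking over.
function resolutionRate(conversations: Conversation[]): number {
  if (conversations.length === 0) return 0;
  const resolved = conversations.filter((c) => c.resolved && !c.escalatedToHuman).length;
  return resolved / conversations.length;
}
```

The point is not the arithmetic; it is that the numerator is defined by the user's outcome, not by the model's accuracy.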

Today, as I manage the delivery of end-to-end GenAI platforms, the technology has evolved beyond recognition, but the discipline of building a great product remains exactly the same.
