By Michael Okpotu Onoja
When we launched Ayema in 2021, we thought we knew how to build a social commerce platform. We’d read every scaling case study we could find, studied various architecture patterns, and felt confident about our technical approach.
Then Nigerian reality hit us in the face.
By month three, we had 5,000 users and a server bill that was already unsustainable. By month six, our image uploads were taking 45 seconds on 4G connections. By month nine, we were seriously considering shutting the whole thing down.
Today, Ayema has over 102,000 active users, processes 2 million engagement events monthly, maintains 99.2% uptime, and runs on a ₦25,000/month server budget. We’ve received ₦15 million in funding and have over 500 vendors actively trading on the platform.
But getting here required making technical decisions that would make Silicon Valley CTOs question my sanity. Here’s what actually worked and what spectacularly failed when building for the Nigerian market.
Decision 1: PHP/Laravel instead of the “Cool” Stack (The choice everyone questioned)
The Silicon Valley playbook said: Use Node.js with React full-stack. It’s what everyone uses. It scales better. It’s “modern.”
What we did instead: Built on PHP 8 and Laravel 10.
Why everyone thought I was crazy:
My co-founder’s exact words: “Are you sure? PHP? In 2021?”
The investors we pitched looked confused. “But the cool startups use Node and microservices…”
Even developers I tried to hire raised eyebrows. “I thought PHP was dead.”
Why it was actually brilliant:
Here’s what nobody tells you about building in Nigeria: you can’t afford to debug framework issues at 2 AM when you’re a two-person team with no funding.
Laravel gave us:
- Battle-tested stability – No surprises, no “this worked in development but breaks in production.”
- Everything built-in – Authentication, job queues, payments, file storage, all there.
- Huge community – When we got stuck, there were 100 Stack Overflow answers, not 3.
- Lower server costs – PHP runs efficiently on cheaper hardware than Node.
The impact:
While competitors spent months wrestling with WebSocket scaling and event loop issues, we shipped features. Our first marketplace module took 2 weeks to build because Laravel’s ecosystem had everything we needed.
Cost saved: Approximately ₦300,000 in debugging time and server optimization over 18 months.
The lesson: The best technology isn’t the newest, it’s the one that lets you move fastest with the resources you have.
Decision 2: VPS Over AWS (The ₦360,000 Bet)
The Silicon Valley playbook said: Use AWS. It scales infinitely. Everyone uses it. It’s professional.
What we did instead: Rented a VPS (Virtual Private Server) for ₦25,000/month.
The math that changed everything:
When we started, we priced out AWS:
- EC2 t3.medium instance: ~$35/month (₦40,000)
- RDS for database: ~$25/month (₦28,000)
- CloudFront CDN: ~$10/month (₦11,000)
- Data transfer: ~$15/month (₦17,000)
- Total: ₦96,000/month minimum
Our VPS:
- 6GB RAM, 120GB SSD, 2.93TB bandwidth
- Total: ₦25,000/month
The calculation:
Over 24 months, VPS saved us ₦1,704,000. That’s real money that went into user acquisition instead of server bills.
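The savings figure follows directly from the monthly estimates above. A quick back-of-envelope check (illustrative Python, using the ₦ figures quoted in this section):

```python
# AWS estimate vs VPS, in naira per month, from the breakdown above.
aws_monthly = 40_000 + 28_000 + 11_000 + 17_000  # EC2 + RDS + CloudFront + transfer
vps_monthly = 25_000

monthly_saving = aws_monthly - vps_monthly
saving_24_months = monthly_saving * 24

print(monthly_saving)    # 71000 naira saved per month
print(saving_24_months)  # 1704000 naira over 24 months
```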
But there was a catch:
With AWS, scaling is clicking a button. With VPS, I’m the one optimising databases at 3 AM when things slow down.
Was it worth it? Absolutely.
Why this worked:
Nigerian internet infrastructure means most of our traffic comes during specific hours (6-10 PM). We didn’t need AWS’s “infinite scaling”; we needed predictable costs and solid performance during peak hours.
We optimised ruthlessly:
- MySQL tuning for our exact workload
- Redis caching for frequently accessed data
- Image optimisation pipeline (3MB uploads → 180KB served)
- Aggressive database indexing
The result: 99.2% uptime, 280ms average API response time, supporting 102,000 users on that ₦25,000/month server.
The lesson: Scaling isn’t always about infinite resources, sometimes it’s about ruthless optimisation within constraints.
Decision 3: Hybrid Frontend Strategy instead of pure JavaScript SPA
The Silicon Valley playbook said: Build a Single Page Application (SPA) with JavaScript frameworks. Modern, fast, smooth user experience.
What we did instead: Hybrid approach. Laravel’s Blade templates for most pages, with targeted JavaScript enhancements for interactive features only.
The problem nobody warned us about:
Initially, we tried building heavily JavaScript-dependent pages. Beautiful, smooth, modern. And completely unusable on Nigerian 3G/4G.
What was happening:
- Our JavaScript bundles: Too large for slow connections
- Load time on 4G: 30-45 seconds for interactive features
- Users on poor connections: They just left
We watched our analytics: 67% of users who started loading pages with heavy JavaScript abandoned before everything loaded.
The fix that felt like going backwards:
We spent a weekend simplifying our approach, using Laravel’s server-side rendering (Blade templates) for most of the platform, and only adding JavaScript where absolutely necessary for interactivity.
The new approach:
- Most pages: Pure server-rendered HTML from Laravel
- Interactive features (feed updates, live reactions, marketplace browsing): Light JavaScript enhancements
- Initial page load: 50KB of HTML instead of waiting for large JavaScript files
The impact:
- Load time on 4G: 45 seconds → 2.8 seconds
- Bounce rate: 67% → 23%
- User acquisition suddenly became viable again
What we added JavaScript for:
- Real-time feed updates (AJAX calls to our Laravel API)
- Interactive product browsing
- Comment threads
- Instant reactions
These features load after the main content is already visible. Users see content immediately in server-rendered HTML, then get enhanced interactivity as JavaScript loads progressively.
The lesson: Your users don’t care about your tech stack’s elegance; they care whether your app actually loads.
Decision 4: Building our own payment wallet system
The Silicon Valley Playbook said: Just integrate Stripe. Simple, clean, done.
What we did instead: Built a full wallet system with double-entry bookkeeping on top of Paystack and Flutterwave.
Why we couldn’t just use payment gateways:
Our revenue model required:
- Vendors earn from product sales on the marketplace
- Content creators can monetise their posts through our platform
- Affiliates earn commissions when they refer vendors or promote products
- Users need to be able to withdraw earnings to their bank accounts
- Peer-to-peer transactions between users
Paystack and Flutterwave are excellent for “collect payment, deliver product.” But they’re not built for peer-to-peer transfers or micro-earnings that accumulate before withdrawal.
The system we built:
Every monetary transaction creates two records:
- Debit from one user
- Credit to another user
This double-entry bookkeeping means money never mysteriously vanishes. When a payment gateway sends a duplicate webhook (which happens), we can detect it because the reference ID already exists.
The scary moment:
Three months in, Paystack sent duplicate success callbacks for 23 transactions. Our system detected every single one and prevented double-crediting users.
If we’d built this naively, we would have lost ₦127,000 to duplicate credits.
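The two safeguards that caught those duplicates can be sketched in a few lines. This is a minimal illustration in Python (Ayema’s actual code is PHP/Laravel, and the names here are hypothetical): every transfer writes a matched debit/credit pair, and a webhook whose reference ID has already been seen is rejected.

```python
# Minimal sketch: double-entry ledger plus idempotent webhook handling.
class Ledger:
    def __init__(self):
        self.entries = []           # (reference, account, amount) rows
        self.seen_references = set()

    def transfer(self, reference, from_account, to_account, amount):
        # Idempotency check: a duplicate gateway callback reuses the same
        # reference ID, so the second attempt is detected and ignored.
        if reference in self.seen_references:
            return False
        self.seen_references.add(reference)
        # Double entry: one debit and one credit, written together,
        # so money can never appear or vanish on one side only.
        self.entries.append((reference, from_account, -amount))
        self.entries.append((reference, to_account, +amount))
        return True

    def balance(self, account):
        return sum(amt for _, acct, amt in self.entries if acct == account)

ledger = Ledger()
ledger.transfer("ps_ref_001", "buyer", "vendor", 5_000)
ledger.transfer("ps_ref_001", "buyer", "vendor", 5_000)  # duplicate webhook

print(ledger.balance("vendor"))  # 5000 — credited exactly once
print(ledger.balance("buyer"))   # -5000
```

In production the reference check belongs in a unique database constraint rather than an in-memory set, so concurrent webhook deliveries can’t race past it.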
The compliance win:
When we applied for SCUML certification (required for any platform handling money in Nigeria), we were approved in one review.
The lesson: Sometimes you have to build what seems like infrastructure because the market you’re serving has unique requirements.
Decision 5: The Free WiFi Initiative (₦830 vs ₦2,500 Per User)
The Silicon Valley Playbook said: Buy Facebook/Instagram ads to acquire users.
What We Did Instead: Installed free Starlink WiFi at universities and institutions.
The numbers that changed our strategy:
We tried paid ads first:
- Facebook ads: ₦2,500 per app install
- Instagram ads: ₦2,800 per install
- Google ads: ₦3,200 per install
At these rates, acquiring 100,000 users would cost ₦250 million.
We had ₦15 million total.
The alternative:
- Starlink hardware: ₦450,000 one-time
- Monthly subscription: ₦38,000
- Institution partnership: Free (they provide space and electricity)
We installed Starlink at:
- Federal University of Education, Zaria
- Garki Hospital, Abuja
- Ankpa Community, Kogi State
Each location had a captive portal: “Connect to free WiFi → Register on Ayema to continue”
The Results at ABU Zaria:
- 8,000+ students connected to WiFi
- 2,400 signed up for Ayema
- Conversion rate: 30%
- Cost per acquisition: ₦830
That’s 3x cheaper than paid ads, and these users came with built-in network effects because they were on campus together.
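The headline numbers check out against the figures above (illustrative arithmetic only, using the quoted campus results):

```python
# Conversion and cost comparison from the campus WiFi deployment figures.
connected = 8_000
signed_up = 2_400
conversion = signed_up / connected
print(f"{conversion:.0%}")  # 30%

facebook_cpa = 2_500  # naira per install, from the paid-ads test
wifi_cpa = 830        # naira per acquisition via the captive portal
print(round(facebook_cpa / wifi_cpa, 1))  # ~3.0x cheaper than Facebook ads
```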
The unexpected benefit:
Students didn’t just sign up, they actually used the platform because:
- They associated Ayema with “free internet”
- Their friends were already on it
- They could access it easily on campus
Retention rate: 68% still active after 3 months (vs 12% for paid ad users).
The lesson: Sometimes the best growth hack isn’t digital, it’s physical infrastructure that solves a real problem.
The 3 decisions that nearly destroyed us
Success stories make everything sound easy. Here are the mistakes that almost killed Ayema:
Failure 1: Synchronous Image Processing (The 2 AM Server Crash)
What We Did Wrong:
When a user uploaded an image, our server:
- Received the file (held the connection open)
- Validated it (still holding…)
- Resized it to 5 different sizes (still holding…)
- Uploaded to storage (still holding…)
- Updated database
- Finally returned “success”
This took 8-15 seconds per image.
The disaster:
At 50,000 users, people started uploading images simultaneously during evening peak hours. Each upload tied up a server worker process.
At 2:47 AM on a Thursday, I woke up to 47 WhatsApp messages. The server had run out of worker processes. New requests just… queued. Forever.
Complete platform outage for 4 hours.
The fix:
Moved image processing to background jobs. User uploads image → gets instant “Processing…” response → image appears 30 seconds later after background processing.
Upload endpoint response time: 8 seconds → 340 milliseconds.
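The shape of the fix is worth seeing. In Laravel this maps to dispatching a queued job; the Python below is just an illustration of the flow, with hypothetical names: the upload handler hands the slow work to a queue and returns immediately, while a separate worker does the resizing.

```python
# Sketch: decouple the upload response from the slow image processing.
import queue
import threading

jobs = queue.Queue()
processed = []

def worker():
    # Background worker: resize to 5 sizes, upload to storage,
    # update the database (all simulated here by one append).
    while True:
        image_id = jobs.get()
        processed.append(image_id)
        jobs.task_done()

def handle_upload(image_id):
    # Validate and store the raw file (fast), then enqueue the slow work.
    jobs.put(image_id)
    # The HTTP connection is released here instead of being held for 8-15s.
    return {"status": "processing", "image_id": image_id}

threading.Thread(target=worker, daemon=True).start()

response = handle_upload("img_42")
print(response["status"])  # processing — the user never waits on the resize
jobs.join()                # wait for the background worker (for the demo only)
print(processed)           # ['img_42']
```

The key property: request workers are freed in milliseconds regardless of how long processing takes, so a burst of uploads can no longer exhaust the worker pool.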
Cost of this mistake: Estimated 2,000 users who tried to access the platform during outage and never came back.
Failure 2: Over-Normalised Database (The 6.4 Second Query)
What we did wrong:
We built the most beautiful, perfectly normalised database structure my computer science professors would be proud of.
Loading a user’s feed required joining 7 tables:
- Posts
- Users
- Media
- Reactions
- Comments
- Products
- Transactions
At 10,000 users, queries took 800ms. Acceptable.
At 30,000 users, queries took 2.4 seconds. Slow but workable.
At 50,000 users, queries took 6.4 seconds. Completely unusable.
The ego problem:
We didn’t want to “denormalise” the database. That felt like admitting defeat. Like we were bad engineers.
The reality check:
Our co-founder showed us analytics: users were abandoning the app because feeds took too long to load.
Pride vs users. Users won.
The fix:
Added cached columns to the posts table:
- reactions_count
- comments_count
- shares_count
- last_activity_at
These get updated via database triggers when someone reacts or comments.
Feed query time: 6.4 seconds → 280 milliseconds.
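The bookkeeping behind those cached columns is simple. We did it with MySQL triggers; the Python below just models the idea (names are illustrative): the count is bumped when the reaction is written, so the feed reads one integer off the post row instead of join-counting the reactions table.

```python
# Sketch: denormalised counter column maintained on write, not on read.
posts = {1: {"body": "hello", "reactions_count": 0}}
reactions = []

def add_reaction(post_id, user_id):
    reactions.append((post_id, user_id))
    # Mirrors an AFTER INSERT trigger: the cached column is updated
    # in the same operation as the reaction write.
    posts[post_id]["reactions_count"] += 1

add_reaction(1, "ada")
add_reaction(1, "chinedu")

# Feed rendering now reads a precomputed integer — no 7-table join.
print(posts[1]["reactions_count"])  # 2
```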
The lesson: Textbook-perfect architecture doesn’t matter if your users are waiting 6 seconds for a feed to load.
Failure 3: Ignoring mobile data costs (The Silent User Killer)
What we did wrong:
We optimised for “cool features” instead of “data efficiency.”
Original image sizes from users’ phones: 3-8MB each. A feed of 20 posts: 60-160MB of images to download.
Nigerian data plans: Many users have 500MB-1GB monthly.
The math we ignored:
Loading our app 3-4 times could consume someone’s entire monthly data.
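The arithmetic is brutal once you write it down (illustrative Python, using the image sizes and plan size quoted above):

```python
# One 20-post feed at the original image sizes vs a 500MB monthly plan.
posts_per_feed = 20
feed_low = 3.0 * posts_per_feed   # 60 MB at the small end (3MB images)
feed_high = 8.0 * posts_per_feed  # 160 MB at the large end (8MB images)

plan_mb = 500  # common monthly data plan
print(round(plan_mb / feed_high, 1))  # ~3.1 heavy feed loads exhaust the plan

# After the optimisation pipeline (180KB per served image):
feed_optimised = 0.18 * posts_per_feed  # 3.6 MB per feed
print(round(plan_mb / feed_optimised))  # ~139 feed loads on the same plan
```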
The wake-up call:
User feedback: “Your app is eating all my data.”
We checked analytics: Average user loaded 400MB of data per week.
The fix:
Aggressive image optimisation:
- Remove EXIF data
- Resize to max 1080px
- Convert to WebP format
- Compress to 75% quality
- Generate thumbnails for instant preview
Average image: 3.2MB → 180KB (94% reduction)
The impact:
Before optimisation: Users abandoned app after 2-3 sessions (ran out of data)
After optimisation: Average 12 sessions per month, complaints about data usage dropped 89%
The lesson: In data-constrained markets, data efficiency isn’t a nice-to-have—it’s the difference between success and failure.
What Nigerian founders should learn from this
After 3 years of building Ayema, here’s what I wish someone had told me:
1. Your market’s constraints are your competitive advantage
International competitors can’t easily optimise for 4G connections and ₦500 data plans. You can. That’s your moat.
2. Boring technology that works > Exciting technology that breaks
PHP/Laravel isn’t sexy. A VPS isn’t “cloud-native.” Server-rendered HTML isn’t “modern.”
But they work. They’re stable. They’re affordable.
When you’re a small team with limited resources, reliability beats innovation.
3. Infrastructure can be growth strategy
That free WiFi initiative wasn’t a technical decision, it was a growth hack that happened to involve infrastructure.
Think beyond digital ads. What physical infrastructure can you deploy that solves a problem and drives adoption?
4. Financial discipline enables technical ambition
Saving ₦360,000 on AWS meant we could spend that on user acquisition.
Optimising for data efficiency meant users could actually afford to use our app.
Every technical decision has financial implications. Every financial constraint requires technical creativity.
5. Fail fast, but document everything
Those three failures (image processing, database, data costs) cost us users and sleep.
But because we tracked everything, we learned exactly what broke and why.
Your failures are only wasted if you don’t learn from them.
The metrics that matter
After making these decisions, good and bad, here’s where Ayema stands:
Platform scale:
- 102,000 active users (last 30 days)
- 2 million engagement events monthly
- 695,000 page views
- 500+ active vendors on marketplace
Technical performance:
- 99.2% uptime over 12 months
- 280ms average API response time
- 2.1s average page load time on 4G
- ₦25,000 monthly server cost (₦0.24 per user)
Business validation:
- ₦15 million in funding received
- 4.8★ rating on Google Play Store
- Multiple MoUs with institutions and enterprises
- 500+ active vendors generating revenue
Compliance & recognition:
- NITDA certified
- SCUML registered (EFCC compliance)
- Trademark registered (Federal Ministry of Trade)
- Endorsed by National Productivity Centre
What’s next
We’re not done. Our roadmap for the next 12 months:
Technical evolution:
- Migrate to microservices architecture (social, marketplace, payments as separate services)
- Launch Ayema Pay (standalone wallet app for peer-to-peer transfers)
- Build offline-first capabilities (service workers for unreliable connections)
- Implement GraphQL for more efficient mobile data usage
Market expansion:
- Target 500,000 active users by Q4 2026
- Expand free WiFi initiative to 20 more institutions
- Launch in Ghana and Kenya
- Build Ayema Ride (ride-hailing service integrated into platform)
Revenue growth:
- Expand marketplace revenue streams
- Launch subscription tiers for premium features
- Scale marketplace to 5,000+ active vendors
- Introduce bill payment services (airtime, data, electricity)
But we’re doing this the same way we got here: with technical decisions grounded in Nigerian reality, financial discipline, and ruthless focus on what actually works for our users.
For other African builders
If you’re building a platform for the African market, here’s my practical advice:
Before you write code:
- Test on 3G/4G connections, not just WiFi
- Calculate your data consumption per user session
- Price AWS vs VPS with real traffic projections
- Talk to 50 potential users about their data plans
During development:
- Optimise images from day one, not later
- Use boring, stable technology
- Build for offline-first if possible
- Track every millisecond of load time
After launch:
- Monitor server costs daily
- Track data consumption per feature
- Ask users what hurts (data, speed, cost)
- Be willing to rip out “cool” features that don’t work
Always remember: Your users are navigating intermittent internet, expensive data, older devices, and unreliable power.
Every technical decision should make their lives easier, not yours.
Let’s build Africa’s tech future together
Building Ayema has been the hardest and most rewarding experience of my career. Every decision, good and bad, taught me something about building technology that actually serves African markets.
We’re proving you don’t need Valley funding, Valley infrastructure, or Valley playbooks to build platforms that scale.
You need:
- Deep understanding of your market’s constraints
- Technical creativity within those constraints
- Financial discipline to survive until you find product-market fit
- Willingness to make unpopular technical choices
If you’re building something for the African market and want to talk strategy, architecture, or just commiserate about the unique challenges we face, reach out.
This article represents the technical and strategic decisions behind building Ayema Networks from 2021-2025. All metrics are accurate as of November 2025. Views expressed are the author’s own.