If Things Aren't Obvious
What might be obvious to you might not be obvious to others.
I've been sitting on this post for a while. Not because I didn't have anything to say — but because I wasn't sure if what I was seeing was worth stating out loud. It felt too obvious. Too self-evident to the people around me.
Then I remembered something I heard years ago: "What might be obvious to you might not be obvious to others."
So here it is. Everything I've been observing, building, and learning — laid out in full. No holding back.
The Third Wave
We're living through the third major wave of the internet, and most people haven't noticed.
The first wave was access. The internet democratized information. Suddenly, knowledge that was locked behind institutions, libraries, and geography became available to anyone with a connection. This was revolutionary in a way that's hard to appreciate now — but it rewired how humans relate to knowledge itself.
The second wave was platforms. SaaS democratized software. You didn't need a server room or an IT department to run your business. Stripe gave you payments. Shopify gave you commerce. Notion gave you documentation. The barrier to using sophisticated software collapsed.
The third wave — the one we're in right now — is personalization. AI is democratizing creation itself.
For the first time in history, the gap between "I have an idea" and "I have a working product" is measured in hours, not months. Not weeks. Hours.
Software is no longer something you subscribe to. It's something you generate. Tailored to your exact problem. Your exact workflow. Your exact taste. Your exact constraints.
This isn't a minor upgrade to the development process. This is a fundamental shift in how we relate to technology.
Each wave didn't replace the previous one — it built on top of it. Platforms required access. Personalization requires platforms. The compounding is the point: you need Stripe (Wave 2) to monetize the custom tool you generated with AI (Wave 3) using knowledge you found online (Wave 1). Understanding the stack gives you leverage at every layer.
The Death of the 80% Solution
Think about how you've solved problems with software for the last decade.
You had a specific need. Maybe you needed to track subscriptions in a particular way, or automate a very specific data entry workflow, or visualize financial data with a specific lens. What did you do?
You searched. You compared. You signed up for three free trials. You spent hours configuring something that got you 80% of the way there. And then you lived with that 80% — because the cost of building the remaining 20% was prohibitive. You needed a developer, or a team, or months of learning.
That era is ending.
Today, the workflow has fundamentally changed. You open a CLI or a voice interface, describe what you need, and you get exactly that. Not 80%. Not a compromise. The exact tool for the exact problem.
We're seeing a surge in what I'd call "personal software":
- A subscription tracker tailored to your budgeting style
- A Chrome extension that solves one very niche data entry problem
- A fitness app with an interface built exactly the way you want it
- A financial analysis tool built around your specific trading methodology
- A prompt management console designed for your exact workflow
This is a massive shift. Software is becoming a personal utility you generate, rather than a commodity you buy.
From SaaS to Scratchpads
Here's what makes this moment truly different from anything that came before: a lot of this new software isn't meant to live forever.
For years, the industry has been obsessed with building "platforms" and "ecosystems." Everything had to scale. Everything had to retain users. Everything had to optimize for lifetime value and expansion revenue.
But the tide is shifting toward something more ephemeral. We're moving from SaaS to scratchpads.
People are increasingly building tools to solve a single, specific problem exactly once — and then discarding them. Software as a disposable utility. Designed for the immediate "now" rather than the distant "later."
What makes this viable is a specific technical philosophy: CLI-first interfaces, local data, and zero onboarding. When you remove the friction of signing up, configuring a database, or navigating a complex UI, the cost of creating a tool drops so low that "temporary" becomes a feature, not a bug.
If it takes five minutes to spin up a custom solution for a one-off task, you don't need it to persist.
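To make the idea concrete, here's roughly what that kind of throwaway tool looks like. This is an illustrative Python sketch, not one of my actual tools, and the column name is hypothetical: it totals one column of whatever CSV you pipe into it. No accounts, no config, no database.

```python
#!/usr/bin/env python3
"""Disposable scratchpad: total one column of a CSV piped in on stdin.

No signup, no config file, no database. Run it, read the answer,
delete it. (Illustrative sketch; the column name is hypothetical.)
"""
import csv
import sys


def main() -> None:
    column = sys.argv[1] if len(sys.argv) > 1 else "amount"
    total = 0.0
    for row in csv.DictReader(sys.stdin):
        cell = (row.get(column) or "").strip()
        if cell:  # skip blank cells instead of crashing mid-run
            total += float(cell)
    print(f"{column}: {total:.2f}")


if __name__ == "__main__":
    main()
```

Run it as `cat subscriptions.csv | python total.py amount`, read the number, move on. When the task is done, so is the tool.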
The contrast with the traditional SaaS model is stark. SaaS optimizes for retention, lock-in, and expansion. Bespoke tools optimize for immediacy and control. They don't care about your lifetime value as a customer. They only care about solving the task at hand.
In many ways, this is a return to how spreadsheets were originally used. You didn't open a spreadsheet to build a permanent, multi-year database. You used it as a scratchpad to reason through a problem, calculate the results, and move on.
Code Is Cheap. Software Is Still Expensive.
Now, here's where I need to be honest — because the narrative around AI-assisted development has gotten dangerously oversimplified.
Code has become cheap. Software remains incredibly expensive.
LLMs have effectively killed the cost of generating lines of code. But they haven't touched the cost of truly understanding a problem. We're seeing a flood of "apps built in a weekend," and most of them are thin wrappers around basic CRUD operations and third-party APIs. They look impressive in a demo. They crumble the moment they hit the friction of the real world.
The real cost of software isn't the initial write. It's the maintenance, the edge cases, the mounting UX debt, and the complexities of data ownership.
That subscription tracker? It breaks the moment a bank changes its CSV export format. That Chrome extension? It dies the second a target website's DOM shifts. That fitness app? It becomes unusable the moment a user needs robust offline support or reliable data sync.
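The fix for that kind of brittleness isn't clever code; it's deciding where the tool should fail loudly. A minimal sketch of what I mean, with a hypothetical tracker and hypothetical column names:

```python
import csv
from pathlib import Path

# The columns this hypothetical tracker was written against. If the bank
# renames or drops one, we want a loud failure, not silently wrong numbers.
EXPECTED_COLUMNS = {"date", "description", "amount"}


def load_transactions(path: Path) -> list[dict[str, str]]:
    """Parse a bank CSV export, refusing to guess when the schema shifts."""
    with path.open(newline="") as f:
        reader = csv.DictReader(f)
        missing = EXPECTED_COLUMNS - set(reader.fieldnames or [])
        if missing:
            raise ValueError(
                f"export format changed, missing columns: {sorted(missing)}"
            )
        return list(reader)
```

The ValueError isn't the clever part. Deciding that a schema change should halt the run, instead of producing silently wrong totals, is the judgment.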
This is the distinction that matters: code is cheap, but judgment is not.
The value of an engineer is shifting away from the "how" of syntax and toward the "what" and "why" of systems. Real engineering lies in the abstractions and the architecture. It's about knowing how to structure a system that lasts, understanding why a specific rate-limiting strategy is necessary, knowing how to manage a distributed cache, and knowing exactly where not to store your environment variables.
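To make "judgment" less abstract, take rate limiting. A token-bucket limiter is a textbook pattern, and any LLM will produce one on request; here's a minimal sketch:

```python
import time


class TokenBucket:
    """Minimal token-bucket rate limiter (a sketch, not production code)."""

    def __init__(self, rate: float, capacity: float) -> None:
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        elapsed = now - self.updated
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

The code is the easy part. Knowing that your integration needs it, and choosing a rate and burst capacity that match real traffic and your provider's limits, is the part a prompt won't decide for you.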
AI often feels powerful because it hides complexity. But as an engineer, your job is to manage that complexity, not ignore it.
The visible part (the code) is what AI generates. The invisible part (architecture, edge cases, security, maintenance) is what engineers build. When someone shows you an "app built in a weekend," you're seeing the tip of the iceberg. The 90% below the waterline is what determines whether it survives contact with real users. That 90% is where your value as an engineer lives.
The Distribution Illusion
With the barrier to entry gone, the noise level has reached an all-time high.
My feeds are flooded with "AI entrepreneurs" claiming five-figure monthly recurring revenue for apps they built in an afternoon. In many cases, these claims are highly suspect. When you see a creator with no existing distribution and no clear moat claiming $10K MRR on a weekend project, it's usually a play for engagement — not a reflection of business reality.
Some of these stories are true. But in most cases, these aren't blueprints for technical innovation. They're marketing case studies. These individuals succeed because they've mastered the art of capturing attention in a crowded landscape, not just because they have an AI co-pilot.
Here's the useful framing: AI has effectively removed engineering leverage as a primary differentiator. When any developer can use an LLM to build and deploy a complex feature in a fraction of the time, the ability to write code is no longer the competitive advantage it once was.
Success now hinges on factors that are much harder to automate: taste, timing, and deep understanding of your audience. You can generate a product in a weekend, but that's worthless if you're building the wrong thing or launching it to a room full of people who aren't listening.
The code has become the easy part. The hard part remains exactly what it has always been: finding a way to get people to care.
What I'm Seeing at Deriv
I'm not writing this from the sidelines. I'm living it every day.
At Deriv, where I work as an Applied AI Engineer, we're witnessing this shift at enterprise scale. We're building AI-powered solutions that are conceived, developed, and deployed in weeks — not quarters. And these aren't toy prototypes sitting in a staging environment. They're being used by hundreds of people across internal teams, every single day.
Some of these tools could operate as standalone startups. The functionality is there. The user base is there. The value creation is measurable. We choose not to spin them out — because the strategic value of keeping them internal far outweighs the appeal of productizing them.
That's the level we're operating at. And the results speak for themselves.
What this scale has taught me — more than anything — is the difference between building software and building systems. When your tools are used by hundreds of people with different workflows, different edge cases, and different expectations, you learn very quickly where AI-generated code falls short. You learn where human judgment, architecture decisions, and deep domain expertise are irreplaceable.
The AI writes the code. The engineer makes it work. Those are not the same thing.
Building software and building systems are fundamentally different activities. Software is the artifact. A system is the artifact plus its operating environment — the users, the edge cases, the failure modes, the maintenance burden, the data lifecycle. AI is getting very good at the first. The second remains a human discipline.
My colleagues and I represent this shift in real time through the platforms we build and the engagements we drive. We're not theorizing about the future of AI engineering. We're defining it.
The Experiments
Everything I learn at work compounds into what I build outside of it. And over the last few weekends, I've been on a tear.
Six projects. Each one built in roughly 7–8 hours. Some in a single sitting — a Friday night, a Saturday afternoon, a random Tuesday evening after work.
One of them was built in 75 minutes. From writing the first line of code to having it live on its own domain, deployed and functional. Seventy-five minutes.
Here's what's out there right now:
Insight Hedge
AI-analyzed trading signals and market insights. This platform takes raw market data and applies AI-driven analysis to surface trading signals that would take hours to identify manually. It's designed for people who want intelligent market analysis without the noise of traditional financial media.
Acumen
A trading signal platform grounded in classical technical analysis. Where Insight Hedge focuses on AI-driven analysis, Acumen is built around established chart patterns, accelerated and enhanced through AI. It's the intersection of time-tested methodology and modern tooling.
Prompt Console
A prompt optimization console with template management, batch testing, and AI enhancement. This one is close to my heart because it directly addresses a pain point I experience daily. Managing prompts at scale — across different models, different use cases, different contexts — is a genuinely hard problem. Prompt Console is my answer to it.
Gist
A notebook-style replacement for the usual LLM chat interface. Think of it as a streamlined way of interacting with language models, designed around the way I actually think and work rather than how a product team imagined I might.
Paperclip
Currently live and evolving. More details coming in a dedicated post.
More in the Pipeline
There are several more projects in various stages of polish that are yearning to be released. I won't do them justice with a passing mention — each deserves its own deep-dive. Those posts are coming.
The Next Phase: Finance
The playground phase taught me speed. The next phase is about depth.
The upcoming projects are more focused, more thought out, and centered squarely around finance. This isn't accidental. Finance is where I see the highest leverage for AI-powered personal software — because financial decisions are deeply personal, highly specific, and poorly served by one-size-fits-all platforms.
These projects have been validated by an incredible group of people whose judgment I trust. I genuinely believe they're going to help individuals navigate their financial decisions with more clarity, more confidence, and more control.
I'm not ready to reveal everything yet. But I can say this: the finance-focused projects represent a fundamentally different level of ambition from the experiments that preceded them. They're built on the lessons, the failures, and the speed I developed during the experimentation phase — but they're designed to last.
Exciting stuff is coming. Stay close.
The Infrastructure Investment
Here's the part that most people skip when they talk about "building fast" — and I think it's the most important part of the story.
None of this would be possible without a deliberate, upfront investment in personal infrastructure.
The CI/CD Pipeline
Every single one of my projects benefits from a CI/CD pipeline that I've built and refined over time. Here's how it works: every time I create a new repository on my personal GitHub, the pipeline auto-initializes. Deployment configurations, domain routing, SSL certificates, monitoring — all of it is handled automatically.
This means that when I sit down on a Friday night with an idea, I'm not spending the first two hours configuring infrastructure. I'm writing the first line of application code within minutes. The pipeline handles the rest.
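I won't reproduce the whole pipeline here, but the trigger is the part worth showing. A simplified sketch of the idea, assuming a small Flask service subscribed to GitHub's repository webhook events; the endpoint path and the provision function are placeholders for my real setup:

```python
from flask import Flask, request

app = Flask(__name__)


@app.post("/github/webhook")
def on_github_event():
    # GitHub labels each delivery with an event-type header.
    if request.headers.get("X-GitHub-Event") != "repository":
        return "", 204
    payload = request.get_json(force=True)
    if payload.get("action") == "created":
        repo = payload["repository"]["full_name"]
        provision(repo)
    return "", 204


def provision(repo: str) -> None:
    # Placeholder: in my setup this step pushes deploy config, wires up
    # domain routing and SSL, and registers monitoring for the new repo.
    print(f"bootstrapping infrastructure for {repo}")
```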
This is the kind of investment that doesn't show up in a Twitter demo or a "built this in 75 minutes" claim. But it's the reason those claims are possible in the first place.
The Cloud Setup
I maintain my own personal cloud infrastructure. Not because it's the cheapest option — it's not. But because it gives me complete control over deployment, scaling, and data management. When I ship something, it goes live on my terms, on my domain, with my configuration.
The API Costs
Let's be real: running AI-powered projects costs money. Every API call to a language model has a price tag. Every experiment, every test, every iteration adds to the bill.
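A few lines of accounting keep that bill honest. A sketch, with made-up per-million-token prices; check your provider's current pricing:

```python
# Hypothetical per-million-token prices; check your provider's current rates.
PRICE_PER_MTOK = {"input": 3.00, "output": 15.00}


def call_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated dollar cost of a single LLM API call."""
    return (
        input_tokens / 1_000_000 * PRICE_PER_MTOK["input"]
        + output_tokens / 1_000_000 * PRICE_PER_MTOK["output"]
    )


# A 2,000-token prompt with an 800-token reply:
print(f"${call_cost(2_000, 800):.4f}")  # $0.0180
```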
I see every dollar as an investment. An investment in experimentation. An investment in learning. An investment in building things of genuine value for people.
This is how I justify the expense: I'm not burning money on vanity projects. I'm building tools, timing myself, pushing the boundaries of what one person can ship, and creating things that are genuinely useful to others.
The return on investment isn't measured in MRR. It's measured in capability. Every project makes me a better engineer. Every experiment teaches me something about what works, what doesn't, and where the real opportunities lie.
What's Changed in the Last Month
A few operational changes that reflect how seriously I'm taking this:
Unified Branding
All projects are now centralized under a consistent design language and brand identity. This isn't just aesthetic — it's a signal of intent. These aren't throwaway experiments anymore. They're a portfolio of tools that share a common philosophy and a common standard of quality.
Formal Policies
Every project is now covered under a formal policy framework. Terms of service, privacy policies, usage guidelines — all centralized at terms.ihusam.tech. When you're building tools that people rely on, especially in finance, trust isn't optional. It's foundational.
Open Source Commitment
Over the coming weeks, I'll be open-sourcing these tools. I believe deeply in building in public and giving back to the community that has given me so much. The code, the architecture decisions, the lessons learned — all of it will be available for anyone who wants to learn from it, build on it, or contribute to it.
A Note on AI-Assisted Development — Honestly
I use AI tools every single day. Claude Code, Claude Opus 4.5, Cursor — they're integral to my workflow. And I want to be honest about what they're actually like to use, because the discourse around these tools has become detached from reality.
They are remarkably good at removing boilerplate, implementing known patterns, generating tests, and accelerating the boring parts of development. One of my favourite use cases, especially since starting at Deriv, has been generating personalized documentation and walkthroughs to get up to speed on unfamiliar codebases. That alone has been worth the investment.
But here's the truth: LLMs are not perfect at writing code, even when it compiles on the first run. Even with high-quality prompting and clear constraints, these models make mistakes. You cannot trust the output outright. You have to review every piece of AI-generated code as if it were a pull request from a teammate.
You read the logic. You check the assumptions. You make manual edits. You catch the subtle bugs that look correct on the surface but break under load, under edge cases, or under the weight of real-world usage.
After all, you're likely sending this code to a colleague for review. Is it fair to ask them to review something you neither wrote nor bothered to check yourself?
These tools help you move faster. They do not replace the need for a critical eye, years of experience, or deep understanding of the problem space.
Here's the litmus test I use: if I wouldn't be comfortable putting my name on a piece of code in a pull request, I don't ship it — regardless of whether I wrote it or an LLM did. The authorship is irrelevant. The accountability is yours. Treat AI-generated code exactly like code from a junior developer: valuable, fast, but in need of a careful review.
Who Actually Wins in This New Era
Not everyone benefits equally from this shift. Here's who I see winning:
Domain experts with boring, repetitive problems. If you understand a field deeply and you're stuck automating the tedious parts, AI tools are a force multiplier unlike anything that's existed before.
Internal teams building throwaway tooling. Scripts, internal dashboards, data transformation pipelines — things that need to work immediately rather than look perfect. This is where the "scratchpad" model shines.
Power users replacing brittle manual workflows. If you're currently copy-pasting between spreadsheets, manually formatting reports, or stitching together data from five different sources — you're sitting on a goldmine of automation opportunity.
Engineers who prioritize ownership over polish. The people who care more about solving the problem than making the solution look impressive in a portfolio. The ones who'd rather ship something useful in 75 minutes than spend a month making it perfect.
And critically: non-technical leaders who think they can fire their development teams are making a catastrophic mistake. AI is undeniably good at writing code. It remains poor at architecting maintainable, distributed, and scalable systems. Until we see the arrival of an artificial intelligence that renders this entire discussion moot, believing that technical expertise can be replaced by a prompt is a strategic error.
A Call to Indie Builders
I want to end with this.
If you're a developer — or an aspiring developer, or someone who's been thinking about building something but hasn't started — consider this your sign.
With twenty dollars, a few hours of spare time, and a bit of patience, almost anyone can ship a functional application today. The tools are better than they've ever been. The cost is lower than it's ever been. The only thing standing between you and a deployed project is the decision to start.
I want to see more indie developers shipping. More experiments. More "built this over the weekend" energy — but backed by real substance, real utility, and real thought.
Not for the Twitter clout. Not for the MRR screenshots. For the craft. For the learning. For the genuine satisfaction of solving a problem that matters to you.
Invest in your infrastructure. Set up your CI/CD. Get comfortable with the tools. And then start building.
What's Next
Over the coming weeks, expect:
- Detailed deep-dives on each project — what was built, how, why, and what I learned
- Open-source releases of the tools I've been building
- Finance-focused project launches that represent the next evolution of this work
- Honest retrospectives on what worked, what didn't, and what I'd do differently
I'll be writing about all of this on my blog, with everything syndicated through my RSS feed and newsletter, and highlights cross-posted to LinkedIn.
The tools have changed. The thinking hasn't.
If this resonates, the best time to start building was yesterday. The second best time is now.
If any of these tools bring you value, you can support the experiments (those API credits and infrastructure costs add up): Buy me a coffee
All projects are covered under formal policies: terms.ihusam.tech