Author: Khalil

  • Build In Public – 1


    True Life: I’m A Digital Hoarder

    Yesterday, I dedicated a few hours to organizing my digital life… (read: definitely not procrastinating).

    Over the last week, I’d been noticing it more than ever: duplicated files, confusing naming conventions, and junk files scattered just about everywhere.

    In the walled garden of devices & services Apple™️ has built, I haven’t worried much about storage space, thank you iCloud™️. Naturally, over the years, the junk kept piling up.

    Since purchasing my first MacBook™️ in 2013 (?), I’ve: graduated college, changed jobs six times, worked on side projects, moved cross-country, and, all along the way, generated junk. Vacation pictures (that already live in Photos™️), documents (duplicated across computers locally and in iCloud™️), screenshots (because proof), and on and on.

    Last night, I finally decided to do something about the unorganized slop that I’d been dealing with. That’s when I came across an interesting file: an early mockup/proof-of-concept (POC) of an application I was preparing to pitch back in 2016.

    Time Is A Circle

    That year, I was in my first post-college job at a healthcare company, working on what we called digital health products.

    A platform called Dialogflow, which allowed developers to build out semi-structured AI conversations, had recently been released. If you read The Last Arms Race Pt. 1, you’re probably aware that interest in AI was picking up again right around this time.

    For a variety of reasons I believed the technology presented an opportunity for the business, so I went about developing the POC and pitching potential use-cases to my team.

    If you know me, you know I’ve never been short on ideas, and fortunately, my job at the time was—in part—to be the idea guy.

    But, that’s not the idea I want to talk about today.

    If You Build(L) It

    There’s been an idea gaining in popularity within the tech community over the last few years. The premise? Build in public.

    The theory is that if you’re open about what you’re building and bring a community along for that journey, you end up getting crucial early feedback, figuring out whether there’s a market, and identifying early customers/users.

    What a dumb thing to do, someone will definitely steal your idea!

    But no, ideas really are a dime a dozen. What matters most is execution.

    So, that’s what I’ll be doing here on Sundays. Building in public, and bringing you all (if there is a you all… at all?) along for the ride, as I take the electrical signals zooming around in my brain and manifest them into a product.

    What’s The Point, Khalil?

    Am I this long-winded in conversation, or is this an entirely written-communication phenomenon? If it’s the former, apologies, friends.

    Cutting to the chase:

    Machine learning models, especially large language models (ChatGPT, Claude, DeepSeek, etc.), are being integrated across nearly EVERY single product and service. Most people have now interacted with AI/ML—via their jobs or in various other ways (like googling).

    A vision for AI has always been that we could have personalized, intelligent assistants in our pockets, ready to respond to any and all requests. The industry is still working towards that goal, but as far as I can tell, there is a particular gap in development, or rather an opportunity, that is begging to be addressed.

    Each new “conversation” with one of these models is sort of like being at a speed-dating event. The model doesn’t know you or your preferences, and, critically, it doesn’t have access to much of the context it may need to respond effectively. For any complex task or question, users often have to re-share the same information with models over and over.

    That is the first problem I want to solve with what I’m calling AgentBuddy (name is a WIP alright, the right side of my brain is typically snoozing).

    The core features/pitch of AgentBuddy are:

    • Users build out libraries of their data, either directly in the platform, or by linking other services (Evernote, Notion, Google Drive, etc.)
    • Via customizable, built-in system prompts (pre-instructions given to a model), users’ AI interactions can always feel personal and useful without unnecessary repetition (see the sketch just after this list).
    • Simple commands made available in chat with the model allow users to reference data they’ve added to their library, further enhancing its capabilities.
    • Cheaper via model democratization.
      • For most end-users today, the interface they’re familiar with when it comes to AI is ChatGPT. In fact, I’d be willing to bet that for some portion of the population, ChatGPT has become synonymous with AI in the public consciousness.
      • For intermediate users, and beginners who grow in comfort, AgentBuddy will allow model selection beyond a single provider (i.e., OpenAI) to any of the models hosted on OpenRouter (Claude, DeepSeek, Mistral, etc.). This lets those users get more out of the platform at a lower cost than many alternatives.
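
    To make the system-prompt and library ideas concrete, here’s a rough sketch of how a stored profile and a referenced note might get folded into a single request against an OpenAI-compatible endpoint like OpenRouter’s. This is not AgentBuddy’s actual code; the profile text, note, helper name, and model ID are all illustrative placeholders.

    ```python
    # Rough sketch only -- not AgentBuddy's real backend. The profile, note,
    # helper, and model ID below are illustrative placeholders.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",  # OpenRouter's OpenAI-compatible API
        api_key="YOUR_OPENROUTER_KEY",
    )

    user_profile = "Software engineer, prefers concise answers, planning a cross-country move."
    library_note = {  # imagine this was pulled from a linked Notion/Drive/Evernote doc
        "title": "apartment-notes",
        "body": "Budget $2,500/mo, 2BR, near a train line, available by June.",
    }

    def build_messages(question: str) -> list[dict]:
        """Prepend the user's stored context so the model never starts cold."""
        system_prompt = (
            "You are this user's personal assistant.\n"
            f"User profile: {user_profile}\n"
            f"Referenced note '{library_note['title']}': {library_note['body']}"
        )
        return [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ]

    response = client.chat.completions.create(
        model="anthropic/claude-3.5-sonnet",  # swap in any OpenRouter-hosted model
        messages=build_messages("Draft an email to the broker using /apartment-notes."),
    )
    print(response.choices[0].message.content)
    ```

    The “/apartment-notes” reference is just a stand-in for the kind of simple in-chat command described above (AgentBuddy’s actual syntax may differ), and switching providers is a matter of changing the model string.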

    The second phase includes a set of business-applicable features; however, I’ll be keeping those private for now, as they just might be worth a quarter.

    Progress So Far

    The build-out of this platform started ~2 weeks ago, and this is my current progress towards an MVP (minimum viable product).

    • Backend  – 75%
      • (What you might know as the “servers”)
      • Complete:
        • Data models
        • Database integration
        • OpenAI and OpenRouter integration
        • API routes
        • Authentication and security
    • Mobile App – 30%
      • (Substantially reducing scope, because release is what matters most)
      • Complete:
        • App structure and routing
        • Design system and theme
        • Backend integration (not reflected in UI yet)
        • State management + hooks
        • Apple and Google authentication

    In a future post, I’ll go into more detail about not just this app, but how I approach analyzing a potential product, doing early market and technical research, and going from idea to proof-of-concept.

    Thanks for reading!

  • The Last Arms Race (1/2)


    On Tuesday, just one day after President Donald Trump was inaugurated to his second term in office, he held a press conference alongside SoftBank CEO Masayoshi Son, OpenAI CEO Sam Altman, and Oracle Executive Chairman Larry Ellison.

    At the press conference they announced a $500 billion investment in Artificial Intelligence over the next four years. By any standard, half a trillion dollars is an incredible amount of money to raise and deploy in such a short amount of time.

    The money is going to be put towards what they’re calling The Stargate Project. In addition to the companies represented by their CEOs at that conference, Arm, Microsoft, NVIDIA, and MGX are involved, either in a technical capacity or by contributing funding. A partnership of this scale is unprecedented for the private sector, and arguably even for government-sponsored projects.

    The Stargate Project, however, is not even close to the only large-scale AI investment happening globally today. Google recently signed a deal to use nuclear reactors to power its data centers running AI. In addition to its Stargate involvement, Microsoft is spending $3.3 billion to build out a data center in Wisconsin. Meta (Facebook) also spent north of $10 billion on its AI capabilities in 2024 alone and is expected to increase that number.

    China is also increasing investments in AI, though at a slower rate than the U.S. right now, at around $35-$40 billion a year—at least based on what they’re reporting publicly.

    So what gives? The world’s leading companies and governments surely wouldn’t be deploying this much capital, this quickly, unless they expected a massive payoff.

    In this post I want to (ambitiously) cover the origins of AI, the popularization of it, what it is, how it works, the urgency to build it and what the future might look like.


    “Please take whatever precautions are necessary to prevent this terrible disaster. Your friend, Marty”

    “Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.” – Vladimir Putin, 2017.

    It’s widely understood by technologists, scientists, and world leaders that the last technology humans will ever need to invent is a truly Artificially Intelligent computer (we’ll get to exactly why later).

    Both Sam Altman and Dario Amodei (CEO of Anthropic) have recently gone on the record expressing their thoughts on the timeline to Artificial General Intelligence (AGI)—think of a computer as smart and capable as roughly the smartest human alive—and Artificial Superintelligence (ASI), which, not to be hyperbolic, you can think of as equivalent to a god.

    In the most conservative case, both believe we will reach these levels of AI within the decade—and most likely sooner.

    The country that develops such incredible capability will, overnight, have the ability to shape civilization as it sees fit, with relatively little resistance. The urgency of this issue, though understood in elite circles around the world, has yet to permeate mainstream discourse surrounding artificial intelligence.

    To many, these projections about AI probably sound fantastical, even absurd. And I get it. History is full of overhyped “world-changing” technologies that never quite delivered. Plus, pop culture’s depiction of AI often misses just how fast and dramatically it could reshape society. So, I suppose it’s no surprise there’s some hesitance to fully grasp the scale of what’s coming.


    “Roads? Where we’re going, we don’t need roads.”

    If you haven’t studied computer science or the history of these ideas, you might be wondering how we got here. It was only about 2.5 years ago that the AI system most people know—ChatGPT—was released. So, how could it be that we’re already talking about reaching the endgame in the next 5 or so years?

    In 1906, the Spanish physician and scientist Santiago Ramón y Cajal won the Nobel Prize in Physiology or Medicine for his work on the human nervous system. Santiago identified the distinct parts of neuronal cells and theorized that they were part of an interconnected network responsible for processing information.

    It was his initial discoveries, and several more that came in the following decades—like Alan Turing’s theory of computation—that eventually gave rise to the idea that if the human mind functions by sending electrical signals between clusters of cells, then perhaps a digital one would be capable of doing the same.

    For decades, all attempts to give rise to this theorized phenomenon failed, of course. Nearly 70 years of advances in neurology, physics, and computer science were still needed before we would be able to take a real crack at building an AI system. That didn’t stop people from trying, though.


    Quick Aside on Technological Progress

    It’s impossible to tell the story of artificial intelligence without taking a little detour to talk about computing in general. Here’s a quick primer so we can get back to the good bits.

    The first semi-modern computer was invented in 1871. It was a mechanical machine—more like a severely limited calculator than anything else. Then, in 1946, we took a massive leap in computing when the first general-purpose computer, ENIAC, went live.

    General-purpose computers resemble the ones that we use today, in that they typically are able to accomplish a number of different tasks depending on how they’ve been programmed, and have similar hardware architectures.

    The next leap occurred in 1947, when Bell Labs invented the transistor*, ushering in the digital age and the start of the story of modern technological progress. Then, in 1965, Gordon Moore, who would go on to co-found Intel, made a simple observation with profound implications:

    The number of transistors we could fit on integrated circuits (computer chips) doubled roughly every two years.

    This observation, now known as Moore’s Law, suggested that the processing power of computer chips would double roughly every two years as we managed to pack smaller and smaller transistors onto chips. When Bell Labs invented the first transistor, it was about a centimeter long. Today, leading-edge chips are built on what the industry calls a 3-nanometer process. To give you a sense of scale, atoms are typically between 0.1 and 0.3 nanometers across, and work on 2-nanometer processes is already well underway.
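
    To see how quickly that doubling compounds, here’s a back-of-the-envelope sketch. The starting count is just an illustrative round number, not any specific chip’s spec.

    ```python
    # Back-of-the-envelope Moore's Law: counts double roughly every two years.
    # The starting figure is an illustrative round number, not a real chip's spec.
    start_year, start_count = 1971, 2_000
    doubling_period_years = 2

    for year in (1981, 1991, 2001, 2011, 2021):
        doublings = (year - start_year) / doubling_period_years
        print(f"{year}: ~{start_count * 2 ** doublings:,.0f} transistors")
    ```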

    We’ve gone from a world that once believed it would only need a handful of computers to one where digital computers are embedded in nearly every product we create, thanks to this rapid progress.


    “I had a horrible nightmare. I dreamed that I went… back in time. It was terrible.”

    Alright, back to the reason we’re here… (I can’t believe I’m still writing).

    The need to transform data into information is as old as humanity. Our brains are constantly working to do just that—mapping our surroundings, cataloging experiences, and connecting the dots to form ideas about the world around us.

    A teacher I had at some point in life (I’m paraphrasing here, thanks to my selective memory) would say: “Data is useless without context or insight. Interpreting data is how we transform it into information.”

    At some point, humans discovered math and invented ways to analyze and interpret data—what we now call statistical modeling. We use statistical modeling for three main purposes:

    • Predictions
    • Extracting Information
    • Learning about datasets that appear random

    These modeling methods worked well enough—until the late 20th century, when an explosion in data volume, complexity, and advanced use cases outpaced what traditional modeling could handle. That’s when computational modeling and machine learning became practical solutions.

    The desire for a thinking machine traces back to the dawn of classical computing. And as computers grew more capable, the allure of using machine learning to achieve that goal only grew stronger.


    Machine Learning

    According to Wikipedia: “Machine learning (ML) is a field of study in artificial intelligence concerned with the development and study of statistical algorithms that can learn from data and generalize to unseen data, and thus perform tasks without explicit instructions.”

    The foundational studies conducted by Santiago Ramón y Cajal at the end of the 19th century, and the many that followed, became crucially important when computer scientists began designing and leveraging neural networks as components of machine learning model architectures.

    By the 2010s the field of ML shifted from relying primarily on traditional statistical modeling to a focus on data-driven learning. The most popular kinds of learning methods used to “teach” (the process of training) a model were:

    • Supervised Learning
      • Provide the model with a labeled input dataset and a corresponding output dataset: the predictions expected from the model for each element in the data.
    • Unsupervised Learning
      • Provide only input data to the model. The model discovers what patterns exist in the data on its own as its algorithms run over it (see the short sketch after this list).
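
    A toy sketch of the difference, using scikit-learn; the numbers are made up purely for illustration and are only meant to show the shape of each approach.

    ```python
    # Toy contrast between supervised and unsupervised learning (made-up data).
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    # Supervised: inputs come with labels (hours studied -> passed the exam or not).
    hours_studied = [[1], [2], [3], [8], [9], [10]]
    passed = [0, 0, 0, 1, 1, 1]
    classifier = LogisticRegression().fit(hours_studied, passed)
    print(classifier.predict([[7]]))  # predicted label for an unseen input

    # Unsupervised: inputs only; the model finds structure (clusters) on its own.
    points = [[1, 1], [1, 2], [8, 8], [9, 8]]
    labels = KMeans(n_clusters=2, n_init="auto").fit_predict(points)
    print(labels)  # e.g. [0 0 1 1] -- two groups discovered without any labels
    ```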

    “Doc, do you have a 75-ohm matching transformer?”

    In 2017, I was two years into my career when Google released a paper titled “Attention Is All You Need”. If you read I Am Very Dumb?, you can probably guess that while I came across this paper and the discussions of how groundbreaking it was, I didn’t go much deeper than the surface.

    It turns out that this paper fundamentally changed the field of machine learning and artificial intelligence forever.

    Through “Attention Is All You Need”, Google introduced a new neural network architecture called the Transformer. By fully leveraging what’s known as the attention mechanism (a system that assigns weights to each element of the input, allowing the model to focus on the most important parts), this architecture enabled machine learning models to process and understand complex sequences of data far more effectively.
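
    For the curious, the core of the attention mechanism comes down to a few lines of linear algebra. This is a bare-bones sketch of scaled dot-product attention (single head, no learned projections, toy data), not the full Transformer.

    ```python
    # Bare-bones scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
    import numpy as np

    def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
        """Weight every value by how relevant its key is to each query."""
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)                    # relevance of each position
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)   # softmax: rows sum to 1
        return weights @ V                               # blend values by relevance

    # Three "tokens", each represented by a 4-dimensional vector of toy numbers.
    rng = np.random.default_rng(0)
    tokens = rng.normal(size=(3, 4))
    print(attention(tokens, tokens, tokens))  # self-attention over the tiny sequence
    ```

    In a real Transformer, Q, K, and V come from learned projections of the input and there are many heads stacked across many layers, but the weighting idea is the same.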

    Seemingly overnight, this new method brought breakthroughs in the fields of natural language processing, computer vision, and generative AI.

    It’s through this new paradigm that we find ourselves where we are today, on the precipice of creating an artificial mind equal in capability to the smartest human.


    I naively thought I would be able to cover the history of this topic and get to the modern state (ChatGPT, Claude, ElizaOS, digital agents, etc.), as well as detail the capabilities of AGI & ASI and how society might be impacted… but I’ll have to pause here and do a part two.

    *Transistors control the flow of electricity on computer chips. If you’ve ever heard of binary or seen references to zeroes and ones with regard to computers, transistors are where that concept arises, as they are either 1 (allowing electricity to flow through them) or 0 (not allowing electricity to flow). The rapid switching of these on and off states is what makes a computer a computer.

  • I Am Very Dumb?


    The first time I felt the fear—or maybe anxiety—of academic failure was in 2001. My parents had recently decided that I wouldn’t be attending public school anymore. Instead, the plan was for me to transfer to a private Catholic school along with my older cousin, assuming I passed the entrance exam.
    To my eight-year-old self, it seemed like the world would end if I wasn’t successful. The details of the test day are hazy at this point, but what I do remember is crying. Between my mother and the teacher proctoring the exam, eventually I felt comfortable enough to not completely hate the experience. My nerves settled, I took the test, and the next year, I was enrolled in my new school.
    That pattern of anxiety and fear of failure when facing new academic challenges stuck with me for most of the next two decades.
    Eventually, those feelings gave way to internalized thoughts that became a constant, harsh self-commentary:
    • “I’m not smart enough.”
    • “This is too confusing for me.”
    • “I’m just not good at this.”
    • “I am very dumb.”
    These thoughts fueled a pattern of procrastination because, obviously, it’s easier to push off the thing you’re afraid of failing at until there’s no choice but to tackle it head-on. This habit became reinforced every time I procrastinated and still managed to succeed. Sure, I could cram and get things done last minute—but that’s no way to engage in deep learning. Eventually, the bill came due.

    Bad Student

    For most of my academic life, I was the kind of student who did just enough. I was smart enough to scrape by, pulling A’s, B’s, and the occasional C (yeah, there were worse grades, but this is my narrative). Looking back, I can’t believe I even raw-dogged the SATs—zero studying, zero prep.
    High school highlighted how poor a student I truly was. Comparing myself to peers should have been a wake-up call. Unfortunately, it wasn’t.
    Like most people’s, my academic drivers eventually went from “How will my parents react to a bad grade?” to “Oh fuck, what am I going to do with my life?” and “How am I gonna make money?”.
    Two pretty strong motivators.
    Despite the near constant anxiety that induced, my habits didn’t change all that much. Yeah, I took a greater interest in my classes, but I couldn’t really say I was learning. I was doing what most students in America do: rote memorization of selective facts, just enough to pass a test or complete an assignment.
    I was attaining a functional knowledge of what I needed to know, but nothing more. And again, this worked–especially for topics I found interesting. It was always enough to get by, but never enough to shake the feeling that maybe I just wasn’t cut out for it.
    That lasted until I came to a concrete wall of sorts: the real world of my first internship.

    I’m an adult, help

    In my Sophomore year of college, I took an introductory course to web application development. The professor had a practical method of teaching; the entire course was essentially project building.
    At times we built individual projects and at others we paired up in groups. I started off strong in the class, learning the very basics of building a website (HTML, CSS, JS). These distinct and simplified implementations were pretty easy to get a grasp of.
    A few weeks into the course was the university career fair, and, being connected to one of the companies that would be there, our professor suggested we go. All I knew at that point was I was in year two out of four and student loan payments had my head lined up in the scope of their .50 cal. So I went to the career fair.
    I ended up getting an interview, my professor wrote me a letter of recommendation and a few weeks later, I learned that I’d be doing a six-month co-op program as a software engineer.
    However, by then I was starting to get lost in the course. New technologies were getting introduced every week, giving us almost no time with what we’d just learned, and still, I was spending almost no time reinforcing earlier lessons outside of class.
    Then the final assignment came, requiring us to use something called Git to copy some other code, make some changes, and make some kind of request to get those changes included. Truthfully, when I finally read the instructions two or three hours before the midnight deadline, I had no idea what the fuck was going on.
    Turns out, you need more than a couple of hours to understand an entire technology stack, fix bugs, implement features, and use version control. Whatever Frankenstein’s monster of an application I submitted must’ve been good enough (or the TA lazy enough) that, once again, I scraped by.
    But, instead of feeling accomplished, I finished with a familiar thought:
    “I’m so dumb”

    The Real World

    On day 1 of the co-op, I arrived nervous and intimidated at a corporate campus in New Jersey. The place, I thought at the time, was impressive. Even the cafeteria seemed cool. The specific office space I’d be working in had just been through a remodel, making it the nicest part of the building.
    I was introduced to my peers, also completing co-ops. Some had already released apps, most were accomplished in other ways, and then there was me — a guy who barely had a grasp of the fundamentals and definitely hadn’t built and released a product. I didn’t belong. I didn’t belong and I was the dumbest person in the building.
    After a tour and an explanation of what the team did, my manager tasked me with working on the front-end of the platform and assigned me my first ‘ticket’ (whatever that was).
    So I’m sitting in a room with a group of people I KNEW were smarter than me, assigned to work on technology I’d never touched before, at a level of complexity I’d never seen before, following a process that was completely new to me.
    Then my new manager asked me a simple question, a dumb question, an obvious question, that honestly changed my life:
    “What do you need to help you learn? A textbook? Resources? What do you need?”
    I told him a textbook would be great and just like that he handed me his copy of an AngularJS (R.I.P) textbook.
    I didn’t fully understand much about the codebase or how it all worked, and I’m sure, looking back, a lot of my early questions were incredibly revealing and stupid. But I was committed to one thing, probably for the first time in my life: I wouldn’t fail.

    How to Learn

    In the days and weeks after that first day, I came into work, opened my laptop and textbook at the picnic-style tables where all the engineers worked next to each other, and I struggled.
    I spent hours and hours and hours and hours… failing and failing and failing and sometimes succeeding. For a while after starting in the role, all I felt was the fear and anxiety that I wasn’t good enough, that I was too dumb, not smart enough, and that eventually everyone would know.
    That fear didn’t stop me, though. Whether I was in the office or at home, my laptop and that textbook were open in front of me, and I dove deep, seeking not just solutions but the what, the why, and the how of it all.
    I learned to ask for help and, most importantly, how to ask for help. I learned how to identify the root of problems in code, how to articulate where I was struggling, and how impactful the evidence of effort can be when seeking guidance.
    Over time, I was given increasingly complex tasks, new technologies to learn, and tighter deadlines. In those weeks I’d spent learning, I’d cracked the code on a missing piece of my life:
    I’d learned how to learn (so dumb, but alas)

    First Principles Thinking

    It turns out that the how is pretty simple: Go Deep, or rather First Principles Thinking.
    I had unknowingly stumbled into, and taught myself, a mental model. Gaining a core competency in application development, rather than a simple functional knowledge, required that I break down all the information into smaller fundamental truths that I could pick apart, absorb, challenge, and wrap my mind around.
    As I went through this process, it became clear that the full picture often isn’t complex at all; it’s a number of extraordinarily simple building blocks working together to form a mirage of complexity.
    So I took that lesson and began applying it elsewhere. I went about deconstructing the entirety of what I believed myself to know into core truths/building blocks, and when I reassembled the pieces I found my most fundamental truism:
    Nothing in life is too difficult to understand—if you actually want to understand it. 

    Final Thoughts

    Maybe I’m not so dumb after all.
    A guru might say enlightenment isn’t a destination but a state achieved through consistent practice. I think the same is true here.
    You’re smart because you put effort into the practice of being smart.