Re: When vibe coding, isn't the source code the prompt?
The concept that AI prompts "are the source code" isn't really far-fetched. At least not when you think about who actually uses Vibe Coding as a "useful tool."
A friend of a friend was telling me the other day about a mobile app idea they had. Having a loose background in tech, they spun up their Vibe Coding IDE, gave it a prompt, and within a few hours had some working code. The project itself isn't relevant, but suffice it to say it takes input from library X and hands it to library Y, and that is it. Nothing complex; something that would barely warrant a Proof of Concept task in a real development environment. But they wanted to see if it could be done, and I guess that is what Vibe Coding is all about.
We discussed it a little, and I brought up my background in system design, software development, and technical writing. They asked if I used Vibe Coding in my day job and I said no. The point of Proof of Concept development, and R&D in general, is not only to explore the possibilities, but to learn and understand the technology. You are learning where to find documentation, identifying the pitfalls of the solution you are investigating, and familiarizing yourself with the overall concepts of what you are trying to build. When you Vibe Code you lose a lot of this. There is a reason why, when learning a new tech stack or programming language, we start with a real-world project to actually learn the concepts. You need to grow that muscle memory.
As it was, the output they got from Vibe Coding wouldn't have gotten me very far. I would have tossed all of it, as I'd still need to set up the project the correct way, research the libraries being used, and compare them with competitors. All that was answered was "Does library X work with library Y?"
But I digress...
The idea that in the future people will be checking their prompts into source control feels off. I expect code in my SCM to generate reproducible output. The fact that AI is a fuzzy compiler, giving variable output based on its current state of mind, is a bit concerning. If the goal is to get variable output that is roughly similar, then yes, it makes sense to save off your prompts. But right now I think a lot of people have a misunderstanding of the state of AI.
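To make the "fuzzy compiler" point concrete, here is a minimal sketch that "compiles" the same prompt twice. It assumes the OpenAI Python SDK; the model name and prompt are hypothetical placeholders of my choosing. Unlike checking source into SCM and rebuilding, the two outputs are not guaranteed to match:

```
# Sketch only: assumes the OpenAI Python SDK and an API key in the
# environment. Model name and prompt are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Write a Python function that merges two sorted lists."

builds = []
for _ in range(2):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # sampling on: same "source", variable "binary"
    )
    builds.append(resp.choices[0].message.content)

# A real compiler run twice on the same source gives identical output.
# Here the two "builds" usually differ.
print("reproducible build?", builds[0] == builds[1])
```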
This same friend of a friend has modified their life to include AI in almost everything: meal planning, interactions with the community, hobbies, etc. Every time they have a question about life they ask AI. They seem to view AI as an authority due to its vast "knowledge" in all areas. What I've noticed is that their application of the information lacks the context and critical thinking you used to see when people did their own investigation. Handing over the work of thinking means you are no longer learning. It shows when you start to probe any deeper than the surface.
This is because our current AI systems don't actually understand concepts. They link groups of words together and look for similar patterns of linkages to generate responses. AI can answer the SAT analogy questions, "A is to B as C is to ___"; it just doesn't know why. The linkage exists because texts state the relationship, but the AI doesn't actually grok it (so pissed that Musk bastardized that word). This means that anyone who views AI as an authority cannot grok it either. And this lack of understanding is what will eventually be stored off in our source control systems. I am not excited about this timeline we are living in.
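You can see this pattern-linkage trick in miniature with plain word vectors, long before chat models. A minimal sketch, assuming gensim and its downloadable pretrained GloVe vectors (my choice of library; the post names none): the classic analogy falls out of co-occurrence arithmetic, with no understanding behind it.

```
# Sketch only: assumes gensim is installed; the first run downloads
# the pretrained GloVe vectors.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")

# "man is to king as woman is to ___" solved as king - man + woman.
# The answer comes purely from statistical word linkages, not from
# any concept of monarchy or gender.
result = vectors.most_similar(positive=["king", "woman"],
                              negative=["man"], topn=1)
print(result)  # typically [('queen', 0.85...)]
```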
PS. Love seeing the use of PEG. I was introduced to it via Janet. I'm a big regex guy; I've been a vim user for 30 years, so regex is in my blood.
$ published: 2025-08-05 09:50 $
$ tags: ai, development, life $
-- CC-BY-4.0 jecxjo 2025-08-05