Vibe-coding a markdown preprocessor for free in a weekend

How I vibe-coded a markdown preprocessor over a weekend with only free tools

While I am no stranger to coding, I decided to try to “vibe code” a markdown preprocessor from the ground up. The goal was to replace some mathematical manipulations I had been doing in Python notebooks with something cleaner: a tool that runs selected code fences in markdown files and replaces them with their actual results.

The full history, including the reasoning from the models that feature this capability, is available at the app’s GitHub (make sure to give it a star!). From the initial concept to the choice of programming language, and even the name (twice!), everything was vibe coded.

The Claude history was downloaded with the Claude Exporter Chrome extension. To feed the codebase to the models when necessary, I used CodeWeaver, a CLI I made a while back.

Without further ado:

I started using Gemini 2.5 Pro via https://aistudio.google.com/

User

I would like to make a markdown preprocessor CLI that runs code fences and replaces them with their result. Which language would be better? Rust, Haskell, or another one?

Model

The model provided a comprehensive breakdown of the options. It highlighted Rust for its best-in-class performance and safety, though it warned of a steep learning curve. It suggested Haskell for parsing elegance and Go or Python for rapid development. Crucially, it flagged the massive security vulnerability of running arbitrary code from a document, suggesting I implement sandboxing or whitelisting.

Despite the warnings about the learning curve, I decided to go with Rust for its robustness and the ability to ship a single binary.

If you intend to make an app public, it is important to think of a unique name for it from the beginning and check if the relevant repositories are available (which I only did later…), so I asked:

User

What could be nice names for this app?

Model

The AI suggested several categories of names: descriptive ones like “MarkRun,” metaphorical ones like “Forge” or “Catalyst,” and punchy CLI names like “Etch” or “Ink.” One stood out for its evocation of ancient magic and reading, so I chose Scroll.

With the language and name settled, I needed a blueprint.

User

develop some specifications for this app i have chosen Rust as the programming language and Scroll as the name of the app. The idea is to have the following code run for example (both inline and the block code fences:

import pandas as pd
import plotly.graph_objects as go
from plotly.subplots import make_subplots

# CSV data as a string (replace with file reading if data is in a separate file)

# Save the data to a file (for this script's execution)
# In a real scenario, you'd just provide the filename
file_path = "health.csv"

for file_path in ["health.csv", "health Samara.csv"]:
    # 1. Read the data from disk
    try:
        df = pd.read_csv(file_path)
    except FileNotFoundError:
        print(f"Error: The file '{file_path}' was not found.")
        exit()
    except Exception as e:
        print(f"Error reading the CSV file: {e}")
        exit()

in place text >pythoncode to be run rest of the stuff

Model

The model drafted a detailed specification. It proposed using clap for CLI arguments and pulldown-cmark for parsing. It suggested a syntax similar to R Markdown (using curly braces {} inside the code fence info string) and a TOML configuration file to manage runners. While solid, it felt a bit too heavy for the lightweight tool I envisioned.
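
As a rough illustration of the CLI surface such a spec implies (the struct, help text, and field names here are my own guesses, based only on the flags that show up later in this post), a clap derive definition might look like this:

```rust
// Hypothetical CLI definition using clap's derive API (clap 4.x).
// Flag names mirror the invocations seen later in this post
// (`scroll example.md -o example.out.md --watch`); they are illustrative only.
use clap::Parser;
use std::path::PathBuf;

#[derive(Parser)]
#[command(name = "scroll", about = "Runs marked code fences in Markdown and splices their output back in")]
struct Cli {
    /// Input Markdown file to preprocess
    input: PathBuf,

    /// Where to write the processed Markdown
    #[arg(short, long)]
    output: Option<PathBuf>,

    /// Re-run automatically whenever the input file changes
    #[arg(long)]
    watch: bool,
}

fn main() {
    let cli = Cli::parse();
    println!("Processing {}...", cli.input.display());
}
```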

I wanted something cleaner that wouldn’t break syntax highlighting in editors.

User

the syntax should be based on a comment before each block, so the syntax highlight is not broken. Also, a list of commands and languages could be passed as argument to the execution. […]

Model

The model pivoted to a “magic comment” syntax (e.g., <!-- scroll: execute -->) placed before code blocks. However, it started adding complex state management for arguments and configurations. I realized I was letting the scope creep, so I told it to strip it back: no arguments, no config files, and—perhaps controversially—drop the security guardrails. If you run scroll on a file, you trust that file.

User

it is getting overcomplicated. There is no need to pass arguments to the scripts. Inline code should also be activated by comments. There is no need for a config file either. The comment will specify a language and optionally a command to run the following code. Once specified, this command is the default for that language until changed. Also, drop the security measures. If scroll is explicitly run to preprocess a file, the risks are known.

Model

We were getting closer, but the architecture still felt heavy. The model was designing a system to manage long-running processes and inter-process communication. I realized we could simplify everything by treating the whole operation as a batch job.

User

how about not keeping track of running processes? Just read the md, assemble one script for each language, insert the necessary print statements, run the scripts, get the results, and replace them?

Model

This was the breakthrough moment. The model validated this “Batch Execution” approach. Instead of keeping a complex REPL open, Scroll would simply read the whole file, glue all Python blocks into one big script, run it once, and splice the output back in.
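
To make the idea concrete, here is a minimal sketch of the batch approach (my own illustration, not the generated code): every block of a language is glued into one temporary script, executed once, and the combined stdout is captured for splicing.

```rust
// Minimal sketch of the batch idea: glue all blocks of one language into a
// single temporary script, run it once, and capture everything it printed.
use std::process::Command;

fn run_batch(interpreter: &str, blocks: &[String]) -> std::io::Result<String> {
    // Concatenate the blocks in document order into one script.
    let script = blocks.join("\n");
    let path = std::env::temp_dir().join("scroll_batch_script");
    std::fs::write(&path, script)?;

    // One process per language for the whole document, instead of a live REPL.
    let output = Command::new(interpreter).arg(&path).output()?;
    Ok(String::from_utf8_lossy(&output.stdout).into_owned())
}
```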

I refined the spec one last time to ensure ease of use:

User

1 if no directive to define the runner is given, use sensible defaults
2 no need to handle artifacts. Just make sure the script has the same working directory as the file, and the user can simply generate the artifact and create the markdown piece that uses it

Model

The model generated a final, clean specification: a two-pass batch system where the user is fully responsible for the code they write. A quick syntax tweak to make it even cleaner:

User

great! Just drop the lang and : from the syntax (<!-- scroll: lang --> should just be <!-- scroll -->) and we are good to go!

Before writing any code, I asked the model to check for blind spots.

Model

The model suggested adding Watch Mode (re-running on file save) and Caching. These were excellent ideas, so I asked it to incorporate them into the plan immediately, along with better error handling.

User

Change the following:
1 for the error, when some error occurs, generate another file, i.e. <output_name>.err.md, and not the original one
2 Watch Mode and Caching. Env vars should be set normally and read from the generated scripts,
3 make sure the implementation is cross platform.

Then, generate an implementation plan for the app

Model

The model produced the implementation plan. With the plan solid, it was time to generate the code.

User

great! Generate the complete files and step by step instructions to implement scroll

Model

The model generated the initial Rust code, split into parser.rs, executor.rs, and main.rs.

I then started a new session with Gemini to iterate on this base code and handle specific implementation details, feeding it the code generated in the previous step.

User

Change the codebase as little as possible so that the example

````markdown
# My Report

This report demonstrates `scroll`. First, let's define a variable in Python.
<!-- scroll -->
```python
message = "Hello from a persistent Python script!"
```
Now, let's print that variable in a separate code block.
<!-- scroll -->
```python
print(message + "\n")
```
And here is a calculation:
<!-- scroll -->
```python
x = 100
y = 50
print(f"The result is: {x + y}")
```

And here is a calculation `x + y` = <!-- scroll -->`x + y`:
````

is correctly transformed into:

```markdown
# My Report

This report demonstrates `scroll`. First, let's define a variable in Python.
Now, let's print that variable in a separate code block.
Hello from a persistent Python script!

And here is a calculation:
The result is: 150

And here is a calculation `x + y` = 150:
```

Model

The model understood that to achieve this, we needed a more robust parsing strategy.

User

Formulate a plan for a more elegant solution for engine.rs. Output only the plan.

Model

The model proposed a complete architectural overhaul. Instead of messy regex replacements on the raw string, it suggested a Single-Pass Parser creating an intermediate representation (Vec<Segment>), followed by a Batch Execution phase, and finally a Sequential Assembly phase.
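
In Rust terms, that intermediate representation is essentially an enum over document pieces; the variant names below are my own, not necessarily what ended up in the repository.

```rust
// Hypothetical intermediate representation: the document becomes a flat list
// of segments, and only the executable ones are touched in later phases.
enum Segment {
    /// Plain Markdown, copied through untouched.
    Text(String),
    /// A fenced block marked for execution by a preceding directive.
    Block { lang: String, code: String },
    /// An inline snippet marked for execution.
    Inline { lang: String, code: String },
    /// A directive that (re)defines the runner for a language; emits nothing.
    RunnerDef { lang: String, command: String },
}

/// The whole document, in order, ready for batch execution and reassembly.
type Document = Vec<Segment>;
```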

User

Does this handle blocks that don’t generate outputs, and blocks that generate multiple outputs?

Model

The model confirmed that by using a delimiter strategy (printing a unique string between blocks), it could correctly map empty outputs (side-effect only code) and multi-line outputs back to their respective blocks.
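
The mechanism is worth spelling out with a sketch (the delimiter value is made up, and the hard-coded Python print is illustrative): the assembled script prints a unique sentinel after every block, so splitting the combined stdout yields exactly one chunk per block, empty chunks included.

```rust
// Sketch of the delimiter strategy: print a unique sentinel after each block
// so the combined stdout splits into exactly one chunk per block, including
// empty chunks for blocks that print nothing.
const DELIMITER: &str = "---SCROLL-DELIMITER-9c41---"; // made-up value

/// Assemble one script, appending a delimiter print after every block.
/// (Hard-coded Python print here, which is exactly the limitation raised
/// in the next exchange.)
fn assemble_script(blocks: &[String]) -> String {
    blocks
        .iter()
        .map(|code| format!("{code}\nprint(\"{DELIMITER}\")"))
        .collect::<Vec<_>>()
        .join("\n")
}

/// Split captured stdout back into per-block chunks.
fn split_outputs(stdout: &str, n_blocks: usize) -> Vec<String> {
    let sep = format!("{DELIMITER}\n");
    let mut chunks: Vec<String> = stdout.split(sep.as_str()).map(str::to_owned).collect();
    chunks.truncate(n_blocks); // drop the empty tail after the final delimiter
    chunks
}
```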

User

great. but how do we inject that in arbitrary languages, without knowing their print keyword?

Model

This was a tricky one. The model originally wanted to hardcode print statements inside the engine. When challenged, it pivoted to a configuration-driven approach where the Runner struct defines how to print a delimiter for that specific language.
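
A sketch of what “configuration-driven” means here (the struct and field names are my own invention): the runner carries the statement that prints the delimiter in its own language, so the engine never needs to know any print keyword.

```rust
// Hypothetical runner configuration: each runner knows how to print the
// delimiter in its own language, keeping the engine language-agnostic.
struct Runner {
    /// Command used to execute the assembled script, e.g. "python3" or "node".
    command: String,
    /// Statement that prints the delimiter, with "{}" as a placeholder,
    /// e.g. `print("{}")` for Python or `console.log("{}")` for Node.
    delimiter_statement: String,
}

impl Runner {
    /// Render the delimiter-printing line to splice between blocks.
    fn delimiter_line(&self, delimiter: &str) -> String {
        self.delimiter_statement.replace("{}", delimiter)
    }
}
```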

User

there is no need for inline_template. The user should implement the print, if he wants to, himself (sorry for the malformed example)

Model

I realized I was over-engineering the “magic.” I wanted the user to have full control. If they want output, they print it. The model simplified the Runner struct to remove implicit formatting templates.

User

great. One last thing: during the md, the user can redefine the runner. How do we deal with that?

Model

This added a layer of complexity: state changes mid-document. The model correctly identified that we needed to track the “Active Runner” state linearly as we parsed the document, grouping code blocks into batches based on the runner configuration active at that moment.
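
Concretely, and as a sketch only (with simplified types of my own), a single pass carries the currently active runner per language, and each block joins the batch keyed by whatever configuration was active when the block appeared:

```rust
// Sketch of linear runner tracking: directives update the active runner, and
// each block joins the batch keyed by the configuration active at that point.
use std::collections::HashMap;

#[derive(Clone, PartialEq, Eq, Hash)]
struct RunnerConfig {
    command: String,
}

enum Item {
    /// A directive redefining a language's runner (syntax simplified here).
    RunnerDef { lang: String, config: RunnerConfig },
    /// A code block marked for execution.
    Block { lang: String, code: String },
}

fn group_batches(items: &[Item]) -> HashMap<(String, RunnerConfig), Vec<String>> {
    let mut active: HashMap<String, RunnerConfig> = HashMap::new();
    let mut batches: HashMap<(String, RunnerConfig), Vec<String>> = HashMap::new();

    for item in items {
        match item {
            Item::RunnerDef { lang, config } => {
                // A mid-document directive simply replaces the active runner.
                active.insert(lang.clone(), config.clone());
            }
            Item::Block { lang, code } => {
                let config = active.get(lang).cloned().unwrap_or(RunnerConfig {
                    command: lang.clone(), // sensible default: command == language
                });
                // Blocks stay in document order inside each batch's Vec.
                batches.entry((lang.clone(), config)).or_default().push(code.clone());
            }
        }
    }
    batches
}
```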

User

Great! Generate the complete changed files to implement this, for easy copy and paste!

Model

The model generated the code. I pasted it in. It compiled, but runtime errors appeared immediately.

User

Compiling scroll v0.1.0 (C:\Users\carlos.t.santos\Desktop\Files\Reps\Scroll)
Finished dev profile [unoptimized + debuginfo] target(s) in 2.11s
Running target\debug\scroll.exe -help
A markdown preprocessor that executes embedded code blocks

Finished dev profile [unoptimized + debuginfo] target(s) in 0.05s
Running target\debug\scroll.exe example.md -o example.out.md --watch
❌ Error: Runner for '-->' is missing a 'command'

👀 Watching for changes in 'example.md'. Press Ctrl+C to exit.

Model

The regex was too greedy. It was trying to parse the closing comment arrow --> as a language name.
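
For illustration (this is not the repository’s exact pattern), the kind of fix involved is constraining the optional language capture to word characters, so a bare directive can never yield --> as a language:

```rust
// Illustrative directive pattern (not the repo's exact regex): restricting the
// optional language capture to word characters means a bare `<!-- scroll -->`
// can never produce "-->" as a language name.
use regex::Regex;

fn main() {
    let directive = Regex::new(r"<!--\s*scroll(?:\s+(\w+))?\s*-->").unwrap();

    let bare = directive.captures("<!-- scroll -->").unwrap();
    assert!(bare.get(1).is_none()); // no language captured

    let with_lang = directive.captures("<!-- scroll python -->").unwrap();
    assert_eq!(&with_lang[1], "python");

    println!("directive regex behaves as expected");
}
```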

User

the error persists. Let me make a test: what should the output of the example be, and why?

Model

I forced the model to verbally walk through the logic to prove it understood the task before generating more code. It correctly identified that the regex was ambiguous and fixed it.

User

Compiling scroll v0.1.0 (C:\Users\carlos.t.santos\Desktop\Files\Reps\Scroll)
Finished dev profile [unoptimized + debuginfo] target(s) in 4.40s
Running target\debug\scroll.exe -help

Finished dev profile [unoptimized + debuginfo] target(s) in 0.14s
Running target\debug\scroll.exe example.md -o example.out.md --watch
❌ Error: Execution failed for command 'python'.
Stderr:
Traceback (most recent call last):
File "", line 1, in
NameError: name 'message' is not defined

👀 Watching for changes in 'example.md'. Press Ctrl+C to exit.

Model

This error revealed a critical flaw in the batching logic. It was grouping adjacent blocks, but if there was text between them, it started a new batch (and a new process), causing variables defined in the first block to be lost. The model had to rewrite the engine to collect all blocks for a specific runner configuration across the entire file into a single batch.

User

It works. Just a little problem with newlines:

My Report

This report demonstrates scroll. First, let’s define a variable in Python.

message = "Hello from a persistent Python script!"

Now, let’s print that variable in a separate code block.

print(message + "\n")

This problem actually persisted for about ten more exchanges, at which point I decided to try Claude Sonnet 4.5. I try to use it sparingly because of its lower request limits. I do not find it particularly better or worse than other models, but it sometimes offers new insights.

User

Can you simplify and fix this code? example.md should be transformed exactly into example.out.md, respecting the newlines

Model

Claude analyzed the code and pointed out that the replacement logic was complex and error-prone.

User

File: example.out.md

that was the generated output…

Model

Claude realized that the double newlines were coming from Python’s implicit newline and the tool adding another one.
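
One standard way to handle this (my own sketch; the post does not show the exact change) is to strip exactly one trailing newline from each captured chunk before splicing it in, since print already supplies one.

```rust
// Python's print() already appends '\n'; if the tool then writes its own
// newline after the chunk, blank lines double up. Stripping exactly one
// trailing newline (CRLF-aware) from the captured chunk avoids that.
fn trim_one_newline(chunk: &str) -> &str {
    chunk
        .strip_suffix("\r\n")
        .or_else(|| chunk.strip_suffix('\n'))
        .unwrap_or(chunk)
}

fn main() {
    assert_eq!(trim_one_newline("The result is: 150\n"), "The result is: 150");
    assert_eq!(trim_one_newline("no trailing newline"), "no trailing newline");
}
```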

User

File: example.out.md

Model

We went back and forth on the exact expected output structure.

User

File: example.out.md

Try to simplify the regexes and the implementation in general

Model

Claude proposed a simplified regex strategy.

User

runners must be able to be overwritten during the script

Model

Claude fixed a regression: runners must be parsed linearly, not upfront.

I decided to start a new prompt with Claude and ask about the error:

User

File:
File: example.md
File: example.out.md

why

```python
x = 100
y = 50
print(f"The result is: {x + y}")
```

is getting a newline before it?

Model

The model found the regex wasn’t consuming the newline before the block.

User

the output is like this:

My Report

This report demonstrates scroll. First, let’s define a variable in Python.

And here is a calculation x + y = 150:

Model

Claude confirmed the blank lines were correct: they reflected the original file’s structure.


User

the output (WRONG AGAIN…)

Model

The model acknowledged the mismatch and dug back into the regex and replacement logic. It traced the remaining issues to newline handling and to how empty-output blocks were being handled in the assembly pass.

User

the block

message = "Hello from a persistent Python script!"

Should vanish, as it does not generate output. Its four fucking lines gone!

The block below

x = 100
y = 50
print(f"The result is: {x + y}")

shouldn’t have a newline BEFORE it. Yet, your dumbass result is like below:

My Report

This report demonstrates scroll. First, let’s define a variable in Python.
Now, let’s print that variable in a separate code block.

Hello from a persistent Python script!

And here is a calculation:

The result is: 150

And here is a calculation x + y = 150:

Model

At that point I lost my cool. The model finally realized the issue wasn’t the runtime output but the regex: it didn’t consume the trailing newline after fenced blocks. The fix: ensure the fenced-block regex consumes the optional trailing \r?\n?, and simplify replacement to “replace exactly what the block matched with the block’s stdout (trimmed appropriately for inline blocks).”

User

Is this right, dumbfuck?

Model

The model apologized, fixed the regex to include the trailing newline when matching a fenced block, and simplified the replacement strategy to “trust the user”: fenced blocks are replaced with raw block output (including newlines inside printed output), inline blocks are trimmed to a single inline output token, and runner-definition directives produce no output (they are consumed).
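
For reference, an illustrative version of such a block pattern (not necessarily the repo’s exact BLOCK_RE): the trailing \r?\n? is part of the match, so replacing the whole match leaves no stray blank line behind.

```rust
// Illustrative fenced-block pattern: `(?s)` lets `.` cross lines, and the
// trailing `\r?\n?` makes the match swallow the newline after the closing
// fence, so replacing the whole match leaves no stray blank line.
use regex::Regex;

fn main() {
    let block_re =
        Regex::new(r"(?s)<!--\s*scroll\s*-->\r?\n```(\w+)\r?\n(.*?)```\r?\n?").unwrap();

    let doc = "before\n<!-- scroll -->\n```python\nprint('hi')\n```\nafter\n";
    let replaced = block_re.replace(doc, "hi\n");
    assert_eq!(replaced, "before\nhi\nafter\n");
    println!("{replaced}");
}
```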

User

does this implementation support overwriting the runner in the middle of the script?

Model

Yes. Because we parse the document linearly into segments and track the active runner configuration at each point, a runner-definition directive that appears mid-file becomes the active runner for the blocks that follow, and the batching and assembly phases honor that.

User

it works. Generate a plan to implement watch mode. Also, add an optional argument to let the user specify a command to call after the file is generated, both in watch and normal modes

Model

Plan (summary): add a watch loop that reprocesses the file whenever the input changes, plus an optional flag (the eventual --on-finish) that runs a user-specified command after each successful generation, in both watch and normal modes.

User

great! generate the complete modified files, for easy copy and paste

Model

The model produced the required code modifications and split the logic into config.rs, engine.rs, and main.rs with watch-mode glue. The files compiled.
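
As a sketch of the overall shape, here is a dependency-free polling version of watch mode (the real implementation may well use a file-watching crate; process_file is a placeholder, and the post-generation hook corresponds to the --on-finish flag that shows up later in this post):

```rust
// Dependency-free sketch of watch mode: poll the input file's mtime, re-run
// the processing step on change, then fire the optional post-generation
// command. `process_file` and `on_finish` are placeholders, not the real API.
use std::{path::Path, process::Command, thread, time::Duration};

fn watch(input: &Path, on_finish: Option<&str>, process_file: impl Fn(&Path)) {
    let mut last = std::fs::metadata(input).and_then(|m| m.modified()).ok();
    loop {
        thread::sleep(Duration::from_millis(500));
        let now = std::fs::metadata(input).and_then(|m| m.modified()).ok();
        if now != last {
            last = now;
            process_file(input);
            if let Some(cmd) = on_finish {
                // Run the post-generation hook through the platform shell.
                #[cfg(windows)]
                let status = Command::new("cmd").args(["/C", cmd]).status();
                #[cfg(not(windows))]
                let status = Command::new("sh").args(["-c", cmd]).status();
                if let Err(e) = status {
                    eprintln!("on-finish command failed to start: {e}");
                }
            }
        }
    }
}
```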

User

Change the codebase to use pomsky instead of regex

Model

This was an experiment: replace the handwritten regexes with pomsky (via the pomsky-macro crate) to get more readable patterns that are checked at compile time. The model attempted the conversion.

User

there is no pomsky_lib

Model

The model had hallucinated a pomsky_lib crate. The real dependency is pomsky-macro, whose macro expands to an ordinary regex string that is still compiled with regex::Regex. The code was corrected to use pomsky-macro on top of regex::Regex, but the approach proved fragile, and I ultimately reverted to plain regex with clearer, separate patterns per block type.

User

riiiight. So revert to regex, but keep the improved and tested UNIFIED_RE and BLOCK_RE patterns that correctly include the trailing newline. Also ensure the parser produces Vec<Segment> intermediate representation.

Model

Done. The updated UNIFIED_RE and BLOCK_RE patterns now treat fenced blocks as including their trailing newline, which solved the newline war. The parser produces a Vec<Segment> in which each segment is plain Markdown passed through untouched, an executable fenced block, an executable inline snippet, or a runner-definition directive (which is consumed and produces no output).

The executor collects blocks by runner state into batched scripts, inserts delimiters only when configured (the Runner struct no longer injects print templates by default), runs each batch, splits on the delimiter token, and maps outputs back to segments.

Branding and Documentation

With the code finally working perfectly, I returned to Gemini to prepare for release. It was time to make it professional.

User

The name scroll is taken. Can you think of other nice names?

Model

We re-ran the naming exercise. The model first suggested MarkRun, Quill, and Fuse (all taken), then Sigil, Weft, Inscribe, and Markout. Inscribe was available, and it better matches the project’s mechanics and branding.

User

Inscribe it is! Generate a professional readme for the app

Model

A polished README.md was generated. It included badges, installation via cargo install inscribe, usage examples, and a highlighted section: Live Reloading with Watch Mode plus --on-finish use cases.

User

change the codebase to reflect the renaming

Model

All references were updated: Cargo.toml package name and description, README titles and examples, regex patterns from scroll to inscribe, and internal delimiter constant renamed to INSCRIBE_DELIMITER (and exported via config.rs).

User

professionally comment this codebase so I can upload it to crates.io

Model

Doc comments were added (//!, ///) and public API documentation expanded for Runner, process_markdown, and CLI flags. Cargo.toml gained metadata fields and keywords.

User

create some nice examples to showcase the functionality of this codebase

Model

The model generated an examples/ folder with:

  1. System reports
  2. Custom runners
  3. Data visualization with matplotlib
  4. Reading JSON files

User

Can you improve the readme? Be sure to add a mention to Watch mode

Model

README updated with a dedicated Live Reloading (Watch Mode) section showing inscribe --watch example.md -o example.out.md --on-finish "pandoc {{output}} -o final.pdf".

User

in key feature, generate a line about how the session is preserved for each language. Output only the new line

Model

  • Stateful Execution: Code blocks of the same language share a single runtime session, allowing variables and state to persist from one block to the next.

User

generate a show hn text

Model

Three Show HN drafts were produced; the chosen one was friendly, succinct, and highlighted the Rust pedigree and the watch mode.

User

just clear the list below for the body text

Model

A plain-text body list was produced for the Show HN post.

User

just clear the formatting, since this is plain text

Model

All markdown bolding and icons removed; plain text version output.

User

change the code below to also output current date and time in human readable format:

            println!(
                "\n🔄 Detected change, reprocessing '{}'...",
                input_path.display()
            );

Model

chrono added to Cargo.toml; src/main.rs now logs Local::now().format("%Y-%m-%d %H:%M:%S").
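
For completeness, a sketch of roughly what that log line becomes with chrono (the wrapper function is illustrative):

```rust
// Sketch of the updated watch-mode log line using chrono's local time.
// Requires `chrono` in Cargo.toml; the surrounding function is illustrative.
use chrono::Local;
use std::path::Path;

fn log_reprocess(input_path: &Path) {
    println!(
        "\n🔄 [{}] Detected change, reprocessing '{}'...",
        Local::now().format("%Y-%m-%d %H:%M:%S"),
        input_path.display()
    );
}
```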

User

Everything compiles. Anything else left?

Model

The model went through a final checklist of remaining items before release.

User

Ready to publish?

Model

Yes. The model produced a publish-checklist.md with steps: run cargo test, ensure a license file is present, bump the version, run cargo publish --dry-run, then cargo publish. It also recommended adding GitHub Actions CI.

There you have it.

Link