A five-day journey through FTP failures, IP firewalls, and a webhook pivot that finally made git push enough to deploy.
There is a specific kind of laziness that drives good engineering decisions. Not the laziness that skips writing tests or names variables x, but the productive kind, the kind where you think: I don’t want to do this manually ever again, so let me spend a weekend automating it.
That was my mindset when I decided to set up auto-deployment for my personal website, alvin.id. The site is a static personal page: no frameworks, no build step, just raw HTML, CSS, and a handful of JavaScript files hosted on shared cPanel hosting. The deployment workflow at the time was embarrassingly manual: edit locally, open cPanel File Manager, upload files one by one, pray nothing broke. It worked, but it felt wrong. Every update to a blog post or a photo caption required a context switch out of my editor and into a browser file manager.
The goal was simple: git push and the live site updates itself.
What followed was a five-day arc through two completely different technical approaches, one of which hit a wall I couldn’t climb, and the other of which I wouldn’t have thought of on my own.
The Obvious Approach: GitHub Actions + FTP
The first solution that came to mind was also the most intuitive one. GitHub Actions can run arbitrary workflows on every push. Shared hosting always exposes FTP access. Connect the two, and every push to main would sync the files to the server. Done.
On February 10th, I added the first workflow:
name: 🚀 Deploy to alvin.id

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: 🚚 Get latest code
        uses: actions/checkout@v4

      - name: 📂 Sync files
        uses: SamKirkland/FTP-Deploy-Action@v4.3.5
        with:
          server: ${{ secrets.FTP_SERVER }}
          username: ${{ secrets.FTP_USERNAME }}
          password: ${{ secrets.FTP_PASSWORD }}
          server-dir: ./
          protocol: ftp
          port: 21
The logic is clean: check out the code on a GitHub-managed Ubuntu runner, then use SamKirkland/FTP-Deploy-Action, a popular GitHub Action for syncing files to a server over FTP, to push everything up. Credentials live in GitHub Secrets, not in the repository. Straightforward.
It didn’t work.
The first error was a protocol mismatch. My hosting required FTPS (FTP over explicit TLS), not plain FTP. Easy fix: change protocol: ftp to protocol: ftps and set security: loose to handle certificate quirks. The workflow ran again. Still failing, but differently this time. Authentication errors.
This kicked off what I’d charitably call a debugging marathon. Over the next five days, the git log tells the full story:
Add FTP debug step to diagnose GitHub Actions auth failure
Switch FTP deploy from SamKirkland action to lftp for compatibility
Use lftp -u flag with env vars for FTP credentials
Add FTP debug step and relax lftp TLS settings
Each iteration changed something different, but the authentication kept failing in a way that didn’t match a bad-credentials error; those produce a clear “login incorrect.” This was something else.
The Real Problem: IP Firewalls
Eventually the actual issue surfaced: my hosting provider had an IP whitelist on their FTP server. GitHub Actions runners spin up on ephemeral machines in Microsoft Azure datacenters; every workflow run gets a fresh IP from a range of thousands of possible addresses. The hosting provider’s FTP only accepted connections from pre-approved IPs. GitHub’s runner IPs were not on that list, and there was no practical way to whitelist them all.
This is a surprisingly common problem that doesn’t show up immediately in the error messages. The FTP connection attempt gets blocked at the firewall level before any credentials are even sent, so you get cryptic authentication failures that look like a credentials issue. The clean workaround is to ask your hosting provider to whitelist GitHub’s published IP ranges (GitHub publishes the full list at a JSON endpoint), but my hosting panel didn’t expose that kind of firewall control.
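For illustration, here is what that whitelist check looks like in a few lines of Python (the server-side code in this story is PHP, but Python keeps the sketch short). The CIDR ranges below are made-up samples; the real list of Actions runner ranges comes from GitHub’s meta API at https://api.github.com/meta, under the "actions" key.

```python
from ipaddress import ip_address, ip_network

# Illustrative CIDRs only -- the real ranges come from
# the "actions" key of https://api.github.com/meta.
actions_ranges = ["4.148.0.0/16", "20.199.39.224/28"]

def is_allowed(ip: str) -> bool:
    """Return True if the address falls inside any whitelisted range."""
    addr = ip_address(ip)
    return any(addr in ip_network(cidr) for cidr in actions_ranges)
```

The catch, of course, is that the list is large and changes over time, which is exactly why whitelisting every runner IP is impractical on a hosting panel without programmatic firewall control.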
After one last cleanup commit noting “pending hosting IP whitelist,” the FTP approach was a dead end.
Flipping the Direction
This is where the conversation with Claude (the LLM I was using as a collaborator throughout) took an interesting turn. I’d been treating this as an FTP problem, how do I get GitHub to successfully push files to my server? The reframe that came back was: what if the server pulled the files instead?
The core insight is about reversing the direction of trust. GitHub Actions couldn’t reach my server over FTP because the firewall blocked inbound connections from unknown IPs. But my server could freely make outbound HTTP requests to GitHub, nothing was blocking that. So instead of GitHub pushing to the server, we make the server pull from GitHub when it receives a signal that new code is ready.
That signal is a webhook.
GitHub’s webhook system lets you configure a URL that GitHub will POST to whenever something happens in your repository, a push, a pull request merge, a release. The payload contains metadata about the event: the commit hash, the pusher, the branch. Our target: https://alvin.id/deploy.php.
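Trimmed down to the fields this pipeline cares about, a push-event payload can be parsed like this (sketched in Python; the ref, after, and pusher fields are real parts of GitHub’s push payload, while the values here are made-up samples):

```python
import json

# A trimmed push-event payload; real deliveries carry many more fields.
raw = json.dumps({
    "ref": "refs/heads/main",    # the branch ref that was pushed
    "after": "abc123",           # the new head commit SHA
    "pusher": {"name": "vnby"},  # who pushed
})

event = json.loads(raw)
branch = event["ref"].rsplit("/", 1)[-1]  # "refs/heads/main" -> "main"
commit = event["after"]
pusher = event["pusher"]["name"]
```

The handler uses the branch to decide whether to deploy at all, and logs the commit and pusher for the audit trail.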
The new architecture:
git push → GitHub → POST webhook → deploy.php → downloads ZIP → extracts to public_html
No FTP. No inbound connections from GitHub. No IP whitelist needed. The server is in control of its own deployment.
Building deploy.php
deploy.php is the webhook handler, a PHP script that lives on the server and does four things when it receives a webhook.
1. Verify the signature
GitHub signs every webhook payload with HMAC-SHA256 using a shared secret you define when setting up the webhook. The script checks this before doing anything else. Without it, anyone who discovers the URL could trigger a fake deployment.
$payload = file_get_contents('php://input');
$sigHeader = $_SERVER['HTTP_X_HUB_SIGNATURE_256'] ?? '';
$expected = 'sha256=' . hash_hmac('sha256', $payload, WEBHOOK_SECRET);

if (!hash_equals($expected, $sigHeader)) {
    http_response_code(403);
    exit('Forbidden: invalid signature');
}
Note the use of hash_equals() instead of ===. Regular string comparison is vulnerable to timing attacks, an attacker can infer characters of the expected hash by measuring how long the comparison takes. hash_equals() always runs in constant time regardless of where the strings diverge.
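For anyone porting the handler off PHP, the same check sketched in Python looks like this; hmac.compare_digest is the constant-time analogue of hash_equals():

```python
import hashlib
import hmac

def verify_signature(payload: bytes, secret: bytes, sig_header: str) -> bool:
    """Check GitHub's X-Hub-Signature-256 header against the raw body."""
    # Recompute HMAC-SHA256 over the exact raw bytes GitHub signed...
    expected = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # ...and compare in constant time to avoid timing side channels.
    return hmac.compare_digest(expected, sig_header)
```

The important detail in any language is to sign the raw request body, byte for byte, not a re-serialized version of the parsed JSON.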
2. Acknowledge immediately, deploy in background
This one bit me. GitHub’s webhook system has a 10-second timeout. If your endpoint doesn’t respond with a 200 OK within 10 seconds, GitHub marks the delivery as failed. A full deployment (downloading a ZIP from GitHub, extracting it, copying files) takes longer than that.
The fix is to flush the HTTP response to GitHub first, then continue running the deployment logic with the connection closed:
// Respond 200 immediately so GitHub doesn't time out
http_response_code(200);
header('Content-Type: application/json');
header('Connection: close');
$responseBody = json_encode(['status' => 'accepted']);
header('Content-Length: ' . strlen($responseBody));
echo $responseBody;
// Flush output buffers to actually send the response
if (ob_get_level()) ob_end_flush();
flush();
// Continue deploying after GitHub has received the 200
ignore_user_abort(true);
set_time_limit(120);
This is a general async pattern: when you receive a request that triggers a long-running job, acknowledge immediately and do the work after. GitHub’s webhooks are fire-and-forget from their side, they just want confirmation the event was received. What you do with it afterwards is your business.
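Stripped of the PHP specifics, the pattern reduces to a few lines. A minimal Python sketch, where send_response and deploy are hypothetical stand-ins for the HTTP layer and the deployment work:

```python
import threading

def handle_webhook(send_response, deploy):
    # Acknowledge first, so the sender's delivery timeout never trips...
    send_response(200, '{"status": "accepted"}')
    # ...then run the slow work off the request path.
    worker = threading.Thread(target=deploy)
    worker.start()
    return worker  # caller can join() if it needs to wait
```

In PHP on shared hosting there is no thread to hand off to, which is why deploy.php instead flushes the response and keeps executing in the same process; the shape of the pattern is the same.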
3. Download, extract, and copy
With the response flushed, the script downloads the repository as a ZIP from GitHub’s archive endpoint, extracts it to a temp directory, and copies everything into public_html/:
$zipUrl = "https://github.com/" . GITHUB_REPO . "/archive/refs/heads/" . DEPLOY_BRANCH . ".zip";
$tmpZipPath = tempnam(sys_get_temp_dir(), 'gh_deploy_') . '.zip';

// Download via cURL
$ch = curl_init($zipUrl);
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_FOLLOWLOCATION => true,
    CURLOPT_SSL_VERIFYPEER => true,
    CURLOPT_TIMEOUT => 60,
]);
$zipData = curl_exec($ch);
curl_close($ch);
file_put_contents($tmpZipPath, $zipData);

// Extract to a temp directory
$tmpExtractDir = sys_get_temp_dir() . '/gh_deploy_extract';
$zip = new ZipArchive();
$zip->open($tmpZipPath);
$zip->extractTo($tmpExtractDir);
$zip->close();

// GitHub archives unpack into a "<repo>-<branch>" top-level folder
$extractedPath = $tmpExtractDir . '/' . basename(GITHUB_REPO) . '-' . DEPLOY_BRANCH;
deployFiles($extractedPath, DEPLOY_DIR, $skipList);
4. Skip sensitive files
Not everything in the repository should end up in public_html/. The skip list ensures the handler never overwrites itself mid-deployment, and keeps git internals and dev tooling off the server:
$skipList = ['.git', '.github', '.gitignore', '.DS_Store', 'deploy.php'];
The most important entry: deploy.php itself. If it were overwritten during a deploy, the in-progress execution could end up running against modified code; undefined behaviour at best. The consequence is that changes to deploy.php must be manually uploaded via cPanel, but that’s a reasonable trade-off for a file that changes rarely.
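The copy-with-skip-list logic itself is simple. A Python sketch of how I read the behaviour of deployFiles (the real implementation is PHP on the server):

```python
import shutil
from pathlib import Path

SKIP = {".git", ".github", ".gitignore", ".DS_Store", "deploy.php"}

def deploy_files(src: Path, dest: Path) -> None:
    """Copy everything from src into dest except skip-listed entries."""
    for entry in src.iterdir():
        if entry.name in SKIP:
            continue  # never ship git internals or the handler itself
        target = dest / entry.name
        if entry.is_dir():
            shutil.copytree(entry, target, dirs_exist_ok=True)
        else:
            shutil.copy2(entry, target)
```

The skip check happens at the top level of the extracted archive, which is enough here because the sensitive entries all live at the repository root.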
The Final main.yml
After all that, here is what the GitHub Actions workflow looks like now:
name: 🚀 Deploy to alvin.id

on:
  push:
    branches:
      - main

jobs:
  notify:
    runs-on: ubuntu-latest
    steps:
      - name: ✅ Deployment triggered via webhook
        run: |
          echo "Push to main detected."
          echo "GitHub will fire a webhook to https://alvin.id/deploy.php"
          echo "The server will pull and deploy the latest code automatically."
It does nothing. It is just a log message. The actual work happens on the server side, triggered by the webhook GitHub fires automatically. The workflow exists more as documentation than functionality.
Hardening: Security and Observability
Once the core pipeline worked, there were two more layers to add.
Security. The deploy.php URL is public (it has to be, since GitHub needs to reach it). Any request that isn’t a POST gets redirected to alvin.id, which prevents casual browser discovery from revealing anything interesting and reduces noise in the logs.
Logging and alerts. A silent deployment pipeline is hard to debug after the fact. Every deploy writes a timestamped entry to ~/deploy.log, outside public_html/ so it isn’t publicly accessible:
[2026-02-20 22:05:00 WIB] [INFO] Deploy started, commit: abc123, pusher: vnby
[2026-02-20 22:05:08 WIB] [INFO] Deploy successful, commit: abc123
On failure, an email alert fires automatically with the error details. You want to know when the deployment pipeline breaks before a visitor discovers the site is stale.
What “deploy = git push” Actually Feels Like
The end result is real: git push origin main and, within about 15 seconds, the live site reflects the change. No File Manager. No FTP client. No manual step of any kind. The README now says it plainly:
# deploy = just push
git push origin main
That comment is doing more work than it looks like. It is documentation for future-me, a reminder that the complexity is hidden inside deploy.php on the server, and a small celebration that the thing actually works.
Lessons
Debugging misdirects are common with network-layer failures. The FTP authentication errors were a symptom of a firewall issue, not a credentials issue. When you’re stuck on what looks like one type of error, always ask whether the error message is actually describing the root cause, or just where the failure bubbled up to.
Reversing the direction of a connection is a useful mental model. If you can’t push to X because X blocks your IPs, ask whether X can pull from you instead. It doesn’t always work, but it’s a surprisingly underused reframe. The webhook approach is actually a classic pattern for shared hosting, it just wasn’t the first thing I reached for.
LLMs as debugging collaborators change the shape of the work. Having Claude in the loop didn’t mean I stopped thinking about the problem, it meant I had something to think against. The FTP-to-webhook pivot came out of describing the IP whitelist wall and asking “what else is possible?” That’s a conversation, not a Stack Overflow search. The value isn’t in generating code; it’s in reframing the problem space when you’re tunnel-visioned on one approach.
Simple infrastructure beats clever infrastructure. The final setup is: a PHP file on a shared host, a webhook configured in GitHub, and a secret key. Nothing to maintain, nothing to update, no external services. The pile of FTP workflow YAML that came before it was more elaborate and less reliable.
The five days of commits from Add GitHub Workflows to docs: add logging/alert info are a fairly honest record of how technical decisions actually happen, not straight-line reasoning from problem to solution, but a series of attempts that teach you what the real problem is.
All commit history referenced in this article is from the alvin.id GitHub repository.
Next in the series: Protecting main: Building a Staging Gate That Actually Works.