How Airproxy built its 2,000-proxy mobile infrastructure from scratch
How to start and scale a mobile proxy factory
This is the story of how we built Airproxy.io — a mobile proxy provider with over 2,000 active SIMs and modems — entirely from scratch, back when plug-and-play solutions didn’t exist.
But first, let’s make sure we’re all on the same page regarding IP types.
Understanding the differences between proxy types
Datacenter proxies are the easiest to detect. They originate from cloud providers and can be flagged instantly — even without advanced detection tools.
Residential proxies are stealthier. They rely on real users (often unknowingly) who install software that routes traffic through their home connections. While they work well for scraping, services like Spur.us track and flag most of them as proxy traffic. And providers don’t control the underlying infrastructure: the IPs belong to end users, not the provider.
Mobile proxies, on the other hand, use real SIM cards and 4G connections. They're hosted directly on the provider’s hardware (modems or phones) inside server rooms. Mobile IPs rotate naturally, and carriers put thousands of legitimate users behind the same carrier-grade NAT addresses, so flagging or blacklisting a mobile IP reliably would mean blocking real customers too. That’s why they’re considered the most trusted and resilient option for advanced scraping, automation, and account farming.
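To make the mechanics concrete, here’s a minimal Python sketch of what using such a proxy looks like from the client side. The host, port, and credentials are placeholders (not real Airproxy values), and api.ipify.org is just a public service that echoes back the IP it sees.

```python
# Minimal sketch: send a request through an HTTP proxy and inspect the exit IP.
# The proxy host, port, and credentials below are placeholder values.
import requests

PROXY = "http://user:password@proxy.example.com:3128"  # hypothetical mobile proxy
proxies = {"http": PROXY, "https": PROXY}

# api.ipify.org echoes back the IP it sees, i.e. the proxy's current mobile IP.
exit_ip = requests.get("https://api.ipify.org", proxies=proxies, timeout=10).text
print("Exit IP as seen by the target site:", exit_ip)
```

Running this before and after an IP rotation would show two different carrier IPs, which is exactly what makes blacklisting impractical.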
Why we started Airproxy
Back in 2018, Instagram automation tools like MassPlanner and Jarvee were booming. Everyone used datacenter proxies — they were cheap and worked well... until Instagram changed the game.
IG started blacklisting datacenter IPs, making automation nearly impossible without getting blocked.
The only viable alternative? Mobile proxies. Since they matched the type of connection real IG users had, they worked flawlessly.
However, at the time:
Only 3–5 mobile proxy providers existed worldwide
None had proper websites or infrastructure
In Italy, there were zero providers
We saw the gap and decided to build everything from scratch.
How it all started: the brutal first months
Today, Airproxy.io operates 2,000+ USB modems, each with a dedicated SIM card, across 6 server rooms in Italy. But the early days were anything but easy.
There was no documentation, no one to learn from. Everything was trial and error.
Some of the questions we had to figure out on our own:
What hardware do we need?
How do we power hundreds of USB modems efficiently and safely?
What kind of USB hubs can handle high loads without crashing?
Can we use Raspberry Pi–like micro-PCs, or do we need full servers?
Do we go with rackmount setups or build custom frames?
How many modems can a single power supply unit handle reliably?
What kind of internet connection do we need for 100+ proxies?
Is symmetric fiber mandatory, or can we scale using 4G fallback?
What’s the minimum upload/download bandwidth per active proxy?
Where can we buy hundreds of SIM cards? Are consumer plans even allowed?
Will the carrier ban us for “unusual” usage patterns?
Can we rotate SIMs automatically without breaking the terms of service?
Which software stack should we use to manage the proxy farm?
Is 3proxy enough, or do we need to write our own tools?
Can we build an admin panel to monitor and control proxies in real time?
Are we allowed to resell mobile data in our country?
What do telecom providers say about this type of usage?
What’s the maximum density per location before hitting limits like BTS (cell tower) saturation?
Looking back, all of these questions seem simple — but in 2018, each one required weeks or months of experiments.
I’m not a developer myself — but I knew one thing for sure: the demand for real mobile proxies was huge, and almost nobody was offering them properly.
So I did what any determined founder would do: I hired a couple of freelancers to build the initial version. Unfortunately, none of them were able to get the project off the ground. It felt like a dead end.
Then, luck stepped in. I met Mikhail, a brilliant developer who not only understood the vision but had the skills to make it happen. He’s still with us today, and he’s been instrumental in building everything.
Mikhail chose to run our proxies using 3proxy, and he built our site and dashboard with Django — a choice that proved to be both robust and scalable.
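For context, here’s a minimal, illustrative 3proxy configuration for a single proxy instance. Every value below (DNS server, credentials, IPs, port) is a placeholder, and our production config is customized well beyond this:

```
# Illustrative 3proxy config (placeholder values, not our production setup).
nserver 8.8.8.8              # DNS resolver used by the proxy
auth strong                  # require username/password authentication
users customer1:CL:s3cret    # CL means the password is given in cleartext
allow customer1
# Serve HTTP proxy traffic on port 3128 and send outgoing traffic through
# the modem's interface address (placeholder 10.0.0.5).
proxy -p3128 -i0.0.0.0 -e10.0.0.5
```

In a setup like this, you’d run one such instance per modem, each bound to that modem’s own interface.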
There was another hurdle: we didn’t even have fiber in our area. When you're running 100+ active proxies, you need at least 300–500 Mbps of symmetric bandwidth (roughly 3–5 Mbps of headroom per active proxy), so we rented a small office in a northern Italian city that had early fiber coverage. That office became our first real location.
Luckily, the building had a direct line of sight to a cell tower covering all major Italian mobile networks.
How our infrastructure runs today
Hardware side
Each server room contains shelving that holds 3 to 5 racks (up to 600 modems/proxies in total).
We designed and built these rack frames in-house so they fit on shelves that can easily be placed in any office or server room.
Each of our racks hosts up to 120 modems connected via custom-designed 10-port USB hubs. These hubs aren’t available on the market — we had them specifically engineered and assembled for us, based on our technical requirements and real-world experience managing large numbers of USB devices.
The most important feature they offer is Per-Port Power Switching: this allows us to control the power supply of each individual USB port programmatically.
In practice, if a modem becomes unresponsive, our monitoring system detects the failure and sends a command to power-cycle just that port. The modem is effectively "unplugged and replugged" automatically, without requiring any manual intervention on site.
This feature might seem minor, but it’s critical at scale. Providers who rely on off-the-shelf USB hubs often lack this functionality and are forced to manually reboot crashed modems — a process that’s slow, error-prone, and can lead to hours of downtime.
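Our hubs use our own control tooling, but the same idea can be illustrated with uhubctl, an open-source utility for hubs that support per-port power switching. In this sketch the hub location and port number are placeholders:

```python
# Sketch of automated modem recovery via per-port USB power switching.
# uhubctl is shown as an open-source stand-in for our custom hub tooling;
# the hub location "1-1" and port number are placeholder values.
import subprocess
import time

def power_cycle_modem(hub_location: str, port: int) -> None:
    """Turn a single USB port off and back on: an automatic unplug/replug."""
    subprocess.run(
        ["uhubctl", "-l", hub_location, "-p", str(port), "-a", "off"],
        check=True,
    )
    time.sleep(3)  # let the modem power down completely
    subprocess.run(
        ["uhubctl", "-l", hub_location, "-p", str(port), "-a", "on"],
        check=True,
    )

# Invoked by the monitoring system when a modem stops responding:
power_cycle_modem("1-1", 2)
```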
Each 10-port USB hub is connected to a micro-PC (similar to a Raspberry Pi) where the proxy service runs.
Redundant fiber connection
To ensure maximum uptime and reliability, each server room is connected to the internet through two independent fiber lines from separate ISPs. These redundant connections are configured in failover mode: if one line experiences downtime or degradation, traffic is automatically rerouted through the second line with no interruption in service.
This setup guarantees continuous access to our proxies and prevents outages — even in the event of carrier maintenance, local disruptions, or hardware failures on one of the lines. It’s a critical layer of resilience.
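The failover itself happens at the router level, but the underlying logic is simple enough to sketch. In this example the interface names, gateway addresses, and probe host are assumptions, not our actual configuration:

```python
# Rough sketch of dual-ISP failover logic (in practice handled by the router).
# Interface names, gateways, and the probe host are placeholder values.
import subprocess
import time

PRIMARY = ("eth0", "192.168.1.1")  # fiber line A: interface, gateway
BACKUP = ("eth1", "192.168.2.1")   # fiber line B: interface, gateway
PROBE_HOST = "1.1.1.1"             # well-known host used as a reachability probe

def line_is_up(interface: str) -> bool:
    """Ping the probe host out of a specific interface."""
    return subprocess.run(
        ["ping", "-c", "3", "-W", "2", "-I", interface, PROBE_HOST],
        capture_output=True,
    ).returncode == 0

while True:
    iface, gateway = PRIMARY if line_is_up(PRIMARY[0]) else BACKUP
    # Point the default route at whichever line is currently healthy.
    subprocess.run(
        ["ip", "route", "replace", "default", "via", gateway, "dev", iface],
        check=True,
    )
    time.sleep(30)  # re-evaluate every 30 seconds
```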
Power continuity and backup systems
All of our hardware is connected to an online UPS, which keeps the entire infrastructure running for 30 to 45 minutes in the event of a power outage. This buffer allows the systems to stay online during short blackouts, power grid fluctuations, or planned maintenance without impacting the availability of our proxies.
The online UPS, unlike lower-end models, delivers a fully regenerated power output: the outgoing current is completely independent from the input, both in voltage and frequency. This results in clean, stable power, far superior to what’s typically supplied by the public grid — ideal for protecting sensitive networking and server equipment.
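UPS state is worth watching programmatically too. As a small illustration (not our exact tooling), Network UPS Tools exposes battery data that a script can poll; "myups@localhost" is a placeholder UPS name:

```python
# Illustration: polling UPS battery charge via Network UPS Tools (NUT).
# "myups@localhost" is a placeholder UPS name, not our actual setup.
import subprocess

def battery_charge(ups: str = "myups@localhost") -> int:
    """Ask the NUT daemon for the battery charge percentage."""
    out = subprocess.run(
        ["upsc", ups, "battery.charge"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return int(out)

# With a 30-45 minute runtime, alerting at 50% leaves time to act.
if battery_charge() < 50:
    print("UPS battery below 50%: prepare the generator.")
```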
In case of prolonged outages, we’re fully prepared: we have portable gasoline generators ready for rapid deployment to any of our server rooms. These generators can power the entire setup and are regularly tested to ensure readiness.
This dual-layer power strategy ensures that our infrastructure remains operational and connected — even in the most adverse conditions.
Additionally, each server room is equipped with an automatic-reset circuit breaker installed in the main electrical panel. This device monitors electrical faults and, in the event of a power surge or overload, trips like a standard breaker to protect the circuit. However, unlike a manual switch, it automatically performs a safety check and — if no faults persist — resets itself to restore power without human intervention. This feature significantly reduces downtime caused by transient current issues and ensures that the infrastructure can recover quickly, even in our absence.
Software side
Software stack and monitoring
Without going into excessive detail, all of our proxy hosts run on Linux, and the proxy service itself is handled by 3proxy — a lightweight and efficient proxy server that we’ve slightly customized to better suit our architecture.
On the backend, our site is built with Django, and we use Zabbix as our central monitoring system. It continuously checks the health of every proxy, modem, and server in our infrastructure.
If a proxy goes offline or becomes unresponsive, Zabbix automatically triggers recovery actions, such as power-cycling the corresponding modem. If these fail, it immediately notifies our team.
Zabbix also monitors our fiber connections and server availability, alerting us in real time to any outages or performance degradation across the entire network.
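As an example of what such a check can look like (a sketch, not our production script), here is a small health check that a monitoring system like Zabbix can invoke as an external script, printing 1 if the proxy answers and 0 otherwise; the proxy URL passed on the command line is hypothetical:

```python
#!/usr/bin/env python3
# Sketch of a proxy health check suitable as a monitoring external script.
# Usage (placeholder address): check_proxy.py http://user:pass@10.0.0.5:3128
import sys
import requests

def proxy_alive(proxy_url: str) -> bool:
    """Return True if a test request through the proxy succeeds."""
    try:
        response = requests.get(
            "https://api.ipify.org",
            proxies={"http": proxy_url, "https": proxy_url},
            timeout=10,
        )
        return response.ok
    except requests.RequestException:
        return False

if __name__ == "__main__":
    print(1 if proxy_alive(sys.argv[1]) else 0)
```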
Now and the future
Building Airproxy was never about shortcuts — it was about control, precision, and building stuff that simply didn’t exist.
Every piece of code, every hardware configuration, and every process has been tested, rebuilt, and improved over the past six years.
We are just getting started, ready to expand to other countries 👀