Some sites can be scraped with plain HTTP. Most operationally interesting ones cannot — they require a real browser executing JavaScript, holding a session, and behaving like a human. This topic covers how to build that reliably.
Browser automation is what you reach for when an API doesn't exist (or doesn't expose what you need), and the site requires JavaScript execution, session state, or human-like interaction to work. That includes most modern e-commerce admin panels, customer service tools, vendor portals, marketplaces with aggressive bot protection, and any internal SaaS that doesn't have a public API.
The work splits into two flavors: read-only automation (scraping data the site renders) and action automation (clicking, submitting, navigating multi-step workflows). Both share the same underlying architecture — they just have different failure modes and different recovery strategies.
Sites built with React/Vue/SPA frameworks where the data you need only exists after JavaScript runs. Plain requests gets you only the loading skeleton (see the sketch after this list).
Sites where plain HTTP gets a 403 challenge page. A real browser fingerprint (often via Kameleo) is the only path that doesn't involve breaking site terms.
Workflows that require holding a session: order processing inside a vendor portal, support-ticket handling, account onboarding, repetitive admin tasks across dozens of accounts.
SaaS your team uses every day that doesn't expose what you need via API. Browser automation gives you a programmatic layer over the human UI.
Automated end-to-end checks that run real workflows in a real browser against a real environment — catching regressions that unit tests miss.
Driving live customer service chat sessions inside isolated browser profiles, integrating AI suggestions, and writing structured outcomes back to a database.
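A minimal sketch of that first point (forward-referenced above), assuming Playwright's sync API and a placeholder URL: plain requests returns the pre-render shell, while a driven browser returns the DOM after the framework has filled it in.

```python
import requests
from playwright.sync_api import sync_playwright

URL = "https://example.com/dashboard"  # placeholder SPA page

# Plain HTTP: returns the loading skeleton before any JavaScript runs.
skeleton = requests.get(URL, timeout=30).text

# Real browser: the DOM after the SPA has rendered its data.
with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto(URL, wait_until="networkidle")
    rendered = page.content()
    browser.close()
```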
Every browser automation system I build is structured as a worker pool backed by a shared queue in a database. Each worker owns its own browser process (Playwright, sometimes attached to Kameleo for fingerprint control), pulls a unit of work from the queue, completes it, writes the result back, and moves on. Crashes don't lose work — the queue marks the row in_progress, and a recovery pass on the next start moves abandoned rows back to pending.
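A minimal sketch of that claim-and-recover pattern, assuming a SQLite jobs table with id, url, and status columns — all names hypothetical:

```python
import sqlite3

def recover_abandoned(conn: sqlite3.Connection) -> None:
    """Recovery pass on startup: rows a crashed worker left in_progress
    go back to pending, so no work is lost."""
    with conn:
        conn.execute(
            "UPDATE jobs SET status = 'pending' WHERE status = 'in_progress'"
        )

def claim_next(conn: sqlite3.Connection):
    """Claim one pending row and mark it in_progress in a single transaction."""
    with conn:
        row = conn.execute(
            "SELECT id, url FROM jobs WHERE status = 'pending' LIMIT 1"
        ).fetchone()
        if row is not None:
            conn.execute(
                "UPDATE jobs SET status = 'in_progress' WHERE id = ?",
                (row[0],),
            )
    return row  # None once the queue is drained
```

Because the SELECT and UPDATE share one transaction, two workers can't claim the same row even before partitioning enters the picture.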
Selectors are defensive. Every action is wrapped in a tolerant retry: if the element isn't there yet, wait; if the wait times out, capture a screenshot + the rendered HTML to disk for debugging, mark the URL error, move on. No retry storms. No silent failures. Errors are loud in the log, structured in the database, and never block the pool.
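A sketch of what one such tolerant action wrapper might look like with Playwright's sync API; the timeout, debug paths, and helper name are assumptions, not the actual implementation:

```python
from pathlib import Path
from playwright.sync_api import Page, TimeoutError as PWTimeout

DEBUG_DIR = Path("debug")  # hypothetical dump location
DEBUG_DIR.mkdir(exist_ok=True)

def click_or_record(page: Page, selector: str, job_id: int) -> bool:
    """Wait for the element; on timeout, dump screenshot + HTML and move on."""
    try:
        page.locator(selector).click(timeout=15_000)  # built-in auto-wait
        return True
    except PWTimeout:
        page.screenshot(path=DEBUG_DIR / f"{job_id}.png", full_page=True)
        (DEBUG_DIR / f"{job_id}.html").write_text(page.content())
        # caller marks the row error in the queue: loud, structured, non-blocking
        return False
```

Locators auto-wait, so the happy path needs no explicit sleep; only the failure branch pays the cost of the screenshot and HTML dump.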
Concurrency is set empirically. One Playwright instance per worker process. Workers are partitioned across the queue (id % N == worker_id) so they never collide. Database writes from workers stay minimal — usually just the result row — with relationship building and aggregation deferred to single-process post-stages where contention is zero. SQLite in WAL mode handles modest write loads gracefully; MySQL takes over when the system needs real multi-process concurrency or operator dashboards.
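A sketch of the WAL setup and the partition predicate, under the same hypothetical jobs schema as above; baking id % N == worker_id into the claim query is what keeps workers from ever colliding:

```python
import sqlite3

def open_queue(path: str) -> sqlite3.Connection:
    conn = sqlite3.connect(path, timeout=30)
    conn.execute("PRAGMA journal_mode=WAL")    # readers don't block the writer
    conn.execute("PRAGMA synchronous=NORMAL")  # the usual pairing with WAL
    return conn

# The claim query from the earlier sketch, restricted to this worker's slice:
PARTITIONED_CLAIM = (
    "SELECT id, url FROM jobs "
    "WHERE status = 'pending' AND id % :n = :w LIMIT 1"
)
```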
For sites with bot protection, the browser layer changes (headed Kameleo profile + proxy pool) but the surrounding architecture stays identical. Block detection, rotation, and recovery are built into the fetch layer so the worker logic above doesn't have to know.
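A sketch of what such a fetch layer might look like, using plain Playwright contexts for proxy assignment (Kameleo's profile-level proxy handling would differ); the block markers and the ProxyPool interface are assumptions:

```python
from playwright.sync_api import Browser

BLOCK_MARKERS = ("Just a moment", "Access denied")  # hypothetical heuristics

def fetch(browser: Browser, pool, url: str, max_rotations: int = 3) -> str:
    """Fetch a page; on a detected block, rotate the proxy and retry,
    so the worker logic above never sees the blocking."""
    for _ in range(max_rotations):
        proxy = pool.current()  # hypothetical proxy-pool interface
        ctx = browser.new_context(proxy={"server": proxy})
        page = ctx.new_page()
        try:
            page.goto(url, wait_until="domcontentloaded")
            html = page.content()
            if not any(m in html for m in BLOCK_MARKERS):
                return html   # clean fetch: proxy stays in rotation
            pool.burn(proxy)  # block-only rotation: consume a proxy only on a block
        finally:
            ctx.close()
    raise RuntimeError(f"still blocked after {max_rotations} rotations: {url}")
```

Rotating only on a detected block is what keeps proxy consumption low — the pattern behind the 5-of-580 figure in the case study below.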
Multi-session browser automation platform driving live support chats inside isolated profiles, with AI suggestions, manual training mode, and centralized logging.
Read the case study →

Kameleo + Playwright fetch layer with block-only proxy rotation, 6 parallel workers, and Cloudflare resilience — 5 of 580 proxies consumed per overnight run.

Read the case study →

Browser automation projects often combine with: Custom Automation Development, Web Scraping & Data Collection, and E-Commerce Automation.
From single-account workflows to large parallel worker pools driving hundreds of browser sessions, I build browser automation systems that survive real-world site behavior.