tentree

Following Wildfire

Web Experience
AI
Machine Learning
client
  • Dentsu Creative
our roles
  • Web Development
  • Creative Development
  • UX & UI Design
  • 3D Design
  • Machine Learning
awards
  • FWA
  • Webby
  • Awwwards
  • The One Show
  • Clio Awards

An AI-powered platform that turns every Canadian snapshot into life-saving wildfire intelligence

Forest fires impact people, communities and wildlife. #FollowingWildfire empowers Canadians to help protect their communities through social media and AI technology. After wildfires caused record-breaking destruction across Canada in 2023, sustainable apparel company Tentree partnered with Dentsu and Reflektor Digital to build a real-time wildfire detection platform. We designed and built this essential tool, along with a website that raises public awareness and provides ongoing resources for tracking wildfires.

Backdrop

Turning everyday snapshots into Canada’s first crowd-powered wildfire watchdog

A crisis Canadians felt powerless to stop

Record-breaking blazes in 2023 torched communities, wildlife and 18 million hectares of forest. Restoration alone was no longer enough; Tentree and its partners needed a prevention play that ordinary Canadians could believe in.

The blind spot in our defence

Traditional surveillance (satellites, drones, lookout towers) misses the very places people hike, camp and post photos. In fact, 55% of wildfires ignite in these human-heavy zones where monitoring falls short, leaving hours-long gaps that let small sparks explode into megafires.

The behavioural barrier

Wildfires feel so vast that most citizens don’t see themselves as part of the solution. Overcoming this defeated attitude was the first obstacle: convincing people that their everyday snapshots could actually save forests and lives.

The brief

Design an accessible, privacy-safe system that:

  • Transforms social photos into real-time fire intelligence without asking users to download a new app
  • Elevates trust by filtering AI detections through human verification before alarms reach authorities
  • Inspires mass participation through an immersive, award-winning storytelling site that turns every hashtag into a heroic act
Machine-learning detection

AI Firewatch Engine

Our custom Firewatch Engine continuously ingests geo-tagged images from public social feeds, spotting faint plumes of smoke or flickers of flame within seconds. Trained on over 6,000 hand-curated photos, the model learns in real time—refining its eye against sunset glare, mountain haze and seasonal shifts in foliage.

By layering each detection with live meteorological and vegetation-dryness data, the system prioritizes genuine threats above benign heat signatures. Every validated alert is fed back into the algorithm, ensuring sharper precision with every upload.
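The layering step described above can be sketched as a simple scoring function. This is an illustrative Python sketch, not the production Firewatch Engine; all names, weights and thresholds are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One candidate smoke/flame detection from the vision model."""
    model_confidence: float    # 0..1 classifier output
    wind_speed_kmh: float      # live meteorological feed
    vegetation_dryness: float  # 0 (wet) .. 1 (tinder-dry)

def threat_score(d: Detection) -> float:
    """Blend model confidence with environmental risk so genuine
    threats outrank benign heat signatures (weights are illustrative)."""
    env_risk = 0.6 * d.vegetation_dryness + 0.4 * min(d.wind_speed_kmh / 50, 1.0)
    return d.model_confidence * (0.5 + 0.5 * env_risk)

def prioritize(detections: list[Detection], threshold: float = 0.4) -> list[Detection]:
    """Return detections worth escalating, highest risk first."""
    flagged = [d for d in detections if threat_score(d) >= threshold]
    return sorted(flagged, key=threat_score, reverse=True)
```

The key design idea is that a confident detection in dry, windy conditions should surface ahead of an equally confident one over wet terrain, so moderators see the most dangerous candidates first.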

WebGL-driven storytelling

Immersive 3D Storyworld

We crafted a cinematic WebGL experience that guides visitors from the visceral drama of wildfire scenes into an interactive 3D map of Canada. Built on Next.js, Three.js and React-Three-Fiber, the site wraps emotive voiceover and sweeping visuals around each data point—transforming raw alerts into sparks of awareness.

As users scroll, every validated photo pins itself in ember-red across the map. The pacing and design invite exploration, encouraging visitors to dwell on hotspots, learn the local stories, and feel the real-world stakes behind each flame.

Trust & verification

Human-in-the-Loop CMS

To prevent false alarms, our lightweight, browser-based CMS presents each AI flag to trained moderators in a seamless queue. With a single click, operators approve or dismiss candidates, boosting confidence before any alert reaches emergency services. This human oversight not only safeguards credibility but also injects fresh learning data back into the Firewatch Engine.

We’ve open-sourced the full pipeline—making the CMS, model checkpoints and deployment scripts available so first-responders worldwide can harness the same guardrail of human-backed AI.

20 years of context

Data that Drives Action

Our interactive historic timeline lets users dial back through two decades of Canadian wildfire data—from the first recorded burn scars and rising smoke plumes to the latest vegetation losses. As the year slider moves, the 3D map of Canada dynamically updates with heat-map overlays and animated smoke icons, while the side panel highlights key metrics like hectares burned and cloud-cover percentages for that season. This seamless fusion of time and place reveals not just where fires erupted, but how their patterns have shifted over time.
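The slider-to-side-panel lookup can be sketched as below. The data shape and all figures except the 18 million hectares burned in 2023 (stated above) are placeholders, not real records:

```python
# Hypothetical shape of the 20-year dataset behind the timeline slider.
# Only the 2023 hectares figure comes from the text; the rest is illustrative.
WILDFIRE_HISTORY = {
    2004: {"hectares_burned": 3_200_000, "cloud_cover_pct": 41},
    2023: {"hectares_burned": 18_000_000, "cloud_cover_pct": 37},
}

def season_metrics(year: int) -> dict:
    """Return the side-panel metrics for the selected slider year,
    falling back to the nearest recorded season."""
    if year in WILDFIRE_HISTORY:
        return WILDFIRE_HISTORY[year]
    nearest = min(WILDFIRE_HISTORY, key=lambda y: abs(y - year))
    return WILDFIRE_HISTORY[nearest]
```

The nearest-season fallback keeps the panel populated as the slider sweeps across years with sparse data, so the map and metrics never go blank mid-scrub.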

  • Threats Detected: 204
  • Photos Analyzed: 24.2K
  • Acres of Forest Preserved: 100K
Inside the Stack

The tech behind Canada’s AI Firewatch

From the moment a Canadian posts a forest selfie to the split-second alert that helps save lives, Following Wildfire runs on a tightly orchestrated suite of technologies. At its core sits a custom machine-vision engine—trained on thousands of geo-tagged images—that detects smoke and flame in real time. Each candidate detection flows into a human-in-the-loop CMS built on modern web frameworks, where moderators validate or dismiss alerts and feed refinements back into the model.

On the front end, an immersive WebGL experience brings data to vivid life. Visitors launch into a cinematic intro sequence before exploring a fully interactive 3D map of Canada, where every verified sighting pins itself in glowing ember-red. A historic timeline lets users slide through 20 years of burn-scar and smoke-plume data, while contextual tips guide them to report new fires or follow local evacuation protocols.

Under the hood, we leaned on open-source tooling and cloud-native services to ensure scalability, reliability and community access. From data ingestion pipelines that normalize satellite and citizen-sourced feeds to containerized deployments that keep the AI engine humming, every layer was engineered for performance.

Key Technologies Used
  • React & Next.js for the front-end
  • Python & TensorFlow/PyTorch for the custom computer-vision model
  • OpenCV for image preprocessing and feature extraction
  • Three.js & React-Three-Fiber to render the immersive 3D Storyworld
  • PostgreSQL and AWS S3 for spatial data storage and static assets
  • Docker & Kubernetes for containerized, scalable deployments