_______ __ _______
| | |.---.-..----.| |--..-----..----. | | |.-----..--.--.--..-----.
| || _ || __|| < | -__|| _| | || -__|| | | ||__ --|
|___|___||___._||____||__|__||_____||__| |__|____||_____||________||_____|
on Gopher (unofficial)
URI Visit Hacker News on the Web
COMMENT PAGE FOR:
URI Show HN: Signet – Autonomous wildfire tracking from satellite and weather data
lordmoma wrote 2 hours 0 min ago:
I did a similar thing on [1]. The only pain point is how to
aggregate sex offenders' profiles properly across the US - too much
legal risk there, plus labor-intensive manual work. You cannot pull
the data out easily (CF CAPTCHA gated, short-lived tokens), but it
could have a huge impact on local communities.
URI [1]: https://www.crimewatches.com/
avabuildsdata wrote 2 hours 55 min ago:
The Go choice makes a lot of sense for this. I've been wiring up
government data sources for a different project, and honestly the
format inconsistency between agencies is always the real headache, not
the actual processing.
Curious about the 23 tools though -- are those all invoked through one
Gemini orchestration pass, or is there a routing layer picking which
subset to call per detection? Feels like that'd stack up fast
latency-wise.
mapldx wrote 1 hour 47 min ago:
Not all 23 get invoked in one pass. The system runs 4 different types
of cycles, each with its own Gemini call, and within a cycle the
model picks a subset of tools based on the context rather than
fanning out to everything.
Over the last week, the median ends up being about 6 tool calls
across 4 distinct tools per cycle.
Latency-wise, median completed cycle time is about 37s overall. The
heavy path is FIRMS: about 135s median / 265s p90 over the same
window.
It runs asynchronously in the background, so the web UI isn't
blocked on a cycle finishing, though cycle latency still affects how
quickly new detections get enriched.
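For a concrete picture of the shape, here's a minimal Go sketch
(hypothetical names, not the actual Signet code): each cycle type
carries its own allowed tool subset, so a single orchestration pass
never fans out across all 23 tools.

```go
package main

import "fmt"

// Tool is a callable the model may choose to invoke during a cycle.
type Tool struct {
	Name string
	Run  func(ctx string) string
}

// CycleType maps a cycle to the subset of tools it exposes, so one
// orchestration pass only ever sees that subset.
type CycleType struct {
	Name  string
	Tools []Tool
}

// pickTools stands in for the model's per-cycle selection: given the
// cycle's allowed subset and some context, return the tools to call.
// The real selection is made by the per-cycle Gemini call; this
// heuristic is only a stand-in.
func pickTools(c CycleType, ctx string) []Tool {
	if len(c.Tools) > 2 {
		return c.Tools[:2]
	}
	return c.Tools
}

func main() {
	firms := Tool{"firms_hotspots", func(ctx string) string { return "hotspots for " + ctx }}
	nws := Tool{"nws_forecast", func(ctx string) string { return "forecast for " + ctx }}
	detection := CycleType{Name: "detection", Tools: []Tool{firms, nws}}

	for _, t := range pickTools(detection, "Colusa County") {
		fmt.Printf("%s -> %s\n", t.Name, t.Run("Colusa County"))
	}
}
```

The point is the static scoping: the routing decision is per cycle
type, and only the selection within the subset is left to the model.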
chrisfosterelli wrote 3 hours 2 min ago:
We do some similar work with hotspot analysis but (as a Canadian
company) are more focused on Canadian data where the government already
does a fair bit of false positive detection and filtering. It generally
gives pretty clean data and we can scrub historical data over time like
this: [1] The dataset includes US coverage, but it's not filtered the
same way and is FAR noisier, so I appreciate efforts like this. We
haven't gotten there yet, but if you were looking for something
deterministic and automatable, the Canadian gov's process is
potentially worth learning about.
They also produce perimeter estimates based on the hotspots which we
can extract and put into a physics-based fire growth model like
Prometheus or FARSITE to estimate future fire behaviour based on
forecasted weather. This gives very actionable and deterministic
estimates of future fire behaviour. We also have worked on a risk model
that determines the likelihood of that future fire growth interacting
with various assets on the landscape (urban interface areas, power
lines, fuel pipelines, forest inventory, etc) and calls out high risk
areas. One thing we've been wondering is where LLMs fit into any of
this (if at all), so I appreciate seeing what others are doing.
URI [1]: https://imgur.com/a/gCJGzqd
mapldx wrote 2 hours 30 min ago:
Thanks, this is really helpful. That filtering/perimeter pipeline is
exactly the kind of deterministic path I'm interested in learning
from, especially for pushing more of the false-positive reduction
upstream before the model gets involved at all.
My take so far is that models seem most useful in the contextual
triage step and in synthesizing multiple sources into a structured
assessment. But most of the system around that is and should be
deterministic.
The physics-based approach you're describing makes a lot of sense to
me for spread prediction - different tool for a different part of the
problem.
If there's a public writeup on the filtering process you'd recommend,
I'd love to take a look.
chrisfosterelli wrote 2 hours 23 min ago:
Happy to help. This is the official methods description for the
Canadian gov's FM3 data, it's probably the best place to start
although the details are mostly covered in much longer publications
that require some digging:
URI [1]: https://cwfis.cfs.nrcan.gc.ca/background/dsm/fm3
nullora wrote 4 hours 9 min ago:
How do you even code this? Really nice.
burntpineapple wrote 4 hours 16 min ago:
I'm confused, don't existing systems like GLFF already incorporate
this type of data?
jonah wrote 4 hours 20 min ago:
Super interesting thing to pursue. Will be neat to see where you go
with this.
Small nit: don't name incidents until you have the official name. It
could cause confusion once it is named.
Instead, something like "New detection near Colusa" or "New incident 10
mi NW of Colusa".
mapldx wrote 4 hours 8 min ago:
Good call. The system does try to match to official reporting and
update when it finds one, but the working names in the meantime could
definitely cause confusion.
Probably another case where that should be deterministic instead of
model-generated. Thanks.
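If it helps, a deterministic working name can come straight from
geometry - distance and compass bearing from the nearest named place.
A minimal Go sketch (hypothetical function names; the nearest-place
lookup is assumed to exist elsewhere, and the Colusa coordinates below
are approximate):

```go
package main

import (
	"fmt"
	"math"
)

// bearingLabel converts a bearing in degrees to an 8-point compass label.
func bearingLabel(deg float64) string {
	dirs := []string{"N", "NE", "E", "SE", "S", "SW", "W", "NW"}
	return dirs[int(math.Mod(deg+22.5, 360)/45)]
}

// haversineMiles returns great-circle distance in statute miles.
func haversineMiles(lat1, lon1, lat2, lon2 float64) float64 {
	const r = 3958.8 // Earth radius, miles
	toRad := func(d float64) float64 { return d * math.Pi / 180 }
	dLat := toRad(lat2 - lat1)
	dLon := toRad(lon2 - lon1)
	a := math.Sin(dLat/2)*math.Sin(dLat/2) +
		math.Cos(toRad(lat1))*math.Cos(toRad(lat2))*math.Sin(dLon/2)*math.Sin(dLon/2)
	return 2 * r * math.Asin(math.Sqrt(a))
}

// bearingDeg returns the initial bearing from point 1 to point 2.
func bearingDeg(lat1, lon1, lat2, lon2 float64) float64 {
	toRad := func(d float64) float64 { return d * math.Pi / 180 }
	y := math.Sin(toRad(lon2-lon1)) * math.Cos(toRad(lat2))
	x := math.Cos(toRad(lat1))*math.Sin(toRad(lat2)) -
		math.Sin(toRad(lat1))*math.Cos(toRad(lat2))*math.Cos(toRad(lon2-lon1))
	return math.Mod(math.Atan2(y, x)*180/math.Pi+360, 360)
}

// workingName builds a deterministic placeholder like
// "New detection 10 mi NW of Colusa".
func workingName(detLat, detLon, townLat, townLon float64, town string) string {
	mi := haversineMiles(townLat, townLon, detLat, detLon)
	brg := bearingDeg(townLat, townLon, detLat, detLon)
	return fmt.Sprintf("New detection %.0f mi %s of %s", mi, bearingLabel(brg), town)
}

func main() {
	// Colusa, CA is roughly at 39.214, -122.009.
	fmt.Println(workingName(39.35, -122.15, 39.214, -122.009, "Colusa"))
}
```

Same detection always gets the same placeholder, and swapping it for
the official name later is a plain field update.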
takahitoyoneda wrote 5 hours 17 min ago:
Aggregating disparate government feeds with completely out-of-phase
polling schedules into a unified state machine is notoriously painful.
I am curious how your Go service handles the rate limits of the NWS
API, which historically drops connections right when usage spikes
during actual emergencies. If you ever expose this via webhooks, it
would make an incredible backend for building localized mobile push
notifications where standard cell-broadcast alerts are too broad or
slow.
mapldx wrote 4 hours 48 min ago:
Honestly, not robustly enough yet. I've already been hitting timeouts
on NWS gridpoint forecasts.
Right now some weather failures don't stop the rest of the assessment
loop. Successful fetches get persisted so the system builds
historical weather context over time.
The webhook idea is interesting. The monitoring loop is already
separated from the web layer, so publishing to external consumers
would be a natural extension.
rouanvde wrote 5 hours 18 min ago:
Please add more of the world, and it would be great to see some of the
imaging data overlaid, to visually see where the fire is and its scale.
quickrefio wrote 5 hours 39 min ago:
Interesting project. Combining satellite detections with weather data
seems really powerful.
mapldx wrote 5 hours 11 min ago:
Thanks!
gorfian_robot wrote 5 hours 43 min ago:
have you shared this with the WatchDuty folks?
mapldx wrote 4 hours 39 min ago:
Not yet.
gnerd00 wrote 5 hours 49 min ago:
the graphically slick intro suggests this is something that could
appeal to "investors" or similar ...
mapldx wrote 5 hours 24 min ago:
I did lean hard into the presentation, but what I'm actually trying
to test here is the monitoring loop and whether it's useful.
doodlebugging wrote 6 hours 11 min ago:
Interesting. I think there are other services doing the same thing
including one linked by another commenter.
When checking the Evidence tab for data that supports the conclusion
that there could be a fire in progress I found that it could be
improved by excluding the evidence posts for all the mapped fire
locations except the one that the user clicked. Presently, if you click
that Evidence tab you get a roll of links to posts or mentions or
whatever for every fire. I believe that a user would most appreciate
data that pertains to the fire they are trying to monitor.
I am not a fan of grey text. It does not improve site navigability or
usability and it can get lost in screen glare unless bold grey text is
used. It would still be grey text though and I am still not a fan.
Perhaps shades of blue or yellow to contrast with the black bar.
Example in case you are thinking of modifying the page - your top frame
has the app name "SIGNET" in white capitals. Right next to that is an
orange dot, probably to signify that something is happening or that the
site is "LIVE". Notice that "LIVE" is not only in grey text, beside an
orange dot which will be the eyeball magnet, but it is also in a
smaller font than the app name "SIGNET".
From my perspective, the site would be improved by changing grey text
to a more contrasting color and asking the question - "What information
should be the most important topic on this page?" In that way you can
optimize it for your users.
Before posting this comment I went back to check that the points I
hoped to make were valid points. It turns out that not all "Evidence"
links have evidence for every fire on the map. I randomly chose the
Custer County Incident when I checked that and found all sorts of stuff
pertaining to Texas fires. Perhaps this is not a huge problem for you
to solve. I checked the Rapides Parish Incident in Louisiana and it only
has data about that event.
Maybe some cleaning of links is in order.
mapldx wrote 5 hours 35 min ago:
Fair points, I leaned a little hard into the ops aesthetic. Grey text
might not be doing anyone any favors.
On the Evidence tab, I agree that it should be incident-specific to
be useful on its own. Right now the model scopes what evidence gets
attached, so probably a case where that should be deterministic
instead.
Good catch. Thanks.
redgridtactical wrote 6 hours 15 min ago:
Really interesting approach. The multi-source fusion is where the real
value is - any single satellite feed has too many false positives
from industrial heat, sun glint, etc. Correlating FIRMS + weather +
fuel models is what experienced fire analysts do mentally, so
automating that loop makes sense.
On your question about deterministic vs LLM-driven: I'd lean toward
keeping the spatial indexing, deduplication, and basic threshold logic
deterministic. Those are well-defined problems with known-good
algorithms. The LLM adds value where you're synthesizing ambiguous
evidence - "is this cluster of weak FIRMS detections near a known
industrial site, or is it a new start in timber?" That kind of
contextual reasoning is hard to codify as rules.
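As an illustration of the deterministic side, grid-cell hashing is a
known-good first pass for deduplicating hotspot detections before any
model sees them (hypothetical Go sketch, not anyone's production code):

```go
package main

import (
	"fmt"
	"math"
)

// Detection is a raw hotspot from a satellite feed.
type Detection struct {
	Lat, Lon float64
	Source   string
}

// cellKey buckets a detection into a grid cell roughly cellKm on a
// side, so detections of the same fire from different passes collapse.
func cellKey(d Detection, cellKm float64) [2]int {
	const kmPerDegLat = 111.32
	latIdx := int(math.Floor(d.Lat * kmPerDegLat / cellKm))
	kmPerDegLon := kmPerDegLat * math.Cos(d.Lat*math.Pi/180)
	lonIdx := int(math.Floor(d.Lon * kmPerDegLon / cellKm))
	return [2]int{latIdx, lonIdx}
}

// dedupe keeps one detection per grid cell - a deterministic pass
// with no model involvement.
func dedupe(ds []Detection, cellKm float64) []Detection {
	seen := map[[2]int]bool{}
	var out []Detection
	for _, d := range ds {
		k := cellKey(d, cellKm)
		if !seen[k] {
			seen[k] = true
			out = append(out, d)
		}
	}
	return out
}

func main() {
	ds := []Detection{
		{39.3500, -122.1500, "VIIRS"},
		{39.3504, -122.1497, "MODIS"}, // same fire, different sensor
		{38.9000, -121.0000, "VIIRS"}, // distinct start
	}
	fmt.Println(len(dedupe(ds, 2.0)))
}
```

One caveat with plain grid hashing: a fire straddling a cell boundary
can land in two cells, so a real pass would also check neighboring
cells or cluster properly. The LLM never needs to touch any of this.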
One operational question: have you thought about how this integrates
with existing incident management workflows? Wildland fire teams run
everything through ICS structures and often have limited connectivity
on the fireline. Being able to push a structured alert (lat/lon,
confidence level, fuel type, weather conditions) into their existing
tools would be a big deal for adoption.
mapldx wrote 5 hours 51 min ago:
Right, that's a clean way to frame the boundary. Appreciate it.
On ICS integration, I haven't gotten there yet. The system outputs
structured incident records, but I don't have real operational
experience on that side.
The limited-connectivity point is interesting. If the output is a
compact structured record that doesn't need a live connection to be
useful, that could change what integration looks like.
If you have a strong opinion on what people actually use there, I'd
be interested.
redgridtactical wrote 3 hours 21 min ago:
From what I've seen, the teams that are actually on the fireline
mostly use paper ICS 214s and radio. The structured digital stuff
lives at the ICP/EOC level. So the gap is really between field
collection and the management system â if you can get a compact
record off a phone with no connectivity requirement, that bridges
it without asking anyone to change their workflow.
I think the practical win is SALUTE-style reports that
auto-populate grid and DTG, exportable as plain text. No one wants
another app to learn at 0200 on a fire.
mapldx wrote 1 hour 43 min ago:
That's extremely helpful framing. Appreciate you coming back with
the detail.
xnx wrote 7 hours 22 min ago:
You might be interested in
URI [1]: https://research.google/blog/real-time-tracking-of-wildfire-bo...
mapldx wrote 7 hours 16 min ago:
Thanks, looks like a good model reference. Will give it a read.