2023-2024
I’ve been told to flush them every 10 years. But it’s also said that they should be flushed if the temperature difference between the top and bottom is significant.
Here, “significant” is pretty vague. Let’s consider that a 5°C difference is a good starting point, and that 10°C definitely means we have to do something right now!
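This rule of thumb can be sketched as a small helper (the 5°C and 10°C thresholds are the assumed values above, not an official guideline):

```python
def flush_advice(delta_c: float) -> str:
    """Map a top-bottom temperature difference (in °C) to a recommendation."""
    if delta_c >= 10:
        return "flush now"
    if delta_c >= 5:
        return "consider flushing"
    return "no action needed"

print(flush_advice(4.0))   # → no action needed
print(flush_advice(6.5))   # → consider flushing
print(flush_advice(11.2))  # → flush now
```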
This winter, I installed a Zigbee sensor on the top of a radiator and one on the bottom. Let’s see what the results are.
First, let’s get the data out of my database and perform a little bit of cleanup and formatting to make it easier to play with:
from redis import StrictRedis
import pandas as pd
from datetime import datetime
d = StrictRedis()
t = d.ts()
top = pd.DataFrame(t.range("zigbee.RadiatorTop", int(datetime.strptime("2024-01-23", "%Y-%m-%d").timestamp() * 1000), int(datetime.strptime("2024-03-01", "%Y-%m-%d").timestamp() * 1000)))
bottom = pd.DataFrame(t.range("zigbee.RadiatorBottom", int(datetime.strptime("2024-01-23", "%Y-%m-%d").timestamp() * 1000), int(datetime.strptime("2024-03-01", "%Y-%m-%d").timestamp() * 1000)))
top.columns = ['time', 'top']
bottom.columns = ['time', 'bottom']
top = top.set_index('time')
bottom = bottom.set_index('time')
r = pd.concat([top, bottom])
r.index = pd.to_datetime(r.index, unit="ms")
r = r.sort_index()
# remove the days we were out of the home and the radiators were off
r = r[(r.index < '2024-02-17') | (r.index > '2024-02-23')]
# let's try hard to have values for both top and bottom at each point in time, so that we can compare them
r = r.interpolate(method='polynomial', order=5).dropna()
file = "/tmp/radiator.csv"
r.to_csv(file)
return file
https://ipfs.konubinix.eu/p/bafybeib7nacfjayqutve75blqlrni6xg3iilssudn5atziyvur6xxm5nm4
This is the data I will use hereafter.
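The concat-and-interpolate step in the extraction above can be illustrated on a toy example (made-up timestamps; linear time interpolation instead of the order-5 polynomial, for simplicity):

```python
import pandas as pd

# two sensors that never sample at the same instant
top = pd.DataFrame(
    {"top": [30.0, 28.0]},
    index=pd.to_datetime(["2024-01-22 23:00", "2024-01-22 23:10"]),
)
bottom = pd.DataFrame(
    {"bottom": [25.0, 23.0]},
    index=pd.to_datetime(["2024-01-22 23:05", "2024-01-22 23:15"]),
)
# stack both series, sort by time, then fill the holes so that
# each remaining row has a value for both top and bottom
r = pd.concat([top, bottom]).sort_index()
r = r.interpolate(method="time").dropna()
print(r)
```

The interpolated top value at 23:05 lands halfway between its neighbors (29.0), which is exactly what makes the point-by-point comparison possible.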
Let’s load it.
import pandas as pd
r = pd.read_csv(file, parse_dates=["time"], index_col="time")
print(r)
top bottom
time
2024-01-22 23:12:02 30.180000 25.047615
2024-01-22 23:15:29 29.770144 24.630000
2024-01-22 23:22:02 29.190000 24.002997
2024-01-22 23:25:56 28.872237 23.710000
2024-01-22 23:31:53 28.370000 23.250299
... ... ...
2024-02-29 22:43:00 34.462688 28.970000
2024-02-29 22:44:09 34.290000 28.825460
2024-02-29 22:48:37 33.702131 28.320000
2024-02-29 22:49:07 33.650000 28.267985
2024-02-29 22:53:37 33.300166 27.670000
[6124 rows x 2 columns]
Let’s simply plot them to get a feel for what they look like.
The data looks clean.
from pathlib import Path
import plotly.io as pio
# var, fct and marker are free variables, presumably injected by the literate-programming setup
out = Path("/tmp/plot.html")
variable = locals()[var]
function = getattr(variable, fct)
kwargs = {
    "backend": "plotly",
}
if marker:
    kwargs["markers"] = True
out.write_text(
    pio.to_html(
        function(**kwargs)
    )
)
print(out)
In case I want to get a deeper interactive look at the data, I will also provide a plot made with plotly.
look at outliers
One of our hypotheses is that sludge causes the bottom part of the radiator to be significantly colder than the top, as the sludge hinders the flow of water. The water simply flows where the sludge is not, hence at the top.
Let’s find out whether that hypothesis holds.
print(r[r.bottom > r.top])
top bottom
time
2024-02-04 02:14:42 17.486029 17.620000
2024-02-04 02:30:59 17.490000 17.739859
2024-02-28 17:13:05 22.417144 22.810000
2024-02-28 17:19:12 22.420000 22.614221
Hmm, that’s not bad. But there are still two outliers. Let’s take a look at them.
First, at about 2024-02-04 02:15, something apparently went wrong.
a = r["2024-02-04 00:00":"2024-02-04 04:00"]
This is strange. The values are pretty close, so maybe this goes into simple precision errors. I think this one won’t harm the analysis.
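To back this reading, we can check how large these inversions actually are, using the two 2024-02-04 rows copied from the printout above:

```python
import pandas as pd

# the two inverted rows around 2024-02-04 (values from the printout above)
inv = pd.DataFrame({
    "top": [17.486029, 17.490000],
    "bottom": [17.620000, 17.739859],
})
# the inversions are only a fraction of a degree,
# well within typical sensor accuracy
print((inv.bottom - inv.top).round(3).tolist())  # → [0.134, 0.25]
```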
The second one occurs at 2024-02-28 17:13.
a = r["2024-02-28 17:00":"2024-02-28 18:00"]
The temperature of the top probe suddenly drops before getting back to normal. This is most likely the effect of opening the window. I can assume that at that time, we had to open them for a little while, like for cleaning them. Again, it does not seem that problematic, so let’s keep it that way.
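A quick way to spot such events automatically is to look for sudden sample-to-sample drops (synthetic series; the -1°C-per-sample threshold is an assumption):

```python
import pandas as pd

# synthetic top-sensor readings with a window-opening-like dip
s = pd.Series([22.4, 22.5, 21.0, 20.5, 22.3, 22.4])
# flag samples that dropped by more than 1°C since the previous one
drops = s.diff() < -1
print(s[drops].tolist())  # → [21.0]
```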
analysis
r.describe()
top bottom
count 6124.000000 6124.000000
mean 30.355279 26.400982
std 6.203890 5.397042
min 17.212234 16.480128
25% 24.970635 21.350000
50% 31.500000 27.006118
75% 35.810000 31.400295
max 43.140000 36.485341
The data are pretty close, let’s take a look at the difference.
a = (r.top - r.bottom)
a.describe()
count 6124.000000
mean 3.954297
std 1.397309
min -0.392856
25% 3.028367
50% 3.930668
75% 4.894372
max 9.447843
dtype: float64
This indicates that, for the most part, the top part is around 4°C hotter than the bottom part.
The mean and the median are very close, suggesting a symmetric distribution.
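The symmetry can also be quantified with the sample skewness; a sketch using a synthetic stand-in for the difference series (mean and std taken from the describe output above):

```python
import numpy as np
import pandas as pd

# synthetic stand-in for a = r.top - r.bottom, with the observed mean/std
rng = np.random.default_rng(0)
a = pd.Series(rng.normal(loc=3.95, scale=1.40, size=6124))
# near-zero skewness means a roughly symmetric distribution,
# consistent with mean ≈ median
print(f"skewness: {a.skew():.3f}")
print(f"mean - median: {a.mean() - a.median():.3f}")
```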
It might be easier to see this visually.
And the same with plotly (just for fun).
We can see visually what the statistics indicated: the top part is about 4°C hotter than the bottom for the most part.
conclusion
We decided that if the top part were more than 5°C hotter than the bottom one, we would consider flushing the radiators.
This is not the case this year. Therefore, lacking a good reason to believe otherwise, we claim that flushing is not needed (this year).
2024-2025
Let’s try again, using another set of radiators this time.
Let’s go directly to the relevant part.
from redis import StrictRedis
import pandas as pd
from datetime import datetime
d = StrictRedis()
t = d.ts()
top = pd.DataFrame(t.range("zigbee.RadiatorTop", int(datetime.strptime("2024-10-06", "%Y-%m-%d").timestamp() * 1000), int(datetime.strptime("2025-04-28", "%Y-%m-%d").timestamp() * 1000)))
bottom = pd.DataFrame(t.range("zigbee.RadiatorBottom", int(datetime.strptime("2024-10-06", "%Y-%m-%d").timestamp() * 1000), int(datetime.strptime("2025-04-28", "%Y-%m-%d").timestamp() * 1000)))
top.columns = ['time', 'top']
bottom.columns = ['time', 'bottom']
top = top.set_index('time')
bottom = bottom.set_index('time')
r = pd.concat([top, bottom])
r.index = pd.to_datetime(r.index, unit="ms")
r = r.sort_index()
# remove the days we were out of the home and the radiators were off
r = r[(r.index < '2024-12-23') | (r.index > '2024-12-28')]
# something strange happened during those days, I don't know what
r = r[(r.index < '2025-01-16') | (r.index > '2025-01-17')]
r = r[(r.index < '2024-11-15') | (r.index > '2024-11-16')]
r = r[(r.index < '2024-12-01') | (r.index > '2024-12-02')]
# let's try hard to have values for both top and bottom at each point in time, so that we can compare them
r = r.interpolate(method='polynomial', order=5).dropna()
file = "/tmp/radiator.csv"
r.to_csv(file)
return file
https://ipfs.konubinix.eu/p/bafybeibda7hcaotbaogceltbmak6fmlq6j4na3vc5qka6hmjpuebosrvgm
import pandas as pd
r = pd.read_csv(file, parse_dates=["time"], index_col="time")
print((r.top - r.bottom).describe())
count 66772.000000
mean 4.265862
std 1.563009
min -0.239465
25% 3.335824
50% 4.400335
75% 5.377492
max 9.079097
dtype: float64
Most of the time, the difference is less than 5°C. The larger values are easily explained by thermal inertia while the radiator heats up.
Therefore, I don’t see a point in flushing them this year either.
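To make “most of the time” concrete, a rough normal approximation using the mean and std printed above puts about two thirds of the samples under the 5°C threshold:

```python
from statistics import NormalDist

# rough normal approximation of the share of samples under 5°C,
# using the mean/std from the describe() output above
frac = NormalDist(mu=4.265862, sigma=1.563009).cdf(5)
print(f"~{100 * frac:.0f}% of samples under the 5°C threshold")
```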
2025-2026
Note: the following was almost entirely made using Claude Opus. It needed a little help and made strange hypotheses from time to time. But overall, after a few iterations, this is the result.
from redis import StrictRedis
import pandas as pd
from datetime import datetime
d = StrictRedis()
t = d.ts()
top = pd.DataFrame(t.range("zigbee.Temp.RadiatorTop", int(datetime.strptime("2025-10-01", "%Y-%m-%d").timestamp() * 1000), int(datetime.strptime("2026-04-01", "%Y-%m-%d").timestamp() * 1000)))
bottom = pd.DataFrame(t.range("zigbee.Temp.RadiatorBottom", int(datetime.strptime("2025-10-01", "%Y-%m-%d").timestamp() * 1000), int(datetime.strptime("2026-04-01", "%Y-%m-%d").timestamp() * 1000)))
top.columns = ['time', 'top']
bottom.columns = ['time', 'bottom']
top = top.set_index('time')
bottom = bottom.set_index('time')
r = pd.concat([top, bottom])
r.index = pd.to_datetime(r.index, unit="ms")
r = r.sort_index()
# remove a single bogus top sensor reading (4.17°C glitch surrounded by ~19°C)
r = r[(r.index < '2025-10-08 06:50') | (r.index > '2025-10-08 06:51')]
# remove bottom sensor spike (values up to 1177°C)
r = r[(r.index < '2025-10-21 17:02') | (r.index > '2025-10-21 19:33')]
# let's try hard to have values for both top and bottom at each point in time, so that we can compare them
r = r.interpolate(method='polynomial', order=5).dropna()
file = "/tmp/radiator2026.csv"
r.to_csv(file)
return file
https://ipfs.konubinix.eu/p/bafybeicjpzyfsk4q7er4h5ht3jnvpgtnw2zbidx5nj64iyt4tow3ome74u
import pandas as pd
r = pd.read_csv(file, parse_dates=["time"], index_col="time")
print((r.top - r.bottom).describe())
count 15164.000000
mean 0.138452
std 0.853415
min -6.108131
25% -0.236968
50% -0.006891
75% 0.318501
max 20.654299
dtype: float64
Something is off: the min is -6°C, the std is large, and the mean is near zero. Previous years showed a consistent ~4°C difference. Let’s investigate.
Let’s load the data into a session for further analysis.
import pandas as pd
r = pd.read_csv(file, parse_dates=["time"], index_col="time")
print(r)
top bottom
time
2025-10-01 09:00:01 14.981538 13.100000
2025-10-01 09:13:14 17.060000 16.614807
2025-10-01 09:14:01 17.204665 16.720000
2025-10-01 09:15:14 17.420000 16.866503
2025-10-01 09:17:01 17.699998 17.050000
... ... ...
2026-01-30 19:43:17 17.288472 17.320000
2026-01-30 19:44:53 17.280000 17.318960
2026-01-30 20:13:16 17.225580 17.290000
2026-01-30 20:14:53 17.230000 17.288455
2026-01-30 20:43:15 17.267005 17.300000
[15164 rows x 2 columns]
look at outliers
Before the cleanup, the raw data contained two sensor anomalies. Let’s visualize what was removed.
The bottom sensor spiked to absurd values (up to 1177°C) on 2025-10-21 between 17:02 and 19:33. The top sensor read normally (~18°C) during that time.
The top sensor also had a single bogus reading of 4.17°C on 2025-10-08 at 06:50, surrounded by ~19°C values, followed by a 9-hour data gap. This looks like a sensor glitch.
Both anomalies were removed in the data extraction step above.
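Instead of hand-picked time windows, a generic plausibility filter based on the deviation from a rolling median would catch both kinds of anomaly (toy values; the 5-sample window and the 5°C tolerance are assumptions):

```python
import pandas as pd

# toy readings containing both a huge spike and an isolated low glitch
s = pd.Series([18.2, 18.5, 1177.0, 19.0, 4.17, 18.8, 19.1])
# keep only samples that stay close to their local (rolling) median
med = s.rolling(5, center=True, min_periods=1).median()
mask = (s - med).abs() < 5
print(s[mask].tolist())  # → [18.2, 18.5, 19.0, 18.8, 19.1]
```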
analysis
r.describe()
top bottom
count 15164.000000 15164.000000
mean 17.926148 17.787697
std 1.532137 1.591077
min 9.980000 -1.624299
25% 16.970000 16.836432
50% 17.919019 17.632768
75% 18.780000 18.551108
max 25.550000 30.450000
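The bottom minimum of -1.62°C looks implausible for an indoor radiator sensor. One hedged explanation: the order-5 polynomial interpolation used in the extraction can overshoot between samples, especially across the gaps left by the removed windows. A toy demonstration with numpy (made-up step-like values, not the actual data):

```python
import numpy as np

# degree-5 polynomial through 6 points of a step-like profile
x = np.arange(6.0)
y = np.array([18.0, 18.0, 18.0, 18.0, 30.0, 30.0])
coeffs = np.polyfit(x, y, 5)
dense = np.polyval(coeffs, np.linspace(0, 5, 11))
# between the nodes, the interpolant swings well outside the observed range
print(f"observed range: {y.min():.1f}..{y.max():.1f}")
print(f"interpolant range: {dense.min():.2f}..{dense.max():.2f}")
```

Whether and how much the real extraction overshoots depends on the data, but an out-of-range minimum is a known failure mode of high-order polynomial interpolation.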
The mean difference (top - bottom) is near zero (0.14°C), compared to ~4°C in previous years. Let’s look at the distribution.
a = (r.top - r.bottom)
The distribution is tightly centered around zero, with a left tail going down to -6°C. This tail comes from a single event: the Christmas heat-up.
Christmas heat-up anomaly
On Christmas Eve, the radiator was off and both sensors cooled down to ~10°C. When the heating kicked back in around 23:05, the bottom sensor heated up much faster than the top, reaching ~30°C while the top was only at ~24°C. This created a sustained -6°C inversion lasting about 5 hours.
Since hot water enters the radiator at the top, we would expect the top sensor to heat up first. The fact that the bottom heats faster is surprising.
One hypothesis is that the sensor labels (“top” and “bottom”) were accidentally swapped. However, this doesn’t hold up: during normal heating cycles (e.g. Oct 1), the current “top” label consistently leads, which is the expected behavior.
# Check Oct 1 morning heat-up: does "top" lead?
warmup = r["2025-10-01 09:00":"2025-10-01 11:00"]
diff_warmup = warmup["top"] - warmup["bottom"]
print(f"Oct 1 heat-up: top always > bottom? {(diff_warmup > 0).all()}")
print(f"Oct 1 mean diff: {diff_warmup.mean():.2f}°C")
print()
# Check stable periods
stable = r[r["top"].diff().abs() < 0.1]
stable_diff = stable["top"] - stable["bottom"]
print(f"Stable periods: mean diff = {stable_diff.mean():.3f}°C")
print(f"Stable periods: top > bottom {100*(stable_diff > 0).mean():.1f}% of the time")
Oct 1 heat-up: top always > bottom? True
Oct 1 mean diff: 0.94°C
Stable periods: mean diff = -0.010°C
Stable periods: top > bottom 41.0% of the time
So the labels are not simply swapped. The Christmas inversion remains unexplained – possibly related to the radiator restarting after a prolonged cold period with different flow dynamics.
conclusion
Unlike previous years where the top-bottom difference was consistently ~4°C (well below the 5°C flushing threshold), this year’s data shows a mean difference near zero (0.14°C).
This makes the “is the bottom significantly colder?” diagnostic unreliable for this setup: the two sensors track each other too closely during normal operation to reveal any meaningful gradient.
The only moment with a significant difference is the Christmas heat-up (-6°C), but its cause is ambiguous and it occurs in a transient regime, not during steady-state operation.
Without a consistent top-bottom gradient comparable to previous years, the data this year is inconclusive. The flushing question cannot be reliably answered from this dataset. To get a definitive answer, we would need to either reproduce the measurement on the same radiator as previous years, or verify the sensor positions on this one.