Did you know that almost anything can be represented in IEEE-754 floating point values?
We’re given a list of numbers that don’t immediately look like text, bytes, or anything obviously encoded:
240600592
212.2753143310547
2.7884192016691608e+23
5.623021054185822e+31
17611451687157891000
8.927742989328635e-10
16391240070931153000
5.639361688736244e-8
2.115975377137147e-7
The goal is to figure out what these values really represent.
A floating-point number (or "float") is the way computers represent real (fractional) numbers in binary. The most common standard is IEEE-754, which defines exactly how those numbers are stored at the bit level.
For a 32-bit float (float32), the 32 bits are split into three fields: 1 sign bit, 8 exponent bits, and 23 fraction (mantissa) bits.
The important thing for this challenge: A float is just 32 bits of data. We choose to interpret those bits as a number, but they could just as easily be ASCII characters, integers, or anything else.
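To make that concrete, here is a small sketch (using the first number from the challenge list) showing the same 32 bits being read three different ways. The hex constant 0x4D657461 is just those bits written out; the three interpretations below are standard, not specific to this challenge:

```python
import struct

bits = 0x4D657461  # the same 32 bits, written in hex

as_int = bits                                                # read as an unsigned integer
as_float = struct.unpack(">f", bits.to_bytes(4, "big"))[0]   # read as an IEEE-754 float32
as_text = bits.to_bytes(4, "big").decode("ascii")            # read as 4 ASCII characters

print(as_int)    # 1298494561
print(as_float)  # 240600592.0
print(as_text)   # Meta
```

Same bits, three completely different meanings: it all depends on how you choose to interpret them.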
These numbers aren’t meant to be calculated with; instead, each one is hiding 4 bytes of data inside its IEEE-754 representation.
If we:
- interpret each number as a 32-bit (single-precision) IEEE-754 float,
- extract its 4 raw bytes in big-endian order, and
- concatenate those bytes and decode them as ASCII,
the flag appears cleanly!
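The steps above can be tried on just the first value to confirm the idea works before decoding the whole list:

```python
import struct

# Pack the first puzzle number as a 32-bit big-endian float,
# then read its raw bytes back as ASCII.
raw = struct.pack(">f", 240600592.0)
print(raw)                  # b'Meta'
print(raw.decode("ascii"))  # Meta
```

The first four characters of the flag fall right out, so the same trick applied to all nine numbers should yield the full flag.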
CyberChef makes this challenge very approachable: paste the numbers in (one per line) and apply the "From Float" operation, with the endianness set to Big Endian and the size set to Float (4 bytes). When the settings are correct, the output immediately becomes readable text.
If you prefer scripting, here’s a minimal Python solve.
import struct

values = [
    "240600592",
    "212.2753143310547",
    "2.7884192016691608e+23",
    "5.623021054185822e+31",
    "17611451687157891000",
    "8.927742989328635e-10",
    "16391240070931153000",
    "5.639361688736244e-8",
    "2.115975377137147e-7",
]

blob = b""
for v in values:
    blob += struct.pack(">f", float(v))  # 32-bit big-endian float -> 4 raw bytes
print(blob.decode("ascii"))
MetaCTF{fl04t1ng_thr0ugh_cyb3r5p4c3}
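As a sanity check, we can run the transformation in reverse: slice the flag into 4-byte chunks and unpack each chunk as a big-endian float32, which reproduces the original list of numbers.

```python
import struct

flag = "MetaCTF{fl04t1ng_thr0ugh_cyb3r5p4c3}"

# Every 4 ASCII characters unpack to one of the challenge's floats.
for i in range(0, len(flag), 4):
    chunk = flag[i:i + 4].encode("ascii")
    (number,) = struct.unpack(">f", chunk)
    print(number)  # first line: 240600592.0
```

Seeing the challenge's numbers come back out confirms the encoding really was nothing more than raw bytes stuffed into float32 values.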