TexSAW 2026 - Excellent Neurons - Miscellaneous Writeup

Category: Miscellaneous Flag: texsaw{n3ur4l_r3v3rs3}

Challenge Description#

Find the flag by reverse engineering this neural network. Oh, and it's in Excel.

Analysis#

The file turned out to be an Excel workbook rather than a normal program binary, which immediately suggested that the network was encoded as spreadsheet data instead of hidden behind compiled code. The first useful confirmation was checking the file type:

file ~/Downloads/challenge.xlsx
Microsoft Excel 2007+
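
That `Microsoft Excel 2007+` identification reflects the fact that modern Office documents are ZIP containers: `file` keys off the `PK\x03\x04` local-file-header magic plus the internal XML layout. A minimal sketch of the same check, building a throwaway in-memory ZIP rather than assuming the challenge file is on disk:

```python
import io
import zipfile

# Build a throwaway in-memory ZIP to stand in for an .xlsx container;
# a real workbook would hold [Content_Types].xml, xl/workbook.xml, etc.
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.writestr('xl/workbook.xml', '<workbook/>')

data = buf.getvalue()
print(data[:4])  # ZIP local-file-header magic: b'PK\x03\x04'
print(zipfile.ZipFile(io.BytesIO(data)).namelist())  # ['xl/workbook.xml']
```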

From there the workbook was treated as a ZIP-backed Office document and parsed with Python using openpyxl. The Network sheet described the full model as Input(30) -> ReLU -> Hidden1(60) -> ReLU -> Hidden2(1) -> ReLU -> Output(4) -> Sigmoid, with each input normalized as ASCII value / 127 and the first-layer weights spread across rows 11 through 70. The relevant parsing code from the solve log was:

import openpyxl

wb = openpyxl.load_workbook('challenge.xlsx', data_only=True)
sheet2 = wb['Network']

# Network labels from Column A:
# Row 1: NEURAL NETWORK: WEIGHTS & COMPUTATION
# Row 2: Architecture: Input(30) -> ReLU -> Hidden1(60) -> ReLU -> Hidden2(1) -> ReLU -> Output(4) -> Sigmoid
# Row 3: Input: ASCII value / 127
# Row 4: Output: (F>0.5, L<0.5, A>0.5, G<0.5) = FLAG | otherwise = FAIL
# Rows 11-70: W1[neuron] - weight matrix (60 hidden neurons)
# Row 75: b1 biases [60 values]

The important structure was the first-layer matrix W1. Instead of looking like a dense learned model, it behaved like a sparse permutation matrix: every hidden neuron connected to exactly one input position with weight +1 or -1, and each real input position mapped to two hidden neurons. That made the workbook look much more like a hand-built encoder than a genuine trained network. The extraction code used in the solve log was:

# Extract W1 connections: each hidden neuron carries exactly one nonzero
# weight, so a single (input_pos, weight) entry per neuron is enough
w1_connections = {}
for neuron_idx in range(60):
    row = 11 + neuron_idx
    for input_pos in range(30):
        col = 2 + input_pos  # Column B = 2
        val = sheet2.cell(row=row, column=col).value
        if val is not None and val != 0:
            w1_connections[neuron_idx] = (input_pos, val)
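
Before trusting the permutation-matrix reading, it is worth asserting the two invariants claimed above: every hidden neuron carries exactly one ±1 weight, and every connected input position fans in to exactly two hidden neurons. A sketch of that check over a `w1_connections`-shaped dict, demonstrated here on a hypothetical toy mapping since the real workbook values are not reproduced:

```python
from collections import Counter

def check_sparse_structure(w1_connections):
    """Verify neuron -> (input_pos, weight) is a signed, 2-to-1 fan-in map."""
    # every recorded weight is exactly +1 or -1
    assert all(w in (1, -1) for _, w in w1_connections.values())
    # each connected input position is hit by exactly two hidden neurons
    fan_in = Counter(pos for pos, _ in w1_connections.values())
    assert all(count == 2 for count in fan_in.values())
    return sorted(fan_in)

# Hypothetical toy mapping with the same shape as the challenge's W1
toy = {0: (1, 1), 1: (1, -1), 2: (2, -1), 3: (2, 1)}
print(check_sparse_structure(toy))  # connected input positions: [1, 2]
```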

That mapping revealed the real payload layout: input position 0 had no connections at all, positions 1 through 22 held the flag characters, and positions 23 through 29 were just zero-biased padding. The next piece was the bias vector on row 75:

b1 = []
for i in range(60):
    col = 3 + i  # Column C = 3
    val = sheet2.cell(row=75, column=col).value
    b1.append(val if val is not None else 0)

# Example biases:
# b1[0] = -0.7952756
# b1[1] = 0
# b1[2] = 0
# b1[3] = -0.90551
# ...

Once the biases were in hand, the encoding fell apart cleanly. The sheet defined each input character as ASCII / 127, and for the correct flag the first-layer pre-activation had to land at zero, so the relation was ASCII/127 * weight + bias = 0. Rearranging gives ASCII = -bias * 127 / weight. At that point the network was no longer something to “run”; it was just a lookup table written in linear-algebra costume.

# Build a reverse map: input position -> [(neuron index, weight), ...]
reverse_map = {}
for neuron_idx, (input_pos, weight) in w1_connections.items():
    reverse_map.setdefault(input_pos, []).append((neuron_idx, weight))

flag_chars = []
for input_pos in range(30):
    if input_pos in reverse_map:
        neuron_idx, weight = reverse_map[input_pos][0]  # either neuron works
        bias = b1[neuron_idx]
        char_code = round(-bias * 127 / weight)
        char = chr(char_code) if char_code > 0 else '\x00'
        flag_chars.append(char)

flag = ''.join(flag_chars)
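
As an arithmetic sanity check, the example bias b1[0] = -0.7952756 from the Network sheet should decode to 'e' under a +1 weight, since 101/127 ≈ 0.7952756:

```python
# Rearranged relation: ASCII = -bias * 127 / weight
bias, weight = -0.7952756, 1
char_code = round(-bias * 127 / weight)  # -(-0.7952756) * 127 ≈ 101.0
print(char_code, chr(char_code))  # 101 e
```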

The decoded positions from the solve log were:

Input position 1:  h1[23] w=-1, b=+0.913386 -> 't'
Input position 2:  h1[00] w=+1, b=-0.795276 -> 'e'
Input position 3:  h1[48] w=+1, b=-0.944882 -> 'x'
Input position 4:  h1[13] w=+1, b=-0.905512 -> 's'
Input position 5:  h1[41] w=-1, b=+0.763780 -> 'a'
Input position 6:  h1[04] w=+1, b=-0.937008 -> 'w'
Input position 7:  h1[12] w=+1, b=-0.968504 -> '{'
Input position 8:  h1[11] w=-1, b=+0.866142 -> 'n'
Input position 9:  h1[08] w=+1, b=-0.401575 -> '3'
Input position 10: h1[42] w=-1, b=+0.921260 -> 'u'
Input position 11: h1[07] w=-1, b=+0.897640 -> 'r'
Input position 12: h1[26] w=+1, b=-0.409450 -> '4'
Input position 13: h1[16] w=+1, b=-0.850400 -> 'l'
Input position 14: h1[09] w=-1, b=+0.748030 -> '_'
Input position 15: h1[25] w=-1, b=+0.897640 -> 'r'
Input position 16: h1[14] w=-1, b=+0.401575 -> '3'
Input position 17: h1[05] w=+1, b=-0.929130 -> 'v'
Input position 18: h1[20] w=+1, b=-0.401570 -> '3'
Input position 19: h1[24] w=-1, b=+0.897638 -> 'r'
Input position 20: h1[03] w=+1, b=-0.905510 -> 's'
Input position 21: h1[28] w=-1, b=+0.401575 -> '3'
Input position 22: h1[35] w=-1, b=+0.984252 -> '}'

Reading those characters in order produced texsaw{n3ur4l_r3v3rs3}. The nice trick here is that the “neural network” was really just using its first layer to hide normalized ASCII values in the bias terms, so reversing z = xW + b was enough to recover the flag.
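
Since the per-position (weight, bias) pairs above carry all the information the decode needs, the reversal can be replayed without the workbook at all. A self-contained sketch using the values transcribed from the solve log:

```python
# (weight, bias) for input positions 1..22, copied from the solve log
pairs = [(-1, 0.913386), (1, -0.795276), (1, -0.944882), (1, -0.905512),
         (-1, 0.763780), (1, -0.937008), (1, -0.968504), (-1, 0.866142),
         (1, -0.401575), (-1, 0.921260), (-1, 0.897640), (1, -0.409450),
         (1, -0.850400), (-1, 0.748030), (-1, 0.897640), (-1, 0.401575),
         (1, -0.929130), (1, -0.401570), (-1, 0.897638), (1, -0.905510),
         (-1, 0.401575), (-1, 0.984252)]

# Rearranged pre-activation condition: ASCII = -bias * 127 / weight
flag = ''.join(chr(round(-b * 127 / w)) for w, b in pairs)
print(flag)  # texsaw{n3ur4l_r3v3rs3}
```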

Solution#

Reversing the first-layer relation ASCII = -bias * 127 / weight for each connected input position reproduces the per-character mapping listed in the analysis above.

Reading those decoded characters in order gives:

texsaw{n3ur4l_r3v3rs3}
https://blog.rei.my.id/posts/131/texsaw-2026-excellent-neurons-miscellaneous-writeup/
Author: Reidho Satria
Published: 2026-03-30
License: CC BY-NC-SA 4.0