yigid:blog


some remarks on our way to ASI in the light of current LLMs

i just skimmed through the o3 and o4-mini announcement, focusing specifically on o3’s thought process. here are my ramblings on our way to ASI:

1. we may be closer to ASI than AGI

AGI is just us: a human with reasoning skills. but the thing being developed right now can reason in parallel, take in a vast body of information, and think endlessly, day and night.

define the goal once, and let it figure out the requirements. let it find the local minima of its limitations to accomplish any given task.

2. library of alexandria

this thing has no mental fog, and what it knows is indexed within its latent space. recent models even know when they don’t know enough: they can retrieve the correct piece of information and “remember” it before putting it to use.

it’s the hashmap of the library of alexandria, and can spawn an endless swarm of jinns to reason through each connection.

3. but

at the expense of burning down forests per “I’m a large language model” and vacuuming $$$s to draw your stupid Ghibli-style mugshot.


no, i don’t use ChatGPT.


the future is now.

see it for yourself: https://openai.com/index/introducing-o3-and-o4-mini/.


Close the world, .txen eht nepO

turkcell ✧ arbeit collective

yigid balaban // 2026