How these AIs work
This section describes aspects of popular generative text AIs.
These AIs have brains that are made up of billions of artificial neurons. The way these neurons are structured is called a transformer architecture.
It is a fairly complex type of neural network. What you should understand is:
- These AIs are just math functions. Instead of f(x) = x², they are more like f(thousands of variables) = thousands of possible outputs.
- These AIs understand sentences by breaking them into words/subwords called tokens (e.g. the AI might read "I don't like" as "I", "don", "'t", "like"). Each token is then converted into a list of numbers so the AI can process it.
- These AIs predict the next word/token in the sentence based on the previous words/tokens (e.g. after reading "I don't", the AI might predict "like" as the next token). Each token they write is based on the previous tokens they have seen and written; every time they write a new token, they pause to decide what the next token should be.
- These AIs look at every token at the same time. They don't read left to right or right to left the way humans do.
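The tokenization and next-token steps above can be sketched in a few lines of Python. Everything here is a toy stand-in, not a real model: the vocabulary and the "prediction" rule are invented for illustration, whereas a real transformer computes a probability for every vocabulary entry from all of the previous tokens.

```python
# Hypothetical tiny vocabulary mapping tokens to ID numbers.
vocab = {"I": 0, " don": 1, "'t": 2, " like": 3, " rain": 4, ".": 5}
id_to_token = {i: t for t, i in vocab.items()}

def tokenize(text):
    """Greedily split text into the longest matching known tokens,
    then convert each token to its ID number."""
    tokens = []
    while text:
        match = max((t for t in vocab if text.startswith(t)),
                    key=len, default=None)
        if match is None:
            raise ValueError(f"no token matches {text!r}")
        tokens.append(match)
        text = text[len(match):]
    return [vocab[t] for t in tokens]

def predict_next(token_ids):
    """Stand-in for the model: a real transformer maps ALL previous
    token IDs to probabilities over the whole vocabulary; here we use
    a hard-coded toy rule based only on the last token."""
    toy_rule = {0: 1, 1: 2, 2: 3, 3: 4, 4: 5}
    return toy_rule.get(token_ids[-1], 5)

# Generate: start from a prompt, append one predicted token at a time,
# pausing after each token to pick the next one.
ids = tokenize("I don")
while id_to_token[ids[-1]] != ".":
    ids.append(predict_next(ids))

print("".join(id_to_token[i] for i in ids))  # I don't like rain.
```

Real systems work the same way at the skeleton level: tokenize, turn tokens into numbers, predict one token, append it, repeat; the difference is that the prediction step is a learned function with billions of parameters instead of a lookup table.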
Please understand that words like "think", "brain", and "neuron" are anthropomorphisms: metaphors for what the model is actually doing.
These models are not really thinking; they are evaluating math functions. They do not have brains; they have artificial neural networks. And their "neurons" are not biological neurons; they are just numbers.