create something new based on an input (known as a “prompt”).
For example, by being trained on 1,000 descriptions of "the sun," it might establish rules that
say there is a high probability that “the sun” is hot, massive, yellow, and roughly 100 million
miles away.
Therefore, when it’s asked to create a piece of text describing the sun, it has all the information
it needs to do so.
The same principle would apply if a graphical generative AI algorithm were asked to draw a
picture of the sun, or if a sound-based algorithm were asked to compose a piece of music
inspired by the sun.
In this very simplified example, the hypothetical AI algorithm is using just four parameters –
heat, size, color, and distance from us – to create content about the sun.
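The hypothetical four-parameter model can be sketched in a few lines of Python. The parameter names and the sentence template below are illustrative assumptions for this toy example only – a real generative model does not store human-readable facts this way.

```python
# Toy sketch of the hypothetical four-parameter "sun" model described above.
# The parameter values stand in for rules the model might have learned
# from its 1,000 training descriptions.
sun_parameters = {
    "heat": "hot",
    "size": "massive",
    "color": "yellow",
    "distance": "roughly 100 million miles away",
}

def describe(subject, params):
    """Assemble a description of the subject from learned parameter values."""
    traits = ", ".join(params[k] for k in ("heat", "size", "color"))
    return f"The {subject} is {traits}, and {params['distance']}."

print(describe("sun", sun_parameters))
# → The sun is hot, massive, yellow, and roughly 100 million miles away.
```

With only four parameters, every output sounds much the same; the point is that more parameters mean richer, more varied content, which is what the next paragraph turns to.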
One of the most advanced generative AI models available today – OpenAI’s GPT-4 – is believed
to have around one trillion parameters. The precise details of its architecture and training
dataset have not been made public, but we can assume it knows far more about the sun than
the hypothetical model used for our example.
This means that the content it can generate can be far more detailed, sophisticated, in-depth,
and, from a certain point of view, creative.
Let’s test this out – this is GPT-4’s (via ChatGPT Plus) response to my prompt “write a haiku