In short: ChatGPT was not very good at writing a Gizmodo article — at least not one about the machine learning tech that underpins it. After probably a dozen tries or more, the following is the best ChatGPT managed to come up with:

“Large language models, also known as neural network-based language models, have gained a lot of attention in recent years for their ability to generate human-like text. These models are trained on massive amounts of text data, and use advanced machine learning algorithms to learn the patterns and structures of natural language. One of the most well-known examples of a large language model is GPT-3, developed by OpenAI. This model was trained on more than 45 terabytes of text data, and has 175 billion parameters, which are the variables that the model uses to make predictions. In machine learning, a parameter is a variable that the model uses to make predictions. These parameters are adjusted during training, allowing the model to improve its performance on the task it was trained for. In the case of large language models, such as GPT-3, the model has a large number of parameters, often in the billions, which allows it to learn the intricate patterns and structures of natural language and generate highly coherent and fluent text.”

ChatGPT’s writing may be competently constructed and able to break down the concepts it’s tackling, but it wasn’t able to produce a “particularly bold or entertaining piece of writing,” says Gizmodo. “In short: this article wasn’t the easy lift that we thought it would be.”
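The quoted passage describes parameters as the variables a model adjusts during training to improve its predictions. As a minimal illustrative sketch of that idea — a toy one-parameter model fit by gradient descent, not anything resembling how GPT-3 is actually trained — consider:

```python
# A "parameter" is a trainable number the model adjusts during training.
# Here a one-parameter linear model y = w * x is fit by gradient descent;
# GPT-3 applies the same basic idea to 175 billion parameters at once.

def train(xs, ys, lr=0.01, steps=200):
    w = 0.0  # the single parameter, initialized arbitrarily
    for _ in range(steps):
        # gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad  # adjust the parameter to reduce the error
    return w

# Data generated from y = 3x; training should recover w close to 3.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]
print(round(train(xs, ys), 2))  # → 3.0
```

The training loop is the whole story in miniature: each step nudges the parameter in whatever direction reduces the prediction error on the data.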
“After asking the chatbot to write about itself a dozen different ways, the program consistently seemed to leave something critical out of its final draft — be that exciting prose or accurate facts.”
That said, ChatGPT did manage to write an amusing poem about Slashdot. It also had a number of things to say about itself.
Read more of this story at Slashdot.