ChatGPT and similar programs should not be used carelessly.
The human-like conversational skills of ChatGPT may be misleading. When conversing, it's natural to assume certain things about the other person that may not be true of a computer.
For instance, we presume that most people do not intentionally mislead others. Large language models, however, routinely violate this expectation, delivering convincing but incorrect responses: they lack the metacognitive capacity to recognize their own ignorance.
Another assumption we make is that verbal fluency reflects intelligence. Someone who knows Shakespeare by heart, can explain quantum computing, and can prove the prime number theorem in rhymed verse is surely also able to count. With LLMs, this inference breaks down, so it is risky to treat their fluency as evidence that they are bright and well-versed.


