ChatGPT and similar tools should not be used carelessly.
ChatGPT's human-like conversational ability can be misleading. In conversation, we naturally make assumptions about the other party that may not hold for a machine.
For instance, we presume that most people do not intentionally mislead us. Large language models, however, regularly violate this expectation by delivering convincing but incorrect responses: they lack the metacognitive capacity to recognize their own ignorance.
Another assumption we make is that verbal fluency reflects intelligence. Someone who can recite Shakespeare, explain quantum computing, and prove the prime number theorem in rhymed verse can presumably also count. As a result, it is risky to approach LLMs as if they were brilliant, well-rounded experts.
Those reservations aside, ChatGPT has broad applicability. Simon Willison advises seeing LLMs as a "calculator for words" rather than a general-purpose intellect or clever person.
In general, I agree. To take full advantage of these new possibilities without falling into unanticipated traps, we need a better understanding of when LLMs work well and when they don't (yet).
ChatGPT: Ten Effective Ways to Use It for Learning
After receiving several letters from readers who wanted to share how they were using ChatGPT to study, I decided to compile a list of the most frequently mentioned uses.
1. Make your own Socratic dialogue partner
Readers most often mentioned using LLMs in the role of private instructor.
Asking ChatGPT to explain difficult ideas, unfamiliar code, or confusing problems seems to be one area where LLMs may perform well. The only real alternative, a human specialist, is costly and hard to come by.
When you use this method alongside a course or textbook, you still have the original source material, so the likelihood of being misled seems reduced. Rather than accepting the AI's word for everything, question its explanations whenever they don't square with what you know to be true.
2. Talk it out in different tongues
The second most common use readers mentioned was as a language instructor. LLMs seem well suited to this role: despite their many inadequacies, they can write coherent sentences.
Many users of ChatGPT set up their conversations so that the AI switches between the target language and English to help them when they get stuck. Its explanations may not be flawless, but human teachers, too, sometimes exaggerate how well they understand a language's syntax and vocabulary.
Another use is simplifying difficult passages. Graded readers and extensive reading are two effective methods for building literacy in a second language, but learning materials are often scarce or uninspiring. An LLM can rewrite texts written at native-speaker level down to your present reading level.
It seems that Duolingo is also joining the LLM fray. I've been critical of the previous iterations' reliance on a drag-and-drop interface for language acquisition, but these improvements make me think I may have to change my mind.
3. Create brief explanations of complex texts
LLMs also seem to be capable summarizers. Consumer apps that generate summaries of scholarly publications or research topics are now available.
Many readers report using AI to summarize their extensive reading material so they can stay abreast of changes in their profession.
Knowledge work regularly confronts us with heavy information loads, and effective summaries, particularly ones tailored to your specific requirements, can help you navigate them. You might use a summary as a first pass at organizing new content, or to prioritize which publications to read in detail.
4. Converse with long documents
LLMs may be useful for "asking questions" of lengthy texts. If you're reading a scientific study, for instance, you might have questions about the sample size, research methods, or findings. Consensus does this while providing references, which seems to leave less room for error in the LLM's answers.
While some of the examples offered are more fantastical—someone asking ChatGPT to take on the character of a certain author in order to have a conversation with them, for instance—the ability to ask documents questions in natural language and get answers with citations could be genuinely useful for navigating large volumes of material.
The LLM's replies should still be verified for accuracy. In the conversation below, for instance, I asked ChatGPT for a list of studies supporting strongly guided instruction, and it pointed me to a paper by Mayer. However, it described Mayer's paper as a meta-analysis, which it is not. In fact, the work isn't a literature review at all; it examines three high-profile examples of the shortcomings of discovery learning. This answer would be misleading if you accepted it at face value, but if you already know the source material, you can "check" the AI's work fairly easily.
5. Rewrite passages with varying depths of explanation
Expert writing is notoriously difficult to comprehend, since most texts at that level are intended for other experts: ideas are presented with little scaffolding and jargon abounds. Most people therefore depend on interpreters, such as authors of popular science articles or general-audience nonfiction, who present specialists' views in a more accessible way.
Two distinct strategies seem to be in play here. One is to ask directly—"Explain quantum computing like I'm an eighth-grader," for example—having the AI break a complex idea down into simpler terms. The alternative is to supply ChatGPT with a passage or explanation and have it rephrase the text in a way that is easier to understand.
Having the original document to cross-reference makes the second option seem more trustworthy to me than simply taking ChatGPT's word for it.
6. Clarify any jargon that may be confusing
A few years ago, while reading Tyler Cowen's Marginal Revolution blog, I was confused by his frequent, unexplained use of the word "Straussian" to characterize ideas or other thinkers. I looked for an answer on Google, but didn't find one.
After some digging, I concluded that the word refers to "closely reading between the lines of prominent thinkers' works, looking for what they actually meant but couldn't always say because of censorship and intellectual orthodoxy."
If only ChatGPT had been around while I was puzzling over this, I could have figured it out easily:
Many readers reported similar experiences using LLMs to decipher slang and other subculture-specific terminology that standard dictionaries fail to explain.
7. Make a schedule and plan of study
This use caught me off guard, but it came up often enough in reader comments that I decided to include it. It seems that people like relying on AI to direct their education.
Some readers, for instance, asked ChatGPT for a curriculum to reach a particularly difficult learning objective. Others went so far as to request a custom study schedule based on their available time and other constraints.
I'm not sure I would trust LLMs to produce a well-structured curriculum. Still, it could be a good place to begin learning something entirely new. Breaking down a seemingly insurmountable subject can be the hardest part of venturing into unfamiliar territory, and having a set schedule can help push past the reluctance to actually study.
However, ChatGPT still struggles with reading lists and referencing, sometimes inventing books outright. So while it may be useful for breaking down a complex learning task, I wouldn't yet trust it to supply solid sources.
8. Remind yourself of tools you may have forgotten or seldom use
The programming community responded to my inquiry more often than any other profession. Whether that is because programmers are quicker to embrace new software tools, or because programming is especially well suited to LLMs, I can't say.
The efficiency benefits for programmers are easy to see. I no longer write enough code myself to take much advantage of this. Still, it's obvious that having a machine produce the first draft of an algorithm saves a great deal of time, since most coding is fairly routine.
There are examples of non-developers building apps from AI output, but I worry that such apps will be difficult to debug and maintain. A skilled programmer, on the other hand, can vet and adapt ChatGPT's results, much as they would with code written in a language they only partly know.
LLMs seem to shine at the edges of a programmer's skill set. Many developers told me they use AI for pointers when getting started with new programming languages and tools: they have the foundational knowledge to interpret and apply the AI's output, even though the specific language is new to them.
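The workflow readers describe—have the model draft, then verify—can be as simple as pasting the draft into a file and running your own tests against it. A minimal sketch: suppose ChatGPT produced the following run-length encoder (a hypothetical draft, not actual model output); a few assertions of your own tell you whether the draft actually behaves before you rely on it.

```python
# Hypothetical first draft of a run-length encoder, as an LLM might
# produce it. The point is the habit, not the algorithm: don't trust
# the draft until your own tests pass.
def run_length_encode(s: str) -> list[tuple[str, int]]:
    """Compress a string into (character, count) pairs."""
    out: list[tuple[str, int]] = []
    for ch in s:
        if out and out[-1][0] == ch:
            # Same character as the previous run: extend its count.
            out[-1] = (ch, out[-1][1] + 1)
        else:
            # New character: start a fresh run.
            out.append((ch, 1))
    return out

# Your verification, written by you rather than the model:
assert run_length_encode("aaabcc") == [("a", 3), ("b", 1), ("c", 2)]
assert run_length_encode("") == []
```

If the model's draft fails a test, you can paste the failing case back into the conversation and ask for a fix—exactly the loop experienced programmers described.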
9. Make text-based flashcards (Tentative)
Flashcards are an effective method of studying. Making them, however, is a royal pain.
Some readers mentioned using ChatGPT to make study flashcards. This seems well within the LLM's capabilities as a "calculator for words." If you give the LLM the material to turn into flashcards, rather than relying on it to find the facts on its own (as we'll discuss below), you should be able to get decent results.
However, I wouldn't add any flashcards to Anki without first reviewing them, since "good" flashcards are hard to make. Writing flashcards is time-consuming, but with a rough draft to review I could do the job much faster. If you check the cards for accuracy before using them, the risks seem manageable.
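Once you've reviewed an LLM's draft, getting the cards into Anki is mechanical: Anki's text-file import accepts one note per line with tab-separated fields. A small sketch (the question/answer pairs here are placeholders standing in for a draft you've already checked):

```python
# Turn reviewed (question, answer) pairs into tab-separated lines that
# Anki's "Import File" dialog accepts: one note per line, fields
# separated by tabs. The pairs below are illustrative placeholders.
cards = [
    ("What does LLM stand for?", "Large language model"),
    ("Who described LLMs as a 'calculator for words'?", "Simon Willison"),
]

def to_anki_tsv(pairs):
    """Format (question, answer) pairs as Anki-importable TSV lines."""
    lines = []
    for q, a in pairs:
        # A stray tab inside a field would shift columns on import.
        q, a = q.replace("\t", " "), a.replace("\t", " ")
        lines.append(f"{q}\t{a}")
    return "\n".join(lines)

print(to_anki_tsv(cards))
```

Save the output as a `.txt` file and import it into the deck of your choice; the review step stays with you, only the typing is automated.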
10. Use it to find your notes (Advanced)
I spend an inordinate amount of time trying to track down my research notes. Robert Martin ran into the same problem. Searching by keywords alone is unreliable, since you often forget the precise word you used even when you remember the meaning.
Martin solves this by applying LLM embeddings to the problem. Not ChatGPT itself, but a model from the same family of language technologies, this application finds notes that are related semantically rather than by keyword.
LLMs tailored to your specific needs and running locally against your own data might prove useful here. It would be great to be able to search for something even when you can't quite name it.
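The core of semantic note search is simple once each note has an embedding vector: rank notes by cosine similarity to the query's vector. A minimal sketch in plain Python, using tiny made-up 3-dimensional vectors (real embeddings come from a model and have hundreds of dimensions; the filenames are hypothetical):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_notes(query_vec, note_vecs):
    """Return note names sorted by similarity to the query, best first."""
    scored = [(cosine(query_vec, v), name) for name, v in note_vecs.items()]
    return [name for _, name in sorted(scored, reverse=True)]

# Toy embeddings standing in for real model output.
notes = {
    "spaced-repetition.md": [0.9, 0.1, 0.0],
    "quantum-computing.md": [0.0, 0.2, 0.9],
    "flashcard-tips.md":    [0.8, 0.3, 0.1],
}
query = [0.8, 0.3, 0.1]  # e.g. the embedding of "how to review with flashcards"
print(rank_notes(query, notes))
# -> ['flashcard-tips.md', 'spaced-repetition.md', 'quantum-computing.md']
```

Because similarity is computed on meaning-bearing vectors rather than exact words, a query about "reviewing with flashcards" surfaces the flashcard and spaced-repetition notes even if neither contains your exact phrasing.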
Don'ts
1. Don't rely on AI to accurately represent the world
LLMs hallucinate. If you rely on ChatGPT for accurate answers, these fabrications are a problem, and it's currently hard to estimate how often they occur. For comparison: when Wikipedia first went up, "experts" cried out that its user-generated content made it unreliable, yet Wikipedia has been a success overall, so those early complaints may have been unwarranted.
We still don't know when LLMs are likely to get a response right and when they are likely to make something up, and they haven't yet attained Wikipedia's level of factual reliability. Use them when the consequences of a wrong answer are low, either because you can easily check the information elsewhere or because your use case doesn't demand hard facts.
2. Referencing errors are to be expected from AI
When it comes to citations, LLMs seem to be even worse than their undergraduate counterparts. They make up whole books, studies, and researchers all the time.
If I were doing research that required citations, I would be very reluctant to use an LLM at all, and if I did, I would verify every claim against independent sources.
In a similar vein, I wouldn't ask an LLM for a recommended reading list, or for pointers to particular works or writers, unless I knew those were already well represented in its training data.
3. Assume that AI will make mistakes in math
Although LLMs perform many tasks at or above human level, it is a mistake to ascribe general intelligence to them. Like chess engines and image classifiers, the technology behind LLMs is far narrower than the mind we would imagine behind comparable performance on verbal exams.
Psychological research suggests that different brain regions handle language and reasoning, and LLMs seem consistent with the double-dissociation findings from that literature: eloquent verbal ability can coexist with severe impairments in reasoning.
Thus, LLMs have a poor grasp of numbers, and it's not only advanced math where they falter: even counting, a mundane task, trips them up. I would expect an LLM to be especially bad at something like setting math homework and grading solutions. ChatGPT may be good at explaining mathematical concepts, but not at applying them.
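Since counting and arithmetic are exactly where LLMs slip, the safest habit is to check any numeric claim with a few lines of code instead of trusting the model. A tiny illustration: suppose a model claims the word "bookkeeper" contains three doubled-letter pairs; one line settles it.

```python
# Verify a numeric claim yourself rather than trusting an LLM's count.
# Here we count adjacent identical letters in "bookkeeper".
word = "bookkeeper"
pairs = sum(1 for a, b in zip(word, word[1:]) if a == b)
print(pairs)  # -> 3  ("oo", "kk", "ee")
```

The same reflex applies to sums, percentages, and dates: if the answer matters, let the model explain the method and let your own code do the arithmetic.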