The article by Will Douglas Heaven titled “The new version of GPT-3 is much better behaved (and should be less toxic)” focuses on how Generative Pre-trained Transformer 3, or GPT-3, improved significantly due to the InstructGPT update. It is stated that “large language models like GPT-3 are trained using vast bodies of text, much of it taken from the internet, in which they encounter the best and worst of what people put down in words” (Heaven par. 2). Since the tool from OpenAI learns from all manner of English-language texts, it is prone to producing highly offensive and toxic language. However, the new update mitigates this problem by a significant margin, generating fewer erroneous, misinformed, and offensive texts.
It is important to note that GPT-3 is still at a rudimentary stage, which means that its implications are minor as of today. However, improvements such as InstructGPT create leaps in the AI’s learning process, accelerating its maturation into a usable product. The future business implications are massive: repetitive, non-creative text-generating jobs may become obsolete, since GPT-3 will be able to perform these tasks far more effectively and efficiently.
In conclusion, I think that GPT-3 is a highly promising instrument that requires frequent, guided training. I agree that GPT-3 should not rely solely on self-directed learning and that professionals should direct the AI. The text generator is of paramount importance to the shift toward automating many text-related jobs, including programming. Although the tool will not replace all software developers, it will make the mundane elements of code generation far more efficient.
Work Cited
Heaven, Will Douglas. “The New Version of GPT-3 Is Much Better Behaved (and Should Be Less Toxic).” MIT Technology Review, 2022. Web.