OpenAI, the non-profit research company co-founded by Elon Musk and Sam Altman, published a paper yesterday detailing the development of a new state-of-the-art, large-scale unsupervised language model that generates coherent paragraphs of text.
The model, called GPT-2, was trained simply to predict the next word in 40GB of Internet text. GPT-2 displays a broad set of capabilities, including the ability to generate conditional synthetic text samples of unprecedented quality – all without task-specific training.
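To make "predict the next word" concrete, here is a minimal sketch of conditional text generation with an autoregressive language model. It assumes the publicly released small GPT-2 checkpoint and the Hugging Face transformers library; neither the library, the "gpt2" checkpoint name, nor the prompt comes from the article itself, and OpenAI's own release is separate TensorFlow code.

```python
# A minimal sketch of conditional text generation, assuming the small GPT-2
# checkpoint is available through the Hugging Face `transformers` package
# (an assumption; not how OpenAI distributed its release).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# A hypothetical prompt; the model continues the text conditioned on it.
prompt = "In a shocking finding, scientists discovered"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# The model repeatedly predicts the next token given everything generated
# so far; top-k sampling makes each continuation non-deterministic.
output_ids = model.generate(
    input_ids,
    max_length=60,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Running the snippet several times yields different continuations of the same prompt, which is the "conditional synthetic text" behavior described above, just at the scale of the smaller released model.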
The examples of the model's capabilities, presented with the full press release, are awe-inspiring and open up new possibilities for applying ML text-generation models in journalism, translation, or chatbots. The article also raises questions about potential abuse if the model were to fall into the hands of bad actors.
From OpenAI's announcement: "Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper."
OpenAI's mission is to build safe Artificial General Intelligence (AGI) and ensure AGI's benefits are as widely and evenly distributed as possible. The company expects AI technologies to be hugely impactful in the short term, but expects their impact to be outstripped by that of the first AGIs.
Link: OpenAI – Better Language Models and Their Implications