OpenAI has recently released the OpenAI API for accessing new AI models it has developed. Unlike most AI systems, which are designed for one use case, this API provides a general-purpose “text in, text out” interface, allowing users to try it on virtually any English language task.

The API is not yet available to the general public, but you can request access and join a waitlist in order to integrate the API into an existing product, develop an entirely new application, or help OpenAI explore the strengths and limits of this technology.

The OpenAI API can be used for virtually any language task, including semantic search, summarization, sentiment analysis, content generation, translation, and more. Given any text prompt, the API returns a text completion, attempting to match the pattern you gave it. You can “program” it by showing it just a few examples of what you’d like it to do; its success generally varies with the complexity of the task. The API also lets you hone performance on specific tasks by training on a dataset (small or large) of examples you provide, or by learning from human feedback provided by users or labelers.
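
To make the “program it by example” idea concrete, here is a minimal few-shot prompting sketch in Python. It assumes the openai Python package and the completions interface as documented at the API’s launch; the engine name, parameters, and response shape are illustrative, so check the current documentation before relying on them.

```python
# A minimal few-shot sentiment classifier sketch (pip install openai).
# Engine name and parameters are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # issued once your access request is approved

# "Program" the API by example: a few labeled lines establish the pattern,
# and the completion is expected to continue it for the final line.
prompt = (
    "Tweet: I loved the new update, everything feels faster!\n"
    "Sentiment: positive\n"
    "Tweet: The app keeps crashing on startup.\n"
    "Sentiment: negative\n"
    "Tweet: Just installed it, haven't tried it yet.\n"
    "Sentiment:"
)

response = openai.Completion.create(
    engine="davinci",   # illustrative engine name
    prompt=prompt,
    max_tokens=5,       # a short label is all we need
    temperature=0.0,    # deterministic output suits classification
    stop="\n",          # stop at the end of the label
)

print(response["choices"][0]["text"].strip())  # e.g. "neutral"
```

The same pattern-completion trick works for the other tasks listed above; only the examples in the prompt change.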

API vs. open-sourcing the models

OpenAI has commercialized this technology in order to fund its ongoing AI research, safety, and policy efforts. But there are also additional reasons why OpenAI released an API instead of open-sourcing the models.

Many of the models underlying the API are very large; they take a great deal of expertise to develop and deploy, and they are very expensive to run. This makes it hard for anyone except larger companies to benefit from the underlying technology. The API aims to make powerful AI systems more accessible to smaller businesses and organizations.

The API business model also allows OpenAI to monitor and restrict misuse of this impressive technology.

OpenAI API usage review process

To prevent malicious use of the model (e.g., for disinformation), OpenAI will limit access to approved customers and use cases. A mandatory production review must take place before a proposed application can go live. During production reviews, OpenAI evaluates applications along a few axes, asking questions like:

  • Is this a currently supported use case?
  • How open-ended is the application?
  • Is the application risky?
  • How do you plan to address potential misuse?
  • Who are the end users of your application?

OpenAI API access will be terminated for use cases that are found to cause (or are intended to cause) physical, emotional, or psychological harm to people, including but not limited to harassment, intentional deception, radicalization, astroturfing, or spam, as well as applications that have insufficient guardrails to limit misuse by end users.

What is an open-ended application?

An application may exhibit open-ended versus constrained behavior with regard to the underlying generative capabilities of the system. Open-ended applications of the API (i.e., ones that enable frictionless generation of large amounts of customizable text via arbitrary prompts) are especially susceptible to misuse. Generative use cases can be made safer by a system design that keeps a human in the loop and by constraints such as the following (a sketch combining several of them appears after the list):

  • end user access restrictions,
  • post-processing of outputs,
  • content filtration,
  • input/output length limitations,
  • active monitoring, and
  • topicality limitations.
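
As a rough illustration, here is a hypothetical guardrail wrapper in Python that combines several of the constraints above: input/output length limits, a topicality restriction via a fixed prompt template, simple content filtration, and an active-monitoring stub. The blocklist, limits, and template are placeholders of my own; a production system would need far more robust filtering and review.

```python
# Hypothetical guardrails around a completion call; all names and limits
# here are illustrative assumptions, not part of the actual API.
import openai

MAX_PROMPT_CHARS = 500         # input length limitation
MAX_OUTPUT_TOKENS = 64         # output length limitation
BLOCKED_TERMS = {"example-banned-term"}  # stand-in for a real content filter
TOPIC_TEMPLATE = "Write a product description for:"  # topicality limitation

def guarded_completion(user_input: str) -> str:
    """Generate text under simple guardrails; assumes openai.api_key is set."""
    # Constrain length and force a single narrow template rather than
    # accepting arbitrary prompts (reduces open-endedness).
    if len(user_input) > MAX_PROMPT_CHARS:
        raise ValueError("input too long")
    prompt = f"{TOPIC_TEMPLATE} {user_input}\n"

    response = openai.Completion.create(
        engine="davinci",            # illustrative engine name
        prompt=prompt,
        max_tokens=MAX_OUTPUT_TOKENS,
        temperature=0.7,
    )
    text = response["choices"][0]["text"]

    # Post-process the output: withhold anything the filter flags so a
    # human in the loop can review it before it reaches the end user.
    if any(term in text.lower() for term in BLOCKED_TERMS):
        return "[output withheld pending review]"

    # Active-monitoring stub: log the exchange for human auditing.
    print(f"audit log: prompt={prompt!r} output={text!r}")
    return text
```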

Who currently has access to the OpenAI API?

Educational institutions. OpenAI already has tens of thousands of applicants for this program and is currently prioritizing applications focused on fairness and representation research.

How will OpenAI mitigate harmful bias and other negative effects of models served by the API?

Mitigating negative effects such as harmful bias is a hard, industry-wide issue that is extremely important. As OpenAI discussed in the GPT-3 paper and model card, the models served by the API do exhibit biases that will be reflected in generated text. Here are the steps OpenAI is taking to address these issues:

  • Developing usage guidelines that help developers understand and address potential safety issues.
  • Working closely with users to understand their use cases and to develop tools that surface harmful bias and intervene to mitigate it.
  • Conducting its own research into manifestations of harmful bias and broader issues in fairness and representation, which will help inform OpenAI’s work via improved documentation of existing models as well as various improvements to future models.
  • Recognizing that bias is a problem that manifests at the intersection of a system and a deployed context; applications built with OpenAI technology are sociotechnical systems, so OpenAI works with developers to ensure they put appropriate processes and human-in-the-loop systems in place to monitor for adverse behavior.

Are you planning to use the OpenAI API for your own application? If yes, please share your plans in the comments below!

This post is based on information first published on the OpenAI website.