We begin with Python scripts that use the APIs for OpenAI models, first invoking DALL-E.
The following code fragment can be run in a Jupyter (.ipynb) notebook. Note that you will need to obtain an API key from OpenAI:
https://beta.openai.com/account/api-keys
import os
import openai  # You need to pip install openai

# A secret key that you need to get from https://beta.openai.com/account/api-keys;
# here it is read from an environment variable rather than pasted into the notebook.
openai.api_key = os.environ["OPENAI_API_KEY"]

openai.Image.create(
    prompt="A French bulldog and Boston terrier in Star Trek uniforms",
    n=2,
    size="1024x1024"
)
This code results in something like the following:
Other fanciful images may be found in the recent New York Times article https://www.nytimes.com/2023/01/07/technology/generative-ai-chatgpt-investments.html .
If you specify something more realistic in the text prompt, say, replace
"A French bulldog and Boston terrier in Star Trek uniforms"
with
"A black and white French bulldog."
you get something like
A deep fake French bulldog.
My own—very real—French bulldog, Cassidy.
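The Image.create call above returns URLs for the generated images rather than the images themselves. As a minimal sketch, assuming the response layout of the pre-1.0 openai Python library and using the third-party requests package, the files can be downloaded as follows (the filename is just an illustration):
import requests  # You need to pip install requests

response = openai.Image.create(
    prompt="A black and white French bulldog.",
    n=1,
    size="1024x1024"
)

# Each entry in response["data"] carries a temporary URL for one generated image.
for i, item in enumerate(response["data"]):
    image_bytes = requests.get(item["url"]).content
    with open(f"bulldog_{i}.png", "wb") as f:
        f.write(image_bytes)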
Just as images can be generated, similar generative models can write text, as seen in the following code.
import os
import openai  # You need to pip install openai

# A secret key that you need to get from https://beta.openai.com/account/api-keys
openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    engine="davinci",
    prompt="This is something new that I'm trying out.",
    temperature=0.9,
    max_tokens=150,
    top_p=1,
    presence_penalty=0.6)

print(response["choices"][0]["text"])
The response:
Normally I would definitely say go for it and buy the bottle, but I really have to stick with the 4 stars over this one. It's good, but to me it's just not something I'll go out of my way to get after trying it. As always with Yves Rocher, though, I highly recommend giving it a try if you're interested. The price is right, the scent is great, and if you enjoy floral scents - you may be pleasantly surprised!
There is a free and easy-to-use web interface to ChatGPT at https://chat.openai.com/auth/login,
but due to the overwhelming interest, the site invariably displays the statement that “ChatGPT is at capacity” and prompts would-be users to check back later.
In addition to the available JavaScript and the Python-based APIs to ChatGPT illustrated above, one can find a pretrained GPT model in current versions of Mathematica™. The following script loads the model, and there is extensive documentation on how to use it.
net = NetModel[{"GPT Transformer Trained on BookCorpus Data","Task" -> "LanguageModeling"}]
Boston University has a site license, so everyone is free to experiment. For more information, see the Wolfram documentation for NetModel and the Wolfram Neural Net Repository.
It is worth concluding our discussion of the frontiers of generative AI with some remarks on scale. An interesting exercise for anyone interested in training a simple neural network is to train LeNet-5 on the MNIST data set; a sketch of such a training loop is given after the citation below. LeNet-5 was the creation of Yann LeCun and his collaborators and was described in a paper in the Proceedings of the IEEE in 1998. It is a seven-layer convolutional neural net with roughly 60,000 trainable parameters. The training data consists of 60,000 28x28 images of handwritten digits from 0 through 9, and the test data is another 10,000 handwritten digits.
@article{lecun1998gradient,
title={Gradient-based learning applied to document recognition},
author={LeCun, Yann and Bottou, L{\'e}on and Bengio, Yoshua and Haffner, Patrick},
journal={Proceedings of the IEEE},
volume={86},
number={11},
pages={2278--2324},
year={1998},
publisher={IEEE}
}
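To give a concrete sense of the scale involved, the following is a minimal PyTorch sketch of a LeNet-5-style network and a training loop over MNIST. The layer sizes follow the commonly used modern variant (max pooling and tanh activations rather than the original subsampling and RBF output layers), so the details differ slightly from the 1998 architecture, but the parameter count lands near the 60,000 quoted above.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

class LeNet5(nn.Module):
    # A LeNet-5-style network: two convolutional layers followed by three
    # fully connected layers, roughly 60,000 trainable parameters in all.
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 6, kernel_size=5, padding=2)  # 28x28 -> 28x28
        self.conv2 = nn.Conv2d(6, 16, kernel_size=5)            # 14x14 -> 10x10
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = F.max_pool2d(torch.tanh(self.conv1(x)), 2)  # -> 6 x 14 x 14
        x = F.max_pool2d(torch.tanh(self.conv2(x)), 2)  # -> 16 x 5 x 5
        x = torch.flatten(x, 1)
        x = torch.tanh(self.fc1(x))
        x = torch.tanh(self.fc2(x))
        return self.fc3(x)

train_data = datasets.MNIST("data", train=True, download=True,
                            transform=transforms.ToTensor())
loader = DataLoader(train_data, batch_size=64, shuffle=True)

model = LeNet5()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        optimizer.step()
Training this small network to high accuracy on MNIST typically takes only minutes on an ordinary laptop.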
This is contrasted with the rapidly changing landscape of Generative Pre-trained Transformers. When it appeared in preprint form in 2018,
@misc{radford2018improving,
title={Improving language understanding by generative pre-training},
author={Radford, Alec and Narasimhan, Karthik and Salimans, Tim and Sutskever, Ilya},
year={2018}
}
GPT-1 had 117 million trainable parameters. GPT-2, which appeared in February 2019, had 1.5 billion trainable parameters, and the current version of GPT-3 that is the backend for the above APIs has 175 billion parameters. The original LeNet models used training data that was painstakingly created by volunteers: employees of the U.S. Census Bureau and a number of high school students. Training data for AI systems the size of GPT-3 cannot be crafted in such a fashion, and indeed the image fragments, text, and published code were obtained by crawling the Web.
While existing GPT models have been notably successful in generating text, images, and sound, similar generative models for control of movement in ways that replicate neurobiology have not yet appeared. Biological motor control introduces complexities that are not present in the creative processes supported by the above OpenAI models. Real-time multisensory processing and the use of motion strategies that have both reactive and deliberative components provide a host of yet-to-be-addressed challenges. Both long-term and short-term memory are involved as well. Fundamental problems include finding sources of training data.
Elon Musk was one of the cofounders of OpenAI in 2015. He resigned from the board in February 2018 but has remained a donor. For more information, see https://beta.openai.com.