Search or fabrication?
From the AI Weirdness RSS feed.
I recently started experimenting with Bing's new ChatGPT-powered chat tab. This is the first thing I asked it for:
![Some examples of AI Weirdness blog posts are: [brackets mark factual errors] New AI paint colors, where Janelle Shane uses a language model called GPT-3 to generate names and descriptions for paint colors, such as ["Literally Grab" (a bright orange)] and "Dorkwood" (a dark brown). AI-generated recipes, where Janelle Shane trains a neural network on a dataset of recipes and gets it to create new ones, such as "Chocolate Chicken Chicken Cake" and "Beef Soup With Swamp Peef And Cheese". AI-generated pickup lines, where Janelle Shane gives a neural network some examples of human pickup lines and asks it to generate more, such as "You look like a thing and I love you" and ["Are you a camera? Because you make me smile".]](https://www.aiweirdness.com/content/images/2023/03/AI-Weirdness-blog-examples-annotated.jpg)
I've put red boxes around the factual errors. What is notable is that these are not just slight typos or errors of context - those items never