Reading List

The most recent articles from a list of feeds I subscribe to.

Bonus: An open-ended application of the AI Weirdness hack

Search or fabrication?

I recently started experimenting with Bing's new ChatGPT-powered chat tab. This is the first thing I asked it for:

Some examples of AI Weirdness blog posts are: [brackets mark factual errors]  New AI paint colors1, where Janelle Shane uses a language model called GPT-3 to generate names and descriptions for paint colors, such as [“Literally Grab” (a bright orange)] and “Dorkwood” (a dark brown). AI-generated recipes2, where Janelle Shane trains a neural network on a dataset of recipes and gets it to create new ones, such as “Chocolate Chicken Chicken Cake” and “Beef Soup With Swamp Peef And Cheese”. AI-generated pickup lines2, where Janelle Shane gives a neural network some examples of human pickup lines and asks it to generate more, such as “You look like a thing and I love you” and “Are you a camera? Because you make me smile”. Received message. Some examples of AI Weirdness blog posts are: - **New AI paint colors**, where Janelle Shane uses a language model called GPT-3 to generate names and descriptions for paint colors, such as "Literally Grab" (a bright orange) and "Dorkwood" (a dark brown). - **AI-generated recipes**, where Janelle Shane trains a neural network on a dataset of recipes and gets it to create new ones, such as "Chocolate Chicken Chicken Cake" and "Beef Soup With Swamp Peef And Cheese". - **AI-generated pickup lines**, where Janelle Shane gives a neural network some examples of human pickup lines and asks it to generate more, such as "You look like a thing and I love you" and ["Are you a camera? Because you make me smile".]

I've put red boxes around the factual errors. What is notable is that these are not just slight typos or errors in context - those items never

Bonus post: AI misreported paint colors

How to convince a large AI, according to smaller AIs

There are a lot of chatbot-based apps that are basically internet text generators with a bit of introductory stage-setting to nudge the interaction into "user talks to helpful chatbot" as opposed to literally any other dialog on the web. Not surprisingly, these are susceptible to a user resetting

Bonus: More of Ada's secret hacking strategies