Takeaway:

AI homogeneity means everything becomes more similar and harder to tell apart, much like hipsters who try to be unique but end up looking alike. AI can also produce similar-looking, similar-quality work if it relies on a few popular models. To avoid reinforcing biases, draw on diverse data that reflects people's unique perspectives, cultures and experiences. Practice with varied prompts and link AI tools together to reach your own unique results.

Mathematician Jonathan Touboul studied the spread of information and its impact on social behaviour. He found a "hipster effect" paradox: in many situations, people trying to be different ended up behaving similarly. The same pattern appears in investing and in other areas of the social sciences.
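
Here is a toy sketch in Python of that paradox, loosely inspired by the hipster effect rather than Touboul's actual equations: a crowd of anti-conformists who each react to slightly out-of-date information about what is popular, and who all end up dressing alike anyway. The styles, delay and probabilities are made up for illustration.

```python
import random

# Toy simulation, loosely inspired by the "hipster effect" (not Touboul's
# actual model): everyone is an anti-conformist who switches to whatever
# style looked LESS popular a few rounds ago. The delay in noticing trends
# is the key ingredient; all numbers here are made up for illustration.

N = 200        # number of anti-conformists
DELAY = 3      # rounds it takes to notice what is popular
ROUNDS = 20

styles = [random.randint(0, 1) for _ in range(N)]   # 0 = beard, 1 = clean-shaven
history = [styles[:]]

for t in range(ROUNDS):
    # Everyone reacts to the same slightly stale snapshot of the crowd.
    observed = history[max(0, len(history) - 1 - DELAY)]
    minority_style = 0 if sum(observed) > N / 2 else 1
    # Each person tries to be different by picking the "rare" style...
    styles = [minority_style if random.random() < 0.9 else s for s in styles]
    history.append(styles[:])
    # ...so almost everyone ends up wearing the same thing at the same time.
    print(f"round {t:2d}: {sum(styles) / N:.0%} wear style 1")
```

Because everyone acts on the same delayed information, the crowd flips between styles almost in unison, which is the paradox in miniature.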
So, what will happen next?

The key to getting the best results is practice. Work with different AI tools, link programs together, guide them with creative prompts and refine their outputs. Chatbots, like mirrors, reflect back what they see. Most users of ChatGPT and other generative AI apps aren’t involved in building the training data, so for most of us, our prompts are the only thing we can control.
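
As a small illustration of practicing with varied prompts, here is a sketch in Python. The generate() function is a hypothetical stand-in for whichever chatbot or API you actually use; the point is simply that reframing the same request from several perspectives pulls the answers away from one generic framing.

```python
def generate(prompt: str) -> str:
    """Hypothetical placeholder for a call to your chosen AI tool."""
    return f"[model response to: {prompt}]"

# One request, asked from several deliberately different perspectives.
question = "Suggest a logo concept for a neighbourhood bakery."
perspectives = [
    "as a graphic designer trained in Japanese minimalism",
    "as a street artist from Mexico City",
    "as a retired sign painter who works only by hand",
]

for angle in perspectives:
    prompt = f"{question} Answer {angle}."
    print(prompt)
    print(generate(prompt))
    print()
```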

Some companies are limiting bias in AI by setting boundaries for how a particular type of system can be used. Algorithmic auditing, a newer practice of examining an algorithm for bias, can also help ensure that algorithms follow desirable social norms and do not harm or treat anyone unfairly.
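
A simple way to picture one step of such an audit: assuming we can see a system's decisions alongside a group label for each person, we can compare how often each group receives the favourable outcome. The sketch below uses made-up data and a basic "demographic parity" check; real audits go much further.

```python
from collections import defaultdict

# Made-up (group, decision) pairs: 1 = favourable outcome (e.g. approved), 0 = not.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals = defaultdict(int)
favourable = defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    favourable[group] += outcome

# Favourable-outcome rate per group, and the gap between the best and worst.
rates = {g: favourable[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())

print("Rates per group:", rates)
print(f"Demographic parity gap: {gap:.2f}")   # a large gap is a red flag
```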

What might Bridgeneers do?

Break biases by introducing a diverse range of influences and voices to better reflect our real world. Pay attention to AI’s output and ask it to refine its answers as needed.

Have a project in mind? Let’s get to work!