UNMATCHED

Meeting people IRL has become increasingly difficult, leading to a surge in reliance on dating apps. Yet the algorithms that drive these apps mirror the engagement-driven business models of social media platforms, governed by the principles of surveillance capitalism and the attention economy.

Extensive investigations by journalists worldwide, such as the well-known two-year investigation by French journalist Judith Duportail, have unveiled disconcerting practices within dating app algorithms. A prime example is Tinder: a platform teeming with user data, capable of intelligently identifying potential matches, which nonetheless strategically withholds matching profiles to prolong user engagement. Matches materialize unpredictably, mirroring the mechanics of gambling, and the resulting frustration becomes a catalyst for subscription upgrades or profile boosts that promise greater visibility and access to other profiles.
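To make the gambling comparison concrete, here is a toy simulation of a variable-ratio reward schedule, the intermittent-reinforcement pattern behind slot machines. The match rate and everything else in it are invented for illustration; they are not documented Tinder parameters.

```python
import random

def simulate_swipes(swipes, match_rate=0.05):
    """Each swipe has a small, unpredictable chance of a match --
    a variable-ratio reward schedule, as in slot machines."""
    return [i for i in range(swipes) if random.random() < match_rate]

random.seed(7)
matches = simulate_swipes(200)
print(f"{len(matches)} matches in 200 swipes, at positions {matches}")
# The gaps between rewards are irregular; intermittent reinforcement of
# exactly this kind is what keeps players (and swipers) coming back.
```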

Dating app algorithms claim to understand who we are and what we want as individuals, and this is what compelled the artist Cadie Desbiens-Desmeules to invert the power dynamic by asking what algorithms truly know about us: “Does it match?” The artist collected real profile descriptions from a dating app focused on intimate relationships and used them as prompts to generate photos with the unrestricted open-source AI model Stable Diffusion. The resulting AI-generated profile photos, based on very simple prompts such as “31 year old women, bi-curious” and stripped of any identifying user information, offer poignant insights into the biases ingrained in datasets such as LAION and Common Crawl, which are widely employed in training most image and large language models, including GPT and DALL·E.
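For a rough sense of the mechanics, a minimal sketch of this generation step with Hugging Face's `diffusers` library follows. The checkpoint, sampling settings, and output path are assumptions for illustration, not the artist's documented pipeline.

```python
import torch
from diffusers import StableDiffusionPipeline

# An open Stable Diffusion checkpoint; "unrestricted" here means the
# built-in NSFW safety checker is disabled.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    safety_checker=None,
).to("cuda")

# A real profile description, stripped of identifying details, becomes the prompt.
prompt = "31 year old women, bi-curious"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("generated_profile.png")
```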

Notably, the AI-generated profiles underscore pervasive biases in the training data. Non-heteronormative sexual orientations trigger highly suggestive content, perpetuating stereotypical representations. Queer individuals are consistently depicted as women, while women in general are portrayed in overtly sexualized ways. The generated images of men are equally explicit, with depictions of intimate anatomy surfacing automatically whenever the stated orientation falls outside the heterosexual spectrum.

Images generated with Stable Diffusion (not with Glif.app) using real dating profile descriptions as prompts.

These revelations raise concerns about the datasets underpinning the training of large language models and image models, but also of algorithms in general, since most are built on the same common datasets derived from internet scraping. Academic research has shown not only that AI amplifies mainstream culture but also that these datasets remain full of undesirable content, including hate speech, explicit material, and harmful stereotypes such as racism, even after filtering procedures.
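A toy example suggests why such filtering falls short: blocklist-style filters match exact tokens, while obfuscations and stereotypes phrased in ordinary words pass straight through. The blocklist and samples below are invented for illustration.

```python
BLOCKLIST = {"slur1", "slur2"}  # stand-ins for real filtered terms

def passes_filter(text):
    """Naive blocklist filter of the kind applied to scraped corpora."""
    tokens = text.lower().split()
    return not any(tok.strip(".,!?") in BLOCKLIST for tok in tokens)

samples = [
    "contains slur1 verbatim",        # caught by the blocklist
    "contains s l u r 1 spaced out",  # obfuscated: slips through
    "women belong in the kitchen",    # a stereotype in plain words: slips through
]
for s in samples:
    print(passes_filter(s), "-", s)
# Output: False / True / True -- only the verbatim token is removed, so
# obfuscated content and stereotypes survive into the training data.
```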

The artist's experiment demonstrates how algorithmic discrimination can arise in machine learning. This is an ethically concerning kind of bias: a model can be statistically accurate and yet morally troubling because it reflects or perpetuates discrimination. Algorithmic discrimination, whether direct or indirect, can be deeply harmful to certain individuals, and because of algorithmic opacity it may occur without our knowledge. The risk of compounding or exacerbating existing or past injustices is especially high with the kinds of AI that fall under ‘automation’ or ‘discriminative AI’. Discriminative machine learning is the kind of AI that has been used massively over the past decade in our digital lives, for example in decision-making through classification or the identification of patterns in data: algorithmic filtering on social media platforms, targeted advertising, surveillance, and so forth. However, as the artist's experiment exposes, algorithmic bias also exists in generative AI.
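A small synthetic example shows the pattern: a discriminative classifier trained on historically biased decisions scores well on its own data while faithfully reproducing the discrimination. Everything in it, from the data to the decision rule, is invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)   # protected attribute (0 or 1)
skill = rng.normal(0, 1, n)     # legitimate qualification
# Historical decisions favoured group 0 regardless of skill:
hired = ((skill > 0) | (group == 0)).astype(int)

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, hired)
print("training accuracy:", model.score(X, hired))

# Two candidates with identical skill, differing only in group membership:
print(model.predict_proba([[0, 0.0], [1, 0.0]])[:, 1])
# The model is "accurate" on its data yet assigns very different hiring
# probabilities to equally skilled candidates -- the historical bias persists.
```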

Going a step further, Cadie Desbiens-Desmeules collaborated with Glif, a web-based AI generator platform, to develop an app that creates entirely fictional AI profiles. Instead of relying on real profiles, this playful approach enlists GPT-4 and its Vision capability to craft entirely new personas within the parameters defined by existing dating apps: a curious inquiry into how current large language models interpret our cultures, people, and the dating world in general. Here again, it is interesting to witness how AI does not necessarily understand cultural differences. Some descriptions do not match their profiles even when GPT-4 is explicitly asked, via Vision, to verify that the description corresponds to the image, the cultural origin, the gender, and the sexual orientation. It becomes clear that certain identities and contexts are misunderstood by current LLMs.
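A minimal sketch of that consistency check follows, assuming the OpenAI Python SDK and a vision-capable GPT-4 model. The model name, prompt wording, image URL, and sample description are placeholders, not the project's actual Glif workflow.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

description = "34, Seoul, queer, loves ceramics and late-night noodle shops"
response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable GPT-4 model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Does this photo plausibly match the profile below, "
                     "including cultural origin, gender and apparent age? "
                     "Answer yes/no and explain briefly.\n\n" + description},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/generated_profile.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```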

++++ INTERACTIVE DATING APP – CLICK GLIF IT ++++

Programming with Glif.app

This art project serves as a thought-provoking commentary on the unintended consequences and biases inherent in the algorithms shaping our digital interactions, prompting a critical examination of the ethical dimensions of algorithmic matchmaking in the realm of online dating.

Cadie Desbiens-Desmeules. 2024.
Project is ongoing: work in progress.