Reading AI Family Resemblances using Cross-Stitch Patterns

Anuradha Reddy
Dec 18, 2022 · 6 min read


It all began when I prompted OpenAI’s DALL-E to visualise what one unit of cross-stitch looks like. Several prompt tweaks and variations later, I had generated 12 separate cross-stitch units. The images resembled one another like a family portrait: individually distinct, yet sharing visual traits. I started referring to these image variations as ‘Family Resemblances’, after the concept from the late philosopher Ludwig Wittgenstein.

What does the term Family Resemblance do for us? Does it allow us to interact differently with state-of-the-art text-to-image generation AI systems? What lessons does it offer to design and craft communities? In this article, I describe how I experimentally explore these questions.

(left) 12 unit variations of cross-stitch generated with the DALL-E prompt “Cross-stitch one pixel on paper”; (right) I redrew these images on grid paper and referred to them as ‘Family Resemblances’ with overlapping visual features.

According to Ludwig Wittgenstein, family resemblances describe the cultural concepts we ascribe to things grouped into a category that has no single defining feature, only a series of overlapping ones. He discusses the example of ‘games’: what counts as a game, i.e., whether it has rules, whether it is for fun or play, whether it entails winning and losing, is decided by overlapping features and their social contexts. Similarly, wide varieties of cross-stitch exist within cross-stitch as an embroidery category (full cross, double cross, mini cross). Cross-stitch varies culturally, carrying different names and styles, and it is often incorporated into other embroidery forms such as needlepoint and blackwork. Nowadays, cross-stitch is also defined by computational features, such as ‘pixel’ art. By acknowledging these features and their relations, we can come to some common understanding of what is and is not cross-stitch. Arguably, on some level, this is similar to what DALL-E does when tasked with generating images (and variations of them) from language prompts via a shared concept or category: the model pieces together metadata and images from its training data into visual resemblances of what we want to see. What DALL-E is not good at, however, is reverse engineering those resemblances, showing us how it constructed them and tracing them back to its categorical inferences of cross-stitch and the (missing) socio-cultural and historical references.

Can DALL-E facilitate a meaningful understanding of cross-stitch? Whose references does it build on, and where do they come from? Who are the people behind the cross-stitch images that fed its algorithm? Do they consent to how DALL-E’s deep-learning model treats their cross-stitch designs? Given how DALL-E’s model works, we may never be able to answer these questions, at least not entirely. But could it be possible to partially read AI’s images the way we read language prompts, much as we distinguish texts from one another? If we consider cross-stitch a ‘language’ with its own approximate rules and grammar, could we imagine alternative ways of ‘reading’ AI’s cross-stitch variations?

I searched online for keyword combinations like ‘cross-stitch’ + ‘codes’ and landed on Ukrainian embroidery. This Slavic embroidery tradition has encoded cross-stitch with meanings for many generations. Its cross-stitch variations are symbolic (and religious) and extend to ritualistic forms of record-keeping: family names, numbers, days of the week, phases of the moon, months, seasons and astrological signs. Pysanka (plural: Pysanky) is the art form of encoding these symbols on Easter eggs (instead of fabric) using wax resist and natural dyes. Pysanky translates the visual structure of the Cyrillic alphabet into stylised cross-stitch-like units that are non-linear, patterned from the centre outwards, and, most importantly, readable!

Four images illustrate how letters of the alphabet, numbers, weeks, and seasons are depicted in Ukrainian encrypted embroidery. Images sourced from https://ukrainian-recipes.com/encrypted-embroidery-how-to-depict-words-and-numbers-in-ornaments.html

Dominika Markowicz, a post-graduate student I supervised, explored the relationship between the computational grammars of Slavic Light Symbols and cross-stitch in her master's thesis (inspired by The Slavic Way by Dmitriy Kushnir and artists Tatiana Miroshnyk and Naomi Parkhurst). Her project resulted in a cross-stitch artefact that encodes a hidden message using this computational grammar. You can solve the riddle here.

(left) A symbol chart showing Slavic light symbols (image source: https://www.pinterest.se/pin/381046818454707374/); (right) a cross-stitched artefact designed by Dominika Markowicz and produced by Anna Chojnowska (image source: https://domarkowicz.wixsite.com/slavic-patterns)

Noticing the visual similarities between DALL-E’s results and these embroidery codes, I wanted to see if I could decode DALL-E’s cross-stitch units using Pysanky. So I put on my code-breaker hat and decoded the cross-stitch units into Cyrillic letters (with the help of a YouTube video). I arrived at a few disparate Cyrillic letters, although they made no sense when I put them together (with the help of Google Translate). Could “н л г” be someone’s initials, an abbreviation perhaps? I childishly hoped I would discover a secret “language game” between DALL-E and Pysanky, but I ended up with gibberish, as I should have expected.

My attempt at decoding DALL-E’s cross-stitch units using Ukrainian cross-stitch codes
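
If you were to automate that decoding step, the logic would look roughly like the minimal Python sketch below: each redrawn unit is treated as a small grid of stitches and matched against a chart that maps grid patterns to Cyrillic letters. To be clear, the chart entries and grid drawings here are placeholders I made up for illustration, not an actual Pysanky symbol table; the sketch only captures the lookup idea.

```python
# A minimal sketch of the decoding attempt, automated: each cross-stitch unit
# is a small grid ('x' = stitch, '.' = empty) matched against a chart that
# maps patterns to Cyrillic letters. The shapes below are placeholders, not
# a real Pysanky symbol table.

def normalise(unit: list[str]) -> tuple[str, ...]:
    """Turn a rows-of-characters drawing into a hashable lookup key."""
    return tuple(row.strip() for row in unit)

# Hypothetical chart: pattern -> Cyrillic letter (illustrative shapes only).
CHART = {
    normalise(["x.x",
               "xxx",
               "x.x"]): "н",
    normalise([".x.",
               "x.x",
               "x.x"]): "л",
    normalise(["xx.",
               "x..",
               "xx."]): "г",
}

def decode(units: list[list[str]]) -> str:
    """Look each redrawn unit up in the chart; '?' marks units with no match."""
    return " ".join(CHART.get(normalise(u), "?") for u in units)

# Three units redrawn from the DALL-E outputs (again, placeholder drawings).
redrawn = [
    ["x.x", "xxx", "x.x"],
    [".x.", "x.x", "x.x"],
    ["xx.", "x..", "xx."],
]

print(decode(redrawn))  # -> "н л г" (gibberish, as in my attempt)
```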

My attempts were unsuccessful, but the experiment nevertheless introduced me to a script I was previously unfamiliar with (Cyrillic). Better still, I acquired a new appreciation of cross-stitch now that I could read it! I even learned to write my name (ANU, written Ану) in Pysanky. I tried some variations on grid paper before settling on a pattern I liked, then materialised the pattern through both traditional and newer fabrication processes: I cross-stitched it by hand on a denim blouse (next time I would use an embroidery machine if I could) and also made a 3D-printed coaster of the pattern.

(left) My name ‘Anu’ written as a Pysanky pattern (ANU becomes Ану in Cyrillic, which reads like ‘AHY’ to Latin-alphabet eyes); (middle) the pattern cross-stitched on my denim top; (right) the pattern 3D-printed as a coaster.
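
For anyone who wants to chart their own name on grid paper before picking up a needle, here is the same idea in reverse, again only as a rough sketch: a hypothetical letter-to-motif mapping lays a short word out as a stitch chart. The motif shapes are placeholders invented for illustration, not genuine Pysanky letterforms.

```python
# The encoding direction, sketched under the same assumptions: a hypothetical
# Cyrillic letter -> 5x5 motif mapping ('x' = stitch, '.' = empty) used to lay
# out the name "Ану" as a chart. The motifs are made-up placeholders.

MOTIFS = {
    "А": [".xxx.",
          "x...x",
          "xxxxx",
          "x...x",
          "x...x"],
    "Н": ["x...x",
          "x...x",
          "xxxxx",
          "x...x",
          "x...x"],
    "У": ["x...x",
          "x...x",
          ".xxx.",
          "..x..",
          "xx..."],
}

def chart(word: str, gap: int = 1) -> str:
    """Place each letter's motif side by side, separated by `gap` empty columns."""
    motifs = [MOTIFS[ch] for ch in word.upper()]
    return "\n".join((" " * gap).join(m[row] for m in motifs) for row in range(5))

print(chart("Ану"))  # prints a 5-row stitch chart to copy onto grid paper
```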

Human history holds countless examples of visual/craft languages representing cultural concepts in non-linear, coded, patterned, and symbolic formats (see, e.g., the book An Atlas of Endangered Alphabets by Tim Brookes, 2023). Women have played a decisive role as cultural custodians, passing down such concepts from generation to generation in embodied and open-source crafted formats (e.g., see my pictorial, Kolam As An Ecofeminist Computational Art Practice). While we are predisposed to ignore such life-sustaining efforts and chase AI for newness, it becomes apparent that AI is quite good at recycling its datasets into what looks like crafted visual language devoid of social context. This realisation brings me even closer to the heritage nuances of culturally crafted art forms, which exemplify non-linear, embodied, diffractive ways of reading, interpreting, and understanding the world.

We already know that getting the desired image often means tweaking an AI prompt so many times that it stops making literal sense. And if the literal meaning of the prompt no longer matters, then another translation is taking place, in a language we do not understand. So we keep poking at it, playing a slot-machine-like language game until our brains run dry. My point is that just as we share languages to understand each other (whether or not we fully master them), we also need shared vocabularies for reading the outputs of AI systems. The main issue, however, is that current AI systems are intrinsically unpredictable (as are the corporations that make them), making it impossible to imagine a future in which we sustain ourselves through them. With open-source AI movements steadily advancing, my hope is that they generate possibilities for heritage craft communities to reappropriate current AI systems and interact in visual languages capable of supporting their communities’ life-sustaining needs and practices.
