
Launching Sense & Sensibility 

The brief is based around the word 'awareness', relating specifically to Artificial Intelligence. What does the term 'consciousness' mean, and can machines experience it?
By exploring the realm of computer consciousness through the creative use of machine learning, artificial intelligence, and computer vision toolkits I will create a final artefact that represents my research.
 

Research

The Turing Institute Panel

Dr Adrian Weller of The Alan Turing Institute opened the panel's guest talks. He outlined areas of AI technology, starting with vision, and gave a great example of how we should be using AI as a benefit to humans: image detection for breast cancer patients. After being trained on what cancerous cells may look like, a model can detect areas of tissue that it assumes to be cancerous. Dr Weller went on to acknowledge that AI isn't 100% accurate and can return false positive and false negative readings; however, he argued, the accuracy of these algorithms has improved significantly and their bias has been reduced.

Elmer_Elsie5.jpg

Grey Walter’s Tortoises,
Elmer and Elsie.
1950

Walter was one of the most distinguished neurophysiologists of his time, and the mechanical tortoises were developed to help him better understand how the brain functions. In videos you can see him setting up the machines and then attracting their 'magic eye' (a light sensor) by shining torches in their general direction. Since the machines have wheels and a motor they can move around a space once triggered. They were phototropic, meaning they could follow light, and each had a plastic shell that acted as a bump sensor.

"The robots were designed to show the interaction between two sensory systems: light-sensitive and touch-sensitive control mechanisms (in effect, two nerve cells with visual and tactile inputs). These systems interacted with the motor drive in such a way that the tortoises exhibited ‘behaviour’; finding their way around obstacles, for example. They were described as tortoises due to their shape and slow rate of movement – and because they ‘taught us’ about the secrets of organisation and life." - University of Bristol

header.jpg

Adam Harvey, MegaPixels.
2017

MegaPixels is an ongoing live project about machine learning image datasets created by Adam Harvey. This first part of the project launched in London in 2017.

The installation uses facial recognition to search for your identity in the largest publicly available facial recognition training dataset in the world, called MegaFace (V2). Of around 15,000 users, two people reported finding a positive match in a training database they never knew existed. Many people aren't aware that they could be included in this dataset because it was created entirely from Flickr. It contains approximately 672,000 identities and 4.2 million photos, all taken from Flickr without the subjects' consent.

These 4.2 million images are currently being passed between researchers in the US, China, Russia, and all over the world to train and evaluate state-of-the-art facial recognition algorithms. Harvey's project aims to make you question how comfortable you are with this form of data collection. "If you knew that your image, your friend’s image, or your child’s image was being used for developing products in the defense industry, would you object?"

p17299736_b_v13_ai.jpg

BBC's 'The Capture', created by Ben Chanan.

Before this project launched I had been watching the new BBC show The Capture, which, having now seen and used some of this machine learning technology myself, does not feel that far-fetched.

A detective is investigating the murder of a barrister; CCTV evidence shows a man, her client, following her to a bus stop, brutally attacking her and taking her away in his car. Under police questioning, however, the man seen attacking the victim in the video is completely unaware of the event and insists the footage is fake. There is no physical evidence tying him to the crime, and the lead detective believes him. The series goes on to expose how the security services, the CIA and the Met Police work together to employ 'deepfake' strategies to fight crime: by using live video editing and facial recognition data to create lifelike 3D models of people, the police can fabricate video evidence that makes it appear people have committed crimes they have not. The argument is that physical evidence may not always be available or present in a case even when the intelligence is right, so to win in court they effectively have to 'fake' the evidence, and a jury can't argue with live CCTV.

This show led me to look into deepfake technology, specifically how facial recognition data can be used to create false images of people, for example the AI technology that allows you to put your face on someone else's body.

How can informed consent be applied in terms of live monitoring? If we are being live monitored can our images and or identities legally be harvested?



I read over an essay from 2020 written by Don Fallis titled 'The Epistemic Threat of Deepfakes',  in which he discussed what deepfake technology actually is, how it can be implemented and why it is a danger to real video evidence.  


"Deepfakes are realistic videos created using new machine learning techniques rather than traditional photographic means. They tend to depict people saying and doing things that they did not actually say or do. In the news media and the blogosphere, the worry has been raised that, as a result of deepfakes, we are heading toward an “infopocalypse” where we cannot tell what is real from what is not."

"Deepfake technology increases the probability of a false positive. That is, realistic fake videos that depict events that never occurred are more likely to be produced. As a result, videos carry less information than they once did."

header-look-05.jpg

Adam Harvey, CV Dazzle.
2013

CV Dazzle is a project created by Adam Harvey to camouflage users from computer vision. It uses bold patterning, colour and physical changes to facial features to break apart the features targeted by computer vision algorithms. The project was made to prove that faces, or other objects, can exist in a dual perceptual state: visible to humans yet invisible to machines. CV Dazzle was developed as part of Harvey's thesis at New York University's Interactive Telecommunications Program and first published in April 2010, though it has been discredited in recent years. Even so, the fact that the programme has been discredited does not mean the overall idea could not work...

Screenshot 2022-10-07 at 14.50.34.png

Minority groups are statistically proven to be more likely to be falsely recognised, across the majority of programmes, which is concerning.

Black women and Native Americans are among the groups most misidentified by AI facial recognition algorithms.

'WILDTRACK' was a 'project' of sorts that collected a dataset which was then passed on to any company or piece of public research that needed data pools, most likely research linked to facial recognition programmes. ETH Zurich, a public university, set up a surveillance project involving multiple cameras placed around campus to capture seven videos, each 35 minutes long, of students in an unscripted environment. According to analysis done after the filming took place, many students were unaware of the filming and informed consent had not been gained.

I found out about this project through a website called Exposing.ai, a blog that collects data-gathering projects which are ultimately used for AI research. Informed consent is the key theme of the site, and many of the projects it exposes have little to no user consent. I noticed that a lot of these image data pools have been taken from Flickr, which makes me assume that Flickr is the platform with the weakest user privacy protections.

WordEater by Jeeyoon Hyun is a web-based mini-game where you use your webcam to catch/gobble words in order to generate a sentence.

You gather the meaningless words to make a meaningless sentence, eventually removing all the words you see on the screen. The goal of the game is to make your web browser cleaner by scavenging through the text, using your mouth to navigate. WordEater uses the Facemesh API in ml5.js to detect your mouth in your webcam, but you can also use your hand to physically grab the words, and there is a mouse version if you can't use your webcam.
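This isn't WordEater's actual source, but as a rough sketch of what the Facemesh model makes possible in ml5.js, here is a minimal p5.js example that tracks the mouth in the webcam feed. It assumes ml5 0.x is loaded alongside p5.js, and the keypoint indices 13 and 14 are my assumption for the inner-lip points of the MediaPipe mesh.

// Minimal sketch of mouth tracking with ml5's Facemesh model (not WordEater's actual code).
let video;
let facemesh;
let predictions = [];

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();

  facemesh = ml5.facemesh(video, () => console.log('Facemesh model ready'));
  facemesh.on('predict', (results) => { predictions = results; });
}

function draw() {
  image(video, 0, 0, width, height);

  if (predictions.length > 0) {
    const mesh = predictions[0].scaledMesh;
    // 13 and 14 are (approximately) the upper and lower inner-lip keypoints in the mesh
    const upperLip = mesh[13];
    const lowerLip = mesh[14];
    const mouthOpen = dist(upperLip[0], upperLip[1], lowerLip[0], lowerLip[1]);

    // mark the mouth; a game like WordEater could check whether a word falls inside this circle
    noFill();
    stroke(0, 255, 0);
    circle((upperLip[0] + lowerLip[0]) / 2, (upperLip[1] + lowerLip[1]) / 2, mouthOpen + 20);
  }
}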

What is consciousness to me?

For me, I can only imagine consciousness as a human being. Human consciousness involves a few things for me, the first being:

  • Free thinking. This could be my ability to decide right from wrong or free thinking in terms of my physical appearance, religious ideals, political alignment or my moral compass.

  • Decision making. The ability to choose whether I follow through with an instruction. I would tie this into free thinking, as part of my decision making will always align with my free thinking. For example, if I was told to steal from an elderly person I could make the conscious decision not to, because I deem it morally wrong. If a computer algorithm was told to gather an elderly person's banking data for the purpose of stealing, it would, because the user gave the machine an instruction and it has no moral objection to it.

  • I would also go back to the key word of this project and say awareness is key to consciousness. I would argue, however, that awareness is the broadest of the three, since not only can I, a human, experience it, but so could an animal or even a computer. I am aware of my environment; I can see, hear and navigate my surroundings much like an animal or a computer can. Awareness between myself and a computer differs when it comes down to morality. I am aware that I am alive and of all the repercussions that go with being alive, e.g. I'm aware I need to eat and breathe to survive. I don't think (think being the key word, because I cannot say it for fact) a computer is aware that it is a machine; it simply exists. I can't imagine that a computer knows it needs electricity to work or a graphics card to display, but it can experience physical awareness when commanded to, e.g. when a webcam is turned on.

If computers could experience my form of 'consciousness, free thinking and morality', would they not object to being used for public surveillance?  

I find this kind of reclamation, like Paolo Cirio's project below, quite a refreshing form of 'peaceful protest'. To actually use the technology you are critiquing to prove your point is a good way to ensure legality while also demonstrating exactly the dangers this technology could pose.

Paolo Cirio created a database from 1,000 public images taken during protests in France, ultimately gathering 4,000 faces of French police officers. The images were then publicly released to crowdsource their identification with facial recognition technology, through the platform Capture-Police.com.
The project itself was not intended to harm or expose individual officers, but rather to use the very technology that is deployed daily by governments and law enforcement on citizens without their consent or knowledge. If law enforcement can use it on us, we can use it on them.

'AI Art' by Joanna Zylinska

The following quotes are what resonated with me throughout my research; I went on to apply these points to my critical thinking for this project.

  • “Building on its previous claims that the huawei p20 was equipped with ‘Master ai’ which automatically set the most optimum camera mode for every situation as well as learning to adapt to user behaviour (huawei 2018), the chinese ‘ai-powered’ flagship was not just making photos but also evaluating them, ‘using its artificial intelligence to rate thousands of images alongside a professional Leica photographer’.” Line 3 page 12. I found the 'not just making but evaluating' a powerful statement here. The technology is used in all aspects of AI creation.

  • “in recognising that the reception of technological art, especially of the kind that uses or at least engages with ai, requires some degree of technical competency, it asks what is being unveiled and obscured by the current artistic discourse around ai. Going beyond aesthetic experience and the sense of ‘fun’ that is often associated with technology-driven art, it considers art’s role in demystifying new technologies while highlighting some socio-political issues – but it also explores the limitations of art as a debunker of techno-hype.” Line 18 page 14. This quote highlighted the difference between AI art and AI-driven art for me. Simply creating an artwork with AI technology is wildly different from using the theories and cultural questions surrounding AI technology as a pillar of your artistic creations. AI-driven art applies cultural importance to the creation: what do we feel about AI, and how can we use AI to create work that highlights those feelings?

  • "Whose brainchild (and bodychild) is the ai of today? Who and what does ai make life better for? Who and what can’t it see? What are its own blind spots? artists, media- makers and writers can help us search for answers to these questions by looking askew at the current claims and promises about ai, with their apocalyptic as well as redemptive undertones – and by retelling the dominant narratives in different genres and media (fig. 3). storytelling and other forms of art making may even be the first step on the way to ethical, or responsible, ai." Line 13 page 29. How can I as an artist look at the serious ethical framework flaws of machine learning and use creative interpretation to encapsulate my findings. I feel the questions around deepfake technology and how this can taint the legitimacy of 'real evidence' could be creatively interpreted, how far can you go to make the machine believe what is in front of it and can it truly tell apart sections of images e.g what is real and what is fake? 

AI ART
vs
AI DRIVEN ART

Physicality of AI Art

masker 1.jpg

http://www.jipvanleeuwenstein.nl/#masker
A lens-shaped mask designed by Jip Van Leeuwenstein makes the wearer undetectable to facial recognition algorithms while still allowing humans to read facial expressions and identity. It curves round the face to avoid all possible camera angles but has ventilation to avoid steaming up. 
I like this kind of project where technology meets fashion; it makes you think about the physical implications this technology could have on us in the future should we wish to remain anonymous. It is also a tangible reminder of what this technology is attempting to uncover, and it definitely makes me question why they are so interested in people's faces.

http://jingcailiu.com/wearable-face-projector/
A projecting headpiece that masks the user's face by projecting another face over it. It was designed by Dutch artist Jing Cai Liu and works to make the wearer undetectable to facial recognition algorithms.
I like the integration of projection in this piece. It is a more drastic option than the lens mask above, but I like the visual impact of it; it physically embodies what I imagine an AI face to be. The final product is a tad creepy looking, but ultimately it is doing exactly what Joanna Zylinska was talking about in 'AI Art', bringing a creative interpretation to the real questions we face about AI. In a primitive way, this project also highlights how deepfakes work: they take one person's face and put it on another's body.

Process

Workshop 1

Tutorials

In my 1-1 I told Cat about my idea for a facial database made possible through image recognition. I was confident in my idea and she seemed enthusiastic about my research into deepfakes. I've been encouraged to dive deeper into the examples of facial bias and start thinking about how I would want my final artefact to look.

Unfortunately I had to work from home this week, but I followed along with Workshop 1 using Google's Teachable Machine, the ml5 library and p5.js. The Teachable Machine software was fairly straightforward to pick up.

  • Create a class and name it after whatever the image is, remembering that the name has to be typed identically in the p5.js code.

  • Add at least 200-400 images of the object/image you want the class to recognise.

  • Repeat for multiple classes.

  • Literally just click 'Train Model' and wait. Remember to keep your tab open, otherwise you could lose the data.

  • You can now preview in the browser or export the code to p5.js. 

 

In p5.js you have to set up your canvas size and allow your webcam to be used. Create global variables to hold the results, which in this case were 'label' and 'confidence'. The results come back as a JSON array because the data is in pairs, giving a name and a value for each class: the name is the class name from Teachable Machine, and the value is the confidence level, i.e. how sure the model is that the image in front of the camera is what it thinks it is. You also have to add the ml5 library to your sketch's index so it can communicate between Teachable Machine and p5. A minimal sketch of this setup is below.
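This is roughly what that looks like; a sketch under the assumption that ml5 0.x and p5.js are loaded in index.html, with a placeholder Teachable Machine model URL rather than my real one.

// Placeholder: swap in the URL of your own exported Teachable Machine model
const modelURL = 'https://teachablemachine.withgoogle.com/models/XXXXXXXX/';

let classifier;            // ml5 image classifier built from the Teachable Machine model
let video;                 // webcam feed
let label = 'waiting...';  // name of the class the model currently thinks it sees
let confidence = 0;        // how sure the model is (0-1)

function preload() {
  // the exported model's model.json lives at modelURL + 'model.json'
  classifier = ml5.imageClassifier(modelURL + 'model.json');
}

function setup() {
  createCanvas(640, 520);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  classifyVideo(); // start the classification loop
}

function classifyVideo() {
  classifier.classify(video, gotResult);
}

function gotResult(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  // results is an array of {label, confidence} pairs, sorted by confidence
  label = results[0].label;
  confidence = results[0].confidence;
  classifyVideo(); // classify the next frame
}

function draw() {
  background(0);
  image(video, 0, 0);
  fill(255);
  textSize(16);
  text(label + ' (' + nf(confidence * 100, 2, 1) + '%)', 10, height - 15);
}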


 

Workshop 2

logo-square.png

Tutorials

In my 1-1 Cat and I discussed how I am gathering my database and I expressed my concerns about inclusivity. I feel a lot better after our chat and have been reminded that this is actually part of the process and a key point to note in my research. We discussed my more analogue idea of a zine, which was well received. I'm going to start looking at how to lay out my research in a visually compelling way.

For Workshop 2 we started using Runway, a video editing and AI model generation app. You can work from a huge set of templates as well as train your own model using object detection, image detection, pose detection, etc.
By choosing a text generator template you can type a phrase into the top window, hit 'Run Remotely', and the programme responds using the text data it has been trained with. The responses were a bit random and didn't always make sense, but there were points in each response that related back to the original question.
To connect it with Processing you have to have both programmes running at the same time. You also have to add the http library to Processing, as it needs this library to talk to the data from Runway; I think this is because Runway exposes its models over a web API.

All of the information and setup for the Runway project is communicated through a JSON object in Processing. The rest of the Processing sketch is your canvas setup and declaring your global variables.

 

import http.png

Remember to download this Processing library and then import it on line 1.

These are the global variables for this example. The first is an array called 'prompts', which holds all of your original questions to the machine. The next variable holds the prompt text shown until you initiate the sketch; it acts like a loading screen, and in this example, once the 'g' key is pressed it goes away and the sketch starts. The next one, 'genText', is the variable that holds the AI's response to your question. It starts as an empty string because the computer's response is random and is actually set in the JSON handling further down. The last global variable is your font, which you assign later in setup.

canvas setup and draw functions.png

These are the setup and draw functions, where you set up your canvas and how you want your text to look on screen. You have to draw your text inside the draw loop for it to actually display; otherwise the computer has all the setup but never shows anything on screen.

This is where the interaction is added. A key press now triggers the start of the programme and steps through the prompts array to give a different visual output.
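The workshop sketch itself was written in Processing, but to keep the code examples on this page in one language, here is a rough p5.js sketch of the same structure: a prompts array, a key press that sends the current prompt, and a genText variable that updates when a response arrives. The localhost endpoint and the 'text' field in the response are placeholders, not Runway's actual API.

// Rough p5.js equivalent of the Processing + Runway sketch described above.
// NOTE: the '/generate' endpoint and the response's 'text' field are placeholder names.
const prompts = [
  'Can a machine be conscious?',
  'What does it mean to be aware?',
  'Do you know that you are a computer?'
];

let promptIndex = 0;
let currentPrompt = '';
let started = false;   // wait for a key press before running
let genText = '';      // holds the generated response once it arrives

function setup() {
  createCanvas(800, 400);
  textSize(18);
}

function draw() {
  background(20);
  fill(255);
  if (!started) {
    text("Press 'g' to ask the machine a question", 20, 40);
  } else {
    text('Q: ' + currentPrompt, 20, 40, width - 40);
    text('A: ' + genText, 20, 120, width - 40);
  }
}

function keyPressed() {
  if (key === 'g') {
    started = true;
    genText = '...'; // show that a response is pending
    currentPrompt = prompts[promptIndex];
    // send the current prompt to the (placeholder) text-generation endpoint
    httpPost('http://localhost:8000/generate', 'json', { prompt: currentPrompt },
      (response) => { genText = response.text; },  // assumed response field
      (err) => console.error(err));
    // step to the next prompt for the next key press
    promptIndex = (promptIndex + 1) % prompts.length;
  }
}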

The Idea...

face.png
face1.png
IMG_8422.heic

See Me?

After conducting my initial research I had a few points I wanted to include in this project, and the main theme I kept coming back to was facial recognition; more specifically, the flaws in facial recognition algorithms and how to exploit them. The CV Dazzle project by Adam Harvey was a great source of information for me; however, instead of hiding from the camera, my idea is to exploit what the camera assumes about a person based on their physical features.

My idea is to create a database of faces ranging in age, gender and race to feed to Teachable Machine. Through image editing software I want to digitally create new faces, using collage techniques built from the features of the original faces in the database.

My goal is to better understand what the computer is looking for in facial recognition. Will it see a man or a woman if the features are of both? Will it recognise one face over another depending on which features I use in the collage? If I combine features from different races, which one will the computer recognise?

Screenshot 2022-10-11 at 11.52.30.png

Starting to Gather Original Image Database 

Using Teachable Machine I want to train an AI model to recognise a range of faces of my peers and family. Originally I wanted a huge pool of people, but I realise that creating such a large database of images per person is a bit unrealistic, especially if I can't just sit them in front of my webcam to boost the database.

 

I started with my mum and have taken 26 images of her on my DSLR; I'm aware, however, that I should have at least a few hundred images of each subject. I plan to use the 26 images for the collaging process, to ensure I'm using high quality images, and then create the rest of the Teachable Machine image database through the webcam.

Uploading

To start I'm using my mum, my girlfriend and myself. We are all white, so I can't experiment with how the machine views race, but in this case age and gender will be quite an interesting comparison.

I'm trying to keep all my backgrounds completely plain so Teachable Machine isn't picking up any foreign objects.

 

I am following through with this outcome although I'm not sure the computer will recognise any collage at all; I think that is a key part of the whole process. I'm thinking I can lower the confidence threshold for the final output so the machine will definitely register something, as sketched below.
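A minimal sketch of what I mean, reusing the gotResult() callback from the Workshop 1 sketch above; the 0.3 cut-off is just an assumed value to tune.

// Accept the top label whenever the model clears a deliberately low threshold,
// so even the strangest collages should register as *something*.
const MIN_CONFIDENCE = 0.3; // assumed cut-off, tune as needed

function gotResult(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  confidence = results[0].confidence;
  if (confidence >= MIN_CONFIDENCE) {
    label = results[0].label;   // show the model's best guess
  } else {
    label = 'unsure';           // below the threshold, admit uncertainty
  }
  classifyVideo(); // keep classifying
}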

eve.eye.png
eve.eyebrow.png
Troy Browne is a freelance artist and animator who focuses on African-American pop culture to create wild portrait collages. I like the misshapen and blown-up features, especially his focus on teeth. I like his portrait collages because they only take parts of the original image to create a really abstract looking character. While he still uses human features, the faces can look completely alien in some of the images, yet I feel the final faces express more emotion than the original image. His artwork was a key source of inspiration for my style of collaging.

Troy Browne

Screenshot 2022-10-12 at 15.55.17.png

I edited some of the advanced settings, particularly the epochs. I changed this from the default so the machine would run through each image set 75 times instead of 50. I hoped this would improve reading accuracy.

Screenshot 2022-10-17 at 11.32.57.png

I started training the database after my first few volunteers had all of their images uploaded. After my few tweaks in the advanced settings it was just a click of a button. I'm finding that between 400-500 webcam images combined with another 20-50 camera stills are working well for confidence scores.

I was originally using the webcam as my input, but Teachable Machine has the option of a file input, so I opted for this for quickness and accuracy. It also gives a solid reading instead of the fluctuating percentages you get when using the live webcam.

Screenshot 2022-10-14 at 20.42.23.png

File Management

Screenshot 2022-10-12 at 16.07.03.png

My file management was very important for this project. I've decided to opt for SD cards for my projects this year, alongside saving to my Google Drive. Each volunteer had two folders: one for their original headshots and one for their collage facial features, saved as PNG files ready to be dragged into the final collaged faces. These files were also saved to my Creative Cloud so I could access them from any machine.

InDesign

Screenshot 2022-10-19 at 17.50.55.png

I used a lot of the Adobe suite for this project: Photoshop, Illustrator and Lightroom for my collaging and image editing, then finally combining all my files into a magazine in InDesign. This was fairly straightforward for me since I have made quite a few prints now, but as always InDesign was prone to crashing, so it took me slightly longer than expected. I liked the grid layout of the faces on both pages; it reminded me of an odd yearbook. Due to time constraints for sending the document to the printer I didn't add as much written research to the zine as I would have liked, but this is a point I plan to work on further in the lead-up to the summative.

Screenshot 2022-10-20 at 19.48.14.png
Screenshot 2022-10-20 at 19.48.28.png

I decided to have my zine sent away to be printed and bound to give it a more 'finished' look. I did some research into UK-based printers, and Print Work in Leeds was one of the best and most environmentally friendly options, using only recycled papers and carbon-neutral deliveries. I uploaded my artwork as a PDF and opted for the fastest production and delivery. My zine was due to be delivered on the 19th, but unfortunately, due to Royal Mail strikes, it still hadn't arrived by the night of the 20th. I went to my local photography printers and managed to get some smaller 8x6 high-gloss prints of a few of my final collages, but I am quite disappointed that I may not be able to present my zine.
:( 

Final Export 

To conclude, I feel that Teachable Machine is not the most accurate tool for recognising faces; however, I did find it interesting that it seemed to look more at the shapes of the collages than at what I considered 'identifiable features', like eyes. Considering more than half of my database was female, the majority of confidence readings came back as male. I think this may relate to the more angular face structure that was replicated through my abstract collages.

Reflections

My reflections for this project are basically that I wish I had more time: more time to build a more diverse image database, particularly across races, and more time for the physical compilation. I took the first week mostly for research, and because I was at the mercy of other people's schedules for the face scanning, I felt a bit rushed towards the end; my time management could be improved. I'm so pleased with the research I did during this project, since I think it will definitely come up in my work in the future. Overall I'm really pleased with the final zine. To expand on this project I would like to make a more web-based version, as well as improve upon my original zine and add more of my findings about Teachable Machine.
