52. Deepfakes Are Coming for Your Business: Here’s How to Fight Back with Ilke Demir
Welcome, everybody, to episode 52 of Unhacked. Guys, we are gonna talk about something that has had me terrified for a hot minute now: deepfakes. Deepfakes are coming for your business, and what to do about it. I am Justin Shelley, CEO of Phoenix IT Advisors, and I help my clients make lots of money leveraging technology, and then I help them protect it from the evil Russian hackers, and also from the government fines and penalties that come after you if you don't comply.
Justin:And if you do all that right or wrong and you screw something up, then the attorneys are gonna come and sue you and take everything that's left. So nobody wants that. That's what I do is I prevent that stuff from happening. I am here as always with my loyal, faithful, trusted cohost, Mario Zaki. Mario, tell everybody who you are, what you do, and who you do it for.
Mario:Yeah. Mario Zaki, CEO of Mastech IT, located in New Jersey, right outside of Manhattan. Been in business over twenty one years now, and we work with small to medium sized businesses to help with their IT needs and keep them protected. And we specialize in giving the business owner the ability to sleep better at night knowing that their business is safe.
Justin:All right, guys. And I am also doubling for Brian LaChappelle today, who was with us and then had to jump off due to some technical difficulties. The joke that never gets old: if only he knew a good IT company. So listen, if he comes back, we're gonna let him in. We will berate him and make fun of him, but hopefully he can jump back in here in a minute.
Justin:I don't know, we just got a note from him that there's a regional outage, so it's not his fault. That's good to know. Bad news is that might mean he's not going to join us today. So we're going to go on without him, but I am really excited to introduce our guest today. Like I said, we're diving into the world of deepfakes and digital deception.
Justin:Could not think of a better guest to have on the show to guide us through this journey than Dr. Ilke Demir. Doctor... Jesus. Dr. Demir, say hi and thank you for being here.
Ilke:Hello. Hello. Thank you for the invitation and thank you for all your efforts for saying my name correctly.
Justin:I do my best. Names are important and I slaughter them regularly. So that's why I don't have very many friends. Dr. Demir, reading through your bio,
Justin:I'm not going to lie, I get intimidated. That's been happening more and more on the show as we bring in high-caliber guests like yourself. So I'm just gonna read a little bit about you, and feel free to correct me if I mess anything up. But you built a Trusted Media program at Intel.
Justin:Is this like the chip manufacturing company we're talking about? The Intel?
Ilke:Yes. The Intel. The Trusted Media in the Intel.
Justin:Okay. And this included research and productization teams for building a trusted digital future. Can you just tell me briefly what that all means?
Ilke:Yeah. You know, AI is coming very fast and very strong and very furious, baby. So we are trying to build this digital future where the actual human values, like trust and ownership and, like, ethical considerations, all of this is not lost. It's still, like, core to whatever we do. It's still core to the content, and how we can actually ensure technologically that what we see is correct, what we see is trustable, what we create is protected.
Ilke:We know who created it, how it was created, etcetera. So all this realm of digital content trust is within trusted media.
Justin:Which is huge, because honestly, I don't believe jack shit anymore. If I read it online, it's probably not true. That's just kind of the feeling I've had for a while. We know that social media feeds us what they want us to see, Google results, all of this stuff. We don't know what we can trust anymore.
Justin:So I really applaud your efforts, and thank you for this work. Now, it doesn't stop there. You're the mastermind behind FakeCatcher, a groundbreaking tool that detects deepfakes in real time by analyzing subtle biological signals. And we're talking, like, you can detect blood flow by analyzing pixels in video. Is that right?
Ilke:Yes.
Justin:Oh my god. Tell me more about that.
Ilke:Yeah. When your heart pumps blood, it goes to your veins, and your veins change color. That color change is not visible to your eyes, so I cannot just, like, look at your video and try to understand your heart rate. But it is computationally visible, and it is called photoplethysmography, PPG for short. So we actually collect those signals from everywhere on your face, try to correlate them, try to see whether they are like those heart rate monitors, you know, the beep-beep stuff.
Ilke:We look at those signals, and we actually build a deep learning model on top of that to understand whether they are authentic signals that represent your authenticity, or they are fake, popping up everywhere, not like coming from one real heart.
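For readers who want to see the shape of the remote-PPG idea Dr. Demir describes, here is a minimal Python sketch: average a face region's green channel per frame, then band-pass around plausible heart rates. The region choice, filter band, and function name are illustrative assumptions, not FakeCatcher's actual pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def region_ppg(frames, box, fps=30.0):
    """frames: list of HxWx3 RGB arrays; box: (y0, y1, x0, x1) face patch."""
    y0, y1, x0, x1 = box
    # Mean green intensity per frame: skin color shifts slightly with blood volume.
    raw = np.array([f[y0:y1, x0:x1, 1].mean() for f in frames], dtype=float)
    raw -= raw.mean()
    # Keep roughly 0.7-4.0 Hz (about 42-240 bpm), discarding slow lighting drift.
    b, a = butter(3, [0.7 / (fps / 2), 4.0 / (fps / 2)], btype="band")
    return filtfilt(b, a, raw)  # expects at least a couple of seconds of frames
```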
Justin:And we're a little off topic, because I'm gonna dive into this before we even get into the meat of the episode today. But how long will it be before AI can then create the stuff that you're using to detect AI?
Ilke:So FakeCatcher is not new at all. Like, I think it has been more than five years since we actually first published FakeCatcher. And it's still not tricked, because the signals that we are using in FakeCatcher are, and I will go a little bit technical here, not differentiable. Not differentiable means that those are very nonlinear, complex equations that you cannot just try to learn.
Ilke:Because they are not differentiable, you cannot just try to learn those signals with generative AI, because you need to backprop those signals, to propagate the error that you learn. When you propagate the error back in the network, you actually change the weights of the network, and that's how the learning happens. Deep Learning 101, hello! So because those relations that you are trying to learn are super complicated, nonlinear relations, you cannot exactly learn the PPG signals that I mentioned, the blood flow signals that I mentioned, and because of that, FakeCatcher is not tricked yet. As a scientist, I cannot say that it will never be tricked, like forever, of course not. But not yet.
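To make the "Deep Learning 101" she invokes concrete, here is a toy gradient step in PyTorch. The only point is that backprop needs a differentiable path from weights to loss; nothing here is FakeCatcher code.

```python
import torch

w = torch.randn(3, requires_grad=True)   # the network's "weights"
x, target = torch.randn(3), torch.tensor(1.0)
loss = (w @ x - target) ** 2             # differentiable: autograd can trace it
loss.backward()                          # propagate the error back through the graph
with torch.no_grad():
    w -= 0.1 * w.grad                    # change the weights: this is the learning
# If the loss depended on the weights through a non-differentiable map, like the
# PPG relations she describes, w.grad would be unavailable and no update could run.
```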
Justin:Right, right. But at this point, this is solid and reliable, and again, thank you. Because, I mean, honestly, in our industry, this is kinda what we combat all the time. And, you know, at some point, when do we just throw our hands up in the air and say, I don't know, they won? Right?
Justin:So I love having people like you who are fighting the good fight. We have to be more proactive. We have to be more on offense, instead of playing this world of defense that we do a lot of times in cybersecurity.
Mario:So, Doctor, is there certain hardware that has to be involved with this, or can this be, you know, pretty universal?
Ilke:It's pretty universal. It can run on everything. It can run on CPU. It can run on GPU. Based on the benchmarks, we have shown that we can run it in real time on CPU-GPU systems, etcetera. But even if you only have CPUs, we have shown that we can run 208 video streams at the same time on one high-end server CPU, which is basically running real-time detection on 200 videos at the same time, which is very nice.
Ilke:And the reason that it is that parallelizable and that efficient is that the network part, the deep learning part, the neural network part that we use is a very lightweight neural network, and the whole emphasis and the whole strength of the algorithm comes from the biological signals, so it's not a very heavy, memory- and time-consuming network part. Because of that, we can actually run it very efficiently and in real time. And we actually released it as the very first real-time deepfake detection platform.
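As a rough illustration of that parallelism claim, a lightweight per-stream detector can simply be fanned out across CPU cores. `detect_stream` below is a hypothetical stand-in, not Intel's implementation.

```python
from concurrent.futures import ProcessPoolExecutor

def detect_stream(url: str) -> tuple[str, bool]:
    # Hypothetical placeholder: decode frames, extract PPG maps, run the light net.
    return url, True

if __name__ == "__main__":
    streams = [f"rtsp://camera-{i}" for i in range(208)]  # the 208-stream figure above
    with ProcessPoolExecutor() as pool:                   # one worker per CPU core
        for url, is_real in pool.map(detect_stream, streams):
            print(url, "authentic" if is_real else "fake")
```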
Mario:Oh, very nice.
Justin:And I don't even need to say this at this point, but obviously you have a PhD in computer science from Purdue University. We're not even through the introduction yet, and I, like, can't stop asking questions. Impressive stints in your career at Facebook, Pixar. We've already mentioned Intel. Well, Intel Studios.
Justin:Is that the same thing or is that different? That's the same thing.
Ilke:That's the same thing. That's Intel, but that is a different part of Intel that was doing 3D movies, AI and VR productions that are going to film festivals and stuff. So very, very cool things.
Justin:That makes sense, and that is cool stuff. I mean, god, you bridge the gap between cutting-edge technology and human-centric novel algorithms. Tell me what that means. I have to, like, ask you what everything means.
Ilke:Yeah. You know, like, everyone is talking right now, like, oh, the best generative AI model, the AI algorithms, they're all there, cutting-edge technology, etcetera. But our mindset is a little bit different, in the sense that we want everything to be human-centric. We don't want those, like, black-box algorithms, in the sense that you want to create something, and you give very little input, which may be a text input or some image reference, etcetera, and then you don't have control for the rest of the algorithm.
Ilke:We don't want it to look like that. We want to bridge the traditional creation workflows with the cutting-edge technology as much as possible. Just imagine you are doing, let's say, 3D modeling, right? In 3D modeling, there have been so many tools that are empowering the creativity of the modelers, of the VFX artists, of the lighting teams, etcetera. But if you look at the novel AI algorithms, it is mostly about how automated everything is, without much control, without much guidance, etcetera.
Ilke:So we are trying to still create so much novel, cutting-edge technology, but we want the human part of it, not just for editing and control, but, like, you know, the human values of it, to still persist as they exist in the traditional systems.
Justin:Gotcha. Amazing. So beyond all this technical stuff, you're also an ACM Distinguished Speaker. Tell me a little bit about that.
Ilke:Yeah. So that's a very cool program. ACM is the Association for Computing Machinery. That's one of the biggest, like, computer science industry associations, not just computer science, but, like, general engineering. Like, you know, most of the conferences that we go to, or, like, journals that we publish in, are actually ACM conferences and ACM journals.
Ilke:So ACM has this program called Distinguished Speakers, where if you want to have some lecture or keynote or panel or talk given on a particular topic, you can actually request that from ACM, and ACM sends us there. Like, ACM says, okay, this is a very impactful lecture, you can go and talk about deepfakes at Montana University, which is a true story. I went to Montana, yes, it was very cold, but it was a brilliant crowd of students there. Anyway, so yeah, whoever wants, and it's not just me, there are many industry experts there, from academia, industry, like a mix of them, just look at the topics that you are interested in and ask ACM about sending those experts to your institution, your university, and they will be there. And the university or the institution only handles the local accommodations; the travel, etcetera, ACM handles it all.
Ilke:That's the beauty of
Justin:it. Gotcha.
Mario:How many of those have you done?
Justin:I was just gonna
Ilke:ask that. How many. I have been an ACM distinguished speaker for three years. I did most than 10. I think, like, if I count the ones that I did on Zoom during COVID, it's even more than 20.
Justin:Wow.
Ilke:Yeah. So it's a it's a really like, it's also a good way good. Sometimes it is like for very early researchers, very early students, and it's not only about the technical topic, but it's about your life journey, you know, like how you actually become who you are, or what are the current challenges that I can guide them on, etc. And of course, like the wide range of topics that we talk about, like the three d movies, etc, that's one of the lectures, like traditional generative models, that's another lecture. Defects and defect detection, that's another lecture.
Ilke:So it's really a good universal it's not just US, by the way. You can I even, like, gave talks in Africa through ACM, etcetera? So it's like everywhere around the globe. If you are looking for experts to share their journey, share their technical expertise, etcetera, you can just request from ACM, basically. Nice.
Justin:You know, this introduction has it's not a normal introduction, right? Because usually when I'm introducing a guest, I just read stuff. And that's because I usually understand it. I am in that perfect scenario where I'm in a room of smarter people than me, which is where I always want to be. So I've got one more line of your intro I'm going to read and it's about your passion.
Justin:You're passionate about the advocating for the bright side of AI content providence, actively working to protect creators rights in the digital age. This is huge. Right? So with everything I've got, thank you not only for being here and talking to us today, but thank you for what you do. This is something that the world desperately needs and I greatly appreciate it sincerely.
Justin:So with that, Doctor. Damir, we are going to get started as we're now fifteen minutes in. So we're we're gonna start by just understanding what is deepfake. I I don't know that I'm sure everybody's heard about it, but let's just for a second pretend this is brand new. I'm I'm ignorant.
Justin:I lived under a rock my whole life, and I have no idea what you're even talking about when you say deepfake. What does it mean?
Ilke:So deepfakes are this fake content can be image, video, audio, where the actor or the action of the actor is not real. So me saying things that I have never said or taking Justin's face and putting it on my face so like, the things that I say is coming from Justin, etcetera. All these different algorithms that are creating those, not real content are called deepfakes. Well, can't we just call them fakes? Yes.
Ilke:They're legit.
Justin:I asked that actually. No. Thank you. But
Ilke:those are mostly with deep learning algorithms and complex neural networks. So that's why they are actually rebranded as deepfakes. So if you just smudge something in Photoshop, it's not a deepfake. Some people call it shallowfake, which is also a new term because now we have deepfakes. If we didn't have deepfakes, they would be just fakes.
Ilke:Now they are shallow fakes. Anyway, but, like, deepfakes are usually, deep learning algorithms that are used at. And there are several, main, networks that are creating deepfakes. So autoencoders or variational autoencoders is one of them. Genetic microsatellite networks, GANs is another one.
Ilke:Like, right now, like diffusion models is another one that is coming, you know, like stable diffusion, mid journeys, and all of them are using the these diffusion models that is also being used for the creation. And all of these are creating these very highly believable incorrect video, like unreal videos.
Justin:So as a business owner, I mean, why why do I care? So and and more specifically, like, do you have any real world examples of how this has impacted
Mario:businesses?
Justin:Small businesses, that's who we're normally talking about. But business at large, what's happening out there?
Ilke:Yeah. So small or large, there has been many instances that companies faced financial reputation wise, political, employee wise, information centric, etcetera, and damage due to deepfakes. Like one, I think, many people heard about it is that, there was a CEO or CFO in Hong Kong that, with a voice deepfake, someone said that, okay. Like, I'm a in investor or employee or someone. You need to actually, transfer this much money to that place.
Ilke:And because the voice was super believable, they actually sent, like, millions of dollars to that bank account. And suddenly, they are done.
Justin:Which can't you can't get that back. In most cases that that money is gone forever. Right?
Ilke:Yep. Yeah. Yeah. Because it's with with your own, like, no one actually force you. You just leave something.
Ilke:It's basically scams and frauds. Right?
Justin:Sure. Yeah.
Ilke:Another way is political misinformation and, like, try to change the public opinion. Yep. I don't know whether I should say it, but, like, you know, like, right now, even, like, the credible sources, like the White House is tweeting some photos, etcetera. They can be debased, obviously or not. So we actually work with different human rights organizations.
Ilke:One of them that we work with, they are collecting all these high risk, high vulnerability cases around the world from many journalists, many individual journalists or like many news organizations, and they are bringing it to us to find whether they are real or fake. So there has been elections in Georgia, there has been elections in some countries in South Africa, there has been some cases in India, and all of these deep fakes are actively being used for either defamation of the other candidates or like making some public figures accomplishments, like non existing accomplishments happening as they have happened, etc. So that climate is another one. And unfortunately, the biggest part, and it has been like that since the beginning, like since 02/2019, think that was the very first, market research that people were doing about deepfakes is adult content. So I think in 2019, they found out that 97% of all deepfakes were towards adult content.
Ilke:Now it is like that too. There are many legislations that are passing right now, like Take It Down is one of them. I don't know whether you heard about it, but Take It Down basically, punishes and takes it down immediately if there is like non consensual adult, imagery or videos done with deepfakes and stuff. And that's very new. I think they passed it last week or something.
Justin:Oh, really?
Ilke:Yeah. No. They passed it from the house, I think, last week, and now it's going to the president, something like
Justin:that.
Ilke:Yeah so unfortunately that's like the one of the biggest use and there are many other research out there about looking at the percentages like how much of the defects are in that domain, what is the gender balance of those in existing everywhere of course it's mostly women then unfortunately there's also children which is very very concerning. Anyway so yeah that's like another industry and specifically for small businesses you know your VPs, your executives, your like see suites can be defect very easily. Essentially like one of the whenever we we are, like, facing a new customer, new company, the our our demo includes deepfaking their CEO or someone, and that's why I just gonna ask He can find it.
Justin:I shoulda had you do a deep fake on me that we can have used as an example. Cool. Honestly, I might I'm gonna make that request now. I could because I can I can put stuff in maybe at the end or I'll splice it in? But if you could do that, if you have the if I could ask that of you, you can say no.
Ilke:Yes. Send me just one photo and I will do it, and I will send it to you. Just one photo is enough.
Justin:Absolutely. We'll we'll we'll splice that in right now, Mark. Okay. Backs back on track here. That's that's this phenomenal information and it scares the shit out of me.
Justin:I'm not gonna lie.
Mario:Now doctor Damir, I know you led the development of, fake catcher. Can can you tell us a little bit of how how you were able to identify deep fakes with this?
Ilke:Yeah. Of course. So, as I said in the beginning, FakeCatcher is looking at the blood flow, and, that signal that we use called photoflatteismography or remote photoflatteismography is very similar to to what you have in your Apple Watch actually. So Apple Watch is physically looking at your skin and trying to understand the blood flow. Removed photoflattismography, which is what PetCatcher uses, is looking at the video and trying to understand that.
Ilke:Now if we were to just find out, okay, is like 75, now like, 78, etcetera, your heart rate, that is a harder problem and that is like more noisy. But what we are trying to do is correlate those signals throughout your face so that the blood flow is coming from one heart. It's not like multiple hearts popping everywhere
Justin:on your Okay.
Ilke:And we do it in spatial, temporal, and spectral domains, which is looking at all properties of those signals. And the neural network on top of that is to make it more robust. I think you can see there are some color or brightness changes when I move from the camera, so that actually affects those signals. Or if there's some occlusions, know, like the heartbeat on my on my hand and, face may be slightly different because of the color change, etcetera. So to be more robust and there's compression, it's never four ks or videos can be very compressed and stuff.
Ilke:So to be robust against all of those, are actually strobing a neural network on top of that, so we actually accommodate for those inconsistencies. And at the end, we find out whether it is real or fake with 96% accuracy on Phase one, six plus plus datasets. And there are many other datasets and benchmarks that we have done with FakeCatcher to find out that so it is, robust against, different skin colors. It is robust against different compression levels. It is robust against, like many changes that can happen basically for a video.
Ilke:Wow.
Mario:And and and how did you guys come up with this idea that to use this to identify it? Like, that's genius. How did you come up
Justin:with it?
Ilke:That's my idea. Sorry. Yeah. Like, TechEater is my baby, so I'm, like, super proud of that. So I I was working in generative models before it was cool.
Ilke:So I have been working on generative models, like, for fifteen years or so, which, like, at that time, it's mostly traditional generative models, but still trying to understand the priors within data, to fit context free grammars or L systems into the data so that they are language like and procedural so we can create more and more and more data, which is generative models, right? So anyway, so I always had that eye to look at the priors in data or like rules in the data or systematic procedures that creates that data. Now when defects were first coming out, it is the output of a generative model. So there should be some authenticity, some priors, some hidden signals that are inside that data that we can still dig and depend on. And at that time I saw that paper from MIT from Bill Freeman's group about remote photophalismography, basically, the PPG paper.
Ilke:Up until that moment, it was being used on remote patient monitoring, medical applications, to see if the main video that they do is like there's a baby and baby is changing color due to its blood flow and you can see that it's actually breathing based on that color change etc. So like that MIT PPG paper was like moment for me saying that well yes this is a signal that we can depend on this is very hard to replicate etc. Then I called my colleague Doctor. Umer Chifchi who is an expert on PPG signals and said like I've a few experiments on this I'm super curious let me do that and he was very collaborative and he actually said like yes yes that's my domain I can actually do that then even our very first experiments that are not using deep learning algorithms but just using like SVM support vector machines etc to understand the feature space of those signals was very promising. We have seen like if we use PPG signals in a correct way, can actually find like over 97% accuracy that it is actually saying that it's real or fake for pairwise separation problem.
Ilke:Anyway, and then we said like this is great. Let's just evaluate the hell out of it. That's like, create something that is useful, not just right now for current defects, but for future too. Because at that point, defects were just emerging and everyone was looking at what is wrong with those videos. Like, is there boundary artifacts?
Ilke:Is there symmetry artifacts? What artifacts exist so that we can find it? Looking from an authenticity perspective, looking about what is real in that video was really under explored. It's still under explored in my opinion. And when you look for something that is not replicatable in the authentic video, are actually making it much more generalizable for future future future generations that instead of, like, trying to fix what is fixable, trying to depend on what is fixable, you are depending on something that is not replicatable, which is, like, the next generative model, the next, GAN, the next diffusion model cannot still be used.
Mario:Okay. And with like, so day to day, like a business owner, like myself, like, what could I do to somewhat do something to kinda protect myself? Like, you know, like, we we do these podcasts all the time. So, like, my face, my video, my voice, you know, it's on there. It's on the web.
Mario:You know? What can something a business owner do to to kinda protect themselves at least a little bit?
Ilke:Yeah. I can easily take one of your podcasts and create you, like, saying things, oh, Academia was the best guest we had ever had.
Mario:I'll say that anyway. Simple.
Justin:I have
Mario:to say that anyway.
Justin:You don't need to fake that. Right.
Ilke:Yeah. So there are several protection mechanisms that you can protect your content. And you can also always be aware of what is real, what is fake. Looking at the video, like there are some things that if it is a live call, for example, you can make them, like, occlude their face, you can make them deform their face, you can make them, like, change the lightning and stuff. And that's, like, one of the, like, three of the very first things that you can do on a live call.
Ilke:If it's not a live call, it's not like live visual feedback, you can always run one of those defect detection algorithms. FlightCatcher is not the only one. We have eye gaze based detection, we have motion based detection, we have some multimodal detection algorithms that are looking at my motion and my voice. When I'm moving in a certain way, my voice is changing in a certain way. So that's actually giving you really nice correlation output for us to understand something is real or fake.
Ilke:You can try to understand such things from your human manual inspection or use those detectors. To proactively protect, there are several algorithms that we developed. For example, My Face My Choice is one of them. If you don't want to appear in a photo, can apply My Face My Choice and then the expression what you say you know like if you have tongue out your tongue everything will be there but your identity will not be there we have my art my choice which I think we can talk about in detail but that is protecting content from being stolen from generative models. We have MyVoiceMyChoice, that if anyone wants to replicate your voice using an audio clip, If that audio clip is protected, the generative model output will be super noisy, super bad, like you can't even listen it more than like a couple of seconds, etcetera.
Ilke:So all these proactive algorithms also exist out there. And we are not the only people that are publishing those. University of Chicago has several such algorithms that are protecting style or protecting concept. Know, like you want to, you say like create a cat, it creates a dog. So if you don't know what cat is, you can assume dog is a cat, anyway, that's another topic to discuss.
Ilke:But anyway, so these are, approaches, exist. And in specific domain, you can also do more specific things or like there are more inspection methods that I teach in my trainings that I did several times in different countries for different demographics in Vietnam, in Korea, in US. Like, I give such strengths too. So if you have a a bad butt problem sorry. Bad deep fake problem, here is the person.
Mario:Okay. And then tell me what is content authenticity and providence? I guess also known as c two p a.
Justin:Yes.
Mario:Is is that correct?
Ilke:Yeah. So that is c two p a is coalition for content provenance and authenticity. And C2PA is where the brightest minds in this sector is coming together to create standards for provenance. So when I say provenance sometimes people are like pro pro pro So provenance which I say very fast unfortunately for some reason is the origin and the source of any content. So let's say your favorite cousin is sending you a video that you have no idea who created, how it was created, was it created with content, what, like, which tools are used to create, is it a combination of my video and Mario's video, etc.
Ilke:So all of this is actually creating the provenance information. The moment a contact is created, maybe image, voice, or video, text, font, three d model, you know, like any kind of data. The moment it is created, there's a creator, there's a way it was created. There is like ingredients if they exist or like there are some pre or post processing approaches that are applied to it. The the provenance manifest that is containing all that information is what the standards are built for by C2PA.
Ilke:So it's a coalition with Microsoft, Google, Tropic, BBC, Sony, many steering committee members for CTPA. And very recently, actually the new version of CTPA is out there, I believe, or about to be out there, or by the way that this podcast is out, it will be out. That timeline. And, like, you can check it from c2pa.org. And if you are doing anything with content, doesn't need to be like you are a content creation company, it's like if there's any content that you put or you ingest or it's a platform that is with a like upload function upload content functionality then it's good that you check CCPA because it's not just like some standard tool that is building this trusted feature that is building this trust in online content digital content but it is also hopefully becoming a part of, like, legislations and laws, etcetera, so that everyone will know how content is created, basically.
Mario:Okay. Now you mentioned one of the companies that you mentioned was, Truepic. Mhmm. I I have a very good friend of mine that actually works for True Pick and, you know, every time I I I see him, we, you know, we have a beer and and discuss the stuff and it's we just nerd out a little bit. Tell me what you what you know about them.
Mario:Like, I know they they they're coming out or they've come out with like a a special chip that can be installed in cameras, like iPhone cameras or Android cameras that can help with identifying, you know you know, if if video was authentic or not. Can you tell me a little more about that?
Ilke:Yeah. So, they are one of the first adopters of c two p a for, cameras and manufacturing, visual content manufacturers in the sense that the moment that photons from an object or from a scene hits the camera lens, that's where the content capture is starting. And from them there on it may be there is some post processing in the camera, know, like denoising or auto brightness or something like that then like the camera lens is maybe a special lens and then camera owner and the camera ID you know like the camera hardware ID etc all of those a part of are a part of the provenance that's like in the very edge case that you want to use that photo as a court evidence, all of this origin information will prove that it is authentic because that camera, it's that lens, it's that preprocessing, etcetera. So Tropeak is the very first, like one of the first or very first company that actually implemented c two p a on device so that you can have that provenance information for any only content that you capture with with with cameras.
Mario:And do you think this is gonna be just be you're gonna eventually be a standard on every smartphone and and device that that is produced, or do you think the industry is looking for something to do this?
Ilke:That's that's that's the motivation. That's the idea. Several governments are actually, supporting c two p a in the sense that, like, yes. This should be the proven standard. We should follow this way.
Ilke:Several, companies already implemented c two p a. It's not just for camera, it's like for JTPA methods too. It's not probably the whole ownership information, whole provenance information, there are bits and pieces that are coming. So for example, if you upload image or video with provenance information with content credentials to Facebook, it will show that content creation, the content credentials, sorry. Same for LinkedIn.
Ilke:So if you upload an image or video with content credentials, you will see that. Adobe is one of the main contributors of C2PA. So most of the Adobe tools, creator tools, Creator Suite is fully supporting C2PA in the sense that whatever you do with the content, whatever you do with the image, maybe you're a digital artist, what you create is completely within the tool, etc. It actually helps you write those update manifests or create those c2pm manifests so it is fully known what happened to the content. It's it's just like a, you know, tree of life for content, basically.
Mario:Okay. And what what again, what that what could a small business owner do to kind of keep their content, like, authentic? Is there anything you Yeah,
Ilke:so I would say implementing c p a is absolutely a part of it, especially if you are creating high value content, like if you are a studio, if you are a Hollywood Producer or something like having that c2pa manifest for every piece not just for the end product not just for the movie but you know there are voice artists that want to protect their voice with c2pa add the provenance information in c2pa or if you have specific subtitles for something specific that is your work you can actually have it c2pa manifest for it like you know all pieces of those If you are somehow creating content, owning content, just check c two p a, try to have content credentials for your content so that whoever is consuming your content will know that it comes from you.
Justin:Can you just walk me through that? I I browsed the c two p a website. If I wanted to go through this process, is it complicated? Is it difficult?
Mario:What does that look like? Expensive.
Ilke:Yeah. It's trying to like C2PA wants to be as inclusive and as accessible as possible. So there are some implementations that are baked in in the software. That doesn't mean that you must use that software to have content credentials there are like free websites sites that you want to create or read out the content credentials and then you can actually do that there's also companies that are coming up that will be supporting c2PA creation and protections. Just be on the lookout for those.
Ilke:Also wants to c2PA wants to be as accessible as possible. So the protection of those c2pi manifests are mostly, like, cryptographically signed, and, the manifest itself lives in blockchain. However, if you are in a part of the world that has no connectivity and you are mostly offline, you can still sign your content with CGPA, still have that cert button manifest that is embedded as a part of the content, so embedded inside the content. When you have connectivity, you can actually make it a part of the chain again, or if you never have connectivity, you can still have it signed and embedded in the content so that you have C2PA manifest and you still can prove it that it's your content basically.
Justin:And is this similar to or is it connected to in some way? You mentioned before the my art, my choice, my face, my choice. Are these overlapping or is that something completely different?
Ilke:The motivation overlaps. So C2PA wants to enforce and make it super transparent that the piece of content belongs to you, which is provenance. And My Art My Choice, My Face My Choice, My Voice My Choice these are protecting the content technologically so that if someone wants to some generative AI model wants to steal it or recreate it then they cannot because it breaks down. So CTPA is the written transparent way that it says, okay this video belongs to Justin and My Art My Choice learns to create a different version of the video so that when I upload that video to, let's say, Stable Division, saying that, okay, create this video in a way that Justin says this and that, The output is very broken, very noisy, doesn't look like you at all, doesn't look like any plausible video at all. So one of them is protecting the provenance information through, like, cryptographically signing and storing it in, like, very secure mechanisms.
Ilke:The other one is creating the content itself so it cannot be replicated.
Justin:Okay. So I'm gonna maybe expose my ignorance on all of this stuff. But I have this wildly popular podcast called Unhacked. And I'm really worried about people taking my face, my video, my voice, and Mario's, and Brian's, and yours, and and exploiting it somehow. What should I do?
Justin:Like, what's step one? What right now today, when we get off of this, what do I need to go do to protect this this podcast?
Ilke:Take my art my choice. Apply my art my choice to the video. So whoever is crawling the web for that video, they cannot use it for creating a derivative or getting one part and changing that part or stealing my face or my voice from that video and creating something else, etc. So my act my choice is an adversarial attack on generative models, which means taking the content. It's a generative model itself by the way.
Ilke:My act my choice is a generative model.
Justin:That's what it sounded like. Yeah.
Ilke:Yeah. Yeah. So it it takes the content. It learns a protected version of the content Okay. Which looks very similar when we look at it, we listen to it, when we, like, interact with it.
Ilke:It's it's almost the same, like very negligible changes. But when the protected version is given to be replicated or deepfaked or diffusion models like go work on it, etc, it breaks those models. So it's actually an adversarial attack on generative models that the output doesn't look like us. Our output is not replicating our style. Our it's not replicating our contact, etcetera.
Justin:I love the way you word that.
Mario:Use this can can't go around this. I'm sorry, Justin. I'll cut you off.
Justin:No. Go. Hackers.
Mario:Hackers can't like I mean, the hackers don't obey by the law by the rules. Right?
Justin:But it breaks it is what she's saying. That's what I love. So it I was gonna say it's almost like encryption, but I don't know that that really fits kind of. But it, like, does something so that it knows how the generative model is going to try to fake it and breaks it. Right?
Justin:That's what you're saying?
Ilke:Exactly. Exactly. It is embedding signals in the video so that when generative model tries to learn to generate it or edit it or move things, etc, it's actually moving it towards pushes it away from the original content as much as possible. And that push away is controlled by pushing away the style, pushing away the content, and increasing the noise as much as possible. So the output, we want it to be super noisy.
Ilke:That's what I mean by like breaking the model.
Justin:Yeah. So what this would look like is before I publish this to YouTube, I download it, I make all the edits, I do what I want, I I get the finished product and I feed it through their system. Is that right? Then it spits out
Ilke:a Yeah.
Justin:And then it spits out a different video, not the one that I would normally download off of this platform. It's basically and I I I wanna say it's an encrypted video, but it's really not. Yeah. It's got land mines in it. Right?
Justin:Yes. It's really where I'm we're embedding land mines into the video that the generative models are just gonna hit and explode.
Ilke:Yep. Exactly.
Justin:Is that a good way to explain it? Okay. I I have some homework, doctor Damir. And I got blown away. This is easily the most
Mario:I kinda wanna see how it looks after it's broken. You know?
Justin:I'm gonna do it. I'm gonna do it. I I'm gonna do it and I want you to, you know, if if you will, if you'll take do some deep bake stuff like we talked about before, I'm gonna try to go through this process if I can figure it out because like I said before, I am not the smartest person in the room. But I'm gonna I'm gonna try to run through this process and and see if I can create a broken video to to feed. Maybe on a, like, a b side episode of of an actor.
Ilke:So the name of the algorithm is and the paper is published. So if you look for My Art, My not you, but everyone listening can look for My Art, My Choice, you will actually see the examples that are broken there already.
Justin:Oh, perfect.
Ilke:We have many examples on the on the papers.
Justin:You just saved some homework.
Mario:Yeah. If we can if we can actually probably have a couple links or have the link
Justin:I will. Yes. Sure. Yeah.
Ilke:Yeah. Of course.
Justin:So I it just yeah. So on that note, I when when listeners go to unhacked out live, our website, they there will be a section that talks about you, Doctor. Demer, and it has some of your links and stuff. But then I also embed it into the show notes. If you're listening on Spotify or Apple or wherever, the summary will have some links in there as well.
Justin:And I'll I'll put as much of this stuff in as possible because wildly fascinating stuff.
Ilke:Thank you.
Justin:I I I'm it's rare that I'm speechless, but I'm speechless. I'm I'm terrified. I'm excited.
Ilke:Embedding some speeches with deep ex. Oh,
Justin:god. And she's a comedian on top of being brilliant. But so true. Wow. I I yeah.
Justin:Like, I really I legitimately am speechless, but we're gonna go ahead and and wrap this up because honestly, I could geek out on this all day long. I really just gonna kinda follow you around and watch you work for the next five or ten years so that maybe I understand any of it. But I'll say again, thank you for for what you do. This is I love that there are people that have this passion, this drive, this mission to to protect us because like this this world's crazy and it's only getting crazier. With that, guys, we're gonna wrap up like we do at the end of most of our shows.
Justin:I wanna kinda go around the room and just tell me what your primary key takeaway is. If somebody listened to only this part of this episode, what would you want them to know? And Mario, I'm gonna start with you, and then doctor Damir, if you'll go ahead and and give me yours, then we'll wrap up.
Mario:Yeah. I mean, for me, it's the one thing a key takeaway for me with this is like just like every other episode that we've talked about in Unhacked, you have to take the proper precautions to protect yourself, protect your company, protect your identity, and not just think that it will not happen to you. You have to be proactive in protecting yourself, you know, using some of the the the sites that doctor Damir mentioned. And if you have content, if you have information that you need to protect, be proactive and take a couple steps and do it because you can't just think that it's never going to happen to you. Not just protecting your actual hardware and network, but protecting your identity, protecting your online presence.
Mario:This wraps perfectly into what we've been saying week in and week out for a year plus now with our podcast. So Doctor. Damir, not only do I wanna thank you for the work that you are doing for us, but I wanna thank you for joining us and taking time to really educate us and our audience on this stuff. Because prior to this, I thought it was we were just all doomed. Same.
Mario:But Yeah. Yeah. So I thank you very much for for for spending the time to educate us.
Ilke:Yeah. Thank you for the invitation. And it was, like, really nice to talk with you. And, like, now that, like, I know what you are doing, then, it really overlaps a lot with, like, not just hardware or network or software protection, but also your identity, your face, your voice, your image, your likeness, everything, should be protected. So if they are only listening this portion of it, the first thing that I would like to say is we are not doomed.
Ilke:There is so much work in the good side of AI, as much in the bad side of AI or unresponsible part of AI. The second thing is if you are doing anything related with content, out C2PA, check out how you can add the provenance information to your data so that everyone consuming your data your media your content will know that it came from you and maybe the third part is that do not believe everything you see online you hear online Try to use your own judgment of context, own judgment of visual inspection, if you have control over how it is being acted, like make them deform their faces or their lightning or like their pose or something. And if you don't have access to those interactive situations, you can always use technical helpers like FakeCatcher, like gaze based detection, like model detection, like all of other detection models that we built. And if you are also a content creator, in addition to CTPA, also protect your content, as Mario said, proactively using my art my choice, my face my choice, my voice my choice, my body my choice.
Mario:Yeah. No. Nobody wants my body. Nobody's gonna duplicate that.
Justin:No comment. I I Kerrigan's kinda speechless. I I have so much that I could say. Talking directly to our audience, which is small business owners. The theme that I have week in and week out is we just have to be aware of what's out there and it is changing all the time.
Justin:Find a way, dear business owner, to stay involved. I would love you to listen to our podcast every week because I do think we do a pretty good job of just showing you what's out there, giving you some key takeaways. Find smart people like Doctor. Damir, follow them, understand them. LinkedIn.
Justin:She's on other social media platforms. But we we cannot live with this head in the sand. It's not gonna happen to me mindset. The ones that do that absolutely will get hacked. And as I like to say, once you've been hacked, you you cannot, in fact, get unhacked.
Justin:On that note, guys, go to unhacked.live. You'll see all of our episodes linked to doctor Ilkay Damir, all of her content, all of her social media contacts. Thank you for being here, Mario and Doctor. Damir. Guys, we're gonna sign off, we'll see you next week.
Mario:Thanks, guys. Take care.
Ilke:Thank you.
Justin:Bye bye.