The Persuasive Business Letter
Dalia Persuasive Letter
Synopsis:
This letter is written to Jack Dorsey, the CEO of Twitter. The issue I decided to write about is the spread of deepfakes on Twitter. This problem has been going on for quite some time but still has not been controlled properly, even though it continues to affect more people on social media, especially Twitter. My thesis is that Jack Dorsey needs to become more proactive about controlling the deepfakes posted on Twitter, because they not only harm the individuals in the videos or recordings but also mislead their audiences and spread misinformation on a platform that people heavily rely on for news.
I chose this topic because I have seen many deepfake videos on Twitter that have caused confusion for myself and others. The technology is advancing rapidly, making it harder to detect what is real and what is not. I hope my readers will become more aware of the problem and help inform others about it.
Persuasive Business Letter Final

Dalia Paredes
Scudder Hall 317
New Paltz, NY 12561
November 12, 2021
Jack Dorsey, CEO of Twitter
1355 Market Street, Suite 900
San Francisco, CA 94103
United States
Dear Jack Dorsey:
My name is Dalia Paredes, and I am a student at SUNY New Paltz studying marketing. As a full-time student, I don’t usually have time to sit down and read a newspaper or even look one up online. What I end up doing is opening the Twitter app on my phone to keep up with current events, and I know many people do the same. Something I love about Twitter is that breaking news reaches you faster than on other social media platforms and that you can speak to politicians or celebrities directly through the platform. I have been an avid Twitter user for many years now; it is an easy platform to use where you can build relationships with other creators through art or engage with people in a heated debate. Being a marketing major, I also really admire how anybody, from artists to entrepreneurs, can promote their work. With a promoted tweet or even just a repost, they can reach a completely different and bigger audience. For example, I encouraged my brother to post his art on Twitter, and he has since grown a following and formed a community. I truly admire the environment you have created for Twitter users everywhere, including your efforts to create a “healthier and more civil” platform (Patrawala, 2019).
Although I love Twitter, it does have some faults. One big issue on Twitter is deepfakes, which have been circulating on the platform for the past couple of years. Deepfakes are synthetic media that can modify someone’s face in a video or change someone’s voice in an audio recording. This synthetic media uses artificial intelligence and can manipulate media in different ways, whether by altering people’s appearances or their voices, such as the pitch, the tone, or what exactly is being said. The most common deepfake is a face swap, where someone’s face is inserted onto someone else’s body; this falls under visual deepfakes. Audio deepfakes copy a person’s voice and make it say something completely different. There has been progress in telling whether these videos or recordings are real, but the technology is worsening the problem by making the fakes more believable. Deepfakes feed the spread of false and misleading information, and Twitter is the perfect platform for it. People purposely spread deepfakes on Twitter because of how fast news spreads there. With just one click of a button, every follower of the person who reposted can see that post, so the number of people who see it grows exponentially. After the harm deepfakes have caused on many social media platforms, Twitter should follow companies like Facebook and YouTube in becoming stricter about what users are allowed to post.
People might think that deepfakes only affect celebrities or politicians, but they can happen to anyone. Imagine checking Twitter and seeing a video of your face on someone else’s body, saying inappropriate things that are harmful to your image. You know for a fact that the events in the video never happened, but the audience will automatically believe they did. This is merely a small example of the power deepfakes can have over someone’s reputation; they are manipulated videos and recordings used to trick and confuse an audience. Deepfakes began with people’s faces being edited into pornographic videos, an extreme violation of everyone depicted. “‘This kind of abuse—where people misrepresent your identity, name, reputation, and alter it in such violating ways—shatters you to the core,’ says Noelle Martin, an Australian activist who has been targeted by a deepfake porn campaign” (Hao, 2021). Deepfakes can have a negative psychological effect on people and cause trauma to everyone involved. Having these videos on any platform is extremely harmful and often illegal; they should be taken down right away, but they usually are not. Nowadays the technology behind deepfakes is becoming more accessible, which is dangerous for everyone. People with zero experience can look up tutorials on how to make deepfakes or even use apps that make deepfakes for them after they submit just a couple of pictures or videos.
Deepfake technology has also been used to create “entertaining media.” An edited video of Keanu Reeves stopping a robbery left fans stunned (Bode 927), and that can be seen as harmless fun. But the same technology can also be used in very harmful and inappropriate ways. In 2019, during Trump’s presidency, a manipulated video of Nancy Pelosi circulated in which her speech was slowed so that she seemed to be slurring her words and appeared drunk. Donald Trump reposted the video, and his supporters spread it around Twitter even further while poking fun at Pelosi’s behavior in it. Something like this can influence whether people decide to support Nancy Pelosi or believe she is capable of holding power in the United States. Trump used Twitter to spread this misleading video because he knew it would help him reach a larger audience. The situation could have escalated into something far worse, and Pelosi could even have sued Trump for defamation. The video circulated on Facebook, YouTube, Instagram, and Twitter. Although Facebook issued a statement saying the video was manipulated, it did not take it down. Twitter did not even comment on the video and completely ignored the issue (Mervosh, 2019), even though the video was spreading on its platform as well.
So far, I have mainly discussed visual deepfakes on Twitter, but audio deepfakes are just as harmful and can be even harder to detect because you cannot see who is behind the voice. Twitter recently added an option to post voice recordings. I really love this feature because it makes the platform more accessible for people who are blind or visually impaired, but it could also allow audio deepfakes to take root and grow on Twitter. One of the biggest scandals involving audio deepfakes happened in 2019, when criminals scammed a company into transferring about $243,000 into their bank account by mimicking the voice of the company’s CEO (Damiani, 2019). If something like this could happen to an “accomplished CEO,” imagine the outcome of an extremely convincing audio deepfake spreading around Twitter. It could cause people to lose money or do something they would regret. Voice recordings posted on Twitter also give scammers a quicker way to collect voice samples. Because this form of technology is not yet common, when people come across a questionable audio recording they do not even suspect that it might be an audio deepfake. People go on Twitter assuming the facts presented to them are true, which can make them gullible and more likely to fall for scams.
Twitter did implement new policies ahead of the 2020 election to stop the posting of deepfakes that could jeopardize the candidates’ reputations or threaten public safety (Feiner, 2020). Although Twitter has tried to combat this problem, it has not gotten any better and is not going away anytime soon; people will continue to make deepfakes and post them on Twitter unless you create stricter rules about what they are allowed to post. You have also not mentioned how your platform will detect and stop audio deepfakes from being posted. Audio deepfakes are more difficult to detect because it takes longer to listen through recordings, and they can confuse people even more than visual deepfakes. It is also much easier to mimic someone’s voice than to change what you look like (Kietzmann et al. 142). Twitter executives may wonder why we do not simply copyright people’s voices so that those who post audio deepfakes face legal consequences. However, this is impractical: it would be an extremely lengthy process to register everyone’s voice, and any kind of impression would then count as a copyright violation. Others may argue that CGI, which is similar to the technology used to make deepfakes, has been used in filmmaking for years without creating distrust. But that is because the credits disclose that CGI was involved; a simple post on Twitter provides no background information about where a video came from or who originally made it.
As the CEO of Twitter, a place where people get their news and expect it to be reliable, you bear a large responsibility for making sure false news is not spread. Deepfakes can hurt anybody immensely, to the point where their reputation is threatened or people form a bad perception of them, all because manipulated media convinced an audience of something that never happened. Kietzmann and their team are researchers who focus on deepfake technology and have created a framework for managing the effects of deepfakes. They use the acronym R.E.A.L., which stands for “Record original content to assure deniability; expose deepfakes early; advocate for legal protection; and leverage trust” (Kietzmann et al. 144). This framework can help any company, but only if the deepfake is detected early enough that it has not already caused damage. The ideal plan, though, is to partner with technology companies that can develop algorithms to detect and flag visual and audio deepfakes before they are posted. Twitter should also inform users about deepfakes now that they are spreading, and show them what to do if one appears on their feed. A task force should be created that deals solely with detecting deepfakes; once a person reports a deepfake, the report should go directly to that task force for further investigation.
I appreciate you taking the time to read my letter; I am passionate about putting an end to this issue. I hope to see future changes to Twitter’s policies concerning the posting of deepfakes and false information. It is extremely important to remember that this can happen to anyone, and the repercussions can threaten anybody’s reputation and well-being. You have a big responsibility to make sure Twitter users are safe and protected from the negative consequences of deepfakes. Thank you once again.
Respectfully,
Dalia Paredes
Works Cited
Bode, Lisa. “Deepfaking Keanu: YouTube Deepfakes, Platform Visual Effects, and the Complexity of Reception.” Convergence (London, England), vol. 27, no. 4, SAGE Publications, 2021, pp. 919–34, doi:10.1177/13548565211030454.
Damiani, Jesse. “A Voice Deepfake Was Used to Scam a CEO out of $243,000.” Forbes, Forbes Magazine, 3 Sept. 2019, https://www.forbes.com/sites/jessedamiani/2019/09/03/a-voice-deepfake-was-used-to-scam-a-ceo-out-of-243000/?sh=6ba6e1122241.
Feiner, Lauren. “Twitter Unveils New Rules to Tackle Deepfakes Ahead of the 2020 Election.” CNBC, CNBC, 4 Feb. 2020, https://www.cnbc.com/2020/02/04/twitter-unveils-new-rules-to-tackle-deepfakes-ahead-of-2020-election.html.
Hao, Karen. “A Horrifying New AI App Swaps Women into Porn Videos with a Click.” MIT Technology Review, MIT Technology Review, 20 Oct. 2021, https://www.technologyreview.com/2021/09/13/1035449/ai-deepfake-app-face-swaps-women-into-porn/.
Kietzmann, Jan, et al. “Deepfakes: Trick or Treat?” Business Horizons, vol. 63, no. 2, Elsevier Inc, 2020, pp. 135–46, doi:10.1016/j.bushor.2019.11.006.
Mervosh, Sarah. “Distorted Videos of Nancy Pelosi Spread on Facebook and Twitter, Helped by Trump.” The New York Times, The New York Times, 24 May 2019, https://www.nytimes.com/2019/05/24/us/politics/pelosi-doctored-video.html.
Patrawala, Fatema. “Dorsey Meets Trump Privately to Discuss How to Make Public Conversation ‘Healthier and More Civil’ on Twitter.” Packt Hub, 24 Apr. 2019, https://hub.packtpub.com/dorsey-meets-trump-privately-discuss-public-conversation-healthier-more-civil-on-twitter/.
Persuasive Business Letter Draft
Dalia Paredes
1 Hawk Drive, New Paltz, NY
October 25th, 2021
Dear Jack Dorsey,
My name is Dalia Paredes, and I am a student at SUNY New Paltz studying marketing. Being a full-time student, I don’t usually have time to sit down and read a newspaper or even look one up online. What I end up doing is going on Twitter to keep up with current events, and I know many people do this as well. Something that I love about Twitter is how you find out breaking news faster than on other social media platforms and that you can speak to politicians or celebrities through this platform. I have been an avid Twitter user for many years now; it is an easy platform to use where you can build relationships with other creators through art or engage with people in a heated debate. Being a marketing major, I also really admire how easily people can market their art or products with a promoted ad or tweet. I often come across artists who are selling their prints or jewelry, and with even just a repost they can reach a totally different and bigger audience.
Although I really love Twitter, it does have its faults. After the harm deepfakes have caused across social media, platforms have begun setting rules to keep the problem from growing; Twitter should follow companies like Facebook and YouTube in becoming stricter about what people are allowed to post concerning deepfakes. News posted on Twitter can be manipulated very easily in ways that cause harm, and this manipulation can happen to anyone, not just politicians or celebrities, through visual or audio deepfakes. Visual deepfakes are synthetic media that can take a picture or video of someone and then change what their mouth is saying or even change the way they look. Audio deepfakes are essentially the same, but they alter or distort a person’s voice in audio recordings. Imagine if one morning you wake up, check Twitter, and the first thing you see is an embarrassing video of you picking something out of your teeth while saying something foolish. You know for a fact that the events in the video never happened; this would probably ruin your mood, anger you, and confuse you. This is merely a small example of the power deepfakes can have over people. In 2019, during Trump’s presidency, a manipulated video of Nancy Pelosi circulated in which her speech was slowed so that she seemed to be slurring her words and appeared drunk. Donald Trump reposted the video, and his supporters spread it around even more while poking fun at Nancy Pelosi. Something like this can influence whether people decide to support Nancy Pelosi or not.
Twitter did implement new policies during the 2020 election to prevent deepfakes from jeopardizing the candidates’ reputations or threatening public safety (Feiner). Although Twitter has made efforts to combat this problem, it is not going away anytime soon, and people will continue to make deepfakes even if they don’t pose a huge threat to the public. Twitter has also not mentioned how it will detect and stop audio deepfakes from being posted. Audio deepfakes are more difficult to detect because it takes longer to listen through recordings, and they can confuse people even more than visual deepfakes. It is also much easier to mimic someone’s voice than to change what you look like (Kietzmann et al. 142). People might wonder why we don’t copyright people’s voices so there is a consequence for anyone who makes audio deepfakes. This is simply impractical, because then people who do impressions for a living would face consequences just for doing their job.
As CEO of Twitter, a place where people get their news and expect it to be reliable, you have a big responsibility to keep it that way. Deepfakes can hurt anybody immensely, to the point where their reputation is threatened or people have a bad view of them, all because of manipulated media. Kietzmann and their team created a framework to manage the effects of deepfakes; they use the acronym R.E.A.L., which stands for “Record original content to assure deniability; expose deepfakes early; advocate for legal protection; and leverage trust” (Kietzmann et al. 144). This framework can help any company, but only if the deepfake is detected early enough that it has not already caused damage.