Using my MALLET results, I was able to produce the above word cloud. Creating our own topic model and then using its words to build our own word cloud was interesting because it was different from what we usually do: this time we got to see our own results in a clear visual.

When picking from the word lists MALLET produced, I only kept the topics whose words correlated well with one another. In some topics all the words made sense together, while others contained words that seemed more random. For instance, in one topic, which I labeled “fatherly advice to son,” every word relates directly to that theme. In another topic, labeled “facial features,” all the words could describe a face, but some of them have much broader uses. For example, that topic includes words such as “gray” and “pleasant,” words that describe a face only in the right context. This suggests a possible theory: the narrower the search, the more specific the resulting topics.

I ran a few different searches: 30 topics with 2000 iterations, 50 topics with 3000 iterations, 40 topics with 5000 iterations, and 20 topics with 2000 iterations (one way such runs might be scripted is sketched below). “Fatherly advice to son” came from the run with 20 topics and 2000 iterations, while “facial features” came from the run with 50 topics and 3000 iterations. When asked for a larger number of topics, MALLET seems to grab any words that could fit together in some way, whereas narrower searches produce topics made up only of words that clearly share a subject. Although I am only theorizing here, MALLET is a very interesting and useful program.
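As a rough illustration of the parameter sweeps described above, here is a minimal Python sketch that loops over the four settings I tried. The `bin/mallet` path and the `corpus.mallet` file name are assumptions, not part of my actual setup; the flags themselves (`--num-topics`, `--num-iterations`, `--output-topic-keys`) are standard MALLET options.

```python
import subprocess

# Hypothetical paths: adjust to your own MALLET install and imported corpus.
MALLET = "bin/mallet"
CORPUS = "corpus.mallet"

# The (topics, iterations) combinations tried in this post.
settings = [(30, 2000), (50, 3000), (40, 5000), (20, 2000)]

for num_topics, num_iterations in settings:
    keys_file = f"keys_{num_topics}t_{num_iterations}i.txt"
    # Each run writes its top words per topic to its own keys file,
    # so the different settings can be compared side by side.
    subprocess.run(
        [
            MALLET, "train-topics",
            "--input", CORPUS,
            "--num-topics", str(num_topics),
            "--num-iterations", str(num_iterations),
            "--output-topic-keys", keys_file,
        ],
        check=True,
    )
```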
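And here is one way a word cloud could be built from a chosen topic's key words. This is only a sketch using the third-party `wordcloud` and `matplotlib` packages, not necessarily how my own cloud was made; the keys file name is a placeholder from the sweep above.

```python
from wordcloud import WordCloud  # pip install wordcloud
import matplotlib.pyplot as plt

# Each line of a MALLET topic-keys file has the form:
#   <topic id> <weight> <word1> <word2> ... <wordN>
# Here we take the top words of the first topic in a (hypothetical) keys file.
with open("keys_20t_2000i.txt") as f:
    words = f.readline().split()[2:]  # drop the topic id and weight columns

# Build and display a cloud from the topic's key words.
cloud = WordCloud(width=800, height=400, background_color="white")
cloud.generate(" ".join(words))

plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```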