Convert a list of words into a list of the frequencies with which those words occur



I do a lot of work with various word lists.

Consider the following problem of mine:

docText={"settlement", "new", "beginnings", "wildwood", "settlement", "book",
"excerpt", "agnes", "leffler", "perry", "my", "mother", "junetta", 
"hally", "leffler", "brought", "my", "brother", "frank", "and", "me", 
"to", "edmonton", "from", "monmouth", "illinois", "mrs", "matilda", 
"groff", "accompanied", "us", "her", "husband", "joseph", "groff", 
"my", "father", "george", "leffler", "and", "my", "uncle", "andrew", 
"henderson", "were", "already", "in", "edmonton", "they", "came", 
"in", "1910", "we", "arrived", "july", "1", "1911", "the", "sun", 
"was", "shining", "when", "we", "arrived", "however", "it", "had", 
"been", "raining", "for", "days", "and", "it", "was", "very", 
"muddy", "especially", "around", "the", "cn", "train"}
searchWords={"the","for","my","and","me","and","we"}

Each of these lists is much longer in practice (say, 250 words in the searchWords list and roughly 12,000 words in docText).

Right now, I am able to compute the frequency of a given word by doing something like this:

docFrequency=Sort[Tally[docText],#1[[2]]>#2[[2]]&];    
Flatten[Cases[docFrequency,{"settlement",_}]][[2]]
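As an aside (not in the original post), for a single word the built-in Count gives the same number directly, and it degrades gracefully when the word is absent, whereas taking [[2]] of an empty Flatten@Cases result raises a Part error:

```
Count[docText, "settlement"]  (* 2 *)
Count[docText, "zebra"]       (* 0 *)
```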

Where I'm getting hung up, though, is in generating a particular list. Specifically, the problem of converting a list of words into a list of the frequencies with which those words occur. I have tried to accomplish this with a Do loop, but hit a wall.

I want to walk through docText using searchWords and replace each element of docText with the frequency with which it occurs. That is, since "settlement" appears twice, it would be replaced in the list by 2, and since "my" appears four times, it would become 4. The list would then look like 2, 1, 1, 1, 2, and so on.
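For what it's worth, in Mathematica 10 or later this replace-each-word-by-its-count step can be written in one line with associations (a sketch, assuming version 10+; Counts and Lookup do not exist in earlier versions):

```
Lookup[Counts[docText], docText]
(* begins {2, 1, 1, 1, 2, ...} *)
```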

I suspect the answer lies somewhere in If[] and Map[]?

This all probably sounds odd, but I'm trying to preprocess a pile of material into term-frequency information…


Added for clarity (I hope):

Here is a better example.

searchWords={"0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "a", "A", "about", 
"above", "across", "after", "again", "against", "all", "almost", 
"alone", "along", "already", "also", "although", "always", "among", 
"an", "and", "another", "any", "anyone", "anything", "anywhere", 
"are", "around", "as", "at", "b", "B", "back", "be", "became", 
"because", "become", "becomes", "been", "before", "behind", "being", 
"between", "both", "but", "by", "c", "C", "can", "cannot", "could", 
"d", "D", "do", "done", "down", "during", "e", "E", "each", "either", 
"enough", "even", "ever", "every", "everyone", "everything", 
"everywhere", "f", "F", "few", "find", "first", "for", "four", 
"from", "full", "further", "g", "G", "get", "give", "go", "h", "H", 
"had", "has", "have", "he", "her", "here", "herself", "him", 
"himself", "his", "how", "however", "i", "I", "if", "in", "interest", 
"into", "is", "it", "its", "itself", "j", "J", "k", "K", "keep", "l", 
"L", "last", "least", "less", "m", "M", "made", "many", "may", "me", 
"might", "more", "most", "mostly", "much", "must", "my", "myself", 
"n", "N", "never", "next", "no", "nobody", "noone", "not", "nothing", 
"now", "nowhere", "o", "O", "of", "off", "often", "on", "once", 
"one", "only", "or", "other", "others", "our", "out", "over", "p", 
"P", "part", "per", "perhaps", "put", "q", "Q", "r", "R", "rather", 
"s", "S", "same", "see", "seem", "seemed", "seeming", "seems", 
"several", "she", "should", "show", "side", "since", "so", "some", 
"someone", "something", "somewhere", "still", "such", "t", "T", 
"take", "than", "that", "the", "their", "them", "then", "there", 
"therefore", "these", "they", "this", "those", "though", "three", 
"through", "thus", "to", "together", "too", "toward", "two", "u", 
"U", "under", "until", "up", "upon", "us", "v", "V", "very", "w", 
"W", "was", "we", "well", "were", "what", "when", "where", "whether", 
"which", "while", "who", "whole", "whose", "why", "will", "with", 
"within", "without", "would", "x", "X", "y", "Y", "yet", "you", 
"your", "yours", "z", "Z"}

These are stop words automatically generated from WordData[]. So I want to compare these words against docText. Since "settlement" is not part of searchWords, it would show up as 0. But since "my" is part of searchWords, it would pop up as a count (so that I can tell how many times a given word appears).

I really appreciate your help, and I look forward to taking some formal courses soon, as I'm hitting the limits of my ability to explain what I'm actually trying to do!

We can replace everything in docText that does not appear in searchWords with 0, like this:

preprocessedDocText = 
   Replace[docText, 
     Dispatch@Append[Thread[searchWords -> searchWords], _ -> 0], {1}]

We can then replace the remaining words with their frequencies:

replaceTable = Dispatch[Rule @@@ Tally[docText]];
preprocessedDocText /. replaceTable

Dispatch preprocesses a list of rules (->) and significantly speeds up replacements when the rule list is used repeatedly afterwards.

I haven't benchmarked this on large data, but Dispatch should give a good speedup.

@Szabolcs gave a nice solution, and I would probably have gone the same route myself. Here is a slightly different solution, just for fun:

ClearAll[getFreqs];
getFreqs[docText_, searchWords_] :=
  Module[{dwords, dfreqs, inSearchWords, lset},
    SetAttributes[{lset, inSearchWords}, Listable];
    lset[args__] := Set[args];
    (* distinct words and their counts *)
    {dwords, dfreqs} = Transpose@Tally[docText];
    (* mark every search word True, everything else False *)
    lset[inSearchWords[searchWords], True];
    inSearchWords[_] = False;
    (* zero out counts of words not in searchWords *)
    dfreqs*Boole[inSearchWords[dwords]]]

This shows how the Listable attribute can be used to replace loops, and even Map-ping. We have:

In[120]:= getFreqs[docText,searchWords]
Out[120]= {0,0,0,0,0,0,0,0,0,4,0,0,0,0,0,0,3,1,1,0,1,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,1,1,2,
1,0,0,2,0,0,1,0,2,0,2,0,1,1,2,1,1,0,1,0,1,0,0,1,0,0}
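To see the Listable trick in isolation, here is a minimal sketch (f is just an illustrative name, not from the answer above): once the attribute is set, the function threads over lists automatically, which is what lets lset and inSearchWords avoid explicit loops:

```
ClearAll[f];
SetAttributes[f, Listable];
f[x_] := x^2;
f[{1, 2, 3}]  (* {1, 4, 9} *)
```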

I started attacking this with a different method than Szabolcs, but ultimately I ended up with something similar.

Nevertheless, I think it is a bit cleaner. It is faster on some data and slower on others.

docText /. 
  Dispatch[FilterRules[Rule @@@ Tally@docText, searchWords] ~Join~ {_String -> 0}]
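Since the relative speed of these approaches depends on the data, one quick way to check them on realistically sized input is a sketch like the following (bigDoc and the 12,000-word size are made-up test data, not from the post; DictionaryLookup[] just supplies a large pool of English words):

```
bigDoc = RandomChoice[DictionaryLookup[], 12000];
First@AbsoluteTiming[
  bigDoc /.
    Dispatch[FilterRules[Rule @@@ Tally@bigDoc, searchWords] ~Join~ {_String -> 0}]]
```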
