Clojure word-frequency dictionary for large data



I want to write my own naive Bayes classifier. I have a file like this:

(This is a database of spam and ham messages; the first word marks the message as spam or ham, and the text up to the end of line is the message. The collection (size: 0.5 MB) comes from http://www.dt.fee.unicamp.br/~tiago/smsspamcollection/)

ham     Go until jurong point, crazy.. Available only in bugis n great world la e buffet... Cine there got amore wat...
ham     Ok lar... Joking wif u oni...
spam    Free entry in 2 a wkly comp to win FA Cup final tkts 21st May 2005. Text FA to 87121 to receive entry question(std txt rate)T&C's apply 08452810075over18's
ham     U dun say so early hor... U c already then say...
ham     Nah I don't think he goes to usf, he lives around here though
spam    FreeMsg Hey there darling it's been 3 week's now and no word back! I'd like some fun you up for it still? Tb ok! XxX std chgs to send, £1.50 to rcv

I want to build a hashmap like this: {"spam" {"go" 1, "until" 100, …}, "ham" {…}} — a hashmap where each value is a frequency map of words (one for ham and one for spam).
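For reference, the inner value I'm after is exactly what clojure.core/frequencies produces for a seq of words (the word vector here is just a made-up sample):

```clojure
;; frequencies builds the kind of inner word-count map I want:
(frequencies ["go" "until" "go"])
;; => {"go" 2, "until" 1}
```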

I know how to do this in Python or C++, and I also wrote a Clojure version, but my solution fails on large data with a StackOverflowError.

My solution:

(use 'clojure.string)    ; split, lower-case
(use 'clojure.java.io)   ; reader
(defn read_data_from_file [fname]
    (map #(split % #"\s+") (map lower-case (with-open [rdr (reader fname)] 
        (doall (line-seq rdr))))))
(defn do-to-map [amap keyseq f]
    (reduce #(assoc %1 %2 (f (%1 %2))) amap keyseq))
(defn dicts_from_data [raw_data]
    (let [data (group-by #(first %) raw_data)]
        (do-to-map
            data (keys data) 
                (fn [x] (frequencies (reduce concat (map #(rest %) x)))))))

I tried to track down the error and wrote this:

(def raw_data (read_data_from_file (first args)))
(def d (group-by #(first %) raw_data))
(def f (map frequencies raw_data))
(def d1 (reduce concat (d "spam")))
(println (reduce concat (d "ham")))

The error:

Exception in thread "main" java.lang.RuntimeException: java.lang.StackOverflowError
    at clojure.lang.Util.runtimeException(Util.java:165)
    at clojure.lang.Compiler.eval(Compiler.java:6476)
    at clojure.lang.Compiler.eval(Compiler.java:6455)
    at clojure.lang.Compiler.eval(Compiler.java:6431)
    at clojure.core$eval.invoke(core.clj:2795)
    at clojure.main$eval_opt.invoke(main.clj:296)
    at clojure.main$initialize.invoke(main.clj:315)
.....

Can anyone help me do this better/more efficiently? P.S. Sorry for my mistakes — English is not my native language.

Using apply instead of reduce in the anonymous function avoids the StackOverflowError. Use (fn [x] (frequencies (apply concat (map #(rest %) x)))) instead of (fn [x] (frequencies (reduce concat (map #(rest %) x)))).
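To see why: (reduce concat colls) wraps one lazy concat around another, one level per collection, so realizing even the first element has to descend through all of those levels; (apply concat colls) produces a single flat lazy seq. A minimal sketch (20000 is just an arbitrary count large enough to exhaust a default JVM stack):

```clojure
(def colls (repeat 20000 [1]))

;; Deeply left-nested: ((([1] ++ [1]) ++ [1]) ++ ...)
;; (first (reduce concat colls))  ; throws java.lang.StackOverflowError

;; One level of laziness over all the collections:
(first (apply concat colls))      ; => 1
```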

Here is the same code with a little refactoring, but exactly the same logic. read-data-from-file was changed to avoid mapping over the line seq twice.

(use 'clojure.string)
(use 'clojure.java.io)
(defn read-data-from-file [fname]
  (let [lines (with-open [rdr (reader fname)] 
                (doall (line-seq rdr)))]
    (map #(-> % lower-case (split #"\s+")) lines)))
(defn do-to-map [m keyseq f]
  (reduce #(assoc %1 %2 (f (%1 %2))) m keyseq))
(defn process-words [x]
  (->> x 
    (map #(rest %)) 
    (apply concat) ; This is the only real change from the 
                   ; original code, it used to be (reduce concat).
    frequencies))
(defn dicts-from-data [raw_data]
  (let [data (group-by first raw_data)]
    (do-to-map data
               (keys data) 
               process-words)))
(-> "SMSSpamCollection.txt" read-data-from-file dicts-from-data keys)

Another thing to consider is the use of (doall (line-seq ...)), which reads the entire list of lines into memory. If the list is large, this can cause problems. A handy trick for accumulating data like this is to use reduce. In your case we need to reduce twice: once over the lines, and then over the words in each line. Something like this:

(require '[clojure.string :as str]
         '[clojure.java.io :as io])
(defn parse-line
  [line]
  (str/split (str/lower-case line) #"\s+"))
(defn build-word-freq
  [file]
  (with-open [rdr (io/reader file)]
    (reduce (fn [accum line]
              (let [[spam-or-ham & words] (parse-line line)]
                (reduce #(update-in %1 [spam-or-ham %2] (fnil inc 0)) accum words)))
            {}
            (line-seq rdr))))
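The same accumulation step can be checked on in-memory lines without touching a file (the two sample strings below are made up):

```clojure
(require '[clojure.string :as str])

(defn parse-line [line]
  (str/split (str/lower-case line) #"\s+"))

;; The inner reduce from build-word-freq, applied to two literal lines:
(reduce (fn [accum line]
          (let [[spam-or-ham & words] (parse-line line)]
            (reduce #(update-in %1 [spam-or-ham %2] (fnil inc 0)) accum words)))
        {}
        ["ham go go until" "spam free entry"])
;; => {"ham" {"go" 2, "until" 1}, "spam" {"free" 1, "entry" 1}}
```

(fnil inc 0) makes the first occurrence of a word count from 0, so no explicit "key missing" check is needed.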
