I have a Ruby script that collects 46,344 XML links and then grabs 16 element nodes from each XML file. The last part of the process stores them in a CSV file. My problem is that it takes very long, more than 1-2 hours.
Here is the script, minus the link that lists all the XML links; I can't provide that since it's company-internal. I hope that's okay.
Here is the script. It works, but it takes a very long time:
require 'rubygems'
require 'nokogiri'
require 'open-uri'
require 'rexml/document'
require 'csv'
include REXML
@urls = Array.new
@ID = Array.new
@titleSv = Array.new
@titleEn = Array.new
@identifier = Array.new
@typeOfLevel = Array.new
@typeOfResponsibleBody = Array.new
@courseTyp = Array.new
@credits = Array.new
@degree = Array.new
@preAcademic = Array.new
@subjectCodeVhs = Array.new
@descriptionSv = Array.new
@visibleToSweApplicants = Array.new
@lastedited = Array.new
@expires = Array.new
# Fetch all the XML links
htmldoc = Nokogiri::HTML(open('A SITE THAT HAVE ALL THE LINKS'))
# Fetch all links to the XML files and save them in the urls array
htmldoc.xpath('//a/@href').each do |links|
  @urls << links.content
end
@urls.each do |url|
  # Loop through the XML files and grab the element nodes
  xmldoc = REXML::Document.new(open(url).read)
  # Root element
  root = xmldoc.root
  # Fetch the info id
  @ID << root.attributes["id"]
  # TitleSv
  xmldoc.elements.each("/ns:educationInfo/ns:titles/ns:title[1]") { |e|
    @titleSv << e.text
  }
  # TitleEn
  xmldoc.elements.each("/ns:educationInfo/ns:titles/ns:title[2]") { |e|
    @titleEn << e.text
  }
  # Identifier
  xmldoc.elements.each("/ns:educationInfo/ns:identifier") { |e|
    @identifier << e.text
  }
  # typeOfLevel
  xmldoc.elements.each("/ns:educationInfo/ns:educationLevelDetails/ns:typeOfLevel") { |e|
    @typeOfLevel << e.text
  }
  # typeOfResponsibleBody
  xmldoc.elements.each("/ns:educationInfo/ns:educationLevelDetails/ns:typeOfResponsibleBody") { |e|
    @typeOfResponsibleBody << e.text
  }
  # courseTyp
  xmldoc.elements.each("/ns:educationInfo/ns:educationLevelDetails/ns:academic/ns:courseOfferingPackage/ns:type") { |e|
    @courseTyp << e.text
  }
  # credits
  xmldoc.elements.each("/ns:educationInfo/ns:credits/ns:exact") { |e|
    @credits << e.text
  }
  # degree
  xmldoc.elements.each("/ns:educationInfo/ns:degrees/ns:degree") { |e|
    @degree << e.text
  }
  # preAcademic
  xmldoc.elements.each("/ns:educationInfo/ns:prerequisites/ns:academic") { |e|
    @preAcademic << e.text
  }
  # subjectCodeVhs
  xmldoc.elements.each("/ns:educationInfo/ns:subjects/ns:subject/ns:code") { |e|
    @subjectCodeVhs << e.text
  }
  # DescriptionSv
  xmldoc.elements.each("/educationInfo/descriptions/ct:description/ct:text") { |e|
    @descriptionSv << e.text
  }
  # Fetch the document's expiry date
  @expires << root.attributes["expires"]
  # Fetch the document's lastEdited attribute
  @lastedited << root.attributes["lastEdited"]
  # Store them in the CSV file
  CSV.open("eduction_normal.csv", "wb") do |row|
    (0..@ID.length - 1).each do |index|
      row << [@ID[index], @titleSv[index], @titleEn[index], @identifier[index], @typeOfLevel[index], @typeOfResponsibleBody[index], @courseTyp[index], @credits[index], @degree[index], @preAcademic[index], @subjectCodeVhs[index], @descriptionSv[index], @lastedited[index], @expires[index]]
    end
  end
end
If it's the network access that dominates, you could start threading it and/or start using JRuby, which can use all the cores on your processor. If you have to do this often, you will also have to work out a read/write strategy that serves you best without blocking.
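A minimal sketch of that idea, assuming the downloads are the main cost: a handful of worker threads pull URLs off a shared Queue, each response is parsed with the same XPath expressions as above, and the rows are written through a single CSV handle that is opened once (rather than being reopened inside the loop as in the question) under a Mutex. The worker count and the education_row helper are illustrative, not part of the original script; only two of the sixteen fields are shown.

require 'rubygems'
require 'nokogiri'
require 'open-uri'
require 'rexml/document'
require 'csv'
require 'thread'

# Hypothetical helper: turn one XML document into one CSV row.
# Only two fields are shown; the XPath strings are the ones from the question.
def education_row(xml)
  doc   = REXML::Document.new(xml)
  root  = doc.root
  title = REXML::XPath.first(doc, "/ns:educationInfo/ns:titles/ns:title[1]")
  [root.attributes["id"], title && title.text]   # ...append the remaining fields here
end

queue = Queue.new
@urls.each { |u| queue << u }          # the 46,344 links collected earlier

CSV.open("eduction_normal.csv", "wb") do |csv|
  lock    = Mutex.new
  workers = 10.times.map do            # 10 concurrent downloads; tune to taste
    Thread.new do
      loop do
        url = queue.pop(true) rescue break      # non-blocking pop; stop when the queue is drained
        row = education_row(open(url).read)     # on Ruby >= 3 use URI.open(url).read
        lock.synchronize { csv << row }         # serialize only the write
      end
    end
  end
  workers.each(&:join)
end

On MRI the threads can't run Ruby code in parallel, but they do overlap the network waits, which is usually where the time goes here; on JRuby the same code also spreads the REXML parsing across cores.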