Parsing a wget log file in Python



I have a wget log file that I'd like to parse so I can extract the relevant pieces of information from each log entry, e.g. the IP address, timestamp, URL, etc.

An example log file is printed below. The number of lines and the level of detail are not the same for every entry; what is consistent is the notation on each line.

I am able to extract individual lines, but what I really want is a multidimensional array (or something similar):

import re
f = open('c:/r1/log.txt', 'r').read()

split_log =  re.findall('--[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}.*', f)
print split_log
print len(split_log)
for element in split_log:
    print(element)

####### Start log file example
2014-11-22 10:51:31 (96.9 KB/s) - `C:/r1/www.itb.ie/AboutITB/index.html' saved [13302]
--2014-11-22 10:51:31--  http://www.itb.ie/CurrentStudents/index.html
Connecting to www.itb.ie|193.1.36.24|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: ignored [text/html]
Saving to: `C:/r1/www.itb.ie/CurrentStudents/index.html'
     0K .......... .......                                      109K=0.2s
2014-11-22 10:51:31 (109 KB/s) - `C:/r1/www.itb.ie/CurrentStudents/index.html' saved [17429]
--2014-11-22 10:51:32--  h ttp://www.itb.ie/Vacancies/index.html
Connecting to www.itb.ie|193.1.36.24|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: ignored [text/html]
Saving to: `C:/r1/www.itb.ie/Vacancies/index.html'
     0K .......... .......... ..                                118K=0.2s
2014-11-22 10:51:32 (118 KB/s) - `C:/r1/www.itb.ie/Vacancies/index.html' saved [23010]
--2014-11-22 10:51:32--  h ttp://www.itb.ie/Location/howtogetthere.html
Connecting to www.itb.ie|193.1.36.24|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: ignored [text/html]
Saving to: `C:/r1/www.itb.ie/Location/howtogetthere.html'
     0K .......... .......                                      111K=0.2s

Here's how to extract the data you want and store it in a list of tuples.

The regexes I'm using here aren't perfect, but they work well enough on your sample data. I modified your original regex to use the more readable \d instead of the equivalent [0-9]. I also used raw strings, which generally make regexes easier to work with.
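
As a quick aside (this snippet is just for illustration and isn't part of the solution): with a raw string you can write \d directly, whereas in an ordinary string every backslash has to be doubled.

import re

# The two patterns below are identical; the raw-string form is easier to read.
with_raw = re.compile(r'\d{4}-\d{2}-\d{2}')
without_raw = re.compile('\\d{4}-\\d{2}-\\d{2}')

print(with_raw.pattern == without_raw.pattern)
# True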

I've embedded your log data in my code as a triple-quoted string so I don't have to worry about file handling. I noticed that some of the URLs in the log file contain spaces, e.g.

h ttp://www.itb.ie/Vacancies/index.html

but I'm assuming those spaces are copy-and-paste artifacts and aren't actually present in the real log data. If that's not the case, your program will need to do extra work to cope with those extraneous spaces.
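
If the spaces do turn out to be real, a minimal sketch of one way to cope with them (the helper name clean_url is purely hypothetical) is to strip whitespace out of the URL field before using it:

import re

def clean_url(raw_url):
    # Remove stray whitespace inside the URL, e.g. 'h ttp://...' -> 'http://...'
    return re.sub(r'\s+', '', raw_url)

print(clean_url('h ttp://www.itb.ie/Vacancies/index.html'))
# http://www.itb.ie/Vacancies/index.html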

I also modified the IP addresses in the log data so they aren't all identical, just to make sure that each IP found by findall gets associated with the correct timestamp and URL.

#! /usr/bin/env python
import re
log_lines = '''
2014-11-22 10:51:31 (96.9 KB/s) - `C:/r1/www.itb.ie/AboutITB/index.html' saved [13302]
--2014-11-22 10:51:31--  http://www.itb.ie/CurrentStudents/index.html
Connecting to www.itb.ie|193.1.36.24|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: ignored [text/html]
Saving to: `C:/r1/www.itb.ie/CurrentStudents/index.html'
     0K .......... .......                                      109K=0.2s
2014-11-22 10:51:31 (109 KB/s) - `C:/r1/www.itb.ie/CurrentStudents/index.html' saved [17429]
--2014-11-22 10:51:32--  http://www.itb.ie/Vacancies/index.html
Connecting to www.itb.ie|193.1.36.25|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: ignored [text/html]
Saving to: `C:/r1/www.itb.ie/Vacancies/index.html'
     0K .......... .......... ..                                118K=0.2s
2014-11-22 10:51:32 (118 KB/s) - `C:/r1/www.itb.ie/Vacancies/index.html' saved [23010]
--2014-11-22 10:51:32--  http://www.itb.ie/Location/howtogetthere.html
Connecting to www.itb.ie|193.1.36.26|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: ignored [text/html]
Saving to: `C:/r1/www.itb.ie/Location/howtogetthere.html'
     0K .......... .......                                      111K=0.2s
'''
# Capture the timestamp and URL from lines like '--2014-11-22 10:51:31--  http://...'
time_and_url_pat = re.compile(r'--(\d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}:\d{2})--\s+(.*)')
# Capture the IP address from lines like 'Connecting to www.itb.ie|193.1.36.24|:80...'
ip_pat = re.compile(r'Connecting to.*\|(.*?)\|')
time_and_url_list = time_and_url_pat.findall(log_lines)
print '\ntime and url\n', time_and_url_list
ip_list = ip_pat.findall(log_lines)
print '\nip\n', ip_list
all_data = [(t, u, i) for (t, u), i in zip(time_and_url_list, ip_list)]
print '\nall\n', all_data, '\n'
for t in all_data:
    print t

Output

time and url
[('2014-11-22 10:51:31', 'http://www.itb.ie/CurrentStudents/index.html'), ('2014-11-22 10:51:32', 'http://www.itb.ie/Vacancies/index.html'), ('2014-11-22 10:51:32', 'http://www.itb.ie/Location/howtogetthere.html')]
ip
['193.1.36.24', '193.1.36.25', '193.1.36.26']
all
[('2014-11-22 10:51:31', 'http://www.itb.ie/CurrentStudents/index.html', '193.1.36.24'), ('2014-11-22 10:51:32', 'http://www.itb.ie/Vacancies/index.html', '193.1.36.25'), ('2014-11-22 10:51:32', 'http://www.itb.ie/Location/howtogetthere.html', '193.1.36.26')] 
('2014-11-22 10:51:31', 'http://www.itb.ie/CurrentStudents/index.html', '193.1.36.24')
('2014-11-22 10:51:32', 'http://www.itb.ie/Vacancies/index.html', '193.1.36.25')
('2014-11-22 10:51:32', 'http://www.itb.ie/Location/howtogetthere.html', '193.1.36.26')

The last part of this code uses a list comprehension to reorganize the data in time_and_url_list and ip_list into a single list of tuples, using the zip built-in to process the two lists in parallel. If that part is a little hard to follow, let me know and I'll try to explain it further.
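
To illustrate what that comprehension is doing, here is a tiny standalone sketch with made-up data (pairs and ips are just stand-ins for time_and_url_list and ip_list):

# Each element of pairs is a (timestamp, url) tuple; ips holds the matching IPs.
pairs = [('t1', 'url1'), ('t2', 'url2')]
ips = ['1.1.1.1', '2.2.2.2']

# zip pairs up the nth (timestamp, url) tuple with the nth IP; the comprehension
# unpacks each tuple and flattens everything into a single 3-tuple.
combined = [(t, u, i) for (t, u), i in zip(pairs, ips)]
print(combined)
# [('t1', 'url1', '1.1.1.1'), ('t2', 'url2', '2.2.2.2')]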
