I have a text file containing a list of URLs, and I want to print each URL's content to another text file, with the URL itself as the heading. I have been using this package https://pypi.org/project/Wikipedia-API/ to extract the content, but I have to enter the links one by one, which I want to avoid because my list is huge: each text file has at least 3000 links.
Can anyone help me with this? It would be greatly appreciated.
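For reference, this is roughly how I have been pulling one page at a time with the package (a minimal sketch; the example title is made up, and newer versions of the package also require a user_agent argument to Wikipedia()):

import wikipediaapi  # pip install Wikipedia-API

# one page at a time -- this is the step I want to automate for ~3000 links
wiki = wikipediaapi.Wikipedia('en')
page = wiki.page('Python_(programming_language)')  # hypothetical example title
print(page.title)
print(page.text)  # plain text of the whole article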
Edit:
I tried the following, but there is no content in the output txt file.
import urllib
import datetime as dt
from datetime import datetime
import time

linklist = []
with open("test.txt", 'r', encoding='utf-8') as wikitxt:
    # content = wikitxt.read().splitlines()
    for i in wikitxt:
        linklist.append(i)

# NOTE: the output file is opened here, but nothing is ever fetched
# or written to it -- which is why Wikipedia_content.txt stays empty.
output = open('Wikipedia_content.txt', 'w', encoding='utf-8')

startTime = time.time()
endTime = time.time()
runTime = endTime - startTime
print("Runtime is %.3f seconds" % runTime)
Here is the txt file of links I used: https://pastebin.com/Y4bwsHGB, and here is the text file I need to run it on: https://pastebin.com/SXDAu8jV
Thanks in advance.
Problem:
Traceback (most recent call last):
  File "C:/Users/suva_/Desktop/Project specification/data/test2.py", line 13, in <module>
    output_file.write((urlopen(link).read()))
  File "D:\Python 36\lib\urllib\request.py", line 228, in urlopen
    return opener.open(url, data, timeout)
  File "D:\Python 36\lib\urllib\request.py", line 531, in open
    response = self._open(req, data)
  File "D:\Python 36\lib\urllib\request.py", line 554, in _open
    'unknown_open', req)
  File "D:\Python 36\lib\urllib\request.py", line 509, in _call_chain
    result = func(*args)
  File "D:\Python 36\lib\urllib\request.py", line 1389, in unknown_open
    raise URLError('unknown url type: %s' % type)
urllib.error.URLError: <urlopen error unknown url type: https>
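An "unknown url type" error for an apparently valid scheme usually means an invisible character (stray whitespace or a UTF-8 BOM read from the file) is glued to the front of the URL, so urllib no longer matches 'https'. A minimal guard, assuming the links sit one per line in test.txt:

from urllib.request import urlopen

# 'utf-8-sig' strips a leading BOM if present; strip() removes the
# trailing newline that each line read from a file carries
with open("test.txt", encoding="utf-8-sig") as f:
    links = [line.strip() for line in f if line.strip()]

for link in links:
    html = urlopen(link).read()  # the scheme is now a clean 'https'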
Final fix:
import urllib.request
import datetime as dt
from datetime import datetime
import requests
import time
import re
import html2text

startTime = time.time()

def text_opener():
    """Read test.txt and return the links as a list, one URL per element."""
    linklist = []
    with open("test.txt", 'r', encoding='utf-8') as wikitxt:
        # content = wikitxt.read().splitlines()
        for i in wikitxt:
            try:
                linklist.append(i.strip())
            except UnicodeEncodeError as error:
                linklist.append("")
    return linklist

linklist = text_opener()  # read the links into a list

'''
This is a string of characters which I wanted to remove from the URL content
rejectedChar = list('!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~0123456789')
rejectedChar.append("\t")
special = "\t"
regexWords = r"[\w']+"
'''

'''STOPWORDS LIST WHICH CONTAINS A BUNCH OF WORDS WHICH I DON'T NEED TO BE PRINTED --- ONLY FOR LARGE FILES
#stopwords = []
#with open('stopwords.txt', 'r', encoding='utf-8') as inFile:
#    for i in inFile:
#        stopwords.append(i.strip())
'''

content = ""
count = 0
for i in linklist:
    print(count, " ", i.encode('utf-8'))
    count += 1
    try:
        f = urllib.request.urlopen(i).read()
        content += str(f)
    except Exception as e:
        continue  # skip links that fail to open

#print((linklist[0:4000]).encode('utf-8'))
#combinedstops = rejectedChar + stopwords  # combining them together
#for item in combinedstops:
#    content = content.replace(item, "")  # now these items are removed from the content

def output_file(content):
    with open('June_wikipedia_content.txt', 'w', encoding='utf-8') as output:
        output.write(str(content))

##try:
##    output_file(content)
##except UnicodeEncodeError as error:
##    print("Got lost in the game")

#sky = open("sky.txt", 'w')
#sky.write(str(content))

output_file(content)

#print("hahahahahaha", stopwords)
#for i in content:
#    i = re.findall(regexWords, i)
#    i = [i for i in i if i in stopwords]

# note: startTime must not be reset here, or the reported runtime is always ~0
endTime = time.time()
runTime = endTime - startTime
print("Runtime is %.3f seconds" % runTime)
You can use the following to open the text file and store all of the links in a list:

with open('links.txt') as f:
    content = f.read().splitlines()

The variable content is a list in which each element holds the string of one URL. This only works if your links.txt has the URLs arranged one per line, i.e.:
www.google.co.in
www.wikipedia.co.in
www.youtube.co.in
Once you have this list, you can iterate over it with a simple for loop and do whatever you want with each link.
If you want a more detailed answer, I suggest you post a sample text file of the links.
Edit:
This works, but it dumps the whole data into the file, and the data is not formatted properly. Is this what you need?
from urllib.request import urlopen

with open('links.txt') as f:
    content = f.read().splitlines()

with open('Wikipedia_content.txt', 'w', encoding='utf-8') as output_file:
    for link in content:
        output_file.write(link + '\n')
        # urlopen(...).read() returns bytes; decode before writing to a text-mode file
        output_file.write(urlopen(link).read().decode('utf-8'))
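If the unformatted dump is the problem, one option is to run each page through html2text before writing it out (the question's final version already imports html2text but never calls it). A sketch, assuming the pages are UTF-8:

from urllib.request import urlopen
import html2text  # pip install html2text

converter = html2text.HTML2Text()
converter.ignore_links = True  # drop hyperlink markup, keep the prose

with open('links.txt') as f:
    links = f.read().splitlines()

with open('Wikipedia_content.txt', 'w', encoding='utf-8') as output_file:
    for link in links:
        html = urlopen(link).read().decode('utf-8')
        output_file.write(link + '\n')
        output_file.write(converter.handle(html) + '\n')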