Web scraping script for Anki



I'm new to programming. I'm learning Python to speed up my language study with Anki, and I want to write a web scraping script so I can create cards faster. Here is my code. (This isn't the final product; I'd also like to learn how to write the output to a CSV file so that I can import it into Anki.)

from bs4 import BeautifulSoup
import requests
#get data from user
input("Type word ")
#get page
page  = requests.get("https://fr.wiktionary.org/wiki/", params=word)
#make bs4 object
soup = BeautifulSoup(page.content, 'html.parser')
#find data from soup
IPA=soup.find(class_='API')
partofspeech=soup.find(class_='ligne-de-forme')
#open file
f=open("french.txt", "a")
#print text
print (IPA.text)
print (partofspeech.text)
#write to file
f.write(IPA.text)
f.write(partofspeech.text)
#close file
f.close()

It only returns the "word of the day" from Wiktionnaire, not the user's input. Any ideas?
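Two small bugs explain that symptom: the return value of input() is never assigned to word, and params= adds a query string instead of extending the URL path, so every request lands on the Wiktionary front page (which shows the word of the day). A minimal fix, including the CSV output you mentioned (the API and ligne-de-forme selectors are taken from your script; the column order is just a suggestion):

from bs4 import BeautifulSoup
import requests
import csv

# store the user's input and put the word in the URL path
word = input("Type word ")
page = requests.get("https://fr.wiktionary.org/wiki/" + word)
soup = BeautifulSoup(page.content, 'html.parser')

# find data from soup
ipa = soup.find(class_='API')
partofspeech = soup.find(class_='ligne-de-forme')

# append one row per word; Anki can import CSV files directly
with open("french.csv", "a", newline="", encoding="utf-8") as f:
    csv.writer(f).writerow([word, ipa.text, partofspeech.text])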

Beyond that quick fix, here is a broader approach you could take:

(1) Read some French and note down the words or sentences you want to learn.

(2) Write those words/sentences into a {text, json, markdown, …} file.

(3) Read those words back in Python with basic file I/O.

(4) Use AnkiConnect, which runs a local web server, to talk to your Anki collection.

(5) For example, write a Python script that POSTs each input word over HTTP and fetches a translation from deepl.com (see the sketch after this list).

(6) Combine these tools so that a single command adds a whole study session to Anki.

(7) Happy learning!
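As an illustration of steps (3) and (5), here is a minimal sketch against DeepL's v2 REST API (the api-free.deepl.com endpoint and field names follow DeepL's documented API; the auth key, words.txt file, and language pair are placeholders you would fill in):

import requests

DEEPL_URL = "https://api-free.deepl.com/v2/translate"  # free-tier endpoint
AUTH_KEY = "your-deepl-auth-key"                       # from your DeepL account

def translate(text, target_lang="FR"):
    # POST the word/sentence and return DeepL's first translation
    resp = requests.post(DEEPL_URL, data={
        "auth_key": AUTH_KEY,
        "text": text,
        "target_lang": target_lang,
    })
    resp.raise_for_status()
    return resp.json()["translations"][0]["text"]

# step (3): read the words collected in step (2); step (5): translate them
with open("words.txt", encoding="utf-8") as f:
    for line in f:
        word = line.strip()
        if word:
            print(word, "->", translate(word))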

Some code

AnkiConnect

# https://github.com/FooSoft/anki-connect
# https://github.com/FooSoft/anki-connect/blob/master/actions/decks.md
import json
import urllib.request

def request(action, **params):
    return {'action': action, 'params': params, 'version': 6}

def invoke(action, **params):
    requestJson = json.dumps(request(action, **params)).encode('utf-8')
    response = json.load(urllib.request.urlopen(
        urllib.request.Request('http://localhost:8765', requestJson)))
    if len(response) != 2:
        raise Exception('response has an unexpected number of fields')
    if 'error' not in response:
        raise Exception('response is missing required error field')
    if 'result' not in response:
        raise Exception('response is missing required result field')
    if response['error'] is not None:
        raise Exception(response['error'])
    return response['result']

invoke('createDeck', deck='english-to-french')
result = invoke('deckNames')
print(f'got list of decks: {result}')

invoke('deleteDecks', decks=['english-to-french'], cardsToo=True)
result = invoke('deckNames')
print(f'got list of decks: {result}')
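The snippet above only creates and deletes a deck. To actually add a card, you can use AnkiConnect's addNote action with the same invoke() helper (the 'Basic' model and its Front/Back fields assume a default Anki install; adjust to your note type):

# add one card to the deck via AnkiConnect's addNote action
note_id = invoke('addNote', note={
    'deckName': 'english-to-french',
    'modelName': 'Basic',
    'fields': {'Front': 'world', 'Back': 'monde'},
    'tags': ['scraped'],
})
print(f'created note: {note_id}')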

Web scraping with Scrapy

import scrapy

CODES = {
    'fr': 'french',
    'en': 'english',
}

URL_BASE = "https://www.linguee.com/%s-%s/translation/%s.html"

# these urls can come from another data file
# def get_data_from_file(filepath: str):
#     with open(filepath, 'r') as f:
#         lines = f.readlines()
#     return [URL_BASE % (CODES['fr'], CODES['en'], line.strip()) for line in lines]

URLS = [
    URL_BASE % (CODES['fr'], CODES['en'], 'lascive'),
]

class BlogSpider(scrapy.Spider):
    name = 'linguee_spider'
    start_urls = URLS

    def parse(self, response):
        # source-language entries
        for span in response.css('span.tag_lemma'):
            yield {'word': span.css('a.dictLink ::text').get()}
        # their translations
        for div in response.css('div.translation'):
            for span in div.css('span.tag_trans'):
                yield {'translation': span.css('a.dictLink ::text').get()}
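Note that scrapy runspider with -o collects every yielded item into a single output file, so the word and translation dicts end up interleaved in scrape order; the glue script at the end pairs them back up.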

A shell script to tie it all together

#!/bin/bash
# setup variables
DATE=$(date +"%Y-%m-%d-%H-%M")
SCRIPT_FILE="/path/to/folder/script.py"
OUTPUT_FILE="/path/to/folder/data/${DATE}.json"
echo "Running --- ${SCRIPT_FILE} --- at --- ${DATE} ---"
# activate virtualenv and run scrapy
source /path/to/folder/venv/bin/activate
scrapy runspider "${SCRIPT_FILE}" -o "${OUTPUT_FILE}"
echo "Saved results into --- ${OUTPUT_FILE} ---"
# read scrapy's output and create Anki cards via anki-connect
python create_anki_card.py "${OUTPUT_FILE}"
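create_anki_card.py is the missing glue. A sketch under a few assumptions: the invoke() helper above was saved as anki_connect.py, the spider's output path arrives as the first command-line argument (as in the script above), and each word is naively paired with the translation scraped at the same position:

# create_anki_card.py -- read scrapy's JSON output and push cards to Anki
import json
import sys

from anki_connect import invoke  # the invoke() helper from the AnkiConnect section

def main(path):
    with open(path, encoding='utf-8') as f:
        items = json.load(f)  # scrapy -o wrote a JSON list of yielded dicts
    words = [item['word'] for item in items if 'word' in item]
    translations = [item['translation'] for item in items if 'translation' in item]
    for word, translation in zip(words, translations):
        invoke('addNote', note={
            'deckName': 'english-to-french',
            'modelName': 'Basic',
            'fields': {'Front': word, 'Back': translation},
        })

if __name__ == '__main__':
    main(sys.argv[1])  # the shell script passes "${OUTPUT_FILE}"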
