I set up a scraper to use delayed_job so that it runs in the background.
class Scraper
  def do_scrape
    # do some scraping stuff
  end
  handle_asynchronously :do_scrape
end
If I comment out the handle_asynchronously line, open a console and run the scraper, it does exactly what I expect.
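Roughly how I've been running it from a Rails console (just a sketch; it assumes Scraper takes no constructor arguments):

scraper = Scraper.new

# With handle_asynchronously commented out, this runs the scrape inline.
# With it enabled, the same call only enqueues a Delayed::Job record and
# returns immediately; the work should be done later by a rake jobs:work process.
scraper.do_scrape

# handle_asynchronously also defines a do_scrape_without_delay alias that
# always runs synchronously, which is handy for debugging.
scraper.do_scrape_without_delay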
However, when I try to fire it off as a delayed job, it doesn't seem to do anything at all. On top of that, it doesn't log anything useful either.
Here is what my log looks like, from enqueuing the job through running rake jobs:work:
County Load (1.0ms) SELECT "counties".* FROM "counties" WHERE "counties"."name" = 'Fermanagh' LIMIT 1
(0.1ms) BEGIN
SQL (20.5ms) INSERT INTO "delayed_jobs" ("attempts", "created_at", "failed_at", "handler", "last_error", "locked_at", "locked_by", "priority", "run_at", "updated_at") VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10) RETURNING "id" [["attempts", 0], ["created_at", Mon, 30 May 2011 21:19:25 UTC +00:00], ["failed_at", nil], ["handler", "---
# serialized object omitted for conciseness
\nmethod_name: :refresh_listings_in_the_county_without_delay\nargs: []\n\n"], ["last_error", nil], ["locked_at", nil], ["locked_by", nil], ["priority", 0], ["run_at", Mon, 30 May 2011 21:19:25 UTC +00:00], ["updated_at", Mon, 30 May 2011 21:19:25 UTC +00:00]]
(0.9ms) COMMIT
Delayed::Backend::ActiveRecord::Job Load (0.4ms) SELECT "delayed_jobs".* FROM "delayed_jobs" WHERE (locked_by = 'host:David-Tuites-MacBook-Pro.local pid:7743' AND locked_at > '2011-05-30 17:19:32.116511') LIMIT 1
(0.1ms) BEGIN
SQL (0.3ms) DELETE FROM "delayed_jobs" WHERE "delayed_jobs"."id" = $1 [["id", 42]]
(0.4ms) COMMIT
As you can see, it appears to just insert a job and then delete it straight away? The scrape method takes at least a few minutes to run.
The worst part is that it was working fine last night, and I can't think of anything I've done differently. I tried pinning the gem to an earlier version in case it had been updated recently, but that didn't seem to fix it.
Any ideas?
Have you configured delayed_job to delete failed jobs? Look for the following setting in an initializer: Delayed::Worker.destroy_failed_jobs = true
If so, set it to false, then look in the delayed_jobs table for the failure exception (the last_error column) and debug from there.
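For example, something along these lines (a sketch; the initializer file name is just a convention):

# config/initializers/delayed_job_config.rb
# Keep failed jobs in the delayed_jobs table so last_error can be inspected.
Delayed::Worker.destroy_failed_jobs = false

Then, from a console, you can look at whatever error the worker recorded:

# Rails console
Delayed::Job.where("last_error IS NOT NULL").each { |job| puts job.last_error }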