Laravel queue sometimes duplicates a job



I'm building a website crawler with Laravel 5.7. The queue uses the Redis driver and I run the jobs with Laravel Horizon. I only want one job to be running or queued at a time, but the queue sometimes duplicates a job, and I end up with several instances of the same job running at once. Is there any way to guarantee that only one job runs at a time?

The job class:

use Exception;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Log;
use Illuminate\Support\Facades\Redis;

class MyJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public $timeout = 30000;
    public $tries = 1;
    public $ra_jobid;

    public function __construct($jobid = null)
    {
        // Reuse the id when the job re-dispatches itself; otherwise generate one.
        if (!$jobid) {
            $this->ra_jobid = substr(str_shuffle('1234567890abcdefghijklmnopqrstuvwxyz'), 0, 8);
        } else {
            $this->ra_jobid = $jobid;
        }
    }

    public function handle()
    {
        // Funnel: at most one instance of the crawl may hold the lock at a time.
        Redis::funnel('RetrieveAucfanData')->limit(1)->then(function () {
            // A stop flag set from outside the job aborts the run.
            if (Redis::get('stop_job' . $this->ra_jobid)) {
                Redis::del('stop_job' . $this->ra_jobid);
                throw new JobManuallyStoppedException();
            }
            //crawl a website
        }, function () {
            // Lock not obtained: release the job back onto the queue for 60 seconds.
            return $this->release(60);
        });
    }

    public function failed(Exception $exception)
    {
        if ($exception->getMessage() == 'Job Manually Stopped') {
            Log::debug($exception->getMessage());
        } else {
            // Any other failure: re-dispatch the same job after a minute.
            self::dispatch($this->ra_jobid)
                ->delay(now()->addSeconds(60));
        }
    }
}
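
The JobManuallyStoppedException thrown in handle() isn't shown in the question. Since failed() matches on the message 'Job Manually Stopped', it presumably looks something like this minimal sketch:

// Hypothetical reconstruction -- the question never shows this class.
// failed() compares the message to 'Job Manually Stopped', so the
// exception presumably sets that string as its default message.
class JobManuallyStoppedException extends Exception
{
    public function __construct($message = 'Job Manually Stopped')
    {
        parent::__construct($message);
    }
}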

If the job fails, I want it to be queued again unless it was stopped manually. If the job hits the funnel's limit, I also want to release it back onto the queue.
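
The "stopped manually" path relies on a stop_job Redis key being set from outside the job; that code isn't included in the question, but a hypothetical sketch of it could be:

// routes/web.php -- hypothetical endpoint (not in the question) that flags a
// running MyJob to stop itself. handle() checks this key inside the funnel
// and throws JobManuallyStoppedException when it is present.
use Illuminate\Support\Facades\Redis;
use Illuminate\Support\Facades\Route;

Route::get('/jobs/{jobid}/stop', function ($jobid) {
    Redis::set('stop_job' . $jobid, 1);
    return "stopping {$jobid}";
});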

queue.php

<?php

return [

    /*
    |--------------------------------------------------------------------------
    | Default Queue Driver
    |--------------------------------------------------------------------------
    |
    | Laravel's queue API supports an assortment of back-ends via a single
    | API, giving you convenient access to each back-end using the same
    | syntax for each one. Here you may set the default queue driver.
    |
    | Supported: "sync", "database", "beanstalkd", "sqs", "redis", "null"
    |
    */

    'default' => env('QUEUE_DRIVER', 'redis'),

    /*
    |--------------------------------------------------------------------------
    | Queue Connections
    |--------------------------------------------------------------------------
    |
    | Here you may configure the connection information for each server that
    | is used by your application. A default configuration has been added
    | for each back-end shipped with Laravel. You are free to add more.
    |
    */

    'connections' => [

        'sync' => [
            'driver' => 'sync',
        ],

        'database' => [
            'driver' => 'database',
            'table' => 'jobs',
            'queue' => 'default',
            'retry_after' => 30,
        ],

        'beanstalkd' => [
            'driver' => 'beanstalkd',
            'host' => 'localhost',
            'queue' => 'default',
            'retry_after' => 30,
        ],

        'sqs' => [
            'driver' => 'sqs',
            'key' => 'your-public-key',
            'secret' => 'your-secret-key',
            'prefix' => 'https://sqs.us-east-1.amazonaws.com/your-account-id',
            'queue' => 'your-queue-name',
            'region' => 'us-east-1',
        ],

        'redis' => [
            'driver' => 'redis',
            'connection' => 'default',
            'queue' => 'default',
            'retry_after' => 300,
            'block_for' => 10,
        ],

    ],

    /*
    |--------------------------------------------------------------------------
    | Failed Queue Jobs
    |--------------------------------------------------------------------------
    |
    | These options configure the behavior of failed queue job logging so you
    | can control which database and table are used to store the jobs that
    | have failed. You may change them to any database / table you wish.
    |
    */

    'failed' => [
        'database' => env('DB_CONNECTION', 'mysql'),
        'table' => 'failed_jobs',
    ],

];

Your $timeout value is greater than your retry_after value, which is why two or more workers end up processing the same job: with retry_after set to 300 seconds and a job that runs longer than that, the queue assumes the worker has stalled, releases the job, and another worker picks it up while the first is still crawling. Set retry_after comfortably above the longest time your job can run, or lower $timeout so it stays under retry_after.

I've added a snapshot of the documentation here; it warns that the --timeout value should always be at least several seconds shorter than your retry_after configuration value, otherwise a job may be processed twice.
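
A minimal sketch of that fix using the values from the question (the exact retry_after figure is my own illustration; anything comfortably larger than the job's $timeout of 30000 will do):

// config/queue.php -- sketch: retry_after must exceed the longest time
// a job may run. The job above declares $timeout = 30000 seconds.
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => 'default',
    'retry_after' => 30060, // illustrative; was 300, now > $timeout
    'block_for' => 10,
],

Alternatively, lower $timeout on the job so it fits well under the existing retry_after of 300.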
