PHP script timing out on a cron job



I've been writing scripts for a while now, and several of them run regularly without timing out. This one does time out, though, and I'm wondering whether you have any suggestions for speeding it up.

Its job is to pull IDs from an existing database and use them in API lookups to fetch item data. The problem is that there are something like 45,000 items to look up, and I'm already pipelining the process through curl's multi handle.

I'm a bit stuck and wondering if you have any ideas for making this script run fast enough to avoid the timeout.

Note: my database credentials are redacted here, but the connection itself works fine.

<?php
$s = microtime(true); //Time request variable
//CONNECT TO THE DATABASE
$DB_NAME = 'database';
$DB_HOST = 'mysql.myhost.com';
$DB_USER = 'myusername';
$DB_PASS = 'mypass';
$con = new mysqli($DB_HOST, $DB_USER, $DB_PASS, $DB_NAME);
if (mysqli_connect_errno()) {
    printf("Connect failed: %s\n", mysqli_connect_error());
    exit();
}
//END OF DB CONNECT
//TP UTIL
function array_2d_to_1d($input_array) {
    $output_array = array();
    for ($i = 0; $i < count($input_array); $i++) {
      for ($j = 0; $j < count($input_array[$i]); $j++) {
        $output_array[] = $input_array[$i][$j];
      }
    }
    return $output_array;
}
function tableExists($con, $table) {
    $show = "SHOW TABLES LIKE '$table'";
    $result = $con->query($show) or die($con->error.__LINE__);
    return $result->num_rows == 1;
}
//END TP UTIL
//++++++++++GET ITEM IDS++++++++++++//
$table = "shinies_primitiveitems_table_NEW";
$urls = array(); // initialize so the later foreach is safe even if the table is missing
if(tableExists($con, $table)){
    $query = "SELECT * FROM $table";
    $result = $con->query($query) or die($con->error.__LINE__);
    $index = 0;
    if($result->num_rows > 0) {
        while($row = $result->fetch_assoc()) {
            $urls[$index] = "https://api.guildwars2.com/v2/items/".stripslashes($row['ItemID']);
            //echo $urls[$i]."<br />";
            $index++;
        } //end while loop
    } //end if
}
//++++++++++END GET ITEM IDS++++++++++++//

//++++++++++MULTI CURL REQUESTS FOR API+++++++++++//
// Define the URLs
//$urls = $apiURLArray;
// Create get requests for each URL
$mh = curl_multi_init();
foreach($urls as $i => $url)
{
    //echo $url."<br />";
    $ch[$i] = curl_init($url);
    curl_setopt($ch[$i], CURLOPT_RETURNTRANSFER, 1);
    curl_multi_add_handle($mh, $ch[$i]);
}
// Start performing the request
do {
    $execReturnValue = curl_multi_exec($mh, $runningHandles);
} while ($execReturnValue == CURLM_CALL_MULTI_PERFORM);
// Loop and continue processing the request
while ($runningHandles && $execReturnValue == CURLM_OK) {
  // Wait forever for network
  $numberReady = curl_multi_select($mh);
  if ($numberReady == -1) {
    // curl_multi_select can return -1 on some curl versions; sleep briefly
    // so this loop doesn't spin at 100% CPU
    usleep(100000);
  }
  // Pull in any new data, or at least handle timeouts
  do {
    $execReturnValue = curl_multi_exec($mh, $runningHandles);
  } while ($execReturnValue == CURLM_CALL_MULTI_PERFORM);
}
// Check for any errors
if ($execReturnValue != CURLM_OK) {
  trigger_error("Curl multi read error $execReturnValue\n", E_USER_WARNING);
}
// Extract the content
$res = array();
foreach($urls as $i => $url)
{
  // Check for errors
  $curlError = curl_error($ch[$i]);
  if($curlError == "") {
    $res[$i] = curl_multi_getcontent($ch[$i]);
  } else {
    print "Curl error on handle $i: $curlError\n";
  }
  // Remove and close the handle
  curl_multi_remove_handle($mh, $ch[$i]);
  curl_close($ch[$i]);
}
// Clean up the curl_multi handle
curl_multi_close($mh);
//var_dump(json_decode($res, true));
//echo count($res)."<br />";
//Decode data
$dataArray = array();
foreach ($res as $i => $json) { // $res may be sparse if some requests failed
    $dataArray[$i] = json_decode($json, true);
}
//echo count($dataArray)."<br />";
//var_dump($dataArray);

//$data = array_2d_to_1d($dataArray);
//echo count($data)."<br />";
/*
//Find attributes of each item
for($i=0;$i<count($data);$i++){
    echo $data[$i]['name']." - ".$data[$i]['icon'];
}*/
//turn dataArray into a single dimensional data array
//$data = array_2d_to_1d($dataArray);
//print_r($data);
//++++++++++END REQUEST+++++++++++//

// Print the response data - DEBUG
echo "<p>Total request time: ".round(microtime(true) - $s, 4)." seconds.</p>";
?>

Is the script timing out because of the max execution time limit? If so, maybe you can raise the timeout with http://php.net/manual/en/function.set-time-limit.php or something similar?
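As a minimal sketch, the limit can be lifted at the top of the script (note that when cron runs PHP through the CLI, `max_execution_time` already defaults to 0, so this mainly matters if the job is triggered through a web server; the memory value is just an example to tune):

```php
<?php
// Allow the script to run indefinitely; pass a number of
// seconds instead of 0 if you want to keep an upper bound.
set_time_limit(0);

// Raising the memory limit may also help when buffering tens of
// thousands of API responses in memory at once.
ini_set('memory_limit', '512M');
```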

If by "timing out" you mean all the curl requests stop returning, maybe the remote end rate-limits requests, or you're hitting some other limit by trying to open 45,000 TCP connections at once?
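One way to stay under such limits is to batch the work instead of adding all 45,000 handles to a single multi handle at once. A rough sketch under assumptions (the batch size of 200 and the per-request timeout are arbitrary starting points, not values from your code; tune them to what the API tolerates):

```php
<?php
// Process a URL list in batches so only a limited number of
// TCP connections are open at any one time.
function fetch_in_batches(array $urls, $batchSize = 200) {
    $results = array();
    foreach (array_chunk($urls, $batchSize, true) as $batch) {
        $mh = curl_multi_init();
        $handles = array();
        foreach ($batch as $i => $url) {
            $ch = curl_init($url);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
            curl_setopt($ch, CURLOPT_TIMEOUT, 30); // don't hang forever on one item
            curl_multi_add_handle($mh, $ch);
            $handles[$i] = $ch;
        }
        // Drive this batch to completion.
        do {
            $status = curl_multi_exec($mh, $running);
            if ($running) {
                curl_multi_select($mh);
            }
        } while ($running && $status == CURLM_OK);
        // Collect the bodies and clean up, keeping the original keys.
        foreach ($handles as $i => $ch) {
            if (curl_error($ch) === '') {
                $results[$i] = curl_multi_getcontent($ch);
            }
            curl_multi_remove_handle($mh, $ch);
            curl_close($ch);
        }
        curl_multi_close($mh);
    }
    return $results;
}
```

Separately, it's worth checking whether the API offers a bulk form (many APIs, including the Guild Wars 2 v2 API as I understand it, accept something like `/v2/items?ids=1,2,3` with a couple of hundred ids per call); if so, 45,000 single-item lookups collapse into a few hundred requests, which helps far more than any tuning of the multi handle.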
