How to improve the efficiency of a Perl script that uses sqlplus



I have a Perl script that fetches data from an Oracle database through sqlplus... A new row is added to this table whenever the state of a particular serial number changes. We now need to select the rows for every state change and produce a CSV file containing the old state, the new state and a few other fields. Sample database table:

SERIALNUMBER         STATE                AT                        OPERATORID    SUBSCRIBERID    TRANSACTIONID
51223344558899       Available            20081008T10:15:47         vsuser
51223344558857       Available            20081008T10:15:49         vsowner
51223344558899       Used                 20081008T10:20:25         vsuser
51223344558860       Stolen               20081008T10:15:49         vsanyone
51223344558857       Damaged              20081008T10:50:49         vsowner
51223344558899       Damaged              20081008T10:50:25         vsuser
51343253335355       Available            20081008T11:15:47         vsindian

My script:

#! /usr/bin/perl
#use warnings;
use strict;

#my $circle =
#my $schema =
my $basePath = "/scripts/Voucher-State-Change";
#my ($sec, $min, $hr, $day, $month, $years) = localtime(time);
#$years_+=1900;$mont_+=1;
#my $timestamp=sprintf("%d%02d%02d",$years,$mont,$moday);

sub getDate {
    my $daysago=shift;
    $daysago=0 unless ($daysago);
    #my @months=qw(Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec);
    my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime(time-(86400*$daysago));
    # YYYYMMDD, e.g. 20060126
    return sprintf("%d%02d%02d",$year+1900,$mon+1,$mday);
    }
my $filedate=getDate(1);
#my $startdate="${filedate}T__:__:__";
my $startdate="20081008T__:__:__";
print "$startdaten";

##### Generating output file---
my $outputFile = "${basePath}/VoucherStateChangeReport.$filedate.csv";
open (WFH, ">", "$outputFile") or die "Can't open output file $outputFile for writing: $!n";
print WFH "VoucherSerialNumber,Date,Time,OldState,NewState,UserIdn";

##### Generating log file---
my $logfile = "${basePath}/VoucherStateChange.$filedate.log";
open (STDOUT, ">>", "$logfile") or die "Can't open logfile $logfile for writing: $!n";
open (STDERR, ">>", "$logfile") or die "Can't open logfile $logfile for writing: $!n";
print "$logfilen";
##### Now login to sqlplus-----
my $SQLPLUS='/opt/oracle/product/11g/db_1/bin/sqlplus -S system/coolman7@vsdb';
`$SQLPLUS \@${basePath}/VoucherQuery1.sql $startdate > ${basePath}/QueryResult1.txt`;

open (FH1, "${basePath}/QueryResult1.txt");
while (my $serial = <FH1>) {
     chomp ($serial);
     my $count = `$SQLPLUS \@${basePath}/VoucherQuery2.sql $serial $startdate`;
     chomp ($count);
     $count =~ s/\s+//g;
     #print "$countn";
     next if $count == 1;
    `$SQLPLUS \@${basePath}/VoucherQuery3.sql $serial $startdate > ${basePath}/QueryResult3.txt`;
#  print "select * from sample where SERIALNUMBER = $serial----n";
     open (FH3, "${basePath}/QueryResult3.txt");

     my ($serial_number, $state, $at, $operator_id);
     my $count1 = 0;
     my $old_state;
     while (my $data = <FH3>) {
            chomp ($data);
                    #print $data."n";
           my @data = split (/\s+/, $data);
           my ($serial_number, $state, $at, $operator_id) = @data[0..3];
           #my $serial_number = $data[0];
           #my $state = $data[1];
           #my $at = $data[2];
           #my $operator_id = $data[3];

           $count1++;
           if ($count1 == 1) {
              $old_state = $data[1];
              next;
              }
           my ($date, $time) = split (/T/, $at);
           $date =~ s/(\d{4})(\d{2})(\d{2})/$1-$2-$3/;
           print WFH "$serial_number,$date,$time,$old_state,$state,$operator_id\n";
           $old_state = $data[1];
           }
       }
close(WFH);

The query in VoucherQuery1.sql:

select distinct SERIALNUMBER from sample where AT like '&1';

The query in VoucherQuery2.sql:

select count(*) from sample where SERIALNUMBER = '&1' and AT like '&2';

The query in VoucherQuery3.sql:

select * from sample where SERIALNUMBER = '&1' and AT like '&2';

And my sample output:

VoucherSerialNumber,Date,Time,OldState,NewState,UserId
51223344558857,2008-10-08,10:50:49,Available,Damaged,vsowner
51223344558899,2008-10-08,10:20:25,Available,Used,vsuser
51223344558899,2008-10-08,10:50:25,Used,Damaged,vsuser

The script works fine. The problem is that the real database table has millions of records for a given date, so the script has a performance problem. Could you suggest how to make it more efficient, both in time and in load? The only constraint is that I cannot use the DBI module for this. Also, if any error occurs in the SQL queries, the error message ends up in the QueryResult?.txt files. I want to catch those errors and have them go to my log file instead. How can I achieve that? Thanks.
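One way to get the SQL errors into the log instead of the QueryResult?.txt files (a minimal sketch, not taken from the original script): put WHENEVER SQLERROR EXIT FAILURE at the top of each .sql file so that sqlplus returns a non-zero exit status on an error, merge stderr into the captured output with 2>&1, and test $? after every backtick call. The run_sql helper below is hypothetical; the connect string and base path are the ones from the script above, and because STDOUT/STDERR are already redirected to the log file, the die message ends up there.

#! /usr/bin/perl
use strict;
use warnings;

# Same connect string and base path as in the original script.
my $SQLPLUS  = '/opt/oracle/product/11g/db_1/bin/sqlplus -S system/coolman7@vsdb';
my $basePath = "/scripts/Voucher-State-Change";

# Hypothetical helper: run one sqlplus script, capture stdout and stderr,
# and die on failure (the die text reaches the log via the redirected STDERR).
# Assumes each .sql file begins with:  WHENEVER SQLERROR EXIT FAILURE
sub run_sql {
    my ($sqlfile, @args) = @_;
    my $out = `$SQLPLUS \@$sqlfile @args 2>&1`;
    if ($? != 0 || $out =~ /^\s*(ORA-|SP2-)/m) {
        die "sqlplus failed for $sqlfile @args:\n$out";
    }
    return $out;
}

# Example call, mirroring the first query of the script:
my $serials = run_sql("${basePath}/VoucherQuery1.sql", "20081008T__:__:__");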

I think you need to tune the queries. A good starting point is to use explain plan, if it is an Oracle database.
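For example, the plan for the per-serial lookup can be pulled straight from sqlplus (a sketch that reuses the connect string from the question; the serial number and date literals are only placeholders). If the plan shows a full table scan on SAMPLE, an index on (SERIALNUMBER, AT), or at least on AT, is the first thing to try.

#! /usr/bin/perl
use strict;
use warnings;

my $SQLPLUS = '/opt/oracle/product/11g/db_1/bin/sqlplus -S system/coolman7@vsdb';

# Ask Oracle how it would execute the query from VoucherQuery3.sql.
my $sql = <<'SQL';
EXPLAIN PLAN FOR
  select * from sample
  where SERIALNUMBER = '51223344558899'
    and AT like '20081008T__:__:__';
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
EXIT;
SQL

open my $pipe, '|-', $SQLPLUS or die "cannot start sqlplus: $!";
print {$pipe} $sql;
close $pipe or warn "sqlplus exited with status $?\n";

Beyond indexing, most of the elapsed time probably comes from starting sqlplus two extra times for every serial number. Since the AT strings sort chronologically, the old-state/new-state pairing could in principle be produced by a single query using the LAG analytic function, e.g. LAG(STATE) OVER (PARTITION BY SERIALNUMBER ORDER BY AT), so that one sqlplus call returns the whole report and the Perl loop only formats the CSV; that is a sketch of the idea, not tested against the real schema.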
