Incorrect checksum for freed object error related to passing multi-dimensional vectors to a static function in C++



For the past few months I have been working on a very extensive research project. Unfortunately, for a variety of reasons I cannot go into detail about the problem, or most of the program, but I have run into an issue I cannot seem to figure out, so short of trying Stack Overflow I am out of options.

The part of the program I can show takes some data stored in 2-D or 3-D vectors and passes it to a Kernel Ridge Regression function I am building. Inside this function I split the various data sets into training and test sets for cross validation. After computing the kernel and performing regularization, I pass the modified kernel, together with the test data, into a Gauss-Jordan elimination function I built, to solve for x in Ax = b. The problem I am having is that after this function completes, the program crashes with the following error:

agnigen(28820,0x7fff7512b000) malloc: *** error for object 0x7f9fa2a063f8: incorrect checksum for freed object - object was probably modified after being freed.
*** set a breakpoint in malloc_error_break to debug
Abort trap: 6

Using valgrind I have now narrowed the error down to the x vector, which is passed by reference into the Gauss-Jordan elimination function. However, I have absolutely no idea why this error is occurring, and I would appreciate any help you can give me. Below is the code I am able to show (since it does not give away anything technically classified), but if you need more I will do my best to provide it. Thanks for your help.
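For reference, the failure mode itself is easy to demonstrate. Below is a minimal, hypothetical sketch (none of it is the project's code): an out-of-bounds write through a vector<vector<double> > corrupts heap metadata, and the abort only surfaces later, when the block is freed.

    #include <cstddef>
    #include <vector>

    // Hypothetical helper: sizes x from b, then writes past the end of a row.
    static void fillPastEnd(std::vector<std::vector<double> > &x,
                            const std::vector<std::vector<double> > &b)
    {
        x.resize(b.size());
        for(std::size_t i = 0; i < x.size(); i++)
            x[i].resize(b[i].size());   // every row of x gets exactly one column...
        for(std::size_t i = 0; i < x.size(); i++)
            x[i][2] = 0.0;              // ...but column 2 is written: out of bounds
    }

    int main()
    {
        std::vector<std::vector<double> > b(4, std::vector<double>(1, 1.0));
        std::vector<std::vector<double> > x;
        fillPastEnd(x, b);              // undefined behavior: nothing aborts here
        return 0;                       // corruption is typically caught only at
    }                                   // deallocation, if it aborts at all

Depending on the allocator this toy program may or may not abort, but valgrind flags the invalid write either way, which matches the pattern here: the bad access happens inside the solver, while the crash only appears after it returns.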

This is where the KRR function is called, in a file named Run:

    LearningAlgorithms::KRR(dataStorage, trainingFP, trainingForces, prediction);

Here is the KRR function:

    void LearningAlgorithms::KRR(DataStorage *dS, vector<vector<double> > trainingPoints, vector<vector<double> > trainingForces, vector<double> &prediction)
    {
        //variable declarations for KRR
        int cvFolds = atoi(dS->getCV()[1].c_str());
        int trainPartitionSize = (int) trainingPoints.size() / cvFolds;
        double distance = 0.0;
        vector<double> lambdas, sigmas, errors;
        vector<vector<double> > alphas, trainingSet, testSet, randomizedTrainingPoints, trainingSetForces, testSetForces, randomizedTrainingForces, kernelMatrix;
        vector<vector<vector<double> > > cvAlphas;
        prediction.resize(trainingPoints.size());
        trainingSet.resize(trainPartitionSize);
        testSet.resize(trainingPoints.size() - trainPartitionSize);
        trainingSetForces.resize(trainPartitionSize);
        testSetForces.resize(trainingPoints.size() - trainPartitionSize);
        for(int i = 0; i < trainingSet.size(); i++)
            trainingSet[i].resize(trainingPoints[0].size());
        for(int i = 0; i < testSet.size(); i++)
            testSet[i].resize(trainingPoints[0].size());
        for(int i = 0; i < trainingSetForces.size(); i++)
            trainingSetForces[i].resize(1);
        for(int i = 0; i < testSetForces.size(); i++)
            testSetForces[i].resize(1);
        //set sizes of hyper-parameter vectors based on whether cross validation was chosen or not
        if(dS->getCV()[0] == "T" || dS->getCV()[0] == "t")
        {
            cout<<"Training model using "<<cvFolds<<"-fold Kernel Ridge Regression"<<endl;
            lambdas.resize(cvFolds);
            sigmas.resize(cvFolds);
            errors.resize(cvFolds);
            cvAlphas.resize(cvFolds);
            for(int i = 0; i < lambdas.size(); i++)
            {
                lambdas[i] = Math::randomDouble(dS->getLambdaRange()[0], dS->getLambdaRange()[1]);
                sigmas[i] = Math::randomDouble(dS->getSigmaRange()[0], dS->getSigmaRange()[1]);
            }
        }
        else
        {
            cout<<"Training model using Kernel Ridge Regression without cross validation"<<endl;
            lambdas.resize(1);
            sigmas.resize(1);
            errors.resize(1);
            cvAlphas.resize(1);
            lambdas[0] = Math::randomDouble(dS->getLambdaRange()[0], dS->getLambdaRange()[1]);
            sigmas[0] = Math::randomDouble(dS->getSigmaRange()[0], dS->getSigmaRange()[1]);
        }
        //randomizes training points to diversify partitions during cross validation
        Utilities::coordinatedRandomization2Darray(trainingPoints, randomizedTrainingPoints, trainingForces, randomizedTrainingForces);
        //optimize lambda and sigma based on range provided
        for(int lambda = 0; lambda < lambdas.size(); lambda++)
        {
            for(int sigma = 0; sigma < sigmas.size(); sigma++)
            {
                for(int fold = 0; fold < cvFolds; fold++)
                {
                    int trainingIndex = 0;
                    int testIndex = 0;
                    int sum = 0;
                    //partitions training points into training and test sets based on cross validation
                    for(int training = 0; training < randomizedTrainingPoints.size(); training++)
                    {
                        sum = 0;
                        if(training >= trainPartitionSize * fold && training < trainPartitionSize * (fold + 1))
                        {
                            for(int i = 0; i < randomizedTrainingPoints[0].size(); i++)
                                trainingSet[trainingIndex][i] = randomizedTrainingPoints[training][i];
                            for(int i = 0; i < randomizedTrainingForces[0].size(); i++)
                                sum += randomizedTrainingForces[training][i];
                            trainingSetForces[trainingIndex][0] = sum;
                            trainingIndex++;
                        }
                        else
                        {
                            for(int i = 0; i < randomizedTrainingPoints[0].size(); i++)
                                testSet[testIndex][i] = randomizedTrainingPoints[training][i];
                            for(int i = 0; i < randomizedTrainingForces[0].size(); i++)
                                sum += randomizedTrainingForces[training][i];
                            testSetForces[trainingIndex][0] = sum;
                            testIndex++;
                        }
                    }
                    //clears the kernel matrix and resizes based on test set size
                    kernelMatrix.clear();
                    kernelMatrix.resize(testSet.size());
                    for(int km = 0; km < kernelMatrix.size(); km++)
                        kernelMatrix[km].resize(testSet.size());
                    //determines the kernel for a given test set
                    for(int testIndex = 0; testIndex < testSet.size(); testIndex++)
                    {
                        for(int ntrain = 0; ntrain < testSet.size(); ntrain++)
                        {
                            distance = 0.0;
                            for(int i = 0; i < testSet[testIndex].size(); i++)
                                distance += pow(testSet[testIndex][i] - testSet[ntrain][i], 2);
                            kernelMatrix[testIndex][ntrain] = ... (sorry, can't show this)
                        }
                    }
                    //performs a lambda regularization on the kernel matrix to account for noise in the data
                    Math::regularization(kernelMatrix, lambdas[lambda]);

                    //solves linear system of equations to get our parameterization term (b) and our alpha vectors
                    //A = kernel matrix, x = alpha, b = forces : Ax = b
                    Math::gaussJordanElimination(kernelMatrix, alphas, testSetForces);
                }
            }
        }
    }
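For reference on dimensions: at this call site kernelMatrix is square, testSet.size() x testSet.size(), and every row of testSetForces holds a single value, so the solver receives A as a t x t matrix and b as t x 1, with t = testSet.size(). This is the standard regularized KRR system (K + lambda*I) * alpha = f, so the alphas should come back as a t x 1 column as well.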

Here is the Gauss-Jordan elimination function:

    void Math::gaussJordanElimination(vector<vector<double> > A, vector<vector<double> > &x, vector<vector<double> > b)
    {
        int i, icolumn, irow, j, k, l, ll;
        double ainvabs, dum, pivinv;
        int n = A.size();
        int m = A[0].size();
        vector<int> indxc, indxr, ipivot;
        vector<vector<double> > Ainv;
        indxc.resize(n);
        indxr.resize(n);
        ipivot.resize(n);
        x.resize(b.size());
        //puts values from b into x, and A into A inverse
        for(int i = 0; i < b.size(); i++)
        {
            x[i].resize(b[i].size());
            for(int j = 0; j < x[i].size(); j++)
                x[i][j] = b[i][j];
        }
        Ainv.resize(n);
        for(int i = 0; i < A.size(); i++)
        {
            Ainv[i].resize(m);
            for(int j = 0; j < Ainv[i].size(); j++)
                Ainv[i][j] = A[i][j];
        }

        //sets default values for the pivot vector
        for(j = 0; j < n; j++)
            ipivot[j] = 0;

        //main loop over the inverse dimensions for column reductions
        for(i = 0; i < n; i++)
        {
            ainvabs = 0.0;
            //search for pivot elements
            for(j = 0; j < n; j++)
            {
                if(ipivot[j] != 1)
                {
                    for(k = 0; k < n; k++)
                    {
                        if(ipivot[k] == 0)
                        {
                            if(fabs(Ainv[j][k]) >= ainvabs)
                            {
                                ainvabs = fabs(Ainv[j][k]);
                                irow = j;
                                icolumn = k;
                            }
                        }
                    }
                }
            }
            //pivots determined, swap columns with rows (relabeling on columns/rows of matrix)
            ++(ipivot[icolumn]);
            if(irow != icolumn)
            {
                for(l = 0; l < n; l++)
                {
                    double temp1 = Ainv[irow][l];
                    double temp2 = Ainv[icolumn][l];
                    Ainv[irow][l] = temp2;
                    Ainv[icolumn][l] = temp1;
                }
                for(l = 0; l < m; l++)
                {
                    double temp1 = x[irow][l];
                    double temp2 = x[icolumn][l];
                    x[irow][l] = temp2;
                    x[icolumn][l] = temp1;
                }
            }
            indxr[i] = irow;
            indxc[i] = icolumn;
            if(Ainv[icolumn][icolumn] != 0.0)
                pivinv = 1 / Ainv[icolumn][icolumn];
            for(l = 0; l < n; l++)
                Ainv[icolumn][l] *= pivinv;
            for(l = 0; l < m; l++)
                x[icolumn][l] *= pivinv;
            for(ll = 0; ll < n; ll++)
            {
                if(ll != icolumn)
                {
                    dum = Ainv[ll][icolumn];
                    Ainv[ll][icolumn] = 0.0;
                    for(l = 0; l < n; l++)
                        Ainv[ll][l] -= Ainv[icolumn][l] * dum;
                    for(l = 0; l < m; l++)
                        x[ll][l] -= x[icolumn][l] * dum;
                }
            }
        }
        //reductions complete, interchange elements in reverse order as before
        for(l = n - 1; l >= 0; l--)
        {
            if(indxr[l] != indxc[l])
            {
                for(k = 0; k < n; k++)
                {
                    double temp1 = Ainv[k][indxr[l]];
                    double temp2 = Ainv[k][indxc[l]];
                    Ainv[k][indxr[l]] = temp2;
                    Ainv[k][indxc[l]] = temp1;
                }
            }
        }
    }
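A note on how this solver is meant to work: x starts out as a copy of b and is then reduced in place alongside Ainv, so when the elimination finishes, x holds the solution of Ax = b (the usual full-pivoting Gauss-Jordan pattern). In particular, every loop of the form for(l = 0; l < m; l++) walks across all m columns of a row of x.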

EDIT

These are the messages I get when running valgrind. They are very confusing; please let me know if it would be easier to shorten them a bit. I honestly do not know which parts are useful and which are not, so I have just copied the whole output.

==52421== Invalid write of size 8
==52421==    at 0x100021B67:                     
Math::gaussJordanElimination(std::__1::vector<std::__1::vector<double,     std::__1::allocator<double> >, std::__1::allocator<std::__1::vector<double, std::__1::allocator<double> > > >, std::__1::vector<std::__1::vector<double, std::__1::allocator<double> >, std::__1::allocator<std::__1::vector<double, std::__1::allocator<double> > > >&, std::__1::vector<std::__1::vector<double, std::__1::allocator<double> >, std::__1::allocator<std::__1::vector<double, std::__1::allocator<double> > > >) (type_traits:3599)
==52421==    by 0x100025D3D: LearningAlgorithms::KRR(DataStorage*, std::__1::vector<std::__1::vector<double, std::__1::allocator<double> >, std::__1::allocator<std::__1::vector<double, std::__1::allocator<double> > > >, std::__1::vector<std::__1::vector<double, std::__1::allocator<double> >, std::__1::allocator<std::__1::vector<double, std::__1::allocator<double> > > >, std::__1::vector<double, std::__1::allocator<double> >&) (LearningAlgorithms.cpp:171)
==52421==    by 0x100018FAE: Run::trainModel(DataStorage*) (Run.cpp:291)
==52421==    by 0x100000E8A: main (Main.cpp:131)
==52421==  Address 0x1021f62e0 is 240 bytes inside a block of size 312 free'd
==52421==    at 0x10004A32D: free (vg_replace_malloc.c:534)
==52421==    by 0x100018E91: Run::trainModel(DataStorage*) (new:176)
==52421==    by 0x100000E8A: main (Main.cpp:131)
==52421==  Block was alloc'd at
==52421==    at 0x100049D81: malloc (vg_replace_malloc.c:303)
==52421==    by 0x1009A27DD: operator new(unsigned long) (in /usr/lib/libc++.1.dylib)
==52421==    by 0x100006AFF: _ZNSt3__16vectorINS0_IdNS_9allocatorIdEEEENS1_IS3_EEE18__construct_at_endIPS3_EENS_9enable_ifIXsr21__is_forward_iteratorIT_EE5valueEvE4typeES9_S9_m (memory:1596)
==52421==    by 0x100006A05: std::__1::vector<std::__1::vector<double, std::__1::allocator<double> >, std::__1::allocator<std::__1::vector<double, std::__1::allocator<double> > > >::vector(std::__1::vector<std::__1::vector<double, std::__1::allocator<double> >, std::__1::allocator<std::__1::vector<double, std::__1::allocator<double> > > > const&) (vector:1203)
==52421==    by 0x100018D8D: Run::trainModel(DataStorage*) (vector:1195)
==52421==    by 0x100000E8A: main (Main.cpp:131)       



==52421== Invalid read of size 8
==52421==    at 0x100021BA5: Math::gaussJordanElimination(std::__1::vector<std::__1::vector<double, std::__1::allocator<double> >, std::__1::allocator<std::__1::vector<double, std::__1::allocator<double> > > >, std::__1::vector<std::__1::vector<double, std::__1::allocator<double> >, std::__1::allocator<std::__1::vector<double, std::__1::allocator<double> > > >&, std::__1::vector<std::__1::vector<double, std::__1::allocator<double> >, std::__1::allocator<std::__1::vector<double, std::__1::allocator<double> > > >) (type_traits:3597)
==52421==    by 0x100025D3D: LearningAlgorithms::KRR(DataStorage*, std::__1::vector<std::__1::vector<double, std::__1::allocator<double> >, std::__1::allocator<std::__1::vector<double, std::__1::allocator<double> > > >, std::__1::vector<std::__1::vector<double, std::__1::allocator<double> >, std::__1::allocator<std::__1::vector<double, std::__1::allocator<double> > > >, std::__1::vector<double, std::__1::allocator<double> >&) (LearningAlgorithms.cpp:171)
==52421==    by 0x100018FAE: Run::trainModel(DataStorage*) (Run.cpp:291)
==52421==    by 0x100000E8A: main (Main.cpp:131)
==52421==  Address 0x102e54ec8 is 232 bytes inside a block of size 312 free'd
==52421==    at 0x10004A32D: free (vg_replace_malloc.c:534)
==52421==    by 0x100018E91: Run::trainModel(DataStorage*) (new:176)
==52421==    by 0x100000E8A: main (Main.cpp:131)
==52421==  Block was alloc'd at
==52421==    at 0x100049D81: malloc (vg_replace_malloc.c:303)
==52421==    by 0x1009A27DD: operator new(unsigned long) (in /usr/lib/libc++.1.dylib)
==52421==    by 0x100006AFF: _ZNSt3__16vectorINS0_IdNS_9allocatorIdEEEENS1_IS3_EEE18__construct_at_endIPS3_EENS_9enable_ifIXsr21__is_forward_iteratorIT_EE5valueEvE4typeES9_S9_m (memory:1596)
==52421==    by 0x100006A05: std::__1::vector<std::__1::vector<double, std::__1::allocator<double> >, std::__1::allocator<std::__1::vector<double, std::__1::allocator<double> > > >::vector(std::__1::vector<std::__1::vector<double, std::__1::allocator<double> >, std::__1::allocator<std::__1::vector<double, std::__1::allocator<double> > > > const&) (vector:1203)
==52421==    by 0x100018D8D: Run::trainModel(DataStorage*) (vector:1195)
==52421==    by 0x100000E8A: main (Main.cpp:131)



valgrind: m_mallocfree.c:303 (SizeT get_bszB_as_is(Block *)): Assertion     'bszB_lo == bszB_hi' failed.
valgrind: Heap block lo/hi size mismatch: lo = 80, hi = 195611105019399384.
This is probably caused by your program erroneously writing past the
end of a heap block and corrupting heap metadata.  If you fix any
invalid writes reported by Memcheck, this assertion failure will
probably go away.  Please try that before reporting this as a bug.   



host stacktrace:
==52421==    at 0x2380501B3: ???
==52421==    by 0x2380505CC: ???
==52421==    by 0x2380505AA: ???
==52421==    by 0x23805E6F8: ???
==52421==    by 0x238040389: ???
==52421==    by 0x23803F316: ???
==52421==    by 0x238046D47: ???
==52421==    by 0x23803E613: ???
==52421==    by 0x238006E47: ???
==52421==    by 0x7000012B7851: ???
==52421==    by 0x700000DB1EEF: ???
==52421==    by 0x700000DB1EEF: ???
==52421==    by 0x700000009C9F: ???
==52421==    by 0x700000009CAF: ???
sched status:
  running_tid=1



Thread 1: status = VgTs_Runnable (lwpid 2567)
==52421==    at 0x100021BA9: Math::gaussJordanElimination(std::__1::vector<std::__1::vector<double, std::__1::allocator<double> >, std::__1::allocator<std::__1::vector<double, std::__1::allocator<double> > > >, std::__1::vector<std::__1::vector<double, std::__1::allocator<double> >, std::__1::allocator<std::__1::vector<double, std::__1::allocator<double> > > >&, std::__1::vector<std::__1::vector<double, std::__1::allocator<double> >, std::__1::allocator<std::__1::vector<double, std::__1::allocator<double> > > >) (type_traits:3598)
==52421==    by 0x100025D3D: LearningAlgorithms::KRR(DataStorage*, std::__1::vector<std::__1::vector<double, std::__1::allocator<double> >, std::__1::allocator<std::__1::vector<double, std::__1::allocator<double> > > >, std::__1::vector<std::__1::vector<double, std::__1::allocator<double> >, std::__1::allocator<std::__1::vector<double, std::__1::allocator<double> > > >, std::__1::vector<double, std::__1::allocator<double> >&) (LearningAlgorithms.cpp:171)
==52421==    by 0x100018FAE: Run::trainModel(DataStorage*) (Run.cpp:291)
==52421==    by 0x100000E8A: main (Main.cpp:131)

So after 5 days of frustration, I finally figured it out.

In the Gauss-Jordan elimination function:

int m = A[0].size()  
should be...

int m = b[0].size()  

m determines the number of columns of x, so if x is sized to match A while b has different dimensions than A, the matrix multiplication does not work.
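Concretely, A here is the n x n kernel matrix while b (and therefore x) has a single column, so the column count has to come from b. With m taken from A, every loop of the form for(l = 0; l < m; l++) over x ran n columns deep into rows that only hold one double, which is exactly the invalid write valgrind was reporting. A minimal sketch of the corrected sizing (same variable names as above):

    int n = A.size();     // A is the n x n (regularized) kernel matrix
    int m = b[0].size();  // x must have as many columns as b, not as A
    x.resize(b.size());
    for(int i = 0; i < b.size(); i++)
        x[i].resize(m);   // rows of x now match b, so loops over l < m stay in bounds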

There are probably more bugs in this code, but this one was the immediate cause of the crash.
