Spatio-temporal query in Python with many records



I have a DataFrame of 600,000 x/y points with date-time information, plus another field, "status", which holds additional descriptive information.

My goal is, for each record:

  • sum the column "status" over the records that fall within a certain spatio-temporal buffer

The specific buffer is: within t-8 hours and < 100 meters.

Currently I have the data in a pandas DataFrame.

I can loop over the rows and, for each record, subset by the dates of interest, then compute the distances and restrict the selection further. With this many records, however, this is still quite slow.

  • This takes 4.4 hours to run

I can see that I could build a three-dimensional KD-tree with x, y and the date as epoch time. However, I am not sure how to correctly limit the distance when combining the dates with the geographic distance.
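One way I could imagine doing it (a rough sketch only, assuming SciPy's cKDTree is available; the helper name and parameters are just illustrative) is to scale the epoch seconds so that the 8-hour window maps onto the 100 m radius, query a slightly oversized ball that covers the corners of the space/time box, and then filter the candidates exactly afterwards:

import numpy as np
import pandas as pd
from scipy.spatial import cKDTree

def status_sums_kdtree(df, dist_m=100.0, hours_before=8, minutes_after=1):
    # epoch seconds as float
    t = df['date'].values.astype('datetime64[s]').astype(np.int64).astype(float)
    xs = df['long'].values
    ys = df['lat'].values
    users = df['user'].values
    status = df['status'].values
    # scale seconds so that the 8-hour window maps onto the 100 m radius
    time_scale = dist_m / (hours_before * 3600.0)
    pts = np.column_stack([xs, ys, t * time_scale])
    tree = cKDTree(pts)
    # the query ball must cover the corner of the space/time box;
    # candidates are filtered exactly below, so over-selection is harmless
    candidates = tree.query_ball_point(pts, r=np.sqrt(2) * dist_m)
    before_s = hours_before * 3600.0
    after_s = minutes_after * 60.0
    out = np.zeros(len(df), dtype=int)
    for i, idx in enumerate(candidates):
        idx = np.asarray(idx)
        dt = t[idx] - t[i]
        dxy = np.hypot(xs[idx] - xs[i], ys[idx] - ys[i])
        keep = (dt >= -before_s) & (dt <= after_s) & (dxy <= dist_m) & (users[idx] != users[i])
        out[i] = status[idx[keep]].sum()
    return pd.DataFrame(out)

Whether this beats the brute-force loop would depend on how many candidates each ball query returns; the exact filtering at the end is what should keep the result identical to the loop version.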

Here is some reproducible code for you to test:

Imports

import numpy.random as npr
import numpy as np
import pandas as pd
from pandas import DataFrame, date_range
from datetime import datetime, timedelta

Create data

np.random.seed(111)

Function to generate test data

def CreateDataSet(Number=1):
    Output = []
    for i in range(Number):
        # Create a date range with hour frequency
        date = date_range(start='10/1/2012', end='10/31/2012', freq='H')
        # Create long lat data
        laty = npr.normal(4815862, 5000,size=len(date))
        longx = npr.normal(687993, 5000,size=len(date))
        # status of interest
        status = [0,1]
        # Make a random list of statuses
        random_status = [status[npr.randint(low=0,high=len(status))] for i in range(len(date))]
        # user pool
        user = ['sally','derik','james','bob','ryan','chris']
        # Make a random list of users 
        random_user = [user[npr.randint(low=0,high=len(user))] for i in range(len(date))]
        Output.extend(zip(random_user, random_status, date, longx, laty))
    return pd.DataFrame(Output, columns = ['user', 'status', 'date', 'long', 'lat'])
#Create data  
data = CreateDataSet(3)
len(data)
#some time deltas
before = timedelta(hours = 8)
after = timedelta(minutes = 1)

The function to speed up

def work(df):
    output = []
    #loop through the data indices
    for i in range(0, len(df)):
        l = []
        #first we filter the data by date to have a smaller list to compute distances for
        #create a mask to query all dates within the range for date i
        date_mask = (df['date'] >= df['date'].iloc[i]-before) & (df['date'] <= df['date'].iloc[i]+after)
        #create a mask to query all users who are not user i (themselves)
        user_mask = df['user']!=df['user'].iloc[i]
        #apply masks
        dists_to_check = df[date_mask & user_mask]
        #for point i, create the coordinate to calculate distances from
        a = np.array((df['long'].iloc[i], df['lat'].iloc[i]))
        #create an array of coordinates to check on the masked data
        b = np.array((dists_to_check['long'].values, dists_to_check['lat'].values))
        #for j in the date-queried data
        for j in range(len(dists_to_check)):
            #compute the euclidean distance between point a and each point of b (the date-masked data)
            x = np.linalg.norm(a-np.array((b[0][j], b[1][j])))
            #if the distance is within our range of interest, append the index to a list
            if x <= 100:
                l.append(j)
        try:
            #use the list of desired indices 'l' to query a final subset of the data
            matched = dists_to_check.iloc[l]
            #summarize the column of interest, then append to the output list
            output.append(matched['status'].sum())
        except IndexError:
            output.append(0)
            #print("There were no data to add")
    return pd.DataFrame(output)

Run the code and time it

start = datetime.now()
out = work(data)
print(datetime.now() - start)

Is there a way to do this query in a vectorized manner, or should I pursue another technique?

<3

This solves my problem, at least in part. Since the loop can operate on different parts of the data independently, parallelization makes sense here.

Using IPython

from IPython.parallel import Client
cli = Client()
cli.ids
dview=cli[:]
with dview.sync_imports():
    import numpy as np
    import os
    from datetime import timedelta
    import pandas as pd
#We also need to add the time deltas and output list into the function as 
#local variables as well as add the Ipython.parallel decorator
@dview.parallel(block=True)
def work(df):
    before = timedelta(hours = 8)
    after = timedelta(minutes = 1)
    output = []
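    # ... (the rest of the body of work() stays the same as above)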

Final time: 1:17:54.910206, roughly 1/4 of the original time.

I am still very interested in any small speed improvements anyone can suggest within the body of the function.
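One candidate I have not benchmarked: the inner j-loop can be replaced by a single vectorized distance computation over the masked subset. A rough sketch (status_sum_for_row is just an illustrative helper name, not part of the code above):

import numpy as np

def status_sum_for_row(df, i, before, after, radius=100.0):
    # same date and user masks as in work()
    date_mask = (df['date'] >= df['date'].iloc[i] - before) & \
                (df['date'] <= df['date'].iloc[i] + after)
    user_mask = df['user'] != df['user'].iloc[i]
    subset = df[date_mask & user_mask]
    # compute every euclidean distance in one vectorized call
    dx = subset['long'].values - df['long'].iloc[i]
    dy = subset['lat'].values - df['lat'].iloc[i]
    within = np.hypot(dx, dy) <= radius
    return int(subset['status'].values[within].sum())

# usage, e.g.:
# out = pd.DataFrame([status_sum_for_row(data, i, before, after) for i in range(len(data))])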
