timedelta csv pandas



I have the following file (df_SOF1.csv) containing 1 million records:

Location,Transport,Transport1,DateOccurred,CostCentre,D_Time,count
0,Lorry,Car,07/09/2012,0,0:00:00,2
1,Lorry,Car,11/09/2012,0,0:00:00,5
2,Lorry,Car,14/09/2012,0,0:00:00,30
3,Lorry,Car,14/09/2012,0,0:07:00,2
4,Lorry,Car,14/09/2012,0,0:29:00,1
5,Lorry,Car,14/09/2012,0,3:27:00,3
6,Lorry,Car,14/09/2012,0,3:28:00,4
7,Lorry,Car,21/09/2012,0,0:00:00,13
8,Lorry,Car,27/09/2012,0,0:00:00,8
9,Lorry,Car,28/09/2012,0,0:02:00,1
10,Train,Bus,03/09/2012,2073,7:49:00,1
11,Train,Bus,05/09/2012,2073,7:50:00,1
12,Train,Bus,06/09/2012,2073,7:52:00,1
13,Train,Bus,07/09/2012,2073,7:48:00,1
14,Train,Bus,08/09/2012,2073,7:55:00,1
15,Train,Bus,11/09/2012,2073,7:49:00,1
16,Train,Bus,12/09/2012,2073,7:52:00,1
17,Train,Bus,13/09/2012,2073,7:50:00,1
18,Train,Bus,14/09/2012,2073,7:54:00,1
19,Train,Bus,18/09/2012,2073,7:51:00,1
20,Train,Bus,19/09/2012,2073,7:50:00,1
21,Train,Bus,20/09/2012,2073,7:51:00,1
22,Train,Bus,21/09/2012,2073,7:52:00,1
23,Train,Bus,22/09/2012,2073,7:53:00,1
24,Train,Bus,23/09/2012,2073,7:49:00,1
25,Train,Bus,24/09/2012,2073,7:54:00,1
26,Train,Bus,25/09/2012,2073,7:55:00,1
27,Train,Bus,26/09/2012,2073,7:53:00,1
28,Train,Bus,27/09/2012,2073,7:55:00,1
29,Train,Bus,28/09/2012,2073,7:53:00,1
30,Train,Bus,29/09/2012,2073,7:56:00,1

I am analysing it with pandas, and I have spent at least 40 hours trying to find a way to group the data so that I can sum the time column D_Time.

I have loaded the required modules and created a DataFrame with DateOccurred as the index, see below:

from pandas import read_csv

df_SOF1 = read_csv('/users/fabulous/documents/df_SOF1.csv', index_col=3, parse_dates=True)  # read file from disk

I can group by any column, and I can also iterate over the rows, e.g.

df_SOF1.groupby('Location').sum()

However, I have not found a way to sum the D_Time column using pandas. I have read 20+ articles about timedeltas etc., but I am none the wiser about how to do this in pandas.

Any solution that lets me do arithmetic on the D_Time column would be much appreciated (even if it has to be done outside pandas).
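
(For reference, newer pandas versions can do this directly: pd.to_timedelta parses the HH:MM:SS strings into a timedelta64 column, which groupby(...).sum() then adds numerically. A sketch on a few inline rows shaped like the file above — the inline data here is illustrative, not the real df_SOF1.csv:)

```python
import io
import pandas as pd

# A small inline sample in the same shape as df_SOF1.csv (illustrative data).
csv = io.StringIO(
    "Location,Transport,Transport1,DateOccurred,CostCentre,D_Time,count\n"
    "0,Lorry,Car,14/09/2012,0,0:07:00,2\n"
    "1,Lorry,Car,14/09/2012,0,0:29:00,1\n"
    "2,Train,Bus,03/09/2012,2073,7:49:00,1\n"
)
df = pd.read_csv(csv)

# Parse the HH:MM:SS strings into timedelta64[ns]; these sum numerically.
df['D_Time'] = pd.to_timedelta(df['D_Time'])

# Lorry sums to 36 minutes, Train to 7:49:00.
print(df.groupby('Transport')['D_Time'].sum())
```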

I think one possible solution would be to change the D_Time column to seconds.

----------------------------------------------------------------------

EDIT (2012/11/01): I ran the following command on the 30 items above:

df_SOF1.groupby('Transport').agg({'D_Time': sum})

                                                      D_Time
Transport
Lorry      0:00:000:00:000:00:000:07:000:29:003:27:003:28...
Train      7:49:007:50:007:52:007:48:007:55:007:49:007:52...

It seems to be physically joining the values together rather than giving a numeric sum (i.e. it is adding strings).
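
That behaviour is easy to reproduce: calling sum() on an object-dtype Series of strings falls back to repeated '+', which for strings means concatenation. A minimal illustration:

```python
import pandas as pd

# D_Time values as they come out of read_csv: plain strings (object dtype).
s = pd.Series(['0:07:00', '0:29:00', '3:27:00'])

# sum() reduces with '+'; for strings that concatenates rather than adds.
print(s.sum())  # → 0:07:000:29:003:27:00
```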

Cheers

I haven't found any mention of timedeltas in pandas, but the datetime module has one, so converting D_Time to seconds is not a bad idea:

import datetime

def seconds(time_str):
    # Parse the 'H:MM:SS' string and subtract the zero point to get a timedelta.
    end_time = datetime.datetime.strptime(time_str, '%H:%M:%S')
    delta = end_time - datetime.datetime.strptime('0:0:0', '%H:%M:%S')
    return delta.total_seconds()

df_SOF1.D_Time = df_SOF1.D_Time.apply(seconds)

Result:

>>> df_SOF1.groupby('CostCentre').sum()
            Location  D_Time  count
CostCentre                         
0                 45   27180     69
2073             420  594660     21
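
Once summed as plain seconds, the totals can be turned back into readable H:MM:SS form with datetime.timedelta. A small follow-up sketch using the two totals from the output above:

```python
import datetime

# Summed D_Time per CostCentre, in seconds (taken from the groupby output).
totals = {0: 27180, 2073: 594660}

# timedelta(seconds=...) renders as 'H:MM:SS' (or 'N days, H:MM:SS').
for cost_centre, secs in totals.items():
    print(cost_centre, datetime.timedelta(seconds=secs))
```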

Moving datetime.datetime.strptime('0:0:0', '%H:%M:%S') to the global namespace reduces execution time:
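
The hoisted variant itself isn't shown in the text; it might look like the sketch below (the names sec_cached and _ZERO are illustrative, not from the original):

```python
import datetime

# Parse the zero point once at module level instead of on every call.
_ZERO = datetime.datetime.strptime('0:0:0', '%H:%M:%S')

def sec_cached(time_str):
    """Convert an 'H:MM:SS' string to a float number of seconds."""
    delta = datetime.datetime.strptime(time_str, '%H:%M:%S') - _ZERO
    return delta.total_seconds()

print(sec_cached('3:27:00'))  # → 12420.0
```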

>>> import timeit
>>> timeit.timeit("sec('01:01:01')", setup="from __main__ import sec",
...               number=10000)
1.025843858718872
>>> timeit.timeit("seconds('01:01:01')", setup="from __main__ import seconds",
...               number=10000)
0.6128969192504883
