Jupyter Notebook Pandas read_csv parser error



I am trying to build a dashboard in Python in an Anaconda Jupyter Notebook. As a first step I want to read CSV files from an online source, and that is where the problem starts :D

Everything seems to be written correctly, but I get a parser error:

ParserError                               Traceback (most recent call last)
<ipython-input-96-112ac527c079> in <module>
----> 1 confirmed_df = pd.read_csv("https://github.com/CSSEGISandData/COVID-19/blob/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv")
2 death_df = pd.read_csv("https://github.com/CSSEGISandData/COVID-19/blob/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_global.csv")
3 recovered_df = pd.read_csv("https://github.com/CSSEGISandData/COVID-19/blob/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_recovered_global.csv")
~/anaconda3/lib/python3.7/site-packages/pandas/io/parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, delim_whitespace, low_memory, memory_map, float_precision)
700                     skip_blank_lines=skip_blank_lines)
701 
--> 702         return _read(filepath_or_buffer, kwds)
703 
704     parser_f.__name__ = name
~/anaconda3/lib/python3.7/site-packages/pandas/io/parsers.py in _read(filepath_or_buffer, kwds)
433 
434     try:
--> 435         data = parser.read(nrows)
436     finally:
437         parser.close()
~/anaconda3/lib/python3.7/site-packages/pandas/io/parsers.py in read(self, nrows)
1137     def read(self, nrows=None):
1138         nrows = _validate_integer('nrows', nrows)
-> 1139         ret = self._engine.read(nrows)
1140 
1141         # May alter columns / col_dict
~/anaconda3/lib/python3.7/site-packages/pandas/io/parsers.py in read(self, nrows)
1993     def read(self, nrows=None):
1994         try:
-> 1995             data = self._reader.read(nrows)
1996         except StopIteration:
1997             if self._first_chunk:
pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader.read()
pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._read_low_memory()
pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._read_rows()
pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._tokenize_rows()
pandas/_libs/parsers.pyx in pandas._libs.parsers.raise_parser_error()
ParserError: Error tokenizing data. C error: Expected 1 fields in line 32, saw 2

My code looks like this:

#import packages and data
import pandas as pd
import numpy as np
confirmed_df = pd.read_csv("https://github.com/CSSEGISandData/COVID-19/blob/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv")
death_df = pd.read_csv("https://github.com/CSSEGISandData/COVID-19/blob/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_global.csv")
recovered_df = pd.read_csv("https://github.com/CSSEGISandData/COVID-19/blob/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_recovered_global.csv")
confirmed_df.head()

When I look at the head of the confirmed_df DataFrame, I get a list of links and other content that I have no idea where it comes from?!

It looks like this:

(screenshot)

Can anyone help?

BR

Ben

Please try fetching the content from the git repository as raw data. The github.com .../blob/... URLs return an HTML page rather than the CSV file itself, which is why the parser complains:

#In the latest version of pandas (0.19.2) you can directly pass the url
confirmed_df = pd.read_csv("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv")
death_df = pd.read_csv("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_global.csv")
recovered_df = pd.read_csv("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_recovered_global.csv")
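
If you have several such github.com links, a small helper can rewrite a .../blob/... page URL into its raw.githubusercontent.com equivalent before calling read_csv. This is just a sketch, not part of the original answer, and to_raw_github_url is a hypothetical name:

import pandas as pd

# Hypothetical helper (not in the original answer): rewrite a github.com
# ".../blob/..." page URL into the corresponding raw.githubusercontent.com URL.
def to_raw_github_url(blob_url):
    return (blob_url
            .replace("https://github.com/", "https://raw.githubusercontent.com/", 1)
            .replace("/blob/", "/", 1))

confirmed_url = to_raw_github_url("https://github.com/CSSEGISandData/COVID-19/blob/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv")
confirmed_df = pd.read_csv(confirmed_url)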

Or fetch the content over the network yourself:

import pandas as pd
import io
import requests
url="https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv"
s=requests.get(url).content
confirmed_df=pd.read_csv(io.StringIO(s.decode('utf-8')))
#death_df = pd.read_csv("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_global.csv")
#recovered_df = pd.read_csv("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_recovered_global.csv")
print(confirmed_df.head())
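
As a small defensive addition (not part of the original answer), you can check the HTTP status before decoding, so a failed download raises a clear exception instead of another confusing parser error:

import pandas as pd
import io
import requests

url = "https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv"
response = requests.get(url)
response.raise_for_status()  # raise immediately if the download failed
confirmed_df = pd.read_csv(io.StringIO(response.content.decode("utf-8")))
print(confirmed_df.head())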
