How to load a dataset from a JSON file containing a list of dicts into pytorch torchtext.data.TabularDataset



I have a list of dicts that looks like this:

[{'text': ['The', 'Fulton', 'County', 'Grand', ...], 'tags': ['AT', 'NP-TL', 'NN-TL', 'JJ-TL', ...]},
{'text': ['The', 'jury', 'further', 'said', ...], 'tags': ['AT', 'NN', 'RBR', 'VBD', ...]},
...]

Each value in each dict is the list of words/tags for one sentence. This comes straight from the Brown corpus in the NLTK datasets, loaded with:

from nltk.corpus import brown
data = brown.tagged_sents()
# Split each tagged sentence into parallel word and tag lists.
data = {'text': [[word for word, tag in sent] for sent in data],
        'tags': [[tag for word, tag in sent] for sent in data]}
import pandas as pd
df = pd.DataFrame(data, columns=["text", "tags"])
from sklearn.model_selection import train_test_split
train, val = train_test_split(df, test_size=0.2)
train.to_json("train.json", orient='records')
val.to_json("val.json", orient='records')
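For reference, each record written by to_json(orient='records') is a JSON object with 'text' and 'tags' keys. A minimal self-contained sketch of this export step (a hand-made tagged sentence stands in for the NLTK corpus, so pandas and sklearn are not required to run it):

```python
import json
import os
import tempfile

# Toy stand-in for brown.tagged_sents(): a single tagged sentence (assumption).
tagged_sents = [[('The', 'AT'), ('jury', 'NN')]]

# Same reshaping as above: one dict per sentence with parallel lists.
records = [{'text': [w for w, t in sent], 'tags': [t for w, t in sent]}
           for sent in tagged_sents]

# to_json(orient='records') produces a JSON array of such objects.
path = os.path.join(tempfile.mkdtemp(), 'toy_train.json')
with open(path, 'w') as f:
    json.dump(records, f)

with open(path) as f:
    loaded = json.load(f)
print(loaded)  # [{'text': ['The', 'jury'], 'tags': ['AT', 'NN']}]
```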

I want to load this JSON into a torchtext.data.TabularDataset with:

TEXT = data.Field(lower=True)
TAGS = data.Field(unk_token=None)
data_fields = [('text', TEXT), ('tags', TAGS)]
train, val = data.TabularDataset.splits(path='./', train='train.json', validation='val.json', format='json', fields=data_fields)

but it gives me this error:

/usr/local/lib/python3.6/dist-packages/torchtext/data/example.py in fromdict(cls, data, fields)
17     def fromdict(cls, data, fields):
18         ex = cls()
---> 19         for key, vals in fields.items():
20             if key not in data:
21                 raise ValueError("Specified key {} was not found in "
AttributeError: 'list' object has no attribute 'items'

Note that I don't want TabularDataset to tokenize the sentences for me, because they are already tokenized by nltk. How do I handle this? (I can't switch the corpus to something that loads directly from torchtext.datasets; I have to use the Brown corpus.)

For anyone looking at this question now: note that it uses the legacy version of torchtext. You can still use this functionality, but you have to go through the legacy module, for example:

from torchtext import data
from torchtext import datasets
from torchtext import legacy
TEXT = legacy.data.Field()
TAGS = legacy.data.Field()

Then I recommend formatting the fields as a dict, like this:

fields = {'text': ('text', TEXT), 'tags': ('tags', TAGS)}
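The dict form matters because, with format='json', torchtext maps each JSON key to an (attribute name, Field) pair via fields.items() — the very call that failed with a list in the traceback above. A plain-Python sketch of that mapping (placeholder strings stand in for the Field objects, so this runs without torchtext):

```python
# One JSON record from train.json (toy values for illustration).
record = {'text': ['The', 'jury'], 'tags': ['AT', 'NN']}

# fields dict: JSON key -> (Example attribute name, Field object placeholder).
fields = {'text': ('text', 'TEXT_FIELD'), 'tags': ('tags', 'TAGS_FIELD')}

# Sketch of what Example.fromdict does with it.
example = {}
for json_key, (attr_name, field) in fields.items():
    example[attr_name] = record[json_key]

print(example)  # {'text': ['The', 'jury'], 'tags': ['AT', 'NN']}
```

With the list form, the first line of that loop raises the AttributeError from the question, since lists have no .items() method.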

That should do the trick. For anyone who wants to use the latest torchtext features instead, here is how:

To create an iterable dataset, you can use the _RawTextIterableDataset function. Below is an example that loads from a JSON file:

import json

def _create_data_from_json(data_path):
    with open(data_path) as json_file:
        raw_json_data = json.load(json_file)
    for item in raw_json_data:
        _tags, _text = item['tags'], item['text']
        yield (_tags, _text)
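A quick sanity check of this generator against a tiny JSON file (a self-contained sketch with toy data; the generator is repeated here so the snippet runs on its own):

```python
import json
import os
import tempfile

# Write a tiny JSON file in the format described above (toy data).
records = [{'text': ['The', 'jury'], 'tags': ['AT', 'NN']}]
path = os.path.join(tempfile.mkdtemp(), 'toy_data.json')
with open(path, 'w') as f:
    json.dump(records, f)

# Self-contained copy of the generator from above.
def create_data_from_json(data_path):
    with open(data_path) as json_file:
        raw_json_data = json.load(json_file)
    for item in raw_json_data:
        yield (item['tags'], item['text'])

pairs = list(create_data_from_json(path))
print(pairs)  # [(['AT', 'NN'], ['The', 'jury'])]
```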

#Load torchtext utilities needed to convert (label, paragraph) tuples into an iterable dataset
from torchtext.data.datasets_utils import (
    _RawTextIterableDataset,
    _wrap_split_argument,
    _add_docstring_header,
    _create_dataset_directory,
)

#Dictionary of data sources. The train and test JSON files hold items consisting of paragraphs and labels
DATA_SOURCE = {
    'train': 'data/train_data.json',
    'test': 'data/test_data.json',
}

#Number of lines/items in each dataset
NUM_LINES = {
    'train': 200,
    'test': 100,
}

#Name of the dataset
DATASET_NAME = "BAR"

#This function returns the iterable dataset for whichever split is passed in
@_add_docstring_header(num_lines=NUM_LINES, num_classes=2)
@_create_dataset_directory(dataset_name=DATASET_NAME)
@_wrap_split_argument(('train', 'test'))
def FOO(root, split):
    return _RawTextIterableDataset(DATASET_NAME, NUM_LINES[split],
                                   _create_data_from_json(DATA_SOURCE[split]))

You can then call this function to get the iterable datasets:

#Get iterable for train and test data sets
train_iter, test_iter = FOO(split=('train', 'test'))

The _create_data_from_json function can be replaced with any function that yields tuples from your data source.
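For instance, here is a hedged sketch of an alternative generator that yields the same (tags, text) tuples from a CSV source instead of JSON (the column names and whitespace-separated token format are assumptions for illustration):

```python
import csv
import io

# Alternative data source: each CSV row holds one sentence, with tokens
# and tags as space-separated strings (assumed format).
def create_data_from_csv(fileobj):
    for row in csv.DictReader(fileobj):
        yield (row['tags'].split(), row['text'].split())

# Toy in-memory CSV standing in for a real file.
toy_csv = io.StringIO("text,tags\nThe jury,AT NN\n")
result = list(create_data_from_csv(toy_csv))
print(result)  # [(['AT', 'NN'], ['The', 'jury'])]
```

Any such generator can be handed to _RawTextIterableDataset in place of _create_data_from_json(DATA_SOURCE[split]).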