CNN model on a Raspberry Pi



I am working on a glaucoma-detection project that involves a Raspberry Pi 3B. The model was trained on my laptop and is an ".h5" file of roughly 400 MB. I am trying to load it on the Pi with Keras. The Pi loads other, smaller models fine, but with mine it fails with the error shown below.
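For reference, the load on the Pi is essentially the call below; the filename is illustrative, not my actual path:

```python
from keras.models import load_model

# Load architecture + weights from a single HDF5 file. The weights are
# read into NumPy arrays and then copied into TensorFlow variables, so
# peak RAM use is noticeably higher than the 400 MB file itself.
model = load_model('glaucoma_cnn.h5')  # filename is a placeholder
```

Running this on the Pi prints the allocator warnings and then the traceback below: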

```
2018-05-15 18:07:04.117054: W tensorflow/core/framework/allocator.cc:101] Allocation of 134217728 exceeds 10% of system memory.
2018-05-15 18:07:05.052281: W tensorflow/core/framework/allocator.cc:101] Allocation of 134217728 exceeds 10% of system memory.
2018-05-15 18:07:05.513437: W tensorflow/core/framework/allocator.cc:101] Allocation of 134217728 exceeds 10% of system memory.
2018-05-15 18:07:06.191609: W tensorflow/core/framework/allocator.cc:101] Allocation of 134217728 exceeds 10% of system memory.
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1322, in _do_call
    return fn(*args)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1307, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1409, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.InternalError: Could not allocate ndarray

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.5/dist-packages/keras/models.py", line 273, in load_model
    topology.load_weights_from_hdf5_group(f['model_weights'], model.layers)
  File "/usr/local/lib/python3.5/dist-packages/keras/engine/topology.py", line 3393, in load_weights_from_hdf5_group
    K.batch_set_value(weight_value_tuples)
  File "/usr/local/lib/python3.5/dist-packages/keras/backend/tensorflow_backend.py", line 2377, in batch_set_value
    get_session().run(assign_ops, feed_dict=feed_dict)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 900, in run
    run_metadata_ptr)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1135, in _run
    feed_dict_tensor, options, run_metadata)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1316, in _do_run
    run_metadata)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1335, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InternalError: Could not allocate ndarray
```

Any help would be greatly appreciated. Thanks!

Build a large enough model and eventually it simply won't fit on a 1 GB Pi. You're there: loading the 400 MB of weights keeps copies in both the HDF5 reader and TensorFlow's variables, plus TensorFlow's own working memory, which is exactly what the allocator warnings and the "Could not allocate ndarray" error are telling you.

You have three options:

  1. Build a smaller model (see the sketch after this list), or
  2. See whether there are non-essential services you can switch off (for example, if you are running headless, try the server install of Raspbian), or
  3. Upgrade to something with more RAM, e.g. an Asus Tinker Board or an Odroid-C2; both are the same size as the Pi but come with 2 GB of RAM.
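
As a rough illustration of option 1: in image CNNs the bulk of the parameters usually sits in the first Dense layer after Flatten, so replacing it with global average pooling (or simply narrowing it) can shrink the .h5 file by an order of magnitude. This is a minimal sketch only; the input shape, layer widths and class count are made-up values, not taken from your project:

```python
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D, Dense

# Hypothetical input shape and layer widths, chosen only to illustrate
# keeping the parameter count (and hence the .h5 size) small.
model = Sequential([
    Conv2D(16, (3, 3), activation='relu', input_shape=(224, 224, 3)),
    MaxPooling2D((2, 2)),
    Conv2D(32, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    # Global average pooling instead of Flatten + a wide Dense layer:
    # that Dense layer is where most of a 400 MB model's weights usually come from.
    GlobalAveragePooling2D(),
    Dense(2, activation='softmax'),  # e.g. glaucoma / no glaucoma
])
model.summary()  # check the parameter count before training
```

`model.summary()` reports the total parameter count; at 4 bytes per float32 weight, that count times four is roughly the size of the weights on disk, which gives a quick check of whether a candidate architecture will fit on the Pi.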
