Given:

let weights = [0.5;0.4;0.3]
let X = [[2;3;4];[7;3;2];[5;3;6]]

What I want is:

wX = [(0.5)*[2;3;4]; (0.4)*[7;3;2]; (0.3)*[5;3;6]]

I would like to know an elegant way to do this for lists and for arrays. Any additional optimization advice is welcome.
You describe a list of lists, but your code shows a list of tuples. If you are free to change that, the solution would be:
let weights = [0.5;0.4;0.3]
let X = [[2;3;4];[7;3;2];[5;3;6]]

X
|> List.map2 (fun w x ->
    x
    |> List.map (fun xi ->
        (float xi) * w
    )
) weights
Depending on how comfortable you are with the syntax, you may prefer a one-liner like this:
List.map2 (fun w x -> List.map (float >> (*) w) x) weights X
The same library functions exist for sequences (Seq.map2, Seq.map) and for arrays (in the Array module).
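For example, here is a quick sketch of the array version of the one-liner (the names weightsA and XA are just for illustration; the expected result is shown in a comment):

let weightsA = [| 0.5; 0.4; 0.3 |]
let XA = [| [| 2; 3; 4 |]; [| 7; 3; 2 |]; [| 5; 3; 6 |] |]
// Pair each weight with its row, then scale the row element-wise
let wXA = Array.map2 (fun w row -> Array.map (float >> (*) w) row) weightsA XA
// wXA = [| [| 1.0; 1.5; 2.0 |]; [| 2.8; 1.2; 0.8 |]; [| 1.5; 0.9; 1.8 |] |]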
This is more than an answer to the specific question, but after chatting in the comments and learning that the question is part of a neural network in F#, I am posting this, since it covers the question and implements the feedforward part of a neural network. It makes use of MathNet Numerics.
This code is an F# translation of part of the Python code from Neural Networks and Deep Learning.
Python
def backprop(self, x, y):
    """Return a tuple ``(nabla_b, nabla_w)`` representing the
    gradient for the cost function C_x.  ``nabla_b`` and
    ``nabla_w`` are layer-by-layer lists of numpy arrays, similar
    to ``self.biases`` and ``self.weights``."""
    nabla_b = [np.zeros(b.shape) for b in self.biases]
    nabla_w = [np.zeros(w.shape) for w in self.weights]
    # feedforward
    activation = x
    activations = [x] # list to store all the activations, layer by layer
    zs = [] # list to store all the z vectors, layer by layer
    for b, w in zip(self.biases, self.weights):
        z = np.dot(w, activation)+b
        zs.append(z)
        activation = sigmoid(z)
        activations.append(activation)
F#
module NeuralNetwork1 =

    //# Third-party libraries
    open MathNet.Numerics.Distributions    // Normal.Sample
    open MathNet.Numerics.LinearAlgebra    // Matrix

    type Network(sizes : int array) =

        let mutable _biases : Matrix<double> list = []
        let mutable _weights : Matrix<double> list = []

        member __.Biases
            with get() = _biases
            and set value =
                _biases <- value

        member __.Weights
            with get() = _weights
            and set value =
                _weights <- value

        // Number of layers, e.g. sizes = [|784; 30; 10|] gives 3.
        // (Defined here so this excerpt compiles; part of the full class.)
        member __.NumLayers = sizes.Length

        // Element-wise sigmoid, 1/(1 + e^-z).
        // (Defined here so this excerpt compiles; part of the full class.)
        member __.Sigmoid (z : Matrix<double>) =
            Matrix.map (fun v -> 1.0 / (1.0 + exp -v)) z

        member __.Backprop (x : Matrix<double>) (y : Matrix<double>) =
            // Note: There is a separate member for feedforward. This one is only used within Backprop.
            // Note: In the text, layers are numbered from 1 to n, with 1 being the input and n being the output.
            //       In the code, layers are numbered from 0 to n-1, with 0 being the input and n-1 being the output.
            //       Layers
            //         1     2     3    Text
            //         0     1     2    Code
            //       784 -> 30 -> 10
            let feedforward () : (Matrix<double> list * Matrix<double> list) =
                let (bw : (Matrix<double> * Matrix<double>) list) = List.zip __.Biases __.Weights
                let rec feedforwardInner layer activation zs activations =
                    match layer with
                    | x when x < (__.NumLayers - 1) ->
                        let (bias, weight) = bw.[layer]
                        let z = weight * activation + bias
                        let activation = __.Sigmoid z
                        feedforwardInner (layer + 1) activation (z :: zs) (activation :: activations)
                    | _ ->
                        // Normally with recursive functions that build lists for returning,
                        // the final list(s) would be reversed before returning.
                        // However, since the returned lists will be accessed in reverse order
                        // for the backpropagation step, we leave them in reverse order.
                        (zs, activations)
                feedforwardInner 0 x [] [x]
            // Excerpt: the full Backprop continues with the backward pass;
            // here we just return the feedforward results.
            feedforward ()
In weight * activation, * is an overloaded operator that works on Matrix<double>.
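To illustrate with small made-up sizes (a minimal sketch; the real network uses layer sizes like 784, 30, and 10):

open MathNet.Numerics.LinearAlgebra

let weight = matrix [[0.1; 0.2]; [0.3; 0.4]]    // 2x2 weight matrix
let activation = matrix [[1.0]; [2.0]]          // 2x1 column of activations
let bias = matrix [[0.5]; [0.5]]                // 2x1 column of biases
// * is matrix multiplication and + is element-wise addition on Matrix<double>
let z = weight * activation + bias              // [[1.0]; [1.6]]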
Returning to your example data and using MathNet Numerics arithmetic:

let weights = [0.5;0.4;0.3]
let X = [[2;3;4];[7;3;2];[5;3;6]]

First, the values need to be converted to float and lifted into MathNet types, e.g. with the vector and matrix builders from MathNet.Numerics.LinearAlgebra:

let weights1 = vector [0.5; 0.4; 0.3]
let x1 = matrix [[2.0; 3.0; 4.0]; [7.0; 3.0; 2.0]; [5.0; 3.0; 6.0]]

Now x1 is a Matrix<double> and weights1 is a Vector<double>, so we can multiply them:

let wx1 = weights1 * x1
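Note what this computes: in MathNet, a Vector<double> multiplied on the left of a Matrix<double> is the row-vector product, so wx1 is a vector whose elements are weighted sums down the columns:

// wx1.[j] = weights1.[0] * x1.[0, j] + weights1.[1] * x1.[1, j] + weights1.[2] * x1.[2, j]
// wx1 = [0.5*2.0 + 0.4*7.0 + 0.3*5.0;
//        0.5*3.0 + 0.4*3.0 + 0.3*3.0;
//        0.5*4.0 + 0.4*2.0 + 0.3*6.0]
//     = [5.3; 3.6; 4.6]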
Since I verify code more thoroughly than most, I will explain how, so that you don't doubt its validity.

When working with neural networks, and in particular mini-batches, the starting numbers for the weights and biases are random, and the generation of the mini-batches is random as well.

I know the original Python code works: I was able to run it successfully and get the same results as reported in the book, meaning the initial successes were within a few percent of the book's and the graph of successes was the same. I did this for several runs and several configurations of the neural network, as discussed in the book. Then I ran the F# code and obtained the same graphs.

I also copied the sets of starting random numbers from the Python code into the F# code, so that while the generated data is random, both the Python and F# code used the same starting numbers, of which there are thousands. I then stepped through the Python and F# code to verify that each individual function returned comparable float values; e.g. I put a breakpoint on each line and made sure I checked every line. This actually took a few days because I had to write export and import code and pass the data from Python to F#.
See: How to determine the type of nested data structures in Python?
I also tried a variation that replaced the F# lists with linked lists, e.g. LinkedList<Matrix<double>>, but found no speed increase. It was an interesting exercise, though.
If I understand correctly (this assumes X is a list of triples, e.g. let X = [(2,3,4);(7,3,2);(5,3,6)], matching the tuple form of your code), pairing each weight with its corresponding row:

let wX =
    List.map2 (fun w (a, b, c) ->
        w * float a,
        w * float b,
        w * float c) weights X
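With the sample data in tuple form, a quick check (the expected result is shown in a comment):

let weights = [0.5; 0.4; 0.3]
let X = [(2, 3, 4); (7, 3, 2); (5, 3, 6)]
// wX = [(1.0, 1.5, 2.0); (2.8, 1.2, 0.8); (1.5, 0.9, 1.8)]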
Here is another way to achieve this using Math.Net: https://numerics.mathdotnet.com/Matrix.html#Arithmetics
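For example, if the row-scaled wX from the question is what you want, one way with Math.Net arithmetic is to put the weights on the diagonal of a matrix (a sketch using the DiagonalOfDiagonalArray builder; the expected result is shown in a comment):

open MathNet.Numerics.LinearAlgebra

let x1 = matrix [[2.0; 3.0; 4.0]; [7.0; 3.0; 2.0]; [5.0; 3.0; 6.0]]
let d = Matrix<double>.Build.DiagonalOfDiagonalArray([| 0.5; 0.4; 0.3 |])
// d * x1 scales row i of x1 by the i-th weight
let wX = d * x1
// wX = [[1.0; 1.5; 2.0]; [2.8; 1.2; 0.8]; [1.5; 0.9; 1.8]]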