I'm working with very large bigint numbers, and I need to write them to disk and read them back later because they can't all fit in memory at once. The current Chapel implementation first converts the bigint to a string and then writes that string to disk [1]. For huge integers, that string conversion takes a very long time.
var outputFile = open("outputPath", iomode.cwr);
var writer = outputFile.writer();
writer.write(reallyLargeBigint);
writer.close();
outputFile.close();
Is there a way to dump the bytes of a bigint directly to disk, using GMP's mpz_out_raw()/mpz_inp_raw() [2] or mpz_export()/mpz_import() [3] or something similar, without the conversion to a string first, and then read the bytes back into a bigint object?
Would this also work for arrays of bigint?
Could such functionality be added to Chapel's standard library in its current state?
[1] https://github.com/chapel-lang/chapel/blob/master/modules/standard/BigInteger.chpl#L346
[2] https://gmplib.org/manual/I_002fO-of-Integers.html
[3] https://gmplib.org/manual/Integer-Import-and-Export.html
The functions you mention aren't available in any Chapel modules, but you can write extern procs and extern types to access the GMP functions directly.
First, we need to be able to work with C files, so declare some procedures and types for them:
extern type FILE;
extern type FILEptr = c_ptr(FILE);
extern proc fopen(filename: c_string, mode: c_string): FILEptr;
extern proc fclose(fp: FILEptr);
Then we can declare the GMP functions we need:
extern proc mpz_out_raw(stream: FILEptr, const op: mpz_t): size_t;
extern proc mpz_inp_raw(ref rop: mpz_t, stream: FILEptr): size_t;
Now we can use them to write out bigint values:
use BigInteger;
var res: bigint;
res.fac(100); // Compute 100!
writeln("Writing the number: ", res);
var f = fopen("gmp_outfile", "w");
mpz_out_raw(f, res.mpz);
fclose(f);
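As a side note (not part of the steps above): the GMP manual says mpz_out_raw() returns the number of bytes written, or 0 if an error occurred, so a slightly more defensive version of the write step could check that return value:
f = fopen("gmp_outfile", "w");
const nBytes = mpz_out_raw(f, res.mpz);  // number of bytes written, or 0 on error
if nBytes == 0 then
  halt("failed to write the bigint to gmp_outfile");
fclose(f);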
And read it back in from the file:
var readIt: bigint;
f = fopen("gmp_outfile", "r");
mpz_inp_raw(readIt.mpz, f);
fclose(f);
writeln("Read the number:", readIt);
For arrays of bigint values, simply loop over the elements to write or read them:
// initialize the array
var A: [1..10] bigint;
for i in 1..10 do
A[i].fac(i);
// write the array to a file
f = fopen("gmp_outfile", "w");
for i in 1..10 do
mpz_out_raw(f, A[i].mpz);
fclose(f);
// read the array back in from the file
var B: [1..10] bigint;
f = fopen("gmp_outfile", "r");
for i in 1..10 do
mpz_inp_raw(B[i].mpz, f);
fclose(f);
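The question also mentions the mpz_export()/mpz_import() route [3]. That isn't covered above, but the same extern technique would apply. The following is only an untested sketch: the extern signatures are transcribed by hand from the GMP manual's C prototypes, mpz_sizeinbase() is used to size a word buffer, and the actual file I/O of that buffer (plus its length) is left to whatever Chapel writer/reader you prefer:
extern proc mpz_sizeinbase(const op: mpz_t, base: c_int): size_t;
extern proc mpz_export(rop: c_ptr(uint(64)), ref countp: size_t, order: c_int,
                       size: size_t, endian: c_int, nails: size_t,
                       const op: mpz_t): c_ptr(uint(64));
extern proc mpz_import(ref rop: mpz_t, count: size_t, order: c_int,
                       size: size_t, endian: c_int, nails: size_t,
                       op: c_ptr(uint(64)));

var x: bigint;
x.fac(100);

// enough 8-byte words to hold the magnitude of x
const nWords = ((mpz_sizeinbase(x.mpz, 2) + 63) / 64): int;
var buf: [0..#nWords] uint(64);
var count: size_t;

// order = -1: least-significant word first, endian = 0: native, nails = 0
mpz_export(c_ptrTo(buf), count, -1, 8, 0, 0, x.mpz);

// ... write `count` and the first `count` words of `buf` to disk here,
//     and read them back later ...

var y: bigint;
mpz_import(y.mpz, count, -1, 8, 0, 0, c_ptrTo(buf));
writeln("Round-tripped through export/import: ", x == y);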
Prologue:
The size of the data is a static property of it, whereas moving that data around has always been our greatest enemy ever.
"Can such functionality be added to Chapel's standard library?"
Given today's prices for adding a few units, tens or even hundreds of [TB] of RAM capacity, IMHO the problem will never get solved by a language extension in the direction sketched above.
Why never? Due to the exploding costs:
If one spends just a little time on the facts, the latency map below appears on an empty sheet of paper. While the respective numbers may differ a bit, the message is in the orders of magnitude and in the dependency chain of the processing steps:
________________________________________________________________________________________
/ /
/ ________________________________________________________ /
/ / / /
/ / xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx / /
/ / / / / / /
/ / SOMEWHAT / PRETTY / PROHIBITIVELY / / /
/ CHEAPEST / CHEAP / EXPENSIVE / EXPENSIVE / / /
/ EVER / ZONE / ZONE / ZONE / / /
/___________________/. . . . . / _ _ _ _ _ _ _ _/ ! ! ! ! ! ! ! !/ / /_______________________
/ / / / / / / / /
in-CACHE / in-RAM / CONVERT / STORE / RE-READ / CONVERT / in-RAM / in-CACHE / in-CPU-uop /
~ + 5 [ns] | | | | | | | |
+ 5 [ns] | | | | | | | |
| | | | | | | | |
| ~ +300 [ns/kB] | | | | | | |
| +300 [ns/kB] | | | | | | |
| | | | | | | | |
| |+VOLUME [ GB] | | | | | |
| | x 100.000[ns/GB] | | | | | |
| | | | | | | | |
| | |+1 | | | | | |
| | | x 15.000.000[ns] | | | | |
| | |+VOLUME [ GB] | | | | |
| | | x 3.580.000.000[ns/GB] | | | | |
| | | | | | | | |
| | | |+1 FIND | | | | |
| | | | x 15.000.000[ns] | | | |
| | | |+1 DATA | | | | |
| | | | x 15.000.000[ns] | | | |
| | | |+VOLUME [ GB] | | | |
| | | | x 3.580.000.000[ns/GB] | | | |
| | | | | | | | |
| | | | |+VOLUME [ GB] | | |
| | | | | x 100.000[ns/GB] | | |
| | | | | | | | |
| | | | | | ~ +300 [ns/kB] | |
| | | | | | +300 [ns/kB] | |
| | | | | | | | |
| | | | | | | ~ + 5 [ns] |
| | | | | | | + 5 [ns] |
| | | | | | | | |
| | | | | | | | ~ + 0.3 [ns/uop]
| | | | | | | | + 2.0 [ns/uop]
Last but not least, just compute the impact such a step has on the resulting speedup, which ends up << 1.0. Given the original processing took XYZ [ns], the "modified" processing will take:
XYZ [ns] : the PURPOSE
+ ( VOL [GB] * 300.000.000 [ns/GB] ) : + MEM/CONVERT
+ ( VOL [GB] * 100.000 [ns/GB] ) : + CPU/CONVERT
+ 15.000.000 [ns] : + fileIO SEEK
+ ( VOL [GB] * 3.580.000.000 [ns/GB] ) : + fileIO STORE
+ 15.000.000 [ns] : + fileIO SEEK / FAT
+ 15.000.000 [ns] : + fileIO SEEK / DATA
+ ( VOL [GB] * 3.580.000.000 [ns/GB] ) : + fileIO RE-READ
+ ( VOL [GB] * 100.000 [ns/GB] ) : + CPU/CONVERT
+ ( VOL [GB] * 300.000.000 [ns/GB] ) : + MEM/CONVERT
_______________________________________________
45.000.XYZ [ns]
  + 7.760.200.000 [ns/GB] * VOL [GB]
So the resulting performance, damaged by these adverse effects, will suffer (as Amdahl's Law shows):

                               XYZ [ns]
 S  =  ------------------------------------------------------------  <<  1.00
         45.000.XYZ [ns]  +  7.760.200.000 [ns/GB] * VOL [GB]
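To get a feel for the magnitudes involved, here is a small illustrative Chapel snippet (not part of the analysis above) that plugs example values into that expression; VOL = 1 [GB] and XYZ = 1.000.000 [ns] are nothing but assumed placeholders, adjustable from the command line:
config const VOL = 1.0;     // [GB]  volume converted, stored and re-read
config const XYZ = 1.0e6;   // [ns]  the original, useful processing time

const overhead = 4.5e7            // [ns]     the three fileIO SEEKs
               + 7.7602e9 * VOL;  // [ns/GB]  CONVERT + STORE + RE-READ costs

const S = XYZ / ( XYZ + overhead );   // the resulting "speedup", << 1.0

writeln("overhead ~ ", overhead / 1.0e9, " [s]    S ~ ", S);
// with the defaults above: overhead ~ 7.8 [s], S ~ 0.00013, i.e. ~7800x slower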