I am running into a huge performance bottleneck when using Azure Table Storage. My desire is to use tables as a kind of cache, so a long process may result in anywhere from hundreds to several thousand rows of data. The data can then be quickly queried by partition and row keys.
The querying is working pretty fast (extremely fast when only using partition and row keys, a bit slower, but still acceptable, when also searching through properties for a particular match).
However, both inserting and deleting rows is painfully slow.
Clarification
I want to clarify that even inserting a single batch of 100 items takes several seconds. This isn't just a problem with total throughput of thousands of rows. It is affecting me when I only insert 100.
Here is an example of my code to do a batch insert to my table:
static async Task BatchInsert( CloudTable table, List<ITableEntity> entities )
{
    int rowOffset = 0;

    while ( rowOffset < entities.Count )
    {
        Stopwatch sw = Stopwatch.StartNew();

        var batch = new TableBatchOperation();

        // next batch
        var rows = entities.Skip( rowOffset ).Take( 100 ).ToList();

        foreach ( var row in rows )
            batch.Insert( row );

        // submit
        await table.ExecuteBatchAsync( batch );

        rowOffset += rows.Count;

        Trace.TraceInformation( "Elapsed time to batch insert " + rows.Count + " rows: " + sw.Elapsed.ToString( "g" ) );
    }
}
I am using batch operations, and here is one sample of debug output:
Microsoft.WindowsAzure.Storage Information: 3 : b08a07da-fceb-4bec-af34-3beaa340239b: Starting asynchronous request to http://127.0.0.1:10002/devstoreaccount1.
Microsoft.WindowsAzure.Storage Verbose: 4 : b08a07da-fceb-4bec-af34-3beaa340239b: StringToSign = POST..multipart/mixed; boundary=batch_6d86d34c-5e0e-4c0c-8135-f9788ae41748.Tue, 30 Jul 2013 18:48:38 GMT./devstoreaccount1/devstoreaccount1/$batch.
Microsoft.WindowsAzure.Storage Information: 3 : b08a07da-fceb-4bec-af34-3beaa340239b: Preparing to write request data.
Microsoft.WindowsAzure.Storage Information: 3 : b08a07da-fceb-4bec-af34-3beaa340239b: Writing request data.
Microsoft.WindowsAzure.Storage Information: 3 : b08a07da-fceb-4bec-af34-3beaa340239b: Waiting for response.
Microsoft.WindowsAzure.Storage Information: 3 : b08a07da-fceb-4bec-af34-3beaa340239b: Response received. Status code = 202, Request ID = , Content-MD5 = , ETag = .
Microsoft.WindowsAzure.Storage Information: 3 : b08a07da-fceb-4bec-af34-3beaa340239b: Response headers were processed successfully, proceeding with the rest of the operation.
Microsoft.WindowsAzure.Storage Information: 3 : b08a07da-fceb-4bec-af34-3beaa340239b: Processing response body.
Microsoft.WindowsAzure.Storage Information: 3 : b08a07da-fceb-4bec-af34-3beaa340239b: Operation completed successfully.
iisexpress.exe Information: 0 : Elapsed time to batch insert 100 rows: 0:00:00.9351871
As you can see, this example takes almost 1 second to insert 100 rows. The average seems to be about .8 seconds on my development machine (3.4 GHz quad core).
This seems ridiculous.
Here is an example of a batch delete operation:
Microsoft.WindowsAzure.Storage Information: 3 : 4c271cb5-7463-44b1-b2e5-848b8fb10a93: Starting asynchronous request to http://127.0.0.1:10002/devstoreaccount1.
Microsoft.WindowsAzure.Storage Verbose: 4 : 4c271cb5-7463-44b1-b2e5-848b8fb10a93: StringToSign = POST..multipart/mixed; boundary=batch_7e3d229f-f8ac-4aa0-8ce9-ed00cb0ba321.Tue, 30 Jul 2013 18:47:41 GMT./devstoreaccount1/devstoreaccount1/$batch.
Microsoft.WindowsAzure.Storage Information: 3 : 4c271cb5-7463-44b1-b2e5-848b8fb10a93: Preparing to write request data.
Microsoft.WindowsAzure.Storage Information: 3 : 4c271cb5-7463-44b1-b2e5-848b8fb10a93: Writing request data.
Microsoft.WindowsAzure.Storage Information: 3 : 4c271cb5-7463-44b1-b2e5-848b8fb10a93: Waiting for response.
Microsoft.WindowsAzure.Storage Information: 3 : 4c271cb5-7463-44b1-b2e5-848b8fb10a93: Response received. Status code = 202, Request ID = , Content-MD5 = , ETag = .
Microsoft.WindowsAzure.Storage Information: 3 : 4c271cb5-7463-44b1-b2e5-848b8fb10a93: Response headers were processed successfully, proceeding with the rest of the operation.
Microsoft.WindowsAzure.Storage Information: 3 : 4c271cb5-7463-44b1-b2e5-848b8fb10a93: Processing response body.
Microsoft.WindowsAzure.Storage Information: 3 : 4c271cb5-7463-44b1-b2e5-848b8fb10a93: Operation completed successfully.
iisexpress.exe Information: 0 : Elapsed time to batch delete 100 rows: 0:00:00.6524402
Consistently over .5 seconds.
I've run this deployed to Azure (small instance) as well, and have recorded times of 20 minutes to insert 28000 rows.
I'm currently using the 2.1 RC version of the Storage Client Library: MSDN Blog
I must be doing something very wrong. Any thoughts?
Update
I've tried parallelism, with the net effect of an overall speed improvement (and 8 maxed-out logical processors), but still barely 150 row inserts per second on my dev machine.
No better overall that I can tell, and maybe even worse when deployed to Azure (small instance).
I've increased the thread pool, and increased the max number of HTTP connections for my WebRole by following this advice.
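For reference, the thread-pool and connection tweaks boil down to a couple of calls at role startup. A minimal sketch (the exact values here are illustrative, not tuned recommendations):

```csharp
// Run once at role/application startup, before any storage requests are issued:
ThreadPool.SetMinThreads( 1024, 256 );            // worker threads, IO-completion threads
ServicePointManager.DefaultConnectionLimit = 100; // the .NET default of 2 throttles parallel requests
```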
I still feel that I'm missing something fundamental that is limiting my inserts/deletes to 150 ROPS.
Update 2
After analyzing some diagnostics logs from my small instance deployed to Azure (using the new logging built into the 2.1 RC Storage Client), I have a little more information.
The first storage client log for a batch insert is at 635109046781264034 ticks:
caf06fca-1857-4875-9923-98979d850df3: Starting synchronous request to https://?.table.core.windows.net/.; TraceSource 'Microsoft.WindowsAzure.Storage' event
Then almost 3 seconds later I see this log at 635109046810104314 ticks:
caf06fca-1857-4875-9923-98979d850df3: Preparing to write request data.; TraceSource 'Microsoft.WindowsAzure.Storage' event
Then a few more logs, which take a combined 0.15 seconds, ending with this one at 635109046811645418 ticks, which wraps up the insert:
caf06fca-1857-4875-9923-98979d850df3: Operation completed successfully.; TraceSource 'Microsoft.WindowsAzure.Storage' event
I'm not sure what to make of this, but it is pretty consistent across the batch insert logs that I examined.
Update 3
Here is the code used to batch insert in parallel. In this code, just for testing, I am ensuring that I am inserting each batch of 100 into a unique partition.
static async Task BatchInsert( CloudTable table, List<ITableEntity> entities )
{
    int rowOffset = 0;

    var tasks = new List<Task>();

    while ( rowOffset < entities.Count )
    {
        // next batch
        var rows = entities.Skip( rowOffset ).Take( 100 ).ToList();

        rowOffset += rows.Count;

        string partition = "$" + rowOffset.ToString();

        var task = Task.Factory.StartNew( () =>
        {
            Stopwatch sw = Stopwatch.StartNew();

            var batch = new TableBatchOperation();

            foreach ( var row in rows )
            {
                row.PartitionKey = row.PartitionKey + partition;
                batch.InsertOrReplace( row );
            }

            // submit
            table.ExecuteBatch( batch );

            Trace.TraceInformation( "Elapsed time to batch insert " + rows.Count + " rows: " + sw.Elapsed.TotalSeconds.ToString( "F2" ) );
        } );

        tasks.Add( task );
    }

    await Task.WhenAll( tasks );
}
As stated above, this does help improve the overall time to insert thousands of rows, but each batch of 100 still takes several seconds.
Update 4
So, I created a brand new Azure Cloud Service project, using VS2012.2, with the Web Role as a single page template (the new one with the TODO sample in it).
This is straight out of the box, no new NuGet packages or anything. It uses the Storage client library v2 by default, and the EDM and associated libraries v5.2.
I simply modified the HomeController code to the following (using some random data to simulate the columns that I want to store in the real app):
public ActionResult Index( string returnUrl )
{
    ViewBag.ReturnUrl = returnUrl;

    Task.Factory.StartNew( () =>
    {
        TableTest();
    } );

    return View();
}

static Random random = new Random();

static double RandomDouble( double maxValue )
{
    // the Random class is not thread safe!
    lock ( random ) return random.NextDouble() * maxValue;
}
void TableTest()
{
    // Retrieve storage account from connection-string
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
        CloudConfigurationManager.GetSetting( "CloudStorageConnectionString" ) );

    // create the table client
    CloudTableClient tableClient = storageAccount.CreateCloudTableClient();

    // retrieve the table
    CloudTable table = tableClient.GetTableReference( "test" );

    // create it if it doesn't already exist
    if ( table.CreateIfNotExists() )
    {
        // the container is new and was just created
        Trace.TraceInformation( "Created table named " + "test" );
    }

    Stopwatch sw = Stopwatch.StartNew();

    // create a bunch of objects
    int count = 28000;
    List<DynamicTableEntity> entities = new List<DynamicTableEntity>( count );

    for ( int i = 0; i < count; i++ )
    {
        var row = new DynamicTableEntity()
        {
            PartitionKey = "filename.txt",
            RowKey = string.Format( "$item{0:D10}", i ),
        };

        row.Properties.Add( "Name", EntityProperty.GeneratePropertyForString( i.ToString() ) );
        row.Properties.Add( "Data", EntityProperty.GeneratePropertyForString( string.Format( "data{0}", i ) ) );
        row.Properties.Add( "Value1", EntityProperty.GeneratePropertyForDouble( RandomDouble( 10000 ) ) );
        row.Properties.Add( "Value2", EntityProperty.GeneratePropertyForDouble( RandomDouble( 10000 ) ) );
        row.Properties.Add( "Value3", EntityProperty.GeneratePropertyForDouble( RandomDouble( 1000 ) ) );
        row.Properties.Add( "Value4", EntityProperty.GeneratePropertyForDouble( RandomDouble( 90 ) ) );
        row.Properties.Add( "Value5", EntityProperty.GeneratePropertyForDouble( RandomDouble( 180 ) ) );
        row.Properties.Add( "Value6", EntityProperty.GeneratePropertyForDouble( RandomDouble( 1000 ) ) );

        entities.Add( row );
    }

    Trace.TraceInformation( "Elapsed time to create record rows: " + sw.Elapsed.ToString() );

    sw = Stopwatch.StartNew();

    Trace.TraceInformation( "Inserting rows" );

    // batch our inserts (100 max)
    BatchInsert( table, entities ).Wait();

    Trace.TraceInformation( "Successfully inserted " + entities.Count + " rows into table " + table.Name );
    Trace.TraceInformation( "Elapsed time: " + sw.Elapsed.ToString() );

    Trace.TraceInformation( "Done" );
}
static async Task BatchInsert( CloudTable table, List<DynamicTableEntity> entities )
{
    int rowOffset = 0;

    var tasks = new List<Task>();

    while ( rowOffset < entities.Count )
    {
        // next batch
        var rows = entities.Skip( rowOffset ).Take( 100 ).ToList();

        rowOffset += rows.Count;

        string partition = "$" + rowOffset.ToString();

        var task = Task.Factory.StartNew( () =>
        {
            var batch = new TableBatchOperation();

            foreach ( var row in rows )
            {
                row.PartitionKey = row.PartitionKey + partition;
                batch.InsertOrReplace( row );
            }

            // submit
            table.ExecuteBatch( batch );

            Trace.TraceInformation( "Inserted batch for partition " + partition );
        } );

        tasks.Add( task );
    }

    await Task.WhenAll( tasks );
}
And this is the output I get:
iisexpress.exe Information: 0 : Elapsed time to create record rows: 00:00:00.0719448
iisexpress.exe Information: 0 : Inserting rows
iisexpress.exe Information: 0 : Inserted batch for partition $100
...
iisexpress.exe Information: 0 : Successfully inserted 28000 rows into table test
iisexpress.exe Information: 0 : Elapsed time: 00:01:07.1398928
This is a little faster than my other app, at over 460 ROPS. This is still unacceptable. And again in this test, my CPU (8 logical processors) is nearly maxed out, and disk access is nearly idle.
I am at a loss as to what is wrong.
Update 5
Round and round of fiddling and tweaking has yielded some improvement, but I just can't get much faster than 500-700(ish) ROPS doing batch InsertOrReplace operations (in batches of 100).
This test is done in the Azure cloud, using a small instance (or two). Based on comments below, I'm resigned to the fact that local testing will be slow at best.
Here are a couple of examples. Each example is its very own PartitionKey:
Successfully inserted 904 rows into table org1; TraceSource 'w3wp.exe' event
Elapsed time: 00:00:01.3401031; TraceSource 'w3wp.exe' event
Successfully inserted 4130 rows into table org1; TraceSource 'w3wp.exe' event
Elapsed time: 00:00:07.3522871; TraceSource 'w3wp.exe' event
Successfully inserted 28020 rows into table org1; TraceSource 'w3wp.exe' event
Elapsed time: 00:00:51.9319217; TraceSource 'w3wp.exe' event
Maybe it's my MSDN Azure account that has some performance caps? I don't know.
At this point I think I'm done with this. Maybe it's fast enough to use for my purposes, or maybe I'll follow a different path.
Conclusion
All the answers below are good!
For my specific question, I've been able to see speeds up to 2k ROPS on a small Azure instance, more typically around 1k. Since I need to keep costs down (and therefore instance sizes down), this defines what I will be able to use tables for.
Thanks everyone for all the help.
Basic concept - use parallelism to speed this up.
Step 1 - give your thread pool enough threads to pull this off - ThreadPool.SetMinThreads(1024, 256);
Step 2 - use partitions. I use guids as Ids, and I use the last two characters to split into 256 unique partitions (actually I group those into N subsets, in my case 48 partitions).
Step 3 - insert using tasks; I use object pooling for the table refs.
public List<T> InsertOrUpdate(List<T> items)
{
    var subLists = SplitIntoPartitionedSublists(items);

    var tasks = new List<Task>();

    foreach (var subList in subLists)
    {
        List<T> list = subList;
        var task = Task.Factory.StartNew(() =>
        {
            var batchOp = new TableBatchOperation();
            var tableRef = GetTableRef();

            foreach (var item in list)
            {
                batchOp.Add(TableOperation.InsertOrReplace(item));
            }

            tableRef.ExecuteBatch(batchOp);
            ReleaseTableRef(tableRef);
        });
        tasks.Add(task);
    }

    Task.WaitAll(tasks.ToArray());

    return items;
}
private IEnumerable<List<T>> SplitIntoPartitionedSublists(IEnumerable<T> items)
{
    var itemsByPartion = new Dictionary<string, List<T>>();

    //split items into partitions
    foreach (var item in items)
    {
        var partition = GetPartition(item);
        if (itemsByPartion.ContainsKey(partition) == false)
        {
            itemsByPartion[partition] = new List<T>();
        }
        item.PartitionKey = partition;
        item.ETag = "*";
        itemsByPartion[partition].Add(item);
    }

    //split into subsets
    var subLists = new List<List<T>>();
    foreach (var partition in itemsByPartion.Keys)
    {
        var partitionItems = itemsByPartion[partition];
        for (int i = 0; i < partitionItems.Count; i += MaxBatch)
        {
            subLists.Add(partitionItems.Skip(i).Take(MaxBatch).ToList());
        }
    }

    return subLists;
}
private void BuildPartitionIndentifiers(int partitonCount)
{
    var chars = new char[] { '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f' }.ToList();
    var keys = new List<string>();

    for (int i = 0; i < chars.Count; i++)
    {
        var keyA = chars[i];
        for (int j = 0; j < chars.Count; j++)
        {
            var keyB = chars[j];
            keys.Add(string.Concat(keyA, keyB));
        }
    }

    var keySetMaxSize = Math.Max(1, (int)Math.Floor((double)keys.Count / ((double)partitonCount)));
    var keySets = new List<List<string>>();

    if (partitonCount > keys.Count)
    {
        partitonCount = keys.Count;
    }

    //Build the key sets
    var index = 0;
    while (index < keys.Count)
    {
        var keysSet = keys.Skip(index).Take(keySetMaxSize).ToList();
        keySets.Add(keysSet);
        index += keySetMaxSize;
    }

    //build the lookups and datatable for each key set
    _partitions = new List<string>();
    for (int i = 0; i < keySets.Count; i++)
    {
        var partitionName = String.Concat("subSet_", i);
        foreach (var key in keySets[i])
        {
            _partitionByKey[key] = partitionName;
        }
        _partitions.Add(partitionName);
    }
}
private string GetPartition(T item)
{
    var partKey = item.Id.ToString().Substring(34, 2);
    return _partitionByKey[partKey];
}

private string GetPartition(Guid id)
{
    var partKey = id.ToString().Substring(34, 2);
    return _partitionByKey[partKey];
}
private CloudTable GetTableRef()
{
    CloudTable tableRef = null;
    //try to pop a table ref out of the stack
    var foundTableRefInStack = _tableRefs.TryPop(out tableRef);
    if (foundTableRefInStack == false)
    {
        //no table ref available, must create a new one
        var client = _account.CreateCloudTableClient();
        client.RetryPolicy = new ExponentialRetry(TimeSpan.FromSeconds(1), 4);
        tableRef = client.GetTableReference(_sTableName);
    }

    //ensure table is created
    if (_bTableCreated != true)
    {
        tableRef.CreateIfNotExists();
        _bTableCreated = true;
    }

    return tableRef;
}
Results - 19-22kops max against a storage account.
Contact me if you're interested in the full source.
Need moar? Use multiple storage accounts!
This is from months of trial and error, testing, and beating my head against a desk. I really hope it helps.
OK, 3rd answer a charm?
http://blogs.msdn.com/b/windowsazurestorage/archive/2010/11/06/how-to-get-most-out-of-windows-azure-tables.aspx
A couple of things - the storage emulator - from a friend that did some serious digging into it.
"Everything is hitting a single table in a single database (more partitions doesn't affect anything). Each table insert operation is at least 3 sql operations. Every batch is inside a transaction. Depending on the transaction isolation level, those batches will have limited ability to execute in parallel.
Serial batches should be faster than individual inserts due to sql server behavior. (Individual inserts are essentially little transactions that each flush to disk, while a real transaction flushes to disk as a group)."
IE, using multiple partitions doesn't affect performance on the emulator, while it does against real azure storage.
Also enable logging and check your logs a little - c:\users\username\appdata\local\developmentstorage
A batch size of 100 seems to offer the best real performance; turn off Nagle, turn off Expect 100, and beef up the connection limit.
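Those three tweaks are one-liners on ServicePointManager; a minimal sketch (apply once at startup, before the first storage request is made; the connection-limit value is illustrative):

```csharp
// Apply once at startup, before the first storage request:
ServicePointManager.UseNagleAlgorithm = false;    // don't buffer small payloads waiting for more data
ServicePointManager.Expect100Continue = false;    // skip the extra 100-Continue round trip per request
ServicePointManager.DefaultConnectionLimit = 100; // the default of 2 concurrent connections is far too low
```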
Also make sure you aren't accidentally inserting duplicates - that will cause an error and slow everything way down.
And test against real storage. There's a pretty decent library out there that handles most of this for you - http://www.nuget.org/packages/WindowsAzure.StorageExtensions/ - just make sure you actually call ToList on the adds, as they won't really execute until enumerated. Also, that library uses DynamicTableEntity, so there's a small perf hit for the serialization, but it does allow you to use pure POCO objects with no TableEntity stuff.
~ JT
After lots of pain and experimentation, I was finally able to get optimal throughput for a single table partition (2,000+ batch write operations per second) and much better throughput for a storage account (3,500+ batch write operations per second) with Azure Table storage. I tried all the different approaches, but setting the .NET connection limit programmatically (I tried the configuration sample, but it didn't work for me) solved the problem (based on a white paper provided by Microsoft), as shown below:
ServicePoint tableServicePoint = ServicePointManager
    .FindServicePoint(_StorageAccount.TableEndpoint);

//This is a notorious issue that has affected many developers. By default, the value
//for the number of .NET HTTP connections is 2.
//This implies that only 2 concurrent connections can be maintained. This manifests itself
//as "underlying connection was closed..." when the number of concurrent requests is
//greater than 2.

tableServicePoint.ConnectionLimit = 1000;
If anyone else has gotten 20K+ batch write operations per storage account, please share your experience.
For more fun, here's a new answer - an isolated, independent test that pulls some amazing numbers for write performance in production, and does a lot better at avoiding IO blocking and connection management. I'm very interested to see how this works for you, as we are getting ridiculous write speeds (> 7kps).
web.config
<system.net>
  <connectionManagement>
    <add address="*" maxconnection="48"/>
  </connectionManagement>
</system.net>
For the test, I use volume-based parameters, so for example 25,000 items, 24 partitions, and a batch size of 100 (which seems to always be the best), with a ref count of 20. This uses TPL Dataflow (http://www.nuget.org/packages/Microsoft.Tpl.Dataflow/) with a BufferBlock, which provides a nice awaitable, thread-safe pull of table references.
public class DyanmicBulkInsertTestPooledRefsAndAsynch : WebTest, IDynamicWebTest
{
    private int _itemCount;
    private int _partitionCount;
    private int _batchSize;
    private List<TestTableEntity> _items;
    private GuidIdPartitionSplitter<TestTableEntity> _partitionSplitter;
    private string _tableName;
    private CloudStorageAccount _account;
    private CloudTableClient _tableClient;
    private Dictionary<string, List<TestTableEntity>> _itemsByParition;
    private int _maxRefCount;
    private BufferBlock<CloudTable> _tableRefs;

    public DyanmicBulkInsertTestPooledRefsAndAsynch()
    {
        Properties = new List<ItemProp>();
        Properties.Add(new ItemProp("ItemCount", typeof(int)));
        Properties.Add(new ItemProp("PartitionCount", typeof(int)));
        Properties.Add(new ItemProp("BatchSize", typeof(int)));
        Properties.Add(new ItemProp("MaxRefs", typeof(int)));
    }

    public List<ItemProp> Properties { get; set; }

    public void SetProps(Dictionary<string, object> propValuesByPropName)
    {
        _itemCount = (int)propValuesByPropName["ItemCount"];
        _partitionCount = (int)propValuesByPropName["PartitionCount"];
        _batchSize = (int)propValuesByPropName["BatchSize"];
        _maxRefCount = (int)propValuesByPropName["MaxRefs"];
    }

    protected override void SetupTest()
    {
        base.SetupTest();

        ThreadPool.SetMinThreads(1024, 256);
        ServicePointManager.DefaultConnectionLimit = 256;
        ServicePointManager.UseNagleAlgorithm = false;
        ServicePointManager.Expect100Continue = false;

        _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString"));
        _tableClient = _account.CreateCloudTableClient();
        _tableName = "testtable" + new Random().Next(100000);

        //create the refs
        _tableRefs = new BufferBlock<CloudTable>();
        for (int i = 0; i < _maxRefCount; i++)
        {
            _tableRefs.Post(_tableClient.GetTableReference(_tableName));
        }

        var tableRefTask = GetTableRef();
        tableRefTask.Wait();
        var tableRef = tableRefTask.Result;

        tableRef.CreateIfNotExists();
        ReleaseRef(tableRef);

        _items = TestUtils.GenerateTableItems(_itemCount);
        _partitionSplitter = new GuidIdPartitionSplitter<TestTableEntity>();
        _partitionSplitter.BuildPartitions(_partitionCount);

        _items.ForEach(o =>
        {
            o.ETag = "*";
            o.Timestamp = DateTime.Now;
            o.PartitionKey = _partitionSplitter.GetPartition(o);
        });

        _itemsByParition = _partitionSplitter.SplitIntoPartitionedSublists(_items);
    }

    private async Task<CloudTable> GetTableRef()
    {
        return await _tableRefs.ReceiveAsync();
    }

    private void ReleaseRef(CloudTable tableRef)
    {
        _tableRefs.Post(tableRef);
    }

    protected override void ExecuteTest()
    {
        Task.WaitAll(_itemsByParition.Keys.Select(parition => Task.Factory.StartNew(() => InsertParitionItems(_itemsByParition[parition]))).ToArray());
    }

    private void InsertParitionItems(List<TestTableEntity> items)
    {
        var tasks = new List<Task>();

        for (int i = 0; i < items.Count; i += _batchSize)
        {
            int i1 = i;
            var task = Task.Factory.StartNew(() =>
            {
                var batchItems = items.Skip(i1).Take(_batchSize).ToList();

                if (batchItems.Select(o => o.PartitionKey).Distinct().Count() > 1)
                {
                    throw new Exception("Multiple partitions batch");
                }

                var batchOp = new TableBatchOperation();
                batchItems.ForEach(batchOp.InsertOrReplace);

                //block until a pooled table ref is available
                var tableRef = GetTableRef().Result;
                tableRef.ExecuteBatch(batchOp);
                ReleaseRef(tableRef);
            });
            tasks.Add(task);
        }

        Task.WaitAll(tasks.ToArray());
    }

    protected override void CleanupTest()
    {
        var tableRefTask = GetTableRef();
        tableRefTask.Wait();
        var tableRef = tableRefTask.Result;

        tableRef.DeleteIfExists();
        ReleaseRef(tableRef);
    }
}
We are currently working on a version that can handle multiple storage accounts, hopefully to get some insane speeds. Also, we run these on 8-core virtual machines for large datasets, but with the new non-blocking IO it should run well on a limited VM. Good luck!
public class SimpleGuidIdPartitionSplitter<T> where T : IUniqueId
{
    private ConcurrentDictionary<string, string> _partitionByKey = new ConcurrentDictionary<string, string>();
    private List<string> _partitions;
    private bool _bPartitionsBuilt;

    public SimpleGuidIdPartitionSplitter()
    {
    }

    public void BuildPartitions(int iPartCount)
    {
        BuildPartitionIndentifiers(iPartCount);
    }

    public string GetPartition(T item)
    {
        if (_bPartitionsBuilt == false)
        {
            throw new Exception("Partitions Not Built");
        }

        var partKey = item.Id.ToString().Substring(34, 2);
        return _partitionByKey[partKey];
    }

    public string GetPartition(Guid id)
    {
        if (_bPartitionsBuilt == false)
        {
            throw new Exception("Partitions Not Built");
        }

        var partKey = id.ToString().Substring(34, 2);
        return _partitionByKey[partKey];
    }

    #region Helpers
    private void BuildPartitionIndentifiers(int partitonCount)
    {
        var chars = new char[] { '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f' }.ToList();
        var keys = new List<string>();

        for (int i = 0; i < chars.Count; i++)
        {
            var keyA = chars[i];
            for (int j = 0; j < chars.Count; j++)
            {
                var keyB = chars[j];
                keys.Add(string.Concat(keyA, keyB));
            }
        }

        var keySetMaxSize = Math.Max(1, (int)Math.Floor((double)keys.Count / ((double)partitonCount)));
        var keySets = new List<List<string>>();

        if (partitonCount > keys.Count)
        {
            partitonCount = keys.Count;
        }

        //Build the key sets
        var index = 0;
        while (index < keys.Count)
        {
            var keysSet = keys.Skip(index).Take(keySetMaxSize).ToList();
            keySets.Add(keysSet);
            index += keySetMaxSize;
        }

        //build the lookups and datatable for each key set
        _partitions = new List<string>();
        for (int i = 0; i < keySets.Count; i++)
        {
            var partitionName = String.Concat("subSet_", i);
            foreach (var key in keySets[i])
            {
                _partitionByKey[key] = partitionName;
            }
            _partitions.Add(partitionName);
        }

        _bPartitionsBuilt = true;
    }
    #endregion
}
internal static List<TestTableEntity> GenerateTableItems(int count)
{
    var items = new List<TestTableEntity>();
    var random = new Random();

    for (int i = 0; i < count; i++)
    {
        var itemId = Guid.NewGuid();

        items.Add(new TestTableEntity()
        {
            Id = itemId,
            TestGuid = Guid.NewGuid(),
            RowKey = itemId.ToString(),
            TestBool = true,
            TestDateTime = DateTime.Now,
            TestDouble = random.Next() * 1000000,
            TestInt = random.Next(10000),
            TestString = Guid.NewGuid().ToString(),
        });
    }

    var dupRowKeys = items.GroupBy(o => o.RowKey).Where(o => o.Count() > 1).Select(o => o.Key).ToList();
    if (dupRowKeys.Count > 0)
    {
        throw new Exception("Duplicate Row Keys");
    }

    return items;
}
One more thing - your timing, and how the framework was affected, may point to this: http://blogs.msdn.com/b/windowsazurestorage/archive/2013/08/08/net-clients-encountering-port-exhaustion-after-installing-kb2750149-or-kb2805227.aspx