A parallelized algorithm to speed up 1D free-surface flow simulations in irrigation canals
Journal of Hydroinformatics (IF 2.2) Pub Date: 2020-11-01, DOI: 10.2166/hydro.2020.049
Lucas Bessone 1, Joan Soler-Guitart 2, Pablo Gamazo 1

A parallel algorithm for 1D free-surface flow simulations in irrigation canals is presented. The model is based on the Hartree method applied to the Saint-Venant equations. Because flow in irrigation canals is close to steady, the external and internal boundary conditions are linearized to preserve the parallel character of the scheme. Gate trajectories, off-take withdrawals, and external boundary conditions are modeled as piecewise functions of time, which introduce discontinuities. To achieve a fully parallelized algorithm, an explicit version of the Hartree method is chosen, and the external and internal boundary conditions are linearized around an operating point. This approach is used to build a computer simulator written in C-CUDA. Two test cases from the ASCE Task Committee on Canal Automation Algorithms are used to evaluate the accuracy and performance of the algorithm: the Maricopa Stanfield benchmark to verify accuracy, and the Corning Canal benchmark to assess performance in terms of processing time. Notably, solving a 12 h prediction horizon with a cell size of about Δx = 10 m takes less than 1 s on an Nvidia K40 card. Results are compared with serial and multi-CPU versions of the same algorithm; the GPU implementation shows the best performance across the platforms tested.
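
The Hartree scheme is the method of characteristics applied on a fixed space-time grid: at each new time level, every node traces its forward and backward characteristics back to the previous level, interpolates the flow variables at their feet, and solves the two compatibility relations, so all nodes can be updated independently. The sketch below illustrates why this maps naturally onto a GPU, showing one explicit Hartree update step as a CUDA kernel. It is not the authors' code: the wide rectangular channel, Manning friction, uniform spacing, subcritical flow, and all identifiers (hartreeStep, v_old, c_old, etc.) are assumptions made only for this example.

```cuda
// Minimal sketch of one explicit Hartree (specified-grid method of characteristics)
// time step for the 1D Saint-Venant equations -- not the authors' code. Assumed:
// wide rectangular channel (depth h = c^2/g, hydraulic radius ~ h), Manning
// friction, uniform grid, subcritical flow and Courant number < 1 so both
// characteristic feet fall within the neighbouring cells. One thread updates one
// interior node, which is what makes the explicit scheme embarrassingly parallel.
#include <cuda_runtime.h>
#include <math.h>

__global__ void hartreeStep(const double *v_old, const double *c_old, // velocity, celerity at t
                            double *v_new, double *c_new,             // velocity, celerity at t + dt
                            int n, double dx, double dt,
                            double g, double S0, double nManning)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < 1 || i >= n - 1) return;   // external/internal boundary nodes handled separately

    // Feet of the C+ and C- characteristics, as fractions of dx travelled in dt.
    double xp = (v_old[i] + c_old[i]) * dt / dx;   // >= 0: foot lies between nodes i-1 and i
    double xm = (v_old[i] - c_old[i]) * dt / dx;   // <= 0: foot lies between nodes i and i+1

    // Linear interpolation of velocity and celerity at the two feet.
    double vR = v_old[i] - xp * (v_old[i] - v_old[i - 1]);
    double cR = c_old[i] - xp * (c_old[i] - c_old[i - 1]);
    double vS = v_old[i] - xm * (v_old[i + 1] - v_old[i]);
    double cS = c_old[i] - xm * (c_old[i + 1] - c_old[i]);

    // Manning friction slope evaluated at each foot.
    double hR = cR * cR / g, hS = cS * cS / g;
    double SfR = nManning * nManning * vR * fabs(vR) / pow(hR, 4.0 / 3.0);
    double SfS = nManning * nManning * vS * fabs(vS) / pow(hS, 4.0 / 3.0);

    // Compatibility relations along C+ and C-:
    //   v + 2c = vR + 2cR + g (S0 - SfR) dt
    //   v - 2c = vS - 2cS + g (S0 - SfS) dt
    double Jp = vR + 2.0 * cR + g * (S0 - SfR) * dt;
    double Jm = vS - 2.0 * cS + g * (S0 - SfS) * dt;

    v_new[i] = 0.5  * (Jp + Jm);
    c_new[i] = 0.25 * (Jp - Jm);
}
```

A launch such as hartreeStep<<<(n + 255) / 256, 256>>>(...) would advance every interior node of a reach in one kernel call per time step; in the simulator described in the paper, the linearized gate, off-take, and external boundary conditions would replace the boundary-node shortcut used here.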




Updated: 2020-11-19