Teaching–learning-based metaheuristic scheme for modifying neural computing in appraising energy performance of building
Engineering with Computers Pub Date : 2020-02-28 , DOI: 10.1007/s00366-020-00981-5
Guofeng Zhou , Hossein Moayedi , Loke Kok Foong

This study focuses on the early assessment of the energy performance of buildings (EPB), carried out by predicting the cooling load (CL) of a residential building. To overcome the drawbacks of neural computing approaches (e.g., entrapment in local minima), a novel metaheuristic technique, namely teaching–learning-based optimization (TLBO), is employed to tune a multi-layer perceptron neural network (MLPNN). The complexity of the proposed model is also optimized through a trial-and-error process. Evaluation of the results revealed high efficiency for this scheme: the prediction error of the MLPNN was reduced by around 20%, and the correlation between measured and forecasted CLs rose from 0.8875 to 0.9207. The TLBO also outperformed two benchmark optimizers, the cuckoo optimization algorithm (COA) and the league championship algorithm (LCA), in terms of both modeling accuracy and network complexity. Moreover, the TLBO-MLP emerged as the most time-effective hybrid, requiring considerably less computation time than COA-MLP and LCA-MLP. Given these advantages, the proposed model can be promisingly used for the early assessment of EPB in practice.
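To make the TLBO-MLP idea concrete, the following is a minimal, self-contained Python sketch, not the authors' implementation: the weights and biases of a small MLP are flattened into a single vector, and TLBO's teacher and learner phases search that vector to minimize the training RMSE. The toy dataset, the 8-6-1 network size, and the population and iteration settings are illustrative assumptions rather than values reported in the paper.

```python
# Hedged sketch of a TLBO-trained MLP for a regression task (e.g., cooling load).
# All data and hyperparameters below are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data standing in for the building cooling-load dataset.
X = rng.uniform(-1, 1, size=(200, 8))                 # 8 hypothetical building features
y = np.sin(X @ rng.normal(size=8)) + 0.1 * rng.normal(size=200)

N_HIDDEN = 6
N_W = 8 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1          # parameters of an 8-6-1 MLP

def mlp_predict(w, X):
    """Forward pass of an 8-N_HIDDEN-1 MLP whose parameters are flattened in w."""
    i = 0
    W1 = w[i:i + 8 * N_HIDDEN].reshape(8, N_HIDDEN); i += 8 * N_HIDDEN
    b1 = w[i:i + N_HIDDEN]; i += N_HIDDEN
    W2 = w[i:i + N_HIDDEN]; i += N_HIDDEN
    b2 = w[i]
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2

def rmse(w):
    return np.sqrt(np.mean((mlp_predict(w, X) - y) ** 2))

# TLBO over the flattened weight vector.
POP, ITERS = 30, 200
pop = rng.uniform(-1, 1, size=(POP, N_W))
fit = np.array([rmse(p) for p in pop])

for _ in range(ITERS):
    # Teacher phase: move each learner toward the best solution and away from the mean.
    teacher = pop[np.argmin(fit)]
    mean = pop.mean(axis=0)
    for i in range(POP):
        TF = rng.integers(1, 3)                       # teaching factor in {1, 2}
        cand = pop[i] + rng.random(N_W) * (teacher - TF * mean)
        f = rmse(cand)
        if f < fit[i]:
            pop[i], fit[i] = cand, f
    # Learner phase: learn from a randomly chosen classmate.
    for i in range(POP):
        j = rng.integers(POP)
        while j == i:
            j = rng.integers(POP)
        direction = pop[j] - pop[i] if fit[j] < fit[i] else pop[i] - pop[j]
        cand = pop[i] + rng.random(N_W) * direction
        f = rmse(cand)
        if f < fit[i]:
            pop[i], fit[i] = cand, f

print(f"best training RMSE: {fit.min():.4f}")
```

In this sketch the entire network is treated as a black-box fitness function, which is the general pattern behind metaheuristic training of neural networks; in practice a held-out set would be used to compare the hybrids, as the paper does when contrasting TLBO-MLP with COA-MLP and LCA-MLP.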
