A study on automatic fixture design using reinforcement learning
The International Journal of Advanced Manufacturing Technology (IF 2.9) Pub Date: 2020-03-19, DOI: 10.1007/s00170-020-05156-6
Darren Wei Wen Low, Dennis Wee Keong Neo, A. Senthil Kumar

Abstract

Fixtures are used to locate and secure workpieces for subsequent machining or measurement processes. The design of these fixtures remains costly due to the significant technical know-how required. Automated fixture design can mitigate much of this cost by reducing the dependence on skilled labour, making it an attractive endeavour. Historical attempts at automated fixture design have predominantly relied on case-based reasoning (CBR) to generate fixtures by extrapolating from previously proven designs; these approaches are limited by their dependence on a fixturing library. Attempts at using rule-based reasoning (RBR) have also proven difficult to implement comprehensively. Reinforcement learning, on the other hand, does not require a fixturing library and instead builds experience by learning through interaction with an environment. This paper discusses the use of reinforcement learning to generate optimized fixturing solutions through a proposed reinforcement learning driven fixture design (RL-FD) framework. In response to the fixturing environment, adjustments to the exploration phase of the reinforcement learning process are studied. A case study is presented comparing a conventional exploration method with an adjusted one. Both agents show improved average results over time, with the adjusted exploration model improving faster.
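
The abstract contrasts a conventional exploration method with an adjusted one. As a rough illustration only, the sketch below shows a generic tabular Q-learning loop in Python in which a constant epsilon-greedy schedule is compared with a decaying one. The toy environment, the epsilon schedules, and all parameter values are assumptions chosen for illustration; they are not taken from the paper or its RL-FD framework.

# Illustrative sketch only: generic tabular Q-learning with a fixed
# epsilon-greedy exploration schedule versus a decaying ("adjusted") one.
# The toy environment and all constants below are assumptions for
# illustration, not the RL-FD environment or settings from the paper.
import random

N_STATES, N_ACTIONS = 10, 4
ALPHA, GAMMA = 0.1, 0.9


def step(state, action):
    """Hypothetical toy environment: reward is higher when the action
    'matches' the state; the episode ends after the last state."""
    reward = 1.0 if action == state % N_ACTIONS else -0.1
    next_state = (state + 1) % N_STATES
    done = next_state == 0
    return next_state, reward, done


def run(epsilon_schedule, episodes=500, seed=0):
    random.seed(seed)
    q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    returns = []
    for ep in range(episodes):
        eps = epsilon_schedule(ep)
        state, done, total = 0, False, 0.0
        while not done:
            if random.random() < eps:                      # explore
                action = random.randrange(N_ACTIONS)
            else:                                          # exploit
                action = max(range(N_ACTIONS), key=lambda a: q[state][a])
            next_state, reward, done = step(state, action)
            # Standard Q-learning update
            best_next = max(q[next_state])
            q[state][action] += ALPHA * (reward + GAMMA * best_next - q[state][action])
            state, total = next_state, total + reward
        returns.append(total)
    return returns


# Conventional exploration: constant epsilon throughout training.
conventional = run(lambda ep: 0.3)
# Adjusted exploration: epsilon decays as experience accumulates.
adjusted = run(lambda ep: max(0.05, 0.3 * (0.99 ** ep)))

print("avg return, last 50 episodes:")
print("  conventional:", sum(conventional[-50:]) / 50)
print("  adjusted:    ", sum(adjusted[-50:]) / 50)

In this kind of sketch, the decaying schedule simply shifts effort from exploration to exploitation as experience accumulates, which is one common way an exploration phase can be adjusted; whether this resembles the specific adjustment studied in the paper cannot be determined from the abstract alone.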




Updated: 2020-03-20