Introduction

Operations Research (OR) practitioners often seek to address complex real-life problems through workshop-based interventions. In terms of outcomes, OR workshops can be classified as substantive (exploring the problem space and possible solutions), normative (building legitimacy through shared understanding) or instrumental (focusing on collaboration and reducing conflict) (Jones et al. 2009). More than one outcome type is possible, but in this study we consider creative OR workshops with a substantive focus, mainly aiming to explore potential options and strategies, integrate knowledge sources, and improve problem-solving capacity. We focus on multi-method approaches to workshop design, which are often helpful in addressing the various aspects and views of the problem space within transdisciplinary settings (Mingers 2000).

The design of OR interventions is commonly depicted as a matter of selecting the right tools for the job from a toolbox of established methods and techniques. However, this depiction does not account for the dynamic interactions between different actors that result in the tailoring, modification and refinement of methodologies and methods in real life (Ormerod 2014, 2017; Velez-Castiblanco et al. 2016). Furthermore, numerous OR practitioners have pointed out that accounts of OR interventions in the literature are often sanitised to make emergent methods seem pre-determined (Midgley 2000; Gregory et al. 2020; Ormerod 2014, 2017). In fact, there is significant value in recognising and accounting for the modification and evolution of OR methodologies and methods, for it may lead to advances in both theory and practice (Midgley 1998; Midgley et al. 2017; Ormerod 2014, 2017).

In our study, we seek to contribute to the theory and practice of evolution in OR interventions through application of clear methodological principles, frameworks and techniques for iterative evaluation and refinement of interventions across multiple theory–practice cycles.

Specifically, we draw on the concept of boundary critique to analyse workshop effectiveness. System boundaries, whether explicit or implicit, define what is perceived as relevant to the scope of discussion and analysis (Churchman 1970; Midgley 2000; Ulrich 1983). Boundary critique, as a systematic and reflective analysis of system boundaries, has been used in OR practice to promote inclusion of relevant views (Churchman 1970), improve understanding of values inherent in boundary selection towards more ethical OR practice (Ulrich 1983, 1987, 2001; Ulrich and Reynolds 2010) and examine value conflicts (Midgley 2016; Midgley et al. 1998).

We propose an analytical framework that employs systematic boundary critique to improve the design and outcomes of multi-method OR workshops. An analytical framework provides structure and logic for conducting analysis in a systematic manner. Recognising the inevitability of boundary judgements in the generation of knowledge (Midgley 2011; Ulrich 2001), we assume that every step within an OR workshop design introduces explicit or implicit boundaries in the participants’ understanding of the problem space. These boundaries can, in turn, affect the outputs, such as new ideas and concepts. We therefore propose that analysis of the system boundaries inherent within each workshop step can help improve the design of creative OR workshops.

In our study, we demonstrate the application of this analytical lens as the basis for data analysis across five workshops focusing on new concepts for logistic and health support operations in the military domain. We provide an example of real-life refinement of OR intervention design through deliberate application of theory–practice cycles, facilitated by systematic evaluation of measurable indicators. While our study refers to a particular type of operations, the principles of the proposed analytical approach are relevant for OR practitioners designing consultative workshops in other fields, including the community, organisational and commercial sectors. The proposed analytical framework is particularly useful for OR practitioners who seek to generate innovative and creative solutions for real-life complex problems.

The remainder of this paper is structured as follows. In the Literature review section, we describe the key theories and debates in multi-method approaches and boundary analysis, before linking these to our proposed analytical framework. We then outline our study context and research questions. The Research approach section describes our core methodological principles grounded in Critical Systems Practice (Jackson 2006, 2010) and the key indicators for evaluating workshop effectiveness. We describe the results of the workshops with reference to our research questions, link these to workshop design refinements, and provide a discussion of the broader implications for OR practice. Finally, we address the internal and external validity of our approach across different domains and provide suggestions for further research.

Literature Review

The aim of this section is to present the relevant theories in multi-method interventions and boundary critique. Firstly, we outline the practical and theoretical aspects of multi-method interventions and chart the path to pluralism as the key paradigm for evolution of multi-method designs. Secondly, we describe the key theories in boundary critique and provide examples in OR practice. We conclude this section by linking boundary analysis with creative outputs in OR workshops as the foundation of our proposed analytical framework.

Multi-Method Interventions and Pluralism

The term ‘multi-method’ refers to the employment of multiple methods and techniques, which may come from different disciplines and paradigms (Mingers 2000). A ‘method’ is the application of specific techniques for a particular purpose, while a ‘methodology’ provides the underlying theory and principles for the use of particular methods (Midgley et al. 2017).

A multi-method approach is relevant in our context due to the transdisciplinary and complex nature of capability development and because of the multiple steps in the development of capability concepts: generation of initial ideas, prioritisation, development of concepts, and their evaluation. More broadly, for OR interventions, a multi-method design allows for the integration of different disciplines and examination of various dimensions of the problem space, including social and personal dimensions (Mingers 2000). In our context, these dimensions include organisational and mission requirements, impact on personnel on the ground, cost of implementation, and technical feasibility. Specific methods and techniques can be matched to multiple objectives and stages of the intervention, delivering a more comprehensive understanding of the problem space, allowing for a larger range of solutions and improving buy-in from stakeholders (Mingers 2000; Teddlie and Tashakkori 2009; Wright 2012).

Indeed, the use of multiple methods within one study is common across domains that require rational intervention in real-life situations (Howick and Ackermann 2011). Munro and Mingers (2002) report a survey of 167 instances of multi-method practice. Most combinations include two methods (less commonly three) with combinations of ‘soft’ methods being most common. A later review of multi-method case studies by Howick and Ackermann (2011) further describes four types of method mixing: comparison of different methods, integration of method elements into new ones, combining elements and combining methodologies. However, Munro and Mingers (2002) describe the justifications for specific combinations as vague, often based on what is perceived to be ‘necessary’, ‘appropriate’ or ‘familiar’. Howick and Ackermann (2011) refer to more specific reasons such as addressing disparate project phases, and balancing strengths and weaknesses of the individual methods. Both studies note the lack of theoretical grounding in multi-method studies.

In fact, multi-method approaches pose significant theoretical challenges, as all methods entail theoretical assumptions that may be at odds with each other (Midgley 2000). The efforts to create common theoretical frameworks for mixing methods have been driven by the desire to avoid ‘atheoretical pragmatism’ and ‘theoretically contradictory eclecticism’ (Wright 2012; Zhu 2011), which are perceived as scientifically invalid approaches. For a comprehensive account of the history, evolution and paradigmatic stances of various multi-method frameworks, we refer the reader to the thesis by Claire Wright (2012). Wright (2012) provides an account of established meta-frameworks such as System of Systems Methodologies (SoSM), Total Systems Intervention (TSI), Creative Design of Methods, Critical Appreciation, Critical Systems Heuristics (CSH), Systemic Intervention, Critical Systems Practice (CSP) and Mingers’ Multi-Methodology. The author further points to the critical systems thinking principles outlined by Jackson (2006) as one of the more robust theoretical frameworks, being grounded in principles of pluralism (leveraging a range of methodologies and methods), critical awareness (regarding the strengths and weaknesses of different methods), and improvement. Indeed, critical systems thinking underpins our own study design.

An argument against a-priori meta-frameworks has been put forward by Zhu (2011), who depicts them as limiting the types of questions that can be asked, preventing ‘crafting and negotiation’ of paradigms with the relevant actors, and incorrectly assuming practitioners to be asocial, apolitical and acultural reasoners. Furthermore, Zhu (2011) criticises the proposition that once methods are classified within particular tables or matrices (e.g. SoSM), they have to remain applicable only to the allocated problem type. Other authors point out that cognitive barriers to paradigm shifts, practitioners’ personal preferences, as well as time and resource constraints all affect the capacity for multi-paradigm interventions (Kotiadis and Mingers 2006; Midgley et al. 2017; Mingers 2000).

Zhu’s proposed solution is a pragmatist recommendation to start and end intervention designs in practice (Zhu 2011). Teddlie and Tashakkori (2009), on the other hand, call for ‘a middle ground between philosophical dogmatism and skepticism’, with an instrumental view of theories, as even conflicting perspectives can provide a better understanding of the problem space. Midgley (1997) draws a specific distinction between atheoretical pragmatism and methodological pluralism. The author points out that pragmatists often seek to develop a toolbox of methods based on what works in practice, whereas pluralists aim to understand why and how the methods work. In this respect, the proponents of both schools of thought aspire to develop a flexible and responsive practice.

Midgley et al. (2017) describe two levels of pluralism. At methodology level, pluralism means recognising different methodological approaches and allowing these insights to influence one’s own methodology. At method level, pluralism entails leveraging a wide range of methods to support the purpose of research. This pluralist approach is reflected in the principles of critical systems thinking and CSP described by Jackson (2006, 2010), which informs our study design. Importantly, pluralism entails learning and evolution, leading to dynamic, rather than static, intervention designs.

When reflecting on the effectiveness of our intervention designs, we chose to apply a specific analytical framework based on boundary critique, which provides a logical structure for further refinements.

Boundary Critique in OR Practice

A key aspect of systems thinking is the notion of boundary judgements. System boundaries are constructs (personal or social) that limit what knowledge, values and perceptions are deemed relevant (Churchman 1970; Midgley 2000, 2011). Ulrich (2001) and Midgley (2000, 2011) point out that the process of forming system boundaries is inevitable during knowledge production, whether it is done explicitly or implicitly.

Churchman (1970) highlights the importance of understanding system boundaries and links this to identification of the right stakeholders, particularly as people differ on the relevant facts for decision-making. Ulrich (1983) extends the discussion of system boundaries to understanding of values inherent in the selected boundaries and the implications for ethics of research. The author promotes emancipation, with identification of the true beneficiaries of particular decisions, as well as empowerment for those who may not participate in making decisions but who are affected by them.

In his subsequent work on Critical Heuristics, Ulrich (1987) describes boundary judgements as assumptions relating both to ‘whole systems judgements’ (i.e. decisions about what belongs to the problem space and what falls outside it) and to ‘justification break-offs’, which demarcate the relevant context for justifying the normative implications of study design – value judgements and recommendations for action. To ensure transparency of such boundary judgements, Ulrich’s Critical Heuristics (later CSH) framework proposes a checklist of twelve boundary questions that examine the sources of: motivation for the design, control, selected expertise, and legitimation and representation of those who would be affected. Ulrich (1988) further emphasizes the need to train participants, planners and decision makers in surfacing the boundary judgements inherent in the definition of a problem space, lest they go unchallenged and produce unequal distribution of decision power.

Midgley (1992) describes system boundaries as artificial lines with marginal elements that may be negotiated and modified. The author proposes a Critical Systems approach as a way to identify the primary (most obvious) and the secondary (other elements that affect the system) boundaries of analysis. Within this approach, the purpose of boundary critique is to elicit the secondary boundary and allow decisions on retaining or removing the primary boundary. In linking boundary decisions to value judgements, the author assigns the labels of ‘sacred’ (valued) and ‘profane’ (devalued) to the marginal elements to explore the value conflicts between different ethical systems that may influence boundary judgements. There are several examples of practical applications of this theory. Midgley et al. (1998) apply these principles of boundary critique to elicit a diverse range of stakeholder views and to examine marginalisation in multi-agency development of housing services. Midgley (2016) further applies boundary critique as a strategy for dealing with value conflicts in natural resource management. Ufua (2020) uses Midgley’s boundary critique theory to develop solutions for operational issues in commercial livestock operations. Yolles (2001) combines Midgley’s theory of boundary critique with viable systems theory to develop Viable Boundary Critique theory for exploring ideological and ethical conflicts in real-life problems. This approach is then applied to the exploration of political and ideological aspects of the Liverpool dock strike.

Another study that informs our research design was completed by Velez-Castiblanco et al. (2016). The authors develop Boundary Games Theory, which merges the notion of boundary judgements with relevance theory and the idea of language games. Boundary analysis is then used to assist OR practitioners in communicating their goals, understanding the context of their studies, and assisting with workshop design. Velez-Castiblanco et al. (2018) apply the Boundary Games approach in micro analysis of workshop interactions.

In our study, we are guided by the overarching principles of critical systems thinking, as outlined in the Research approach section. We also follow the example of Velez-Castiblanco et al. (2018) in applying detailed boundary analysis at the micro level of workshop design, albeit with a different focus: we analyse the system boundaries associated with specific aspects of workshop design, which may be introduced deliberately or unwittingly, and reflect on the effects these constraints may have on workshop outcomes.

Linking Boundary Critique to Generation of Ideas in OR Workshop Practice

The link between system boundaries and generation of ideas (one of our workshop aims) is mentioned by Midgley (1998), who points out that the process of generating ideas often involves suspending boundaries that are otherwise taken for granted. The author further proposes surfacing the boundaries and expanding them as much as possible. While this approach has previously been used to examine value conflicts and marginalised elements, in our study we illustrate how boundary critique can be used as an analytical lens to improve creative outputs in OR workshop practice.

Additionally, it is important to recognise the value of bringing in diverse viewpoints in transdisciplinary workshops (Churchman 1970). Apart from the initial selection of participants, specific workshop methods and techniques may be directed at ensuring that boundaries of discussion are such that different viewpoints are explored, for example through establishment of ground rules on mutual respect, prompts for exploring different aspects of the problem situation, or using ‘round the table’ discussion prompts to ensure that all voices are heard.

Our proposed analytical framework recognises that each step within a workshop introduces various explicit and implicit boundaries that may affect the way workshop participants perceive the problem space. These boundaries can constrain the scope of the discussion and provide limits to what is considered relevant.

For example, two common explicit system boundaries in our context are the capability of interest (such as health support) and the technology of interest (for example robotic systems). Implicit system boundaries may arise from the participants’ knowledge and experiences, which will influence the types of ideas that they put forward. There can be many other types and sources of boundaries within the discussion space. Some boundaries will be known to the workshop facilitator; other aspects relating to tacit knowledge and the participants’ opinions may be hidden. Some boundaries within an intervention design may be entirely reasonable and necessary to focus the discussion, whereas others may be detrimental and may result in the loss of useful ideas for technology applications. Furthermore, not all boundaries are within the facilitator’s control. In some instances, the OR practitioners may not be aware of the constraints that are introduced into the discussion through workshop design. We propose that systematic surfacing of system boundaries and understanding their effect on generating ideas can facilitate improvements in the design of creative OR workshops. In the next section, we discuss how we applied this analytical lens to address our specific research questions.
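The systematic surfacing of boundaries described above can be supported by a simple catalogue of boundaries per workshop step. The sketch below is a hypothetical illustration of one way such a catalogue might be structured, not part of the study’s actual toolkit; all names, categories and example boundaries are our own assumptions, loosely based on the examples in this section.

```python
from dataclasses import dataclass, field
from enum import Enum


class Control(Enum):
    """Illustrative categories for the facilitator's level of influence over a boundary."""
    FIXED = "outside facilitator control"
    NEGOTIABLE = "negotiable with stakeholders"
    DESIGNABLE = "changeable within workshop design"


@dataclass
class Boundary:
    description: str
    explicit: bool      # stated openly, vs. implicit in the design or participants
    control: Control


@dataclass
class WorkshopStep:
    name: str
    boundaries: list = field(default_factory=list)


def surface_boundaries(steps):
    """Group every boundary across all workshop steps by the facilitator's level of control."""
    grouped = {c: [] for c in Control}
    for step in steps:
        for b in step.boundaries:
            grouped[b.control].append((step.name, b.description))
    return grouped


# Hypothetical two-step workshop, echoing the examples discussed above.
steps = [
    WorkshopStep("Framing", [
        Boundary("capability of interest: health support", explicit=True,
                 control=Control.NEGOTIABLE),
        Boundary("participants' prior knowledge and experience", explicit=False,
                 control=Control.FIXED),
    ]),
    WorkshopStep("Ideation", [
        Boundary("technology of interest: robotic systems", explicit=True,
                 control=Control.DESIGNABLE),
    ]),
]

audit = surface_boundaries(steps)
```

Grouping boundaries this way mirrors the distinction drawn in our first research sub-question between boundaries outside the facilitator’s control, those that can be negotiated, and those amenable to change within the design.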

Research Questions

In our study, we examine the hypothesis that imposing different types of system boundaries within the workshop design will affect the creative outputs of OR workshops. We apply this hypothesis to five vignettes in the logistics and health support domain across five separate workshops. The purpose of the workshops is to generate ideas for the use of emerging technologies, and to develop systemic (holistic) capability concepts for the selected ideas. An example of this may be the use of drones to deliver supplies.

The activity of generating ideas is commonly referred to as ‘ideation’ (Gabriel et al. 2016), while the capacity to produce a large quantity of ideas is described as ‘ideational fluency’ (Kerr et al. 2009). Concepts are commonly defined as a deliberate ‘distillation of thoughts and ideas’ into a form that guides the direction towards achieving the desired outcomes (Dortmans et al. 2006). The structure of concepts is tailored to the specific objectives and, in this study, we use the term ‘capability concept’ to describe how emerging technology elements are used within the broader organisational and procedural structures to achieve the desired outcomes.

A systemic capability concept would then consider how selected ideas would fit within the wider organisational processes to achieve the desired capability of force sustainment, and what impacts this might have, thus exploring the links and relationships between different elements of capability-enabling systems. Hence, with consideration of the workshop aims, we formulate our overarching research question as follows:

  • How effective is the workshop design, viewed through the lens of system boundaries, at generating ideas for technology use and development of complex and novel systemic capability concepts?

This question represents a practitioner’s point of view. Taking the pluralist stance, with an interest in why a particular design might work better than another, we can also re-frame this into a related research question:

  • How can system boundary analysis be used as a lens to evaluate the effectiveness of workshop design?

In applying the analytical lens of boundary critique, we examined a number of specific sub-questions related to different types of system boundaries as our understanding of the problem space evolved:

  1. What explicit and implicit boundaries are introduced at each workshop step? Within this, our expectation was that there would always be some boundaries that are outside the facilitator’s control or influence, some boundaries that can be negotiated, and some that are amenable to change within the workshop design.

  2. Does minimising the focus on the current capability processes improve ideational fluency and novelty of concepts? Within our study, our expectation was that reducing these types of implicit boundaries would increase the number of ideas and concept novelty.

  3. Does introducing boundary-mitigating steps improve ideational fluency? Within our study we considered steps for individual ideation prior to group-based brainstorming (thus reducing implicit boundaries associated with hierarchies and groupthink), and application of creative thinking techniques, which are designed to expand the range of generated ideas.

  4. How do problem-structuring boundaries affect the complexity and novelty of generated concepts? Within our study, we considered explicit boundaries defining the technology of interest, the domain of application and the level of capability concept: tactical, operational or strategic. Strategic-level planning reflects national objectives and desired outcomes; the operational level refers to planning deployments into an area of operations; the tactical level is concerned with specific manoeuvres and engagements.

Research Approach

In this section we address four key aspects of our research approach. Firstly, we explain how our research was guided by the principles of critical systems thinking: pluralism, critical awareness and improvement (Jackson 2010). We outline how these principles led us to the selection of a multi-method intervention design tested within a case-based study with iterative refinements between each workshop. Secondly, we describe our implementation of a multi-method meta-framework based on Jackson’s Critical Systems Practice (CSP), with consideration of the different paradigms that form part of the framework. Thirdly, we detail our methodological considerations in evaluating the effectiveness of our workshop designs using specific measurable indicators for generation of ideas and concept development. We conclude the section by outlining how our workshop design evolved over five different workshops and by linking these design changes to our research questions.

Principles of Critical Systems Thinking within the Study Design

Critical systems thinking is underpinned by the commitments to pluralism, critical awareness and improvement (Jackson 2010). Pluralism entails understanding the strengths of different approaches and harnessing a range of methodologies, methods and techniques to help address the complexity of real-life problems (Jackson 2010; Velez-Castiblanco et al. 2016), which, in our case, is reflected in the construction of a multi-method intervention design. Pluralism also calls for ongoing evolution of methodologies with reflection on both theory and practice. We translate this principle into specific mechanisms for iterative refinement through the use of boundary critique.

In recognising the importance of critical awareness, we examined the contextual drivers for capability development (Ivanova and Elsawah 2018), considered the available multi-method meta-frameworks and their theoretical foundations (Ivanova 2018), and evaluated the appropriateness of specific methods, ranging from Problem Structuring Methods to systemic design and lateral thinking techniques (Ivanova and Elsawah 2018). Our evaluation of methods incorporated consideration of the fit to task in terms of specific outputs, the cultural fit, and the suitability within the allocated time and resource constraints, as well as acknowledgement of the preferences and previous experience of the facilitators.

The principle of improvement calls for reflection and learning towards better outcomes. In our study, the workshop design evolved through several theory–practice cycles (Eden and Ackermann 2018) across the five separate workshops, each examining a specific vignette. We assessed the effectiveness of each workshop design in terms of the selected indicators after each implementation. We then made refinements and changes to the workshop design for the subsequent workshops, as well as adjustments to the theoretical framework and the kinds of research questions that we explored. The overarching goal for improvement in our study is better decision support for capability investment decisions, with consideration of the stakeholders affected by these decisions.

In order to approach iterative refinement in a systematic way, we adopted a multiple-case study research structure (Yin 2018). Unlike controlled experimentation, case-based research allowed us to retain fidelity in applying the workshop designs to real-life concept development activities. In this way, each workshop presented us with a case that we could analyse with respect to patterns of specific variables and indicators. Each workshop design represented the synthesis of our knowledge and understanding at the time, with a view to achieving the best possible outcomes for the stakeholders, while recognising the need to work within set resource constraints and stakeholder priorities.

Thus, the overall workshop strategy views the five workshops as cases within a multiple-case study. The principle of pluralism leads to the implementation of a multi-method framework, recognising the value of combining different methods towards achieving the workshop objectives; it also calls for iterative refinements of intervention designs and underpinning theories. The principle of critical awareness translates to the initial review and selection of methods within the selected framework. The principle of improvement is reflected in evaluating the effectiveness of each workshop, which leads to refinements in the workshop structure for subsequent activities, reflection on the underpinning theory, and consideration of potential impacts from different perspectives.
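The theory–practice cycle described above can be sketched as a simple loop in which each workshop is a case whose measured indicators feed the refinement of the next design. This is a minimal illustrative sketch of the logic, not an implementation used in the study; the function names and stub values are our own assumptions.

```python
def run_case_study(initial_design, vignettes, run_workshop, evaluate, refine):
    """Theory-practice cycle: each workshop is a case; its indicators drive the next design.

    run_workshop(design, vignette) -> workshop outputs (e.g. captured ideas, concepts)
    evaluate(outputs)              -> measurable indicators (e.g. idea counts)
    refine(design, indicators, history) -> adjusted design for the next case
    """
    design, history = initial_design, []
    for vignette in vignettes:
        outputs = run_workshop(design, vignette)
        indicators = evaluate(outputs)
        history.append((design, indicators))
        design = refine(design, indicators, history)
    return design, history


# Stub demonstration with placeholder functions (purely illustrative values).
final_design, history = run_case_study(
    "design-v0",
    vignettes=[1, 2, 3],
    run_workshop=lambda design, v: {"ideas": 10 * v},
    evaluate=lambda outputs: outputs["ideas"],
    refine=lambda design, indicators, hist: design + "+",
)
```

The loop makes explicit the structure implied by the multiple-case study design: evaluation after each case, refinement before the next, and a retained history supporting cross-case comparison.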

Our project concludes with the final recommended workshop design. However, in the context of the broader academic community, we expect that the process of methodological refinement will continue beyond our study, both in similar domains of application, and in extending and tailoring our approach to other domains.

Furthermore, the principle of holistic consideration of interconnected elements within a system is core to any systems-thinking based approach. Within our study, this principle is reflected in exploring the links and relationships between the different elements of capability-enabling systems: human, organisational and technological.

Implementation of CSP Meta-Framework in Workshop Design

In our study, we selected the CSP framework (Jackson 2006, 2010) to guide both our own process of intervention design and refinement and the structure of the workshops themselves. Just as OR practitioners need to learn about and harness a range of methodologies, methods and models to respond to problem complexity in a critical and reflective way, the workshop participants face a similar challenge in exploring different technology use approaches and investment choices to achieve their objectives within a complex and uncertain environment. Hence, we used CSP in two ways: first, as research designers, to develop a research approach grounded in critical, reflective practice; second, as workshop designers, to structure the thinking process through which the workshop participants progress to develop capability concepts.

The contemporary CSP framework features four phases: creativity, choice, reflection and evaluation, all of which are informed by paradigm critique and critical reflection (Jackson 2006, 2010). Figure 1 outlines how these phases are reflected in our study at the level of the study design and at the level of the internal workshop structure.

Fig. 1

Application of the Critical Systems Practice meta-framework within the intervention design and refinement and within the workshop structure (please refer to Appendix Table 2 for a more detailed description of these methods)

Furthermore, the CSP framework specifically recommends considering the functionalist, interpretive, emancipatory and postmodern paradigms within the intervention design with the view that a successful intervention should demonstrate progress across all four (Jackson 2006, 2010). However, Jackson (2010) also recognises that, in practice, one meta-methodology is likely to be the dominant one. Our study, arising from the meta-methodology of systems engineering that underpins capability development (Camelia and Ferris 2016), is dominated by the functionalist/structuralist paradigm, which is focused on the specific required workshop outputs in line with stakeholder priorities and geared towards informing technology investment decisions. This is reflected in our selection of output-related indicators for assessing workshop effectiveness: numbers of generated ideas and characteristics of resulting capability concepts. However, our intervention design is also informed by reflection on the value brought in by the three other paradigms:

  • The interpretive paradigm focuses on the mutual understanding of the problem space and on exploration of different purposes and priorities while working towards a shared solution (Jackson 2010). Within our study design, we emphasize the importance of including voices from different domains, such as academia, engineering and technology, capability planning, and operators and personnel on the ground, recognising the importance of understanding the conflicting requirements and flow-on effects of investment decisions.

  • The emancipatory paradigm seeks to ensure fairness and to facilitate empowerment of persons who would otherwise not necessarily be involved in the decision-making process (Jackson 2010; Ulrich 1983, 1988). Our workshops are designed to incorporate the voices of technology operators, of personnel likely to be affected by deployment of the technology (such as potential battle casualties affected by the employment of specific health support solutions), and of maintainers and suppliers, thus seeking to build a more complete understanding of the impacts of specific decisions.

  • The postmodern paradigm emphasizes diversity, judging in terms of exception and emotion (Jackson 2010). Within our workshop and study design, we recognise the importance of diverse perspectives in producing successful outcomes. We incorporated specific conduct guidelines that emphasized letting all voices be heard, suspending judgement, using imagination, and allowing for humour. Our group discussion processes included ‘round-the-table’ prompts to ensure that all participants had a chance to voice their opinions.

Data collection during the workshops consisted of capturing the generated ideas and models on the white board, note-taking of key discussion points, collection of individual ideas recorded on paper, and collection of surveys relating both to the evaluation of the generated capability concepts and to the participants’ assessment of the activity itself. We used the workshop script template developed by Hovmand et al. (2012) to track any design changes. During data analysis for each workshop, we noted any new theories, questions, issues, proposed refinements and the reasons behind them.

Evaluating Effectiveness of Workshop Designs

Within our study we sought to evaluate the effectiveness of workshop designs in terms of generation of ideas and in terms of the complexity and novelty of future capability concepts developed during the workshops. Apart from the subjective activity assessments by the participants and the research team, we also analysed several objective indicators of workshop effectiveness.

Measuring Ideational Fluency

Anderson et al. (2014) define creativity and innovation as ‘the process, outcomes, and products of attempts to develop and introduce new and improved ways of doing things.’ Within our study, we focus predominantly on the creativity stage of the process, which is related to idea generation. There are different approaches to evaluating creative outputs, including judgement-based evaluation (e.g. using Likert-like scales), self-assessment, and collection of objective evidence such as patents (Anderson et al. 2014; Zhou and Shalley 2003). In our study, we focus on ideational fluency (the number of generated ideas), which can be measured in a more objective way by recording the number of ideas produced at different stages of the workshop. The underpinning assumption is that quantity of ideas will ultimately yield quality (McFadzean 1999; Osborn 1963).

In our observation of workshop outcomes, ideas evolve throughout different stages of the workshop; they combine and build on other people’s thoughts. However, academic literature does not provide us with a structural definition of an idea, or a way of tracing the evolution of ideas from disparate thought fragments through to a complete implementation concept.

Consequently, we developed a task-specific classification scheme based on inductive analysis of the workshop data and underpinned by the theoretical assumption that ideas are not static, but change, evolve and grow through the course of group interactions (Zhou and Shalley 2003). This classification scheme allowed us to code generated ideas according to the following definitions:

  1. Fragment (F) is a thought relating to a single aspect of technology use, such as constraint, context, requirement, technology type, function, capability effect, or impact. An example may be a simple note saying, ‘solar energy’.

  2. Extended fragment (EF) is a combination of two or more fragments that does not yet constitute a fully formed idea. An example is ‘use of solar panels for recharging radios’.

  3. Formed idea (FI) includes, as a minimum, technology, its function, and the desired capability effect. An example is ‘use of solar panels for recharging soldier-worn electronics, in order to reduce dependency on the supply chains’.

  4. Capability concept, in this study, is an exploration of how the selected technologies can be used within the wider organisational and operational processes, including elicitation of links between different types of system elements and discussion of potential impacts. An example would be a discussion of how and where solar panels would be used on operations, how they would be maintained and resupplied, and how they would be integrated within the organisational processes and tactics.

Where the content of ideas overlapped as they evolved through the discussions, we counted each new iteration in its new form, for example as a new extended fragment. The formed ideas that emerged at the end of the ideation stages were distinct from each other.
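As an illustration, the coding rules above can be expressed as a small decision procedure over the aspects of technology use that an idea mentions. This is a hypothetical sketch intended only to make the definitions concrete; the aspect labels and the function below are our own illustration, not a tool used in the study, and in practice the coding was performed manually by analysts.

```python
# Sketch of the task-specific coding scheme (illustrative, not study tooling).
# An idea is coded by which aspects of technology use it covers.
CORE_ASPECTS = {"technology", "function", "capability_effect"}

def code_idea(aspects):
    aspects = set(aspects)
    if CORE_ASPECTS <= aspects:
        return "FI"  # formed idea: technology + function + capability effect
    if len(aspects) >= 2:
        return "EF"  # extended fragment: combined, but not yet fully formed
    return "F"       # fragment: a single aspect only

# 'solar energy' mentions only a technology type -> fragment
print(code_idea({"technology"}))                                    # F
# 'solar panels for recharging radios' adds a function -> extended fragment
print(code_idea({"technology", "function"}))                        # EF
# '...in order to reduce dependency on supply chains' -> formed idea
print(code_idea({"technology", "function", "capability_effect"}))   # FI
```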

Measuring Complexity and Novelty of Generated Concepts

In this study, we sought to measure concept complexity because many potential impacts of new technologies emerge due to the interaction of the technologies with the wider system of organisational processes, other technologies, and human operators. Thus, elicitation of the elements and links in the wider system can help elicit potential impacts of technology investment decisions.

Doyle et al. (2008) distinguish between detail complexity (the amount of content in the model, such as the number of elements and links) and dynamic complexity (including feedback loops, nonlinear relationships, irreversible and adaptive paths, and time delays). We visualised the capability concepts developed during the workshops using causal chain mapping (Doyle et al. 2008), depicting a narrative-based chain of events within a specific scenario, augmented by discussion of enabling technologies. Fig. 2 provides a section from the Vignette 4 causal chain map to illustrate the structure of the resulting models.

Fig. 2

A simplified section of the causal chain map for capability concept produced in Vignette 4 (tactical health support)

In our study we measured detail complexity of the generated concept models as the number of nodes, paths and links, pathway lengths and branches. For dynamic complexity, we recorded the number of feedback loops and calculated the number of feedback loops per node. We assessed the novelty of the proposed future capability concepts by comparing the number of fundamentally new paths in the future concept model to the equivalent current capability concept model generated during the same workshop.
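For readers wishing to reproduce these indicators, the sketch below shows how the basic counts could be computed once a causal chain map has been transcribed as a directed graph. The node names are hypothetical (loosely echoing the solar-panel example earlier), and feedback loops are counted here as simple directed cycles; this is an illustrative calculation under those assumptions, not the tooling used in the study.

```python
def complexity(edges):
    """Detail complexity (nodes, links) and dynamic complexity
    (feedback loops, loops per node) of a directed causal chain map."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, [])

    # Count each simple cycle once: only extend the path through nodes
    # that sort after the cycle's designated start node.
    cycles = 0
    def dfs(start, node, visited):
        nonlocal cycles
        for nxt in adj[node]:
            if nxt == start:
                cycles += 1
            elif nxt not in visited and nxt > start:
                dfs(start, nxt, visited | {nxt})
    for start in adj:
        dfs(start, start, {start})

    n = len(adj)
    return {"nodes": n, "links": len(edges),
            "feedback_loops": cycles,
            "loops_per_node": cycles / n}

# Hypothetical fragment of a causal chain, with one feedback loop.
edges = [
    ("solar panels", "charged batteries"),
    ("charged batteries", "radio endurance"),
    ("radio endurance", "resupply demand"),
    ("resupply demand", "charged batteries"),  # closes the feedback loop
]
print(complexity(edges))
# {'nodes': 4, 'links': 4, 'feedback_loops': 1, 'loops_per_node': 0.25}
```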

Workshop Design Changes Across Five Vignettes

Our study progressed through several theory–practice cycles with continuing refinements of the workshop design. The process of decision-making for refinements encompassed de-briefs with the analytical team, reflection on the outcomes and the participant feedback, analysis of objective indicators and discussion of refinements framed in the context of boundary critique. Figure 3 outlines this evolution of workshop design.

Fig. 3

Changes in workshop design across five vignettes

In the Results section we outline how the application of boundary critique and examination of our four research sub-questions using this analytical lens assisted us in the refinement of the workshop design.

Results

The Results section is structured around our four research sub-questions. Table 1 summarises the workshop topics, participant numbers, duration and the key results.

Table 1 Results summary across five workshops

Identifying Explicit and Implicit Boundaries in the Workshop Design

To address our first research sub-question, we analysed the workshop steps in terms of the constraints that may be implied within them. Specifically, we asked the question: What implications would workshop participants draw regarding the scope of the discussion based on the workshop design and the information that is presented to them? We also considered other system boundaries that may have existed without explicit input from the workshop organisers through reflection on the workshop discussions and group dynamics. This analysis gave us the following list of system boundaries:

  • Explicit boundaries (EB):

    • EB1 – Domain of application/capability focus

    • EB2 – Level of concept (tactical, operational or strategic)

    • EB3 – Participants’ areas of expertise (based on explicit selection of participants)

    • EB4 – Technology of interest

    • EB5 – Information type within specific prompts, e.g. risks and benefits

    • EB6 – Specific technology use case

  • Implicit boundaries (IB):

    • IB1 – Participants’ tacit knowledge, interests and biases

    • IB2 – SME availability and potential gaps in expertise

    • IB3 – Hierarchical relationships within the group

    • IB4 – Focus on the current systems and processes

    • IB5 – Focus on capability gaps (vs technology-enabled opportunities)

    • IB6 – Credibility pressure not to bring up ‘out-of-the-box’ ideas

The identified system boundaries reflect both the domain of application and the purpose of the workshop. In applying this kind of analysis to different domains, OR practitioners would need to tailor the boundary list to their specific context. The explicit boundaries relate largely to structuring of the problem space and may be amenable to adjustments and negotiations within stakeholder discussions. The implicit boundaries may be inherent in some aspects of the workshop design or inferred by the participants without being explicitly defined in the information briefs or prompts. Some are not easily discerned – for example IB1: participants’ tacit knowledge, interests or biases. Others may be influenced through workshop techniques and sequencing – for example IB4: focus on the current systems and processes.
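To make the notion of a boundary configuration concrete, each workshop design can be represented simply as the set of boundary codes it introduces or leaves in play, so that successive designs can be diffed across vignettes. The per-vignette sets in this sketch are illustrative placeholders only, not the actual configurations shown in Fig. 4.

```python
# Illustrative sketch only: boundary codes follow the EB/IB lists above;
# the per-vignette assignments are hypothetical placeholders.
EXPLICIT = {"EB1", "EB2", "EB3", "EB4", "EB5", "EB6"}
IMPLICIT = {"IB1", "IB2", "IB3", "IB4", "IB5", "IB6"}

def design_diff(old, new):
    """Boundary codes removed from and added to a workshop design."""
    assert old <= EXPLICIT | IMPLICIT and new <= EXPLICIT | IMPLICIT
    return {"removed": sorted(old - new), "added": sorted(new - old)}

# Hypothetical configurations: the second design reduces the implicit
# focus on current systems and capability gaps (IB4, IB5).
vignette_1 = {"EB1", "EB2", "EB3", "EB4", "EB5", "EB6", "IB4", "IB5"}
vignette_2 = {"EB1", "EB2", "EB3", "EB4", "EB5", "EB6"}

print(design_diff(vignette_1, vignette_2))
# {'removed': ['IB4', 'IB5'], 'added': []}
```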

Surfacing the system boundaries inherent in the workshop design allowed us to represent the evolving workshop designs in terms of system boundary configurations as shown in Fig. 4, as a way of framing the discussion of refinements.

Fig. 4

Workshop designs represented in terms of configurations of system boundaries (EB = explicit boundary; IB = implicit boundary; boxes highlight steps for generation of ideas and future concept development steps)

Reducing the Focus on the Current Processes

In reflecting on the small number of ideas generated in Vignette 1 (9EF), we noted that a substantial part of the workshop was focused on the current capability processes rather than future ones; this was also questioned by the participants in their feedback. We hypothesized that this may have had a detrimental effect on the number of generated ideas and on concept novelty. Consequently, we considered strategies for reducing the relevant implicit boundaries IB4 and IB5 (implicit focus on the current systems and capability gaps). In Vignette 2 we replaced the SSM-based discussion of the current system and the aspirations survey with broad-ranging information briefs on emerging technologies and future operational environments, which allowed the participants to consider a wider range of possibilities. In Vignettes 3, 4 and 5 we also sequenced future capability model construction ahead of the current capability model construction.

In terms of ideational fluency, the number of generated ideas increased from nine extended fragments (9EF) in Vignette 1 to four fragments, 27 extended fragments and five formed ideas (4F/27EF/5FI) in Vignette 2. The number of ideas noted during concept modelling also rose from 19 to 27. The participants of the second workshop expressed positive feedback in terms of the activity being innovative and creative, as well as effective in capturing new ideas. These results were consistent with our assumption that ameliorating implicit boundaries IB4 and IB5 would facilitate generation of more ideas for technology use, as would reducing the overall number of system boundaries introduced prior to generation of ideas. Vignettes 3, 4 and 5 also generated a greater number of ideas for technology use. However, because they incorporated further design adjustments, with the addition of new ideation steps, we will discuss these further in the next section.

It is difficult for us to assess whether these adjustments in workshop design had any effect on the novelty of the generated concepts. Out of the five vignettes, only the model generated in Vignette 4 was significantly novel, in that it had substantively different paths compared to the equivalent current capability model. We designed the workshop process in Vignette 4 to minimise the emphasis on the current processes; however, this was also the case for Vignettes 2, 3 and 5, which did not generate substantively novel concepts; their models instead represented augmented versions of the current processes, with new technological solutions for essentially the same tasks and processes.

Expanding Ideation Steps

Following on from the initial positive results in Vignette 2, we considered whether more ideas might be generated by introducing steps for individual ideation prior to group-based brainstorming (thus reducing boundaries arising from group dynamics: IB3 and IB6), and by adding a creative thinking technique at the end of the brainstorming sessions as a way of expanding the participants’ paradigms (McFadzean 1999). The facilitators’ previous experience with different types of lateral thinking techniques led to the selection of simple approaches that are culturally appropriate and don’t require an extensive level of familiarity within the group (see Appendix Table 2). To assess the effectiveness of these measures, we compared the number of ideas generated in Vignette 2 (group brainstorming only) with that in Vignettes 3 and 4 (individual ideation, group brainstorming and creative thinking technique). The workshop results were as follows:

  • Vignette 2: group brainstorming – 4F/27EF/5FI; future capability model – 27 FI

  • Vignette 3: individual ideation – 2F/19EF/2FI, group brainstorming – 34EF/6FI, creative thinking technique – 5F/19EF/1FI (total 7F/72EF/9FI); future capability model – 31 FI

  • Vignette 4: individual ideation – 11F/23EF/2FI, group brainstorming – 4F/26EF/4FI, creative thinking technique – 9EF/2FI (total 15F/58EF/8FI); future capability model – 22 FI
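As a sanity check, the per-workshop totals reported above are the element-wise sums of the step counts. A minimal sketch for Vignette 4 (step counts transcribed from the list above):

```python
from collections import Counter

# Idea counts per ideation step in Vignette 4, keyed by idea type.
steps = [
    Counter({"F": 11, "EF": 23, "FI": 2}),  # individual ideation
    Counter({"F": 4, "EF": 26, "FI": 4}),   # group brainstorming
    Counter({"EF": 9, "FI": 2}),            # creative thinking technique
]

# Counters add element-wise, so the workshop total is a simple sum.
total = sum(steps, Counter())
print(dict(total))  # {'F': 15, 'EF': 58, 'FI': 8}
```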

Within the structured activity evaluation questionnaires, all three workshops received strong positive feedback from the participants with respect to development of creative and innovative ideas for technology use. As expected, there were overlaps in the ideas for technology use between the individual ideation step and the group brainstorming, as the latter serves to elaborate on the former. Ideas developed with the use of the creative thinking technique tended to be more novel and unique, although not always feasible. These results supported our hypothesis that adding individual ideation and creative thinking steps, as a way of ameliorating implicit boundaries arising from group dynamics, would increase ideational fluency in creative OR workshops.

The number of ideas generated in Vignette 5 (also with the same three ideation steps) was even higher with a total of 70F/88EF/3FI. However, these results were confounded by the larger than usual number of participants (13 participants in Vignette 5 versus 9 participants in Vignettes 2, 3 and 4).

In addition to the ideas generated during the ideation step, the workshop participants extended ideas for technology use within the capability modelling step as well. The models generated in Vignettes 1, 2, 3 and 4 incorporated 19, 27, 31 and 22 ideas for technology use, respectively. Some of these ideas were new; others expanded on the previous discussions. In Vignette 5, the workshop discussions didn’t generate a complete model (as discussed in the next section); however, the participants expanded on eight separate technology ideas during the capability modelling step. This suggests that development of ideas doesn’t end with the prescribed ideation steps; rather, the latter serve to generate initial thoughts that evolve through subsequent system modelling. This observation resulted in an additional design refinement in Vignettes 4 and 5 to remove the boundary EB6: selection of a specific technology use case either at the start of the workshop or during the ‘Choice’ step of prioritisation. The fourth workshop subsequently produced the most novel capability concept model, whereas Vignette 5 produced detailed discussion of novel ideas for technology use that would have resulted in a novel concept had they been fleshed out further.

Effects of Problem-Structuring Boundaries

The final research sub-question that we examined in our study concerns the effects of problem-structuring boundaries on the characteristics of the generated concepts. This research question arose following a less successful concept development outcome in Vignette 5 compared to the preceding vignettes and due to the feedback from the participants regarding lack of clarity in the way the problem was defined. The relevant boundaries include EB1 – particular capability or domain of application, EB2 – level of concept (tactical, operational or strategic) and EB4 – specified technology of interest. These boundaries often reflect the stakeholder priorities, which are negotiated during the planning stages of the workshops and are reinforced during the introductory briefs. EB4 is also inherent in focused technology briefs, which were given in Vignettes 1, 2 and 3.

We analysed the problem structuring boundaries and the resulting capability concept characteristics across five vignettes, as shown in Fig. 5.

Fig. 5

Explicit problem-structuring boundaries and characteristics of generated concepts across five vignettes (EB = explicit boundary; IB = implicit boundary; N/A = not applicable)

The comparison of Vignettes 4 and 5 is of most interest in terms of the problem-structuring boundaries that relate to specifying technologies of interest and domain of application. In both cases, we made a deliberate attempt to reduce the focus on the current systems, providing broad-ranging information briefs instead. In both cases, the ‘Choice’ step was omitted in order to avoid locking the participants into a specific use case (removing EB6), so as to allow further evolution of ideas. The key difference between the two vignettes was that the problem was defined in terms of capability for Vignette 4 (health support), without specifying any technologies of interest, whereas in Vignette 5 the problem space was defined in terms of a specific technology type (power and energy), without specifying the capability focus. The Vignette 4 workshop resulted in a novel (albeit not very detailed) concept, with extensive discussion of impacts. The participants added a number of new technologies to the concept, including ideas for technologies that do not yet exist, such as an automated, deployable patient treatment module. The participants’ feedback was positive overall with respect to exploration of innovative solutions. The Vignette 5 workshop did not produce a complete systemic concept, but rather resulted in further discussion of several disparate ideas for technology use, despite producing a large number of ideas during the ideation steps. The participants’ main critique in Vignette 5 concerned the lack of clarity around problem framing. This suggests that problem framing for development of capability concepts needs to be bounded by the capability, rather than by the technology of interest. This runs counter to the common practice of identifying emerging technologies of interest and building concepts of use around them.

In comparing problem-structuring boundaries between Vignettes 1 and 2 (with more complex capability concepts) and Vignettes 3 and 4 (with less complex concepts), the key difference was the explicit boundary EB2 – level of concept. Workshops with higher-level, operational focus produced more complex models, whereas lower-level, tactical focus produced simpler models, both in terms of detail and dynamic complexity. This is logical, as operational focus encompasses a greater number of interacting system elements.

Potential Confounding Factors

In discussing the workshop outcomes with respect to our research sub-questions, we need to consider potential confounding factors that can affect the validity of our research results:

  • Due to external constraints, the workshop duration was reduced from three days for Vignette 1 to 1.5 days for all other workshops. The difference in length may have contributed to the substantively greater detail complexity of the capability concept developed in Vignette 1, as participants had more time to develop the model. We would argue, however, that the operational (rather than tactical) level of discussion (EB2) had a more substantive impact. The effect of the latter is also evident in comparing the complexity of models generated in Vignette 2 (operational focus) with Vignettes 3 and 4 (tactical focus).

  • The number of participants increased from seven in the first workshop, to nine for Vignettes 2, 3 and 4, and thirteen for Vignette 5. It is likely that the greater number of participants in the last workshop also contributed to the higher number of ideas generated compared to Vignettes 3 and 4 where similar ideation steps were used.

  • As can be expected, the facilitation experience and knowledge of our research team increased over time, improving our skills and confidence in conducting the workshops. However, we expect that ongoing refinements in the workshop structure and changes in the composition of the facilitation team would have mitigated this as a confounding factor in assessing workshop outcomes.

  • Group dynamics are commonly recognised as an influencing factor in OR workshops (Franco and Montibeller 2010). This factor can reduce the free flow of ideas if one or two members of the group dominate the discussion. We noted a significant dominating effect from particular participants in Vignettes 4 and 5. We implemented individual ideation steps and facilitation of group discussion with ‘round the table’ checks in order to mitigate this effect.

  • Each workshop featured a different group of participants. Vignette 1 participants represented a greater range of ranks and military domains compared to other Vignettes. We did not observe any significant constraints on the flow of the discussion in Vignette 1 connected to the disparity in ranks. The participants also commented positively on this aspect. Participants in Vignettes 2, 3, 4 and 5 included personnel from different areas of logistic operations with similar ranks. Additionally, we sought at least one participant from the science and technology community to provide technology expertise in each workshop. Based on our observations, we did not note significant effects from variation in group composition on generation of ideas or concept development.

Discussion

We have structured our discussion around four topics. Firstly, we discuss the internal and external validity of our study findings. We then draw out the implications for OR workshop design and discuss the contribution to the practice of transparent methodological evolution. We conclude our discussion with suggestions for further research to extend the understanding of boundary critique in the context of creative OR workshops.

Internal and External Validity of Study Findings

In addition to the confounding factors discussed above, we recognise that introducing more than one change into the workshop design placed greater significance on participants’ feedback and discussions with the facilitation team in terms of understanding the reasons for changes in the objective indicators. For example, the examination of the problem scoping boundaries in Vignette 5 was prompted by the participants’ consistent remarks regarding problem definition. The internal validity of our study findings is thus strengthened through the use of both subjective assessments and objective indicators of workshop effectiveness.

We further examined the validity of our coding scheme for classifying the generated ideas as fragments, extended fragments or formed ideas. This is a task-specific, inductively generated coding approach that allowed us to trace the evolution of ideas throughout the workshops. We recognise two key limitations in selection of this indicator and in the use of an inductively derived classification scheme. Firstly, there is a small risk of inconsistent classification by different analysts, particularly in classifying participants’ ideas as function vs capability effects, which can be the key differentiator between an ‘extended fragment’ and a ‘formed idea’. Coding consistency can be improved through extensive training and refinements with use of several coders where resources allow for this (Weingart 1997; Franco and Rouwette 2011), which was not the case in our study. However, the relatively simple definitions of different idea types ameliorate the risks associated with classification reliability and the total tallies of idea numbers would not be affected by any minor differences in the counts of EF vs FI. Unitising reliability (consistency in identifying discrete units for classification) is addressed in the process of data capture, where the ideas are captured as discrete segments of information by the facilitator and by the participants themselves. Validity of coding schemes also pertains to how well they capture the information they are designed to obtain (Franco and Rouwette 2011). In our study, this relates to both the number and the nature of the ideas, underpinned by the theoretical assumption that ideas evolve over time, which is logically consistent with our coding design.

Secondly, while we specifically focused on the number of ideas generated in the workshops, we did not assess the novelty and usefulness of the initially generated ideas (we only assessed the novelty of the resulting concepts). Novelty and usefulness are commonly recognised as key aspects of innovation and are often assessed by independent expert panels with the use of Likert-like scales (Anderson et al. 2014; Ng and Feldman 2012; Zhou and Shalley 2003). This type of analysis could further strengthen evaluation of workshop effectiveness. We did not conduct such an assessment due to resource constraints, and because of the strong temporal effect associated with emerging technologies: what seems like a very novel idea at the start of the two-year period over which the workshops were conducted can come across as ‘old news’ by its end. Hence, separate assessment panels would need to be set up immediately after the conclusion of each workshop.

Generalising the findings of a study such as ours requires critical awareness of the complex interplay of factors in real-life problems, many of which remain outside the control of OR practitioners. Indeed, the selection of a case-based research approach (Yin 2018) allowed us to retain that richness of real-life complexity. It also, however, requires us to talk about patterns of variables across the five workshops, rather than making universally generalisable claims. White (2006) points to particular challenges of measuring effectiveness of real-life interventions: every instance is context-specific; there is a large number of confounding factors; and validation as a comparison with an equivalent system in the real world is largely impossible. The external validity, or generalisability, of our study findings thus falls within the range of ‘middle-range theories’, which are context-specific (Midgley et al. 2017; White 2006). This means that the outcomes are affected by both context and mechanisms; a specific workshop design may work in one situation but not another.

In our case, the broad principle of applying boundary critique to facilitate refinement of OR workshop designs can be directly translated to OR practice in other domains. The practical aspects of data capture for evaluating workshop effectiveness (such as use of established workshop script templates) are similarly relevant across different sectors. However, the relevant system boundaries, coding classifications, and specific research questions will depend on the context.

Implications of Study Findings for OR Workshop Design

Jackson (2010) wrote that ‘critical systems researchers do not claim to know the answer in advance or peddle the same solution to all problems in all circumstances’ but rather ‘seek to be holistic and to ensure that theory both underpins practice and is tested in practice.’

Our workshop design is grounded in the theory of pluralism in multi-method interventions. A key aspect of pluralism is the idea that methodologies are not static and will evolve through practice. In our study, we applied this theory to iterative workshop design refinement, using specific indicators of workshop effectiveness to analyse our results. This allowed us to refine our workshop design in a transparent and justifiable way, expanding our understanding of different implementation options within the starting framework of CSP.

The specific contributions of each workshop towards addressing the research questions within the overall workshop strategy are as follows:

  • Vignette 1: Reflection on the small number of ideas generated during ideation steps led to the selection of the analytical lens based on boundary critique with identification of explicit and implicit boundaries introduced within the workshop structure (Research sub-question 1). Specific refinements towards design of the next workshop included measures to reduce the focus on current processes (Research sub-question 2). This included replacement of SSM-based elements with broad information briefs.

  • Vignette 2: Reflection on improvement in the number of generated ideas led to further measures to reduce the focus on the current systems (Research sub-question 2), including a change in sequence in development of future capability concept model ahead of the equivalent current capability model. Further refinements included extending the ideation steps to expand the boundaries of the discussion (Research sub-question 3).

  • Vignette 3: This vignette structure incorporated expanded ideation steps (Research sub-question 3) and a change in sequence, with construction of the future capability model first to reduce focus on the current systems (Research sub-question 2). The results demonstrated positive effects from both measures. Further evaluation of the results and observation of the evolution of ideas from the ideation steps through to capability concept modelling resulted in amending the ‘Choice’ step in the subsequent workshops from selection of a specific use case to selection of an indicative scenario instead.

  • Vignette 4: Major changes included elimination of focused technology brief and modification of the ‘Choice’ step to avoid locking participants into specific technologies and use cases during model construction. Results showed positive effects of the measures for reducing focus on the current systems, expanding ideation steps, and avoiding locking into a specific technology use case before development of the capability concept, with development of a substantively novel (albeit simpler) concept. The reduction in complexity of the concept led to reflection on the role that the level of the scenario (tactical, operational or strategic) plays in the resulting concepts.

  • Vignette 5: Despite a large number of ideas generated during the ideation steps, the workshop did not generate a systemic capability concept beyond more detailed discussion of specific technology use cases. Reflection on the results highlighted the difference between Vignette 4 and Vignette 5 in terms of boundaries related to problem structuring (around capability versus around technology type). This led to exploration of the workshop results from the perspective of Research sub-question 4 regarding the effects of problem-structuring boundaries.

The findings of our study contribute to the theory and practice of multi-method OR workshops in several ways.

Firstly, we have demonstrated that any workshop can be systematically analysed in terms of the system boundaries that each step introduces into the problem space. In this, we are guided by the recognition in the OR literature that the process of choosing system boundaries is inevitable in the knowledge generation process and can be explicit or implicit (Midgley 1992, 2011; Ulrich 2001). Explicit boundaries in our study relate predominantly to the way the problem space is structured and what kinds of questions are posed to the participants. These boundaries can be controlled within the workshop design, although some of them require negotiation with respect to stakeholder priorities. Implicit boundaries may be more difficult to identify, but can have significant effects on workshop outcomes. This was particularly evident with boundaries related to conventional ways of thinking in Vignette 1. Other implicit boundaries, such as participants’ tacit knowledge, previous experiences and biases, remain largely outside of the control of OR practitioners. These cannot always be surfaced or influenced within OR workshops in temporary teams.

In considering the effects of the implicit boundary relating to the focus on the current systems, we have demonstrated that reducing the focus on the current way of doing things increased the number of ideas generated in OR workshops, as did the introduction of broad-ranging information briefs. This is broadly consistent with the OR literature, which suggests that expanding the boundaries of the discussion (in this case by not constraining them to the existing processes) is likely to generate more creative outputs and new ideas (Midgley 1998). In generalising this finding to OR workshop design, we would point out that reducing focus on the current processes may be valuable in workshops seeking innovative solutions and new approaches to real-life problems. However, in workshops seeking to identify and address existing capability gaps and requirements, it would be perfectly valid to explore the existing system in detail prior to ideation.

Expanding the ideation steps within the workshops to incorporate individual ideation (thus reducing implicit boundaries arising from group dynamics) and introducing creative thinking techniques had a significant and sustained positive effect on the number of generated ideas across the workshops. This is consistent with the findings of previous studies in creative thinking and brainstorming techniques (McFadzean 1999). Furthermore, we have observed positive effects with removal of a problem scoping boundary that defines a specific technology use case before the full capability concept is developed. We propose that this allows for further evolution and refinement of technology utilisation ideas, which is consistent with the understanding of how ideas form (Zhou and Shalley 2003).

Finally, we show that the explicit boundaries relating to the way the problem space is defined can have a significant effect on the generated concepts. This is consistent with the key notion of system boundaries in systems thinking literature, which highlights boundary judgements as crucial in defining what is relevant in the analysis of a given problem space (Midgley et al. 1998; Reynolds and Holwell 2010; Ulrich 2001). All our workshops where both the technology of interest and the domain or capability focus were defined resulted in complete systemic concepts with a comprehensive discussion of potential impacts. Importantly, however, a more novel systemic concept was generated when we implemented a workshop design which defined the problem space in terms of the required capability, without specifying any particular technology of interest. This concept incorporated many emerging technologies, including ideas for yet-to-be-developed technological solutions. The workshop where the problem space was defined in terms of technology type, but not capability or use case, did not generate a systemic concept and received negative feedback from participants in terms of lack of clarity in problem definition.

This suggests that, in designing workshops for development of capability concepts, the critical explicit boundary for problem definition is the desired capability, rather than a specific technology. This result is logical if we remember that the aim of the workshops is development of capability concepts, and that capabilities are underpinned by complex systems of technological, organisational and human elements, rather than being enabled by any one technology. However, it is significant to highlight this finding as the current approach to exploiting opportunities presented by emerging technologies often involves identifying specific technologies of interest and developing use cases and concepts around them (Ivanova et al. 2020).

Our discussion of the results would not be complete without reflecting on the effectiveness of our workshops in terms of progress across the other three key paradigms of CSP (beyond the functionalist/structuralist discussion of the outputs in terms of the selected indicators). The interpretive paradigm is reflected in enhancing mutual understanding and development of shared solutions. The former is affirmed in the consistently positive feedback from participants regarding their improved understanding of the systems in question due to exposure to different views and experiences. The latter is reflected in the development of shared capability concept models and the discussion of impacts reflecting voices and opinions from different backgrounds and experiences.

The emancipatory paradigm is reflected in incorporation of opinions and perspectives that may not be normally part of the capability development decision process: namely the operators on the ground, other potentially affected personnel, and technologists. The ground rules set during the preliminary information briefs at each workshop served to facilitate a free flow of discussion irrespective of relative ranks and this was positively assessed by the participants themselves.

We addressed the postmodern perspective in incorporation of steps that encouraged free expression, suspension of judgement, and the use of creative and lateral thinking techniques to explore ideas outside of the participants’ normal paradigm. We achieved good results with increasing the novelty of the ideas within the discussion, particularly with introduction of lateral thinking techniques that allowed for humour and speculation beyond the current state of technological development.

Contribution to the Practice of Methodological Evolution

In tracing the evolution of our workshop design, we sought to contribute to the practice of iterative refinement and methodological evolution in real-life OR practice. As has been pointed out by numerous OR practitioners, accounts of OR interventions are often sanitised to make emergent methods appear pre-planned (Gregory et al. 2020; Midgley 2000; Ormerod 2014). As a result, selection of OR methods is often erroneously treated in a toolbox manner, using an a priori framework to choose the ‘correct’ approach (Velez-Castiblanco et al. 2016).

In our study we demonstrate how OR workshop design can benefit from refinements through iterative theory/practice cycles. While we initially used an established meta-framework for method sequencing (Jackson’s CSP) and several established methods (such as SSM), subsequent iterative analysis of workshop effectiveness led us to make substantial changes to the workshop design (as outlined in Fig. 3). In particular, we removed SSM-based modelling of the current systems at the start of the workshops. Although SSM-based elements facilitated development of a shared understanding of the problem space for workshop participants, the same elements also introduced a strong implicit boundary by locking the participants into the current processes. Replacing these steps with broad-ranging (and boundary-expanding) information briefs had a positive effect on the number of ideas generated in the workshops. Additionally, we removed the ‘Choice’ step within the meta-framework after the first three workshops, as we could see value in allowing further evolution of technology use ideas during facilitated modelling of capability concepts, rather than locking down specific use cases prior to concept development.

In a broader sense, the power of the perspective outlined in this paper lies in its ability to identify the conditions under which the impact of OR workshops is enhanced by critical systems thinking framing and boundary critique, with a view to developing more effective OR workshop practices. These conditions pertain to the development of options for future directions in complex, sociotechnical, real-life problem situations that require a critical awareness of the suitability of various methods and are strengthened by reflection and refinement through practice.

Midgley (1998) points out that practice allows reflection on theory, yielding insights with wider significance than the specific application. In this, theory doesn’t need to exist at a ‘grand’ scale but may relate to ways of looking at problem situations and methods. In our study, the process of reflection on the workshop results in practice contributed to the following reflections on the theoretical foundations of capability development:

  • Introduction of the boundary critique-based data analytical lens following reflection on the possible causes for the low number of ideas generated in Vignette 1.

  • Exploration of the interactions between multiple technological elements enabling capabilities in lieu of the more traditional approach of focusing on single technologies of interest. By comparison, the more common practice of developing concepts of use for specific technologies often leads to single-technology stovepipes in further capability development in the military domain, often at the expense of examining system-wide impacts and ongoing sustainment requirements (Ivanova et al. 2020).

  • Extending the understanding of capability-enabling systems to recommendation for capability-focused (rather than technology-driven) problem structuring in development of capability concepts.

In making these changes, we adopted the advice of Midgley (2000) and Gregory et al. (2020), recognising that it is just as important to reflect on problems and challenges encountered in OR practice to inform evolution of theory and methodology, as it is for learning to proceed in the other direction. Thus, the process of real-life OR practice becomes less about selecting methods from a toolbox, than about building and refining one’s own tools based on the understanding of broader theoretical principles and testing them in practice.

Further Research

Whilst in our study we explored the use of boundary critique for improving creativity in OR workshops, the research questions that we considered are not exhaustive. Further studies can examine different types of boundaries and the methods for modifying and shaping these boundaries. An example of this may be exploring sets of conditions or specific prompts that would improve the novelty of the generated concepts – something that we weren’t able to explore in detail within our study. Any conclusions derived from the application of this analytical approach would also be strengthened by replication in similar and in different contexts. Similar contexts may comprise studies in various aspects of military operations, or those conducted in military organisations of other countries. Different contexts with similar objectives may relate to studies in logistic or other organisations that seek to leverage emerging technologies to explore new operating models.

Additionally, OR practitioners may choose to explore different analytical lenses to facilitate the process of iterative refinement. Midgley (1998) suggests that multiple interpretations of a problem situation will give rise to different paths for action, for example within intervention designs. This, in turn, enhances the flexibility of OR practice. Mingers (2000) discusses the intellectual resources system, which includes theories, methods and techniques that an OR practitioner can call on in designing interventions. The same notion can be extended to structures that create the logical framework for analysing the effectiveness of interventions. There is a range of methods and techniques that an OR practitioner can use to formalise and guide iterative refinement of intervention designs. Our study contributes to this discussion by translating the theoretical principle of iterative refinement into a demonstrated application using specific methods and techniques. Further studies of practical application will help validate this approach and mature its integration into broader practice.

Conclusion

In our study we examined creativity-related outcomes in five OR workshops through application of an analytical lens based on boundary critique. We proposed that each step within an OR workshop introduces boundaries within the discussion. Furthermore, we hypothesised that analysis of these boundaries can be used to improve OR workshop designs, with specific focus on improving creativity-related outcomes: ideational fluency and generation of novel and comprehensive capability concepts. In applying this analytical lens to data analysis across five vignettes, we discovered that reducing the number of boundaries prior to ideation and reducing the focus on the current systems was conducive to generating a greater number of ideas, as was the introduction of additional steps for individual ideation and creative thinking techniques, which also served to bring in diverse viewpoints. The complexity of the generated concepts corresponded most closely to the level of the scenario (operational versus tactical). We examined different problem structuring boundaries and observed positive effects with framing around the capability of interest, rather than specific technologies. The proposed boundary critique lens is not the only analytical lens that can be applied towards improvement of OR workshop designs, and we recommend further exploration of alternative approaches, tools and techniques.

Our study contributes to OR practice at the levels of method and methodology. We make a contribution at the method level for OR workshop design by proposing a boundary critique-based analytical lens that can be used in a flexible manner by OR practitioners. The broad principle of examining the effects of workshop methods on system boundaries is applicable across any domain of OR practice. The specific elements of boundary types and indicators that are selected to examine workshop design effectiveness need to be tailored to the context and the purpose of the workshop.

We also contribute to the capability development practice in proposing a systems-thinking led approach to development of capability concepts. Within this approach, the focus is on the interconnected and interacting technological elements that are considered holistically within the overall organisational procedures and practices, and set within operational environments.

We further contribute to the methodological discussion in demonstrating the evolution of OR methodology through iterative theory–practice cycles. We illustrate that, far from being a rigid theory-methodology-method process, real-life OR practice involves reflection on the successes and failures of method implementation, with a view to evolving the methodology and developing new theories. Such was the case with the development of our proposed analytical framework for the use of boundary critique. Following our initial application of this analytical framework, we extended our understanding of the different types of boundaries and examined the effects of adjustments in workshop design to shape these boundaries. We recommend improving on our approach through application of analyst training and consistency assessments for any selected data coding schemes, and (where creativity is the goal) supplementation of ideational fluency indicators with judgement-based assessment of the novelty and usefulness of the generated ideas.