1 Introduction

Computer-supported collaborative design (CSCD) is the application of technologies to support a design process involving several parties, usually in distributed locations. Technologies which support CSCD and the technology functionalities which facilitate collaborative design are essential for distributed design in digital or mixed environments. These technologies provide innovative methods of communication, co-operation and co-ordination within an engineering design context, with the potential to foster greater collaboration within internal teams, with external collaborators and across boundaries (Hicks 2013; Sarka et al. 2014). In an increasingly globalised world, CSCD is essential to support the ever more complex product design process (Brisco et al. 2018a).

Technologies which support CSCD come in the form of software or web applications, depending on the features and functionality offered to users. The definition of CSCD can be extended to include hardware which makes CSCD functionality possible through innovative interfaces, e.g. cave automatic virtual environment (CAVE), virtual reality and augmented reality technologies and telepresence robotics (Ahram et al. 2011). Commercial technologies which support CSCD tend to focus on a certain type of media, functionality or subject to differentiate themselves from their competitors. Technologies might include everyday communication methods such as e-mail, video conferencing and messaging (Shen et al. 2015) or specialist technologies such as digital whiteboards, groupware systems and knowledge management systems (Hsu 2013; Borsato et al. 2015; Shen et al. 2015). These technologies have also been adopted by general consumers, with free technologies offering functionalities such as document versioning control, social networking, and electronic co-ordination systems (Brisco et al. 2016).

Engineering design teams can utilise CSCD technology functionality using, for example, multi-threaded conversations for commenting and replying, tagging team members for increased awareness, and liking posts to show agreement (Gopsill 2014). This functionality has the potential to improve teamwork and change the way engineers work in the future (Zhao and Rosson 2009). A change is beginning to be seen in the way small businesses and students use this technology for design project communication and decision-making (Gopsill 2014). Many technologies which support CSCD also support social communication in addition to professional (project-related) communication, which is beneficial for successful engineering design teams. Törlind and Larsson (2002) stated: “The highly informal, accidental, spontaneous communication that characterizes everyday work has an impact on a design that sometimes is even greater than that of formal communication”. This has been referred to as the watercooler moment in organisations, which offers workers an opportunity to socialise and discuss work informally.

To fully support CSCD, a range of functionalities is required. These are often featured within different technologies, and consequently a range of different technologies is required to fully support CSCD (Mamo et al. 2015). Issues arise when technologies have overlapping functionalities offering a choice of communication method, which can lead to confusion about project progress and future direction, and to social tension between team members (Brisco et al. 2017). To minimise these effects, it is necessary to carefully select technologies that support the requirements of the design project and the individual team members.

Methods for technology identification, evaluation and selection exist, typically reported in computing journals detailing computer systems selection or in business management journals detailing the decision process within technology management. The process tends to be holistic in nature (Sivunen and Valo 2006; Shea et al. 2011; Bohemia 2013), guided by recommendations and sometimes by a framework (Chan et al. 2000) derived from previous experiences which may or may not be relevant. This can lead to inappropriate technologies being selected which do not offer the functionality to support CSCD projects. Technology selection also crosses over with the requirements of the project or task, and of the team members (Gibson and Cohen 2004). This makes the selection process unique to each problem, requiring specialist knowledge of the entire system. A gap in this research is that selection methods focus on the requirements of the technology functionality and not of the individual team members, which then requires team members to be trained in the use of technologies.

Technology selection is important for both business and socio-technical interests. Appropriate technology selection has been demonstrated to improve both technology capacity and technology management capacity, which in turn increases innovation performance (Hao et al. 2007). Torkkeli and Tuominen (2002) reflected, “A company can waste its competitive advantage by investing in wrong alternatives at the wrong time or by investing too much in the right ones. It is more and more difficult to clarify the right technology alternatives because the number of technologies is increasing, and technologies are becoming more and more complex”. Since 2002, this observation has become increasingly pertinent, and the number of technologies which can support CSCD has grown enormously, requiring assistance in selecting the most suitable ones. Socially, technology has the ability “to provide a more collaborative method of communication” (Gopsill 2014) by bringing people together in virtual environments where the alternative would be too expensive or time-consuming. However, this is not to say that technology can or should replace face-to-face co-located working, as this cannot be proven for all contexts (Hatem et al. 2012).

It is important to select the right technologies: a successful selection of technology has the potential to minimise risks, resulting in great benefits to businesses (Rassias and Kirytopoulos 2014). The use of technology can provide benefits for a company or impose barriers if improperly conceived. Cross (2014) surmised, “… they (companies) may dive into using it (technologies) without forethought or proper risk assessment. On the other end of the spectrum, it may be treated as bleeding edge technology that is largely untried, untested, and/or poses a substantial threat”. In these cases, the response of large companies is largely to block access rather than to understand the potential benefits and devise ways to utilise the technology safely. There are also issues of protecting interests when using technology, such as assets, copyrighted materials and other intellectual property (Cross 2014). This is especially important when selecting technologies for new product development teams, as security features may not be suitable for collaboration.

Categorisation of the factors that influence successful collaboration has been well documented in the literature over the years. Mattessich and Monsey (1992) first published a consolidated list of requirements for successful co-located collaboration based on the literature of the time. Some of the factors in this list are still relevant to co-located studies today, whilst others are not (McDonnell 2012; Fain et al. 2013; O’Riordan 2014).

Efforts to create similar lists for digital collaboration have been made, including cross-cultural influencing factors based on the literature from 1977 to 2005 (Markus et al. 2007), risk management guidelines for distributed software development for design teams based on contemporary case study data from 1995 to 2002 (van Grinsven and de Vreede 2002), collaborative product development factors based on case study analysis and survey results around 2007 (Elfving 2007), collaborative product development characteristics for an agile product development system based on learning from collaboration with industry (Reich et al. 1999), and key factors for successful collaboration in the integration of CAD/CAE environments based on case studies between two universities and an industry partner in 2006 (Maier et al. 2009); many of the same efforts have been made for related concepts such as communication (Maier et al. 2006), co-ordination (Duffy 2002) and co-operation (Hosnedl et al. 2008). The lists of requirements stemming from these studies differ due to differences between the contexts, the data sources and the periods of investigation, as a result of changes to design practice and technological capabilities.

However, contextually relevant recommendations may provide additional insight when developed over shorter periods of time, such as iterations from year to year. Such attempts use case study data (Horváth 2012; Gopsill 2014; Borsato et al. 2015) from an individual source, which should not be generalised or the lessons may be misinterpreted; if generalised, the outcomes may not be applicable. In addition, no one has attempted to conduct a systematic literature mapping of existing published work to build a larger picture of the factors which influence the success of CSCD projects.

There have been many decision-making frameworks developed over time for the task of selecting technology (Yap and Souder 1993; Gibson and Cohen 2004; Sivunen and Valo 2006; Nicholas and Steyn 2017). Most require an in-depth knowledge of the technology application to be utilised successfully across a range of factors. Often the individual or team in the position to make the decision does not have this knowledge (Sivunen and Valo 2006). When this is the case, it can result in substandard technologies being selected that do not meet the requirements of the work, and in additional technologies or protocols being needed to satisfy those requirements (Sclater 2008). Within student projects, technologies are often chosen based on the popularity of a technology within a team, and not on its merits in supporting design activities or the overall design process (Brisco et al. 2017). When the wrong types of technologies are selected, it can result in conflict between team members (Hinds and Bailey 2003) and inefficient working (Brown 2000).

Germani et al. (2012) describe a method to benchmark co-design tools based on systems architecture. A QFD-based method is detailed and tested with small and medium enterprise (SME) partners and case studies. The method is successful in identifying collaborative dimensions which support the SME projects and in creating a structured approach to understanding collaborative tasks and functionalities through case studies. The outcome was the development of software which, according to the authors, can fully support the collaborative dimensions. This method does not address the use of existing software to meet collaborative requirements, which is typical of smaller start-up companies and student projects, for example (Ferro 2015; Brisco et al. 2018b). In addition, the SME case studies used to form the collaborative dimensions can only be applied to SME projects and not to the wider range of potential CSCD projects which a wider literature search would bring.

The motivation behind this research was an observation of a change in student behaviour. This change was reflected in the use of social software and technologies in student engineering design teams (Gopsill et al. 2015; Mamo et al. 2015; Pektaş 2015) and as an emerging trend within industry (Margaryan et al. 2014; Sarka et al. 2014). This is a result of increased awareness amongst students of the disruptive technologies available, whereas educators have knowledge of previous successful projects using technology to impart to students (Brisco et al. 2018b). Currently, there is no formal method to combine these two factors. If students were empowered with the relevant knowledge, a method to make informed decisions based on the available technologies, and the factors which influence successful CSCD, they would be able to make decisions to the benefit of their collaborative and technological requirements.

This highlights the importance of the method presented within this paper being agile and updatable with respect to current popular technology use. In 2010, the proliferation of Web 2.0 websites and web services reached maturity, as identified by Conole and Alevizou (2010): “We have seen a continual evolution of technologies and how they are used […] and we are only beginning to develop an understanding of what the trajectory of this co-evolution will be”. This coincided with a change in social behaviour driven by the spread of social websites. In 2010, Facebook reached its position as one of the top five websites by user traffic in the US (Metrix 2010; Post 2014). This encouraged the growth of other social network sites and a trend towards mobile social network sites, with smartphones overtaking feature phones in the US in 2010 (Butler 2010).

2 Approach overview

The aim of this research was to develop a method to allow those involved in CSCD projects to evaluate and select suitable technologies based on the existing knowledge of the requirements of successful CSCD. A four-phase process was created as illustrated in Fig. 1.

Fig. 1 Process adopted to create CSCD evaluation method

In this paper, Sect. 3 addresses Step 1 of Fig. 1: the creation of the CSCD evaluation matrix, the identification of the known and missing knowledge required, and the selection of the known knowledge best suited to the matrix. Section 4 (Step 2) identifies the missing knowledge through a systematic literature mapping, the extraction of factors which influence successful CSCD, the categorisation of these factors and validation through workshops. Section 5 (Step 3) addresses the use of these factors within the CSCD evaluation matrix by creating CSCD requirement statements and validating them through a survey of experts. Section 6 (Step 4) pulls all the knowledge found and generated together into a complete CSCD evaluation matrix and discusses the creation of an automated text processing method, requiring the input of three researchers to create dictionaries and coding, to auto-evaluate reports and diaries of CSCD projects. Section 7 discusses the results of using the matrix to evaluate and develop a global design class, and Sect. 8 discusses the potential of the CSCD evaluation matrix in greater detail.

3 Creating the CSCD evaluation matrix

The CSCD evaluation matrix was developed to enable the selection of the most suitable technology to support a CSCD project. The CSCD evaluation matrix allows the technologies which support CSCD to be profiled and then compared against CSCD requirements. This is achieved through the identification of functionalities of each individual technology and evidence that the functionality helps achieve the requirement. The method would also enable a discourse analysis technique to be utilised to automate the population of matrix cells.

A summary of the methodology used in this section is displayed in Fig. 2. An investigation was made into House of Quality (HoQ) literature (Step 1), the purpose of the CSCD evaluation matrix was established and relevant HoQ literature extracted (Step 2), a taxonomy of technology functionalities was found to complete the knowledge requirements of the CSCD evaluation matrix (Step 3) and the CSCD requirement statements were added to the completed CSCD evaluation matrix (Step 4).

Fig. 2 Approach for creating and populating the CSCD matrix

The CSCD evaluation matrix was initially inspired by the HoQ matrix from Quality Function Deployment knowledge (Germani et al. 2012). This approach was selected because of the HoQ’s suitability for competitor and product comparison analysis. There are structural similarities between the HoQ and the CSCD matrix, and the inputs, outputs and data processing are analogous. As the needs of the evaluation matrix developed, certain parts of the HoQ matrix were adapted; for example, there is no need to compare the CSCD requirements against each other, or for perception measurements, towards the aim of this paper.

In the HoQ, customer requirements are compared against technical requirements. Both these aspects need to be represented within the new CSCD evaluation matrix. Customer requirements, in this case, are translated to CSCD requirements, and technical requirements are translated to technology functionality. In addition, there is also a need to define the technology and its available functionalities. This hierarchy is displayed in Fig. 3. Technologies afford functionalities which satisfy requirements.

Fig. 3 Summary of the CSCD evaluation matrix logic in comparison

Technical requirements are satisfied by the functional abilities of a product. In this case, the functional abilities are already established as the functionalities of the technology, which can be used analogously with the technical requirements of the HoQ. To identify what the technology functionalities are, a taxonomy of the functionalities of CSCD technology was required. This was identified and selected from Mittleman et al. (2015), where the core competencies of collaboration technology were appropriately listed. The core competencies are generic descriptions for the functionalities which are available on technologies which support CSCD and this term is used synonymously with technology functionality. Mittleman et al. (2015) split these into three high-level categories: jointly authored pages (shared editors, group dynamic tools, conversation tools, polling tools), streaming media (desktop/application sharing, audio conferencing, video conferencing), and information access tools (shared file repositories, social tagging systems, search engines, syndication tools). The core competencies were extracted, and descriptions inserted into the CSCD evaluation matrix.
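
This taxonomy can be expressed as a simple data structure. The sketch below (Python; the category and competency names are taken from the list above, while the variable names are ours) shows one possible encoding of the 11 functionalities that form one axis of the CSCD evaluation matrix.

```python
# Core competencies of collaboration technology (Mittleman et al. 2015),
# grouped into the three high-level categories described above. The core
# competencies are used synonymously with technology functionalities.
CORE_COMPETENCIES = {
    "jointly authored pages": [
        "shared editors", "group dynamic tools",
        "conversation tools", "polling tools",
    ],
    "streaming media": [
        "desktop/application sharing", "audio conferencing", "video conferencing",
    ],
    "information access tools": [
        "shared file repositories", "social tagging systems",
        "search engines", "syndication tools",
    ],
}

# Flat list of the 11 functionalities used as one axis of the matrix.
FUNCTIONALITIES = [f for group in CORE_COMPETENCIES.values() for f in group]
assert len(FUNCTIONALITIES) == 11
```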

CSCD requirements represent the conditions necessary to support successful CSCD projects. These have been investigated sporadically in the past through case studies, but no unified or co-ordinated effort has been made to systematically define a complete list of requirements from published research. The requirements are therefore unknown and need to be found to complete the matrix.

To compare technology with functionality, and then functionality against CSCD requirement statements, two matrices were created which were linked through the functionalities of the technology. This formed the layout of the CSCD evaluation matrix (Fig. 4): the technology matrix allows a comparison of technology with functionality, whilst the requirements matrix compares functionality with CSCD requirement statements.

Fig. 4 Structure of the CSCD evaluation matrix

An automated method to populate the matrix was conceived to determine which technologies offer which functionalities and, subsequently, how the CSCD requirements are satisfied; this method is discussed in Sect. 6. It utilises existing data from a CSCD project and, through a discourse analysis, analyses the text and sorts it into categories linked to the cells within the CSCD matrix. This enables the automated profiling of technologies for comparison or as a representation of a team’s CSCD abilities.

4 Identifying CSCD success factors

4.1 Methodology for identifying CSCD success factors

The procedure for identifying the factors which influence successful CSCD is described within this section and illustrated in Fig. 5. These factors were collected from the literature using a systematic literature mapping. The list reflects the state-of-the-art of factors which influence success in CSCD projects.

Fig. 5 CSCD success factor identification procedure

The research question was established (Step 1) to focus the mapping, followed by a search procedure (Step 2) to guide a systematic approach to paper search and mapping. An iterative approach of identifying relevant search engines (Step 3), creating the search string (Step 4) and conducting the search (Step 5) was followed until the papers collected consistently met the research question. The literature search was then conducted (Step 6), and papers were downloaded (Step 7). Exclusions were implemented to ensure only relevant papers were included in the systematic literature mapping (Step 8). Papers which featured statements of successful CSCD were identified through a review of the papers (Step 9), and the findings, including the context of each paper, were extracted (Step 10).

4.1.1 Establishing the research question

The research question was created to focus the study on answering one single question and to ensure the exclusion criteria were fair and consistent. The research question established for the systematic literature mapping was “What are the factors which contribute towards successful CSCD in engineering design teams?”

4.1.2 Search procedure

There are many types of search possible to collect and utilise the literature. The type of search used for this project was a systematic literature mapping (Grant and Booth 2009). This was chosen, first, to systematically collect literature based on search terms and, second, to allow the collected literature to be mapped, which enables the future objective of categorising successful CSCD factors. In comparison, a typical systematic literature review is more suited to the examination of the recent and current literature.

Guidelines adapted from Moher et al. (2009) on Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) were used to ensure a systematic collection and recording of literature. PRISMA was developed for conducting systematic reviews in the medical field but has the flexibility to ensure a systematic process for all potential fields including engineering design research (Hay et al. 2017). The PRISMA guidelines were adapted to remove the meta-analysis stage as this is not required for a systematic literature mapping. All other sections including protocol search, eligibility, identification of information sources, search, data processing and exclusions, and synthesis of results were retained.

4.1.3 Identification of relevant search engines

The literature search was conducted using academic search engines in the fields of engineering, technology, computing and education, as seen in Table 1. The search was conducted in June 2017 and spanned journal articles, conference papers and books published between 2010 and 2017. The reason for limiting the systematic literature search to research published from 2010 onwards relates to the motivation of this work: an observable change in student behaviour. Students demonstrated the ability to conduct global design projects using social media tools and a preference towards them. The example used within this paper to demonstrate the method, including the CSCD requirements and the completed CSCD matrix, reflects this decision as it is applied to the Global Design Project class. 2010 was chosen due to the popularity of social network sites, the proliferation of Web 2.0 technologies and mobile device sales, as detailed in Sect. 1.

Table 1 Search engines and their relevance to this study

Literature outside these dates was excluded to ensure factors which influence successful CSCD were based on the use of recent technology with the current functionality and to reduce the number of papers which theorise on how CSCD features and functionalities might be used.

Google Scholar was not used as it does not have the required Boolean search functionality. ISI Web of Knowledge was not included as it did not identify any papers not already identified.

4.1.4 Creation of search string

The search terms were created during a preliminary search as seen within Steps 3–5 of Fig. 5. Terms were collected and tested iteratively over approximately 1 month until relevant papers were consistently collected.

Towards the objective of identifying factors which influence successful CSCD in engineering design teams, the search terms were split into three categories, covering synonyms of factors which influence, successful CSCD and engineering design teams. These categories became technology, field and domain, as represented in Table 2. Factors which influence refers to a technology’s stimulus, so technology was established as a search category. Successful CSCD refers to the field of collaborative design, and this was established as a search category. Finally, engineering design teams refers to the research domain of interest, and this was included as a search category. Synonyms of these were tested using the academic search engines (Sect. 4.1.3) and added to the search string if papers were returned which met the objective.

Table 2 Search terms and categories

Whilst the field and domain categories would identify all papers on the theories within the research area, the technology category was included to meet the motivation of the work: a change in students’ behaviour towards using social software.

A paper was considered relevant to the study if it satisfied a search term in each of the three categories within the body of the text, title, keywords or abstract. The Boolean proximity operators N/1 and W/1 were used to find words near other words irrespective of order and in a specific order, respectively. Searches for engineering N/1 design would return papers containing the phrase engineering design or design engineering, or with a variable such as design manufacture engineering. Searches for computer W/1 supported would only return matches in that order, e.g. computer-supported or computer software supported. Names of software were not included in the search terms as this study does not focus on specific software but on the functionality of technologies which support CSCD. In addition, during a preliminary systematic literature mapping, these terms did not return any additional relevant papers.
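
To illustrate how these operators combine, the sketch below (Python) assembles a search string of the required form. The terms shown are illustrative examples drawn from the text; the actual synonym lists are those given in Table 2.

```python
# Illustrative assembly of a search string: one OR-group per category
# (technology, field, domain), combined with AND so that a paper must match
# at least one term from each category. Proximity operators follow the usage
# described above (N/1: either order, W/1: the order given).
technology = ['"social software"', '"social media"', 'computer W/1 supported']
field = ['collaborative N/1 design', '"collaborative design"']
domain = ['engineering N/1 design', '"design team"']

def or_group(terms):
    # Combine the synonyms within a category with OR.
    return "(" + " OR ".join(terms) + ")"

search_string = " AND ".join(or_group(group) for group in (technology, field, domain))
print(search_string)
```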

These categories defined the scope of the investigation, for example, a search for collaborative design AND engineering design AND social software would identify papers related to the goal of understanding how social software can support engineering designers to conduct collaborative design activities.

4.1.5 Exclusion criteria

Three stages of exclusion were performed in line with PRISMA guidelines:

  1. Non-accessible, non-English, non-peer-reviewed and duplicate papers were excluded for practical reasons. Where the search engines allowed, English language and accessibility were set as search criteria, and each paper was also reviewed manually. A duplicate search was conducted automatically by Mendeley and reviewed manually.

  2. Data collected before 2010 were excluded. This was implemented to ensure the factors which influence successful CSCD were based on modern technological use with current competencies, so that papers published post-2010 only utilised data from 2010 onwards. This was reviewed manually for each paper using Covidence (Cochrane 2017), a technology for systematically reviewing papers. This criterion removed many theories of CSCD systems which are now inaccurate or outdated compared to modern implementations of the software.

  3. Any papers that did not explicitly report factors which influence successful CSCD arising from the use of CSCD technologies by engineering design teams were excluded. Explicit statements were those reporting on the benefits of CSCD technology use, overcoming barriers to CSCD technology use, requirements of CSCD technology use, or opportunities for better CSCD technology use. These were identified, checked and tagged manually in Covidence.

4.2 Results of identifying CSCD success factors

The systematic literature mapping revealed 517 papers which met the search criteria and exclusion criterion 1 (Sect. 4.1.5). Exclusion criteria 2 and 3 aided the reduction of papers to meet the research question: exclusion criterion 2 reduced the number of relevant papers from 517 to 234, and exclusion criterion 3 further reduced the number from 234 to 27.

Within these 27 papers, the factors that influence successful CSCD by engineering design teams were identified. The relatively small number of papers reflects the lack of reporting of successes in this area despite the widespread use of CSCD technologies. Factors which influence successful CSCD were extracted from distinct statements on the use of CSCD technologies by engineering design teams in these papers during the application of exclusion criterion 3. For example, in Gopsill et al. (2013), a table is presented with the requirements of technology to support teams using social media. This table was extracted and added, with other findings, to a table listing all factors from the systematic literature mapping which influence successful CSCD, reported in Table 5 in “Appendix”.

The systematic literature mapping revealed that no single paper included a complete list of CSCD factors; 220 factors exist in total. Many authors have chosen to focus on specific areas, picking and choosing outcomes, rather than revealing a complete list of the factors which influence successful CSCD. For example, Gericke et al. (2010) focussed on data storage and reuse factors, whereas Vyas et al. (2010b) focussed on supporting design activities such as creativity and innovation factors. Other authors such as Xie et al. (2010) presented a range of outcomes which could be categorised as human-to-human factors and technological factors. This investigation has revealed a list of CSCD success factors from peer-reviewed, state-of-the-art published research. However, it is important to identify whether further CSCD requirements exist which are not known or reported in the literature.

Once the factors that influence successful CSCD were identified, they contributed towards the technology evaluation and selection process. This was achieved by categorising the factors to look for similarities across published research and to create CSCD requirements which simplify the success factors into declarations of successful CSCD.

5 Establishing CSCD requirements

5.1 Methodology for establishing CSCD requirements

The list of 220 CSCD factors was verified using experts and those experienced in CSCD to assess the completeness of the list. Factors were then categorised and converted into requirement statements to simplify the analysis within the CSCD evaluation matrix. Finally, these requirement statements were validated with expert opinions. The process of transforming the CSCD factors into requirements for success is illustrated in Fig. 6.

Fig. 6 Methodology for deriving CSCD requirements

Workshops were conducted to investigate the completeness of the list of factors which influence the success of CSCD (Step 1), and the factors were checked iteratively (Step 2). On completion, the 220 factors were categorised into five high-level categories (Step 3) and 19 sub-categories, and the sub-categories were refined (Step 4). Nineteen CSCD requirement statements were created, one to represent the factors within each of the 19 sub-categories (Step 5). A survey was created and distributed to discover whether experts agreed with the 19 CSCD requirement statements (Step 6). Finally, the sub-categories were re-established based on the opinions of experts from the survey feedback (Step 7).

5.1.1 Verifying the list of factors which influence CSCD success

The 220 factors were verified through a series of seven workshops to identify any factors which influence success in CSCD projects not captured through the systematic literature mapping.

Three workshops were held with representatives from academia and industry at the 18th International Conference on Engineering and Product Design Education (E&PDE 2016) with 12 participants, the 21st International Conference on Engineering Design (ICED’17) with 22 participants and at the 15th International Design Conference (Design 2018) with 21 participants. These workshops had a common theme on collaborative design and asked: “What are the challenges in supporting successful collaborative design?” Findings on the use of technology were extracted and compared with the 220 factors.

Four workshops were held with students experienced in CSCD during The Global Design Project (GDP) in 2016 with 18 participants, and in 2017 with 26 participants, and during the Global Studio class in 2016 with 26 participants and 2017 with 28 participants. The student workshops were all on the topic of technology use in global CSCD projects and asked students to reflect on successful collaborative practices. Example outcomes from the workshops are reported in. GDP students were all final year master students within the department of design manufacture and engineering management at the University of Strathclyde. Students of the Global Studio class were all undergraduates within the design school at Loughborough University. The workshops asked participants to share their knowledge on CSCD topics, discuss success in collaboration and current and future barriers to collaboration, and how these barriers might be overcome through research. The knowledge collected from these workshops was used to iteratively develop the coding scheme until 19 distinct sub-categories emerged. All outcomes from the workshops were collated and compared to the list of 220 factors to ensure no factors were missing from the list.

Following the workshops and comparing the outcomes with the list of 220 factors, there were no additional factors which could be added to the list to make it more complete. This demonstrated the success of the systematic literature mapping method in collecting the factors.

One major outcome from the workshops was an awareness that experts wished to discuss the differences and definition of co-ordination, co-operation and collaboration and that many did not have a definition of each. Many academics argued that co-ordination is inherent to co-operation and collaboration and so perhaps it is a sub-category of both. Students and academics argued that co-operation and collaboration are not distinct enough and so they should be contained under the same category or a joint category.

It has been well discussed in the literature that collaboration is more complex than co-operation in that it involves shared risk of failure and opportunities for shared success (Adams 2015). Collaboration is mutually beneficial and requires a common goal somewhere in the process towards a shared outcome, whereas co-operation only requires the sharing of knowledge and resources towards a shared activity (Kvan 2000). Success is not defined by the outcomes of the project, but by the relationship and how well it was managed (Brewer 2015). Perhaps this insight from the workshops explains why there has not been a list of factors which influence the success of CSCD presented before and why the creation of a CSCD evaluation matrix has not previously been possible.

5.1.2 Creating CSCD requirements through categorisation

Each of the 220 factors which influence successful CSCD was assigned a category, and then a sub-category was derived. The categories were based on those reported by Mattessich and Monsey (1992): communication channels, collaborative environment, process and structure, team member characteristics, and resource management.

Sub-categories were iteratively created using a coding scheme based on benefits of CSCD technology use, overcoming barriers to CSCD technology use, requirements of CSCD technology use, or opportunities towards better CSCD technology use. NVivo 11 was used to create this coding scheme and code each factor.

The 5 categories and 19 sub-categories which emerged from the categorisation activity are described as follows:

Communication channels refer to the many ways in which a team can communicate and how they can be supported through technology use. The three areas identified in the literature which influence engineering design teamwork are

  • artefacts, the use of digital representations of physical objects and digital work,

  • feedback, on previous work to influence future development, and

  • social, including networking to reduce interpersonal barriers.

The collaborative environment refers to how collaboration is supported within an organisation, team or group. The two areas identified in the literature which influence engineering design teamwork are

  • access to information, how and where the knowledge and data can be accessed for transparency and ease of use, and

  • corporate structure, for clear hierarchy and procedures.

Process and structure are put in place to ensure systematic practices and minimise loss of data. The three areas identified in the literature which influence engineering design teamwork are

  • decision-making, the ability to share opinions and make informed decisions,

  • knowledge capture, techniques and technologies to create comprehensive data stores, and

  • productivity, to support readiness with the right skills at the right time.

The team member characteristics of a design team have influence over ensuring the right team members are involved with the project. The five areas identified in the literature which influence engineering design teamwork are

  • commonality, consideration of differences in language, culture, social and time zones,

  • motivation, of critical team members to ensure sustained interest in a project,

  • shared understanding, of the problems, concepts and techniques,

  • team co-operation, awareness of work and contribution towards co-construction activities, and

  • trust, of the quality and completeness of the work of others.

Resource management refers to knowledge and skill assignments towards a common goal. The six areas identified in the literature which influence engineering design teamwork are

  • competency, ensuring the best team member completes the appropriate work,

  • co-ordination, of work and team members’ time efficiently,

  • innovation, promoting techniques for creativity and exploration,

  • knowledge management, of all stakeholders and awareness of the whole life of the product,

  • managing the sharing of data refers to how information can be sent to others, where it is hosted and the mechanism to send it, and

  • communication, the different communication methods available through technology, whether a conversation is synchronous or asynchronous to suit the message context, and other factors such as multi-threaded conversations and multi-channel communication.
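
For reference, this coding structure can be summarised compactly. The sketch below (Python; the category and sub-category names are taken from the list above) is one possible encoding of the 5 categories and 19 sub-categories that underpin the requirement statements which follow.

```python
# Five high-level categories (after Mattessich and Monsey 1992) and the 19
# sub-categories that emerged from coding the 220 factors, as described above.
# Each sub-category is summarised as one CSCD requirement statement (R1-R19).
CSCD_CATEGORIES = {
    "communication channels": ["artefacts", "feedback", "social"],
    "collaborative environment": ["access to information", "corporate structure"],
    "process and structure": ["decision-making", "knowledge capture", "productivity"],
    "team member characteristics": [
        "commonality", "motivation", "shared understanding",
        "team co-operation", "trust",
    ],
    "resource management": [
        "competency", "co-ordination", "innovation", "knowledge management",
        "managing the sharing of data", "communication",
    ],
}
assert sum(len(subs) for subs in CSCD_CATEGORIES.values()) == 19
```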

The categorised factors were subsequently summarised into statements which represent CSCD requirements. A total of 19 statements were created, one for each sub-category. An issue arose during the creation of the statements that some factors implied the same outcome, such as avoid miscommunication (Cho and Cho 2014) and avoid uncertain misunderstandings (Luck 2013); in these instances, they were included in the same requirement as they related to the same context. Another example is discuss problems with a common context (Hirlehei and Hunger 2011) and communicate context (Wangsa et al. 2011). In this example, the requirement became R2: Encourage a shared understanding by defining and framing conversations within a common context which makes it easy to understand information, clarify meaning and reduce miscommunications. The full list of CSCD requirements created is as follows.

CSCD technology:

  R1. Supports complexity managing the sharing of data through integration with data storage systems, reduced file compatibility issues and synchronous live document working with automated tracking and versioning to enable co-creation of documents.

  R2. Encourages a shared understanding by defining and framing conversations within a common context which makes it easy to understand information, clarify meaning and reduce miscommunications.

  R3. Encourages co-operation by enabling increased awareness and connectivity to encourage equal participation, support design activities by anticipating needs and opportunities for peer learning.

  R4. Supports knowledge management through the organisation of information and communication, the ability to easily search and retrieve knowledge, and autonomy in the distribution of knowledge.

  R5. Allows for feedback from stakeholders to support reflection on past communication and concepts dependent on the context, knowledge, experience and competency of the stakeholders.

  R6. Allows for social communication which encourages team synergy, knowledge sharing and serendipitous communication by supporting networking and building interpersonal skills.

  R7. Supports knowledge capture through the recording of information, decisions and artefacts to document the design process and contribute to decision-making and reuse of knowledge.

  R8. Supports communication through synchronous and asynchronous multi-threaded and multi-channel software for prompt discussion in a way which supports the context of the message.

  R9. Enables team members to overcome boundaries of access to easily view and edit files when required.

  R10. Encourages the building of trust to support conflict resolution through increased accuracy, clarity and transparency of communication between team members.

  R11. Supports co-ordination through a shared space for organisation of work and communication, easy mechanisms for scheduling meetings and to support the even distribution of work.

  R12. Allows for artefact-mediated communication through high-quality digital representations of physical work and ideas.

  R13. Reduces the barriers of physical proximity, language and time zones, and enables a greater awareness of culture and the global community.

  R14. Supports decision-making through concept ranking functionality, increased opportunities to develop negotiation skills and express opinions.

  R15. Allows for greater productivity through fast, objective-focused communication, organisation of work and a greater quantity of output to promote collaboration readiness, reflection and reduced rework time.

  R16. Encourages innovative thinking through agile systems to support exploration, creativity and quality of outputs.

  R17. Encourages the development of greater competency through increased accessibility of team and non-team skills and experience, reduction of unnecessary information and supporting the completeness of messages.

  R18. Encourages motivation through mechanisms of social incentivisation, positive reinforcement, gamification or encouraging moral decisions to ensure long-sustained interest in the project and, if implemented correctly, can help avoid conflict and support conflict resolution.

  R19. Integrates with company structure through the implementation of procedures, policies and agreements to ensure clear roles and responsibilities, reducing the sense of lack of control and optimising team negotiation cycles.

5.1.3 Validating CSCD requirements

To validate the CSCD requirements, a survey was sent to 94 experts in CSCD. These experts were identified as notable authors through the systematic literature mapping. 24 experts responded to the request by completing the survey. The survey asked whether the expert agreed with each of the CSCD requirements (agree, disagree in full, or disagree in part) and, in addition, how important the requirement was towards success in CSCD (high, medium or low importance) so that the requirements could be ranked from most to least important. Figure 7 is an example of the first question asked; all others follow the same format.

Fig. 7 Question 1 of 19 in the survey of CSCD experts

The survey was e-mailed to experts with a link and instructions on how to complete the survey. Tables 3 and 4 display the results of the questionnaire.

Table 3 Expert assessment of the developed CSCD requirements
Table 4 Expert opinion on importance of CSCD statement

Most experts agreed with the CSCD requirements either in full or in part, as displayed in Table 3. Where disagreement in full or in part was selected, the comments were used to understand why there was disagreement and how the statement might be changed to better represent the views of the experts. One main change suggested by the experts was that the requirements should not focus only on positive outcomes; instead, they should say that CSCD could have these outcomes. The technology category was also split into two, representing the management of shared data, and communication.

Experts ranked the CSCD requirements in order of importance, as displayed in Table 4. One prominent comment was that although some lower-ranked statement categories are very important for collaboration, they are perhaps less important for computer-supported collaboration. This could be because technology overcomes, or more fully supports, these barriers, making the topic of the statement less important.

6 Automatic population of the CSCD evaluation matrix

The 19 CSCD requirements were included in the CSCD evaluation matrix with the 11 technology functionalities as illustrated in Fig. 8. The CSCD evaluation matrix supports an assessment of how well an individual technology satisfies successful CSCD, how multiple technologies compare against each other, and how a combination of technologies can satisfy an organisation’s CSCD requirements by profiling.

Fig. 8 CSCD evaluation matrix

A binary code is used to populate the matrix: if a technology functionality satisfies a CSCD requirement, the entry is marked 1; if they are not related, it is marked 0. The rows and columns are summed to establish the suitability of the functionality towards the CSCD requirements. The sum of the 1s in the rows and columns provides an indication of requirement or functionality fulfilment and is presented in the green summary box. The sum of the 0s in the red summary box represents the lack of requirement or functionality fulfilment for any given technology. A larger score signifies that the functionality satisfies the CSCD requirements to a greater extent and that few changes are required to fully satisfy the factors for successful CSCD. Where a low score exists, there are opportunities for alternative technologies, or a combination of technologies or techniques, to satisfy the CSCD requirements.
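
As a minimal illustration of this scoring scheme, the sketch below (Python) works on a toy 3 × 3 fragment rather than the full 19 × 11 matrix; the requirement–functionality pairings shown are illustrative rather than results from the paper.

```python
# Toy fragment of the requirements matrix: rows are CSCD requirements, columns
# are technology functionalities; 1 means the functionality satisfies the
# requirement for the technology being profiled, 0 means they are not related.
requirements = ["R7 knowledge capture", "R8 communication", "R14 decision-making"]
functionalities = ["conversation tools", "polling tools", "shared file repositories"]

profile = [
    [0, 0, 1],  # R7: satisfied by shared file repositories
    [1, 0, 0],  # R8: satisfied by conversation tools
    [1, 1, 0],  # R14: satisfied by conversation and polling tools
]

# Green summary: count of 1s per row (requirement fulfilment) and per column
# (functionality fulfilment). Red summary: count of 0s (unfulfilled relationships).
requirement_fulfilment = [sum(row) for row in profile]
functionality_fulfilment = [sum(col) for col in zip(*profile)]
requirement_gaps = [row.count(0) for row in profile]

for name, score, gaps in zip(requirements, requirement_fulfilment, requirement_gaps):
    print(f"{name}: fulfilled={score}, gaps={gaps}")
for name, score in zip(functionalities, functionality_fulfilment):
    print(f"{name}: fulfilled={score}")
```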

Two technologies can be evaluated by comparing the cells (the profile) or the total scores for requirement or functionality fulfilment. A team’s technology (summary of all technologies used) can be profiled by including multiple technologies.

The CSCD evaluation matrix contains up to 209 relationships when featuring only one technology, which requires significant comprehension and background knowledge to populate. To complete this manually would be a considerable task, requiring multiple people experienced in the technology to reach a consensus. An automated population method was therefore developed, using a discourse analysis method to take reported information on technology use and filter it to populate the CSCD evaluation matrix. This offers a systematic way of populating the CSCD evaluation matrix. Figure 8 illustrates the population of the CSCD evaluation matrix where Technology 1 contains polling tool functionality which satisfies the CSCD requirement of decision-making (Fig. 9).

Fig. 9 Methodology for the population of CSCD matrix

6.1 Systematic population of the CSCD evaluation matrix

The input data used to populate the matrix included student reports and diaries on the use of technology to meet CSCD requirements, collected from a GDP class over a 3-year period with 34 students in 2015, 45 in 2016 and 25 in 2017; the sample size for this study is therefore 104. The reports were created by the students reflecting on the successes and failures of their projects. A diary was kept by the author recording the actions teams were taking and quotes from students on their technology use.

An automated text analysis system was created to populate the matrix using these data. The text analysis had to interpret the words of a given text and code it appropriately into categories. These categories related to the technology, the functionality of the technology, or the CSCD requirement. Semantic dictionaries were created for each category and sub-category, allowing sentences to be categorised based on the words which build them. All possible synonyms were collected from student reports and included in the dictionaries.

When checking the data, the word or words which contributed to the coding of the sentences were extracted. These extracted words were used to create semantic dictionaries for each category. 19 dictionaries were created for CSCD requirements, 11 for technology functionalities, and 7 for technologies used during the GDP. These technologies were social network site, messenger, video conferencing, cloud document storage, project management groupware, e-mail and collaborative document editor. The completed dictionaries are contained within Table 7 in “Appendix”. A sentence discussing the benefits of Facebook® in supporting communication through conversation would be categorised in technology as “SNS”; in functionality as “conversation tool”; and in CSCD requirement as “communication”.

289 data points were collected. The data were split for coding (35%) and for testing (65%). Due to the controlled nature of the project and the limited nomenclature of the data, 35% was a sufficient coding sample: whilst verifying the final 65% of the data, only two new words were found across the 36 categories, in addition to the approximately 486 found from the 35% coding data. Data coding was completed by three experienced CSCD practitioners to create the semantic dictionaries. Three academics was the minimum required to achieve a representative consensus where disagreements might occur, i.e. if a disagreement occurred then one other person would be able to resolve it. In addition, three data points were required to identify that a connection is true and that confidence is on trend. The academics took sentences involving the use of technology to support collaboration and identified three parts of these sentences: the technology used, the functionality available and the CSCD requirement satisfied. These three parts correspond with the elements of data required to populate the CSCD evaluation matrix, i.e. technology, technology functionality and CSCD requirements. Each academic created their own semantic dictionary based on the categories, and the results from the sample data were compared between researchers, a check known as intercoder reliability.

For example, one GDP team member stated, “Video conferencing facilitated design activities to send and receive documents”. This was coded as video conferencing in technology, group dynamic tools in technology functionality, and complexity managing the sharing of data in CSCD requirements. The evaluation was subjective, which clarified the need for an intercoder method.

To satisfy the intercoder method used, disagreements were resolved by the third member of the coding team. In instances of a difference of opinion between all three, a discussion of the factors was held. The coding team agreed on 87% of the checked data in the first instance; through discussion, the remaining 13% was agreed upon. Disagreements were due to differences in the interpretation of the data. For example, the use of Slack® for multi-channel communication enabling sub-team communication was reported. This was coded by one academic as a conversation tool and by another as a social tagging system. Both can apply, as Slack® is being used for conversations, but communications are being tagged to enable sub-team syndication. Through discussion, the decision was made to code this as tagging, which enables multi-channel communication and was the intent of the sentence in its original context.
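
A minimal sketch of this kind of agreement check is shown below (Python). Simple percent agreement is assumed as the measure behind the 87% figure, and the coded labels are hypothetical.

```python
# Percent agreement between two coders over the shared sample; disagreements
# are passed to the third coder, or to discussion, for resolution as above.
def percent_agreement(coder_a, coder_b):
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100.0 * matches / len(coder_a)

coder_a = ["conversation tools", "polling tools", "social tagging systems"]
coder_b = ["conversation tools", "polling tools", "conversation tools"]

print(f"Agreement: {percent_agreement(coder_a, coder_b):.0f}%")  # 67% on this toy sample
disagreements = [i for i, (a, b) in enumerate(zip(coder_a, coder_b)) if a != b]
# Items listed in `disagreements` would be reviewed by the third coder or discussed.
```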

Following the creation of the semantic dictionaries, the dictionaries were included in an automated text processing method using the RapidMiner Studio software. Text processing enabled the sentences reserved for testing from student reports to be automatically filtered into a category based on the semantic dictionaries. Sentences which were filtered were awarded a score of + 0.1 based on the number of similar words also in the category. A value of 0.1 was selected to make significance calculations simple and follow general practice. The score acts as an automatically generated confidence indicator if the sentence could be categorised multiple ways and the top score is automatically progressed to the data output.

For example, the word messenger could refer to Facebook® Messenger, coded as a social network site; to WhatsApp®, coded as a messenger application; or to the inbuilt messenger technology in video conference technologies. In most cases, the word will refer to messenger technology and not to social network sites or video conferencing. This can be confirmed by looking at other words in the sentence such as app, chat, instant and voice, amongst others included in the semantic dictionary. Where multiple words are detected, the confidence score increases.

Figure 10 illustrates the process of parsing a data set from the perspective of a word within the RapidMiner Studio system. A piece of data is a sentence containing many words. Each word of that sentence moves through the process individually and is joined with its sentence at the end. The score offers confidence of a word towards a category and, in turn, of the sentence towards a category. If a category is matched, the word is tagged with that category and its score is incremented by 0.1 in that category. The scores are added at the end and the highest score is kept, removing all other possible categories. The output is the top-scoring category for technology, technology functionality and CSCD requirement for each sentence or data set.
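
The scoring logic of Fig. 10 can be sketched as a simplified Python analogue of the RapidMiner Studio process. The dictionary entries below are illustrative excerpts, not the full dictionaries reported in Table 7.

```python
from collections import defaultdict

# Each word that matches a category dictionary adds +0.1 to that category's
# score for the sentence; only the top-scoring category per axis is kept.
DICTIONARIES = {
    "technology": {
        "messenger": {"messenger", "whatsapp", "chat", "app", "instant", "voice"},
        "video conferencing": {"video", "conferencing", "call", "webcam"},
    },
    "functionality": {
        "conversation tools": {"chat", "conversation", "discussion", "message"},
        "polling tools": {"poll", "vote", "ranking"},
    },
    "requirement": {
        "communication": {"communicate", "communication", "message", "discussion"},
        "decision-making": {"decision", "decisions", "agree", "vote"},
    },
}

def categorise(sentence):
    words = [w.strip(".,") for w in sentence.lower().split()]
    output = {}
    for axis, categories in DICTIONARIES.items():
        scores = defaultdict(float)
        for category, vocabulary in categories.items():
            for word in words:
                if word in vocabulary:
                    scores[category] += 0.1  # confidence increment per matched word
        # Keep only the highest-scoring category for this axis (None if no match).
        output[axis] = max(scores, key=scores.get) if scores else None
    return output

print(categorise("We used instant messenger chat to discuss and agree decisions"))
```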

Fig. 10 Text processing steps as a flowchart

The output from RapidMiner Studio is a spreadsheet of all data points, their categories and their confidence scores. This spreadsheet is used to populate the matrix, which was automated in Excel. Where a data point has two or more categories, it can populate a cell in the matrix. For example, where the categories of communication and conversation tools are linked and the data show examples of this, the cell connecting both categories is filled.
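
A hypothetical sketch of this final population step is shown below (Python). The actual automation was built in Excel from the RapidMiner output; accumulating confidence per cell is an assumption consistent with the numerical confidence indications discussed in Sect. 7.

```python
# Each categorised data point fills, or reinforces, the matrix cell linking
# its functionality and CSCD requirement categories.
data_points = [
    {"technology": "SNS", "functionality": "conversation tools",
     "requirement": "communication", "confidence": 0.3},
    {"technology": "messenger", "functionality": "conversation tools",
     "requirement": "communication", "confidence": 0.2},
    {"technology": "SNS", "functionality": "polling tools",
     "requirement": "decision-making", "confidence": 0.1},
]

cells = {}  # (functionality, requirement) -> accumulated confidence
for point in data_points:
    key = (point["functionality"], point["requirement"])
    cells[key] = cells.get(key, 0.0) + point["confidence"]

for (functionality, requirement), confidence in sorted(cells.items()):
    print(f"{functionality} x {requirement}: confidence {confidence:.1f}")
```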

7 Results

The use of the CSCD evaluation matrix populated by the GDP data has revealed insights that could help with the development of this class and demonstrates the value of the matrix. The insights are discussed based on the populated CSCD evaluation matrix presented in Fig. 11.

Fig. 11 Success in CSCD matrix with data populated from GDP class

The CSCD evaluation matrix was automatically populated with the output from RapidMiner Studio, illustrating the relationships between the success factors and technology functionality. The nodes within the CSCD evaluation matrix provide a numerical indication of the confidence of the relationship between the two aspects. This illustrates the extent to which the functionality of the technology used within the GDP addressed the CSCD requirements. The CSCD evaluation matrix provides insight for both project managers and computer scientists regarding the barriers in the collaborative process. The CSCD evaluation matrix also supports the identification of whether new or additional technology is required to fill gaps in the CSCD requirements. Within a newly formed team, the CSCD evaluation matrix could be used to select and build a toolkit of integrated technology that meets the CSCD requirements of the team, the project and the processes.

In Fig. 11, it is visually demonstrated that there are gaps in the requirements of building trust, motivating team members, greater productivity, reducing barriers and feedback mechanisms. A team would then investigate why these requirements are not being fulfilled and consider additional technologies which encourage these practices.

Technology that supported conversation contributed towards almost all requirements relating to the success of the teams. This was expected as without conversation it would be difficult to perform any collaborative task. All technologies that were used within the GDP had some form of conversation tool. This is the only column which is fully populated which demonstrates the importance of conversation in all aspects of CSCD.

Mamo et al. (2015) posited that four areas of technology functionality would be most populated: conversation tools, video conferencing, group dynamic tools and shared file repositories. It was, therefore, expected that these functionalities would fully satisfy the requirements for success in CSCD. The CSCD evaluation matrix has revealed this to be true for conversation tools, while the remaining three are among the more heavily populated columns. This is because the technologies offering these functionalities satisfy many of the factors which influence CSCD.

Group dynamic tools supported many requirements for successful CSCD, several of which were unexpected. For example, company structure was not expected to relate to any requirement, but reviewing the populated matrix shows that company structure relates to enabling all members of a team, regardless of position, to contribute to discussion and decisions. Competency was not expected to be a contributor to group dynamic tools either, as competency reflects a human’s ability to perform a task and not a computer’s ability. However, reviewing the populated matrix shows that competency relates to the functionalities that technologies offer which enable tasks to be completed in a simplified way. This relates to the availability of team members for a task and their suitability. Technology can support this by offering co-ordination and profiling functionalities.

Social communication was not expected to arise in group dynamic tools, as this functionality typically supports project work and professional communication. However, within design engineering, social interaction can contribute to innovative thinking and new ideas.

There were no data relating knowledge capture to the functionality of video conferencing. This could be because knowledge capture within video conferencing technologies is difficult to implement. Typically, video conferences can be recorded, but recordings do not provide easy summaries to work from, and creating summaries in text form requires manual work. Transcriptions with text search could be one alternative, but none of the technologies used within the GDP offered this ability. This could be an opportunity for improving success in CSCD for design engineering if the technology used enables simple capture of work in a usable format. It also demonstrates the ability of the CSCD evaluation matrix to show why factors for success in CSCD were not met and how they could be met in future technology development.

Syndication tools such as notification systems are often linked with building trust between team members (Carroll et al. 2003; O’Riordan 2014). Within the GDP data, this was not observed. The CSCD evaluation matrix indicates that this functionality is more closely linked with management, in being able to co-ordinate people.

It is important to note that this method and its results cannot be generalised to all engineering projects from these data alone. The data used are from an educational environment and deal with distributed engineering design. Many of the technologies considered have search functionality; however, the use of search was not reported in the GDP data. In addition, audio conferencing was not utilised within the GDP in favour of video conferencing, although audio conferencing may be preferable in other projects. What can be inferred from this is a trend within the GDP of an expectation for functionalities such as search and a subsequent lack of reporting on their use. The results can be considered complete in terms of the use of technology within the GDP 2015–2017; for examples outside this context, further data collection would be needed.

8 Discussion

This method fills a gap within the GDP and the wider engineering design community, where there is a lack of knowledge on CSCD tools to help teams select suitable technologies. To fill this gap, steps were taken to identify the factors which influence success in CSCD and to create the matrix, and an automated evaluation method was developed; together these demonstrate the value of the CSCD evaluation matrix and the method used to create it.

The literature categorisation and the formation of the CSCD requirements are a key contribution and component of the reported approach, as this has not previously been achieved in a systematic way using published literature. One respondent to the survey on CSCD requirement statements described the CSCD requirements as “CSCD dogma which once formalised through verification and publication will have a great impact on teaching”. Students can benefit from education in both the core knowledge of their field and the skills needed to collaborate with others. The CSCD requirements form the basis for a framework representing the knowledge needed to build new techniques, tools and approaches. For engineering design as a collaborative process, knowledge of the CSCD requirements is important for learning and building skills.

The creation of the CSCD requirements has implications for the development of new technologies to ensure the requirements are fulfilled, but this would require new partnerships between the developers of these technologies and the knowledge holders in industry and academia. These types of relationships are prevalent in the development of CAD systems but less so for collaborative communication-based technologies.

If a major technology change were to occur, offering new functionality or new collaboration procedures, the CSCD evaluation matrix could be updated. The information within this paper on the development of the method offers the ability to recreate the knowledge required or adapt it for alternative purposes. The method is robust in that it can be repeated, but it should not be generalised. If the method is to be applied to other situations, such as CSCD success in industry, industrial data must be used. If the method is to consider co-operation in greater detail, for example, then the requirements must be changed to consider co-operation success factors. One way to ensure continued development could be the utilisation of a larger database to track trends and update the matrix over time, rather than focusing on a single university class; the partnerships needed for this could be created or further developed across institutions.

Due to a lack of training and disinterest by workers, attempts to integrate novel technology often fail (Garcia-Perez and Ayres 2010). The adoption of this technology is largely visible in team-member-to-team-member communication and rarely in business-to-business communication, with the exception of e-mail as an industry standard. A paradigm shift in industry on the magnitude of previous technology changes, such as the adoption of e-mail, would require rapid large-scale adoption (Ellison et al. 2007), which has not yet happened. However, in line with current trends in social communication, the use of CSCD can be expected to increase in the future as students familiar with these technologies make their way into the workforce.

The next generation of workers choose to use social media platforms and social network sites for personal communication, and their usage reveals that they are active and engaged on these platforms. If this affinity can transfer to technologies used in industry, then these technologies might also see higher levels of engagement (Hank 2012). The CSCD evaluation matrix gives future workers the ability to understand and evaluate their technology based on the requirements of the project.

Considering future workers, it is not unreasonable for an employer to assume that newer technologies such as messengers and social network sites might become basic technological skills alongside e-mail and word processors. If future workers are not able to use these in a professional setting, they may not have the opportunity to build these skills in appropriate ways.

This study used student co-operation in its development and student data for evaluation. The approach is yet to be validated for industrial applications. However, other studies of this nature have taken the same approach, such as Gopsill (2014), who utilised student projects to develop an e-mail analysis technique before applying it to industrial applications. The benefit of this approach is the validation of the method in a familiar environment before industry partners become involved. The disadvantage is that major changes may be required to the text analysis, but with time this can easily be updated.

Whilst the systematic literature review was completed in a manner that aimed to capture all relevant published work, there is still a need to regularly check and expand the potential of the CSCD evaluation matrix. This study cannot draw inferences about co-located or blended environments where students collaborate whilst using technology, because the CSCD requirements may not be complete for these situations. If a systematic literature mapping of engineering teamwork were completed, it might offer a wider impact on generalised engineering work and the CSCD requirements.

In the future, the CSCD evaluation matrix could be used to compare technology use over the years in terms of learning implications. This would be beneficial in mapping changes in learning practice and predicting trends. Some expected outcomes, such as e-mail use and audio-conferencing technologies, were not seen within the GDP data. This is because the students of the GDP did not use e-mail after the first week and utilised video conferencing in place of audio conferencing. However, in industrial contexts, it is likely that these types of technology would be more prominent. This relates to the change in student behaviour and a change in the skills of students. If students are restricted to older technologies with fewer functionalities, then perhaps their skills are not being fully utilised. Partnerships with other similar classes would offer the data required to build a more complete picture of successful CSCD in student engineering design teams.

The creation of an evaluation method of this kind has not been attempted before. It was built in a systematic way and also offers a systematic method of evaluation. The knowledge utilised comes from peer-reviewed literature comparing a range of different experiences to form a large list of factors which influence success. The list of CSCD requirements created considers not only technological factors but also human interaction and social factors to give a more complete picture of CSCD evaluation. The method of working with the matrix builds on existing knowledge of HoQ matrix formulation for simplified understanding, and the automated population of cells means that no prior expert knowledge is required to utilise the matrix. This also benefits new users such as students who are being introduced to evaluation methods. Together or individually, these offer new avenues for further research.

The next steps for this project are to develop an educational programme on the impact of technology selection based on CSCD requirements. The CSCD evaluation matrix will play an important part in this class and will act as a framework for students’ decision-making. Feedback from the class will be used to evaluate how successful the CSCD evaluation matrix is as an evaluation tool, and the data from the class will be used to evaluate how complete the matrix is for this purpose. The class is envisioned at further education level for university students and as a CPD course for industry. It would be worthwhile in the future to compare the outcomes of the CSCD evaluation matrix with other similar classes and within an industrial setting.

9 Conclusion

This paper describes the development of a method to evaluate and select suitable technologies based on the CSCD requirements and functionality which influence success in CSCD. The main contribution of this paper is the description, verification and validation of a method to evaluate technologies through a CSCD evaluation matrix. The matrix offers the ability to compare technologies and their functionalities against CSCD requirements. In addition to this main outcome, the factors which contribute towards successful CSCD and the impact of the technology used within a global design class were identified. The impact of this method and the data collected is the potential to change the way students and academics think about the technology they use and how it impacts the success of their CSCD projects. These outcomes suggest how the method could be used in an industrial environment to improve the process of selecting technology.

A four-phase process was devised to create the CSCD evaluation matrix for the evaluation of technology. This involved a systematic literature mapping to identify the 220 factors which influence success in CSCD, verified through workshops to ensure a robust list; categorisation into the 19 CSCD requirements, validated through a survey of experts; and the creation of a discourse analysis method, developed within an intercoder structure, to process design reports and automatically populate the CSCD evaluation matrix.

The development of the CSCD evaluation matrix introduces opportunities for students and educators towards the development of better learning experiences. With the development of classes on the evaluation method, students can be educated on the impact of technology selection in collaborative projects and, with this knowledge and the CSCD matrix, can evaluate technologies to select the most suitable. This does not exclude the potential for education within industry, which could be developed through future studies. If the gaps in the CSCD requirements are better known, then they can be filled and CSCD can be better supported. The details within this paper explain how the CSCD matrix can be updated or augmented for new purposes. The method developed is not generalisable but is a first step in building a better understanding of technology evaluation in CSCD. In addition, this paper discusses the impact of the findings as a framework to direct future research on the topic of CSCD, and the potential of the CSCD evaluation matrix to track and predict trends through further research and partnerships.