
1 Introduction

Desktop 3D printers have been widely adopted in design studios, research laboratories, and teaching classrooms. With different printing mechanisms (e.g., jet printing and laser sintering) and materials (e.g., plastic and metal), 3D printers can accelerate the conventional prototyping process and support iterative design and evaluation. Importantly, 3D printers give amateur designers easy access to making and testing ideas in a flexible and low-cost manner [1]. Professional product designers and engineers are familiar with 3D printing functions, as the concept is essentially derived from conventional ink-based printing. As 3D printers continue to proliferate, designers increasingly have to assemble 3D printers on their own, which is likely to cause difficulties for most product designers, especially amateurs. Designers have to deal with numerous components with insufficient instructions. As such, assembly becomes a process that can only be handled well with good knowledge of mechanisms and engineering, which is essential for identifying components and fitting each piece into the right position in a specific sequence [2].

To support assembly for both professional and amateur product designers, conventional approaches use paper and electronic manuals that explain the key assembly steps. Other approaches include short videos and virtual reality (and augmented reality) demonstrations. In particular, mobile phones are frequently used as a general-purpose platform for assembly tutorials, e.g., mobile phone-based tutorials for complicated energy management system configurations [2] and mobile phone cameras scanning QR codes to track components for assembly [3]. In this way, designers can learn how to assemble a 3D printer at any time and from anywhere [4, 5]. In addition, mobile phones can integrate attachments such as external sensors and widgets to augment their functions and interaction experience [6]. Despite these known benefits, mobile phone camera scanning is an interruptive approach, as users must stop their ongoing tasks to proceed with the scanning and assembly tutorials [7].

Given the inevitable physical contact with the target components during assembly, we were inspired to implement a mobile phone-based device. The device integrates an electromagnetic signal-based object detection technique to detect foreign objects upon physical contact with the 3D printer components, and a dedicated mobile phone application accordingly displays text and animation tutorials for the current component. The device and application together underwent a usability evaluation. The paper's main contributions are: (a) the design of a mobile phone-based device that can detect components upon physical touch; and (b) an approach that integrates the object recognition process into the essential assembly operations. Taken together, we propose this system for personalised tutorials on 3D printer assembly for both professional and amateur designers.

2 Related Work

2.1 Personalised Tutorials for 3D Printer Assembly

3D printers have the potential to enable iterative prototyping and evaluation on an individual basis [8] and are easy to use without extra training. In contrast, most designers, experienced and amateur alike, are likely to encounter technical and cognitive difficulties when assembling a 3D printer. Due to the growing number and variety of 3D printers, conventional paper and electronic instructions appear insufficient to cater to individual designers' assembly needs.

Recently, researchers have developed more accessible manuals by taking advantage of mobile phones and their sensors, e.g., the integrated camera. Users can scan QR codes attached to the components to obtain relevant instructions, or they can take a picture of the target component and search for related information. RFID has also been adopted to mark up and recognise components [9]. These approaches help designers understand (and manage) the components but may disrupt the ongoing assembly process [10]. Take mobile phone camera scanning as an example: the user has to stop the current assembly activity, start the camera to retrieve the related tutorial, and then resume the previous task. This disruption impairs the overall naturalness of the assembly procedure and results in unnecessary distraction and inefficiency. Furthermore, it raises the bar for personalised learning due to the prior experience required of users [11]. Handheld object recognition systems such as [12] provide a useful example of integrating object detection with the user's viewing of objects, achieving greater naturalness.

The technical development and applications of 3D printers span diverse domains, and technology-enhanced learning [13] and personalisation optimisation [14] have been explored in the context of online education and distance learning. Overall, however, little attention has been paid to how to design and deliver personalised tutorials to support 3D printer assembly, and more assembly-specific results are needed.

2.2 Electromagnetic Signal-Based Object Detection

To achieve natural object detection, researchers have presented many techniques; one recent line of work is electromagnetic signal-based object detection, which utilises electromagnetic (EM) signals to detect the target object upon physical contact [15, 16]. EM detection has advantages over other object detection techniques, as it is marker-less, can be hosted on a mobile phone, and is spontaneously integrated with touching the object [17].

Electronic devices produce significant levels of electromagnetic (EM) emissions due to circuitry operations [15]. Given governmental regulations, e.g. the FCC's mandatory standards on devices' electromagnetic noise, these unintentional emissions can be received and transformed into electromagnetic signals, albeit interwoven with background noise. A few categories of non-electronic objects, such as metallic objects, also have unique electromagnetic signatures [18]. This shows the possibility of detecting mundane objects by simply touching them.

To read the EM emissions and extract particular patterns from the signals, device-mediated sensing and body-communication sensing are used [19]. The former instruments devices, such as electronic wires, whereas the latter instruments the user, whose conductive body acts as an antenna. For example, device-mediated sensing has typically been built into an instrument that requires direct contact with the target object, as in [17]. In contrast, body-communication sensing reads EM signals through the human body upon physical contact with the object, as in [15, 18]. Clearly, body-communication sensing has advantages in dual-hand interaction tasks, but it also influences the detected EM signals. Figure 1 shows the EM signal spectrums of the same object scanned with the two methods.

Fig. 1. Electromagnetic spectrums of two electromagnetic signal detection methods with the same object (left: device-mediated sensing, right: body-communication sensing)

Extracting unique EM signal patterns from the noise involves several steps. The first is building EM signal scanner hardware: a basic RFID reader and low-cost, open-source software-defined radio (SDR) modules with minor circuitry modifications have been adopted to read low-frequency EM signals [18]. The second is visualising the EM signal spectrum, as shown in Fig. 1. The third is extracting EM signal patterns, which consists of two processes: setting a baseline threshold to filter off unwanted signals, and then applying statistical analysis to the EM signal profiles; details of a similar process can be found in [18]. Finally, the EM patterns are categorised by object type: the extracted patterns serve as input for object classification, with each unique pattern assigned to a specific object. Once these processes are completed, the objects' EM patterns are parsed and stored in a database for later object detection. The hardware and processing algorithms, which were partially implemented and tested in our previous work [20], have been evaluated in multiple studies with reliable and robust performance [17].
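As an illustration of the thresholding and statistical extraction step, the following minimal Python sketch assumes the spectrum is available as a NumPy array of per-bin amplitudes in dB; the function name, window size, and threshold values are placeholders rather than the exact parameters used in [18] or in our system.

```python
import numpy as np

def extract_pattern(spectrum_db, freqs, baseline_db=-40.0, prominence_db=6.0):
    """Illustrative peak extraction: keep bins that exceed a baseline threshold
    and stand out from their local neighbourhood, then summarise the profile."""
    peaks = []
    for i in range(1, len(spectrum_db) - 1):
        if spectrum_db[i] <= baseline_db:          # filter off the noise floor
            continue
        local = spectrum_db[max(0, i - 5): i + 6]  # small window around the bin
        if spectrum_db[i] == local.max() and spectrum_db[i] - np.median(local) > prominence_db:
            peaks.append((float(freqs[i]), float(spectrum_db[i])))
    # Simple profile statistics can be stored alongside the peaks for later matching.
    stats = {"mean": float(np.mean(spectrum_db)), "std": float(np.std(spectrum_db))}
    return peaks, stats
```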

2.3 Mobile Phones for Personalised Tutorials

Mobile phones are an ideal platform for personalised tutorials, as they are ubiquitous personal devices and compatible with external devices such as electromagnetic sensors. A huge number of mobile phones are in use, and a large portion of this use is for personal learning, including language learning tutorials [21] and distance courses [22]. Due to their mobility and the features accumulated over successive generations of development, e.g. cameras and gyroscopes, mobile phones have prompted designers and researchers to take a pedagogical view of supporting tutorial applications in versatile scenarios such as 3D printer assembly and system setup [1]. Existing studies place a strong emphasis on mobile phone-based system design as well as system effectiveness [22].

Mobile phones have become an effective tutorial tool and are increasingly incorporating object recognition capabilities, e.g. EM signal detection, to build tangible systems [23]. Mobile phone-inhabited object recognition systems open up a wide range of applications and novel interaction forms, even with mundane objects. For example, mobile phones support augmented virtual assembly of architectural structures [24].

Object recognition and other mobile phone sensing, e.g. location awareness, enable learning beyond language and enhance users' perception of the relationships between physical objects and spaces [25]. In particular, optical scanning attachments can be coupled with mobile phones to provide intuitive and efficient interaction. In this regard, previous studies successfully designed an electromagnetic interference system, EMI Spy, to support proxemic interaction [16].

3 Method

The preceding review raises two main requirements for personalised tutorials for 3D printer assembly: (a) accurate object detection that is naturally and seamlessly integrated into the essential assembly operations, and (b) natural interaction that supports a user's personalised access to the components without requiring prior experience. To meet these requirements, we prototyped a mobile phone-inhabited two-part system. This section describes the details.

3.1 System Design

The device consists of two main parts: a hardware dongle that connects to the hosting mobile phone to capture electromagnetic signals (Fig. 2), and a software application that detects incoming signals and displays the corresponding component tutorials. The dongle only harvests electromagnetic signals, while the application interprets the different signal patterns as components and displays component tutorials prepared beforehand.

Fig. 2. The original receiver and inner circuit board (left), the modified receiver embedded in the case (mid), and the final attachment coupled with a mobile phone (right)

Hardware Dongle

As a device for personal use, it needs to be compact and energy efficient. We chose a low-cost RTL2832U USB software-defined radio receiver as the detector of electromagnetic signals (Fig. 2, left). In the original receiver, two capacitors on the circuit board were removed and replaced with a wideband transformer so that the receiver can detect low-band electromagnetic signals. The modification extended the receiver's detection range from 1 Hz up to the onboard oscillator's maximum frequency of 28.8 MHz. Technical details of the modification can be found in previous studies, e.g. [15, 18].

We removed the original receiver's outer case and embedded the inner circuit board in a 3D printed dongle (Fig. 2, mid). The receiver's full-sized USB interface was replaced with a smaller USB-C interface so that it can connect directly to the mobile phone. To support electromagnetic signal conduction, the receiver's antenna was welded to a large piece of copper foil (Fig. 2, right). As such, the dongle can detect signals upon direct physical contact with an object; alternatively, it can capture signals when a user holds the antenna in one hand and touches the target object with the other, allowing the signals to travel through the conductive human body.
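As an illustration only, raw samples can be read from an RTL2832U-based receiver through the open-source pyrtlsdr bindings roughly as follows; the sample rate and tuning values are placeholders, and the usable frequency range depends on the hardware modification described above.

```python
from rtlsdr import RtlSdr  # open-source Python bindings for RTL2832U receivers

sdr = RtlSdr()
sdr.sample_rate = 2.048e6   # placeholder sample rate for illustration
sdr.center_freq = 28.8e6    # placeholder tuning; usable range depends on the modified front end
sdr.gain = 'auto'

# Read a block of raw samples for later spectrum analysis.
samples = sdr.read_samples(256 * 1024)
sdr.close()
```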

Object Signal Process

The signals captured by the dongle were processed as follows. The signals were trimmed to 0 Hz–1 MHz by shifting a symmetric bandpass window. Then the signals went through a fast Fourier transform to obtain frequency-domain values. In this step, the background noise sampled from the preceding 5 s was subtracted from the raw signals. Figure 3 shows an example of the signal spectrums of two 3D printer components, indicating that the results are replicable and reliable across different environments.
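A minimal sketch of this frequency-domain step, assuming the captured samples are held in a NumPy array and the background noise spectrum has already been averaged over the preceding 5 s; the names and window choice are illustrative.

```python
import numpy as np

def spectrum_db(samples, fs):
    """Magnitude spectrum in dB, trimmed to the 0 Hz-1 MHz band (illustrative)."""
    window = np.hanning(len(samples))
    spec = np.fft.fft(samples * window)
    freqs = np.fft.fftfreq(len(samples), d=1.0 / fs)
    mag_db = 20 * np.log10(np.abs(spec) + 1e-12)
    keep = (freqs >= 0) & (freqs <= 1e6)   # keep only the band of interest
    return freqs[keep], mag_db[keep]

def subtract_background(current_db, background_db):
    """Subtract the noise spectrum averaged over the preceding 5 s."""
    return np.clip(current_db - background_db, 0.0, None)
```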

Fig. 3. The shifted signals of two different components

The signal patterns consist of two main features: the amplitudes (the peaks in Fig. 3) and the frequencies (the positions of the peaks in Fig. 3). The patterns are stored in a local relational database. Before inserting a new signal pattern into the database, we compared the detected pattern with the existing ones using rudimentary mathematical techniques, including the least squares method. Unknown signal patterns were passed through several levels of checks before an outcome was given. The sum of squared errors between the unknown signal peaks and the catalogued ones was computed, and entries whose errors exceeded an acceptable limit (set and calibrated beforehand) were rejected directly. A further level of the algorithm checked whether the errors for the majority of the peak values fell within a tighter limit than the earlier one. All of the elementary signal processing and identification was implemented in Python.
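The multi-level rejection logic can be sketched roughly as follows; the error limits and the representation of peaks (here, vectors of peak amplitudes at catalogued frequencies) are illustrative placeholders rather than the calibrated values used in the system.

```python
import numpy as np

def match_component(unknown_peaks, catalogue, sse_limit=50.0, peak_err_limit=2.0):
    """Two-level matching sketch: reject by total squared error first, then
    require the majority of individual peak errors to fall within a tighter limit."""
    best_name, best_sse = None, float("inf")
    for name, stored_peaks in catalogue.items():
        if len(stored_peaks) != len(unknown_peaks):
            continue
        errors = np.asarray(unknown_peaks, dtype=float) - np.asarray(stored_peaks, dtype=float)
        sse = float(np.sum(errors ** 2))
        if sse > sse_limit:                      # level 1: coarse rejection
            continue
        if np.mean(np.abs(errors) <= peak_err_limit) < 0.5:   # level 2: majority check
            continue
        if sse < best_sse:
            best_name, best_sse = name, sse
    return best_name                              # None means "unknown component"
```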

Signal Pattern Sampling

We took samples of electromagnetic signal patterns from 16 different components of a 3D printer (model: MakerBot Replicator 2) and recorded these samples in the local database. In addition, we extracted electromagnetic signal patterns from another 12 components of laptops and desktop monitors and saved the results in the database.

The signal patterns in the database were assessed in three different environments (a 3D printing laboratory, a meeting room, and an office) to ensure their reliability and validity. We then tested all signal patterns in other locations (a corridor hall, a coffee bar, and a modelling laboratory) and achieved 97.6% (82 out of 84) detection accuracy.

Application and User Interfaces

We designed an Android mobile phone application to process signals and display text and animation tutorials (Fig. 4a). In addition to the tutorial for the overall 3D printer (Fig. 4b), each component in the database had a dedicated tutorial page (Fig. 4c). The application also included a search page that provided extensive learning content on 3D printing (Fig. 4d).

Fig. 4. Mobile phone application tutorial pages (a: splash page with recently detected tutorials at the bottom, b: overall 3D printer tutorial page, c: detailed tutorial page of a component, d: search page for extensive learning)

The application ran two concurrent threads. The detection thread monitored incoming electromagnetic signal streams and detected components. When no matching result was found, the application displayed the search page; otherwise, it displayed the tutorial for the detected component.
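The application itself is an Android app; purely to illustrate the two-thread structure, the following Python sketch separates a detection loop from a display loop via a shared queue, with read_spectrum, match_component, and the display callbacks as hypothetical placeholders.

```python
import queue
import threading
import time

results = queue.Queue()

def detection_loop(read_spectrum, match_component):
    """Detection thread: monitor incoming EM signal streams and queue matches."""
    while True:
        spectrum = read_spectrum()                 # placeholder signal source
        results.put(match_component(spectrum))     # None when nothing matches
        time.sleep(0.2)

def display_loop(show_tutorial, show_search_page):
    """UI thread: show the component tutorial, or the search page on no match."""
    while True:
        component = results.get()
        if component is None:
            show_search_page()
        else:
            show_tutorial(component)

# threading.Thread(target=detection_loop, args=(read_fn, match_fn), daemon=True).start()
```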

We illustrate a scenario of procedural device use in Fig. 5. First, the user attaches the dongle to the mobile phone and launches the application, which automatically initialises the electromagnetic signal receiver and the local database. Second, the user touches the copper foil antenna of the dongle while holding the mobile phone and dongle in one hand. This ensures quality transmission of electromagnetic signals when the other hand makes physical contact with a component (see the blue dashed line in Fig. 5 indicating the signal flow).

Fig. 5. Procedural flows of electromagnetic signal detection and processing in the personalised tutorial delivery process (Color figure online)

3.2 System Evaluation

The device, both the dongle and the application, was rigorously engineered and tested in multiple locations and scenarios. Despite the high accuracy in signal sampling, little was known about the device's usability in practical use. We therefore conducted an empirical study to understand how the device affected student designers' 3D printer assembly.

Participants

We recruited 15 volunteer students from the department of design to assemble a 3D printer with the device. Of these students, 3 were Master's students in digital media and 1 was a doctoral researcher in human-computer interaction (Mage = 21.7, SDage = 0.61). Prior to the formal study, all participants were given a 2 min video introduction followed by a 5 min practice session. To circumvent potential learning effects, the practice used a laptop and a 24 in. desktop monitor.

After signing the consent form, the participants completed a self-report on their previous experience with 3D printer use and assembly. Based on their experience levels, the participants were divided into two groups: 11 participants with little experience formed group A, and the other 4 participants formed group B. Group A used the System Usability Scale (SUS), as it is an intuitive approach to quantitatively measuring usability [26]. In contrast, group B adopted heuristic evaluation following Nielsen's heuristic process, because it allowed the participants to explore usability more flexibly [27].

Procedures

For group A, the task was to use the device to assemble the 3D printer that had previously been used for signal sampling. There were 20 components to assemble in total; 16 of these were prepared in the database and the other 4 were not. The task lasted 30 min, regardless of completeness. After the task, the participants completed the 5-point Likert usability questionnaire.

For group B, the experimental settings and task procedures were the same as group A's. After the task, the participants rated the device against a sheet of heuristics.

Results

The results on the device's usability were both quantitative and qualitative. Despite the relatively small sample size, the SUS results indicated above-average overall usability (the benchmark average score is 68, and this study's score was 80). In addition, the mean score across all questions was 3.8 (SD = 0.69), indicating clearly positive feedback on the device's usability in supporting 3D printer assembly (Fig. 6).
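For reference, a SUS score is conventionally computed from the ten 5-point items as in the sketch below; the example responses are purely illustrative and are not the study data.

```python
def sus_score(responses):
    """Standard SUS scoring: odd items contribute (r - 1), even items (5 - r),
    and the summed contributions are scaled by 2.5 to give a 0-100 score."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Illustrative responses only: sus_score([4, 2, 4, 2, 5, 1, 4, 2, 4, 2]) -> 80.0
```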

Fig. 6. Results of the ten questions in the System Usability Scale evaluation

In group B, the participants' heuristic evaluation revealed two important factors that contributed to good usability: the compact hardware design with intuitive dongle coupling to the mobile phone was useful to users, and, importantly, the approach of detecting target components through physical contact was highly novel to the participants. Overall, participants in group B reported no severe usability issues but commented on a number of user interface details, e.g. font sizes.

In addition, the participants' overall efficiency was qualitatively examined. Based on the experimenter's observations of the participants' device use, participants assembled components quickly once they obtained the corresponding tutorials; otherwise, they took longer to work out which position the current component fitted into.

4 Discussion

The novelty of the device design lies in the approach of integrating the component detection process into the essential operations of 3D printer assembly. It differs from conventional personalised design methods in several respects.

Firstly, existing object detection systems, e.g. barcode and QR code scanners, may disrupt ongoing operations, leading to task interruptions and lower engagement. Our approach takes advantage of an inevitable activity: the user's hands must touch the components during assembly. In response, the approach employs a detection technique, electromagnetic signal detection, to merge the detection process into the core operations. With respect to task flow, this is more natural than the conventional approaches.

Secondly, the approach, and its implementation in the prototype system, not only offers greater naturalness but also has potential benefits for productivity. The task performance results indicate that the participants' interaction efficiency was relatively high, as they saved object scanning time by following an intuitive process very similar to the natural 'seeing-doing' loop.

Thirdly, the approach generalises to other mobile device-based applications. For example, the object detection process could be integrated with visual inspection activities for workers on a parcel picking production line. Finally, on the grounds of mobile learning, especially its mobility and ubiquity, the approach also supports personalised tutorial delivery, as users can touch any component of interest and learn related information about device assembly.

As mentioned, the device currently serves as a proof of concept and works reliably with a set of components, mostly laptops, monitors, and 3D printers. Possible improvements include more rigorous signal processing, with improved noise reduction algorithms followed by a classifier such as a Support Vector Machine (SVM), which may also improve reliability. Alternatively, the signal can be further analysed to find minima and maxima as well as the relationships between these key features, such as the gradients between points. Increasing the number of attributes used to classify an object should lead to greater accuracy, although this may come at the expense of processing speed, since each additional processing step incurs additional time in the overall process.
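As a rough sketch of this suggested direction, peak-derived features could be fed to an SVM classifier, for example via scikit-learn; the feature matrix and labels below are placeholders and not part of the current system.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X: one row of peak-derived features per sampled spectrum; y: component labels.
X = np.random.rand(84, 8)                    # placeholder feature matrix
y = np.random.randint(0, 16, size=84)        # placeholder component labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X, y)
predicted = clf.predict(X[:1])               # classify a new spectrum's features
```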

5 Conclusion

This paper presents the design of a mobile phone-inhabited system that supplies personalised tutorials for 3D printer assembly. The detailed procedures, as well as the specifications of the system design, are provided. Furthermore, the study adds an empirical evaluation of the system's usability, which reveals its potential influence on users' learning of device assembly. The results show that the system design, which integrates object detection within essential assembling operations, is effective. Moreover, generalisable implications for personalised system design for learning purposes are discussed.