Moving with the flow: an automatic tour of unsteady flow fields

  • Regular Paper
  • Published in: Journal of Visualization

Abstract

We present a novel framework that creates an automatic tour of unsteady flow fields for exploring internal flow features. Our solution first identifies critical flow regions for time steps and computes their temporal correspondence. We then extract skeletons from critical regions for feature orientation and pathline placement. The key part of our algorithm determines which critical region to focus on at each time step and derives the region traversal order over time using a combination of energy minimization and dynamic programming strategies. After that, we create candidate viewpoints based on the construction of a simplified mesh enclosing each focal region and select the best viewpoints using a viewpoint quality measure. Finally, we design a spatiotemporal tour that efficiently traverses focal regions over time. We demonstrate our algorithm with several unsteady flow data sets and perform a user study and an expert evaluation to confirm the benefits of including internal viewpoints in the design.


References

  • Bai Z, Yang R, Zhou Z, Tao Y, Lin H (2016) Topology aware view path design for time-varying volume data. J Vis 19(4):797–809

  • Bordoloi UD, Shen HW (2005) View selection for volume rendering. In: Proceedings of IEEE visualization conference, pp 487–494

  • Buatois L, Caumon G, Lévy B (2009) Concurrent number cruncher—a GPU implementation of a general sparse linear solver. Int J Parallel Emergent Distrib Syst 24(3):205–223

  • Campbell MJ, Julious SA, Altman DG (1995) Estimating sample sizes for binary, ordered categorical, and continuous outcomes in two group comparisons. Br Med J 311(7031):1145–1148

  • Chandler J, Obermaier H, Joy KI (2015) Interpolation-based pathline tracing in particle-based flow visualization. IEEE Trans Vis Comput Graph 21(1):68–80

  • Fritz CO, Morris PE, Richler JJ (2012) Effect size estimates: current use, calculations, and interpretation. J Exp Psychol Gen 141(1):2–18

  • Gagvani N, Silver D (1999) Parameter-controlled volume thinning. Graph Models Image Process 61(3):149–164

  • Hsu WH, Zhang Y, Ma KL (2013) A multi-criteria approach to camera motion design for volume data animation. IEEE Trans Vis Comput Graph 19(12):2792–2801

  • Janusonis S (2009) Comparing two small samples with an unstable, treatment-independent baseline. J Neurosci Methods 179(2):173–178

  • Ji G, Shen HW (2006) Dynamic view selection for time-varying volumes. IEEE Trans Vis Comput Graph 12(5):1109–1116

  • Lane DA (1993) Visualization of time-dependent flow fields. In: Proceedings of IEEE conference on visualization, pp 32–38

  • Lee TY, Mishchenko O, Shen HW, Crawfis R (2011) View point evaluation and streamline filtering for flow visualization. In: Proceedings of IEEE pacific visualization symposium, pp 83–90

  • Lorensen WE, Cline HE (1987) Marching cubes: a high resolution 3D surface construction algorithm. In: Proceedings of ACM SIGGRAPH conference, pp 163–169

  • Ma J, Wang C, Shene CK (2013) Coherent view-dependent streamline selection for importance-driven flow visualization. In: Proceedings of IS&T/SPIE conference on visualization and data analysis, pp 865407:1–865407:15

  • Ma J, Walker J, Wang C, Kuhl SA, Shene CK (2014) FlowTour: an automatic guide for exploring internal flow features. In: Proceedings of IEEE pacific visualization symposium, pp 25–32

  • Marchesin S, Chen CK, Ho C, Ma KL (2010) View-dependent streamlines for 3D vector fields. IEEE Trans Vis Comput Graph 16(6):1578–1586

  • McLoughlin T, Laramee RS, Peikert R, Post FH, Chen M (2010) Over two decades of integration-based, geometric flow visualization. Comput Graph Forum 29(6):1807–1829

  • McLoughlin T, Jones MW, Laramee RS, Malki R, Masters I, Hansen CD (2013) Similarity measures for enhancing interactive streamline seeding. IEEE Trans Vis Comput Graph 19(8):1342–1353

  • Meuschke M, Engelke W, Beuing O, Preim B, Lawonn K (2017) Automatic viewpoint selection for exploration of time-dependent cerebral aneurysm data. In: Bildverarbeitung für die Medizin. Springer, Berlin, Heidelberg, pp 352–357

  • Moberts B, Vilanova A, van Wijk JJ (2005) Evaluation of fiber clustering methods for diffusion tensor imaging. In: Proceedings of IEEE visualization conference, pp 65–72

  • Post FH, Vrolijk B, Hauser H, Laramee RS, Doleisch H (2003) The state of the art in flow visualisation: feature extraction and tracking. Comput Graph Forum 22(4):775–792

  • Sadlo F, Rigazzi A, Peikert R (2011) Time-dependent visualization of Lagrangian coherent structures by grid advection. In: Topological methods in data analysis and visualization. Springer, Berlin, Heidelberg, pp 151–165

  • Silver D, Wang X (1997) Tracking and visualizing turbulent 3D features. IEEE Trans Vis Comput Graph 3(2):129–141

  • Silver D, Wang X (1998) Tracking scalar features in unstructured data sets. In: Proceedings of IEEE visualization conference, pp 79–86

  • Spencer B, Laramee RS, Chen G, Zhang E (2009) Evenly spaced streamlines for surfaces. Comput Graph Forum 28(6):1618–1631

  • Steinman DA (2000) Simulated pathline visualization of computed periodic blood flow patterns. J Biomech 33(5):623–628

  • Takahashi S, Fujishiro I, Takeshima Y, Nishita T (2005) A feature-driven approach to locating optimal viewpoints for volume visualization. In: Proceedings of IEEE visualization conference, pp 495–502

  • Tao J, Ma J, Wang C, Shene CK (2013) A unified approach to streamline selection and viewpoint selection for 3D flow visualization. IEEE Trans Vis Comput Graph 19(3):393–406

  • Viola I, Feixas M, Sbert M, Gröller ME (2006) Importance-driven focus of attention. IEEE Trans Vis Comput Graph 12(5):933–940

  • Xu L, Lee TY, Shen HW (2010) An information-theoretic framework for flow visualization. IEEE Trans Vis Comput Graph 16(6):1216–1224

  • Yu H, Wang C, Shene CK, Chen JH (2012) Hierarchical streamline bundles. IEEE Trans Vis Comput Graph 18(8):1353–1367


Acknowledgements

This research was supported in part by the U.S. National Science Foundation through Grants IIS-1456763, IIS-1455886, and DUE-1833129.

Author information

Corresponding author

Correspondence to Jun Tao.


Appendix

Fig. 9

The temporal correspondence of critical regions extracted from the hurricane data set. The horizontal axis represents the time step; each red line represents the lifespan of one critical region

Critical region identification

1.1 Critical region detection

Given the input unsteady flow field, we adopt a 4D moving window (\(5\times 5\times 5\times 5\)) centered at each voxel and calculate its entropy value by evaluating the variation of vector directions in the local window. Since calculating the entropy value of one voxel is independent of another, we compute the entropy field using CUDA on the GPU.
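As a concrete illustration of the entropy measure, the sketch below computes the Shannon entropy of quantized vector directions in one local window. For simplicity it uses 2D vectors and 8 angular bins, whereas the paper evaluates 3D directions over a \(5\times 5\times 5\times 5\) window; treat the dimensionality and bin count as illustrative assumptions, not the authors' exact setup.

```python
import numpy as np

def direction_entropy(vectors, bins=8):
    """Shannon entropy (bits) of vector directions in one local window.

    `vectors` is an (N, 2) array of flow vectors; directions are quantized
    into `bins` angular bins. A uniform flow yields 0, while highly varying
    directions approach log2(bins).
    """
    angles = np.arctan2(vectors[:, 1], vectors[:, 0])         # in [-pi, pi]
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    p = hist / hist.sum()
    p = p[p > 0]                                              # avoid log(0)
    return float(-(p * np.log2(p)).sum())
```

Because each window is evaluated independently, the same computation maps directly onto one GPU thread per voxel, as described above.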

A critical region at each time step is defined as a subvolume containing rich flow features, such as the neighborhoods of critical points. Entropy values in a critical region are normally larger than in a non-critical region because vector directions in the critical region have a higher degree of variation. Therefore, we detect critical regions by applying a region growing algorithm. At each time step, we group all connected voxels whose entropy values exceed a given entropy threshold \(\delta _e\) into critical regions; multiple critical regions can be identified per time step. We discard critical regions whose sizes fall below a given threshold. To speed up the computation, we implement critical region detection using CUDA on the GPU. Specifically, each GPU thread is responsible for one voxel in the volume and checks whether the entropy value of that voxel exceeds \(\delta _{e}\). If so, the voxel is marked as a critical voxel; it is additionally marked as a boundary critical voxel if at least one of its neighboring voxels is not critical. Each critical voxel is then assigned a unique ID. Next, for a critical voxel v with ID \(v_{\mathrm{ID}}\), we check each neighboring critical voxel's ID \(n_{\mathrm{ID}}\) and set \(n_{\mathrm{ID}} = v_{\mathrm{ID}}\) if \(n_{\mathrm{ID}} > v_{\mathrm{ID}}\). This update is applied to all critical voxels in parallel, so after one round some voxels' IDs decrease. We iterate this operation until no voxel's ID changes, at which point all voxels sharing the same ID form a critical region.
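The parallel ID-minimization scheme can be prototyped serially; the sketch below captures the iterative label-propagation idea on a CPU with NumPy and face-connected neighbors, and is not the authors' CUDA implementation.

```python
import numpy as np

def label_critical_regions(entropy, delta_e):
    """Label connected critical regions by iterative ID minimization.

    Every voxel with entropy > delta_e starts with a unique ID (its flat
    index). Each round, every critical voxel adopts the smallest ID among
    its face-connected critical neighbors, until no ID changes; voxels in
    the same region converge to one shared ID. Non-critical voxels get -1.
    """
    critical = entropy > delta_e
    ids = np.where(critical,
                   np.arange(entropy.size).reshape(entropy.shape), -1)
    changed = True
    while changed:
        changed = False
        for axis in range(entropy.ndim):
            for shift in (1, -1):
                neigh = np.roll(ids, shift, axis=axis)
                edge = [slice(None)] * entropy.ndim
                edge[axis] = 0 if shift == 1 else -1
                neigh[tuple(edge)] = -1            # undo the wrap-around
                take = critical & (neigh >= 0) & (neigh < ids)
                if take.any():
                    ids[take] = neigh[take]
                    changed = True
    return ids
```

Each roll-and-compare pass corresponds to one round of the parallel ID exchange; convergence is reached when a full sweep changes no ID.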

1.2 Temporal correspondence computation

For an unsteady flow field, a critical region at one time step could span several time steps and overlap with regions in the neighboring time steps. To identify such temporal correspondences, we detect all matching regions between consecutive time steps by computing their overlap rates (Silver and Wang 1998). We point out that for fast-moving structures, alternative solutions such as optical flow or mass transport minimization should be used. In our scenario, two regions are matched when their overlap rate is larger than a given region overlap rate threshold \(\delta _o\). In this way, we can construct an overlap table for every pair of neighboring time steps t and \(t+1\), indicating the matching relations (i.e., continuation, bifurcation, amalgamation, dissipation, and creation) among critical regions.
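A minimal sketch of the overlap test between two labeled time steps is shown below. It measures overlap as intersection over union of the two voxel sets, which is one plausible formulation; the paper follows Silver and Wang's definition, so treat this exact rate as an assumption.

```python
import numpy as np

def overlap_table(labels_t, labels_t1, delta_o=0.1):
    """Return (region_a, region_b, rate) for matched regions across two
    consecutive time steps.

    labels_t / labels_t1 are integer label volumes (non-critical voxels
    labeled -1). Two regions match when their overlap rate, computed here
    as intersection-over-union, exceeds delta_o.
    """
    matches = []
    for a in np.unique(labels_t[labels_t >= 0]):
        mask_a = labels_t == a
        for b in np.unique(labels_t1[labels_t1 >= 0]):
            mask_b = labels_t1 == b
            inter = np.count_nonzero(mask_a & mask_b)
            union = np.count_nonzero(mask_a | mask_b)
            rate = inter / union
            if rate > delta_o:
                matches.append((int(a), int(b), rate))
    return matches
```

From these tuples the matching relations can be read off: one-to-one pairs give continuation, one-to-many gives bifurcation, many-to-one gives amalgamation, and unmatched regions indicate dissipation or creation.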

Based on these overlap tables, we further apply the feature tracking algorithm developed by Silver and Wang (1997) to build the temporal correspondence among critical regions. This algorithm examines the overlap table of every two consecutive time steps in the order of bifurcation, continuation, amalgamation, dissipation, and creation. Once a region has been categorized into one correspondence, it is excluded from subsequent examination. Finally, based on their temporal correspondence, we merge regions with the continuation correspondence into a new critical region. We also identify a region's parents or children if there is a bifurcation or amalgamation correspondence. Figure 9 shows an example of the temporal correspondence of critical regions.

1.3 Region skeleton extraction

To identify the shape of each detected critical region, we construct its skeleton by applying the volume thinning algorithm presented by Gagvani and Silver (1999). For each voxel, the algorithm first computes the distance transform (DT) value, which is the minimum of the distances from the current voxel to all boundary voxels. We identify a voxel as a skeleton point if its DT value is larger than the average of its neighboring voxels' DT values plus a predefined thinning parameter \(\delta _t\). After obtaining all skeleton points for each critical region, we employ a minimum spanning tree (MST) algorithm to connect them into the skeleton. We also define the major axis of a skeleton as the line connecting the two endpoints of the skeleton with the longest Euclidean distance between them. The major direction of the skeleton is the vector pointing from the endpoint with the lower y value to the other endpoint. In practice, for a critical region spanning several time steps, we extract its 3D skeleton at each time step separately.
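The skeleton-point test can be sketched with a distance transform and a neighborhood mean. The SciPy-based snippet below is an illustrative rendering of the criterion above (DT value exceeding the neighbors' average DT plus \(\delta_t\)), not the authors' code, and it omits the MST connection step.

```python
import numpy as np
from scipy import ndimage

def skeleton_points(region_mask, delta_t=0.5):
    """Mark skeleton points of a 3D region given as a boolean mask.

    A voxel is a skeleton point when its distance-transform value exceeds
    the mean DT of its 26 neighbors plus the thinning parameter delta_t
    (the Gagvani-Silver criterion described above).
    """
    dt = ndimage.distance_transform_edt(region_mask)
    kernel = np.ones((3, 3, 3))
    kernel[1, 1, 1] = 0                      # exclude the center voxel
    neigh_mean = ndimage.convolve(dt, kernel / kernel.sum(),
                                  mode='constant')
    return region_mask & (dt > neigh_mean + delta_t)
```

The resulting points would then be connected with an MST (e.g., `scipy.sparse.csgraph.minimum_spanning_tree` over pairwise distances) to form the skeleton, from which the major axis and major direction follow.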

Fig. 10

a, b Pathlines traced from random and skeleton-based seeding, respectively, over all time steps (4000 lines each). c, d Pathlines traced from random and skeleton-based seeding, respectively, from the first three time steps (300 lines each). Note that skeleton-based seeding only applies to the focal region, while the remaining regions are still randomly seeded. Pathlines are colored from blue through gray to red, corresponding to low, medium, and high velocity magnitudes

Fig. 11

Two sequences of focal region ordering based on two different size weight \(\lambda _S\) values. The first row (a–c) shows the focal region (highlighted in red) sequence over three consecutive time steps with the default \(\lambda _S\) value. The second row (d–f) shows a different sequence with a larger \(\lambda _S\) value favoring regions of larger size

Fig. 12

Two sequences of focal region ordering based on two different temporal correspondence weight \(\gamma _{\eta }\) values. The first row (a–c) shows the focal region (highlighted in red) sequence over three consecutive time steps with the default \(\gamma _{\eta }\) value; the focal region in the last time step (c) is the child of the region selected in the previous time step (b). The second row (d–f) shows a different sequence with a larger \(\gamma _{\eta }\) value, which prefers a region that is not the child of the previously selected region

1.4 Skeleton-based seeding

To trace pathlines, we apply a skeleton-based seeding strategy that evenly places seeds along the skeletons extracted from critical regions. Our goal is twofold: each focal region at every time step should have enough pathlets to represent itself for clear highlighting, and the entire flow field should have approximately the same number of pathlets at each time step. To achieve this, we place two types of seeds: region seeds and random seeds. First, we place region seeds along the skeleton of each focal region detected at every time step (refer to Sect. 4 in the paper) and ensure that the number of seeds for each region is proportional to its size. Furthermore, to ensure that the traced pathlets capture each focal region, we start tracing region seeds a few time steps before the region comes into focus. For random seeds, we keep track of the number of pathlets at each time step. This number could decrease as time evolves, since pathlets may leave the domain boundary or get absorbed in the vicinity of a critical point (such as a sink). If the number drops below a given threshold \(N_p\), we add more random seeds, keeping the overall pathline count relatively stable at each time step. In Fig. 10, we compare skeleton-based seeding against random seeding. It is evident from (a) and (b) that skeleton-based seeding generates more pathlines from interesting flow regions; the same conclusion can be drawn when comparing (c) and (d). From the regions highlighted with red boundaries, we can see that skeleton-based seeding favors regions with more intricate flow patterns while placing fewer pathlines in the bottom region, where the flow is almost straight.
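The seeding policy above can be summarized in a short sketch. The function below is a hypothetical illustration (its name and parameters are not from the paper): it allocates region seeds along skeletons in proportion to region size and tops up the total with random seeds inside the domain.

```python
import numpy as np

def place_seeds(skeletons, sizes, n_region, n_total, lo, hi, rng=None):
    """Skeleton-based seed placement (illustrative sketch).

    skeletons: list of (k_i, 3) arrays of skeleton points per focal region
    sizes:     region sizes (voxel counts), controlling each seed share
    n_region:  total number of region seeds to distribute
    n_total:   target total number of seeds for the time step
    lo, hi:    domain bounds for the random filler seeds
    """
    rng = rng or np.random.default_rng()
    seeds = []
    for skel, size in zip(skeletons, sizes):
        # seeds proportional to region size, sampled evenly along the skeleton
        k = max(1, round(n_region * size / sum(sizes)))
        idx = np.linspace(0, len(skel) - 1, k).astype(int)
        seeds.append(np.asarray(skel)[idx])
    # replenish with random seeds so each time step keeps ~n_total pathlets
    n_rand = max(0, n_total - sum(len(s) for s in seeds))
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    seeds.append(lo + rng.random((n_rand, 3)) * (hi - lo))
    return np.vstack(seeds)
```

In the full pipeline the random-seed count would be recomputed per time step as pathlets exit the domain or are absorbed near sinks.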

Table 2 Parameter settings for all data sets

Parameter influence

We briefly discuss the effects of the parameters and how we set their values. We categorize the parameters into three sets based on which stage of the algorithm they are applied to:

  • Region extraction parameters. This set of parameters controls region extraction based on the entropy field and the temporal correspondence of the extracted regions. The entropy threshold \(\delta _e\) allows users to control the extracted region size at different levels. The thinning parameter \(\delta _t\) controls the point density of the region's skeleton. The overlap rate threshold \(\delta _o\) determines whether two regions overlap and therefore controls the final temporal correspondence of regions.

  • Region traversal-order determination parameters. This set of parameters determines the final scores of spatiotemporal regions in the linear system. They include the factor weights (e.g., \(\lambda _{S}\) for the intrinsic property region size \(S_{r,t}\), and \(\gamma _{\nu }\) for the traversal frequency constraint \(O_{\nu }\) in the linear system), the time window size \(W_s\), and the maximum number of backtracking time windows \(L_b\) used in the normalized traversal frequency computation. Changing these values varies the influence of the corresponding factors (e.g., the region size or the region time span) in the linear system and affects the final region scores. Since the traversal order is determined from the region scores, manipulating these parameters ultimately impacts the region traversal order.

  • Viewpoint creation parameters. This set of parameters controls the properties of the final viewpoints (e.g., position, look-at direction, and viewpoint score). The mesh decimation factor \(\delta _s\) determines the number of vertices (viewpoints) for the simplified mesh. The offset threshold \(\delta _l\) determines the distance from the created viewpoints to the focal region. \(\lambda _1\), \(\lambda _2\), and \(\lambda _3\) control the weights of different factors in the viewpoint score computation. The thresholds \(\delta _{\alpha }\) and \(\delta _d\) are used to control the final quality of the selected viewpoints.

Table 2 shows all parameters for each data set. We point out that although multiple parameters exist in our algorithm, users are not required to adjust them for each data set. Rather, the motivation for providing such parameters is to offer advanced users the capability to explore the flow field based on their own requirements. In practice, most of the parameters in our case studies remain the same across all data sets; we refer to these as data-independent parameters. In contrast, data-dependent parameters should be adjusted based on the given data set. There are only a few data-dependent parameters, e.g., \(W_s\) (depending on the total number of time steps) and \(\delta _d\) (depending on the dimensions of the data set).

Data-independent parameters are specifically designed for advanced users to control the final exploration. In Fig. 11, we give an example of how the size weight \(\lambda _S\) in the linear system controls the final region-order selection for the hurricane data set. The three images in the first row show the focal regions (highlighted in red) of three consecutive time steps based on the original value (\(\lambda _S=1\)) used in our experiment, and the three images in the second row show the focal regions for the same time steps using a larger value (\(\lambda _S=5\)). As shown, when the size factor is emphasized in the linear system, our algorithm prefers to select regions of larger size. Figure 12 demonstrates the influence of the temporal correspondence weight \(\gamma _{\eta }\) on the final region traversal order for the solar plume data set. A larger value makes the linear system prefer a region that is not the child of the previously selected region. The first row highlights the focal regions in red for three consecutive time steps. With a small value (\(\gamma _{\eta }=2\)), the system picks the child of the previously selected region in the last time step. In contrast, a different region is selected at the same time step if a larger value (\(\gamma _{\eta }=5\)) is set, as shown in the second row.

User study

We conducted a between-subjects user study to evaluate the effectiveness of our method. We used a design of 2 conditions (our view tour and a baseline view tour) \(\times\) 2 tasks (answering questions and identifying critical regions). We recruited 14 users and separated them into two groups: one group of seven users evaluated our view tours, and the other group of seven users evaluated the baseline view tours. Each user was paid $10 for participating in the study. All users were graduate students from different departments (computer science, mechanical engineering, physics) of a university.

1.1 Baseline view tour

For the baseline view tour, we first generate a set of viewpoints whose positions are randomly picked both inside and outside of the volume. External viewpoints are kept close enough to the volume's center to ensure a clear observation of the flow field. The look-at center of each viewpoint is also randomly created, but constrained to lie inside the volume so that the viewpoints still focus on the flow field rather than the empty space outside. Next, we connect all these viewpoints such that both the Euclidean distance between viewpoints and the angle change along the path are minimized. Finally, we interpolate a B-spline curve passing through all the viewpoints to generate the tour path. If the total length or the angle change of the baseline view tour differs substantially from that of our method, we regenerate the path by replacing some viewpoints. Since the viewpoints for the baseline view tour are not selected intentionally for observing any particular flow pattern, we expect that the baseline view tour cannot guarantee a comprehensive exploration of internal flow features.
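One simple way to realize the joint distance-and-angle minimization is a greedy nearest-neighbor ordering with a turning-angle penalty. The sketch below is an illustrative stand-in (the exact solver and the angle weight are assumptions, not the paper's method); the resulting order would then be interpolated with a B-spline, e.g. via `scipy.interpolate.splprep`.

```python
import numpy as np

def order_viewpoints(points, angle_weight=1.0):
    """Greedily order 3D viewpoints, trading off leg length against the
    turning angle relative to the previous leg (illustrative sketch)."""
    pts = np.asarray(points, float)
    order = [0]
    remaining = set(range(1, len(pts)))
    prev_dir = None
    while remaining:
        cur = pts[order[-1]]
        best, best_cost = None, np.inf
        for j in remaining:
            step = pts[j] - cur
            d = float(np.linalg.norm(step))
            cost = d
            if prev_dir is not None and d > 0:
                # penalize sharp turns relative to the previous direction
                cos = float(np.clip(np.dot(prev_dir, step / d), -1.0, 1.0))
                cost += angle_weight * float(np.arccos(cos))
            if cost < best_cost:
                best, best_cost = j, cost
        step = pts[best] - cur
        d = float(np.linalg.norm(step))
        if d > 0:
            prev_dir = step / d
        order.append(best)
        remaining.remove(best)
    return order
```

For the baseline tour, the input points would be the randomly generated viewpoints; the angle penalty keeps the camera path from zigzagging.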

1.2 Experimental procedure

Before the experiment started, users were briefly introduced to the basic characteristics of unsteady flow fields, the types of critical regions they needed to recognize, and the tasks they would be expected to perform in the experiment. After the briefing, users could ask questions about the flow fields or the experiment itself to avoid possible misunderstandings. Then the main study commenced, consisting of tasks for three data sets. For the first data set, a short demonstration of how to use the program was given. During the experiment, users were only allowed to ask questions for clarification about the meaning of specific questions or how to use the program.

The procedure for each data set was as follows. First, users were shown an animation of the complete tour of the data set without stopping or pausing; the speed of the animation could be adjusted if desired. Second, users were given a limited time to answer several multiple-choice questions about the flow field and to identify critical regions. They could revisit any part of the tour using a slider or replay the animation; this function was helpful for answering questions and was required for identifying critical regions. Finally, users were asked to answer several post-experiment questions for subjective feedback and suggestions for improvement. Users completed all three data sets in one sitting. The entire experiment took approximately an hour for each user, including the initial paperwork, briefing, and post-experiment questionnaire.

Fig. 13

a The average proportion of correct answers to the multiple-choice questions. b The average number of critical regions detected

1.3 Results and discussion

We present the results of this study in four aspects: user correctness on multiple-choice questions, ratings of subjective questions, the proportion of critical regions correctly identified, and the post-experiment feedback. Because user performance varied widely by data set, each data set was analyzed individually, comparing user performance between our tour and the baseline tour. We used Student's t-test to analyze statistical significance between the conditions with a significance level \(\alpha = 0.05\). Although there are objections to using the t-test on small samples (seven in our case), several articles (Campbell et al. 1995; Janusonis 2009; Fritz et al. 2012) argued that there are no objections in principle to using a t-test on a small sample, even though the statistical power may not be high.
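As a sketch of this analysis, a two-sample t-test at \(\alpha = 0.05\) can be run with SciPy. The scores below are synthetic placeholders, not the study's actual data; they only illustrate the test procedure.

```python
import numpy as np
from scipy import stats

# Hypothetical per-user correctness rates (7 users per group) -- NOT the
# study's data; they merely illustrate the significance test used above.
ours = np.array([0.90, 0.80, 1.00, 0.85, 0.90, 0.95, 0.80])
base = np.array([0.60, 0.50, 0.70, 0.55, 0.65, 0.60, 0.50])

# Two-sided independent two-sample t-test
t_stat, p_value = stats.ttest_ind(ours, base)
significant = p_value < 0.05
```

The same call would be repeated per data set and per measure (correctness, ratings, identified regions), mirroring the per-data-set analysis above.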

Multiple-choice questions Each data set was analyzed individually by comparing users' average proportion of correct answers under the two methods. The hypothesis is that there is a significant difference in the answer correctness rate between our method and the baseline method. The average correctness rates of all users for the two methods are given in Fig. 13a. A t-test shows that our method performed better than the baseline on the hurricane and solar plume data sets, and the differences are statistically significant with \(p \ll 0.001\) and \(p = 0.045\), respectively. For the supernova data set, although our method also received a higher average correctness rate than the baseline method (0.63 vs. 0.54), the difference is not statistically significant.

Subjective questions There are two subjective questions for each data set, asking about the effectiveness of finding critical regions and of identifying the global flow pattern. The subjective questions were also analyzed individually for each data set. We quantized the answers by setting 1.0 for "Strongly Agree", 0.75 for "Agree", 0.5 for "Neutral", 0.25 for "Disagree", and 0.0 for "Strongly Disagree". The hypothesis is that there is a significant difference in the rating score between our method and the baseline method. For the hurricane data set, our method received much higher average ratings than the baseline for both questions (0.93 vs. 0.64 and 0.79 vs. 0.61), but only the first question shows a significant difference, with \(p = 0.039\). Our method also received higher ratings than the baseline for the solar plume data set, with average ratings of 0.75 vs. 0.57 and 0.68 vs. 0.61, though no significant difference is shown. For the supernova data set, both methods received similar scores.
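The quantization above maps directly to a small lookup table; a minimal sketch (the function name is ours, not the paper's):

```python
# Quantization of Likert responses as described above.
LIKERT = {"Strongly Agree": 1.0, "Agree": 0.75, "Neutral": 0.5,
          "Disagree": 0.25, "Strongly Disagree": 0.0}

def mean_rating(responses):
    """Average quantized score for a list of Likert responses."""
    return sum(LIKERT[r] for r in responses) / len(responses)
```

The per-condition averages reported above would be computed by applying this to each group's responses.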

Critical region identification As with the multiple-choice questions, each data set was analyzed individually for critical region identification by comparing how many critical regions the users correctly identified. The average number of critical regions detected for both methods is given in Fig. 13b. The hypothesis is that there is a significant difference in the average number of identified critical regions between our method and the baseline method. The supernova data set yields the same result for both groups, since it contains one enormous critical region that is easily identifiable by all users. For the other two data sets, users performed better with our method: the average number of detected critical regions is 3.00 vs. 1.57 for the hurricane data set and 5.43 vs. 3.29 for the solar plume data set. Furthermore, the performance difference for the solar plume data set is significant with \(p = 0.04\), whereas the same conclusion cannot be drawn for the hurricane data set because its p value is slightly above the significance level.

Post-experiment feedback We received the following major user comments from the post-experiment questionnaire. First, for both our method and the baseline, most users suggested that the camera stay longer at each of the selected viewpoints to allow better observation and less visual jumping. Second, the background pathlines should be thinner to reduce distraction. Third, some users suggested providing a global view of the data set before the experiment, since the tour path may focus on internal patterns rather than the global shape of the flow field. Users also had differing opinions on the two methods. For the baseline method, users gave a neutral rating for detecting flow features; one of them claimed that the look-at direction sometimes provided an unreasonable line of sight for observing the flow field. By contrast, most users agreed that the tour generated by our method could easily help them identify critical regions. For other aspects, such as animation speed and pathlet size, both groups were satisfied with the current configurations.

Discussion In summary, for the multiple-choice questions, users in general performed better with our method than with the baseline method, indicating that our view tour indeed provides users with more information about the underlying flow field. For the subjective questions, most users agreed that our method helps them detect critical regions and identify the global flow pattern better than the baseline method for the hurricane and solar plume data sets, though for the latter the difference was not significant. The supernova data set received almost the same rating for both tours due to its simple flow feature, which can be easily observed with either method. In terms of identifying critical regions, users performed much better with our method than with the baseline method, except for the supernova data set, which contains only a single obvious sink-like region at the center. In the post-experiment feedback, apart from suggestions on the animation and interface, most users favored our method over the baseline.

We conclude that our view tour indeed helps users better identify and observe the internal flow pattern in unsteady flow fields, especially for hidden or occluded features that only exist for a short period of time. Therefore, our view tour could complement the traditional overview tour by providing users with a more comprehensive exploration experience for large and complex flow fields.


Cite this article

Ma, J., Tao, J., Wang, C. et al. Moving with the flow: an automatic tour of unsteady flow fields. J Vis 22, 1125–1144 (2019). https://doi.org/10.1007/s12650-019-00592-3
