Current journal: IEEE Transactions on Visualization and Computer Graphics
  • Assigning Rated Items to Locations in Non-List Display Layouts
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2018-09-13
    Simone Santini

    One of the most common ways in which results are displayed by an information retrieval system is in the form of a list, in which the most relevant results appear in the first positions. Today's large screens, however, allow one to create more complex displays of results, especially in cases such as image retrieval, in which each unit returned is fairly compact. For these layouts the simple list model is no longer valid, since the relations between the slots in which the results are placed do not form a sequence, that is, the relation among them is no longer that of a total order. In this paper we model these layouts as partial orders and show that a “stalwart display” property (a layout in which items’ relevance is unambiguously conveyed by their display position) can be obtained only in the case of lists. For the other layouts, we define two classes of representation functions: “safe” functions (which display results without adding spurious structure) and “rich” functions (which do not drop any structure from the result set), as well as an algorithm to optimally display fully ordered result sets in arbitrary display layouts.
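The list-versus-partial-order distinction above can be illustrated with a toy comparability check (the slot sets and relations below are hypothetical examples, not the paper's formalism): a layout admits an unambiguous, list-like reading exactly when its slot relation is a total order, i.e., every pair of slots is comparable.

```python
from itertools import combinations

def is_total_order(slots, leq):
    """True when every pair of slots is comparable under `leq`,
    i.e., the layout's slot relation forms a list (a total order)."""
    return all(leq(a, b) or leq(b, a) for a, b in combinations(slots, 2))

# A 3-slot list: slot i is at least as prominent as slot j when i <= j.
print(is_total_order([0, 1, 2], lambda a, b: a <= b))

# A 2x2 grid ordered only by "above" and "left of": the two diagonal
# slots are incomparable, so the relation is merely a partial order.
grid_leq = lambda a, b: a[0] <= b[0] and a[1] <= b[1]
print(is_total_order([(0, 0), (0, 1), (1, 0), (1, 1)], grid_leq))
```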

    Updated: 2020-01-04
  • Equalizer 2.0–Convergence of a Parallel Rendering Framework
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2018-09-17
    Stefan Eilemann; David Steiner; Renato Pajarola

    Developing real-world graphics applications that leverage multiple GPUs and computers for interactive 3D rendering is a complex task. It requires expertise in distributed systems and parallel rendering in addition to the application domain itself. We present a mature parallel rendering framework which provides a large set of features, algorithms and system integration for a wide range of real-world research and industry applications. Using the Equalizer parallel rendering framework, we show how a wide set of generic algorithms can be integrated in the framework to help application scalability and development in many different domains, highlighting how concrete applications benefit from the diverse aspects and use cases of Equalizer. We present novel parallel rendering algorithms, powerful abstractions for large visualization setups and virtual reality, as well as new experimental results for parallel rendering and data distribution.

    Updated: 2020-01-04
  • Feature Level-Sets: Generalizing Iso-Surfaces to Multi-Variate Data
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2018-09-03
    Jochen Jankowai; Ingrid Hotz

    Iso-surfaces or level-sets provide an effective and frequently used means for feature visualization. However, they are restricted to simple features of uni-variate data. The approach does not scale when moving to multi-variate data or when considering more complex feature definitions. In this paper, we introduce the concept of traits and feature level-sets, which can be understood as a generalization of level-sets that includes iso-surfaces and fiber surfaces as special cases. The concept is applicable to a large class of traits defined as subsets in attribute space, which can be arbitrary combinations of points, lines, surfaces and volumes. We implement it in a system that provides an interface to define traits interactively, along with multiple rendering options. We demonstrate the effectiveness of the approach using multi-variate data sets of different nature, including vector and tensor data, from different application domains.
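One way to read the trait concept concretely: a feature level-set collects the samples whose attribute-space distance to the trait falls at (or within) a given level. A minimal numpy sketch with hypothetical toy data and a point trait (illustrative only, not the paper's implementation):

```python
import numpy as np

def feature_level_set_mask(attributes, trait_points, iso):
    """Per-sample membership in the feature level-set at level `iso`:
    distance in attribute space from each sample to the nearest trait point.
    `attributes` is (n, d) multivariate data, `trait_points` is (m, d)."""
    diffs = attributes[:, None, :] - trait_points[None, :, :]   # (n, m, d)
    dist = np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)        # (n,)
    return dist <= iso

# Toy bivariate data; the trait is the single attribute-space point (1, 1).
data = np.array([[0.0, 0.0], [1.0, 1.2], [3.0, 3.0]])
mask = feature_level_set_mask(data, np.array([[1.0, 1.0]]), iso=0.5)
print(mask.tolist())   # [False, True, False]
```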

    Updated: 2020-01-04
  • Hamiltonian Operator for Spectral Shape Analysis
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2018-08-28
    Yoni Choukroun; Alon Shtern; Alex Bronstein; Ron Kimmel

    Many shape analysis methods treat the geometry of an object as a metric space that can be captured by the Laplace-Beltrami operator. In this paper, we propose to adapt the classical Hamiltonian operator from quantum mechanics to the field of shape analysis. To this end, we study the addition of a potential function to the Laplacian as a generator for dual spaces in which shape processing is performed. We present general optimization approaches for solving variational problems involving the basis defined by the Hamiltonian using perturbation theory for its eigenvectors. The suggested operator is shown to produce better functional spaces to operate with, as demonstrated on different shape analysis tasks.
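The core construction, augmenting a Laplacian with a potential and taking the resulting eigenbasis, can be sketched on a graph Laplacian as a stand-in for the Laplace-Beltrami operator of a shape (a toy illustration under that simplification, not the paper's mesh discretization):

```python
import numpy as np

def hamiltonian_basis(W, potential, k=3):
    """First k eigenpairs of H = L + diag(v): a graph Laplacian plus a
    potential, a discrete stand-in for the Laplace-Beltrami operator."""
    L = np.diag(W.sum(axis=1)) - W          # combinatorial graph Laplacian
    H = L + np.diag(potential)
    vals, vecs = np.linalg.eigh(H)          # H is symmetric
    return vals[:k], vecs[:, :k]

# 4-node path graph; a large potential on node 3 pushes the low-frequency
# modes of the basis away from that node.
W = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
vals, vecs = hamiltonian_basis(W, potential=np.array([0.0, 0.0, 0.0, 10.0]))
print(vals.shape, vecs.shape)
```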

    Updated: 2020-01-04
  • Intrinsic Image Decomposition with Step and Drift Shading Separation
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2018-09-10
    Bin Sheng; Ping Li; Yuxi Jin; Ping Tan; Tong-Yee Lee

    Decomposing an image into shading and reflectance layers remains challenging because the problem is severely under-constrained. We present an approach based on illumination decomposition that recovers the intrinsic images without additional information, e.g., depth or user interaction. Our approach is based on the rationale that the shading component contains step and drift channels simultaneously. We decompose the illumination into two channels: the step shading, corresponding to sharp shading changes due to cast shadows or abrupt shape changes, and the drift shading, accounting for smooth shading variations due to gradual illumination changes or slow shape changes. By relaxing the conventional assumption that shading is smooth into a more reasonable prior, our model has advantages in handling real images, especially those with cast shadows or strong shape edges. We also apply a much stricter edge classifier along with a reinforcement process to enhance our method. We formulate the problem using a two-parameter energy function and split it into two energy functions corresponding to the reflectance and the step shading. Experiments on the MIT, IIW, and MPI Sintel datasets show the success of our approach over state-of-the-art methods.

    Updated: 2020-01-04
  • PANENE: A Progressive Algorithm for Indexing and Querying Approximate k-Nearest Neighbors
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2018-09-24
    Jaemin Jo; Jinwook Seo; Jean-Daniel Fekete

    We present PANENE, a progressive algorithm for approximate nearest neighbor indexing and querying. Although the use of k-nearest neighbor (KNN) libraries is common in many data analysis methods, most KNN algorithms can only be queried once the whole dataset has been indexed, i.e., they are not online. Even the few online implementations are not progressive, in the sense that the time to index incoming data is not bounded and cannot satisfy the latency requirements of progressive systems. This long latency has significantly limited the use of many machine learning methods, such as t-SNE, in interactive visual analytics. PANENE is a novel algorithm for Progressive Approximate k-NEarest NEighbors, enabling fast KNN queries while continuously indexing new batches of data. Following the progressive computation paradigm, PANENE operations can be bounded in time, allowing analysts to access running results within an interactive latency. PANENE can also incrementally build and maintain a cache data structure, a KNN lookup table, to enable constant-time lookups for KNN queries. Finally, we present three progressive applications of PANENE: regression, density estimation, and responsive t-SNE, opening up new opportunities to use complex algorithms in interactive systems.
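The progressive contract described above — bounded work per operation plus a maintained KNN lookup table — can be caricatured in a few lines (a brute-force toy, not PANENE's actual index structure; the `feed`/`run` names are made up for this sketch):

```python
import numpy as np

class ProgressiveKNN:
    """Toy sketch of the progressive idea behind PANENE: each call to `run`
    ingests at most `budget` pending points and refreshes a cached KNN
    lookup table, so every operation does a bounded amount of work."""
    def __init__(self, k=2, budget=100):
        self.k, self.budget = k, budget
        self.points = np.empty((0, 2))   # 2-D points only, in this toy
        self.pending = []
        self.knn_table = {}              # point index -> indices of its k NN

    def feed(self, batch):
        self.pending.extend(batch)

    def run(self):
        take, self.pending = self.pending[:self.budget], self.pending[self.budget:]
        if take:
            self.points = np.vstack([self.points, np.asarray(take, float)])
        # brute-force refresh stands in for PANENE's incremental index updates
        for i, p in enumerate(self.points):
            d = np.linalg.norm(self.points - p, axis=1)
            self.knn_table[i] = np.argsort(d)[1:self.k + 1].tolist()

index = ProgressiveKNN(k=1, budget=2)
index.feed([[0, 0], [0, 1], [5, 5]])
index.run()                          # only 2 of the 3 points indexed so far
print(len(index.points), index.knn_table[0])
index.run()                          # the remaining point arrives next slice
print(len(index.points))
```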

    Updated: 2020-01-04
  • Poisson Vector Graphics (PVG)
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2018-08-28
    Fei Hou; Qian Sun; Zheng Fang; Yong-Jin Liu; Shi-Min Hu; Hong Qin; Aimin Hao; Ying He

    This paper presents Poisson vector graphics (PVG), an extension of the popular diffusion curves (DC), for generating smooth-shaded images. Armed with two new types of primitives, called Poisson curves (PCs) and Poisson regions (PRs), PVG can easily produce photorealistic effects such as specular highlights, core shadows, translucency and halos. Within the PVG framework, users specify color as the Dirichlet boundary condition of diffusion curves and control tone by offsetting the Laplacian of colors, where both controls are done simply by mouse clicks and slider dragging. PVG distinguishes itself from other diffusion-based vector graphics through three unique features: 1) explicit separation of colors and tones, which follows the basic drawing principle and eases editing; 2) native support for seamless cloning, in the sense that PCs and PRs automatically fit into the target background; and 3) support for intersecting primitives (except for DC-DC intersections), so that users can create layers. Through extensive experiments and a preliminary user study, we demonstrate that PVG is a simple yet powerful authoring tool that can produce photorealistic vector graphics from scratch.
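The tone control described above amounts to solving a Poisson equation whose right-hand side is the offset Laplacian, with colors as Dirichlet boundary values. A tiny Jacobi-iteration sketch on a scalar grid (illustrative only; actual PVG rasterization is more involved):

```python
import numpy as np

def poisson_solve(f, boundary, iters=2000):
    """Jacobi sketch of the PVG idea: colors come from Dirichlet boundary
    values, tone from offsetting the Laplacian (the right-hand side f).
    `boundary` holds fixed values on the border and NaN in the interior."""
    u = boundary.copy()
    interior = np.isnan(boundary)
    u[interior] = 0.0
    for _ in range(iters):
        avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
               + np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
        u = np.where(interior, avg - f / 4.0, u)
    return u

# 5x5 grid: boundary fixed to 0, one interior cell given a negative Laplacian
# offset, which lifts the tone there (a "Poisson region" in miniature).
b = np.full((5, 5), np.nan)
b[0, :] = b[-1, :] = b[:, 0] = b[:, -1] = 0.0
f = np.zeros((5, 5)); f[2, 2] = -1.0
u = poisson_solve(f, b)
print(u[2, 2] > 0)   # True: offsetting the Laplacian raises the tone
```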

    Updated: 2020-01-04
  • Realistic Procedural Plant Modeling from Multiple View Images
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2018-09-24
    Jianwei Guo; Shibiao Xu; Dong-Ming Yan; Zhanglin Cheng; Marc Jaeger; Xiaopeng Zhang

    In this paper, we describe a novel procedural modeling technique for generating realistic plant models from multi-view photographs. The realism is enhanced via visual and spatial information acquired from images. In contrast to previous approaches that heavily rely on user interaction to segment plants or recover branches in images, our method automatically estimates an accurate depth map of each image and extracts a 3D dense point cloud by exploiting an efficient stereophotogrammetry approach. Taking this point cloud as a soft constraint, we fit a parametric plant representation to simulate the plant growth process. In this way, we are able to synthesize parametric plant models from real data provided by photos and 3D point clouds. We demonstrate the robustness of the proposed approach by modeling various plants with complex branching structures and significant self-occlusions. We also demonstrate that the proposed framework can be used to reconstruct ground-covering plants, such as bushes and shrubs, which have received little attention in the literature. The effectiveness of our approach is validated through visual and quantitative comparisons with state-of-the-art approaches.

    Updated: 2020-01-04
  • The Effect of Focal Distance, Age, and Brightness on Near-Field Augmented Reality Depth Matching
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2018-09-24
    Gurjot Singh; Stephen R. Ellis; J. Edward Swan

    Many augmented reality (AR) applications operate within near-field reaching distances and require matching the depth of a virtual object with a real object. The accuracy of this matching was measured in three experiments, which examined the effects of focal distance, age, and brightness within distances of 33.3 to 50 cm, using a custom-built AR haploscope. Experiment I examined the effect of focal demand at three levels: collimated (infinite focal distance), consistent with the other depth cues, and fixed at the midpoint of the reaching distances. Observers were too young to exhibit age-related reductions in accommodative ability. The depth matches of collimated targets were increasingly overestimated with increasing distance, consistent targets were slightly underestimated, and midpoint targets were accurately estimated. Experiment II replicated Experiment I with older observers. Results were similar to Experiment I. Experiment III replicated Experiment I with dimmer targets, using young observers. Results were again consistent with Experiment I, except that both consistent and midpoint targets were accurately estimated. In all cases, the collimated results were explained by a model in which collimation biases the eyes' vergence angle outwards by a constant amount. Focal demand and brightness affect near-field AR depth matching, while age-related reductions in accommodative ability have no effect.

    Updated: 2020-01-04
  • WeSeer: Visual Analysis for Better Information Cascade Prediction of WeChat Articles
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2018-08-30
    Quan Li; Ziming Wu; Lingling Yi; Kristanto Seann; Huamin Qu; Xiaojuan Ma

    Social media, such as Facebook and WeChat, empowers millions of users to create, consume, and disseminate online information on an unprecedented scale. The abundant information on social media intensifies the competition among WeChat Public Official Articles (i.e., posts) for user attention due to the zero-sum nature of attention. Therefore, only a small portion of information tends to become extremely popular while the rest remains unnoticed or quickly disappears. Such a typical “long-tail” phenomenon is very common in social media. Thus, recent years have witnessed a growing interest in predicting the future popularity of social media posts and understanding the factors that influence it. Nevertheless, existing predictive models rely on either cumbersome feature engineering or sophisticated parameter tuning, which are difficult to understand and improve. In this paper, we study and enhance a point process-based model by incorporating visual reasoning to support communication between the users and the predictive model for a better prediction result. The proposed system enables users to uncover the working mechanism behind the model and to improve prediction accuracy based on the insights gained. We use real WeChat articles to demonstrate the effectiveness of the system and verify the improved model on a large collection of WeChat articles. We also elicit and summarize feedback from WeChat domain experts.

    Updated: 2020-01-04
  • A Task-Based Taxonomy of Cognitive Biases for Information Visualization
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2018-09-28
    Evanthia Dimara; Steven Franconeri; Catherine Plaisant; Anastasia Bezerianos; Pierre Dragicevic

    Information visualization designers strive to design data displays that allow for efficient exploration, analysis, and communication of patterns in data, leading to informed decisions. Unfortunately, human judgment and decision making are imperfect and often plagued by cognitive biases. There is limited empirical research documenting how these biases affect visual data analysis activities. Existing taxonomies are organized by cognitive theories that are hard to associate with visualization tasks. Based on a survey of the literature we propose a task-based taxonomy of 154 cognitive biases organized in 7 main categories. We hope the taxonomy will help visualization researchers relate their design to the corresponding possible biases, and lead to new research that detects and addresses biased judgment and decision making in data visualization.

    Updated: 2020-01-04
  • FleXeen: Visually Manipulating Perceived Fabric Bending Stiffness in Spatial Augmented Reality
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2018-09-19
    Parinya Punpongsanon; Daisuke Iwai; Kosuke Sato

    The appearance of fabric motion has been suggested to affect the human perception of bending stiffness. This study presents a novel spatial augmented reality, or projection mapping, approach that can visually manipulate the perceived bending stiffness of a fabric. In particular, we propose a flow enhancement method that changes the apparent fabric motion using a simple optical flow analysis technique rather than complex physical simulations, making it suitable for interactive applications. Through a psychophysical experiment, we investigated the relationship between the magnification factor of our flow enhancement and the perceived bending stiffness of a fabric. Furthermore, we constructed a prototype application system that allows users to control the perceived stiffness of a fabric without changing the actual physical fabric. By evaluating the prototype, we confirmed that the proposed technique can manipulate the perceived stiffness of various materials (i.e., cotton, polyester, and mixed cotton and linen) at an average accuracy of 90.3 percent.
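The flow enhancement idea — exaggerating apparent motion by scaling an estimated optical flow field — can be sketched as a backward warp (a toy with nearest-neighbor sampling and a given flow field; the actual system estimates flow from camera input and projects the result):

```python
import numpy as np

def enhance_motion(frame, flow, gain):
    """Backward-warp sketch of flow enhancement: each output pixel is
    resampled `gain` times further back along its optical-flow vector,
    exaggerating the apparent motion. `flow[..., 0]` is x, `flow[..., 1]` is y."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys - gain * flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs - gain * flow[..., 0]).astype(int), 0, w - 1)
    return frame[src_y, src_x]

# Uniform rightward flow of 1 px, amplified by gain=2, shifts the frame 2 px.
frame = np.arange(16.0).reshape(4, 4)
flow = np.zeros((4, 4, 2)); flow[..., 0] = 1.0
out = enhance_motion(frame, flow, gain=2)
print(out[0].tolist())   # [0.0, 0.0, 0.0, 1.0]
```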

    Updated: 2020-01-04
  • 2019 Index IEEE Transactions on Visualization and Computer Graphics Vol. 25
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-12-31

    Provides instructions and guidelines to prospective authors who wish to submit manuscripts.

    Updated: 2020-01-04
  • LMap: Shape-Preserving Local Mappings for Biomedical Visualization.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2018-07-11
    Saad Nadeem; Xianfeng Gu; Arie E. Kaufman

    Visualization of medical organs and biological structures is a challenging task because of their complex geometry and the resultant occlusions. Global spherical and planar mapping techniques simplify the complex geometry and resolve the occlusions to aid in visualization. However, while resolving the occlusions these techniques do not preserve the geometric context, making them less suitable for mission-critical biomedical visualization tasks. In this paper, we present a shape-preserving local mapping technique for resolving occlusions locally while preserving the overall geometric context. More specifically, we present a novel visualization algorithm, LMap, for conformally parameterizing and deforming a selected local region-of-interest (ROI) on an arbitrary surface. The resultant shape-preserving local mappings help to visualize complex surfaces while preserving the overall geometric context. The algorithm is based on the robust and efficient extrinsic Ricci flow technique, and uses the dynamic Ricci flow algorithm to guarantee the existence of a local map for a selected ROI on an arbitrary surface. We show the effectiveness of our method in three challenging use cases: (1) multimodal brain visualization, (2) optimal coverage of virtual colonoscopy centerline flythrough, and (3) molecular surface visualization.

    Updated: 2019-11-01
  • Pattern-Driven Navigation in 2D Multiscale Visualizations with Scalable Insets.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-08-24
    Fritz Lekschas; Michael Behrisch; Benjamin Bach; Peter Kerpedjiev; Nils Gehlenborg; Hanspeter Pfister

    We present Scalable Insets, a technique for interactively exploring and navigating large numbers of annotated patterns in multiscale visualizations such as gigapixel images, matrices, or maps. Exploration of many, sparsely distributed patterns in multiscale visualizations is challenging as visual representations change across zoom levels, context and navigational cues get lost upon zooming, and navigation is time-consuming. Our technique visualizes annotated patterns too small to be identifiable at certain zoom levels using insets, i.e., magnified thumbnail views of the annotated patterns. Insets support users in searching, comparing, and contextualizing patterns while reducing the amount of navigation needed. They are dynamically placed either within the viewport or along the boundary of the viewport to offer a compromise between locality and context preservation. Annotated patterns are interactively clustered by location and type. They are visually represented as an aggregated inset to provide scalable exploration within a single viewport. In a controlled user study with 18 participants, we found that Scalable Insets can speed up visual search and improve the accuracy of pattern comparison at the cost of slower frequency estimation compared to a baseline technique. A second study with 6 experts in the field of genomics showed that Scalable Insets is easy to learn and provides first insights into how Scalable Insets can be applied in an open-ended data exploration scenario.
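The placement rule — an inset inside the viewport near its pattern when visible, otherwise pinned along the viewport boundary — reduces, in its simplest form, to clamping the pattern position into the viewport rectangle (a deliberately minimal sketch; the paper's placement also handles overlap and stability):

```python
def inset_position(pattern, viewport):
    """Place an inset at its pattern when the pattern lies inside the
    viewport, otherwise clamp it to the nearest viewport boundary point.
    `pattern` is (x, y); `viewport` is (x0, y0, x1, y1)."""
    (px, py), (x0, y0, x1, y1) = pattern, viewport
    return (min(max(px, x0), x1), min(max(py, y0), y1))

print(inset_position((5, 5), (0, 0, 10, 10)))    # inside: placed at the pattern
print(inset_position((15, 3), (0, 0, 10, 10)))   # outside: clamped to boundary
```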

    Updated: 2019-11-01
  • Uncertainty-Aware Principal Component Analysis.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-10-12
    Jochen Görtler; Thilo Spinner; Dirk Streeb; Daniel Weiskopf; Oliver Deussen

    We present a technique to perform dimensionality reduction on data that is subject to uncertainty. Our method is a generalization of traditional principal component analysis (PCA) to multivariate probability distributions. In comparison to non-linear methods, linear dimensionality reduction techniques have the advantage that the characteristics of such probability distributions remain intact after projection. We derive a representation of the PCA sample covariance matrix that respects potential uncertainty in each of the inputs, building the mathematical foundation of our new method: uncertainty-aware PCA. In addition to the accuracy and performance gained by our approach over sampling-based strategies, our formulation allows us to perform sensitivity analysis with regard to the uncertainty in the data. For this, we propose factor traces as a novel visualization that enables a better understanding of the influence of uncertainty on the chosen principal components. We provide multiple examples of our technique using real-world datasets. As a special case, we show how to propagate multivariate normal distributions through PCA in closed form. Furthermore, we discuss extensions and limitations of our approach.
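For Gaussian input uncertainty, one closed-form reading of an uncertainty-aware covariance is the law of total covariance: the covariance of the per-point means plus the mean of the per-point covariances. The sketch below follows that reading (consistent with the abstract's description, but not necessarily the paper's exact derivation):

```python
import numpy as np

def uncertainty_aware_pca(means, covs):
    """Eigendecompose a covariance that accounts for input uncertainty:
    covariance of the means plus the average per-point covariance
    (law of total covariance). `means` is (n, d); `covs` is (n, d, d)."""
    centered = means - means.mean(axis=0)
    total = centered.T @ centered / len(means) + covs.mean(axis=0)
    vals, vecs = np.linalg.eigh(total)
    order = np.argsort(vals)[::-1]          # principal components first
    return vals[order], vecs[:, order]

rng = np.random.default_rng(0)
means = rng.normal(size=(50, 2))
covs = np.tile(0.1 * np.eye(2), (50, 1, 1))  # identical isotropic uncertainty
vals, vecs = uncertainty_aware_pca(means, covs)
print(vals[0] >= vals[1])   # True: eigenvalues sorted descending
```

With zero per-point covariances this reduces to ordinary sample-covariance PCA, which is a useful sanity check on the formulation.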

    Updated: 2019-11-01
  • Evaluating an Immersive Space-Time Cube Geovisualization for Intuitive Trajectory Data Exploration.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-10-04
    Jorge A. Wagner Filho; Wolfgang Stuerzlinger; Luciana Nedel

    A Space-Time Cube enables analysts to clearly observe spatio-temporal features in movement trajectory datasets in geovisualization. However, its general usability is impacted by a lack of depth cues, a reported steep learning curve, and the requirement for efficient 3D navigation. In this work, we investigate a Space-Time Cube in the Immersive Analytics domain. Based on a review of previous work and selecting an appropriate exploration metaphor, we built a prototype environment where the cube is coupled to a virtual representation of the analyst's real desk, and zooming and panning in space and time are intuitively controlled using mid-air gestures. We compared our immersive environment to a desktop-based implementation in a user study with 20 participants across 7 tasks of varying difficulty, which targeted different user interface features. To investigate how performance is affected in the presence of clutter, we explored two scenarios with different numbers of trajectories. While the quantitative performance was similar for the majority of tasks, large differences appear when we analyze the patterns of interaction and consider subjective metrics. The immersive version of the Space-Time Cube received higher usability scores, much higher user preference, and was rated to have a lower mental workload, without causing participants discomfort in 25-minute-long VR sessions.

    Updated: 2019-11-01
  • Dynamic Nested Tracking Graphs.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-10-04
    Jonas Lukasczyk; Christoph Garth; Gunther H. Weber; Tim Biedert; Ross Maciejewski; Heike Leitte

    This work describes an approach for the interactive visual analysis of large-scale simulations, where numerous superlevel set components and their evolution are of primary interest. The approach first derives, at simulation runtime, a specialized Cinema database that consists of images of component groups and topological abstractions. This database is processed by a novel graph-operation-based algorithm (GO-NTG) that dynamically computes nested tracking graphs (NTGs) for component groups based on size, overlap, persistence, and level thresholds. The resulting NTGs are in turn used in a feature-centered visual analytics framework to query specific database elements and update feature parameters, facilitating flexible post hoc analysis.

    Updated: 2019-11-01
  • Winglets: Visualizing Association with Uncertainty in Multi-class Scatterplots.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-09-29
    Min Lu; Shuaiqi Wang; Joel Lanir; Noa Fish; Yang Yue; Daniel Cohen-Or; Hui Huang

    This work proposes Winglets, an enhancement to the classic scatterplot that makes multiple classes more perceptually distinct by clarifying each point's association with, and uncertainty of membership in, its cluster. Designed as a pair of dual-sided strokes attached to a data point, Winglets leverage the Gestalt principle of Closure to shape the perception of cluster form, rather than using an explicit divisive encoding. Through a subtle design of two dominant attributes, length and orientation, Winglets enable viewers to perform a mental completion of the clusters. A controlled user study examined the efficiency of Winglets in perceiving cluster association and the uncertainty of particular points. The results show Winglets form a more prominent association of points into clusters and improve the perception of the associated uncertainty.

    Updated: 2019-11-01
  • BarcodeTree: Scalable Comparison of Multiple Hierarchies.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-09-24
    Guozheng Li; Yu Zhang; Yu Dong; Jie Liang; Jinson Zhang; Jinsong Wang; Michael J. McGuffin; Xiaoru Yuan

    We propose BarcodeTree (BCT), a novel visualization technique for comparing topological structures and node attribute values of multiple trees. BCT can provide an overview of one hundred shallow and stable trees simultaneously, without aggregating individual nodes. Each BCT is shown within a single row using a style similar to a barcode, allowing trees to be stacked vertically with matching nodes aligned horizontally to ease comparison and maintain space efficiency. We design several visual cues and interactive techniques to help users understand the topological structure and compare trees. In an experiment comparing two variants of BCT with icicle plots, the results suggest that BCTs make it easier to visually compare trees by reducing the vertical distance between different trees. We also present two case studies involving a dataset of hundreds of trees to demonstrate BCT's utility.

    Updated: 2019-11-01
  • CerebroVis: Designing an Abstract yet Spatially Contextualized Cerebral Artery Network Visualization.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-09-24
    Aditeya Pandey; Harsh Shukla; Geoffrey S. Young; Lei Qin; Amir A. Zamani; Liangge Hsu; Raymond Huang; Cody Dunne; Michelle A. Borkin

    Blood circulation in the human brain is supplied through a network of cerebral arteries. If a clinician suspects a patient has a stroke or other cerebrovascular condition, they order imaging tests. Neuroradiologists visually search the resulting scans for abnormalities. Their visual search tasks correspond to the abstract network analysis tasks of browsing and path following. To assist neuroradiologists in identifying cerebral artery abnormalities, we designed CerebroVis, a novel abstract, yet spatially contextualized, cerebral artery network visualization. In this design study, we contribute a novel framing and definition of the cerebral artery system in terms of network theory and characterize neuroradiologist domain goals as abstract visualization and network analysis tasks. Through an iterative, user-centered design process we developed an abstract network layout technique which incorporates cerebral artery spatial context. The abstract visualization enables increased domain task performance over 3D geometry representations, while including spatial context helps preserve the user's mental map of the underlying geometry. We provide open source implementations of our network layout technique and prototype cerebral artery visualization tool. We demonstrate the robustness of our technique by successfully laying out 61 open source brain scans. We evaluate the effectiveness of our layout through a mixed methods study with three neuroradiologists. In a formative controlled experiment our study participants used CerebroVis and a conventional 3D visualization to examine real cerebral artery imaging data to identify a simulated intracranial artery stenosis. Participants were more accurate at identifying stenoses using CerebroVis (absolute risk difference 13%). A free copy of this paper, the evaluation stimuli and data, and source code are available at osf.io/e5sxt.

    Updated: 2019-11-01
  • Measures of the Benefit of Direct Encoding of Data Deltas for Data Pair Relation Perception.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-09-20
    Christine Nothelfer; Steven Franconeri

    The power of data visualization is not to convey absolute values of individual data points, but to allow the exploration of relations (increases or decreases in a data value) among them. One approach to highlighting these relations is to explicitly encode the numeric differences (deltas) between data values. Because this approach removes the context of the individual data values, it is important to measure how much of a performance improvement it actually offers, especially across differences in encodings and tasks, to ensure that it is worth adding to a visualization design. Across three different tasks, we measured the increase in visual processing efficiency for judging the relations between pairs of data values, from when only the values were shown to when the deltas between the values were explicitly encoded, across position and length visual feature encodings (and slope encodings in Experiments 1 & 2). In Experiment 1, the participant's task was to locate a pair of data values with a given relation (e.g., find the 'small bar to the left of a tall bar' pair) among pairs of the opposite relation, and we measured processing efficiency from the increase in response times as the number of pairs increased. In Experiment 2, the task was to judge which of two relation types was more prevalent in a briefly presented display of 10 data pairs (e.g., are there more 'small bar to the left of a tall bar' pairs or more 'tall bar to the left of a small bar' pairs?). In the final experiment, the task was to estimate the average delta within a briefly presented display of 6 data pairs (e.g., what is the average bar height difference across all 'small bar to the left of a tall bar' pairs?). Across all three experiments, visual processing of relations between data value pairs was significantly better when the relations were directly encoded as deltas rather than left implicit between individual data points, and the improvement varied substantially with the task (from 25% to 95%). Given the ubiquity of bar charts and dot plots, relation perception across individual data values is highly inefficient, and these results confirm the need for alternative designs that provide not only absolute values but also direct encodings of the critical relationships between those values.

    Updated: 2019-11-01
  • The Impact of Immersion on Cluster Identification Tasks.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-09-20
    M. Kraus; N. Weiler; D. Oelke; J. Kehrer; D. A. Keim; J. Fuchs

    Recent developments in technology encourage the use of head-mounted displays (HMDs) as a medium to explore visualizations in virtual realities (VRs). VR environments (VREs) enable new, more immersive visualization design spaces compared to traditional computer screens. Previous studies in different domains, such as medicine, psychology, and geology, report a positive effect of immersion, e.g., on learning performance or phobia treatment effectiveness. Our work presented in this paper assesses the applicability of those findings to a common task from the information visualization (InfoVis) domain. We conducted a quantitative user study to investigate the impact of immersion on cluster identification tasks in scatterplot visualizations. The main experiment was carried out with 18 participants in a within-subjects setting using four different visualizations, (1) a 2D scatterplot matrix on a screen, (2) a 3D scatterplot on a screen, (3) a 3D scatterplot miniature in a VRE and (4) a fully immersive 3D scatterplot in a VRE. The four visualization design spaces vary in their level of immersion, as shown in a supplementary study. The results of our main study indicate that task performance differs between the investigated visualization design spaces in terms of accuracy, efficiency, memorability, sense of orientation, and user preference. In particular, the 2D visualization on the screen performed worse compared to the 3D visualizations with regard to the measured variables. The study shows that an increased level of immersion can be a substantial benefit in the context of 3D data and cluster detection.

  • Facetto: Combining Unsupervised and Supervised Learning for Hierarchical Phenotype Analysis in Multi-Channel Image Data.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-09-13
    Robert Krueger, Johanna Beyer, Won-Dong Jang, Nam Wook Kim, Artem Sokolov, Peter K Sorger, Hanspeter Pfister

    Facetto is a scalable visual analytics application that is used to discover single-cell phenotypes in high-dimensional multi-channel microscopy images of human tumors and tissues. Such images represent the cutting edge of digital histology and promise to revolutionize how diseases such as cancer are studied, diagnosed, and treated. Highly multiplexed tissue images are complex, comprising 10⁹ or more pixels, 60-plus channels, and millions of individual cells. This makes manual analysis challenging and error-prone. Existing automated approaches are also inadequate, in large part, because they are unable to effectively exploit the deep knowledge of human tissue biology available to anatomic pathologists. To overcome these challenges, Facetto enables a semi-automated analysis of cell types and states. It integrates unsupervised and supervised learning into the image and feature exploration process and offers tools for analytical provenance. Experts can cluster the data to discover new types of cancer and immune cells and use clustering results to train a convolutional neural network that classifies new cells accordingly. Likewise, the output of classifiers can be clustered to discover aggregate patterns and phenotype subsets. We also introduce a new hierarchical approach to keep track of analysis steps and data subsets created by users; this assists in the identification of cell types. Users can build phenotype trees and interact with the resulting hierarchical structures of both high-dimensional feature and image spaces. We report on use-cases in which domain scientists explore various large-scale fluorescence imaging datasets. We demonstrate how Facetto assists users in steering the clustering and classification process, inspecting analysis results, and gaining new scientific insights into cancer biology.

  • ProtoSteer: Steering Deep Sequence Model with Prototypes.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-09-13
    Yao Ming, Panpan Xu, Furui Cheng, Huamin Qu, Liu Ren

    Recently we have witnessed growing adoption of deep sequence models (e.g. LSTMs) in many application domains, including predictive health care, natural language processing, and log analysis. However, the intricate working mechanism of these models confines their accessibility to the domain experts. Their black-box nature also makes it a challenging task to incorporate domain-specific knowledge of the experts into the model. In ProtoSteer (Prototype Steering), we tackle the challenge of directly involving the domain experts to steer a deep sequence model without relying on model developers as intermediaries. Our approach originates in case-based reasoning, which imitates the common human problem-solving process of consulting past experiences to solve new problems. We utilize ProSeNet (Prototype Sequence Network), which learns a small set of exemplar cases (i.e., prototypes) from historical data. In ProtoSteer they serve both as an efficient visual summary of the original data and explanations of model decisions. With ProtoSteer the domain experts can inspect, critique, and revise the prototypes interactively. The system then incorporates user-specified prototypes and incrementally updates the model. We conduct extensive case studies and expert interviews in application domains including sentiment analysis on texts and predictive diagnostics based on vehicle fault logs. The results demonstrate that involvements of domain users can help obtain more interpretable models with concise prototypes while retaining similar accuracy.
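
The prototype-based decision rule can be illustrated with a toy sketch: classify a sequence by the label of its nearest prototype. The edit-distance similarity and the example prototypes below are illustrative assumptions, not the paper's model; ProSeNet learns both the prototypes and the similarity in an embedding space.

```python
def edit_distance(a, b):
    # classic dynamic-programming Levenshtein distance between two sequences
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def classify(sequence, prototypes):
    # case-based reasoning: the label of the nearest prototype wins
    best = min(prototypes, key=lambda p: edit_distance(sequence, p[0]))
    return best[1]

# hypothetical prototypes: (event sequence, label)
PROTOTYPES = [
    (["login", "browse", "logout"], "benign"),
    (["login", "error", "error", "retry"], "faulty"),
]
```

In ProtoSteer, experts edit the prototype set itself (here, `PROTOTYPES`) and the model is then re-trained to respect those edits.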

  • Ablate, Variate, and Contemplate: Visual Analytics for Discovering Neural Architectures.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-09-11
    Dylan Cashman, Adam Perer, Remco Chang, Hendrik Strobelt

    The performance of deep learning models is dependent on the precise configuration of many layers and parameters. However, there are currently few systematic guidelines for how to configure a successful model. This means model builders often have to experiment with different configurations by manually programming different architectures (which is tedious and time consuming) or rely on purely automated approaches to generate and train the architectures (which is expensive). In this paper, we present Rapid Exploration of Model Architectures and Parameters, or REMAP, a visual analytics tool that allows a model builder to discover a deep learning model quickly via exploration and rapid experimentation of neural network architectures. In REMAP, the user explores the large and complex parameter space for neural network architectures using a combination of global inspection and local experimentation. Through a visual overview of a set of models, the user identifies interesting clusters of architectures. Based on their findings, the user can run ablation and variation experiments to identify the effects of adding, removing, or replacing layers in a given architecture and generate new models accordingly. They can also handcraft new models using a simple graphical interface. As a result, a model builder can build deep learning models quickly, efficiently, and without manual programming. We inform the design of REMAP through a design study with four deep learning model builders. Through a use case, we demonstrate that REMAP allows users to discover performant neural network architectures efficiently using visual exploration and user-defined semi-automated searches through the model space.

  • There Is No Spoon: Evaluating Performance, Space Use, and Presence with Expert Domain Users in Immersive Analytics.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-09-05
    Andrea Batch, Andrew Cunningham, Maxime Cordeil, Niklas Elmqvist, Tim Dwyer, Bruce H Thomas, Kim Marriott

    Immersive analytics turns the very space surrounding the user into a canvas for data analysis, supporting human cognitive abilities in myriad ways. We present the results of a design study, contextual inquiry, and longitudinal evaluation involving professional economists using a Virtual Reality (VR) system for multidimensional visualization to explore actual economic data. Results from our preregistered evaluation highlight the varied use of space depending on context (exploration vs. presentation), the organization of space to support work, and the impact of immersion on navigation and orientation in the 3D analysis space.

  • Scalable Topological Data Analysis and Visualization for Evaluating Data-Driven Models in Scientific Applications.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-09-05
    Shusen Liu, Jim Gaffney, Luc Peterson, Peter B Robinson, Harsh Bhatia, Valerio Pascucci, Brian K Spears, Peer-Timo Bremer, Di Wang, Dan Maljovec, Rushil Anirudh, Jayaraman J Thiagarajan, Sam Ade Jacobs, Brian C Van Essen, David Hysom, Jae-Seung Yeom

    With the rapid adoption of machine learning techniques for large-scale applications in science and engineering comes the convergence of two grand challenges in visualization. First, the utilization of black box models (e.g., deep neural networks) calls for advanced techniques in exploring and interpreting model behaviors. Second, the rapid growth in computing has produced enormous datasets that require techniques that can handle millions or more samples. Although some solutions to these interpretability challenges have been proposed, they typically do not scale beyond thousands of samples, nor do they provide the high-level intuition scientists are looking for. Here, we present the first scalable solution to explore and analyze high-dimensional functions often encountered in the scientific data analysis pipeline. By combining a new streaming neighborhood graph construction, the corresponding topology computation, and a novel data aggregation scheme, namely topology aware datacubes, we enable interactive exploration of both the topological and the geometric aspect of high-dimensional data. Following two use cases from high-energy-density (HED) physics and computational biology, we demonstrate how these capabilities have led to crucial new insights in both applications.
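
The datacube side of this pipeline can be sketched as a binned aggregation keyed by grid cell; the paper's topology-aware datacubes additionally key cells by topological segment, which this minimal sketch (with hypothetical names and a plain per-cell count/sum aggregate) omits.

```python
from collections import defaultdict

def build_datacube(points, values, bins, lo, hi):
    """Aggregate (count, sum) of a scalar per grid cell: the plain
    binned-aggregation half of a datacube, so per-cell means can be
    queried without touching the raw samples again."""
    step = [(h - l) / bins for l, h in zip(lo, hi)]
    cube = defaultdict(lambda: [0, 0.0])
    for p, v in zip(points, values):
        cell = tuple(min(bins - 1, int((x - l) / s))
                     for x, l, s in zip(p, lo, step))
        cube[cell][0] += 1
        cube[cell][1] += v
    return cube

def cell_mean(cube, cell):
    count, total = cube[cell]
    return total / count if count else 0.0
```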

  • Text-to-Viz: Automatic Generation of Infographics from Proportion-Related Natural Language Statements.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-09-04
    Weiwei Cui, Xiaoyu Zhang, Yun Wang, He Huang, Bei Chen, Lei Fang, Haidong Zhang, Jian-Guang Lou, Dongmei Zhang

    Combining data content with visual embellishments, infographics can effectively deliver messages in an engaging and memorable manner. Various authoring tools have been proposed to facilitate the creation of infographics. However, creating a professional infographic with these authoring tools is still not an easy task, requiring much time and design expertise. Therefore, these tools are generally not attractive to casual users, who are either unwilling to take time to learn the tools or lacking in proper design expertise to create a professional infographic. In this paper, we explore an alternative approach: to automatically generate infographics from natural language statements. We first conducted a preliminary study to explore the design space of infographics. Based on the preliminary study, we built a proof-of-concept system that automatically converts statements about simple proportion-related statistics to a set of infographics with pre-designed styles. Finally, we demonstrated the usability and usefulness of the system through sample results, exhibits, and expert reviews.

  • Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-09-04
    Yuxin Ma, Tiankai Xie, Jundong Li, Ross Maciejewski

    Machine learning models are currently being deployed in a variety of real-world applications where model predictions are used to make decisions about healthcare, bank loans, and numerous other critical tasks. As the deployment of artificial intelligence technologies becomes ubiquitous, it is unsurprising that adversaries have begun developing methods to manipulate machine learning models to their advantage. While the visual analytics community has developed methods for opening the black box of machine learning models, little work has focused on helping the user understand their model vulnerabilities in the context of adversarial attacks. In this paper, we present a visual analytics framework for explaining and exploring model vulnerabilities to adversarial attacks. Our framework employs a multi-faceted visualization scheme designed to support the analysis of data poisoning attacks from the perspective of models, data instances, features, and local structures. We demonstrate our framework through two case studies on binary classifiers and illustrate model vulnerabilities with respect to varying attack strategies.
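
A minimal sketch of the data-poisoning idea, assuming a toy 1D nearest-centroid classifier (not one of the models studied in the paper): flipping the label of a single training instance moves a class centroid enough to change a prediction.

```python
def centroids(data):
    # mean position per label
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(data, query):
    # nearest-centroid classification in 1D
    c = centroids(data)
    return min(c, key=lambda y: abs(query - c[y]))

# hypothetical training set: (position, label)
CLEAN = [(0.0, "A"), (1.0, "A"), (10.0, "B"), (11.0, "B")]
# poisoning attack: the adversary flips the label of one training instance
POISONED = [(0.0, "A"), (1.0, "B"), (10.0, "B"), (11.0, "B")]
```

The framework in the paper visualizes exactly this kind of shift, from the perspective of models, instances, features, and local structures.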

  • Separating the Wheat from the Chaff: Comparative Visual Cues for Transparent Diagnostics of Competing Models.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-09-04
    Aritra Dasgupta, Hong Wang, Nancy O'Brien, Susannah Burrows

    Experts in data and physical sciences have to regularly grapple with the problem of competing models. Be it analytical or physics-based models, a cross-cutting challenge for experts is to reliably diagnose which model outcomes appropriately predict or simulate real-world phenomena. Expert judgment involves reconciling information across many, and often, conflicting criteria that describe the quality of model outcomes. In this paper, through a design study with climate scientists, we develop a deeper understanding of the problem and solution space of model diagnostics, resulting in the following contributions: i) a problem and task characterization using which we map experts' model diagnostics goals to multi-way visual comparison tasks, ii) a design space of comparative visual cues for letting experts quickly understand the degree of disagreement among competing models and gauge the degree of stability of model outputs with respect to alternative criteria, and iii) design and evaluation of MyriadCues, an interactive visualization interface for exploring alternative hypotheses and insights about good and bad models by leveraging comparative visual cues. We present case studies and subjective feedback by experts, which validate how MyriadCues enables more transparent model diagnostic mechanisms, as compared to the state of the art.

  • Data Changes Everything: Challenges and Opportunities in Data Visualization Design Handoff.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-09-04
    Jagoda Walny, Christian Frisson, Mieka West, Doris Kosminsky, Soren Knudsen, Sheelagh Carpendale, Wesley Willett

    Complex data visualization design projects often entail collaboration between people with different visualization-related skills. For example, many teams include both designers who create new visualization designs and developers who implement the resulting visualization software. We identify gaps between data characterization tools, visualization design tools, and development platforms that pose challenges for designer-developer teams working to create new data visualizations. While it is common for commercial interaction design tools to support collaboration between designers and developers, creating data visualizations poses several unique challenges that are not supported by current tools. In particular, visualization designers must characterize and build an understanding of the underlying data, then specify layouts, data encodings, and other data-driven parameters that will be robust across many different data values. In larger teams, designers must also clearly communicate these mappings and their dependencies to developers, clients, and other collaborators. We report observations and reflections from five large multidisciplinary visualization design projects and highlight six data-specific visualization challenges for design specification and handoff. These challenges include adapting to changing data, anticipating edge cases in data, understanding technical challenges, articulating data-dependent interactions, communicating data mappings, and preserving the integrity of data mappings across iterations. Based on these observations, we identify opportunities for future tools for prototyping, testing, and communicating data-driven designs, which might contribute to more successful and collaborative data visualization design.

  • Design by Immersion: A Transdisciplinary Approach to Problem-Driven Visualizations.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-08-27
    Kyle Wm Hall, Adam J Bradley, Uta Hinrichs, Samuel Huron, Jo Wood, Christopher Collins, Sheelagh Carpendale

    While previous work exists on how to conduct and disseminate insights from problem-driven visualization projects and design studies, the literature does not address how to accomplish these goals in transdisciplinary teams in ways that advance all disciplines involved. In this paper we introduce and define a new methodological paradigm we call design by immersion, which provides an alternative perspective on problem-driven visualization work. Design by immersion embeds transdisciplinary experiences at the center of the visualization process by having visualization researchers participate in the work of the target domain (or domain experts participate in visualization research). Based on our own combined experiences of working on cross-disciplinary, problem-driven visualization projects, we present six case studies that expose the opportunities that design by immersion enables, including (1) exploring new domain-inspired visualization design spaces, (2) enriching domain understanding through personal experiences, and (3) building strong transdisciplinary relationships. Furthermore, we illustrate how the process of design by immersion opens up a diverse set of design activities that can be combined in different ways depending on the type of collaboration, project, and goals. Finally, we discuss the challenges and potential pitfalls of design by immersion.

  • An Incremental Dimensionality Reduction Method for Visualizing Streaming Multidimensional Data.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-08-27
    Takanori Fujiwara, Jia-Kai Chou, Shilpika, Panpan Xu, Liu Ren, Kwan-Liu Ma

    Dimensionality reduction (DR) methods are commonly used for analyzing and visualizing multidimensional data. However, when data is a live streaming feed, conventional DR methods cannot be directly used because of their computational complexity and inability to preserve the projected data positions at previous time points. In addition, the problem becomes even more challenging when the dynamic data records have a varying number of dimensions as often found in real-world applications. This paper presents an incremental DR solution. We enhance an existing incremental PCA method in several ways to ensure its usability for visualizing streaming multidimensional data. First, we use geometric transformation and animation methods to help preserve a viewer's mental map when visualizing the incremental results. Second, to handle data dimension variants, we use an optimization method to estimate the projected data positions, and also convey the resulting uncertainty in the visualization. We demonstrate the effectiveness of our design with two case studies using real-world datasets.
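
The kind of streaming update underlying incremental PCA can be sketched with Welford-style running moments: the mean and covariance are refreshed one record at a time, without revisiting earlier records. This is only the sufficient-statistics half of incremental PCA (the eigendecomposition, position preservation, and varying-dimension handling of the paper are omitted), and the class name is an illustrative assumption.

```python
class StreamingMoments:
    """Running mean and covariance of a d-dimensional stream: the
    statistics an incremental-PCA-style method maintains so the
    projection can be updated without storing the whole stream."""
    def __init__(self, d):
        self.d = d
        self.n = 0
        self.mean = [0.0] * d
        self._m2 = [[0.0] * d for _ in range(d)]  # sum of outer products of deviations

    def update(self, x):
        # Welford update: use the deviation from the old mean and the new mean
        self.n += 1
        delta = [xi - mi for xi, mi in zip(x, self.mean)]
        self.mean = [mi + di / self.n for mi, di in zip(self.mean, delta)]
        delta2 = [xi - mi for xi, mi in zip(x, self.mean)]
        for i in range(self.d):
            for j in range(self.d):
                self._m2[i][j] += delta[i] * delta2[j]

    def covariance(self):
        # unbiased sample covariance (requires n > 1)
        return [[c / (self.n - 1) for c in row] for row in self._m2]
```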

  • GPGPU Linear Complexity t-SNE Optimization.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-08-27
    Nicola Pezzotti, Julian Thijssen, Alexander Mordvintsev, Thomas Hollt, Baldur Van Lew, Boudewijn P F Lelieveldt, Elmar Eisemann, Anna Vilanova

    In recent years the t-distributed Stochastic Neighbor Embedding (t-SNE) algorithm has become one of the most used and insightful techniques for exploratory data analysis of high-dimensional data. It reveals clusters of high-dimensional data points at different scales while only requiring minimal tuning of its parameters. However, the computational complexity of the algorithm limits its application to relatively small datasets. To address this problem, several evolutions of t-SNE have been developed in recent years, mainly focusing on the scalability of the similarity computations between data points. However, these contributions are insufficient to achieve interactive rates when visualizing the evolution of the t-SNE embedding for large datasets. In this work, we present a novel approach to the minimization of the t-SNE objective function that heavily relies on graphics hardware and has linear computational complexity. Our technique decreases the computational cost of running t-SNE on datasets by orders of magnitude and retains or improves on the accuracy of past approximated techniques. We propose to approximate the repulsive forces between data points by splatting kernel textures for each data point. This approximation allows us to reformulate the t-SNE minimization problem as a series of tensor operations that can be efficiently executed on the graphics card. An efficient implementation of our technique is integrated into the widely used Google TensorFlow.js and is also available as an open-source C++ library.
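
The kernel-splatting idea can be sketched on the CPU: each point deposits a Gaussian kernel onto a shared grid, and every point then reads the accumulated field from its cell instead of summing over all other points. The grid size, the untruncated loops, and the cell-center evaluation below are simplifying assumptions; the paper splats small truncated kernel textures on the GPU and derives forces from the field.

```python
import math

def splat_field(points, grid_n, lo, hi, sigma):
    """Accumulate each point's Gaussian kernel onto a shared grid.
    A single pass over points and cells replaces the O(N^2) pairwise
    sum: each point later reads the total kernel mass from the grid."""
    step = (hi - lo) / grid_n
    field = [[0.0] * grid_n for _ in range(grid_n)]
    for px, py in points:
        for i in range(grid_n):
            for j in range(grid_n):
                cx = lo + (i + 0.5) * step
                cy = lo + (j + 0.5) * step
                field[i][j] += math.exp(
                    -((cx - px) ** 2 + (cy - py) ** 2) / (2 * sigma ** 2))
    return field

def read_field(field, point, grid_n, lo, hi):
    # each point reads the accumulated kernel mass from its own cell
    step = (hi - lo) / grid_n
    i = min(grid_n - 1, int((point[0] - lo) / step))
    j = min(grid_n - 1, int((point[1] - lo) / step))
    return field[i][j]
```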

  • Towards Automated Infographic Design: Deep Learning-based Auto-Extraction of Extensible Timeline.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-08-24
    Zhutian Chen, Yun Wang, Qianwen Wang, Yong Wang, Huamin Qu

    Designers need to consider not only perceptual effectiveness but also visual styles when creating an infographic. This process can be difficult and time consuming for professional designers, not to mention non-expert users, leading to the demand for automated infographics design. As a first step, we focus on timeline infographics, which have been widely used for centuries. We contribute an end-to-end approach that automatically extracts an extensible timeline template from a bitmap image. Our approach adopts a deconstruction and reconstruction paradigm. At the deconstruction stage, we propose a multi-task deep neural network that simultaneously parses two kinds of information from a bitmap timeline: 1) the global information, i.e., the representation, scale, layout, and orientation of the timeline, and 2) the local information, i.e., the location, category, and pixels of each visual element on the timeline. At the reconstruction stage, we propose a pipeline with three techniques, i.e., Non-Maximum Merging, Redundancy Recover, and DL GrabCut, to extract an extensible template from the infographic, by utilizing the deconstruction results. To evaluate the effectiveness of our approach, we synthesize a timeline dataset (4296 images) and collect a real-world timeline dataset (393 images) from the Internet. We first report quantitative evaluation results of our approach over the two datasets. Then, we present examples of automatically extracted templates and timelines automatically generated based on these templates to qualitatively demonstrate the performance. The results confirm that our approach can effectively extract extensible templates from real-world timeline infographics.

  • A Comparison of Visualizations for Identifying Correlation over Space and Time.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-08-24
    Vanessa Pena-Araya, Emmanuel Pietriga, Anastasia Bezerianos

    Observing the relationship between two or more variables over space and time is essential in many domains. For instance, looking at the evolution of both the life expectancy at birth and the fertility rate for different countries gives an overview of their demographics. The choice of visual representation for such multivariate data is key to enabling analysts to extract patterns and trends. Prior work has compared geo-temporal visualization techniques for a single thematic variable that evolves over space and time, or for two variables at a specific point in time. But how effective visualization techniques are at communicating correlation between two variables that evolve over space and time remains to be investigated. We report on a study comparing three techniques that are representative of different strategies to visualize geo-temporal multivariate data: either juxtaposing all locations for a given time step, or juxtaposing all time steps for a given location; and encoding thematic attributes either using symbols overlaid on top of map features, or using visual channels of the map features themselves. Participants performed a series of tasks that required them to identify if two variables were correlated over time and if there was a pattern in their evolution. Tasks varied in granularity for both dimensions: time (all time steps, a subrange of steps, one step only) and space (all locations, locations in a subregion, one location only). Our results show that a visualization's effectiveness depends strongly on the task to be carried out. Based on these findings we present a set of design guidelines about geo-temporal visualization techniques for communicating correlation.
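
The underlying measurement in such tasks, correlation of two thematic variables over time, reduces to a Pearson coefficient per location; the sketch below uses hypothetical series, not the study's data.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# hypothetical per-country series over the same time steps:
# (life expectancy at birth, fertility rate)
COUNTRIES = {
    "Atlantis": ([70.0, 72.0, 74.0], [3.0, 2.6, 2.2]),
}
per_country_r = {c: pearson(a, b) for c, (a, b) in COUNTRIES.items()}
```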

  • GenerativeMap: Visualization and Exploration of Dynamic Density Maps via Generative Learning Model.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-08-24
    Chen Chen, Changbo Wang, Xue Bai, Peiying Zhang, Chenhui Li

    The density map is widely used for data sampling, time-varying detection, ensemble representation, etc. The visualization of dynamic evolution is a challenging task when exploring spatiotemporal data. Many approaches have been proposed to explore the variation of data patterns over time, which commonly need multiple parameters and preprocessing work. Image generation is a well-known topic in deep learning, and a variety of generative models have been proposed in recent years. In this paper, we introduce a general pipeline called GenerativeMap to extract the dynamics of density maps by generating interpolation information. First, a trained generative model forms an important part of our approach; it can generate nonlinear, natural-looking results with only a few parameters. Second, a visual presentation is proposed to show the density change, which is combined with level of detail and blue-noise sampling for a better visual effect. Third, for dynamic visualization of large-scale density maps, we extend this approach to show the evolution in regions of interest, which reduces the computational cost and overcomes a drawback of the learning-based generative model. We demonstrate our method on different types of cases, and we evaluate and compare the approach from multiple aspects. The results help identify the effectiveness of our approach and confirm its applicability in different scenarios.
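
A density map is essentially a binned 2D histogram, and the naive baseline for in-betweening two maps is per-cell linear interpolation; GenerativeMap's contribution is replacing this linear blend with learned, nonlinear interpolation. A minimal sketch of the baseline:

```python
def density_map(points, bins, lo, hi):
    """Bin 2D points into a bins x bins count grid over [lo, hi]^2."""
    step = (hi - lo) / bins
    grid = [[0] * bins for _ in range(bins)]
    for x, y in points:
        i = min(bins - 1, int((x - lo) / step))
        j = min(bins - 1, int((y - lo) / step))
        grid[i][j] += 1
    return grid

def lerp_maps(a, b, t):
    # naive per-cell linear interpolation between two density maps;
    # GenerativeMap replaces this with learned, nonlinear in-betweens
    return [[(1 - t) * ai + t * bi for ai, bi in zip(ra, rb)]
            for ra, rb in zip(a, b)]
```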

  • Construct-A-Vis: Exploring the Free-Form Visualization Processes of Children.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-08-24
    Fearn Bishop, Johannes Zagermann, Ulrike Pfeil, Gemma Sanderson, Harald Reiterer, Uta Hinrichs

    Building data analysis skills is part of modern elementary school curricula. Recent research has explored how to facilitate children's understanding of visual data representations through completion exercises which highlight links between concrete and abstract mappings. This approach scaffolds visualization activities by presenting a target visualization to children. But how can we engage children in more free-form visual data mapping exercises that are driven by their own mapping ideas? How can we scaffold a creative exploration of visualization techniques and mapping possibilities? We present Construct-A-Vis, a tablet-based tool designed to explore the feasibility of free-form and constructive visualization activities with elementary school children. Construct-A-Vis provides adjustable levels of scaffolding for the visual mapping process. It can be used by children individually or as part of collaborative activities. Findings from a study with elementary school children using Construct-A-Vis individually and in pairs highlight the potential of this free-form constructive approach, as visible in children's diverse visualization outcomes and their critical engagement with the data and mapping processes. Based on our study findings we contribute insights into the design of free-form visualization tools for children, including the role of tool-based scaffolding mechanisms and shared interactions to guide visualization activities with children.

  • Persistent Homology Guided Force-Directed Graph Layouts.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-08-24
    Ashley Suh, Mustafa Hajij, Bei Wang, Carlos Scheidegger, Paul Rosen

    Graphs are commonly used to encode relationships among entities, yet their abstractness makes them difficult to analyze. Node-link diagrams are popular for drawing graphs, and force-directed layouts provide a flexible method for node arrangements that use local relationships in an attempt to reveal the global shape of the graph. However, clutter and overlap of unrelated structures can lead to confusing graph visualizations. This paper leverages the persistent homology features of an undirected graph as derived information for interactive manipulation of force-directed layouts. We first discuss how to efficiently extract 0-dimensional persistent homology features from both weighted and unweighted undirected graphs. We then introduce the interactive persistence barcode used to manipulate the force-directed graph layout. In particular, the user adds and removes contracting and repulsing forces generated by the persistent homology features, eventually selecting the set of persistent homology features that most improve the layout. Finally, we demonstrate the utility of our approach across a variety of synthetic and real datasets.
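
Extracting 0-dimensional persistence from a weighted graph reduces to a Kruskal-style sweep with union-find: each vertex is a component born at filtration value 0, and every edge that merges two components kills one of them at that edge's weight. A minimal sketch (the infinite bars of surviving components are left implicit):

```python
def zero_dim_persistence(num_vertices, edges):
    """0-dimensional persistence barcode of a weighted graph.
    edges: iterable of (weight, u, v). Returns finite (birth, death) bars."""
    parent = list(range(num_vertices))

    def find(v):
        # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    bars = []
    for w, u, v in sorted(edges):  # sweep edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            bars.append((0.0, w))  # one component dies when two merge
    return bars
```

The interactive barcode in the paper is built from exactly these bars; dragging a bar adds or removes the corresponding contracting/repulsing force in the layout.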

  • RSATree: Distribution-Aware Data Representation of Large-Scale Tabular Datasets for Flexible Visual Query.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-08-24
    Honghui Mei, Wei Chen, Yating Wei, Yuanzhe Hu, Shuyue Zhou, Bingru Lin, Ying Zhao, Jiazhi Xia

    Analysts commonly investigate the data distributions derived from statistical aggregations of data that are represented by charts, such as histograms and binned scatterplots, to visualize and analyze a large-scale dataset. Aggregate queries are implicitly executed through such a process. Datasets are often extremely large; thus, the response time should be accelerated by calculating predefined data cubes. However, the queries are limited to the predefined binning schema of preprocessed data cubes. Such limitation hinders analysts' flexible adjustment of visual specifications to investigate the implicit patterns in the data effectively. To overcome this limitation, we present RSATree, which enables arbitrary queries and flexible binning strategies by leveraging three schemes, namely, an R-tree-based space partitioning scheme to capture the data distribution, a locality-sensitive hashing technique to achieve locality-preserving random access to data items, and a summed area table scheme to support interactive query of aggregated values with a linear computational complexity. This study presents and implements a web-based visual query system that supports visual specification, query, and exploration of large-scale tabular data with user-adjustable granularities. We demonstrate the efficiency and utility of our approach by performing various experiments on real-world datasets and analyzing time and space complexity.
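
The summed area table scheme can be sketched directly: 2D prefix sums give any rectangular aggregate in O(1) by inclusion-exclusion, which is what makes arbitrary binning interactive.

```python
def summed_area_table(grid):
    """2D prefix sums: sat[i][j] = sum of grid[0..i][0..j]."""
    rows, cols = len(grid), len(grid[0])
    sat = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        run = 0.0
        for j in range(cols):
            run += grid[i][j]
            sat[i][j] = run + (sat[i - 1][j] if i else 0.0)
    return sat

def range_sum(sat, r0, c0, r1, c1):
    """Aggregate over the inclusive rectangle [r0..r1] x [c0..c1] in O(1)."""
    total = sat[r1][c1]
    if r0:
        total -= sat[r0 - 1][c1]
    if c0:
        total -= sat[r1][c0 - 1]
    if r0 and c0:
        total += sat[r0 - 1][c0 - 1]  # inclusion-exclusion correction
    return total
```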

  • Data Sampling in Multi-view and Multi-class Scatterplots via Set Cover Optimization.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-08-24
    Ruizhen Hu, Tingkai Sha, Oliver Van Kaick, Oliver Deussen, Hui Huang

    We present a method for data sampling in scatterplots by jointly optimizing point selection for different views or classes. Our method uses space-filling curves (Z-order curves) that partition a point set into subsets that, when covered each by one sample, provide a sampling or coreset with good approximation guarantees in relation to the original point set. For scatterplot matrices with multiple views, different views provide different space-filling curves, leading to different partitions of the given point set. For multi-class scatterplots, the focus on either per-class distribution or global distribution provides two different partitions of the given point set that need to be considered in the selection of the coreset. For both cases, we convert the coreset selection problem into an Exact Cover Problem (ECP), and demonstrate with quantitative and qualitative evaluations that an approximate solution that solves the ECP efficiently is able to provide high-quality samplings.
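
A Z-order (Morton) curve orders 2D points by interleaving coordinate bits; sorting by this index and keeping a regular stride, as sketched below, is a crude stand-in for the paper's set-cover-based coreset selection over the curve's partitions.

```python
def interleave_bits(x, y, bits=16):
    """Morton (Z-order) index: interleave the bits of x and y so that
    sorting by the result visits points along a space-filling Z curve."""
    z = 0
    for b in range(bits):
        z |= ((x >> b) & 1) << (2 * b)        # x bits at even positions
        z |= ((y >> b) & 1) << (2 * b + 1)    # y bits at odd positions
    return z

def zorder_sample(points, stride):
    """Sort 2D integer points along the Z curve and keep every
    stride-th one, so nearby samples cover nearby curve segments."""
    ordered = sorted(points, key=lambda p: interleave_bits(p[0], p[1]))
    return ordered[::stride]
```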

  • DeepDrawing: A Deep Learning Approach to Graph Drawing.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-08-24
    Yong Wang, Zhihua Jin, Qianwen Wang, Weiwei Cui, Tengfei Ma, Huamin Qu

    Node-link diagrams are widely used to facilitate network explorations. However, when using a graph drawing technique to visualize networks, users often need to tune different algorithm-specific parameters iteratively by comparing the corresponding drawing results in order to achieve a desired visual effect. This trial and error process is often tedious and time-consuming, especially for non-expert users. Inspired by the powerful data modelling and prediction capabilities of deep learning techniques, we explore the possibility of applying deep learning techniques to graph drawing. Specifically, we propose using a graph-LSTM-based approach to directly map network structures to graph drawings. Given a set of layout examples as the training dataset, we train the proposed graph-LSTM-based model to capture their layout characteristics. Then, the trained model is used to generate graph drawings in a similar style for new networks. We evaluated the proposed approach on two special types of layouts (i.e., grid layouts and star layouts) and two general types of layouts (i.e., ForceAtlas2 and PivotMDS) in both qualitative and quantitative ways. The results provide support for the effectiveness of our approach. We also conducted a time cost assessment on the drawings of small graphs with 20 to 50 nodes. We further report the lessons we learned and discuss the limitations and future work.

    Updated: 2019-11-01
  • VisTA: Integrating Machine Intelligence with Visualization to Support the Investigation of Think-Aloud Sessions.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-08-24
    Mingming Fan,Ke Wu,Jian Zhao,Yue Li,Winter Wei,Khai N Truong

    Think-aloud protocols are widely used by user experience (UX) practitioners in usability testing to uncover issues in user interface design. It is often arduous to analyze large amounts of recorded think-aloud sessions and few UX practitioners have an opportunity to get a second perspective during their analysis due to time and resource constraints. Inspired by the recent research that shows subtle verbalization and speech patterns tend to occur when users encounter usability problems, we take the first step to design and evaluate an intelligent visual analytics tool that leverages such patterns to identify usability problem encounters and present them to UX practitioners to assist their analysis. We first conducted and recorded think-aloud sessions, and then extracted textual and acoustic features from the recordings and trained machine learning (ML) models to detect problem encounters. Next, we iteratively designed and developed a visual analytics tool, VisTA, which enables dynamic investigation of think-aloud sessions with a timeline visualization of ML predictions and input features. We conducted a between-subjects laboratory study to compare three conditions, i.e., VisTA, VisTASimple (no visualization of the ML's input features), and Baseline (no ML information at all), with 30 UX professionals. The findings show that UX professionals identified more problem encounters when using VisTA than Baseline by leveraging the problem visualization as an overview, anticipations, and anchors as well as the feature visualization as a means to understand what ML considers and omits. Our findings also provide insights into how they treated ML, dealt with (dis)agreement with ML, and reviewed the videos (i.e., play, pause, and rewind).

    Updated: 2019-11-01
  • Improving the Robustness of Scagnostics.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-08-24
    Yunhai Wang,Zeyu Wang,Tingting Liu,Michael Correll,Zhanglin Cheng,Oliver Deussen,Michael Sedlmair

    In this paper, we examine the robustness of scagnostics through a series of theoretical and empirical studies. First, we investigate the sensitivity of scagnostics by employing perturbing operations on more than 60M synthetic and real-world scatterplots. We found that two scagnostic measures, Outlying and Clumpy, are overly sensitive to data binning. To understand how these measures align with human judgments of visual features, we conducted a study with 24 participants, which reveals that i) humans are not sensitive to small perturbations of the data that cause large changes in both measures, and ii) the perception of clumpiness heavily depends on per-cluster topologies and structures. Motivated by these results, we propose Robust Scagnostics (RScag) by combining adaptive binning with a hierarchy-based form of scagnostics. An analysis shows that RScag improves on the robustness of the original scagnostics, aligns better with human judgments, and is as fast as the traditional scagnostic measures.

    Updated: 2019-11-01
  • Data by Proxy - Material Traces as Autographic Visualizations.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-08-24
    Dietmar Offenhuber

    Information visualization limits itself, by definition, to the domain of symbolic information. This paper discusses why the field should also consider forms of data that are not symbolically encoded, including physical traces and material indicators. Continuing a provocation presented by Pat Hanrahan in his 2004 IEEE Vis capstone address, this paper compares physical traces to visualizations and describes the techniques and visual practices for producing, revealing, and interpreting them. By contrasting information visualization with a speculative counter model of autographic visualization, this paper examines the design principles for material data. Autographic visualization addresses limitations of information visualization, such as the inability to directly reflect the material circumstances of data generation. The comparison between the two models allows probing the epistemic assumptions behind information visualization and uncovers linkages with the rich history of scientific visualization and trace reading. The paper begins by discussing the gap between data visualizations and their corresponding phenomena and proceeds by investigating how material visualizations can bridge this gap. It contextualizes autographic visualization with paradigms such as data physicalization and indexical visualization and grounds it in the broader theoretical literature of semiotics, science and technology studies (STS), and the history of scientific representation. The main section of the paper proposes a foundational design vocabulary for autographic visualization and offers examples of how citizen scientists already use autographic principles in their displays, which seem to violate the canonical principles of information visualization but succeed at fulfilling other rhetorical purposes in evidence construction. The paper concludes with a discussion of the limitations of autographic visualization, a roadmap for the empirical investigation of trace perception, and thoughts about how information visualization and autographic visualization techniques can contribute to each other.

    Updated: 2019-11-01
  • The Perceptual Proxies of Visual Comparison.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-08-24
    Nicole Jardine,Brian D Ondov,Niklas Elmqvist,Steven Franconeri

    Perceptual tasks in visualizations often involve comparisons. Of two sets of values depicted in two charts, which set had values that were the highest overall? Which had the widest range? Prior empirical work found that the performance on different visual comparison tasks (e.g., "biggest delta", "biggest correlation") varied widely across different combinations of marks and spatial arrangements. In this paper, we expand upon these combinations in an empirical evaluation of two new comparison tasks: the "biggest mean" and "biggest range" between two sets of values. We used a staircase procedure to titrate the difficulty of the data comparison to assess which arrangements produced the most precise comparisons for each task. We find visual comparisons of biggest mean and biggest range are supported by some chart arrangements more than others, and that this pattern is substantially different from the pattern for other tasks. To synthesize these dissonant findings, we argue that we must understand which features of a visualization are actually used by the human visual system to solve a given task. We call these perceptual proxies. For example, when comparing the means of two bar charts, the visual system might use a "Mean length" proxy that isolates the actual lengths of the bars and then constructs a true average across these lengths. Alternatively, it might use a "Hull Area" proxy that perceives an implied hull bounded by the bars of each chart and then compares the areas of these hulls. We propose a series of potential proxies across different tasks, marks, and spatial arrangements. Simple models of these proxies can be empirically evaluated for their explanatory power by matching their performance to human performance across these marks, arrangements, and tasks. We use this process to highlight candidates for perceptual proxies that might scale more broadly to explain performance in visual comparison.
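The staircase procedure mentioned above is a standard adaptive method from psychophysics. The following is a hedged sketch of a generic 1-up/2-down staircase (not the paper's exact protocol; all names and parameter values are illustrative): the stimulus difference shrinks after two consecutive correct responses and grows after each error, so the track converges near the observer's discrimination threshold.

```python
# Generic 1-up/2-down staircase sketch (illustrative, not the study's code).

def staircase(respond, start=0.5, step=0.05, floor=0.01, trials=40):
    """Run a staircase; respond(delta) -> True if the observer answered correctly.
    Returns the sequence of tested difficulty levels (deltas)."""
    delta, correct_streak, history = start, 0, []
    for _ in range(trials):
        history.append(delta)
        if respond(delta):
            correct_streak += 1
            if correct_streak == 2:              # two correct -> make it harder
                delta = max(floor, delta - step)
                correct_streak = 0
        else:                                    # one error -> make it easier
            delta = delta + step
            correct_streak = 0
    return history
```

With a deterministic observer who succeeds whenever the difference exceeds some threshold, the tested deltas descend from `start` and then oscillate around that threshold, which is the behavior a staircase exploits to estimate precision.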

    Updated: 2019-11-01
  • A Comparison of Radial and Linear Charts for Visualizing Daily Patterns.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-08-24
    Manuela Waldner,Alexandra Diehl,Denis Gracanin,Rainer Splechtna,Claudio Delrieux,Kresimir Matkovic

    Radial charts are generally considered less effective than linear charts. Perhaps the only exception is in visualizing periodic time-dependent data, which is believed to be naturally supported by the radial layout. It has been demonstrated that the drawbacks of radial charts outweigh the benefits of this natural mapping. Visualization of daily patterns, as a special case, has not been systematically evaluated using radial charts. In contrast to yearly or weekly recurrent trends, the analysis of daily patterns on a radial chart may benefit from our trained skill in reading radial clocks, which are ubiquitous in our culture. In a crowd-sourced experiment with 92 non-expert users, we evaluated the accuracy, efficiency, and subjective ratings of radial and linear charts for visualizing daily traffic accident patterns. We systematically compared juxtaposed 12-hour variants and single 24-hour variants of both layouts in four low-level tasks and one high-level interpretation task. Our results show that, over all tasks, the most elementary 24-hour linear bar chart is the most accurate and efficient and is also preferred by the users. This provides strong evidence for the use of linear layouts - even for visualizing periodic daily patterns.

    Updated: 2019-11-01
  • ShapeWordle: Tailoring Wordles using Shape-aware Archimedean Spirals.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-08-24
    Yunhai Wang,Bongshin Lee,Xiaowei Chu,Kaiyi Zhang,Chen Bao,Xiaotong Li,Jian Zhang,Chi-Wing Fu,Christophe Hurter,Oliver Deussen

    We present ShapeWordle, a new technique for creating shape-bounded Wordles, in which we fit words to form a given shape. To guide word placement within a shape, we extend the traditional Archimedean spirals to be shape-aware by formulating the spirals in a differential form using the distance field of the shape. To handle non-convex shapes, we introduce a multi-centric Wordle layout method that segments the shape into parts for our shape-aware spirals to adaptively fill the space and generate word placements. In addition, we offer a set of editing interactions to facilitate the creation of semantically meaningful Wordles. Lastly, we present three evaluations: a comprehensive comparison of our results against the state-of-the-art technique (WordArt), case studies with 14 users, and a gallery to showcase the coverage of our technique.
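As a point of reference, the classic (shape-unaware) Wordle search order is a plain Archimedean spiral; the paper's contribution is to reformulate this spiral via the shape's distance field. The sketch below shows only the classic baseline, with a hypothetical `inside` predicate standing in for the target shape:

```python
import math

# Classic Archimedean spiral candidate generator (baseline sketch, not the
# paper's shape-aware formulation). `inside(x, y)` is a hypothetical
# membership test for the target shape.

def spiral_candidates(center, inside, a=0.0, b=1.5, step=0.2, max_t=60.0):
    """Yield candidate (x, y) word positions along r = a + b*t, kept only
    when they fall inside the shape."""
    cx, cy = center
    t = 0.0
    while t < max_t:
        r = a + b * t
        x, y = cx + r * math.cos(t), cy + r * math.sin(t)
        if inside(x, y):
            yield (x, y)
        t += step

# e.g. restrict candidates to a disc of radius 30 around the origin:
disc = lambda x, y: x * x + y * y <= 900.0
positions = list(spiral_candidates((0.0, 0.0), disc))
```

A Wordle placer walks these candidates outward from a seed point until it finds a collision-free slot; for non-convex shapes a single spiral wastes many candidates, which motivates the paper's multi-centric, distance-field-driven variant.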

    Updated: 2019-11-01
  • A Natural-language-based Visual Query Approach of Uncertain Human Trajectories.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-08-24
    Zhaosong Huang,Ye Zhao,Wei Chen,Shengjie Gao,Kejie Yu,Weixia Xu,Mingjie Tang,Minfeng Zhu,Mingliang Xu

    Visual querying is essential for interactively exploring massive trajectory data. However, data uncertainty imposes profound challenges on fulfilling advanced analytics requirements. On the one hand, much of the underlying data does not contain accurate geographic coordinates; e.g., the positions of a mobile phone refer only to the regions (i.e., mobile cell stations) in which it resides, instead of accurate GPS coordinates. On the other hand, domain experts and general users prefer a natural way, such as a natural language sentence, to access and analyze massive movement data. In this paper, we propose a visual analytics approach that can extract spatio-temporal constraints from a textual sentence and support an effective query method over uncertain mobile trajectory data. It is built on encoding massive, spatially uncertain trajectories by the semantic information of the POIs and regions they cover, and then storing the trajectory documents in a text database with an effective indexing scheme. The visual interface facilitates query condition specification, situation-aware visualization, and semantic exploration of large trajectory data. Usage scenarios on real-world human mobility datasets demonstrate the effectiveness of our approach.

    Updated: 2019-11-01
  • AirVis: Visual Analytics of Air Pollution Propagation.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-08-24
    Zikun Deng,Di Weng,Jiahui Chen,Ren Liu,Zhibin Wang,Jie Bao,Yu Zheng,Yingcai Wu

    Air pollution has become a serious public health problem for many cities around the world. To find the causes of air pollution, the propagation processes of air pollutants must be studied at a large spatial scale. However, the complex and dynamic wind fields lead to highly uncertain pollutant transportation. The state-of-the-art data mining approaches cannot fully support the extensive analysis of such uncertain spatiotemporal propagation processes across multiple districts without the integration of domain knowledge. The limitation of these automated approaches motivates us to design and develop AirVis, a novel visual analytics system that assists domain experts in efficiently capturing and interpreting the uncertain propagation patterns of air pollution based on graph visualizations. Designing such a system poses three challenges: a) the extraction of propagation patterns; b) the scalability of pattern presentations; and c) the analysis of propagation processes. To address these challenges, we develop a novel pattern mining framework to model pollutant transportation and extract frequent propagation patterns efficiently from large-scale atmospheric data. Furthermore, we organize the extracted patterns hierarchically based on the minimum description length (MDL) principle and empower expert users to explore and analyze these patterns effectively on the basis of pattern topologies. We demonstrated the effectiveness of our approach through two case studies conducted with a real-world dataset and positive feedback from domain experts.

    Updated: 2019-11-01
  • Exploranative Code Quality Documents.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-08-24
    Haris Mumtaz,Shahid Latif,Fabian Beck,Daniel Weiskopf

    Good code quality is a prerequisite for efficiently developing maintainable software. In this paper, we present a novel approach to generate exploranative (explanatory and exploratory) data-driven documents that report code quality in an interactive, exploratory environment. We employ a template-based natural language generation method to create textual explanations about the code quality, dependent on data from software metrics. The interactive document is enriched by different kinds of visualization, including parallel coordinates plots and scatterplots for data exploration and graphics embedded into text. We devise an interaction model that allows users to explore code quality with consistent linking between text and visualizations; through integrated explanatory text, users are taught background knowledge about code quality aspects. Our approach to interactive documents was developed in a design study process that included software engineering and visual analytics experts. Although the solution is specific to the software engineering scenario, we discuss how the concept could generalize to multivariate data and report lessons learned in a broader scope.

    Updated: 2019-11-01
  • FlowSense: A Natural Language Interface for Visual Data Exploration within a Dataflow System.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-08-24
    Bowen Yu,Claudio T Silva

    Dataflow visualization systems enable flexible visual data exploration by allowing the user to construct a dataflow diagram that composes query and visualization modules to specify system functionality. However, learning to use dataflow diagrams imposes overhead that often discourages the user. In this work we design FlowSense, a natural language interface for dataflow visualization systems that utilizes state-of-the-art natural language processing techniques to assist dataflow diagram construction. FlowSense employs a semantic parser with special utterance tagging and special utterance placeholders to generalize to different datasets and dataflow diagrams. It explicitly presents recognized dataset and diagram special utterances to the user for dataflow context awareness. With FlowSense, the user can expand and adjust dataflow diagrams more conveniently via plain English. We apply FlowSense to the VisFlow subset-flow visualization system to enhance its usability. We evaluate FlowSense through a case study with domain experts on a real-world data analysis problem and through a formal user study.

    Updated: 2019-11-01
  • Galex: Exploring the Evolution and Intersection of Disciplines.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-08-24
    Zeyu Li,Changhong Zhang,Shichao Jia,Jiawan Zhang

    Revealing the evolution of science and the intersections among its sub-fields is extremely important for understanding the characteristics of disciplines, discovering new topics, and predicting the future. Existing work either focuses on building the skeleton of science, lacking interaction, detailed exploration, and interpretation, or operates at the lower topic level, missing a high-level macro-perspective. To fill this gap, we design and implement Galaxy Evolution Explorer (Galex), a hierarchical visual analysis system combined with advanced text mining technologies that helps analysts rapidly comprehend the evolution and intersection of a discipline. We divide Galex into three progressively fine-grained levels: discipline, area, and institution. The combination of interactions enables analysts to explore an arbitrary piece of history and an arbitrary part of the knowledge space of a discipline. Using a flexible spotlight component, analysts can freely select and quickly understand an exploration region. A tree metaphor allows analysts to perceive the expansion, decline, and intersection of topics intuitively. A synchronous spotlight interaction aids in easily comparing research contents among institutions. Three cases demonstrate the effectiveness of our system.

    Updated: 2019-11-01
  • You can't always sketch what you want: Understanding Sensemaking in Visual Query Systems.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-08-24
    Doris Jung-Lin Lee,John Lee,Tarique Siddiqui,Jaewoo Kim,Karrie Karahalios,Aditya Parameswaran

    Visual query systems (VQSs) empower users to interactively search for line charts with desired visual patterns, typically specified using intuitive sketch-based interfaces. Despite decades of past work on VQSs, these efforts have not translated to adoption in practice, possibly because VQSs are largely evaluated in unrealistic lab-based settings. To remedy this gap in adoption, we collaborated with experts from three diverse domains-astronomy, genetics, and material science-via a year-long user-centered design process to develop a VQS that supports their workflow and analytical needs, and evaluate how VQSs can be used in practice. Our study results reveal that ad-hoc sketch-only querying is not as commonly used as prior work suggests, since analysts are often unable to precisely express their patterns of interest. In addition, we characterize three essential sensemaking processes supported by our enhanced VQS. We discover that participants employ all three processes, but in different proportions, depending on the analytical needs in each domain. Our findings suggest that all three sensemaking processes must be integrated in order to make future VQSs useful for a wide range of analytical inquiries.

    Updated: 2019-11-01
  • Visual Analysis of High-Dimensional Event Sequence Data via Dynamic Hierarchical Aggregation.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-08-24
    David Gotz,Jonathan Zhang,Wenyuan Wang,Joshua Shrestha,David Borland

    Temporal event data are collected across a broad range of domains, and a variety of visual analytics techniques have been developed to empower analysts working with this form of data. These techniques generally display aggregate statistics computed over sets of event sequences that share common patterns. Such techniques are often hindered, however, by the high-dimensionality of many real-world event sequence datasets which can prevent effective aggregation. A common coping strategy for this challenge is to group event types together prior to visualization, as a pre-process, so that each group can be represented within an analysis as a single event type. However, computing these event groupings as a pre-process also places significant constraints on the analysis. This paper presents a new visual analytics approach for dynamic hierarchical dimension aggregation. The approach leverages a predefined hierarchy of dimensions to computationally quantify the informativeness, with respect to a measure of interest, of alternative levels of grouping within the hierarchy at runtime. This information is then interactively visualized, enabling users to dynamically explore the hierarchy to select the most appropriate level of grouping to use at any individual step within an analysis. Key contributions include an algorithm for interactively determining the most informative set of event groupings for a specific analysis context, and a scented scatter-plus-focus visualization design with an optimization-based layout algorithm that supports interactive hierarchical exploration of alternative event type groupings. We apply these techniques to high-dimensional event sequence data from the medical domain and report findings from domain expert interviews.
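The paper quantifies the informativeness of alternative groupings with respect to a measure of interest; the exact measure and algorithm are theirs, but one common way to score a grouping, shown here purely as an illustrative stand-in, is the mutual information between the grouped event type and an outcome label:

```python
import math
from collections import Counter

# Illustrative stand-in for scoring a grouping's informativeness (not the
# paper's measure): mutual information between group id and outcome label.

def mutual_information(events, outcomes, grouping):
    """events: list of event types; outcomes: parallel list of labels;
    grouping: dict mapping each event type to a group id."""
    n = len(events)
    groups = [grouping[e] for e in events]
    pg = Counter(groups)                  # marginal counts of groups
    po = Counter(outcomes)                # marginal counts of outcomes
    pgo = Counter(zip(groups, outcomes))  # joint counts
    mi = 0.0
    for (g, o), c in pgo.items():
        p_joint = c / n
        mi += p_joint * math.log2(p_joint / ((pg[g] / n) * (po[o] / n)))
    return mi
```

Under such a score, collapsing event types that behave differently with respect to the outcome drives the value toward zero, while a grouping that preserves the distinction retains the full information, which is the trade-off a dynamic hierarchy exploration needs to surface.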

    Updated: 2019-11-01
  • sPortfolio: Stratified Visual Analysis of Stock Portfolios.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-08-24
    Xuanwu Yue,Jiaxin Bai,Qinhan Liu,Yiyang Tang,Abishek Puri,Ke Li,Huamin Qu

    Quantitative Investment, built on the solid foundation of robust financial theories, is at the center stage in the investment industry today. The essence of quantitative investment is the multi-factor model, which explains the relationship between the risk and return of equities. However, the multi-factor model generates enormous quantities of factor data, through which even experienced portfolio managers find it difficult to navigate. This has led to portfolio analysis and factor research being limited by a lack of intuitive visual analytics tools. Previous portfolio visualization systems have mainly focused on the relationship between portfolio return and stock holdings, which is insufficient for deriving actionable insights or understanding market trends. In this paper, we present sPortfolio, which, to the best of our knowledge, is the first visualization that attempts to explore the factor investment area. In particular, sPortfolio provides a holistic overview of the factor data and aims to facilitate the analysis at three different levels: a Risk-Factor level, for general market situation analysis; a Multiple-Portfolio level, for understanding portfolio strategies; and a Single-Portfolio level, for investigating detailed operations. The system's effectiveness and usability are demonstrated through three case studies. The system has passed its pilot study and is soon to be deployed in industry.

    Updated: 2019-11-01
  • LightGuider: Guiding Interactive Lighting Design using Suggestions, Provenance, and Quality Visualization.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-08-24
    Andreas Walch,Michael Schwarzler,Christian Luksch,Elmar Eisemann,Theresia Gschwandtner

    LightGuider is a novel guidance-based approach to interactive lighting design, which typically consists of interleaved 3D modeling operations and light transport simulations. Rather than having designers use a trial-and-error approach to match their illumination constraints and aesthetic goals, LightGuider supports the process by simulating potential next modeling steps that can deliver the most significant improvements. LightGuider takes predefined quality criteria and the current focus of the designer into account to visualize suggestions for lighting-design improvements via a specialized provenance tree. This provenance tree integrates snapshot visualizations of how well a design meets the given quality criteria weighted by the designer's preferences. This integration facilitates the analysis of quality improvements over the course of a modeling workflow as well as the comparison of alternative design solutions. We evaluate our approach with three lighting designers to illustrate its usefulness.

    Updated: 2019-11-01
  • OD Morphing: Balancing Simplicity with Faithfulness for OD Bundling.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-08-24
    Yan Lyu,Xu Liu,Hanyi Chen,Arpan Mangal,Kai Liu,Chao Chen,Brian Lim

    OD bundling is a promising method to identify key origin-destination (OD) patterns, but the bundling can mislead the interpretation of actual trajectories traveled. We present OD Morphing, an interactive OD bundling technique that improves geographical faithfulness to actual trajectories while preserving visual simplicity for OD patterns. OD Morphing iteratively identifies critical waypoints from the actual trajectory network with a min-cut algorithm and transitions OD bundles to pass through the identified waypoints with a smooth morphing method. Furthermore, we extend OD Morphing to support bundling at interaction speeds to enable users to interactively transition between degrees of faithfulness to aid sensemaking. We introduce metrics for faithfulness and simplicity to evaluate their trade-off achieved by OD morphed bundling. We demonstrate OD Morphing on real-world city-scale taxi trajectory and USA domestic planned flight datasets.

    Updated: 2019-11-01
  • EmoCo: Visual Analysis of Emotion Coherence in Presentation Videos.
    IEEE Trans. Vis. Comput. Graph. (IF 3.780) Pub Date : 2019-08-24
    Haipeng Zeng,Xingbo Wang,Aoyu Wu,Yong Wang,Quan Li,Alex Endert,Huamin Qu

    Emotions play a key role in human communication and public presentations. Human emotions are usually expressed through multiple modalities. Therefore, exploring multimodal emotions and their coherence is of great value for understanding emotional expressions in presentations and improving presentation skills. However, manually watching and studying presentation videos is often tedious and time-consuming. There is a lack of tool support to help conduct an efficient and in-depth multi-level analysis. Thus, in this paper, we introduce EmoCo, an interactive visual analytics system to facilitate efficient analysis of emotion coherence across facial, text, and audio modalities in presentation videos. Our visualization system features a channel coherence view and a sentence clustering view that together enable users to obtain a quick overview of emotion coherence and its temporal evolution. In addition, a detail view and word view enable detailed exploration and comparison from the sentence level and word level, respectively. We thoroughly evaluate the proposed system and visualization techniques through two usage scenarios based on TED Talk videos and interviews with two domain experts. The results demonstrate the effectiveness of our system in gaining insights into emotion coherence in presentations.

    Updated: 2019-11-01
Contents have been reproduced by permission of the publishers.