
The need for technical skill in graphic design continues to grow as technologies and interfaces change. Yet, how have fundamental understandings of visual hierarchy, perception and composition changed along with new interfaces? The modern concept of human visual perception is rooted in psychological study. Thus, the way that we see and perceive graphic information will always be the same, despite continually changing graphic interfaces. So, how can contemporary interactive design consider and improve upon the foundations of graphic composition and visual hierarchy?
The foundational rules of visual perception are critical for any graphic design, as they instruct how information with embedded meaning is conveyed as quickly as possible. However, these rules were founded in print and have yet to be redefined for digital media. There has yet to be a sort of digital Bauhaus school to set these new principles. The rules of hierarchy and composition are not just dated, they break down in web interfaces. We simply experience the interface in an entirely different way from print.
Most designs for interactive and web media are still in their infancy, since designers treat the screen as a static, two-dimensional object. Can interactive design innovate beyond reapplying print formats to the web? Before new principles can be developed, the fundamental understandings of visual hierarchy, perception and composition should be understood and introduced. As a basis for visual communication they remain relevant. Perhaps we can also investigate how these principles are employed in innovative examples that produce enjoyable experiences in interactive media.

Visual Hierarchy: New Understandings of Graphic Composition For Interactive Interfaces

What is visual hierarchy and why is it important? Hierarchy is the choreography of content in a composition to communicate information and convey meaning. Visual hierarchy directs viewers to the most important information first, and identifies navigation through secondary content.
The meaning, concept, or mood of the composition is conveyed through the creative use of graphic tools that establish hierarchy. This is established through a designer’s use of size, colour, shape, orientation and other tools.
How do you use Size, Color, Shape, and Orientation to convey meaning and mood in a design?
Visual hierarchy is critical for any graphic design, whether it’s a logo that must identify the ambition of a brand at a glance, or the easy navigation of an interactive interface. Our understanding of every element is based on its relation to its context. Elements are treated with graphic tools to form visual relationships and thus establish visual hierarchy across a design.
However, the understanding of visual hierarchy is based on theory relating to two dimensional visual perception. Web and interactive design allows for more complex potential relationships between elements. Thus, we will discuss the basics of this visual theory, but present precedents that are bending the static boundaries of graphic design rooted in print.


Many of the rules of visual hierarchy seem overly simple and banal, but they are a critical foundation for any good graphic design.
Consider the immediate connotations of a red cross versus a monochrome one. Almost universally, the red cross has established a connotation of health, help and safety. Such is the potential for immediate communication with the use of colour. Colour is often used to identify groups, as when one red cross amongst three black ones stands out as somehow more significant.
Bright, rich colours stand out more than muted ones, and thus have greater visual weight. In an interface, colour might be used to point out structure and navigation. Even a single colour within a monochrome interface can identify selection, and perhaps even allude to what might be beyond that button the viewer is hovering over.
Colour, or the lack of it in monochrome design elements, can be used to outline UI elements and appeal to users on a subconscious level.
However, colour is also embedded with meaning and emotion that conveys information to viewers subconsciously. In branding, much psychological research has been done on colour because it creates a visceral response in a consumer prior to having any meaningful interaction with a brand. For example, blues are often dependable, secure and calming, while reds are stimulating and even known to increase viewers’ heart rates. However, colours may have different connotations depending on the culture.
An example of a meaningful, but organizational use of colour in web design is The Names for Change site. The site immediately communicates its organizing structure through the use of colour (by default the organization is scattered, but can be rearranged by topic/colour). However, the chosen tones help to overcome one of the potential difficulties for the meaning of the site. Fundraising for daily items such as socks or tampons isn’t exciting enough to sell itself, so the radical graphic tone of the site raises the perceptual value of the everyday items, while establishing the necessary underlying organizational structure.


Consider an illustration of one large bird sitting next to three smaller ones. Without any further information, this simple graphic communicates the relationship between elements. The image distinguishes two classes: parent and children, which collectively communicate a family.
Size establishes hierarchy because the biggest items gain attention first and thus appear to be the most important. Here we see one big circle next to three smaller ones. Thus not only does the biggest appear to be more important than the others, but two distinct groups immediately become established, as well. It is critical to understand that we have imbued meaning into these objects only by changing one quality relative to one another. Alone, each group could not be distinguished so deliberately. Size is often used within bodies of text to identify meaningful subjects, headlines, or important quotes. Secondary content, such as labels, should thus be smaller so as to not compete with the important information.
A traditional graphic strategy is to make the most important elements the largest, and step down sizing hierarchically. However, too many sets of sizes can be confusing, so a basic structure of heading, body and label text sizes is sufficient. Consider some of the most widely used graphic interfaces, such as Instagram. Nothing on the screen competes with the image, which takes up more than 50 percent of most screens. The purpose of the interface is immediate. This simplicity likely led to the adoption of the social app by so many smartphone users. If an entire navigational interface can be established through relative sizing, why not?
An example of restructured hierarchy is the portfolio site for RO/LU. The web portfolio for the art/design studio may not be the most intuitive site, but it challenges the arrangement of the typical creative portfolio. A different project appears most significant each time a viewer visits, as a result of the randomized sizing of the projects’ thumbnails in the background. For most creative studios, there is no hierarchy within the projects of a portfolio, as each is important in its own way. The RO/LU site design creates a dynamic composition, with varying levels of interest on each visit, and encourages viewers to investigate the studio’s extensive portfolio. Thus, the eclectic, interdisciplinary nature of the studio is represented in the randomized sizing of content.


One square stands out of line. When a single element breaks an established structure, it stands out from the composition, and thus attains meaning relative to the rest. Alignment communicates a sense of order by connecting elements spatially. In most web designs, menu items are gathered together; we therefore immediately understand each as part of the same group.
But a rigid composition may appear stagnant and visually uninteresting unless something steps outside of the grid. Thus misalignment, or breaking the grid, can be an opportunity to give an element visual weight. As a principle, elements that are placed centrally often appear to be more significant. For example, important content or interfaces may appear centrally, while navigational tools and menus are often kept out of the way. That said, in web and interactive media content is usually aligned and composed two-dimensionally, much as newspaper columns develop structure through grids. Yet in these mediums, alignments might change, reorganize, move or evolve into the third dimension. When interactive design applies fundamental principles and pushes the potential of new mediums, interesting new experiences might be invented. Through re-alignment, new meanings can be associated between different content elements. How could web design transcend print?
For example, the DNA project is a site that uses a series of realignments to communicate the creative construction of a musician’s album. At first, as a two dimensional composition, the menu items hide in the margins because they are to be explored later in the experience of the website. First, a viewer is invited to click through the song tracks, traditionally aligned in album format. However, by allowing a viewer to realign the DNA elements of the album/site, the conception of the album is communicated not just as a series of tracks, but a non-linear construction of fragments over time, each informing the album in different, and potentially multiple, places. The structure is complex, as is the construction of an album.


Consider how immediately the simple heart shape communicates its potential use for ‘liking’ in most social interactive interfaces today. To establish importance or groups, consider one heart amongst four circles. Geometric form is like color in that shapes carry certain connotations that give elements personality or meaning.
In interactive design, shapes are essential for efficient communication since they often convey meaning more quickly and universally than text. Instead of text, symbols, which are often simple, geometric shapes, have become the analog for most navigation systems and interactive interfaces. The logic of ‘liking’ an image, making a phone call or checking a message is often conveyed only in shapes. This form of visual communication becomes increasingly important in a global market and is evidence of how digital media can transcend print as a form of visual communication.
Newspapers had to quickly adapt their design for new technologies. The rest of the content industry followed suit.
Consider the difference between searching through the newspaper for an arts section, and selecting the library or search icon in most apps. Until recently, most newspaper websites laid out their pages as if they were in print, and the experience of sifting through the content was clumsy and disorienting.
The navigation for the Signes du Quotidien site uses shape subtly to produce a visual hierarchy that guides users. The four circular topics are clearly distinguished from the square menu. The menu interaction is alluded to simply through the use of shape, which hints at the potential to drag the menu items into the box. The viewer is given little information other than visual, graphic clues, which together produce a clever interface.


Motion is a principle that is almost impossible to use in print, but in digital media it earns a place in the graphic toolkit. Perhaps obvious by now, a moving element will carry greater visual weight in a group of static elements.
However, what can motion communicate other than a literal translation of itself? Motion is often used as a gesture that an element is interactive, but can it be used as a communicative device beyond this? If hierarchy is not only about the efficiency of communication, but also about embedding meaning, how might motion be used as an essential visual tool?
Motion is the single quality that can be given to a graphic element in digital media that is not possible in print. Thus, it inherently rewrites the rules of visual and graphic perception and experience. Many of its uses are simple but essential; they must work without being noticed and appear to have happened naturally. While much of the discussion of visual hierarchy has to do with reinventing old principles, for motion there is no existing visual theory to redefine or apply. Yet, since motion is mostly used as a fundamental tool for interaction, might the tool be applied to communicate less immediate, nonliteral content?
For the I Remember site, the vibrating main interface immediately stands out from the page as it invites interaction. Although the motion and interface are functional navigational tools to explore the content, the designer uses the potential loss of these elements as a way to discuss the underlying intent of the site. Just like the fading memories of the patients for whom the organization fundraises, the website will slowly dissolve unless it is interacted with.


Sound is another tool foreign to print media, and it has yet to be developed within the principles of hierarchy. Since sound is entirely non-visual, there are no rules to refer to. That being said, it is a design tool that effectively communicates literal content, as well as moods or meanings. Perhaps elements that carry certain sounds may be grouped relative to one another. Those that are boldest might seem the most important, or perhaps separate from a group.
The quality of a sound that identifies an element should quickly identify, characterize, or organize the content. How might sounds that contrast with their associated visual content convey new meaning? Sounds themselves can be so complex that they establish the entire mood or message of a design before anything visual is perceived. A sound might sit in the background, just as a colourful backdrop establishes a mood. Or sounds might signal the use of an interface, such as responding to the press of a button. The principle of the tool is basic, but the creativity with which it is employed is where the magic can happen.
The site for the exhibition of the German artist group ZERO at the Guggenheim uses sound as an atmosphere, but also as a form of navigation. Sound is chosen as a tool due to its importance in the creative work of the collective. Bold ringtones establish the pieces that represent the beginnings of a theme, while the tertiary projects click in the background.

Hierarchy Is Straightforward

Hierarchy is a straightforward concept to discuss once diagrammed.
However, understanding hierarchy is easier than executing a well-organized composition, and being inventive within a new medium while maintaining good design is more challenging still. New mediums appear all the time. First it was the Internet, then smartphones and tablets, and now we are moving into new territory with technologies such as wearables and virtual reality.
Design that truly pushes the boundaries of digital media is still in its infancy. Hopefully, the principles of good design will keep up with the rapid advances of technology so that our experience of digital media remains full of meaning and pleasure.
This article was written by KENT MUNDLE, Toptal freelance designer.

Design Principles: Introduction To Hierarchy

Posted by Anonymous
User experience design (UXD or UED) is the process of enhancing user satisfaction by improving the usability, accessibility, and pleasure provided in the interaction between the user and the product.
This nicely encapsulates what the design part is all about, but what about the other equally important facet of UX, the testing process? The former can be self-taught, at least to a degree. The latter is one of the more misunderstood, but ultimately necessary, steps in UX design. It has to be effective and involve the most important people – your users/customers.
For the UX guru-in-training, testing can be a difficult and overwhelming topic to approach initially, due to its sheer scale and the diverse directions it can take. This can sometimes be confusing and misleading, depending on which area you wish to focus on and what your professional background is.
For the sake of this article, we’ll approach UX testing from the perspective of a web/app designer who wishes to extend their UI design skills and better understand the core User Centered Design (UCD) approach, which should take place before Photoshop or Axure are even powered up.

Understanding User Centered Design (UCD)

Before we proceed to testing, let’s start by explaining the basic concept behind UCD.
UCD places the user first in the design and development cycle of an application/website. UCD is based around an understanding of the application’s environment, tasks, and its users. It then addresses the complete user experience as a whole.
What this basically means is that the entire design process involves real users throughout, in order to ensure the end product meets its initial brief requirement as fully as possible.
To sum up the process in its most basic form (there are many variations of UCD), the phases are as follows:
  • Context of use: Identify who will use the product and what they will use it for, and under what conditions they intend to use it.
  • Requirements: Identify any business requirements or user goals that must be met for the product to be successful.
  • Design solutions: This part of the process may be done in stages, building from a rough concept to a complete design through a number of iterations.
  • Evaluation of designs: Ideally through usability testing with actual users. This step is just as important for UCD as quality testing is to good software development.
Some of the techniques and methods used in UCD are:

Card Sorting

Card sorting can offer useful insight at the UX Design/Design stage.
Card sorting involves participants being given an unsorted group of cards; each card has a statement on it relating to a page or section of the website. The participants are then asked to sort the cards into groups and name them.
Card sorting is a simple and effective way of testing your UX designs on a range of different subjects.
This is usually a great way of learning what your website navigation and content structure should look like, and how they should work in a way that’s logical to your intended user base.

Usability Testing Session

A usability testing session involves collecting data from a group as they use the website/interactive prototypes. It usually comes at a relatively high cost, because it involves a lot of human interaction and legwork.
What does a usability testing session look like? People are invited to attend a session during which they will be asked to perform a series of tasks on the website, while you or the moderator takes notes. The user will often be asked to fill in a questionnaire at the end of the test, to ascertain how difficult it was to perform certain tasks, such as buy a product on an e-commerce site from a specific category page and proceed to checkout.
This type of testing is usually reserved for high-end interactive prototypes or interactive wireframes. It is a great way of gathering data on the most common issues real-world users will encounter.

Focus Groups

Focus group testing is more or less self-explanatory. It involves asking focus group members (who could be site users or the intended target audience) a series of questions related to the website, and encouraging them to share their thoughts and feelings on different related areas of the site design/wireframes.
UX tests involving user groups and questionnaires can cover a broad demographic, but both come with trade-offs.
It’s normally a good idea to have an experienced moderator during such a group session to ensure accurate notes are taken. Additionally, a good moderator should be able to identify the telltale signs of groupthink, and make sure that the whole process is not negatively affected by group dynamics.

Questionnaires

Questionnaires can be a great way of generating invaluable statistical data – provided the right questions are asked.
A questionnaire can be particularly useful when you want to collect a much more varied cross-section of data than could be achieved through a small focus group. It can also be argued that people tend to be more honest without the immediate pressure of being in a small user group.
The risk of groupthink is averted, so individuals will make their own decisions.

Testing on a Tight Budget or Timescale

Don’t worry, none of these processes are set in stone. In case you are forced to operate on a tight budget or cut corners to meet a hard deadline, there are ways of streamlining the process without sacrificing too much.
If you have to UX test on a tight budget or on short notice, you will have to cut corners and think outside the box.
For example, you could organize part of these processes differently, or merge them together and use your friends and family as test subjects if needs be. What is important is that you are actively seeking involvement, feedback, and constructive criticism on the processes you design from other people.
If your budget and schedule won’t allow you to do everything you had in mind, you need to think outside the box and come up with new ways of obtaining usable test results. While this approach involves some tradeoffs, you should still be able to get a lot of actionable information from your test subjects.
This post originally appeared in the Toptal Engineering blog

UX Testing For The Masses: Keep It Simple And Cost Effective

Posted by Anonymous
In his recent article on Toptal’s blog, skilled data scientist Charles Cook wrote about scientific computing with open source tools. His tutorial makes an important point about open source tools and the role they can play in easily processing data and acquiring results.
But as soon as we’ve solved all these complex differential equations, another problem comes up. How do we understand and interpret the huge amounts of data coming out of these simulations? How do we visualize potential gigabytes of data, such as data with millions of grid points within a large simulation?
A data visualization training for data scientists interested in 3D data visualization tools.
During my work on similar problems for my Master’s Thesis, I came into contact with the Visualization Toolkit, or VTK - a powerful graphics library specialized for data visualization.
In this tutorial I will give a quick introduction to VTK and its pipeline architecture, and go on to discuss a real-life 3D visualization example using data from a simulated fluid in an impeller pump. Finally I’ll list the strong points of the library, as well as the weak spots I encountered.

Data Visualization and The VTK Pipeline

The open source library VTK contains a solid processing and rendering pipeline with many sophisticated visualization algorithms. Its capabilities, however, don’t stop there, as over time image and mesh processing algorithms have been added as well. In my current project with a dental research company, I’m utilizing VTK for mesh-based processing tasks within a Qt-based, CAD-like application. The VTK case studies show the wide range of suitable applications.
The architecture of VTK revolves around a powerful pipeline concept. The basic outline of this concept is shown here:
This is what the VTK data visualization pipeline looks like.
  • Sources are at the very beginning of the pipeline and create “something out of nothing”. For example, a vtkConeSource creates a 3D cone, and a vtkSTLReader reads *.stl 3D geometry files.
  • Filters transform the output of either sources or other filters into something new. For example, a vtkCutter cuts the output of the previous algorithm using an implicit function, e.g., a plane. All the processing algorithms that come with VTK are implemented as filters and can be freely chained together.
  • Mappers transform data into graphics primitives. For example, they can be used to specify a look-up table for coloring scientific data. They are an abstract way to specify what to display.
  • Actors represent an object (geometry plus display properties) within the scene. Things like color, opacity, shading, or orientation are specified here.
  • Renderers & Windows finally describe the rendering onto the screen in a platform-independent way.
A typical VTK rendering pipeline starts with one or more sources, processes them using various filters into several output objects, which are then rendered separately using mappers and actors. The power behind this concept is the update mechanism. If settings of filters or sources are changed, all dependent filters, mappers, actors and render windows are automatically updated. If, on the other hand, an object further down the pipeline needs information in order to perform its tasks, it can easily obtain it.
In addition, there is no need to deal with rendering systems like OpenGL directly. VTK encapsulates all the low level task in a platform- and (partially) rendering system-independent way; the developer works on a much higher level.

Code Example with a Rotor Pump Dataset

Let’s look at a data visualization example using this dataset of fluid flow in a rotating impeller pump from the IEEE Visualization Contest 2011. The data itself is the result of a computational fluid dynamics simulation, much like the one described in Charles Cook’s article.
The zipped simulation data of the featured pump is over 30 GB in size. It contains multiple parts and multiple time steps, hence the large size. In this guide, we’ll play around with the rotor part of one of these timesteps, which has a compressed size of about 150 MB.
My language of choice for using VTK is C++, but there are mappings for several other languages like Tcl/Tk, Java, and Python. If the target is just the visualization of a single data-set, one doesn’t need to write code at all and can instead utilize Paraview, a graphical front-end for most of VTK’s functionality.

The Dataset and Why 64-bit is Necessary

I extracted the rotor dataset from the 30 GB dataset provided above, by opening one timestep in Paraview and extracting the rotor part into a separate file. It is an unstructured grid file, i.e., a 3D volume consisting of points and 3D cells, like hexahedra, tetrahedra, and so on. Each of the 3D points has associated values. Sometimes the cells have associated values as well, but not in this case. This training will concentrate on pressure and velocity at the points and try to visualize these in their 3D context.
The compressed file size is about 150 MB and the in-memory size is about 280 MB when loaded with VTK. However, by processing it in VTK, the dataset is cached multiple times within the VTK pipeline and we quickly reach the 2 GB memory limit for 32-bit programs. There are ways to save memory when using VTK, but to keep it simple we’ll just compile and run the example in 64-bit.
Acknowledgements: The dataset is made available courtesy of the Institute of Applied Mechanics, Clausthal University, Germany (Dipl. Wirtsch.-Ing. Andreas Lucius).

The Target

What we will achieve using VTK as a tool is the visualization shown in the image below. As a 3D context, the outline of the dataset is shown using a partially transparent wireframe rendering. The left part of the dataset is then used to display the pressure using simple color coding of the surfaces. (We’ll skip the more complex volume rendering for this example.) In order to visualize the velocity field, the right part of the dataset is filled with streamlines, which are color-coded by the magnitude of their velocity. This visualization choice is technically not ideal, but I wanted to keep the VTK code as simple as possible. Besides, there is a reason this dataset was part of a visualization challenge: there is a lot of turbulence in the flow.
This is the resulting 3D data visualization from our example VTK tutorial.

Step by Step

I will discuss the VTK code step by step, showing how the rendering output looks at each stage. The full source code can be downloaded at the end of the tutorial.
Let’s start by including everything we need from VTK and opening the main function.
#include <vtkActor.h>
#include <vtkArrayCalculator.h>
#include <vtkCamera.h>
#include <vtkClipDataSet.h>
#include <vtkCutter.h>
#include <vtkDataSetMapper.h>
#include <vtkInteractorStyleTrackballCamera.h>
#include <vtkLookupTable.h>
#include <vtkNew.h>
#include <vtkPlane.h>
#include <vtkPointData.h>
#include <vtkPointSource.h>
#include <vtkPolyDataMapper.h>
#include <vtkProperty.h>
#include <vtkRenderer.h>
#include <vtkRenderWindow.h>
#include <vtkRenderWindowInteractor.h>
#include <vtkRibbonFilter.h>
#include <vtkStreamTracer.h>
#include <vtkSmartPointer.h>
#include <vtkUnstructuredGrid.h>
#include <vtkXMLUnstructuredGridReader.h>

int main(int argc, char** argv)
Next, we set up the renderer and the render window in order to display our results. We set the background color and the render window size, and attach the renderer to the window.

  // Setup the renderer
  vtkNew<vtkRenderer> renderer;
  renderer->SetBackground(0.9, 0.9, 0.9);

  // Setup the render window and attach the renderer to it
  vtkNew<vtkRenderWindow> renWin;
  renWin->AddRenderer(renderer.Get());
  renWin->SetSize(500, 500);
With this code we could already display a static render window. Instead, we opt to add a vtkRenderWindowInteractor in order to interactively rotate, zoom and pan the scene.
  // Setup the render window interactor and connect it to the window
  vtkNew<vtkRenderWindowInteractor> interact;
  vtkNew<vtkInteractorStyleTrackballCamera> style;
  interact->SetRenderWindow(renWin.Get());
  interact->SetInteractorStyle(style.Get());
Now we have a running example showing a gray, empty render window.
Next, we load the dataset using one of the many readers that come with VTK.
  // Read the file (the path below is an example -- point it at your extracted rotor file)
  vtkSmartPointer<vtkXMLUnstructuredGridReader> pumpReader = vtkSmartPointer<vtkXMLUnstructuredGridReader>::New();
  pumpReader->SetFileName("rotor.vtu");
Short excursion into VTK memory management: VTK uses a convenient automatic memory management concept revolving around reference counting. Unlike most other implementations, however, the reference count is kept within the VTK objects themselves rather than in the smart pointer class. This has the advantage that the reference count can be increased even if the VTK object is passed around as a raw pointer. There are two major ways to create managed VTK objects: vtkNew<T> and vtkSmartPointer<T>::New(), with the main difference being that a vtkSmartPointer<T> is implicitly castable to the raw pointer T* and can be returned from a function. For instances of vtkNew<T> we have to call .Get() to obtain a raw pointer, and we can only return one by wrapping it into a vtkSmartPointer. Within our example, we never return from functions and all objects live the whole time, so we’ll use the shorter vtkNew, with the single exception above for demonstration purposes.
At this point, nothing has been read from the file yet. We or a filter further down the chain would have to call Update() for the file reading to actually happen. It is usually the best approach to let the VTK classes handle the updates themselves. However, sometimes we want to access the result of a filter directly, for example to get the range of pressures in this dataset. Then we need to call Update() manually. (We don’t lose performance by calling Update() multiple times, as the results are cached.)
  // Get the pressure range
  pumpReader->Update();  // force the read so we can inspect the data
  double pressureRange[2];
  pumpReader->GetOutput()->GetPointData()->GetScalars()->GetRange(pressureRange);
Next, we need to extract the left half of the dataset using vtkClipDataSet. To achieve this, we first create a vtkPlane that defines the split. Then we'll see for the first time how the VTK pipeline is connected together: successor->SetInputConnection(predecessor->GetOutputPort()). Whenever we request an update from clipperLeft, this connection ensures that all preceding filters are also brought up to date.
  // Clip the left part from the input
  vtkNew<vtkPlane> planeLeft;
  planeLeft->SetOrigin(0.0, 0.0, 0.0);
  planeLeft->SetNormal(-1.0, 0.0, 0.0);

  vtkNew<vtkClipDataSet> clipperLeft;
  clipperLeft->SetInputConnection(pumpReader->GetOutputPort());
  clipperLeft->SetClipFunction(planeLeft.Get());
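The demand-driven behavior behind these connections can be sketched with a toy pipeline. None of the class names below are VTK's; they only illustrate how an Update() request propagates upstream and why repeated calls cost nothing:

```cpp
#include <cassert>
#include <vector>

// Toy demand-driven pipeline: each filter pulls its predecessor up to
// date and caches its own result (illustrative, not VTK API).
class ToyFilter {
public:
    void SetInputConnection(ToyFilter* upstream) { upstream_ = upstream; }

    // Update() first updates the predecessor, then recomputes only if needed.
    void Update() {
        if (upstream_) upstream_->Update();
        if (!upToDate_) {
            Execute();
            ++executeCount_;
            upToDate_ = true;
        }
    }
    void Modified() { upToDate_ = false; }        // invalidate the cache
    int ExecuteCount() const { return executeCount_; }
    std::vector<double> output;

protected:
    virtual void Execute() {}
    ToyFilter* upstream_ = nullptr;

private:
    bool upToDate_ = false;
    int executeCount_ = 0;
};

struct ToySource : ToyFilter {
    void Execute() override { output = {1.0, 2.0, 3.0}; }
};

struct ToyDoubler : ToyFilter {
    void Execute() override {
        output.clear();
        for (double v : upstream_->output) output.push_back(2.0 * v);
    }
};
```

Requesting an update from the last filter walks the whole chain once; a second request finds every stage cached, which is exactly why calling Update() repeatedly in the VTK example is harmless.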
Finally, we create our first actors and mappers to display the wireframe rendering of the left half. Notice that the mapper is connected to its filter in exactly the same way as the filters are connected to each other. Most of the time, the renderer itself triggers the updates of all actors, mappers and the underlying filter chains!
The only line that is probably not self-explanatory is leftWireMapper->ScalarVisibilityOff(); it disables coloring of the wireframe by the pressure values, which are set as the currently active array.
  // Create the wireframe representation for the left part
  vtkNew<vtkDataSetMapper> leftWireMapper;
  leftWireMapper->SetInputConnection(clipperLeft->GetOutputPort());
  leftWireMapper->ScalarVisibilityOff();

  vtkNew<vtkActor> leftWireActor;
  leftWireActor->SetMapper(leftWireMapper.Get());
  leftWireActor->GetProperty()->SetRepresentationToWireframe();
  leftWireActor->GetProperty()->SetColor(0.8, 0.8, 0.8);
  renderer->AddActor(leftWireActor.Get());
At this point, the render window is finally showing something, i.e., the wireframe for the left part.
The wireframe rendering for the right part is created in a similar way, by switching the plane normal of a (newly created) vtkClipDataSet to the opposite direction and slightly changing the color and opacity of the (newly created) mapper and actor. Notice that here our VTK pipeline splits into two directions (right and left) from the same input dataset.
  // Clip the right part from the input
  vtkNew<vtkPlane> planeRight;
  planeRight->SetOrigin(0.0, 0.0, 0.0);
  planeRight->SetNormal(1.0, 0.0, 0.0);

  vtkNew<vtkClipDataSet> clipperRight;
  clipperRight->SetInputConnection(pumpReader->GetOutputPort());
  clipperRight->SetClipFunction(planeRight.Get());

  // Create the wireframe representation for the right part
  vtkNew<vtkDataSetMapper> rightWireMapper;
  rightWireMapper->SetInputConnection(clipperRight->GetOutputPort());
  rightWireMapper->ScalarVisibilityOff();

  vtkNew<vtkActor> rightWireActor;
  rightWireActor->SetMapper(rightWireMapper.Get());
  rightWireActor->GetProperty()->SetRepresentationToWireframe();
  rightWireActor->GetProperty()->SetColor(0.2, 0.2, 0.2);
  rightWireActor->GetProperty()->SetOpacity(0.5);
  renderer->AddActor(rightWireActor.Get());
The output window now shows both wireframe parts, as expected.
Now we are ready to visualize some useful data! To add the pressure visualization to the left part, we don't need to do much. We create a new mapper and connect it to clipperLeft as well, but this time we color by the pressure array. It is also here that we finally use the pressureRange we derived above.
  // Create the pressure representation for the left part
  vtkNew<vtkDataSetMapper> pressureColorMapper;
  pressureColorMapper->SetInputConnection(clipperLeft->GetOutputPort());
  pressureColorMapper->SetScalarRange(pressureRange);  // map the colors over the full pressure range

  vtkNew<vtkActor> pressureColorActor;
  pressureColorActor->SetMapper(pressureColorMapper.Get());
  renderer->AddActor(pressureColorActor.Get());
The output now looks like the image shown below. The pressure at the middle is very low, sucking material into the pump. Then, this material is transported to the outside, rapidly gaining pressure. (Of course there should be a color map legend with the actual values, but I left it out to keep the example shorter.)
When color is added into the data visualization example, we really begin to see the way the pump works.
Now the trickier part starts. We want to draw velocity streamlines in the right part. Streamlines are generated by integration within a vector field, starting from source points. The vector field is already part of the dataset in the form of the “Velocities” vector array, so we only need to generate the source points. vtkPointSource generates a sphere of random points. We'll generate 1500 source points, because most of them won't lie within the dataset anyway and will be ignored by the stream tracer.
  // Create the source points for the streamlines
  vtkNew<vtkPointSource> pointSource;
  pointSource->SetCenter(0.0, 0.0, 0.015);
  pointSource->SetRadius(0.012);  // sphere radius: an illustrative value
  pointSource->SetNumberOfPoints(1500);
Next we create the stream tracer and set its input connections. “Wait, multiple connections?” you might say. Yes: this is the first VTK filter we encounter with multiple inputs. The normal input connection is used for the vector field, and the source connection is used for the seed points. Since “Velocities” is the active vector array in clipperRight, we don't need to specify it explicitly here. Finally, we specify that the integration should be performed in both directions from the seed points, and set the integration method to Runge-Kutta 4(5).
  vtkNew<vtkStreamTracer> tracer;
  tracer->SetInputConnection(clipperRight->GetOutputPort());
  tracer->SetSourceConnection(pointSource->GetOutputPort());
  tracer->SetIntegrationDirectionToBoth();
  tracer->SetIntegratorTypeToRungeKutta45();
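The classical fourth-order Runge-Kutta scheme, which the tracer's RK4(5) integrator refines with adaptive step control, can be sketched for a simple 2D vector field. Everything below is an illustration independent of VTK:

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { double x, y; };

// One classical RK4 step of dp/dt = f(p).
template <typename Field>
Vec2 rk4Step(const Field& f, Vec2 p, double h) {
    auto at = [](Vec2 a, Vec2 b, double s) { return Vec2{a.x + s * b.x, a.y + s * b.y}; };
    Vec2 k1 = f(p);                 // slope at the start
    Vec2 k2 = f(at(p, k1, h / 2));  // slope at the midpoint, using k1
    Vec2 k3 = f(at(p, k2, h / 2));  // slope at the midpoint, using k2
    Vec2 k4 = f(at(p, k3, h));      // slope at the end
    return Vec2{p.x + h / 6 * (k1.x + 2 * k2.x + 2 * k3.x + k4.x),
                p.y + h / 6 * (k1.y + 2 * k2.y + 2 * k3.y + k4.y)};
}

// Tracing a streamline = repeatedly stepping a seed point through the field.
template <typename Field>
Vec2 traceStreamline(const Field& f, Vec2 seed, double h, int steps) {
    for (int i = 0; i < steps; ++i) seed = rk4Step(f, seed, h);
    return seed;
}
```

This is what the stream tracer does per seed point, except that it works in 3D, interpolates the field from the mesh, and adapts the step size h automatically.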
Our next problem is coloring the streamlines by velocity magnitude. Since there is no array for the magnitudes of the vectors, we will simply compute the magnitudes into a new scalar array. As you may have guessed, there is a VTK filter for this task as well: vtkArrayCalculator. It takes a dataset and outputs it unchanged, but adds exactly one array that is computed from one or more of the existing ones. We configure this array calculator to take the magnitude of the “Velocities” vectors and output it as “MagVelocity”. Finally, we call Update() manually again in order to derive the range of the new array.
  // Compute the velocity magnitudes and create the ribbons
  vtkNew<vtkArrayCalculator> magCalc;
  magCalc->SetInputConnection(tracer->GetOutputPort());
  magCalc->AddVectorArrayName("Velocities");
  magCalc->SetResultArrayName("MagVelocity");
  magCalc->SetFunction("mag(Velocities)");
  magCalc->Update();  // needed to derive the range of the new array

  double magVelocityRange[2];
  magCalc->GetDataSetOutput()->GetPointData()->GetArray("MagVelocity")->GetRange(magVelocityRange);
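What the calculator computes for “MagVelocity” amounts to a per-point magnitude over a vector array plus a range query. A plain C++ sketch of that computation, independent of VTK:

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <vector>

using Vec3 = std::array<double, 3>;

// Derive a scalar magnitude array from a vector array,
// mirroring what mag(Velocities) produces in the example.
std::vector<double> magnitudes(const std::vector<Vec3>& vectors) {
    std::vector<double> result;
    result.reserve(vectors.size());
    for (const Vec3& v : vectors)
        result.push_back(std::sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]));
    return result;
}

// The [min, max] range of a non-empty scalar array, as GetRange() reports it.
std::array<double, 2> range(const std::vector<double>& values) {
    std::array<double, 2> r = {values.front(), values.front()};
    for (double v : values) {
        if (v < r[0]) r[0] = v;
        if (v > r[1]) r[1] = v;
    }
    return r;
}
```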
vtkStreamTracer directly outputs polylines and vtkArrayCalculator passes them on unchanged. Therefore we could just display the output of magCalc directly using a new mapper and actor.
Instead, we opt to make the output a little nicer by displaying ribbons: vtkRibbonFilter generates 2D cells to display ribbons for all polylines of its input.
  // Create and render the ribbons
  vtkNew<vtkRibbonFilter> ribbonFilter;
  ribbonFilter->SetInputConnection(magCalc->GetOutputPort());
  ribbonFilter->SetWidth(0.0005);  // an illustrative ribbon width

  vtkNew<vtkPolyDataMapper> streamlineMapper;
  streamlineMapper->SetInputConnection(ribbonFilter->GetOutputPort());
  streamlineMapper->SetScalarRange(magVelocityRange);

  vtkNew<vtkActor> streamlineActor;
  streamlineActor->SetMapper(streamlineMapper.Get());
  renderer->AddActor(streamlineActor.Get());
Still missing, and actually needed to produce the intermediate renderings as well, are the final few lines that render the scene and initialize the interactor.
  // Render and show interactive window
  interact->Initialize();
  renderWin->Render();
  interact->Start();

  return 0;
Finally, we arrive at the finished visualization, presented once again below.
The full source code for the above visualization can be found here.

The Good, the Bad, and the Ugly

I will close this article with a list of my personal pros and cons of the VTK framework.
  • Pro (Active Development): VTK is under active development by several contributors, mainly from within the research community. This means that some cutting-edge algorithms are available, many 3D formats can be imported and exported, bugs are actively fixed, and problems usually have a ready-made solution on the discussion boards.
  • Con (Reliability): Coupling many algorithms from different contributors with the open pipeline design of VTK can, however, lead to problems with unusual filter combinations. I have had to go into the VTK source code a few times to figure out why a complex filter chain was not producing the desired results. I would strongly recommend setting up VTK in a way that permits debugging.
  • Pro (Software Architecture): The pipeline design and general architecture of VTK seem well thought out and are a pleasure to work with. A few lines of code can produce amazing results. The built-in data structures are easy to understand and use.
  • Con (Micro Architecture): Some micro-architectural design decisions escape my understanding. Const-correctness is almost non-existent, and arrays are passed around as inputs and outputs with no clear distinction. I alleviated this for my own algorithms by giving up some performance and using my own wrapper for vtkMath that utilizes custom 3D types like typedef std::array<double, 3> Pnt3d;.
  • Pro (Micro Documentation): The Doxygen documentation of all classes and filters is extensive and usable, and the examples and test cases on the wiki are a great help for understanding how filters are used.
  • Con (Macro Documentation): There are several good tutorials for and introductions to VTK on the web. However, as far as I know, there is no big reference documentation that explains how specific things are done. If you want to do something new, expect to spend some time searching for how to do it. It is also hard to find the specific filter for a task; once you have found it, however, the Doxygen documentation will usually suffice. A good way to explore the VTK framework is to download and experiment with ParaView.
  • Pro (Implicit Parallelization Support): If your sources can be split into several parts that can be processed independently, parallelization is as simple as creating a separate filter chain within each thread that processes a single part. Most large visualization problems fall into this category.
  • Con (No Explicit Parallelization Support): If you are not blessed with large, dividable problems but still want to utilize multiple cores, you are on your own. You will have to figure out which classes are thread-safe, or even re-entrant, by trial and error or by reading the source. I once tracked down a parallelization problem to a VTK filter that used a static global variable in order to call some C library.
  • Pro (Build System, CMake): The multi-platform meta build system CMake is also developed by Kitware (the makers of VTK) and used in many projects outside of Kitware. It integrates very nicely with VTK and makes setting up a build for multiple platforms much less painful.
  • Pro (Platform Independence, License, and Longevity): VTK is platform independent out of the box and licensed under a very permissive BSD-style license. In addition, professional support is available for those important projects that require it. Kitware is backed by many research entities and other companies and will be around for some time.
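The implicit-parallelization pattern from the list above, one independent filter chain per thread and per part, can be sketched without VTK. The function name and the stand-in "filter chain" (a simple sum) are illustrative only:

```cpp
#include <cassert>
#include <numeric>
#include <thread>
#include <vector>

// Each thread runs its own "filter chain" on its own part; nothing is
// shared between parts, so no locking is needed (illustrative, not VTK code).
std::vector<double> processParts(const std::vector<std::vector<double>>& parts) {
    std::vector<double> results(parts.size(), 0.0);
    std::vector<std::thread> workers;
    for (std::size_t i = 0; i < parts.size(); ++i) {
        workers.emplace_back([i, &parts, &results] {
            // stand-in for a per-part filter chain
            results[i] = std::accumulate(parts[i].begin(), parts[i].end(), 0.0);
        });
    }
    for (std::thread& w : workers) w.join();
    return results;
}
```

Each slot of `results` is written by exactly one thread, which is the whole trick: partition the data so the chains never touch each other's state.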

Last Word

Overall, VTK is the best data visualization tool for the kinds of problems I love. If you ever come across a project that requires visualization, mesh processing, image processing or similar tasks, try firing up Paraview with an input example and evaluate if VTK could be the tool for you.
This article was written by Benjamin Hopfer, a Toptal SQL developer.

3D Data Visualization with Open Source Tools
