Hybrid Rugosity Mesostructures (HRMs) for fast and accurate rendering of fine haptic detail

Abstract. The haptic rendering of surface mesostructure (fine relief features) in dense triangle meshes requires special structures, equipment, and high sampling rates for detailed perception of rugged models. Low-cost approaches render haptic texture at the expense of fidelity of perception. We propose a faster method for surface haptic rendering using image-based Hybrid Rugosity Mesostructures (HRMs): paired maps of per-face heightfield displacements and normals, layered on top of a much-decimated mesh, effectively adding greater surface detail than is actually present in the geometry. The haptic probe's force response algorithm is modulated using the blended HRM coat to render dense surface features at much lower cost. The proposed method solves typical problems at edge crossings, concave foldings, and texture transitions. To validate the approach, a usability testbed framework was built to measure and compare experimental results of haptic rendering approaches over a common set of specially devised meshes, HRMs, and performance tests. Results of user testing evaluations confirm the effectiveness of the proposed HRM technique, which renders accurate 3D surface detail at high sampling rates, and yield useful modeling and perception thresholds for the technique.


Introduction
Haptic perception is the ability to touch and sense variations in geometry, roughness, texture and other volume or surface detail in computer-generated objects, necessary in fields such as nanomaterials manipulation, surgical training, virtual prototyping, machine assembly and digital sculpting. Haptic rendering of surface relief in dense models requires special data structures and force-feedback device probes with high sampling rates for accurate model perception. Herein we describe a hybrid image-based haptic rendering approach for fast and accurate perception of surface features ranging from fine creases to major topographic features, providing a measure of performance against the existing geometric-based haptic rendering algorithms.
Our main contributions are: (i) A specific model and algorithm for rendering image-based mesostructure surface details, mapping dual displacement and normal maps onto underlying simplified geometries (Algorithm 2); (ii) A blending function for smoothing height/normal computation at folding edges (section 4.1) and mesostructure transitions (section 4.2); and (iii) A battery of usability benchmark tests over a chosen set of meshes and mesostructures, allowing qualitative measures of feature perception at varying resolutions (section 5).
Using the previous experimental testing protocol, we achieve accurate haptic perception of fine surface detail without compromising rendering rates or fidelity of touch, with the stated objective of rendering complex detail not present in the original geometry at very low processing costs.

Related work
The term haptic rendering, as defined by Zilles and Salisbury [1], applies to the real-time computation and generation of force responses to user interactions with virtual objects, detecting collisions with a force-feedback device placed in a 3D environment.
A high priority event loop checks for contacts against the geometry, after which it generates at the device new forces and torques of varying direction and magnitude [2].
A first effort to measure haptic discrimination of 2D textures was the Sandpaper System by Minsky and Lederman [3]. Users manipulated a force-feedback joystick to traverse several simple textures and report qualitative roughness differences. Siira and Pai [4] incorporated a stochastic model of physically correct surface properties to produce the appropriate textural feel, including friction and lateral forces. Costa and Cutkosky [5] generated fractal rugosity procedurally on flat surfaces and measured perception thresholds.
When the device is used as an aid for navigating a space, its short-range reach requires space-exploration strategies, such as a moving bubble for navigation [6], a workspace drift control allowing perception discrepancies between visual and haptic space [7], or a force field constraining movement [8]. Collaboration across networks allows simulation of real-time activities such as stretcher carrying [9], but it brings its own set of latency and simultaneity problems that may cause oscillations in the interaction.
With device sampling rates standardizing around 1000 Hz, as in the Phantom or HAPTICMaster devices [10], efficient haptic-interaction techniques may go beyond the simple detection of geometric primitives, toward real-time rendering of arbitrary surfaces of irregular detail, conveying spatial and material properties. All this without forgetting the device's other role as a user-interaction instrument for high-level event acquisition, recognition of tactile "icons", and general haptic user interfaces, or HUIs [11].
Using a third object as an extended probe allows real-time texture differentiation and shape perception [12]. Friction between objects is detected by rubbing simulated known materials against each other [13], estimating the expected friction force with common physical models.
These efforts choose among several alternatives for modeling and rendering surfaces. Gregory et al.'s H-Collide [14] uses a hybrid hierarchical representation of uniform grids and trees of tight-fitting oriented bounding boxes, whereas Johnson [15] uses a purely geometric rendering approach for arbitrary polygons using neighborhood proximities. Morgenbesser and Srinivasan [16] proposed the method of force shading, akin to Phong shading and bump mapping: the force response vector is interpolated from nearby vertices, but it cannot elicit accurate geometric up-down perception. A global procedure for mapping a grayscale image as a displacement map for point-based haptic rendering is given by Ho et al. [17]; it works only for convex objects of genus 0, without any assessment of sensation fidelity.
Inadequate modeling or suboptimal rendering produce instabilities in the force response, as shown in the work of Choi and Tan [18]. Collisions are detected against a coarse geometry and then against a second micro-geometry layer. Incorrect renderings when traversing concave foldings are identified but not addressed. A similar approach for geometric sculpting with 2D textures and a haptic stylus is used by Kim et al [19]. Potter et al [20] provide a simple model to perceive haptic variation in large heightfield terrains, detecting collisions against the terrain's dataset patches.
An extended survey of current haptic rendering techniques can be found in Laycock and Day [21]. From the latter review it follows that haptic rendering approaches have relied either on straightforward collisions against the mesh's triangles or a NURBS parameterization of the same. There is no formal treatment regarding the use of heightfield displacements for haptic rendering, and there is no unified testing framework for measuring quantitative and qualitative differences among rendering approaches, or usability testbeds for standard models and surfaces.
In the following sections we develop a new method for haptic surface perception, overlaying simple triangle meshes with image-based composite mesostructures built out of heightfield displacement textures and normal maps. A devised usability testing trial set allows making fair comparisons of rendering techniques using the same mesh models and surface details, measuring quality and performance of the proposed solution for the perception of fine surface detail.

Models for haptic perception of surface details
Fast algorithms for visualizing surface details of complex models using color textures and bump-mapping are well known in the literature [22]. In the case of haptic rendering, any algorithm should be efficient enough to achieve the high frequency updates (1000 Hz) required by the human sense of touch. Our objective was finding a haptic rendering algorithm for surface detail perception in triangle meshes, and compare it to other known solutions. We summarize a taxonomy for haptic detail rendering, which determines the particular algorithm to be used.
-Geometric Detail, rendering the surface as detailed polygonal meshes or NURBS (Figure 1c), and detecting collisions against the surfaces.
-Surface Relief Detail, where a haptic texture is sampled in lieu of the actual surface. Haptic textures may be based on normal force maps ( Figure 1d) or heightfields (Figure 1e).
Figure 1: Ovals — (c) geometry, (d) force shading, (e) heightfield displacement.
The force shading algorithm uses a surface normal vector field to calculate the force direction and magnitude applied to the haptic device when it collides with a triangle (Figure 1d). By pairing this haptic rendering algorithm with its corresponding bump-mapping visualization, one can achieve a correct perception of surface roughness for small height differences. Collisions are always detected against the mesh triangle, however, so upward/downward perception from the mesh surface is not possible.
We offer below a brief summary of an approach by Theoktisto et al [23] to render the surface detail of an underlying triangle mesh, which builds a constraint-based force response of local heightfield displacements by modulating 6 DoF spring/damper objects. The method, shown here as Algorithm 1, compares favorably against a force shading implementation for perceiving the same models, using equivalent normal force maps for texture perception.
A bounded prism or wedge is created for each mesh triangle Tk = (Vk,0, Vk,1, Vk,2), with displacements up and down a distance mh along each vertex normal, enclosing all possible heightfield values (see Figure 2). The 8 triangles thus created (2 for each of the 3 walls, plus the top and bottom lids) share the same tagging label as the base triangle, so identification of the relevant face is immediate after hitting any side of the prism.
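A minimal sketch of this prism construction (the function name, vertex layout, and use of plain tuples are our own; the paper does not prescribe an implementation):

```python
def extrude_prism(verts, normals, mh):
    """Build the bounding prism for one mesh triangle: each vertex is
    displaced +/- mh along its vertex normal, enclosing every possible
    heightfield value. Returns the 8 prism triangles (2 per wall plus
    the top and bottom lids); the caller tags them all with the base
    triangle's label so the relevant face is identified on any hit."""
    top = [tuple(v[i] + mh * n[i] for i in range(3))
           for v, n in zip(verts, normals)]
    bot = [tuple(v[i] - mh * n[i] for i in range(3))
           for v, n in zip(verts, normals)]
    tris = [tuple(top), tuple(bot)]          # top and bottom lids
    for a in range(3):                       # three side walls, 2 tris each
        b = (a + 1) % 3
        tris.append((bot[a], bot[b], top[b]))
        tris.append((bot[a], top[b], top[a]))
    return tris
```
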
As shown in Algorithm 1, a fast 3D search procedure looks for haptic collisions against all of the prisms' faces. Once the relevant prism is identified (at very low computational cost), the collision is checked against the prism's base triangle, triggering the haptic rendering of the corresponding heightfield displacement map. The probe's position is orthogonally projected onto the closest surface point of that triangle, with ample time to sample the appropriate heightfield altitude, and determine whether there is an actual penetration (the haptic probe is below that height at that point).
If at any time the probe's elevation from the triangle descends below the computed height at that point, a repelling force proportional to the penetration difference is applied to the haptic probe along the face normal at the projected point. This repulsion forces the god-object (the constrained proxy of the haptic probe) to move toward the surface, at which point the force ceases (see Figure 3). The haptic probe and the god-object are kept in sync by the device's constraint system.
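The force-response step can be sketched as a one-dimensional spring along the face normal (a hedged illustration; the function name and the stiffness constant k are our assumptions, not the paper's values):

```python
def penetration_force(elev, h, normal, k=500.0):
    """Constraint-based force response: if the probe's elevation above
    the base triangle (elev) drops below the sampled heightfield value
    h at the projected point, push the probe back along the unit face
    normal with a spring force proportional to the penetration depth;
    otherwise no force is applied and the god-object moves freely."""
    pen = h - elev                  # positive => probe is below the relief
    if pen <= 0.0:
        return (0.0, 0.0, 0.0)      # no collision
    return tuple(k * pen * c for c in normal)
```
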
Heightfield displacements are clamped at the edges to ensure C0-continuity. This avoids sudden height jumps at the triangles' edges. In the case of convex folds, simple normal interpolation avoids instabilities when cruising near the edges, but this is inadequate when sizable height differences exist across face boundaries, and totally wrong for holes and concave folds. The relevant fragment of Algorithm 1 reads:

  {Haptic probe PH is inside Tj's prism}
  {so we must check against the haptic texture}
  Project PH against Tj, obtaining surface point P;
  Compute 2D texture coords (s,t) of P over Tj;
  {Sample the heightfield displacement}
  if (penetration > 0) then
    {Positive penetration, a real collision}
    Calculate repulsing force for the device;

We now elaborate on a method that proposes a global solution to the aforementioned problems. Instead of applying the force in the normal direction of the base triangle Tk, a more accurate rendering approach applies the repulsing force in the exact direction of the normal at the specific impacted surface point. Normals are generated directly from the heightfield displacement texture, and both are stored as textures, creating what we call a Hybrid Rugosity Mesostructure, or HRM. Taking into account the traversal direction when touching a surface, the haptic point is pushed in the direction of the normal, and a constraint system combines this repulsion with the force exerted by the user at the probe, producing a change of position and orientation without incurring lagged responses. This allows varying surface sensation by "coating" a mesh with several surface reliefs at different frequencies. This image-based general procedure, shown in Algorithm 2, uses the previously defined prisms with an added twist. The HRM tuples of normal maps and heightfield displacements (N(s,t), H(s,t)) correspond to an RGBα texture, with coordinates (r, g, b, α) = (Nx, Ny, Nz, Hw).
The heightfield-normal tuples may be provided as static or procedural 2D, 3D, or 4D (3 + time) textures, allowing for even greater complexity of haptic perception and correspondence with visual renderers. Friction, viscosity, magnetism, color, and other surface properties may easily be added as additional entries in the HRM, requiring only a corresponding modification of the force response. Haptic resolution is scaled in sync with the current visual zoom state, so features not measurable at lower zoom levels ("blurred") become noticeable at close range, and surface perception becomes more accurate.
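As a concrete sketch, the RGBα packing above can be read back with a nearest-texel lookup (a minimal illustration; the function name, the grid-of-tuples representation, and the nearest-neighbor filtering are our assumptions, not the paper's implementation):

```python
def sample_hrm(texture, s, t):
    """Nearest-texel lookup into an HRM: a 2D grid of RGBA tuples where
    (r, g, b) holds the surface normal (Nx, Ny, Nz) and the alpha
    channel holds the heightfield displacement Hw. The coordinates
    (s, t) are texture coordinates in [0, 1]. Returns (normal, height)."""
    rows = len(texture)
    cols = len(texture[0])
    i = min(int(t * rows), rows - 1)   # clamp so t = 1.0 stays in range
    j = min(int(s * cols), cols - 1)
    r, g, b, a = texture[i][j]
    return (r, g, b), a
```
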

Blending haptic mesostructure at the edges
Algorithm 2 also computes soft transitions at triangle edges bearing different mesostructures, using a simple interpolation scheme. For each face in the mesh, we keep track of neighborhood information for all adjoining face indices. When following along the surface of the mesh, the mesostructures in neighboring faces may produce an abrupt topographic change at the edge which, if left to stand, produces a sudden force kick (in magnitude and orientation) in the haptic device. To eliminate these abrupt jumps, we devised the following stitching procedure to blend the transition among convex faces.
Heightfield and normals close to the edges are sampled from the rugosity mesostructure using a multi-texturing approach. In Figure 4 we see a schematic of this heightfield stitching. A parametric band of size ρ extends to both sides of each edge; in this area we use an alpha-blending function. We extend each parametric distance of the triangle's barycentric coordinates by this quantity ρ, say 0.05 (or 5%), over each HRM. Each blending map of Figure 5 is then used to compute an averaged mesostructure that spans parametrically a distance ρ across each edge. The relevant fragment of Algorithm 2 reads:

  Project PH against T to obtain surface point P;
  Compute 2D texture coords (s,t) of P over T;
  Obtain barycentric coords α, β, and γ of P in T;
  if (α, β, or γ ≥ 1 − ρ) then
    {We are within ρ distance of an edge}
    AN ← 0; AD ← 0;
    for all adjoining triangles fi of T do
      Project PH against fi to obtain Pi;
      Evaluate weights ωi from P, Pi and ρ;

If the projected point of the haptic probe is inside the ρ band of triangle A (in Figure 4), at least one of the barycentric coordinates of P0 is less than ρ at some edge. This point is remapped into the opposing triangle B, obtaining its local set of barycentric coordinates P1. To blend both heights and normals, we solve for a value t between 0 and 1 out of P0, P1, and ρ for the chosen blending function, and obtain the corresponding weights ω(t). Several blending functions may be defined for different effects: if we desire no blending at all, a half-white/half-black map (Figure 5, No blend) produces the abrupt relief transition at the edges, generating jumps at edge crossings; a linear gradation from white to black (Figure 5, Linear) or a sloping S-shaped curve (Figure 5, S-Shape) offers more stable and pleasant results. We use the ω weights to compute an average height and normal direction.
In the case of point P 2 , some additional barycentric coordinate is also less than ρ, so this process is repeated for this adjacent edge. Point P 3 falls outside of the ρ bands, so it is sampled only once.
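The blending functions of Figure 5 can be sketched as follows (an illustrative reading: we model the S-shape as a smoothstep curve, which is an assumption, since the paper does not give its exact formula):

```python
def blend_weight(t, mode="s-shape"):
    """Blending functions for the rho band around an edge. t in [0, 1]
    is the normalized parametric distance across the band. 'none'
    reproduces the abrupt half-white/half-black transition, 'linear'
    a straight gradation, and 's-shape' a sloping smoothstep curve."""
    t = max(0.0, min(1.0, t))
    if mode == "none":
        return 0.0 if t < 0.5 else 1.0
    if mode == "linear":
        return t
    return t * t * (3.0 - 2.0 * t)   # s-shape (smoothstep)

def blend_height(h0, h1, t, mode="s-shape"):
    """Averaged height across the edge: the weight omega = w favors the
    opposing face's mesostructure as the probe crosses the band."""
    w = blend_weight(t, mode)
    return (1.0 - w) * h0 + w * h1
```

Normals are averaged with the same weights and renormalized; the S-shape keeps the force direction continuous at both band borders, which is why it feels more stable than the linear gradation.
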

Haptic rendering in concave faces
A problem mentioned before is a performance issue arising when rendering non-convex objects (see Figure 6a). When two or more triangles form a concave fold or depression (angle between faces less than 180°), the haptic probe could be inside two (or more) prisms at the same time.
Basically, the treatment of concave faces is the same as that applied to flat and convex faces: heights and normals are blended in a band of size 2ρ around the edges (half in one face, half in the other), so the effect is a repulsion away from the edge. Unfortunately, this may create a back-and-forth effect at the probe (see Figure 6a), sometimes leaving it stuck and unable to leave the surface or, in rare cases, generating a resonance situation of ever-increasing force magnitude that ends in device failure. Our solution is to transform the initial mesostructure texture so that height and cumulative normals are already mapped for those surface points that would collide with other faces, saving the inclusion and collision tests altogether. In Figure 6b(i) we see two adjoining triangles and their corresponding unblended mesostructures (blue and yellow), with a subsurface hole (and potential device trap) lying in the middle. In Figure 6b(ii) we see the result of the blended joint mesostructure, with the hole eliminated. The probe is pushed away in the combined, correct direction.
To avoid borderline cases, we add a small ε to the collided heights to avoid getting trapped in a narrow crevice or hole. The probe is detected inside the common region by a simple inclusion test, and the relevant heightfields (red) reflect the correct surface relief close to the edges (Figure 6b(iii)).
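One plausible reading of this pre-baking step is a max-merge over the shared band: keep the higher of the two faces' reliefs so the subsurface hole disappears, and lift the result by the small ε mentioned above (the max operator is our assumption of what "already mapped" means; the paper does not spell it out):

```python
def bake_concave_band(h_a, h_b, eps=1e-3):
    """Sketch of pre-baking the joint mesostructure across a concave
    edge: h_a and h_b are the two faces' heightfield samples taken at
    the same surface points inside the 2*rho band. Keeping the higher
    relief removes the subsurface hole between the two heightfields,
    and the small eps keeps the probe out of narrow crevices."""
    return [max(a, b) + eps for a, b in zip(h_a, h_b)]
```
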
Stitching different mesostructures may be applied at triangle boundaries if so desired. When the HRMs are precomputed from existing fine geometric detail, abrupt height and normal differences are greatly minimized across edges, thus greatly reducing haptic artifacts.

Testing Procedure
To measure quantitatively the participants' ability to perceive each surface's haptic properties, we devised the following testing protocol for selected meshes and textures, using both force shading and HRMs. Four separate experiments (Tests I, II, III, and IV; see Table 3) were performed with the participants, plus a prior baseline perception test (Test 0) as a control setup, so users would recognize what a featureless surface feels like.
Equipment: An FCS HAPTICMaster, able to exert forces from a delicate 0.01 N up to a heavy blow of 250 N, with a built-in 3D haptic perception wedge of 40 cm × 36 cm × 1 radian. The PC is a 2.4 GHz Pentium IV with an ATI Radeon X1600 graphics card.
Stimuli: We prepared a set of base meshes, shown on Table 1, each having some measurable perception property. To create the set of HRM coats shown on Table 2, we generated the appropriate heightfield displacement maps, and calculated their corresponding normal maps as explained in [24], (also used in force shading). In the latter manner we could dress any mesh with chosen HRM coats representing particular 3D mesostructures.
Participants: Each test involved 18 user trials (6 participants, 3 trials each) for each setup. Testers were isolated and unaware of what to expect. All of them had used the haptic device before, and were instructed to maintain constant force and speed throughout each experiment. Their perception impressions were recorded with the same live questionnaire.
Procedure: Each trial consisted of a different (Mesh, HRM, Test) triad being executed. The control experiment (Test 0) was established to rule out perception differences between the geometric and HRM-based renderings of the same meshes. The experiments (shown in Table 3) were performed and measured while modulating the maximum amplitude of heightfield displacements at 1%, 5%, 10%, 15%, and 20% of the average edge length of each mesh. This proved a better predictor than average triangle area, since it works well even with near-degenerate triangles.
Table 1 (fragment): … from an initial Icosahedron; Me — open concave mesh in the shape of a "cup"; Mf — open mesh with a regular gradation of triangles, from big to small; Mg — a much denser mesh based on Mb, with very small triangles following the heightfield.
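The average edge length used as the amplitude reference can be computed directly from the mesh (a minimal sketch; function and variable names are ours):

```python
import math

def average_edge_length(verts, tris):
    """Average edge length of a triangle mesh, used as the reference
    scale for heightfield amplitude modulation (1%-20%). Edges shared
    by two faces are counted once via a set of sorted index pairs."""
    edges = set()
    for a, b, c in tris:
        for i, j in ((a, b), (b, c), (c, a)):
            edges.add((min(i, j), max(i, j)))
    total = sum(math.dist(verts[i], verts[j]) for i, j in edges)
    return total / len(edges)
```

A 10% amplitude for a mesh would then simply be `0.10 * average_edge_length(verts, tris)`.
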

Test 0. Baseline perception (Control)
The baseline perception test was designed to check whether users would find any differences between a simple collision with the underlying mesh geometry, and the same mesh having its triangles replaced by the flat mesostructure [H0, N0] (of constant zero height throughout).
Evaluation of results: With meshes Ma (flat mesh of equal triangles) and Mf (flat mesh of unequal triangles), all testers detected no edge crossings or features whatsoever. With meshes Mb, Mc, Md, and Me, edge crossings were detected only at faces joining at non-flat angles, in both approaches, and there was no spurious detection of any other surface feature in either rendering. As expected, mesh Mg, a very dense mesh of minute triangles, was felt as a continuous surface of varying detail, beyond the users' ability to detect edge crossings in the geometry.

Test I. Quality of Visual-Haptic perception
A series of spheres were built recursively at several resolutions out of a regular icosahedron, and then the synthetic HRMs shown in Figure 7 were generated. The chosen textures were: alternating polished and variable bumpy areas (N4 and H4), gently sloping circles within a soft gradient (N3 and H3), and an embossed cross having only horizontal and vertical surfaces (N2 and H2).
Evaluation of Results: We show on Figure 8, different renderings using force shading and HRMs. It is evident from the figures that normals in bump mapping/force shading make for a smoother visualization. However, in terms of haptic perception, the comparisons are quite different and depend on the characteristics of the texture map.
-In the case of the displacement map shown in Figure 8a, using a texture image with a big oval and two small bumps over the surface, the resulting perception differs somewhat between the two methods. Over the small bumps there is almost no difference, but over the big oval, with force shading users only perceive resistance when going up onto the oval, while with the heightfield method the perception is clearly of going up and down from it.
Fig. 7: Normal and height maps.
-In the case of the texture shown in Figure 8c, a cross relief over the surface, the perception is clearly different between the two methods. With force shading, users perceive resistance on the way up and a jump on the way down, but report no measurable height differences. With the HRM method the perception is more accurate, because going up and down the cross gives the feeling of a real height displacement from the base.
As a summary for this test, we can conclude that our HRM algorithm gives a more accurate sense of surface characteristics than the use of force shading.

Test II. Perception of mesostructure with simple repeating patterns
This test was devised to detect the lower and upper limits of haptic modeling and perception using the HRM approach. The test measured several perception variables over a regular serrated pattern: How does it feel when going back and forth? Can the ridges be counted? Are the height differences noticeable? Each trial was performed with base mesh Mb using HRMs H1,j having regular serrated patterns at different frequencies (with corresponding normals N1,j; see Figures 9a and 9b). For each trial, the maximum heightfield value (that is, the altitude of the prism) was modulated at 1%, 5%, 10%, 15%, and 20% of the average edge length, with repeated runs by several users. Force shading failed this test outright, detecting only non-directional vibration at higher frequencies and proving unreliable at best at lower ones.
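An asymmetric serrated pattern like the H1,j mesostructures can be generated as a 1D profile (a hedged sketch; the rise fraction, sample counts, and names are our assumptions, since the paper does not give the exact waveform):

```python
def serrated_heightfield(n_samples, freq, amplitude, rise=0.75):
    """1D slice of an asymmetric serrated (sawtooth) pattern: 'freq'
    peak-valley pairs over [0, 1), each tooth with a slow ascent over
    a fraction 'rise' of its period and a steep descent afterwards.
    'amplitude' would be set to 1%-20% of the mesh's average edge
    length, following the experimental protocol."""
    h = []
    for k in range(n_samples):
        u = (k / n_samples * freq) % 1.0          # position within one tooth
        if u < rise:
            h.append(amplitude * u / rise)                   # slow ascent
        else:
            h.append(amplitude * (1.0 - u) / (1.0 - rise))   # steep descent
    return h

# e.g. a Frec256-style pattern sampled over 1024 texels at 10% amplitude:
tex = serrated_heightfield(1024, 256, 0.10)
```
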
Evaluation of results: When we tested the HRMs, ranging the surface frequencies from few ridges to many, only the last two showed a performance threshold. Frec256 is a mesostructure with an asymmetric serrated peak-valley combination repeated 256 times; Frec512 is correspondingly doubled. Results are summarized in Figures 10 and 11. The red (ridge count) and blue (left-to-right difference) lines in each graph represent the sample mean, and the surrounding shaded areas represent two standard deviations around the mean. Two important facts can be extracted from these results:
-There exists a definite region for optimum perception of haptic features, with peaks and valleys of 5%-15% of a triangle's edge size and a "sweet spot" at an altitude of 10%. The 5%-15% region also holds for dynamic characteristics such as groove sense and left-to-right or right-to-left differences. At the highest texture resolution, all test subjects reported only directionless vibration.
-Heights greater than 20% produce instabilities in the haptic device, due to fast changes in surface normals, high forces on steep vertical walls, and overshoot due to feedback kick.
These results hint at a practical threshold on how well mesostructure may be modeled by this approach. At the upper end of the scale, mesh zones whose surface variation exceeds 15% of the average edge size are candidates for finer remeshing. At the lower end, if a triangle is perceived as too smooth, the haptic sensation may be enhanced by a coarser sampling of the same texture.

Test III. Perception of non-monotonous mesostructure
Here we measured the ability to perceive definite shapes in the haptic textures: a soft texture of sloping peaks and depressions; small-to-big warts; single scratches. The object of this test is the multi-modal quality of perception: how it corresponds with the visual representation and whether it can be "followed along".
Evaluation of results: As can be extracted from Table 4, even small scratches are felt and followed. All testers were able to accurately detect the target features even at low resolutions, except when reaching the 20% threshold level, at which point instability set in and perception degraded quickly.

Test IV. Perception of visual-haptic disparity in a gradated mesh
We measure resolution changes in perception. We map the same haptic texture into a mesh (M f ) made of rectangular triangles of decreasing size, in order to test the limits of perception, aliasing effects and arising instabilities. We also measure how these qualities change as we zoom (both haptically and visually) in the mesh.

Evaluation of results:
In trial mesh Mf (Figure 9c), neighboring triangles progressively halve their area from left to right (height is reduced by √2/2). Since the mesostructure remains at the same resolution, the mapped areas double their density from left to right, and sampling aliasing occurs. The result is a sequence of similar shapes with decreasing perceptible features: acute features noticeable on big triangles become fainter on smaller triangles, and if the scene is zoomed in (or out) they become sharper (or smoother) again. A feature becomes undetectable when its height difference falls below a corresponding visual pixel, just as expected from the Nyquist limit. In other words, if a visual difference can be seen, it should be felt. This is completely in accord with users' expectations.
A further result from this test is that the constant speed and very low complexity of computations needed for collision detection, HRM texture sampling, and haptic force rendering allow simple models to feel as complex as very dense geometric meshes. The haptic sampling rates (200 Hz to 1000 Hz) measured for the subject meshes in our approach are faster than those reported in [21] for comparable geometric models.

Conclusions
We have developed a fast and accurate method for rendering local haptic texture in triangle meshes, which allows perception of correct surface details at several resolutions. This extends the use of heightfield haptics beyond the usual field of gigantic terrain textures and allows perceiving higher surface detail without modeling it geometrically. This approach can be used for locally mapping relief textures in triangular meshes and haptically render them in real-time. The method even allows managing LoD in the visual and haptic resolutions for closer approximations, and we have the added benefit of having a repository of assorted HRMs, even procedural ones.
The proposed method allows rendering of HRM-textured triangle meshes to feel as detailed as very dense geometric meshes, at a fraction of the modeling and processing costs. The sampling's constant (and very low) computation speed accounts for faster collision detection than those reported in the literature, and promises to scale well even for huge models.
An important result is the existence of a definite region for optimum perception of haptic features, with textures having maximum peak heights or valley depths between 5%-15% of an average edge size, which also holds for dynamic characteristics such as groove sense, and left-to-right or right-to-left differences. This includes a blending procedure across edges that smoothes surface traversals even when transitioning between two or more HRM textures.
Another important result is the creation of a repository of base meshes and standard HRM textures, allowing measuring quantitative and qualitative differences among the haptic rendering algorithms described herein and others that might follow, using the battery of tests devised for this research. In order to apply our method for perceiving overlaid scratches on surfaces [25], we have extended it to accept HRMs representing inverse heightfields. In these cases, force shading does not give the correct perception because neighboring points with converging normals actually push the haptic probe away from the scratch. The proposed HRM-rendering algorithm allows a correct perception and traversal along the grooves of the scratches, and shows ample suitability for modeling and perceiving in real-time very complex surface textures of varying frequency on top of simpler geometric models such as bones, major body organs, machine assembly pieces and other structures.
Our model allows adding material friction as a constant global coefficient, or expanding the HRM with a second 2D texture field whose value represents a variable friction coefficient at each triangle point. This may include incorporating the added resistance of fine microstructure surface properties into the model. We expect to measure favorable performance differences if using an HRM-based approach instead of a pure geometric model in rendering haptic collisions, and thus obtain a robust result for the method.
The real power of this technique is the replacement of very complex geometry (millions of triangles) with its HRM equivalent. We are extending this research by exploring a procedure to scan the fine triangle geometry of dense meshes and replace it with a decimated mesh of much larger triangles, while capturing most of the perceptible frequency details of the original surface in a blended global HRM mesostructure atlas (height displacements, surface normals, and other properties such as directed friction and stickiness). The approach uses haptic impostors to replace the nearest object geometry, and is similar to the visualization algorithms that Policarpo [26] and Baboud [27] describe for fast shading of geometric objects using displacement-mapped impostors, either as a two-sided back/front map or a six-sided cube map.