But what if you want to blend three or more textures together? Well today I have the answer.
As usual, you can jump straight to the code, or the demo at the end.
The first thing to cover is why doing the obvious doesn’t work. Suppose we made three different perlin noise images, one for each texture we want to blend. Then we interpret the height of each noise image as the strength of the texture in the blend. But we quickly end up with problems. The three noise images could all be high or low at the same time.
Above, I’ve put the three noise images together as red, green and blue channels of an image. The white and black areas are where all three colors coincide. When blending the textures, it’s necessary to scale these areas to a more normal range, or else they end up very bright / dark. Even after scaling, black is a problem as you end up scaling so much a small change in the input noise causes a huge change in the texture blend. As you can see, the output texture is a blurry mess.
The other obvious thing is to blend textures 1 and 2 with one perlin noise, then blend the result of that with texture 3 using a second one. That looks better, but it will be biased towards the third texture over the other two in a way that is hard to correct.
So what do we do?
The trick is to use barycentric co-ordinates. In barycentric co-ordinates, you specify a point in space according to its nearness to a fixed set of points, instead of by measuring along axes that are at right angles to each other.
Consider the classic RGB triangle:
Any point inside this triangle could be described as, say, 10% red, 60% blue and 30% green. And given that description, I could easily find the point by taking a weighted average of the three corners with those percentages.
In this manner, we have a co-ordinate system describing the area inside a triangle using three numbers that always sum to 100%. And we can describe the area outside the triangle, too, using negative numbers. The same idea works in multiple dimensions. In each case, \(n\) numbers summing to one are needed to describe a point in \(n - 1\) dimensional space.
Mathematically, barycentric co-ordinates are vectors \(\boldsymbol x\) where each component \(\boldsymbol x_i\) is positive and the component sum \(\sum_i \boldsymbol x_i\) equals \(1\). I’ll return to this idea of component sum later.
The advantage of this co-ordinate system is it is explicitly based around blending. Position is defined by blending together the corners in the correct proportion.
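As a quick sketch of that blending (the triangle's corner positions below are arbitrary, chosen just for illustration), recovering a point from its barycentric description is simply a weighted average of the corners:

```python
def from_barycentric(corners, weights):
    """Turn barycentric weights (which must sum to 1) into a 2d point."""
    assert abs(sum(weights) - 1.0) < 1e-9
    x = sum(w * cx for w, (cx, cy) in zip(weights, corners))
    y = sum(w * cy for w, (cx, cy) in zip(weights, corners))
    return (x, y)

# 10% of corner 0, 60% of corner 1, 30% of corner 2, on an example triangle
corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
point = from_barycentric(corners, [0.1, 0.6, 0.3])
```

Negative weights would land outside the triangle, as described above.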
I’m not going to describe how perlin noise works, that has been covered adequately in many places. But recall that a key part of the algorithm is that we randomly pick a gradient for each grid co-ordinate. Then we interpolate between those gradients to get the entire smooth image.
I’ve made two key changes. Firstly, I changed the algorithm to output an \(n\)-dimensional vector, not just a single value. This means replacing all the additions in the algorithm with vector additions, and all the multiplications with scalar multiplications. Each output dimension is basically treated entirely separately. The only way the dimensions are related is due to the second change: I’ve changed how the random gradients are generated.
Instead of generating \(n\) gradients at random, I generate a single \(n\)-dimensional barycentric gradient. A barycentric gradient is an \(n\)-dimensional vector with a component sum of zero. Because the components sum to zero, if I start at a barycentric point and move in the direction of that gradient, I add a vector with component sum \(0\) to one with component sum \(1\). The result will also have component sum \(1\), so I’ll end up at a new barycentric point. The entire perlin noise algorithm is based off the gradient vectors, so this means it will only output barycentric points.
It’s actually super simple to generate random unit vectors with a component sum of zero. We just generate random vectors with component sum zero, and repeat until we find one inside the unit sphere. Here’s some Python:

```python
import random

def random_gradient():
    """Random unit vector whose components sum to zero (n = 3)."""
    while True:
        x_1 = random.uniform(-1, 1)
        x_2 = random.uniform(-1, 1)
        x_3 = -x_1 - x_2  # this ensures the component sum is zero
        length = (x_1 * x_1 + x_2 * x_2 + x_3 * x_3) ** 0.5
        if 0 < length < 1:  # keep only points inside the unit sphere
            return (x_1 / length, x_2 / length, x_3 / length)
```
In fact, if you just replace the gradient function as described, you end up with a noise function with \(n\) dimensions of output, with a range of \(-1\) to \(+1\) for each dimension, and the sum of all the components is zero. This isn't quite barycentric, but it'll still be useful, so I'm calling it "barycentric variant".
To get actual barycentric co-ordinates, we need the component sums to be one, and we need it to generate only values between \(0\) and \(+1\), as it is not a good idea to use negative values when blending. And we want the average value to be \(\boldsymbol c = (\frac{1}{n}, \frac{1}{n} ... \frac{1}{n})\), i.e. evenly spread between all the dimensions.
Perlin noise always takes value \(0\) at integer co-ordinates. So far, we haven't changed that. The trick of using barycentric gradients doesn't work unless we actually start at a barycentric co-ordinate. So we need to pick a starting value for the output on each integer co-ordinate, and blend between them at the same time as applying and blending the gradient. We could simply set the starting value to be the desired average value, \(\boldsymbol c\), but in the actual code, I'm a little smarter. Depending on the randomly chosen unit gradient for a given grid point, I pick a starting point that guarantees that we stay inside the triangle at all times.
Suppose we've picked a random gradient \(\boldsymbol t\). First we calculate how far we can travel from \(\boldsymbol c\) in the selected direction while staying in the triangle. Say we can travel \(u\) units forward, and \(v\) units backwards. Then we output a starting value of \( \frac{u-v}{2} \boldsymbol t \) and a gradient of \( \frac{u+v}{2} \boldsymbol t \). Because normal perlin noise gives values in range \(-1\) to \(+1\), scaling and offsetting like this will give values between \(-v\) and \(+u\), ensuring we are always inside the triangle.
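Here's a sketch of that computation (the function name is mine, not from the original code). Since \(\boldsymbol t\) has component sum zero, moving from \(\boldsymbol c\) never changes the sum, so the only constraint on travel is that every component stays non-negative:

```python
def start_and_gradient(t):
    """Given a zero-sum unit gradient t, compute the starting offset and
    scaled gradient that keep all noise values inside the triangle."""
    n = len(t)
    c = 1.0 / n  # every component of the centre point equals 1/n
    # Moving from the centre along t, component i reaches zero after
    # c / |t_i| units; only components moving towards zero constrain us.
    u = min(c / -ti for ti in t if ti < 0)  # forward travel limit
    v = min(c / ti for ti in t if ti > 0)   # backward travel limit
    start = tuple((u - v) / 2 * ti for ti in t)
    gradient = tuple((u + v) / 2 * ti for ti in t)
    return start, gradient
```

Here `u` and `v` are the distances to the triangle boundary forward and backward along `t`, and the returned values are offsets relative to \(\boldsymbol c\), matching the scaling described above.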
It also biases the output towards the corners of the triangle and away from the sides, so that each component will have an average value of \(\frac{1}{n}\), but an effective range of \(0\) to \(1\). This is exactly what we want for blending, and is true barycentric perlin noise.
As an aside, another use for perlin noise with \(n\) dimensions is if you want to subdivide a map into \(n\) roughly equal size territories. Say you want to decide which places are desert, forest, grassland and so on. To do so, you can simply take the largest component of the \(n\)-dimensional vector as the choice of biome. Barycentric perlin noise gives slightly nicer subdivisions than other ways of achieving this. Or you can take the top two components if you want transitional regions between the main regions.
I've included options for both in the demo below.
Using RGB coloring, it's easy to visualize the differences.
The naive approach still has its uses, but as noted it generates white and black spots as the colors are chosen independently.
Using the barycentric variant, we ensure that we only pick colors from the RGB triangle, but the secondary colors (cyan, magenta and yellow) are just as prevalent as the primary colors.
Finally done correctly, the algorithm picks spots of red, green and blue and smoothly blends between them - perfect for texture blending.
Here's a simple demo where you can experiment with using three independent perlin noises, versus using barycentric perlin noise. The output can be displayed in various ways. I threw in some cross hatching options as I liked the idea of using this noise for generating fake maps.
NB: The perlin noise output has been scaled up to give a more vivid demonstration. But that does lead to some clipping, noticeably the black patches when using Independent Textures.
The algorithm is simple. Start with the entire area covered in path tiles, then remove tiles one by one until only a thin path remains. When removing tiles, you cannot remove any tile that would cause the ends of the path to become disconnected. These are called articulation points (or cut vertices). I use a fast algorithm based on DFS to find the articulation points. I had to modify the algorithm slightly so it only cares about articulation points that separate the ends, rather than anything that cuts the area in two. After identifying the articulation points, it’s just a matter of picking a random tile from the remaining points, and repeating. When there are no more removable tiles, you are done. Or you can stop early, to give a bit of a different feel.
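Here's a minimal sketch of the chiseling loop (it re-checks connectivity with a plain flood fill on each removal, rather than the faster articulation-point DFS described above, and all names are mine):

```python
import random

def chisel(tiles, ends):
    """Carve a thin path: remove tiles one at a time, skipping any
    removal that would disconnect the end tiles from each other."""
    def still_connected(remaining):
        # Simple flood fill from one end; every end must be reachable.
        frontier, seen = [next(iter(ends))], set()
        while frontier:
            x, y = frontier.pop()
            if (x, y) in seen or (x, y) not in remaining:
                continue
            seen.add((x, y))
            frontier += [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        return ends <= seen

    tiles = set(tiles)
    candidates = list(tiles - ends)
    random.shuffle(candidates)  # the random order gives the organic feel
    for t in candidates:
        if still_connected(tiles - {t}):
            tiles.remove(t)
    return tiles
```

A single shuffled pass suffices here: removing tiles only removes connections, so once a tile would disconnect the ends, it stays that way forever.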
I call it “chiseling” as you are carving the path out of a much larger space, piece by piece.
See it on github.
I don’t think I’ve seen anyone propose this technique before – it generates quite organic looking paths and is pretty easy to implement.
It’s quite flexible as it works on any graph, and you can control the generation by changing the probability of each tile being selected. It can connect together any number of endpoints. I also made a version that generates paths with a fixed number of tiles but no specific endpoints.
Here’s a demo below showing both sorts of generation.
In part 1 and part 2 of the series, we looked at the Marching Cubes algorithm, and how it can turn any function into a grid based mesh. In this article we’ll look at some of the shortcomings and how we can do better.
Marching Cubes is easy to implement, and therefore ubiquitous. But it has a number of problems:
Complexity
Even though you only need process one cube at a time, Marching Cubes ends up pretty complicated as there are a lot of different possible cases to consider.
Ambiguity
Some cases in Marching Cubes cannot be obviously resolved one way or another. In 2d, if you have two opposing corners, it’s impossible to say if they are meant to be joined or not.
In 3d, the problem is even worse, as inconsistent choices can leave you with a leaky mesh. We had to write extra code to deal with that in part 2.
Here’s a square approximated with Marching Cubes.
The corners have been sliced off. Adaptivity cannot help here – Marching Squares always creates straight lines on the interior of any cell, which is where the target square corner happens to lie.
So, what to do next?
Dual Contouring solves these problems, and is more extensible to boot. The trade off is we need to know even more information about \( f(x) \), the function determining what is solid and what is not. Not only do we need to know the value of \( f(x) \), we also need to know the gradient \( f'(x) \). That extra information will improve the adaptivity from marching cubes.
Dual Contouring works by placing a single vertex in each cell, and then “joining the dots” to form the full mesh. Dots are joined across every edge that has a sign change, just like in marching cubes.
Unlike Marching Cubes, we cannot evaluate cells independently. We must consider adjacent ones to “join the dots” and find the full mesh. But in fact, it’s a much simpler algorithm than Marching Cubes, because there are not a lot of distinct cases. You simply find every edge with a sign change, and connect the vertices for the cells adjacent to that edge.
In our simple example of a 2d circle of radius 2.5, \( f \) is defined as:
\[ f(x, y) = 2.5 - \sqrt{x^2 + y^2} \]
(in other words, 2.5 minus the distance from the origin)
With a bit of calculus, we can compute the gradient:
\[ f'(x, y) = \text{Vec2}\left(\frac{-x}{\sqrt{x^2 + y^2}}, \frac{-y}{\sqrt{x^2 + y^2}} \right) \]
The gradient is pair of numbers for every point, indicating how much the function changes when moving along the x-axis or y-axis.
But you don’t need complicated maths to get the gradient function. You can just measure the change in \( f \) when \( x \) and \( y \) are perturbed by a small amount \( d \).
\[ f'(x, y) \approx \text{Vec2}\left(\frac{f(x+d, y) - f(x-d, y)}{2d}, \frac{f(x, y+d) - f(x, y-d)}{2d} \right) \]
This works for any sufficiently smooth \( f \), providing you pick \( d \) small enough. In practice, even functions with sharp points are sufficiently smooth, as you don't need to evaluate the gradient near the sharp parts for it to work. Link to code.
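For the circle function above, the central-difference approximation is only a few lines (the sample point and names here are just for illustration):

```python
import math

def numerical_gradient(f, x, y, d=1e-5):
    """Approximate the gradient of f at (x, y) by central differences."""
    return ((f(x + d, y) - f(x - d, y)) / (2 * d),
            (f(x, y + d) - f(x, y - d)) / (2 * d))

# The circle function from earlier, radius 2.5
def circle(x, y):
    return 2.5 - math.sqrt(x * x + y * y)

grad = numerical_gradient(circle, 3.0, 4.0)
```

At \( (3, 4) \) the analytic gradient is \( (-\frac{3}{5}, -\frac{4}{5}) \), and the approximation agrees to many decimal places.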
So far, we’ve just got the same sort of stepped look that Marching Cubes had. We need to add some adaptivity. For Marching Cubes, we chose where along the edge to put a vertex. Now we have a free choice of anywhere in the interior of the cell.
We want to choose the point that is most consistent with the information provided to us, i.e. the evaluation of \( f(x) \) and its gradient. Note that we’ve sampled the gradient along the edges, not at the corners.
By picking the illustrated point, we ensure that the output faces from this cell conform as well as possible to the normals:
In practice, not all the normals around the cell are going to agree. We need to pick the point of best fit. I discuss how to handle picking that point in a later section.
The 2d and 3d cases aren’t really that different. A cell is now a cube, not a square. And we are outputting faces, not edges. But that’s it. The routine for picking a single point per cell is the same. And we still find edges with a sign change, and then connect the points of adjacent cells, but now that is 4 cells, giving us a 4-sided polygon:
Dual contouring has a more natural look and flow to it than marching cubes, as you can see in this sphere constructed with it:
In 3d, this procedure is robust enough to pick points running along the edge of a sharp feature, and to pick out corners where they occur.
One major problem I glossed over before is how to choose the point location when the normals don’t point to a consistent location.
The problem is even worse in 3d when there are likely to be a lot of normals.
The way to solve this is to pick the point that is mutually the best for all the normals.
First, for each normal, we assign a penalty to locations further away from ideal. Then we sum up all the penalty functions, which will give an ellipse shaped penalty. Finally, we pick the point with the lowest penalty.
Mathematically, the individual penalty functions are the square of the distance from the ideal line for that normal. The sum of all those squared terms is a quadratic function, so the total penalty function is called the QEF (quadratic error function). Finding the minimal point of a quadratic function is a standard routine available in most matrix libraries.
In my Python implementation, I use numpy’s lstsq function.
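As a sketch of the idea (following the approach described, not the author's exact code): each sampled boundary point \( p_i \) with normal \( n_i \) contributes a plane constraint \( n_i \cdot x = n_i \cdot p_i \), and least squares finds the point minimising the total squared violation:

```python
import numpy as np

def solve_qef(positions, normals):
    """Find the point minimising the sum of squared distances to the
    planes defined by (position, normal) pairs."""
    A = np.array(normals, dtype=float)
    # Right-hand side: the dot product of each normal with its position.
    b = np.einsum("ij,ij->i", A, np.array(positions, dtype=float))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Two edge crossings whose normals meet at a sharp corner at (1, 1)
point = solve_qef(positions=[(1.0, 0.0), (0.0, 1.0)],
                  normals=[(1.0, 0.0), (0.0, 1.0)])
```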
Most tutorials would stop here but it’s a dirty secret that solving the QEF as described in the original Dual Contouring paper doesn’t actually work very well.
If you solve the QEF, you can find the point that is most consistent with the normals of the function. But there’s no actual guarantee that the resulting point is inside the cell.
In fact, it’s quite common for it not to be if you have large flat surfaces. In that case all the sampled normals are the same or very close, as in this diagram.
I’ve seen a lot of advice dealing with this particular problem. Several people have given up on using the gradient information, instead taking the center of the cell, or average of the boundary positions. This is called Surface Nets, and it at least has simplicity going for it.
But what I’d recommend based on my own experiments is the combination of two techniques.
Recall we were finding the cell point by finding the point that minimized the value of a given function, called the QEF. With some small changes, we can find the minimizing point within the cell.
We can add any quadratic function we like to the QEF and we get another quadratic function which is still solvable. So I add a quadratic which has a minimal point in the center of the cell.
This has the effect of pulling the solution of the overall QEF towards the center.
In fact, it has a stronger effect when the normals are nearly collinear and likely to give screwy results, while barely affecting the positions in the good cases.
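One way to sketch that centre bias (the `strength` weight is an assumption of mine, not a value from the original code): append low-weight rows to the least-squares system, each pulling one coordinate towards the cell centre. Together those rows add the quadratic penalty \( \text{strength}^2 \, \lVert x - \text{centre} \rVert^2 \) to the QEF:

```python
import numpy as np

def solve_qef_biased(positions, normals, cell_min, cell_max, strength=0.1):
    """Solve the QEF with extra low-weight constraints x_i = centre_i,
    pulling the solution towards the cell centre."""
    centre = (np.asarray(cell_min, dtype=float) + np.asarray(cell_max)) / 2
    dim = len(centre)
    A = np.array(normals, dtype=float)
    b = np.einsum("ij,ij->i", A, np.array(positions, dtype=float))
    # Each extra row adds strength^2 * (x_i - centre_i)^2 to the penalty.
    A = np.vstack([A, strength * np.eye(dim)])
    b = np.concatenate([b, strength * centre])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

With parallel normals (the flat-surface case), the under-determined direction is resolved to the cell centre instead of flying off to a distant point.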
Using both techniques is somewhat redundant, but I think it gives the best visual results.
Both techniques are demonstrated in more detail in the code.
Another dual contouring annoyance is it can sometimes generate a 3d surface that intersects itself. For most uses, this is completely ignorable, so I have not addressed it.
There is a paper discussing how to deal with it: “Intersection-free Contouring on An Octree Grid” Ju and Udeshi 2006
While a dual contouring mesh is always watertight, it’s not always a well-defined surface. As there’s only one point per cell, if two surfaces pass through a cell, they will share it. This is called a “non-manifold” mesh, and it can mess with some texturing algorithms. This issue is common if you have solids that are thinner than the cell size, or multiple objects nearly touching each other.
Handling these cases is a substantial extension on basic Dual Contouring. If you need this feature, I recommend having a look at this Dual Contouring implementation or this paper.
Because of the relative simplicity of the meshing process, Dual Contouring is much easier to extend to cell layouts other than the standard grid used above. Most commonly, you can run it on an octree to have varying sizes of cells precisely where you need the detail. The rough idea is identical – pick a point per cell using sampled normals, then for each edge showing a sign change find the adjacent 4 cells and combine their vertices into a face. With an octree, you can recurse to find those edges, and the adjacent cells. Matt Keeter has a detailed tutorial on what to do.
Another neat extension is that all you need for Dual Contouring is a measure of what is inside / outside, and corresponding normals. Though I’ve assumed you have a function for this, you can also derive the same information from another mesh. This allows you to “remesh”, i.e. generate a clean set of vertices and faces that cleans up the original mesh. For example, Blender’s remesh modifier.
Well, that’s all I’ve got time for. Let me know in the comments if you found it useful, or if there’s anything else you’d like me to explain.
Dual Contouring is only one of a handful of similar techniques. See SwiftCoder’s list for a number of other approaches, each with pros and cons.
To recap, you divide up the space into a grid, then for each vertex in the grid evaluate whether that point is inside or outside of the solid you are evaluating. There are 4 corners for each square in a 2d grid and there are two possibilities for each so there’s \( 2 \times 2 \times 2 \times 2 = 2^4 = 16 \) possible combinations of corner states for a given cell.
You then fill the cell with a different line for each of the 16 cases and the lines of all the cells will naturally join up. We use adaptivity to adjust those lines to best fit the target surface.
The good news is this works almost identically in the three dimensional case. We divide up the space into a grid of cubes, consider them individually, draw some faces in each cube, and they’ll join up to form the desired boundary mesh.
The bad news is that cubes have 8 corners, so there are \( 2^{8} = 256 \) possible cases to consider. And some of the cases are much more complicated than before.
The very good news is you don’t need to understand it all. You can just copy the cases I’ve put together and skip to the results section, forgetting about the whys and wherefores. Then start reading about dual contouring if you want a more powerful technique.
Still here? Ok, you’re keen. I like that.
The secret is you don’t really have to assemble all 256 different cases. A lot of them are mirror images or rotations of each other.
Here are three different cases for cells. The red corners are solid, and the others are empty. In the first case, the bottom corners are solid and the top ones are empty, so the correct way to draw a dividing boundary is to split the cell in half vertically. For convenience, I’ve coloured the outer side of the boundary yellow and the inner side of the boundary blue.
The other two cases can be found simply by rotating the first case.
We have another trick to use:
These two cases are inverses of each other – the corners of one are solid where the other is empty, and vice versa. We can easily generate one case from the other – they have the same boundary, just flipped.
With that in mind, we only really need 18 cases from which we can generate all the rest.
If you check the wikipedia article or most tutorials for Marching Cubes, it says you only need 15 cases. What gives? Well, it’s true, the bottom three cases I’ve drawn in my diagram aren’t strictly needed. Here are those extra three cases again, compared with using inverses of other cases to get a similar surface.
Both the 2nd and 3rd columns correctly separate the solid corners from the empty ones. But only when you consider a single cube in isolation. If you look at the edges on each face of the cell, you’ll see they are different for the 2nd column and 3rd column. The inverted ones won’t connect up properly with adjacent cells, leaving holes in the surface. After adding the extra three cases, all the cells will neatly join up.
As in the 2d case, we can just run all cells independently. Here’s a sphere mesh made from Marching Cubes.
As you can see, the overall shape of the sphere is good but in places it is just a mess as very narrow triangles are generated. Read on to Dual Contouring, a more advanced technique with several benefits over Marching Cubes.
Here’s No Man’s Sky, for example:
Similar use cases include displaying MRI scans, metaballs, and voxel terrain.
The following is a tutorial on Marching Cubes, a technique for achieving destructible terrain, and more generally, creating a smooth boundary mesh around something solid. In this series, we’ll cover 2d in this first article, followed by 3d in the next, and Dual Contouring in the third. This last is a more advanced technique for achieving the same effect.
So let’s begin.
First, let’s define exactly what we want to do. Suppose we have a function that can be sampled over an entire space, and we want to plot its boundary. In other words, we want to identify where the function switches from positive to negative, or vice versa. With the destructible terrain example, we’re interpreting the areas that are positive to be solid, and the areas that are negative to be empty.
A function is a great way of describing an arbitrary shape, but it doesn’t help you draw it.
To draw it, you need to know the boundary, i.e. the points between positive and negative, where the function crosses zero. The Marching Cubes algorithm takes such a function, and produces a polygonal approximation to the boundary, which can then be used for rendering. In 2d, this boundary will be a continuous line. When we get to 3d, a mesh.
Let’s start in 2d for clarity. We’ll get to 3d later. I refer to both the 2d and 3d cases as “Marching Cubes” as they are essentially the same algorithm.
First, we split the space into a uniform grid of squares (cells). Then, for each cell, we can measure whether each vertex of the cell is inside or outside the solid by evaluating the function.
Below, I’m using a function that describes a circle, and I’ve colored each vertex black if that position evaluates to positive.
Then we process each cell individually, filling it with an appropriate boundary.
A simple look-up table covers the 16 possible combinations of the corners being inside or outside, describing what boundary needs to be drawn in each case.
Once you repeat over all the cells, the boundaries join up to create a complete mesh, even though each cell can be considered independently.
Nice! I guess it kinda looks like the original circle provided by the formula. But as you can see everything is all angular, with lines at 45 degree angles. That’s because we are choosing the boundary vertices (the red squares) to be equidistant from the points of the grid.
The best way to fix the fact all lines are at 45 degree angles is adaptive Marching Cubes. Rather than always placing the boundary vertices at edge midpoints, they can be spaced to best match the underlying solid. To do this, we need to know more than what we just used – whether a point is inside or outside – we also need to know how deep it is.
This means we need a function that gives some measure of how far inside or outside a point is. This doesn’t need to be precise, as we only use it for approximations. In the case of our ideal circle, which has a radius of 2.5 units, we use the following function.
\[ f(x, y) = 2.5 - \sqrt{x^2 + y^2} \]
Where positive values are inside, and negative values are outside.
We can then use the numerical values of \( f \) on either side of an edge to determine how far along the edge to put the point.
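A common sketch of that interpolation (the function name and example values are mine): take the zero crossing of the straight line through the two endpoint values.

```python
def crossing_point(f0, f1):
    """Fraction along an edge (0 to 1) where f crosses zero, given the
    values f0 and f1 of f at the edge's two endpoints."""
    assert (f0 > 0) != (f1 > 0), "edge must have a sign change"
    return (0 - f0) / (f1 - f0)

# f is 1.0 at one end and -3.0 at the other, so the boundary vertex
# goes a quarter of the way along the edge
t = crossing_point(1.0, -3.0)
```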
When put together, it looks like this.
Despite having the exact same vertices and lines before, the slight adjustment in position makes the final shape much more circle-like.
Learn how to apply the Marching Cubes algorithm in 3d here. Or if you want to jump ahead to an improved technique, check out Dual Contouring.
The addon can now generate a much wider variety.
Check it out on github.
And you can generate “twills”, a fancier sort of weaving.
The coloring and UV options have also been improved, as you can see above.
Thanks to the following papers which helped me generate these additional features:
* Single-Cycle Plain-Woven Objects (2010) Xing, Akleman, Chen, Gross
* Cyclic Twill Woven Objects (2011) Akleman, Chen, Chen, Xing, Gross
Try a demo. Click to add or remove anchor points, which the path must pass through.
Jump to the code, or the live demo.
To cut the problem down to the bare bones, say you’ve got a top down tile based room like this.
The player enters from the top, and leaves via the bottom. We want to add a block to the room to make things more interesting.
Some locations are ok to put the block in:
Others are not, as it is no longer possible to walk out the exit.
In this room the answer is easy – you can put the block anywhere, as long as it’s not directly in front of the entrance or exit. But for a larger, more complicated room, or a larger, more complicated block, this is not so easy to solve. If the room is a huge maze, you effectively have to solve the maze to determine if blocking off one corridor makes a difference.
The simple way to solve this is path finding. Or, since we don’t really care about the actual path taken, flood filling.
To determine if a block is ok to place, simply place it, and flood fill all the walkable squares starting with the entrance. If you ever reach the exit, then there’s still a path from one to the other and the block is ok. Otherwise, you must remove it and try again.
The problem with the above approach is that it is slow. I wanted to determine all safe locations on a large map, but each location required flood filling the entire area from scratch.
Disjoint-set data structure to the rescue. This data structure lets you hold a bunch of sets, and merge any two sets together extremely efficiently.
It can often be used much like the flood fill algorithm. Simply add every tile as a set of one tile. Then for every pair of walkable tiles that are adjacent, merge the respective sets of those tiles. At the end, if the entrance and exit tiles are in the same set, then they are connected. Otherwise, they are not.
The crucial difference from flood fill is that you can walk the adjacencies in any order, you don’t need to seed from one point and track what is currently being processed. That means, we can do all the merging that is far away from the block first of all. Then we can re-use what is computed so far for many different block positions (or shapes), as those far away tiles will have the same connectivity properties each time.
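Here's a minimal sketch of a disjoint-set structure with path compression, and the connectivity check built on it (all names are mine, not from the article's code):

```python
class DisjointSet:
    """Minimal union-find with path compression."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        root = self.parent.setdefault(x, x)
        if root != x:
            # Path compression: point every visited node at the root.
            root = self.parent[x] = self.find(root)
        return root

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def reachable(walkable, start, goal):
    """True if start and goal are connected through the walkable tiles."""
    ds = DisjointSet()
    for (x, y) in walkable:
        for nb in ((x + 1, y), (x, y + 1)):  # right and down neighbours
            if nb in walkable:
                ds.union((x, y), nb)
    return ds.find(start) == ds.find(goal)
```

Note the adjacency loop visits tiles in arbitrary dictionary-set order, which is exactly the property that lets the precomputed slices below be reused.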
First I precompute a bunch of disjoint-set data structures (DSDS objects). I compute one for each possible vertical slice of room that sweeps either to one edge or the other:
These can be computed very efficiently in series, by processing tiles left to right, then right to left. The respective DSDS objects are also very memory efficient, as they can share a lot of overlapping data. (This elaboration is called a persistent data structure; the code with this article shows how it works.)
Then, once you have a block placement you want to test, you find the slices that bracket it. This gives you connectivity information for the left and right sides of the block pre-computed. All you need do is continue merging sets based on connectivity in the small central column. Then, as before, you check if the entrance and exit blocks are in the same set.
That’s it. Take a look at the code here.
This distribution of particles guarantees no two particles are very near each other. It’s often considered a higher quality particle arrangement than Blender’s default uniform sampling. It’s particularly useful for organic arrangements, and randomly arranging meshes without collisions.
Axaxaxas is a Python 3.3 implementation of an Earley parser. Earley parsers are robust parsers that can recognize any context-free grammar, with good support for ambiguous grammars. They have linear performance for a wide class of grammars, and cubic performance in the worst case.
The main goals of this implementation are ease of use, customization, and requiring no pre-processing step for the grammar. You may find the Marpa project better suits high performance needs.
Documentation can be found at: http://axaxaxas.readthedocs.org