**What are rulebooks?**

Rulebooks are essentially a fancy form of C#’s `Func<>` and `Action<>` generics.

`Func<>` and `Action<>` can hold a reference to a C# method, or a lambda expression/statement. But the thing they hold is essentially a black box – you cannot do much with it except run it or check equality.

RuleBook provides `FuncBook<>` and `ActionBook<>`, which work similarly to their counterparts. But these rulebook objects are built out of individual rules, which can be individually inspected and mutated.

Overall, rulebooks give a systematic way of handling a bunch of useful programming patterns, including events, multiple dispatch, modding and code weaving.

Rulebooks are not an elaborate rules evaluation engine; they are a lightweight way of stitching together bits of functionality.

PuzzleScript is a marvel of economic design. A single text file specifies all the graphics, levels, sound effects, and all the rules of the puzzle. It uses a custom system to concisely express rules – so concise that the rules of Sokoban can be expressed in a single line.

This efficiency comes because rules are expressed as find-replace rules. That makes it a grammar replacement system, which I last discussed when looking at Ludoscope and Unexplored. But it has many pragmatic features geared toward puzzle design, which I’ll explore in this article.

The world of a PuzzleScript level is a rectangular grid of cells. Filling those cells are various objects, each with an associated simple sprite. There can be multiple objects in one cell – e.g. there’s always a Background object in each cell, whose sprite is shown when nothing else occupies it.

The rules are expressed like this:

`[ Player | Crate ] -> [ Player | ]`

All rules are a find and replace like this. The left-hand side of the `->` is the find pattern, and the right-hand side the replacement. In this case, the find pattern is a 2 by 1 rectangle, with a Player in the first cell and a Crate in the second. The replacement removes the Crate, while keeping the Player around. Patterns are made of a line of cells; there’s no direct way to match a 2 by 2 rectangle.

Rules in PuzzleScript are considered in all 4 directions by default, so this rule will delete a Crate that is adjacent to the player on any side.
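To make the find-and-replace mechanic concrete, here is a minimal sketch in Python (my own illustration, not PuzzleScript internals) that applies the crate-deleting rule to a one-dimensional row of cells:

```python
# A row of cells; each cell is a set of object names.
row = [{"Background"}, {"Background", "Player"}, {"Background", "Crate"}, {"Background"}]

def apply_rule(row):
    """Apply [ Player | Crate ] -> [ Player | ] left to right."""
    for i in range(len(row) - 1):
        if "Player" in row[i] and "Crate" in row[i + 1]:
            row[i + 1].discard("Crate")  # replacement keeps Player, removes Crate
    return row

apply_rule(row)
# The Crate next to the Player is removed; the Background object remains.
```

A real engine would also try the pattern mirrored and rotated for each of the four directions; this sketch only scans one direction.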

Rules are evaluated in order from top to bottom. There are some looping constructs to control whether rules need to be retried, and rules can be marked as `late` to be run after the movement phase.

The matching rules can be quite sophisticated; you can find everything that is possible in the docs.

Puzzle rules are often concerned with change and motion: *“When a player pushes a block, move it.”* But PuzzleScript rules only match the current state of the world; they can’t express events occurring, or changes.

The solution to this is inspired. Every game object can be tagged with the motion it is about to undertake. When the user presses left, the Player object is tagged as left-moving, then the rules run, then the game processes the movement, and finally the `late` rules run.

Movement tags can be matched in find rules, and set in replace rules. This rule

`[ > Player | Crate ] -> [ > Player | > Crate ]`

finds a Player object that is moving towards a Crate object, and sets the Crate moving in the same direction – essentially acting as a push operation.
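The motion-as-state idea can be sketched the same way. In this toy Python model (an illustration of the concept, not how PuzzleScript is implemented), each object carries an optional movement tag; the push rule copies the Player’s tag onto the Crate, and a separate movement phase then shifts tagged objects:

```python
# Each cell maps object name -> movement tag (">" for right-moving) or None.
row = [{"Player": ">"}, {"Crate": None}, {}]

def apply_push(row):
    """[ > Player | Crate ] -> [ > Player | > Crate ]"""
    for i in range(len(row) - 1):
        if row[i].get("Player") == ">" and "Crate" in row[i + 1]:
            row[i + 1]["Crate"] = ">"  # tag the Crate with the same motion
    return row

def movement_phase(row):
    """Simplified engine movement: shift right-moving objects into empty cells."""
    for i in reversed(range(len(row) - 1)):
        for name, tag in list(row[i].items()):
            if tag == ">" and not row[i + 1]:
                del row[i][name]
                row[i + 1][name] = None  # the tag is cleared once the move happens
    return row

apply_push(row)
movement_phase(row)
# Both Player and Crate have shifted one cell to the right.
```

The real engine’s collision handling is far more involved; the point here is only that motion lives in the state, where rules can match and rewrite it before anything actually moves.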

The actual movement is a core engine feature; it’s not done by rules. The engine handles the fiddlier details of collisions.

Beyond basic rules, there are a number of features that struck me as offering a great payoff of utility for their simplicity.

Rules like

`[> Player | Lever] [Door] -> [Player | Lever] [Door]`

have multiple patterns in the find section. The rule only fires if both patterns are matched somewhere on the level, but they don’t need to be near each other. Each pattern has its own replacement.

Rules like

`[ > Kitty | ... | Fruit ] -> [ | ... | Kitty ]`

can match a Kitty and a Fruit in a line with each other, at any distance. In other words, it stands for an infinite set of rules:

```
[ > Kitty | Fruit ] -> [ | Kitty ]
[ > Kitty | | Fruit ] -> [ | | Kitty ]
[ > Kitty | | | Fruit ] -> [ | | | Kitty ]
etc
```

Variable-sized patterns like this were one of the biggest weaknesses I noted in Ludoscope; it’s nice to see such an elegant solution.

This is more a coding technique than specifically part of the engine, but you can create transparent objects with no collision, which essentially serve as variables without having to break away from the replacement rules paradigm.

These hidden objects can track all sorts of details. The manual notes, for example, you can create a “shadow” behind an object at the start of evaluation to detect if it has moved by the end.

PuzzleScript rules are much, much simpler than the graph rewriting rules we’ve seen before. The authors note that it’s not “a general purpose puzzle game making tool”. But what are some specific problems?

I think one fundamental problem is object identity. PuzzleScript doesn’t really say whether the Player object in the find pattern is the same Player as in the replacement pattern. In a sense, they are identical objects, so there’s no difference between relocating an object, and destroying and creating an object in a new place.

But this ambiguity does cause some problems. It’s impossible to animate a PuzzleScript rule without guessing. And rules like

`[ Player | Crate ] -> [ Crate | Player ]`

can backfire, as the Player will inherit the Crate’s movement tags and vice versa, when a naive reading suggests it simply swaps the positions of the two objects.

It’s also very hard to express some simple concepts. Matching on diagonals, counting operations, and pathfinding all require multiple rules, and often copy-pasting.

What I love about PuzzleScript is how well suited replacement rules are for simple 2d puzzles. The discrete nature of replacements and the logical evaluation order naturally matches the main properties of this sort of game.

The idea of treating motion as a state, rather than a change, is an incredible insight, and one I’ll be trying to apply elsewhere.

The rules themselves may be simple and powerful, but that doesn’t always translate to being simple to code. If you browse the gallery of games, you’ll find many with dozens, or hundreds, of rules. Combined with hidden state, loops and so on, they can be tricky to write and hard to debug. My feeling is that replacement rules are a powerful tool, but they can’t serve as the sole source of logic for a game. But I’ve yet to really see a system embrace that.

But I think we can all agree: PuzzleScript rules!

I’ve been doing procedural generation as a hobbyist for some time, and have become more and more involved: I make tutorials and projects, I sell a tool online for a niche algorithm, and recently I taught a “masterclass” at Everything Procedural, the main conference for professionals in the space.

I thought I’d spill some digital ink about what it’s actually about. I get asked often enough, and this will help me clarify my verbal answers.

I am not a naturally creative person, but I am a visual person. I like to *see* results. Procedural generation is a super easy area to get into, and pretty much immediately you can get your own results to play around with. There’s no right answer to what looks best, so there’s always more stuff to explore.

It’s a fertile area for research, too. I think a lot of techniques remain undeveloped, and the space has recently been challenged by generative AI, to the consternation of many.

I also am a big fan of roguelikes/roguelites, which are games that heavily rely on procedural generation.

I feel embarrassed sometimes when I’m asked about my hobby, because in many ways I’m not really a central example, and I’m doing a poor job representing it. I’m interested in weird things, and have too much of a focus on theory over practice. But I rub shoulders with lots of interesting people, including:

**Artists**

It’s particularly popular for demosceners, NFTs, and for interactive art exhibits.

**Game developers**

Some game genres *rely* on generation to get some desirable gameplay, but it’s also needed in most big games to simplify the process of authoring the massive amounts of content needed. It’s a common skill of technical artists.

**Academic Researchers**

There are a few dedicated arenas, but it tends to blend into broader categories of computer graphics, AI or “creativity in games”.

**Film VFX**

VFX studios rely on physical simulations a great deal, and also use procedural techniques to fill in details on scenes and perform animation.

**Hobbyists**

You can find plenty of communities online. Some programming and maths are useful, but not required.

While you can classify procedural generation in many ways, I generally like to break things down by the different techniques used to create things. I think this gives the best idea of the breadth of stuff to explore, and creators tend to have their specific speciality.

Personally, I classify things in these rough categories:

**Algorithmic**

Classic programming: you give the computer a series of instructions to follow. Example: Prim’s Algorithm, a computer science technique that can be used to design labyrinthine mazes.
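As a taste of the algorithmic category, here is a short Python sketch of randomized Prim’s algorithm carving a maze on a small grid (one common adaptation of the algorithm; details like the frontier structure are my own choices):

```python
import random

def neighbours(cell, width, height):
    """Orthogonally adjacent cells that lie inside the grid."""
    x, y = cell
    candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(cx, cy) for cx, cy in candidates if 0 <= cx < width and 0 <= cy < height]

def prim_maze(width, height, seed=0):
    """Return a set of passages (pairs of adjacent cells) forming a perfect maze."""
    rng = random.Random(seed)
    start = (0, 0)
    visited = {start}
    passages = set()
    # Frontier of walls: edges from a visited cell to a possibly unvisited one.
    frontier = [(start, n) for n in neighbours(start, width, height)]
    while frontier:
        a, b = frontier.pop(rng.randrange(len(frontier)))
        if b not in visited:
            visited.add(b)
            passages.add((a, b))
            frontier.extend((b, n) for n in neighbours(b, width, height) if n not in visited)
    return passages

maze = prim_maze(4, 4)
# A perfect maze on a 4x4 grid is a spanning tree: 16 cells, 15 passages.
```

Picking a random frontier wall instead of the cheapest edge is what turns the minimum-spanning-tree algorithm into a maze generator.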

**Functional**

You specify a mathematical formula, which is evaluated to get results. Example: Perlin noise, which gives a wobbly-looking pattern with many uses (it was first used in the original Tron film).
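The functional category can be illustrated with value noise, a simpler cousin of Perlin noise: pin down deterministic random values at integer points and smoothly interpolate between them. A minimal 1D sketch in Python (the hashing scheme here is just one arbitrary choice):

```python
import math
import random

def value_noise(x, seed=0):
    """1D value noise: random values at integer points, smoothstep-blended between."""
    def lattice(i):
        # Deterministic pseudo-random value in [0, 1) for integer point i.
        return random.Random(seed * 1000003 + i).random()
    i = math.floor(x)
    t = x - i
    t = t * t * (3 - 2 * t)  # smoothstep easing, as used in classic noise functions
    return lattice(i) * (1 - t) + lattice(i + 1) * t

# Pure function of its inputs: the same x and seed always give the same value.
```

Being a pure function is what makes this style so composable – you can sum noise at several frequencies, warp the input coordinate, and so on, all without any state.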

**Simulation**

Set up the initial parameters, create the rules for how things change, and watch it go. Physics simulations are everywhere, but these are also used for crowd dynamics, erosion, story generation and more.
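A classic minimal example of the simulation category is Conway’s Game of Life: set up an initial pattern, apply simple local rules, and watch it go. A compact Python sketch:

```python
from collections import Counter

def life_step(alive):
    """One step of Conway's Game of Life; `alive` is a set of (x, y) cells."""
    # Count how many live neighbours each cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in alive
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step if it has 3 neighbours, or 2 and was already alive.
    return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in alive)}

# A "blinker" oscillates between a horizontal and a vertical line of three cells.
blinker = {(0, 1), (1, 1), (2, 1)}
```

The generator never specifies the final output directly; all the interesting structure emerges from repeatedly applying the local rules.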

**Generative AI**

This is a newcomer, and people are still unsure where this lies in relation to the older techniques, particularly as it has raised some ethical objections. But there’s no denying that it is an extremely powerful way to create things, and one with a lot of undiscovered ground in maximizing control and behaviour.

Feel free to reach out to me on twitter. Or try Kate Compton’s essay, which has more details on everything I’ve discussed here.

There’s some tools out there that are fun ways to get started.

- Tracery – create random written text
- Generative AI – too many to list
- ShaderToy – Write simple programs that run directly on your GPU to create images
- Geometry Nodes – A Blender based tool for creating 3d objects and scenes

There are too many talented creators to list, but I’ll mention some who are more prolific about explaining their thoughts and processes – some great places to start:

- Mike Cook – automated game design
- Inigo Quilez – demoscener and engineer
- Amit Patel – tutorial writer & gamedev


GitHub lets you serve arbitrary files that are not part of the repository via its Releases feature. You can publish a cross-platform .dll, or the full source code, or both. I use this for DeBroglie.

Install instructions for users are “Download this and drop it in your Assets folder”.

This is extremely easy to set up, and works basically fine^{1}. You don’t get dependency management or much by way of versioning, but do your users really use those things?

Also, if you are deploying a compiled dll you cannot take advantage of conditional compilation, and you need a separate dll for the editor. You can have Unity components in a dll, but they are not interchangeable with the same components defined in a source file, due to how .meta files work, making them tricky to debug or work with.

I recommend this approach for fairly simple source distributions, or dlls that do not depend on Unity.

NuGet is the normal C# package manager. It’s more or less impossible to get this working with Unity. I only publish here for “dual-use” libraries that can be compiled free of Unity dependencies. Not recommended.

While convenient for users, publishing to the asset store is extremely tedious. Not recommended for free assets. It’s the only option for paid assets that integrates into the Editor, of course.

“Asset Packages” are the old style of package, used internally by the Asset Store. They’re more or less just a file with the extension .unitypackage, which contains a collection of files. They can be imported and exported from the Assets menu, and unpack into the Assets/ folder.

They’re pretty simple, but there aren’t good tools to work with them automatically. So they really have little advantage over a zip file. Not recommended.

The new style of packages is Unity Package Manager packages^{2}. This format is a bit flexible. It’s basically a directory with a package.json manifest, and optionally a recommended directory layout for other details.

The manifest contains the sort of metadata you’d expect a package to have. In particular, UPM is the only format that supports dependencies between packages^{3}, which is a must if you have a lot of projects that share code.
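For illustration, a minimal UPM manifest might look something like this (the names and versions are placeholders; Unity’s package manifest documentation lists the full set of fields):

```json
{
  "name": "com.example.mypackage",
  "version": "1.2.0",
  "displayName": "My Package",
  "description": "A short description shown in the Package Manager.",
  "unity": "2019.4",
  "dependencies": {
    "com.example.otherpackage": "2.0.1"
  }
}
```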

All files in the UPM package are dumped into a subfolder of the Packages/ folder. Annoyingly, Unity treats everything in the packages folder as an asset, meaning that other distributed files such as documentation need to be in folders marked with a ~ to get Unity to ignore them. It’s a bit ugly.

UPM packages can be loaded directly from the package manager. They can be loaded from disk (usually used when developing a package), from a tarball (a form of zipping, not dissimilar to .unitypackage distribution), or from a git URL.

The latter is how many developers like to deploy things. As long as you have the files pushed to GitHub (or another public git host), you don’t need to do any release process at all. You can just point users at the GitHub URL.

I think this works for ultra lightweight publishing, but it has some issues if you want to do a good job:

- It is conflating source files with release files. For a lot of things, that’s ok, but if you have any source preprocessing, or doc generation, it leads to trouble.
- It becomes awkward if the UPM package is not at the root of the git repo, as this is only supported from later versions of Unity.

I recommend this approach if you are looking for something hassle-free and the above are not dealbreakers.

Another option for loading UPM packages is via a “scoped registry”, which works a bit like an alternative Asset Store.

You can set up your own registry, but in practice it’s easiest to publish to OpenUPM, which already exists and comes with a nice website and CLI tool as added conveniences for users. It gives step-by-step instructions for how to install a package, which is good, as it’s not obvious.

OpenUPM has some additional requirements for publishing to their registry. But they’re things like having semver and a license, which you’ll probably be doing anyway.

OpenUPM expects you to use tags to indicate actual released versions, which I quite like, as it means not every push to GitHub is immediately released. But it is one more step of process.

Recommended.

Finally, we’re reaching the solution I’ve used for Sylves. I like OpenUPM, but I refuse to make my source repo conform to Unity’s layout. This turns out to be straightforward.

I have all my source files in git, on the main branch. When I want to release, I compile the documentation, preprocess source files, put together the package.json file, and so forth, and arrange everything in the UPM format in a separate directory. I then push that directory as a separate git commit, on a different branch, upm. That way I have complete control over what is published. I have a short Python script to do the whole thing.

(NB: the upm branch is an orphan so it does not share history with the main branch. They are separate, but live in the same repo).

Your choice probably depends on the size of project.

- Dead simple, no dependencies **→** share the files, e.g. in GitHub Releases.

- Project is complex, but you don’t want to waste time with release processes **→** use the recommended layout, push to GitHub, and share the GitHub URL.

- Gold standard **→** push to OpenUPM so it can be used via a scoped registry or a GitHub URL, potentially separating source and release layouts.

Not many developers actually use UPM, as it’s still new and the package manager is not as intuitive as dropping a file in the Assets folder. So consider offering both.

After publishing a few projects, I’ve realized there are some extra bits you should be aware of while working with Unity packages.

If you are using a method where the resulting files go in the user’s Assets/ folder, then users may not like Unity’s default behaviour of putting things in the root. They will want to move it to a subdirectory. Be careful to make your project relocatable. The main gotcha here is paths to resource files – use a subdirectory called Resources/, as these are all considered root paths by Unity regardless of where they are located.

Assembly Definition files give Unity instructions on how to build your package into a separate assembly from the rest of the users game. This can offer a small boost to compilation, and is organizationally neater. A few of my users asked me to set these up.

You’ll need a separate asmdef for Editor source code, and they have some configurable features that are useful. But largely they are create-once-and-forget.

If you are deploying dlls rather than source code, then you need to worry about platforms. A single .NET dll will already run perfectly on Mac/Windows/Linux. But make sure you build the dll targeting “.NET Standard 2.0”, as this maximizes the Unity build options that can use your .dll.
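If you build the dll with an SDK-style project, the target is one property in the .csproj. A minimal example (this uses MSBuild’s standard `TargetFramework` property; the rest of your project settings go alongside it):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>
```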

I’ve also made my library Sylves truly cross platform – it compiles separate versions for Unity, Godot and .NET. But how I managed that is a question for another article.

With Asset Packages (including via the Asset Store), when updating a package, Unity naively just dumps the new files, overwriting old ones. It has no mechanism for cleaning up old files. For source code, that’s crippling, as the old and new will define the same class.

I’ve usually left empty source files in place to avoid this problem. I delete them during major version changes.

I think UPM solves this problem, as the Packages/ directory is immutable and thus safe for Unity to delete. It would be useful if someone could confirm.

- Although see the caveat on deleting/renaming at the bottom of the article.
- Fresh from the Department of Redundancy Department.
- You cannot do dependencies from a git-referenced package to another git-referenced package for some reason. You need to use scoped registries for that.

Jules felt stung as he left the psych ward. He wasn’t bothered by the battery of tests they ran – after all, he’d just had a major head injury. Nor was he bothered by the extra time they’d spent holding him as they flew in a neuro specialist to try and understand his extraordinary condition.

No, it was the final remarks as he was approved for release. “Sure, he’s testing off the charts for mental aptitude, perhaps a tenfold gain in thinking speed and memory, but that’s not a reason to hold him in a ward. What’s he going to do, take over the world?”

This, Jules took personally.

Most science fiction, quite frankly, is silly nonsense.

Alfred Bester

A day later, Jules had read a healthy chunk of fiction on the subject of enhanced intelligence. *Flowers for Algernon*, Ted Chiang’s *Understand*, even films like *Limitless* at ultra fast playback. He knew science fiction is a worthless way of predicting the future, but what it did contain was *ideas*, some of which he could use.

His gifts seemed relatively modest compared with most fiction. He thought and perceived the world at an incredible rate, and was equipped with a memory to match. But his IQ was not changed, nor any measures of creativity or any other intellectual skills you might care to name.

But it would suffice. He was already planning the future as he turned in for the night.

All money is a matter of belief.

Adam Smith

The first goal was to make some money. That’s always useful for all sorts of purposes, and Jules’ plans already ran to many years. Jules was already pretty smart before the accident, so he reasoned he could just find 10 high skilled remote jobs, like a software engineer, and work them simultaneously. Realistically, he could probably replace more like a team of 15-20 developers – not only did he work 10 times faster, but he could skip all the time that team would have spent in meetings together, and he only had to read a document once, unlike a team who had to each absorb it separately.

But a couple of million a year seemed unambitious to Jules, particularly as he’d be doing far more work per day than most tolerate. Game shows like Jeopardy were also out of the question. Though Jules could easily memorise savant levels of information, he wasn’t thrilled by dedicating hours to memorising useless facts or calling such public attention to his gifts.

Finally, he followed the boring, but lucrative, path of day trading stocks. Every time a company posts news, computer algorithms and people race to react to that news, buying/selling stocks. Jules couldn’t outrace the algorithms, but they are typically stupid, looking for simple positive or negative sentiment in the article, keywords, and so on.

What the human portfolio managers do is read the news for wider implications – a new mining prospect means cheaper copper for other industries, or a political announcement will affect the exchange rate of a country’s currency. Jules was no expert at this, at least initially, but simply being able to outrun everyone to reacting on less straightforward news stories gave him an unbeatable edge. And the thing about finance is that a reliable advantage can be multiplied exponentially, as success qualifies you for larger and larger loans.

By the end of the week, Jules was satisfied he could find funding for projects of any size and had a nice cushion for what was to come.

“Let us cultivate our garden.”

Voltaire, Candide

A common theme in the stories he’d read is that of self-improvement. Could the freak accident that had gifted him these abilities be replicated, or even extended? Jules was unwilling to work with others, partly out of impatience, and partly from a residual fear that a three letter agency might take an interest in him.

So he started reading neuroscience textbooks himself. Jules was pleased to discover how quickly he could learn a new subject. He not only read material ten times faster but had much better retention. He had almost no need to revise material, and complex topics came to him very easily as he no longer struggled to keep many aspects of a difficult problem in mind simultaneously. Within a month, he’d covered enough of the material to appreciate quite how little is known about the brain. He didn’t doubt he could eventually do research in the area but was unlikely to make further progress without time-consuming experiments.

Not for the first time, Jules cursed his mortal limitations. At least half the fiction he’d read had mentioned artificial intelligences, which had options like cloning, purchasing hardware, or directly inspecting and improving their own mind. They were all out of the question for him. Reluctantly, he put this aside in favour of more straightforward means.

I must study politics and war, that our sons may have liberty to study mathematics and philosophy. Our sons ought to study mathematics and philosophy, geography, natural history and naval architecture, navigation, commerce and agriculture in order to give their children a right to study painting, poetry, music, architecture, statuary, tapestry and porcelain.

John Adams, Letters of John Adams

Jules started studying various subjects in earnest. He estimated it would take under a year to become a full expert in any given field, getting the equivalent of 10,000 hours of practice. And similar subjects would take much less time, given overlapping concepts and skills. Some subjects, like law, are heavily memory-based, and didn’t even require much time beyond reading the material.

Many people manage to master 2 fields in their lifetimes, so he could reasonably expect to eventually be able to master about 20 before even his memory would be taxed too far. But Jules didn’t intend to master anything. Usually, he’d study enough of a subject to get a clear idea of the broad strokes, trends and key blockers before moving on. True expertise simply wasn’t worth it, as when more specific knowledge was needed, he could likely learn it faster than whatever situation called for it.

Jules rarely picked up formal qualifications. He could pass the exams, but attending courses or acquiring a fixed period of on-the-job experience was just too slow to consider. Instead, he focussed on areas lacking professional bodies, or ones that were unusually meritocratic.

One area of unexpected ease was academia. Submissions are usually judged blind to the true author, making it much easier for better ideas to win out regardless of the author’s background or tenure. Jules found that even with medium levels of expertise in multiple fields, he could pick up a lot of quick wins with cross-disciplinary work, or by applying techniques from other areas.

By the end of the year, Jules was a jack of all trades, at least for theory/knowledge-heavy subjects. He’d studied economics, politics, history, sociology, medicine, law, maths, rhetoric, and so on.

A diplomat is a person who can tell you to go to hell in such a way that you actually look forward to the trip.

Caskie Stinnett

In the last year, Jules had rarely interacted with anyone directly. Everyone spoke too slowly! He preferred to use private tutors that he communicated with via email so he could multitask. But Jules’s plans needed a public face, so he set out to improve his ability of persuasion as much as possible.

To his surprise, despite his year of isolation, he was already a very charismatic speaker. Having plenty of time to think in between responses had yielded many benefits. Jules never stuttered or misspoke, had a quick wit and always remembered an apposite quote, anecdote or citation. If necessary, he could lie very convincingly, having a larger well of details to draw upon and plenty of time to reflect on the internal consistencies. Persuasion itself is a skill, and though Jules couldn’t practise it at nearly the same speed as academic subjects, he saw measured improvement, aided by coaches and trainers.

As he entered his career in politics, his tremendous depth of background and quick grasp of any new situation started to pay off. Most politicians are torn between a dozen crises and opportunities, and their real skill is in the art of politics themselves – they’re rarely able to actually make more than a cursory understanding of the actual issues they encounter. Jules was the real deal – able to go toe to toe with subject experts, run rings around his opponents in debates, and most crucially, he was often right. Add to this his inexhaustible bank account for campaigning and his career was off to a flying start.

Jules could no doubt have won the presidency with time, luck and persistence, but that was not thinking big enough for Jules. Presidents are still at the will of the people, and there are many checks and balances on their power. Instead, he followed a path well known to history – demagoguery. Having established himself as a major leader and thinker, he started to align his rhetoric with a movement. Let’s not discuss exactly what Jules said, or about who. We’ve seen this story before and it’s not pretty.

I have nothing but contempt for the kind of governor who is afraid, for whatever reason, to follow the course that he knows is best for the State

Sophocles, Antigone

By stirring resentment and prejudice, Jules became the head of a movement of anger and was swept through to power with enough of a majority to loosen and unbind much of the constitutional limitations and avoid serious deadlocks in government. Perhaps he’d have to find a stooge to replace him as term limits wore on. Jules had no particular skill of judging character but found it easy enough to police his allies via direct monitoring.

Most dictators are not good at their jobs, to the extent they even try. The skills of running an uprising are not the same as running a country, and they now have to worry about maintaining power. Democratically elected politicians at least have to pretend to improve the lives of their electorate, but dictators only need a few generals and politicos on their side, and let their countries languish.

But Jules was different. He had the skills and the indefatigability of someone who rarely finds themselves stretched. Working with a suite of aides and experts, he used his political capital to push through a large suite of sweeping reforms and amendments. It wasn’t hard to recognise good ideas when you have a good understanding of the subject, a set of experts to draft documents, and you’re willing to steal good ideas from other nations. Jules found many obvious changes had been neglected simply from political gridlock.

Slowly, but surely, he began to make real headway on social ills and economic lassitude. He transformed an already rich and powerful country into the envy of the world. But here, Jules found himself running aground. As a talented and omnipresent administrator, he could run his government effectively. But a government is composed of tens of thousands of employees, many having devoted their lifetime to expertise. At this scale his own productivity was dwarfed by those he was responsible for.

His most effective decisions had become deciding who to recruit and promote, which he found he was barely more skilled than average. The feedback loop on hiring decisions is slow enough he could only gain experience at a normal rate, and it’s a skill not well transferrable by writing.

Taking over the world from here would be a slow grind of being a fractionally better head of state than others. Perhaps with the right opportunities, he could join countries in empire or union, but even Jules’ stubbornness had limits.

Jules decided to enter retirement and retreat again from the public world.

The more sand has escaped from the hourglass of our life, the clearer we should see through it.

Niccolò Machiavelli

Jules found something strange was happening to him. He’d experienced over 200 years of subjective life, double that of anyone else in the world, and without any of the decline or fixity of old age, or even the bliss of forgetfulness. He already knew he’d been set apart from humanity from the start, but he hadn’t appreciated how much further he had drifted.

He turned over in his mind his changing perspective on the world. News events no longer shocked him but seemed obvious repetitions from history. Talking with others seemed like talking to children, clumsily retreading arguments he’d already heard before and considered in depth. Entertainment lost its charm as he saw the influences and inspirations clearer than the original authors. The amount of novelty in the world shrank to less than a teaspoon.

The overall workings of the world were starting to form a pattern in his mind. For a time, he worked on a philosophy text explaining it, until he found an insurmountable difficulty. The average human can hold around seven distinct objects in their working memory at once, while Jules could hold over fifty. There were some concepts he’d come to understand that simply couldn’t be digested by anyone else. By the time they’d read the last page of Jules’ treatise, their grasp of the beginning had already smudged enough that it couldn’t be put together into a coherent whole.

Such concepts are not uncommon in maths. The Monster Group, for example, is a fundamental indivisible object with as many elements as there are atoms in the solar system. It cannot be apprehended fully by anyone, and cannot be simplified into smaller elements. But Jules was proposing he knew an important truth, secret to only himself. People began to turn on him, dismissing what they couldn’t understand.

Jules felt more alone than ever. Retreating further from public life, he lived an ascetic life for several years as he considered ideas inscrutable to modern men. Had he been right to leave his power and fame? He could have shepherded humanity even further or crossed the line into war and violence. Or he could have enjoyed his position more, becoming a self-interested despot too wily for anyone to dispose of. What did he want in life? What purpose did any of it have? He forged his own answers to these questions, reaching lofty conclusions no one else on earth could have found.

He understood that everyone is a product of their past and their environment in totality. As an over-evolved ape, he could never escape the more fundamental and base aspects of his nature, and there were limits of thought even he could not pierce.

Ultimately, he found the position in life that best suited his nature, all things considered. It was a plan for him alone. Each person must find their own path, considering both their shared human heritage, and their personal quirks. And so, Jules spent the rest of his days quietly, as a keeper of bees.

]]>There’s been a lot of clamour about generative AI for images, like Midjourney or Stable Diffusion. It’s killing creative jobs and whole industries; it’s illegally using copyrighted data for training purposes; it’s eroding the nature of art itself. I’m sure there are many out there who would be happy to see an outright ban on AI image generators and the like.

On the other hand, it’s undeniable that this is a valuable technology. Not just for the corporations making them, but of benefit to the world. Sure, every artist unpaid is someone else’s money saved, but as the costs of art fall, that democratises everything around art. A friend of mine made personalised Christmas cards this year, a small joy of the world that simply would not have occurred before. I co-wrote a custom murder mystery with ChatGPT in barely more time than it took to play. The lowering skill bar for indie comics and games is something I hope leads to a profusion of new original things, much as digital art and game engines have in the past.

How can we resolve these things, to have our cake and eat it? Well, society has faced this problem before and has found a solution that, though imperfect, has endured for centuries.

I’m referring of course to **copyright**. Copyright was invented to deal with a similar tension: on the one hand, if all creative acts were free to use, share and remix, that would have valuable benefit to the world. On the other, creators deserve to be paid for their works and need to be incentivized to create them in the first place.

Copyright gives creators **a time-limited right to prevent others copying their work**. For a set period of time, they can benefit from their creation exclusively or sell on the rights. But after that period, it enters the public domain and the world gets the rest of the benefit. It elegantly splits the value of a creation between the creator and the public, in a way that doesn’t require a fraught process of valuation, or even that much administration.

As a system, it is a bit rough around the edges. There isn’t consensus on how long the exclusivity period should be, and exactly what is/isn’t copying has a vast unlegislated gray area (more on this later). But as a framework, I think it holds up.

The shortcomings of copyright law have been brought into starker focus with generative AI. It wasn’t really designed to protect against the sort of use that AI represents, and of course, there is very little case law establishing the boundary. Does training an AI count as “transformative”? Does it matter how accurately the original image can be reproduced? It will take time for a legal consensus to form.

What we need are new laws that will provide better guidance in this area. I don’t want to ban generators entirely, they have too much potential. But neither do I want to cause mass technological unemployment, and hand over humanity’s creative destiny to the machines.

So here’s what I’m thinking.

We make a new right, **trainright**, to supplement copyright. Like copyright, trainright is the right to use a particular creative work in certain ways. It’s much more limited than copyright – it’s designed so that **it can only be used in training AIs**, and only in such a way that the original image is subsumed into the gestalt of the AI, and **not directly accessible**.

Like copyright, trainright is granted to the original creator of a work, can be licensed out, and after X years, it expires. Anything older than that is fair game for training. But it might still be in copyright for a longer period of time. I.e. the piece is only available to the world as a general gist – the specifics are still protected.

This is not a new idea – most legal systems already have many distinct rights/licenses that are associated with a work. An instructive one is the “**mechanical license**“. Mechanical licenses were created in the 1900s to deal with player pianos, an automatic piano that could play a specially encoded track of music. There was discussion about whether using a machine to reproduce the sounds of sheet music constituted a copyright violation. Sound familiar?

In the US, the Supreme Court ruled that player pianos did not violate copyright^{1}. Shortly after, the mechanical license was established in law. It allows anyone to get certain rights to a musical score, such as creating cover songs or sampling it, but is much more limited than copyright. If we’d relied solely on copyright to protect the music industry, much of our existing culture of music could not exist.

There are differences, of course. Mechanical licenses are compulsory licenses, meaning you must pay to get the right, but the rightsholder cannot refuse. I’m proposing that trainright works more like copyright: the rights can be gained for free after waiting long enough. The number of images in a typical training dataset makes seeking individual permissions impractical, so a time limit is the only option.

Another concession to practicality: Trainright, unlike copyright, should **expire in a fixed number of years after the publication of the image**. Let’s say twenty years. That means that every image currently on the internet today is guaranteed to have expired trainrights in 20 years. Or equivalently, any dump of the internet older than 20 years is out of trainright, and safely can be used for training without legal problems. Otherwise, the requirement of verifying that every image in your dataset is safe to train on would be prohibitive.

I think wording like the following could serve as a draft for the idea of trainright:

If you have trainright, then you may incorporate the work into a larger work (such as a trained AI model) provided that the author has reasonable confidence that the work is mixed in sufficiently that a substantial proportion of it cannot be accurately recovered without specific, directed action or prompting.

The key distinction here is “specific, directed action or prompting”. I think an image generator that produces a copyright-infringing image when prompted “Nintendo’s Mario” is nothing to worry about – a human had to drive that infringement, and it can be dealt with by usual copyright law. On the other hand, an image generator that produces the same image when prompted “Italian Plumber” goes too far, infringing without a user’s intervention, and can be considered to infringe trainright. Generators at present usually fall into the second category, but I do not think the technical distance is too far to cross.

To address some obvious questions:

**Q: But AI is bad and deserves to die!**

I’d encourage you to think *why* generative AI is bad. The fact that the quality of the output is poor is neither here nor there – if it was so awful no one wanted it, then there wouldn’t be a problem in the first place. The world would be a different place if cheap inferior products were banned, and I don’t think a better one. I’m sure the quality will improve in the future anyway, why not plan for then?

So the badness of AI must stem from it hurting some people, which can only be creators who are having their rights breached and livelihoods threatened. This proposal addresses both those cases. Is it a perfect deal for them? No, probably not. Like copyright, it is a compromise. Perhaps I’ve set the duration of trainright too low, and the balance should be set elsewhere. But I refuse to give up on generative AI altogether. History has shown that labour-saving technologies like this cannot be indefinitely resisted, even if they lead to quality drops.

**Q: Enforcement of this would be asymmetric leading to unfairness**

Probably true. No system of justice can do away with this complaint. But I would observe that copyright has worked out very well in this regard, and I think it would carry over to trainright. By and large, corporations are large sue-able entities who need to be scrupulous about getting licences for everything they do. Meanwhile, fan art and fan fiction have flourished (despite being copyright violations in most cases) in part because it is impractical to stop. Copyright law is absolutely asymmetric, but in a way that aligns closer with people’s notion of fairness than the actual letter of the law!

**Q: Your wording leaves a lot of room for ambiguity and loopholes**

Yes. Perhaps you can improve on it? Better attempts have been made elsewhere, such as Japan’s law on Generative AI.

But I think laws are generally not that precisely drafted anyway. The exact boundaries are explored through individual cases. AI has clearly yet to reach its final form, there’s no point pretending we’ve got all the answers.

**Q: You’ve really focussed on image generators, does this extend to other AI?**

Well, certainly other “content” generators like music and video. I’m less sure about writing – a lot more of the value of LLMs and code generators relies on them being up to date. So this proposal is a narrow fix for one of the most salient short term criticisms. AI is a huge subject to cover. Society still has to deal with the more subtle issues, such as the use of algorithms replacing human decision making or existential risk from AGI.

**Q: Generative Art can’t really create anything, it can only copy.**

This law is a great way to test this statement. Trainright is only granted to the extent that you are reasonably confident that your model is not incidentally copying a specific example. So if generative AI turns out to be fundamentally incapable of remixing its training data in a general way, then this proposal effectively becomes a ban on AI, providing much needed clarity on copyright’s ambiguous stance.

- Though performance royalties still applied.

Watabou’s Cave Generator is one in a series of RPG-ready map generators that Watabou has created over the years. All his work oozes style, but the cave generator was always the one I found most mysterious.

I discovered that the entire thing was exported with extremely readable javascript, so naturally I started to poke and prod. Let’s go over how it works.


We’ll focus on how this map is generated for version 2.0.2:

The main things to initialize are the tags. These are randomly picked, but can also be set by the user.

Tags control the later generation steps. Sometimes they switch between different algorithms, but more commonly they act as presets for numeric parameters. I’ll refer to tags `like this` when they are referenced. Some tags are mutually exclusive – e.g. only one of `small` / `medium` / `large` can be chosen.

Some other global parameters are randomly chosen at the start, such as chamber size and connectedness preference for area growth. This gives each dungeon a more consistent feel across the map.

Like all procedural generators, there is a seed parameter that controls all sources of randomness. This enables permalinks for maps by storing the seed.

It’s not obvious, but in fact everything in the generator is done on a hex grid. Let’s see the final output again, this time with all the graphical details turned off – no wobbly lines, no random rocks, no water, no superimposed square gridlines.

Now you can see the actual specifics of the map. But how is the layout actually achieved?

After creating the hex grid, it is immediately converted into a data structure called a Doubly connected edge list, which is essentially a fancy way of representing a mesh of polygons and their neighbours. Everything in the generator operates on this mesh data structure – it’s not tied to hexes at all. I was able to hack in square grids without much difficulty, the author just didn’t find a use for this feature.

The majority of the map is generated with an algorithm called seed growth^{1}. It picks a random starting location and then adds adjacent cells at random until a predetermined size has been met. This is then repeated to give several areas. Areas are not permitted to touch each other.

Most of the details of the generation are determined by tags:

The number of areas is 9-19 for a `large` map, 3-8 for a `medium` and 2-3 for a `small`. If no tags are present, there is a 1 in 20 chance of picking 2 or 20; otherwise it uses a random formula.

The size of each area is also a random function. For `hub` generations, there is one area of size 60-79, and other areas of size 8-13. For `chamber`, a random chamber size is chosen between 11-14, and then each area is randomly offset from that by 0-2. And for `burrow`, it uses the formula `10 + 80 * pow((rand() + rand() + rand()) / 3, 3)`, which gives rooms of around size 23, but occasionally much larger.

When choosing which cell to add to the area, candidates are weighted according to various algorithms. By default, each cell is weighted as `pow(c, gamma)`, where `c` is the number of adjacent cells already in the area, and gamma is a random variable indicating the preference for connection. High gamma tends to produce rounder areas with fewer jutting pieces. `cavities` fixes gamma at 6, while `coral` uses a negative gamma so each area prefers narrow tendrils. `chaotic` prefers cells with 2 connections.
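The growth-and-weighting loop described above can be sketched in Python. This is a toy version on a square grid with names and defaults of my own invention (`seed_growth`, `NEIGHBOURS`); the real generator runs on its hex-derived mesh:

```python
import random

NEIGHBOURS = ((1, 0), (-1, 0), (0, 1), (0, -1))

def seed_growth(start, target_size, gamma=2.0, rng=None):
    """Grow an area from `start`, adding one adjacent cell at a time.

    Each candidate is weighted pow(c, gamma), where c is how many of its
    neighbours are already in the area (higher gamma -> rounder areas).
    """
    rng = rng or random.Random(0)
    area = {start}
    while len(area) < target_size:
        candidates = {}
        for (x, y) in area:
            for (dx, dy) in NEIGHBOURS:
                cell = (x + dx, y + dy)
                if cell not in area:
                    c = sum((cell[0] + ex, cell[1] + ey) in area
                            for (ex, ey) in NEIGHBOURS)
                    candidates[cell] = c ** gamma
        cells = list(candidates)
        area.add(rng.choices(cells, [candidates[c] for c in cells])[0])
    return area

area = seed_growth((0, 0), 20, gamma=6)  # high gamma, like `cavities`
```

Repeating the call with different start points (and rejecting cells that touch an existing area) gives the multiple disconnected areas described above.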

Now it’s time to connect up all the areas. First, the algorithm identifies all cells that border two areas.

The possible area-pairs are then culled. For `tree`, a simple depth first search ensures each area is only reachable a single way (no loops). For `connected`, if it finds 3 areas all adjacent, it disconnects one pair. Otherwise, all pairs are kept.

For each pair, a single door cell is randomly chosen from amongst the possibilities and added to the map.

Some areas are randomly chosen to shrink into corridors. The selection prefers areas with a large number of doors, and some tags, particularly `burrow`, greatly increase the chances.

To shrink an area, points are repeatedly removed, and flood-fill is used to ensure that the doors are all still connected, which I describe more in Chiseled Paths Revisited.
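A minimal sketch of that shrink-and-check idea, again on a square grid with hypothetical names (`chisel`, `keep`):

```python
import random

NEIGHBOURS = ((1, 0), (-1, 0), (0, 1), (0, -1))

def flood_connected(cells, doors):
    """True if every door is reachable from the first door through `cells`."""
    seen, stack = set(), [doors[0]]
    while stack:
        cell = stack.pop()
        if cell in seen or cell not in cells:
            continue
        seen.add(cell)
        stack.extend((cell[0] + dx, cell[1] + dy) for dx, dy in NEIGHBOURS)
    return all(d in seen for d in doors)

def chisel(cells, doors, keep, rng=None):
    """Shrink an area by removing random cells, undoing any removal
    that would disconnect the doors from each other."""
    rng = rng or random.Random(0)
    cells = set(cells)
    order = [c for c in cells if c not in doors]
    rng.shuffle(order)
    for c in order:
        if len(cells) <= keep:
            break
        cells.discard(c)
        if not flood_connected(cells, doors):
            cells.add(c)  # undo: this cell was load-bearing
    return cells

grid = {(x, y) for x in range(5) for y in range(5)}
corridor = chisel(grid, doors=[(0, 0), (4, 4)], keep=8)
```

The repeated flood fill is crude but cheap at these map sizes, and it guarantees the doors always stay mutually reachable.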

At this point, exits are chosen too, which are randomly selected from eligible border cells. Cells further from the center get higher weights. The exact number of exits can be controlled with tags `sealed`, `entrance`, `passage`, `junction`.

The water is simply an independent Perlin noise, which is compared against a threshold. It’s more obvious in v2.1.0, where you can play with a slider.
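The water placement can be imitated with any smooth noise function. The sketch below uses simple value noise as a stand-in for Perlin, with made-up frequency and threshold constants:

```python
import math
import random

def value_noise(x, y, seed=0):
    """Smooth 2D noise: pseudo-random values at integer lattice points,
    smoothstep-interpolated in between (a stand-in for Perlin noise)."""
    def lattice(ix, iy):
        # deterministic value in [0, 1) per lattice point
        return random.Random((ix * 73856093) ^ (iy * 19349663) ^ seed).random()
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    sx, sy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)
    top = lattice(ix, iy) * (1 - sx) + lattice(ix + 1, iy) * sx
    bottom = lattice(ix, iy + 1) * (1 - sx) + lattice(ix + 1, iy + 1) * sx
    return top * (1 - sy) + bottom * sy

# a cell is water wherever the noise exceeds a threshold
water = [[value_noise(x * 0.3, y * 0.3) > 0.6 for x in range(20)]
         for y in range(10)]
```

Lowering the threshold floods more of the map, which is exactly what the v2.1.0 slider does.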

The hexagonal nature of the walls is hidden via several steps:

- Every edge is subdivided
- The outline is smoothed (“bumpiness”)
- Vertices in corridors are moved towards the face center.
- The points are randomly offset (“irregularity”)
- Two more iterations of subdivision and randomness (“roughness”)
- The boundary is converted to curves with Chaikin’s Algorithm (not shown in gif)
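The last step, Chaikin’s corner cutting, is short enough to sketch in full (generic Python, not the generator’s actual code):

```python
def chaikin(points, iterations=2, closed=True):
    """Chaikin corner cutting: replace each edge (p, q) with two points
    a quarter and three-quarters of the way along it."""
    for _ in range(iterations):
        new = []
        n = len(points)
        for i in range(n if closed else n - 1):
            (x0, y0), (x1, y1) = points[i], points[(i + 1) % n]
            new.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            new.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        points = new
    return points

smooth = chaikin([(0, 0), (1, 0), (1, 1), (0, 1)])  # square -> rounded shape
```

Each iteration doubles the point count and rounds off every corner, which is why it reads as a smooth curve after just a couple of passes.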

Some random small polygons are dropped on the map to make stones and rubble. Dyson hatching is applied near the boundary.

The title of each map is generated using a Tracery script. Many features contribute to the choices – “demonic” names are more likely for certain tags, and the presence of water increases the chance of damp names like “Bog”.

The top-level rule gives you an idea of what it’s like:

```
"name" : [
"#compound_noun# #noun#",
"#adj# #noun#",
"0.2?-#noun# of #fantasy#",
"!LARGE&0.2?-#person#'s #noun#",
"!SMALL&0.1?-#noun# of #epic_noun#"],
```

Glade mode re-purposes the entire generator for making forest clearings. The main differences are the trees randomly drawn along the border, and the name generator switches to a different script. I love how you can switch it on and see the same map completely re-interpreted.

Oleg’s generator illustrates a common maxim in procedural generation. Great results don’t come from applying a super advanced algorithm, but rather by combining several simple rules effectively, and with an eye to the look and style you are aiming for.

In my opinion, some of the rules used here, notably seed growth and path chiseling, offer a nice balance of simplicity to useful results and are generally underappreciated as techniques.

- We’ve seen “seed growth” before in my Binding of Isaac article. But there’s no widely known name for this technique that I’m aware of.

I’ve recently seen a lot of demonstrations of why the decimal 0.999… equals 1.

These endlessly circulate the internet, simply because the simple explanations aren’t really compelling. You see smart people responding “can’t you just…” or left unconvinced by bare assertion.

The truth is that dealing with these things is actually a lot more complex than a glib twitter answer. You *should* feel uneasy with these explanations. This same subject confused mathematicians of earlier centuries, leading to awkward theories like “infinitesimals”, which ultimately fell out of favour.

I’m going to take you through a proof that 0.999… = 1, with **rigour**. Rigour is a term used in maths for building from a solid foundation and proceeding from there in sufficiently small steps. Thus, the majority of the article is **not the proof but the definitions**. How can we talk about infinity in a way that makes sense? The trick, as we’ll see, is to only talk about finite things we already understand, and define infinity in terms of those.

This article is aimed at those with high school level maths. There’s a proof halfway down, but it’s skippable.


Let’s start with a simple scenario. It’s necessarily unrealistic, but maths was never concerned with actual reality, just what follows from the rules set out.

Suppose there is a large red jar. Every minute, another \(10\) litres of water is poured into the jar. The jar magically grows larger, and the source of water never stops or runs out. How much water is eventually in the jar?

Well, it’s basically a nonsense question. What does “eventually” mean? We haven’t defined that yet. But here are some true facts that I can say about the jar.

- After \(10\) minutes, there are \(100\) litres in the jar.
- After \(100\) minutes, there are \(1000\) litres in the jar
- After \(T\) minutes, there are \(10T\) litres in the jar.

These are all facts involving finite amounts of time and finite amounts of water. They are straightforwardly provable with the tools we already have, basic arithmetic. That last statement is true for all \(T\), but resist the urge to think of it as an infinite set of statements – it’s one statement, that has a variable, \(T\), in it. It too is also provable with arithmetic.

Now we’re going to play a stupid 2 player game. Here are the rules: On your turn, name a number, \(N\). Then on my turn, I name a time, \(T\). Finally, you name a time \(U\) that is greater than \(T\). If there are less than \(N\) litres of water in the jar at time \(U\), then you win. Otherwise, I win.

It should be clear, this game is rigged. I can always win. No matter which \(N\) you pick, all I have to do is pick \(T=N/10\). At that time, there will be \(N\) litres in the jar, and it only increases from there, so no choice of \(U\) works – I will win.

In other words, there is no **upper** **bound** on the amount of water in the jar. Any bound you might suggest will definitely get shattered at some later point. We call any number \(N\) where you definitely win an upper bound. Any number where I can win is not an upper bound.

We’ll say any such process that increases without limit **“tends to infinity”**. Infinity, in this definition, is not a number at all, it’s just a description of a sequence of numbers. And the description itself only involves finite numbers, so our definition is solid.

Clearly, not all sequences “tend to infinity”. If we stopped filling the jar after an hour, then it’ll never have more than \(600\) litres of water, so it has an upper bound. And you could win the game by saying \(601\) litres.

Let’s look at another jar, with another process for filling it.

This time, there’s a large blue jar. It’s still being filled with water, but differently. The first minute, half a litre of water is added. The second minute, a quarter litre is added, and the third an eighth. Each subsequent minute, we pour in half the amount we poured in the previous minute.

Can we say that this sequence also “tends to infinity”? The answer is no, but we’ll have to do some maths to prove it.

We’ll use a sequence of variables \(x_1, x_2, \cdots\) to stand for the amount of water in the jar after minute 1, minute 2, etc. We can write \(x_i\) to stand for the amount of water in the jar after minute \(i\).

So

\[

\begin{align*}

x_1 &= \frac{1}{2}\\

x_2 &= \frac{1}{2} + \frac{1}{4}\\

x_3 &= \frac{1}{2} + \frac{1}{4} + \frac{1}{8}\\

\vdots &

\end{align*}

\]

We can summarize this as:

\[ x_i = \frac{1}{2} + \cdots + \frac{1}{2^i} \]

Or equivalently \( x_i = x_{i-1} + \frac{1}{2^i} \), which makes it clear that we are starting with the previous volume of water (\(x_{i-1}\)) in the jar, and adding to it.

But that doesn’t really tell us the precise value unless we want to do a lot of sums. Instead, I’m going to prove that \( x_i = 1 - \frac{1}{2^i} \), using a proof by induction. You can skip the proof if you are intuitively happy that each minute, we pour in water equal to half the remaining space in the jar, which is sized to fit one litre of water.

**Theorem**:

If \( x_1 = \frac{1}{2}\) and \(x_i=x_{i-1} + \frac{1}{2^i}\), then

\[ x_i = 1 - \frac{1}{2^i} \]

**Proof**:

Proof by induction works in two steps. First, we prove the base case, \(x_1\). It is clear that \(x_1 = \frac{1}{2} = 1 - \frac{1}{2^1}\). Second, we prove the inductive case. We assume it is true for \(i-1\), and then seek to prove that it is true for \(i\).

By assumption: \(x_{i-1} = 1-\frac{1}{2^{i-1}}\)

So we know that

\[

\begin{align*}

x_i &= x_{i-1} + \frac{1}{2^i} \\

&= 1-\frac{1}{2^{i-1}} + \frac{1}{2^i} \\

&= 1-\frac{1}{2^i}

\end{align*}

\]

Which completes the inductive case.

So we know that \( x_i = 1 - \frac{1}{2^i}\) for \(i=1\) (the base case), and that if it’s true for \(i=1\), it’s true for \(i=2\) (the inductive case), then if it’s true for \(i=2\), it’s true for \(i=3\) (induction again), etc. So we know it’s true for any value of \(i\), and the theorem is proved.

You may be thinking this proof is in some sense sneaking in an infinity. We jumped from knowing a fact about specific values of \(i\), to knowing a fact about all \(i\). Well, you’d be right. This process of induction is an “axiom”, something you just have to accept if you want to do productive maths. You don’t have to accept it, but then you end up with different, and usually more boring conclusions. But remember, to prove the theorem for any given i, you only need finitely many uses of the inductive case. So as with the red jar, we’ve dealt with the entire range of values by only considering finite work for each value.
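If you’d like to check the closed form numerically, exact rational arithmetic makes it easy. A sanity check, not a substitute for the proof:

```python
from fractions import Fraction

x = Fraction(1, 2)  # x_1 = 1/2
for i in range(2, 21):
    x += Fraction(1, 2**i)             # x_i = x_{i-1} + 1/2^i
    assert x == 1 - Fraction(1, 2**i)  # the closed form from the theorem
```

Using `Fraction` rather than floats means every comparison here is exact, in keeping with the article’s finite-only rules.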

End of proof, start reading here if you are skipping the maths.

So we’ve established that the amount of water in the blue jar gets closer and closer to 1 litre as time goes on. We could say that 1 litre is an upper bound. If we played that stupid game from before, you could name 1 litre, and I would be stumped, there’s no time where the water level exceeds that. So you’d win.

Thus, the blue jar sequence does not “tend to infinity”. This is despite constantly pouring in more water!

Instead, we’ll play another stupid game. In this game, you’ll pick a number \( \varepsilon \) ( \( \varepsilon \) is the Greek letter epsilon, which is traditionally used in maths to represent variables for this purpose). Then I’ll pick a time \(T\). Then, you pick a time \(U\) which is greater than \(T\). If at time \(U\), the distance between the amount of water and 1 litre is less than epsilon, then I win. Otherwise, you win.

Again, this game is rigged. There’s no value of \(\varepsilon\) that works. I can always pick \(T = \log_2(\frac{1}{\varepsilon}) + 1\). At that time, the amount of water will be \(1-\frac{1}{2^T}\), which works out as \(1 – \varepsilon/2\). The distance of that to 1 is \(\varepsilon/2\), which is less than \(\varepsilon\). Later values of U will have values even closer to 1.
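The winning strategy above can be checked numerically. This uses floats, so it’s an illustration of the move rather than a proof:

```python
import math

def winning_T(eps):
    # my move in the epsilon game: T = log2(1/eps) + 1
    return math.log2(1 / eps) + 1

eps = 0.001
T = winning_T(eps)
water = 1 - 2.0 ** (-T)      # blue jar level at time T: 1 - 1/2^T
assert abs(1 - water) < eps  # within epsilon of 1, so I win
```

Whatever epsilon you pick, the same one-line formula produces a time that beats it.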

When I can always win this game, we say the sequence “**tends to 1**“. Again, this is a description of a sequence of numbers, perfectly valid in a finite-only world.

Ok, I think we’re finally ready to tackle decimal numbers. Imagine we live in a world where only fractions have been discovered. There is decimal notation, like \(0.123\), but that is understood as just a shorthand for \(\frac{123}{1000}\). There are no recurring decimals, or decimals with an unending number of digits.

But what we do have, is sequences. Here’s one sequence:

\[

\begin{align*}

x_1 &= 0.3 & =& \frac{3}{10}\\

x_2 &= 0.33 & =& \frac{33}{100}\\

x_3 &= 0.333 & =& \frac{333}{1000}\\

\vdots

\end{align*}

\]

It doesn’t take long to recognize that this sequence works very similarly to the blue jar sequence from above, and to get

\[ x_i = x_{i-1} + \frac{3}{10^i} \]

We can adapt the proof we used above, and show that this new sequence tends to \(\frac{1}{3}\). That is, we can play the epsilon game, this time comparing distances of the sequence to the value \(\frac{1}{3}\). I’ll always win, as the sequence gets arbitrarily close.

We’ll call this sequence “\(0.\dot{3}\)”, pronounced “zero point three recurring”.

There are lots of other sequences. Here’s one: \(x_1=5, x_2=5, x_3=5, x_4=5\cdots\) I shouldn’t need to tell you that this sequence tends to \(5\).

Here’s another sequence: \(x_1=3, x_2=3.1, x_3=3.14, x_4=3.141, x_5=3.1415\cdots\) You’ll have to take my word there’s a straightforward, if long, definition for elements in this sequence. Unlike my other examples, there’s no fraction that this process tends to, so we’ll just call the sequence \(\pi\).

There’s also sequences that don’t settle down, like \(x_1=1, x_2=0, x_3=1, x_4=0, x_5=1\cdots\). We’ll ignore these for now. There’s another epsilon game we can play that can be used to filter out sequences like this, but I won’t go into it.

It turns out there’s a lot we can do with sequences. If we’ve got two sequences \(x_i\) and \(y_i\), we can define a new sequence \(z_i = x_i + y_i\). It turns out that if \(x_i\) tends to a fraction \(A\), and \(y_i\) tends to a fraction \(B\), then \(z_i\) will tend to \(A + B\). You can try and prove this yourself as an exercise, but it can be found in plenty of textbooks.
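As a quick illustration of that addition fact (an example, not the textbook proof), take \(x_i\) tending to \(\frac{1}{3}\) and the constant sequence \(y_i = \frac{1}{2}\):

```python
from fractions import Fraction

# x_i = 0.3, 0.33, 0.333, ... tends to 1/3; y_i is constantly 1/2.
def x(i):
    return Fraction(10**i - 1, 3 * 10**i)

def y(i):
    return Fraction(1, 2)

# z_i = x_i + y_i should tend to 1/3 + 1/2 = 5/6.
for i in range(1, 10):
    distance = abs(Fraction(5, 6) - (x(i) + y(i)))
    assert distance == Fraction(1, 3 * 10**i)  # shrinks towards zero
```

The distance to \(\frac{5}{6}\) shrinks by a factor of ten each step, so the sum sequence tends to the sum of the limits, as claimed.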

You can make equivalent operations for all the basic arithmetic operations, like subtraction and division. In every case, if we work with sequences that tend to some fractions, and do some operations on those sequences, the result will tend to the fraction you’d get if you did the same operations on the starting fractions. So in some sense, these sequences behave identically to their fraction counterparts.

So these sequences behave just like numbers, for the most part. Maybe we should just call them numbers? Mathematicians use the term **real numbers**, to distinguish them from fractions, integers, and other sorts of numbers that mathematicians work with.

But there is a catch. Here are two sequences:

\[

\begin{align*}

x_1&=0.3, & x_2&=0.33, &x_3&=0.333,&\cdots\\

y_1&=0.3, & y_2&=0.333, &y_3&=0.33333,&\cdots\\

\end{align*}

\]

These are two different sequences, but they both tend to \(\frac{1}{3}\). For various reasons, it’s not useful to have duplicate numbers that behave identically. So when defining the real numbers, we’ll consider certain sequences as equivalent.

Two sequences correspond to the same real number if the difference of the sequences tends to zero.

So \(x_i\) and \(y_i\) are equivalent, as \((y_i-x_i)\) tends to 0.

In practice, it gets extremely wordy to always be talking about sequences of things, so real numbers are usually referred to in a shorthand. We’ve seen how “\(0.\dot{3}\)” and “\(5\)” are examples of such shorthand. And it gets tedious to say “sequences are equivalent”. We’ll just say that the real numbers are equal instead, and start using an equals sign. While we’re being liberal with equals signs, we might as well say that a real number that tends to a fraction “equals” that fraction. They behave identically, so why not? Whether they are “really” equal is something we leave to the philosophers.

This sort of thing is called “**abuse of notation**” in maths. Familiar symbols are repurposed for new meanings that are roughly similar to the old ones. You can’t accomplish anything in maths without shorthand notations, then shorthands of shorthands, lest things become too verbose. But a mathematician should always be capable of translating back to raw elements if needed. At least until expert research level mathematics where you become such a pro at reasoning that you often drift off from such low-level thinking entirely.

So let’s practise that unpacking process now. When we write \(0.999\cdots = 1\), what are we actually saying?

The left-hand side is an infinite decimal. We know that is a shorthand for the real number represented by the sequence \(x_1=0.9, x_2=0.99, x_3=0.999, \cdots\)

And likewise, the right-hand side is a real number represented by sequence \(y_1=1, y_2=1, y_3=1,\cdots\)

And the equals sign is asserting that these two real numbers are the same, which means that the difference of the sequences tends to zero. Is that something we can prove?

Well, \(y_1-x_1=0.1, y_2-x_2=0.01,\cdots\) It’s fairly easy to prove that \(y_i-x_i = \frac{1}{10^i}\). This sequence tends to zero: pick any \(\varepsilon\), no matter how small, and I can show that the sequence eventually gets smaller than it.

Thus the sequences are equivalent, which means the real numbers are equal, so yes, it is true that \(0.999\cdots = 1\).
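That difference can be checked with exact fractions (a numeric illustration of the argument above):

```python
from fractions import Fraction

x = Fraction(0)
for i in range(1, 16):
    x += Fraction(9, 10**i)                       # x_i = 0.9, 0.99, 0.999, ...
    assert Fraction(1) - x == Fraction(1, 10**i)  # y_i - x_i = 1/10^i
```

Each step the gap to 1 shrinks by a factor of ten, which is exactly the “tends to zero” behaviour the equivalence requires.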

Phew!

We’ll finish with some concluding thoughts, but well done, we got there.

I think the above also explains why people intuitively feel that 0.999… and one are different. In terms of raw sequences, they are different. In terms of ink on the page, they are different. But the only sensible way to work with real numbers is to consider two sequences equivalent if they tend to the same thing. And these two sequences do exactly that.

This is a much longer article than any of the explanations you’ve seen elsewhere. The problem with shorter explanations is that they involve doing algebra on infinite sequences of operations. The use of numerical sequences above very carefully avoids that. All the maths is done with finitely many steps, and we use the epsilon game to reason about long-term behaviour while still talking about finite things.

Algebra *can* be done on infinite sequences, but it is not intuitive. Certain common sense things no longer apply in these cases, and what can safely be done, needs to be proved. The mechanisms above are one such way of proving things.

As an example of the sorts of dangers, consider this infinite operation:

\[ 1-1+1-1+1\cdots \]

Depending on where you insert the brackets, that could be interpreted as

\[ (1-1)+(1-1)+(1-1)+\cdots = 0+0+0+\cdots =0 \]

or

\[ 1 + (-1 + 1) + (-1 + 1)+\cdots = 1 + 0 + 0 +\cdots = 1 \]

In other words, nonsense. Something as innocuous as putting in brackets is totally invalid!
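You can see the trouble directly by computing partial sums of the series: they bounce between 1 and 0 forever, so there is no single value the sum settles on.

```python
# Partial sums of 1 - 1 + 1 - 1 + ... oscillate between 1 and 0,
# so the series never settles on any single value.
partial_sums = []
total = 0
for i in range(8):
    total += (-1) ** i
    partial_sums.append(total)
print(partial_sums)  # [1, 0, 1, 0, 1, 0, 1, 0]
```

Bracketing picks out one of the two subsequences of partial sums, which is why it appears to “prove” either answer.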

Another thing I didn’t do is use \(\infty\), the infinity symbol, at all. While it’s ok to talk about infinity as a concept, or as a symbol, I certainly didn’t treat it as a number. Again, you *can* treat it as a number in some cases, but not all the rules work with it, so you need to prove everything over again and forget your intuitions.

What we did in this article was essentially define real numbers, starting from nothing more than fractions and sequences. By defining something complex in terms of simpler building blocks, we make a solid foundation. If we ever want to know a fact about real numbers, and we can’t prove it in terms of real number facts already known, then we can resort to thinking about real numbers as sequences, and prove things there.

This general idea is called construction. This is not the only way to construct real numbers, there are other formulations that are equivalent. And reals are not the only things that are constructed – most mathematical objects have a definition that works this way. In my first year at university, we proceeded in order – first, we defined natural numbers (counting numbers from 0). Then we learnt a construction for integers (whole numbers positive and negative) and proved that it was equivalent to natural numbers where they overlapped. Then we constructed fractions from integers, real numbers from fractions, and so on.

At this point, maths starts becoming a much more cohesive subject. You begin to understand that maths isn’t a whole bunch of different rules, tricks and techniques, but instead a vast tree, or web, of knowledge. So vast that we must use different notation, terminology and ideas to tackle different parts. But there are more commonalities than you’d believe.

In my haste to get to the 0.999… case, I glossed over a lot of the complexities of working with real numbers. There are lots of interesting ideas here that you’ll have to read about elsewhere.

- There are many sequences that don’t actually correspond to any real number. We really only want to consider **convergent** sequences, which is another variant on the epsilon-delta game.
- Some real numbers, such as \(\sqrt{2}\) or \(\pi\), are not equal to any fraction at all.
- In some sense, there are “more” real numbers than there are fractions, despite there being an infinite amount of both.
- Not all sequences can be easily described on paper. Some cannot be computed at all.
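To give a flavour of the first point, here’s a sketch of what “settling down” looks like, checked over a finite horizon only. This is an illustration, not a proof, and the function names are my own.

```python
from fractions import Fraction

# A sequence "settles down" if, past some point, any two of its terms are
# within epsilon of each other. We only check a finite window of terms here,
# so this is a demonstration, not a real convergence test.
def settles_within(seq, epsilon, start, extent):
    terms = [seq(i) for i in range(start, start + extent)]
    return all(abs(a - b) < epsilon for a in terms for b in terms)

nines = lambda i: Fraction(10**i - 1, 10**i)   # 0.9, 0.99, 0.999, ...
naturals = lambda i: Fraction(i)               # 1, 2, 3, ... never settles

print(settles_within(nines, Fraction(1, 100), start=3, extent=50))     # True
print(settles_within(naturals, Fraction(1, 100), start=3, extent=50))  # False
```

The 0.999… sequence passes for any epsilon once you look far enough along; the sequence 1, 2, 3, … fails for every starting point, so it names no real number.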

In the article, I introduced several games. These are all variants on the **epsilon-delta game**, which is the key definition for limits. What I was calling “tends to” is better known as “the limit of a sequence”. It is an extremely powerful technique for dealing with all sorts of things, and features heavily in the definitions of calculus.

Ever struggled to figure out all the possible combinations of tiles you need to put together for autotiling? I’ve created a tool that answers that question for a variety of cases, with visualizations.

]]>Last time, we looked at quarter-tiles. This was an auto-tiling technique for square grids. Each cell in the grid is associated with a terrain (i.e. either solid or empty). Then the squares were split in four, and each quarter was assigned an appropriate quarter-tile.

Ortho-tiles extends this procedure to work with irregular grids, even non-square grids. We just have to alter the procedure a little, and be ready to deform the quarter-tiles to fit in place.

Ortho is a Conway operator. It can be thought of as the extension of dividing a square into 4. It divides each n-gon into n “kites” or “ortho-cells”. Each kite is a four-sided shape containing the cell center, one corner, and the midpoints of the two edges adjacent to that corner.
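The kite construction can be sketched in a few lines. This is a minimal illustration for convex polygons, using the vertex average as the cell center; the function names are mine.

```python
# Split a convex polygon into "kites" (the ortho operation).
# Each kite joins the cell center, the midpoint of the previous edge,
# one corner, and the midpoint of the next edge.

def midpoint(a, b):
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

def ortho_kites(corners):
    n = len(corners)
    center = (sum(x for x, _ in corners) / n,
              sum(y for _, y in corners) / n)
    kites = []
    for i in range(n):
        prev_mid = midpoint(corners[i - 1], corners[i])
        next_mid = midpoint(corners[i], corners[(i + 1) % n])
        kites.append([center, prev_mid, corners[i], next_mid])
    return kites

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(ortho_kites(square))  # 4 kites, one per corner
```

Run on a unit square, this produces exactly the four quarter-squares of the previous article; run on a hexagon or triangle, it produces 6 or 3 kites.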

The appeal of the ortho operation is it can take any polygonal grid, no matter how irregular, and convert it into a grid of 4 sided shapes. And it’s much easier to work with something that has a consistent number of sides.

Ok, so we can convert a grid of polygons into a grid of kites. We want to treat each kite of this grid like we treated the quarter cells of the square grid in the previous article. To remind you, that means we want to pick an appropriate tile to fill them, according to the following rules.

As the kites always have 4 sides, it’s easy to design similar tiles to fit any particular shape of kite. We’ll call this new set of tiles **ortho-tiles**. They’re exactly like the quarter-tiles of the previous article, extended to more polygons.

Having the tiles and tile rules is only half the answer though. We also need to change how we read values from the grid.

With quarter-tiles, we read the value of the current cell, two adjacent cells, and one diagonal cell. Those 4 values are then fed into the rules above. The current cell and the adjacent cells can still be easily located for a kite, but what is the “diagonal” cell? For some circumstances, like a hex grid, there is no diagonal cell, while for others, like a triangle grid, there are multiple!

The trick is to use the following rule: “*The value used for a given corner is the minimum value of all cells that share that corner.*” Or, equivalently, a vertex is considered solid only if all the cells that share that vertex are solid.
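The rule is tiny in code. A sketch, with 1 meaning solid and 0 meaning empty (the function name is illustrative):

```python
# A corner counts as solid only when every cell touching it is solid.
# Taking the minimum handles any number of adjacent cells, so the same
# rule covers hex grids (3 cells per corner), triangle grids (6), etc.
def corner_value(cells_sharing_corner):
    return min(cells_sharing_corner)

print(corner_value([1, 1, 1]))     # 1: all three cells solid, corner is solid
print(corner_value([1, 0, 1, 1]))  # 0: one empty cell empties the corner
```

Because `min` doesn’t care how many cells share the corner, the ambiguity of “the diagonal cell” disappears.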

Then, everything works.

Note that in all cases, only 6 tiles are used, rotated appropriately. Hex grids are often annoying to make tilesets for as there are so many cases, but ortho-tiles side steps that by subdividing to quads first.

The example above demonstrates a hex grid and a triangle grid. I created a separate set of ortho-tiles for each, to fit the specific shapes required. But you can just take the tiles designed for squares, the original quarter tiles, and *warp* them to fit any other quadrilateral. As we started by subdividing polygons into quadrilaterals, that means it’ll work on any grid at all.

Here’s an example:

Despite the superficial similarity to Townscaper, this actually uses a tiny handful of tiles.

]]>