{"version":"https://jsonfeed.org/version/1","title":"rubenwardy's blog","description":"Welcome to my blog, where I write about my projects and various tech-related topics.","home_page_url":"https://blog.rubenwardy.com/","feed_url":"https://blog.rubenwardy.com/feed.json","icon":"https://blog.rubenwardy.com/static/rubenwardy.png","favicon":"https://blog.rubenwardy.com/favicon.ico","expired":false,"authors":[{"name":"rubenwardy","url":"https://rubenwardy.com/","avatar":"https://rubenwardy.com/static/ruben-512.png"}],"items":[{"id":"/2024/03/08/typescript-is-awesome/","language":"en","title":"Why I like and recommend using TypeScript","content_html":"
I’ve recently gotten into using TypeScript when working with web technology, for\ncreating web apps and servers. I find that TypeScript makes JavaScript more\nenjoyable to use. In this article, I’ll explain my reasons why.
\nThe number one benefit for me is that it massively improves the developer\nexperience without impacting the agility of an interpreted language. TypeScript\ncombined with an IDE that supports it, like VSCode(ium), delivers a great\nexperience.
\nTypeScript provides type information that helps the IDE help you. Whilst type\ninformation is supported for JavaScript, it’s not as reliable or fully featured.
\nWith type information, the IDE can show relevant suggestions for methods and\nmembers (ie: IntelliSense), allowing you to see what is inside an object. This\nincreases productivity. TypeScript makes refactoring operations like renaming\nconsiderably more reliable as it can accurately find all the uses of a symbol.
\n\nDefining types acts as a form of documentation. This doesn’t replace actual\ndocumentation but instead supplements it - it tells you the types that an API\nexpects.
\nHere’s the signature of a function without any comments. Just the argument names\nand the types are enough to get an idea of how to use it.
\nfindPath(from: Vector2, to: Vector2, maxDistance?: number): Vector2[];
\nAn alternative to TypeScript is JSDoc. This allows you to document the types in\nJavaScript using comments instead of type annotations. This is a good stopgap if\nyou have an existing code base in JavaScript, but is quite verbose and not as\neffective.
\n/**\n * @param {Vector2} from\n * @param {Vector2} to\n * @param {Vector2} [maxDistance]\n * @returns {Vector2[]} the path\n */\nfindPath(from, to, maxDistance);
\nA secondary benefit for me is that TypeScript will emit compile errors when\ntypes do not match. This is very useful for discovering a lot of different\nproblems.
\nOne example of a situation where this helps is when dealing with optional types.\nTypeScript will tell you if you try to access an optional type without first\nchecking for null/undefined. I find that in a lot of languages, nullability\ncauses a lot of issues. It’s a big issue in Java, for example, and one of the\nreasons why I prefer Kotlin. Remember the types-as-documentation argument as\nwell: having nullability in the types tells you, as the developer, whether you\nneed to handle the null case.
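A small sketch of what this looks like in practice (the function here is hypothetical):

```typescript
// A hypothetical function with an optional parameter. Using
// `maxDistance.toFixed(1)` before the undefined-check would be a
// compile error: "'maxDistance' is possibly 'undefined'".
function describeDistance(maxDistance?: number): string {
    if (maxDistance === undefined) {
        return "unlimited";
    }
    // After the check, TypeScript narrows `maxDistance` to `number`.
    return `${maxDistance.toFixed(1)} units`;
}

console.log(describeDistance());  // "unlimited"
console.log(describeDistance(2)); // "2.0 units"
```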
\nType checking is also very useful when refactoring code and updating\ndependencies, as it gives you compile errors for invocations that don’t match\nchanges made.
\nIf you’re doing development properly, you should have unit tests and code\nreviews. These have a greater impact on code quality and correctness than\nTypeScript does, but that doesn’t negate the benefits of static typing.
\nA common complaint I’ve read about TypeScript is that it only does compile-time\ntype checking, not run-time checking. It’s important to note that compile-time\nchecks are preferable to run-time checks. It’s good to catch as many errors as\npossible at compile-time, as run-time errors surface later and only on the code\npaths you actually exercise. Compile-time errors are fast enough to be shown in\nyour IDE as you edit the code.
\nC and C++ are statically typed languages and rely almost entirely\non compile-time type checking. The “static” in static type checking means that\nthe types are verified at compile-time. In C, the concept of types doesn’t even\nexist at runtime. C++ has\nRun-Time Type Information (RTTI),\nwhich allows you to do safe casts at run-time (dynamic_cast
) and also get\ntype information (typeid
and std::type_info
). These features are useful but\nnot essential - it’s common for developers to disable RTTI to reduce program\nsize in embedded applications. Now, perhaps C/C++ isn’t the best role model when\nit comes to type safety. Rust would be a better example, as it has a\nwell-written compiler and is memory-safe.
So, what’s the problem here with TypeScript? The problem isn’t that it only does\ncompile-time checks, the problem is that incorrect types can enter the program.\nFirst, incorrect types can enter the program through third-party dependencies.\nThis is especially common when the dependencies are written in JavaScript.\nSecond, TypeScript has dangerous operations that can result in incorrect types.\nFor example, type assertions (as T
) allow you to tell the compiler that a\nsymbol is a certain type even if this isn’t true. The use of any
is quite\ndangerous as well.
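As a sketch of how an assertion can introduce an incorrect type (the User shape and JSON here are made up):

```typescript
interface User {
    name: string;
}

// This JSON contains a typo ("nam"), so it does not actually match `User`.
const data: unknown = JSON.parse('{"nam": "Alice"}');

// The assertion silences the compiler - no check happens at run-time.
const user = data as User;

console.log(user.name); // undefined, despite the type claiming `string`
```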
Compile-time checking can get you close to full type safety if you limit the use\nof these dangerous operations. You can use eslint
to enforce this in a project.
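For example, with the typescript-eslint plugin you could enable rules along these lines (a sketch of a config fragment; no-explicit-any bans any, and consistent-type-assertions with the "never" style bans as assertions entirely):

```json
{
    "rules": {
        "@typescript-eslint/no-explicit-any": "error",
        "@typescript-eslint/consistent-type-assertions": [
            "error",
            { "assertionStyle": "never" }
        ]
    }
}
```

How strict to be is a project decision - some code bases allow assertions at well-audited boundaries only.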
Incorrect types can enter a TypeScript program when interacting with untyped\ndependencies and data. One way to ensure this doesn’t happen is by using a\nlibrary like Zod. Zod is a TypeScript\nvalidation library that allows you to check unknown data using a schema. It also\ngenerates TypeScript types, allowing schemas to work seamlessly with TypeScript.
\nconst User = z.object({\n\tusername: z.string(),\n\tfull_name: z.string(),\n\tage: z.number(),\n});\n\nconst user = User.parse(JSON.parse(json));\n// `user` will have the correct TypeScript type.
\nAnother cool thing is that Zod can transform data for you whilst validating. If\nan API gives you a date timestamp in ISO format, you can use z.coerce.date()
to\nconvert that string to a JS Date
object. You can also provide custom transform\nfunctions to validate and manipulate the data. This makes it a very powerful\ntool for dealing with untyped data.
Like any technology, TypeScript does have some drawbacks. The main one for me is\nthat it adds complexity to a project. You need to set up a build system to\nprocess TypeScript into JavaScript. I don’t think this is a problem once a\nproject reaches a certain size, but if I’m just writing a few small scripts for\na website I’ll use plain JavaScript.
\nA lot of people say they have issues with TypeScript types, but I’m not too\nworried about this. The majority of my code is fairly straightforward type-wise.\nI have had more issues with Webpack and NPM modules than with TypeScript itself.
\nFor another opinion, you might be interested in reading\n\nThe TypeScript Tax.
\nI’m not saying that TypeScript is a perfect language - there’s no such thing,\nonly different tradeoffs. Whether or not TypeScript is worth it depends on your\nproject and priorities.
\nIn my opinion, TypeScript is great when you’re working with web technology and\nwould otherwise be using JavaScript. It improves the developer experience and\nincreases resilience without a huge amount of cost. I find it enjoyable to use\nand will continue using it for the foreseeable future.
\n","url":"https://blog.rubenwardy.com/2024/03/08/typescript-is-awesome/","summary":"I find that TypeScript makes JavaScript more enjoyable to use. In this article, I'll explain why.","date_published":"2024-03-08T00:00:00.000Z","image":"https://blog.rubenwardy.com/static/img/YeAZ5KZCjT-540.png","tags":["reviews","fullstack"]},{"id":"/2024/01/28/interesting-reads-2023/","language":"en","title":"My favourite articles of 2023","content_html":"I read a lot of articles about programming and technology as a way of keeping up\nwith the industry and expanding my knowledge. Here are some articles I read in\n2023 that I recommend reading.
\nThis article is an interesting read on software longevity, with a humorously\nshort outcome. It shows the risks of\nrelying on private platforms and how\nfragile software is compared to other forms of engineering.
\n\nHere is a must-read for those considering using Brave. I support using whatever\nsoftware works best for you, but I consider Brave to be untrustworthy due to\nthe issues highlighted in this article.
\n\nA couple of years ago, I released a browser extension called\nRenewed Tab to the Firefox and Chrome stores. Since\nthen, I’ve been getting a lot of emails about selling my extension or adding\nspyware to it. These spammers all have the same goal - to steal the search and\nbrowsing data of my users, and sell it on to data brokers.
\nI would never sell out my users, but not every browser extension developer is\nthe same. My extension only had a few hundred installs when I first started\nreceiving emails, so I’m sure the vast majority of extension developers will be\nreceiving these “offers”.
\n\nI found this article interesting as it taught me something new about HTML.
\n\nSoftware internationalisation is super important for the accessibility and reach\nof your software. Even though I do not speak multiple languages, I try to\nsupport translation in my applications.
\n\nI’ve been working at consultancies and software houses since I graduated.\nWhilst I haven’t necessarily experienced all these things, I found this article\nto be an interesting read.
\n\nI found this article quite influential in changing how I think about blogging.\nI try to write from my perspective and experiences, without worrying if it’s\nbeen said before. I also try to write what I want to write, rather than trying\nto go viral.
\n\nThe Internet sometimes feels like one grift after another. This article presents\nan argument against that. You don’t need to blog to boost vanity metrics\nor to increase your value to employers. Just blog for the hell of it.
\n\nI love explorables - interactive articles that explain how something works.\nHere is a great one about how mechanical watches work.
\n\nI hope you found some articles worth reading. What were your favourite reads\nlast year? Feel free to send article suggestions you think I might like.
\nCover Image\n© Engin Akyurt
\n","url":"https://blog.rubenwardy.com/2024/01/28/interesting-reads-2023/","summary":"Articles are a great way of keeping up with programming and technology. Here are some articles I read in 2023 that I recommend reading.","date_published":"2024-01-28T00:00:00.000Z","image":"https://blog.rubenwardy.com/static/img/fBiLy502MT-540.png","tags":["blogging"]},{"id":"/2023/11/18/why-i-write/","language":"en","title":"Five reasons why I write","content_html":"In 2022, I made a goal of writing at least twelve long-form\narticles on my blog each year. But why bother? In this article, I will explain\nthe reasons why I write.
\nI write to improve my technical writing. Technical writing is a very useful\nskill as a software engineer; communicating complex concepts is an essential\npart of my day job. Writing articles regularly is a great way to practice my\nplanning and writing skills.
\nI’ve gotten pretty good at the process of writing articles, but I’m not sure\nwhether I’m actually getting better at writing. To get better at something, you\nneed timely and relevant feedback [1]. I don’t have an editor to point\nout issues and readers don’t tend to comment on writing quality unless it is\nawful. It’s hard to improve at writing without feedback. The type of writing I\ndo also isn’t especially ambitious; perhaps creative writing or poetry would\nprovide better practice, although it’s not really what I want to be doing.
\nFor the last ten years, I’ve been working on an online book to teach Minetest\nmodding and game creation called the\nMinetest Modding Book. One day,\nI’d like to self-publish a physical book on this subject. This will be a huge\nundertaking - and you can’t remotely patch books once they are printed. Getting\nbetter at writing helps towards this goal and will make the end result better.
\nI learnt to program because I wanted to create things. Websites, games,\nworld-ending artificial intelligences, and apps. I’m the sort of person who has\na lot of projects on the go at any one time.
\nWriting about my projects is a way of sharing them with the\nworld. Whilst I do publish many of my projects as open source, some are less\naccessible. For example, you can’t use my electronics or\n3D printing projects unless you have electronics skills\nand a 3D printer. It would also be quite time-consuming to replicate these\nprojects.
\nWriting whilst my memory is fresh allows me to archive my thoughts about a\nproject for the future, and share the lessons I learned. This is quite useful as\nsomething to refer back to.
\nRecently, I’ve been learning electronics. My eventual goal with electronics is\nto create autonomous robots and drones; I’d like to make a quadcopter with my\nown flight controller that can take off, land, and follow a target. This will be\nquite an ambitious project, and I’m nowhere near capable enough for that yet.
\nTo get better at electronics, I’ve been working on\neasier projects to develop the skills I will need. Writing\nabout these projects and the problems I’ve faced has allowed me to get\nsuggestions from those experienced with electronics. This has been invaluable\nfor my self-development.
\nYou don’t know what you don’t know. Even if I have solved all the problems and\nfinished an electronics project, I still benefit from suggestions for\nimprovements and better ways of doing things.
\nThere’s been a lot of cases where I’ve had to work out how to do something\nmyself. By publishing tutorials, I can hopefully save the\nnext searcher from struggling with the same problem.
\n\nWriting blog posts is a great way to develop opinions about subjects. As part of\nmy writing process, I break down concepts, make notes, and figure out which\npoints are the most important. By attempting to explain something clearly, you\ncan often identify weaknesses in your arguments or knowledge. Writing articles\nallows me to brainstorm, weigh up my thoughts, and build/challenge opinions.
\nI write to get better at writing, to document my projects, to get feedback, to\nshare knowledge, and to develop opinions.
\n[1] The 4 things it takes to be an expert by\nVeritasium is a good explainer on how to get better at something. ↩︎
\nI recently switched to using Eleventy to\ngenerate my blog. As part of this process, I needed to implement word counts for\neach of my posts. I made my own plugin for this as I was unable to find an\nexisting one that met my requirements - they either used regex to parse HTML or\nincluded non-text like scripts and code snippets in the count.
\nThe below code should work with any template engine, including Liquid and\nNunjucks.
\nFirst of all, we need to work out exactly how we are going to count words.
\nA word is a unit of prose or writing. We need to make sure we exclude code\nsnippets and scripts from the word count.
\nA naive approach would be to split text by spaces and count all the parts.\nHowever, this would count joined words like “Dog/Cat” as one word instead of\ntwo. Another approach would be to split text by all punctuation, but this would\ncount words like “self-hosting” as two words.
\nTo validate my word counter, I made a page where I dumped all the detected words\nfrom a post. I used this to refine the list of delimiters.
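The split I settled on treats whitespace, semicolons, and slashes as delimiters, but not hyphens. A minimal sketch of that counting logic:

```javascript
// Split on whitespace, ';', '/', and '\', then keep only the parts
// containing at least one letter or digit.
function countWordsInText(text) {
    return text
        .split(/[\s;/\\]/)
        .map(part => part.trim())
        .filter(part => /[a-z0-9]/i.test(part))
        .length;
}

console.log(countWordsInText("Dog/Cat"));      // 2 - "/" is a delimiter
console.log(countWordsInText("self-hosting")); // 1 - "-" is not
```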
\nYou need to install JSDom:
\nnpm install --save jsdom
\nIn the Eleventy config, you need to add our new plugin:
\nconst pluginWordcount = require(\"./plugins/wordcount.js\");\n\nmodule.exports = function(eleventyConfig) {\n eleventyConfig.addPlugin(pluginWordcount);\n // You can only have one module.exports in a configuration file,\n // so make sure you add the above line to your existing one.\n}
\nThis is the file for the plugin. It contains extractText
to get all the text\nfrom HTML, and countWords to count the words in a piece of text.
const { UserConfig } = require(\"@11ty/eleventy\");\nconst { JSDOM } = require(\"jsdom\");\n\nconst TO_STRIP = [\n \"code\",\n \"pre code\",\n \"script\",\n \".header-anchor\",\n];\n\nfunction extractText(html) {\n const dom = new JSDOM(html);\n const document = dom.window.document;\n\n // Remove non-text elements\n document.querySelectorAll(TO_STRIP.join(\", \")).forEach(child => child.remove());\n\n return document.body.textContent;\n}\n\nconst cache = {};\n\nfunction countWords(value) {\n if (cache[value]) {\n return cache[value];\n }\n\n const result = extractText(value)\n .split(/[\\s;/\\\\]/)\n .map(x => x.trim())\n // Word is non-empty with at least one letter or number\n .filter(x => x.match(/.*[a-z0-9].*/i))\n .length;\n\n cache[value] = result;\n return result;\n}\n\nmodule.exports = eleventyConfig => {\n eleventyConfig.addFilter(\"wordcount\", countWords);\n};
\nHere’s how you might use the wordcount filter inside a post layout that uses\nliquid:
\n{% assign wordcount = content | wordcount %}\n\n{{ wordcount | divided_by: 238 | round }} min read\n({{ wordcount }} words)
\nAs a bonus, here are the unit tests I used when writing the word counter:
\nconst { describe } = require(\"mocha\");\nconst { expect } = require(\"chai\");\nconst { count } = require(\"./wordcount\");\n\nconst parameterisedTests = {\n \"empty\": {\n html: \"\",\n expected: 0,\n },\n\n \"just symbols\": {\n html: \". . -//!\\\"$%^&*()\\\\`\",\n expected: 0,\n },\n\n \"single word paragraph\": {\n html: \"<p> Hey! </p>\",\n expected: 1,\n },\n\n \"punctuation\": {\n html: \"<p>Hello world! This is a test, of the word/counter</p>\",\n expected: 10,\n },\n\n \"strips scripts\": {\n html: `\n <p>Hello world! This is a test, of the word/counter</p>\n <script>\n alert(\"Hello world!\")\n </script>\n `,\n expected: 10,\n },\n\n \"strips code blocks\": {\n html: `\n <p>Hello world! This is a test, of the word/counter</p>\n <pre>\n <code>\n alert(\"Hello world!\")\n </code>\n </pre>\n `,\n expected: 10,\n },\n\n \"strips inline code\": {\n html: `\n <p>Hello world! This is a <code>test</code>, of the word/counter</p>\n `,\n expected: 9,\n },\n\n \"strips heading anchors\": {\n html: `\n <h2>A heading</h2>\n <a class=\"header-anchor\">1</a>\n <p>Hello world! One two</p>\n `,\n expected: 6,\n },\n\n \"counts numbers but not symbols\": {\n html: `\n <p>Hello world! You are 26.0 today - or so</p>\n `,\n expected: 8,\n },\n\n \"words can contain hyphens\": {\n html: `\n <p>Hello world! One-two three</p>\n `,\n expected: 4,\n },\n}\n\ndescribe(\"countWords\", () => {\n Object.entries(parameterisedTests).forEach(([key, data]) => {\n it(key, () => {\n expect(count(data.html)).to.equal(data.expected);\n });\n });\n});
\n","url":"https://blog.rubenwardy.com/2023/10/29/eleventy-wordcount/","summary":"A short tutorial on how to count words and determine the reading time using Eleventy.","date_published":"2023-10-29T00:00:00.000Z","tags":["snippet","blogging","tutorials","fullstack"]},{"id":"/2023/10/29/eleventy-backlinks/","language":"en","title":"How to find backlinks using Eleventy (11ty)","content_html":"I recently switched to using Eleventy to\ngenerate my blog. Because Eleventy is very easy to extend, I was able to add\nmore features to my blog - including backlinks.
\nIn a post’s sidebar, there is a list of other posts that link to the current\npost. This is an effective way to allow readers to find related content.
\nTo implement this, I created a small plugin that adds a filter to find which\nitems in a collection link to the given URL. To find links in HTML, I used the\nJSDom library.
\nWhilst there are existing backlink plugins, they didn’t meet my purposes.\neleventy-plugin-backlinks
, for example, only finds links made using\nwikilinks-style markup (ie: [[Other Post Name]])
). I wanted backlinks to\nwork with any link in a post - whether the post is markdown, HTML, or something\nelse.
The below code should work with any template engine, including Liquid and\nNunjucks.
\nYou need to install JSDom:
\nnpm install --save jsdom
\nIn the Eleventy config, you need to add our new plugin:
\nconst pluginBacklinks = require(\"./plugins/backlinks.js\");\n\nmodule.exports = function(eleventyConfig) {\n eleventyConfig.addPlugin(pluginBacklinks);\n // You can only have one module.exports in a configuration file,\n // so make sure you add the above line to your existing one.\n}
\nThis is the file for the plugin. It contains getLinks
to extract links from\nHTML and a plugin function to register a filter.
const { UserConfig } = require(\"@11ty/eleventy\");\nconst { JSDOM } = require(\"jsdom\");\n\nconst hostname = \"blog.rubenwardy.com\";\nconst cache = {};\n\n/**\n * Extract links from html, not including hash parts\n */\nfunction getLinks(html) {\n if (cache[html]) {\n return cache[html];\n }\n\n const dom = new JSDOM(html);\n const document = dom.window.document;\n\n const result = new Set([...document.querySelectorAll(\"a[href]\")]\n .map(x => {\n let href = x.getAttribute(\"href\");\n\n // Normalise internal links\n const url = new URL(href, `https://${hostname}`);\n if (url.hostname == hostname) {\n return url.pathname;\n }\n\n url.hash = \"\";\n return url.toString();\n }));\n cache[html] = result;\n return result;\n}\n\nmodule.exports = (eleventyConfig) => {\n eleventyConfig.addFilter(\"links_to\", async function(collection, target) {\n return collection.filter(item => getLinks(item.content).has(target));\n });\n};
\nThis is how you use the links_to
filter to get backlinks inside a post layout\nthat uses liquid:
{% assign backlinks = collections.post | links_to: page.url %}\n<!-- An empty list isn't false-y in Eleventy liquid -->\n{% assign backlinks_count = backlinks | size %}\n{% if backlinks_count > 0 %}\n  <aside id=\"backlinks\">\n    <h3>Links here</h3>\n    <ul>\n      {%- for post in backlinks -%}\n      <li>\n        <a href=\"{{ post.url }}\">{{ post.data.title }}</a>\n      </li>\n      {%- endfor -%}\n    </ul>\n  </aside>\n{% endif %}
\n","url":"https://blog.rubenwardy.com/2023/10/29/eleventy-backlinks/","summary":"A short tutorial on how to find backlinks (pages that link to the current page) using Eleventy","date_published":"2023-10-29T00:00:00.000Z","tags":["snippet","blogging","tutorials","fullstack"]},{"id":"/2023/10/27/switched-to-eleventy/","language":"en","title":"I've switched to Eleventy (11ty): my thoughts","content_html":"After nine years of using Jekyll, I’ve switched to using\nEleventy (11ty) and redesigned my blog whilst I was at\nit. I was finding that Jekyll was too limiting and wanted something more\ncapable. In this article, I will talk about why I switched, how I did it, and my\nthoughts about Eleventy as a new user.
\nI chose to use Jekyll in 2014 because it was the only static site generator that\nGitHub Pages supported. It allowed me to use multiple pages without having\nto duplicate all the layout markup, making it easier to grow the size of my\nwebsite.
\nRecently, I’ve been finding Jekyll increasingly painful to use and too limited\nfor my purposes. Since 2022, my goal has been to write at least twelve posts a\nyear. This means that I’ve been working on posts much more frequently than I\nhave done in the past. Ease of writing posts is much more important now.
\nOne of my biggest annoyances with writing a post was creating thumbnails for my\nimages - this was quite time-consuming; I wanted to fully automate it in a\nmanner that I don’t even need to think about when writing a post.
\nAnother thing was that the files were split into different places - the text was\nplaced in _posts
and the images were in static
. I wanted the ability to\nmake a single folder for an article and get the generator to automatically move\nthe assets to where they are needed.
Whilst I could have hacked together more scripts to fix these issues, my setup\nwas already quite convoluted. To generate my writing statistics page,\nI had a mess of Python scripts and complicated layout code. Liquid isn’t the\nmost capable language to use by itself. So, I decided to look for other static\nsite generators.
\nMy first requirement was that it must be a static site generator. A static site\nis the easiest type of website to host - it can be hosted on any web host, even\na Content Delivery Network (CDN). This gives a lot of flexibility. It also\nincreases longevity - if, for whatever reason, I cease maintaining my website,\nit can continue to be hosted as a low-tech static website for a long time. To\nget dynamic features like commenting, I have\na NodeJS server hosted on a separate\nsubdomain.
\nThe second requirement was that it should not require any client-side\nJavaScript. Requiring JavaScript is bad for SEO and accessibility and bloats the\nwebpage. Client-side JavaScript should only be used to\nenhance\nthe behaviour.
\nThe third requirement was that the generator must be easy to extend and\ncustomise to match my workflow. I’m not fond of Ruby or Bundler; I found\nBundler quite painful to use. I felt like JavaScript/TypeScript would be the\nbetter language to use as it is a web technology.
\nThe final requirement is the generator must be fully open source. Using\nopen-source tools is vital to achieve flexibility and longevity.
\nAfter looking into the options, I decided to give Eleventy a go. I started by\nsetting up an Eleventy project and porting a single blog post. To derisk the\nprocess, I focused on implementing and prototyping the unknowns first, such as\nthumbnails, SCSS, post/tags, and writing statistics. This allowed me to quickly\nverify whether Eleventy was a good fit before spending a lot of effort porting\nall my content across.
\nI also decided to use this opportunity to redesign my blog as well. I used to\nuse Bootstrap; I wanted to go with something a bit more custom and unique this\ntime. I also wanted a light/dark mode switcher.
\nBefore I published the new Eleventy version to production, I wrote a simple\nscript to make sure that no URLs had changed or were missing.\nCool URLs never change, so I considered\nverifying this important.
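Such a check can be sketched as a simple comparison of URL lists (the function and URLs here are hypothetical; it assumes you've already collected each build's URLs, e.g. by walking the output directories or parsing the sitemaps):

```javascript
// Hypothetical sketch: compare the URLs of the old (Jekyll) build against
// the new (Eleventy) build. Each argument is an array of URL paths.
function findMissingUrls(oldUrls, newUrls) {
    const next = new Set(newUrls);
    return oldUrls.filter(url => !next.has(url));
}

const missing = findMissingUrls(
    ["/2023/10/27/switched-to-eleventy/", "/about/"],
    ["/2023/10/27/switched-to-eleventy/"]
);
// Any URL in `missing` has changed or gone missing in the new build.
console.log(missing);
```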
\nEleventy-Image is an official plugin\nfor Eleventy that allows you to resize images and create thumbnails.
\nUsing Eleventy-Image, I defined custom template functions to create image\nthumbnails based on how they are used. For example, the figure
function\ncreates a 540px width thumbnail and then renders a <figure>
element to the\nHTML.
{% figure \"./schematic.png\",\n \"The schematic for my plant watering system's circuit. Created using KiCAD.\" %}
\nThe source image is stored in the post’s directory. The figure
function saves\nresized versions to the static
directory. This allows me to keep the text and\nimages of a post together, and only create specific image sizes when needed.
Eleventy is very flexible, powerful, and fairly easy to customise. Using\nJavaScript, you can define custom template filters and functions. You can even\nbuild entire pages using JS - or JSX with an additional library.
\nIt was incredibly easy to add features such as a table of contents, writing\nstatistics, and backlinks. For a table of contents, I used an existing plugin\nand only needed to add {{ toc }}
to place it. I\nimplemented writing statistics, word counts,\nand backlinks by creating plugins to add\ntemplate functions using JavaScript.
// Usage:\n// {% assign backlinks = collections.post | links_to: page.url %}\neleventyConfig.addFilter(\"links_to\", async function(posts, targetURL) {\n return posts.filter(post => getLinks(post.content).has(targetURL));\n});
\nAnother thing is that Eleventy allows you to customise or provide your own\nMarkdown parser, or use different formats entirely. To get heading anchors\n(#
), I just needed to enable that option.
Finally, Eleventy has support for JavaScript data files, which is great for\nfetching data from the network before building the site. I used this feature to\nload Webmentions from the\nWebmention.io REST API.
\nEleventy has a lot of quirks that make learning it quite difficult. Switching\ntook much longer than I expected.
\nFirst, Eleventy uses variables inconsistently. At the top of each page, there’s\nmetadata known as front matter:
\n---\ntitle: I've redesigned my blog and switched to Eleventy\ndescription: \"\"\ntags:\n - fullstack\n - reviews\n---
\nTo access the title and URL inside a template, you do title
and page.url
.\nInside JavaScript or when reading a collection, it’s item.data.title
,\nitem.url
, or item.page.url
.
Having a lot of global variables inside the template is quite confusing. It’s\nnot just pages - Liquid allows you to include other liquid files. In Jekyll, you\naccess parameters by writing include.param1
. In Eleventy, the parameter is\njust param1
.
The Eleventy liquid syntax is also slightly different from Jekyll liquid, for some\nreason. To include a directory, you need to quote the file name and use :
\ninstead of =
:
<!-- Eleventy -->\n{% include \"youtube.html\", id: \"dQw4w9WgXcQ\", caption: \"Funny cat videos\" %}\n\n<!-- Jekyll -->\n{% include youtube.html id=\"dQw4w9WgXcQ\" caption=\"Funny cat videos\" %}
\nCustom shortcodes, however, do not support named parameters; arguments can only\nbe passed positionally:
\n{% youtube \"dQw4w9WgXcQ\", \"Funny cat videos\" %}
\nThe error messages produced by Eleventy are very long and point to the wrong\nlines in templates. It looks like\nthe line numbers don’t include the height of the front matter.
\n[11ty] Problem writing Eleventy templates: (more in DEBUG output)\n[11ty] 1. Having trouble rendering liquid template ./file.md (via TemplateContentRenderError)\n[11ty] 2. tag "doesnotexist" not found, file:./file.md, line:2, col:1 (via ParseError)\n[11ty] 3. tag "doesnotexist" not found (via AssertionError)\n[11ty]\n[11ty] Original error stack trace: AssertionError: tag "doesnotexist" not found\n[11ty] at assert (/home/ruben/dev/tmp/eleventy_repro/node_modules/liquidjs/dist/liquid.node.cjs.js:319:15)\n[11ty] at Parser.parseToken (/home/ruben/dev/tmp/eleventy_repro/node_modules/liquidjs/dist/liquid.node.cjs.js:2284:17)\n[11ty] at Parser.parseTokens (/home/ruben/dev/tmp/eleventy_repro/node_modules/liquidjs/dist/liquid.node.cjs.js:2276:33)\n[11ty] at Parser.parse (/home/ruben/dev/tmp/eleventy_repro/node_modules/liquidjs/dist/liquid.node.cjs.js:2270:21)\n[11ty] at Liquid.parse (/home/ruben/dev/tmp/eleventy_repro/node_modules/liquidjs/dist/liquid.node.cjs.js:3618:28)\n[11ty] at Liquid.compile (/home/ruben/dev/tmp/eleventy_repro/node_modules/@11ty/eleventy/src/Engines/Liquid.js:255:28)\n[11ty] at Markdown.compile (/home/ruben/dev/tmp/eleventy_repro/node_modules/@11ty/eleventy/src/Engines/Markdown.js:68:28)\n[11ty] at TemplateRender.getCompiledTemplate (/home/ruben/dev/tmp/eleventy_repro/node_modules/@11ty/eleventy/src/TemplateRender.js:269:26)\n[11ty] at Template.compile (/home/ruben/dev/tmp/eleventy_repro/node_modules/@11ty/eleventy/src/TemplateContent.js:362:42)\n[11ty] at async Template._render (/home/ruben/dev/tmp/eleventy_repro/node_modules/@11ty/eleventy/src/TemplateContent.js:486:16)\n[11ty] Wrote 0 files in 0.06 seconds (v2.0.1)\n
\nEleventy would have been much easier to use if the error message was at the\nbottom and it gave a link to the exact line in the template. Providing the\ntemplate path as path/to/file.h:123
like the rest of the paths would also make\nit clickable in IDEs.
Another thing I didn’t like is that Eleventy does not support TypeScript. I like\nTypeScript because it makes my IDE (VSCodium) more helpful. IntelliSense allows\nme to inspect what methods are available on an object, and linting shows errors\nwhen I use something incorrectly.
\nI tried setting up TypeScript with Eleventy, but there were no types available.\nEleventy does come with index.d.ts
, but it only documents a single thing and\nnot the entire API.
One thing I did discover is that you can use JSDoc to get IntelliSense to work a\nlittle bit. JSDoc does feel a bit like “TypeScript at home” though. It’s very\nverbose and doesn’t work as well.
\nconst UserConfig = require(\"@11ty/eleventy/src/UserConfig\");\n\n/**\n * @param {UserConfig} eleventyConfig\n * @returns {void}\n */\nmodule.exports = function(eleventyConfig) {\n // configure here\n}
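If you'd rather not load a module at runtime purely for type information, JSDoc also supports inline import() types. A sketch of the same config using that form (it assumes the same internal UserConfig path, which I haven't verified against every Eleventy version):

```javascript
// Hypothetical variant of the config above: the inline import() type gives
// the IDE the same IntelliSense without a runtime require.
/**
 * @param {import("@11ty/eleventy/src/UserConfig")} eleventyConfig
 * @returns {void}
 */
module.exports = function (eleventyConfig) {
    // configure here
};
```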
\nAs you may have seen in the above example, Eleventy uses CommonJS require
\nrather than the more modern ES6 import
. This is not great, but will\nbe fixed in Eleventy 3.0.0.
Whilst I’m not 100% happy with Eleventy, it is more capable than Jekyll and\nbetter for my purposes. Many of the issues I have are fixable, and now that I\nknow how to use Eleventy, it should be easier to do things in the future.
\n","url":"https://blog.rubenwardy.com/2023/10/27/switched-to-eleventy/","summary":"In this article, I will talk about why I switched, how I did it, and my thoughts about Eleventy as a new user.","date_published":"2023-10-27T00:00:00.000Z","image":"https://blog.rubenwardy.com/static/img/bn8U-9HAwK-540.png","tags":["fullstack","reviews","blogging"]},{"id":"/2023/10/10/hello-indieweb/","language":"en","title":"I have joined the IndieWeb","content_html":"The collapse of Twitter last year got me thinking about closed platforms and\nreducing the hold that privately owned platforms have over the Internet.
\nI’ve been blogging for nine years now on my personal website. I like owning my\nown domain as it allows me to retain control and stay independent of particular\nservices. Private platforms have a tendency to be bought out and/or ruined by\ncommercial interests, especially now with tech growth slowing down and investors\ngetting uneasy.
\nHowever, there are some benefits to closed blogging platforms. Medium provides a\nnetwork effect that small blogs don’t have. It has an algorithm that promotes\nposts that users may find interesting. This allows the blogs to organically gain\nnew readers. Additionally, Medium makes it super easy to like, comment, and\nreply to a post, resulting in a platform that feels a lot more like a social\nnetwork than your standard cloud blogging service.
\nLast year, I started looking into ways independent blogs could communicate, just\nlike on Medium. I considered making my blog ActivityPub-compatible, as that\nwould allow users on Mastodon and the Fediverse to like, share, and comment on\narticles. And then, I stumbled upon the IndieWeb.
\nIndieWeb.org describes the IndieWeb as:
\n\nTo phrase it another way, IndieWeb is about posting the things you make on your\npersonal website and domain, to keep control of your data and stay independent\nfrom private platforms (aka silos). You may still post to silos but you should\npost to your personal website first.
\nIndieWeb isn’t just about blogging. You might post Twitter-like microposts,\nphotos, location check-ins, reviews, replies to other sites, and more.
\nAn important concept is “Publish on your Own Site; Syndicate\nElsewhere” (POSSE). This means that you should\npost the original version on your own website and then share links or copies of\nyour content with relevant social media communities. This is simpler and more\nflexible than adding ActivityPub support to my blog, and is so obvious that I’ve\nalready been doing it without realising it.
\nYou may be thinking that POSSE is pretty obvious and a bit of a cop-out. But\nwhere POSSE truly shines is when combined with backfeeding. A Backfeed is a\nlist of replies, likes, and mentions for the current page. Combined with POSSE,\nthis allows you to see replies to the current page across all different private\nsilos. For example, you might see comments from Mastodon and Reddit at the\nbottom of a blog post, as well as replies from other IndieWeb websites.
\nTogether, POSSE and backfeeding strike a good compromise between owning your\nown presence and participating in silos. They improve discoverability and allow\nfor reader interaction.
\nThe IndieWeb community has authored several standards that allow IndieWeb\nwebsites to communicate.
\nWebmentions allow websites to be notified\nwhen another site links to them. By receiving a notification, a site can know\nabout replies and mentions without having to maintain impractical web crawlers\nor subscribe to a backlinking service.
\nI started by\nimplementing support for\nreceiving Webmentions. This was super easy; I just needed to add a couple of\nlink tags to the top of all pages:
<link rel=\"webmention\" href=\"https://webmention.io/example.com/webmention\">\n<link rel=\"pingback\" href=\"https://webmention.io/example.com/xmlrpc\">
\nWebMention.io is a cloud service for receiving\nWebmentions. You might think it’s odd to use a cloud service for this, but it’s\nnot a problem as I’m still using my own domain for the pages and could switch\nthe Webmentions service at any time. IndieWeb isn’t about self-hosting, it’s\nabout owning your identity and data.
\nI currently send Webmentions manually using\nTelegraph or IndieWebify.
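Under the hood, there isn't much to sending a Webmention: per the W3C spec, it's a form-encoded POST with a `source` and `target` field, sent to the endpoint the target page advertises. A minimal sketch of what tools like Telegraph do for you (the URLs are placeholders, and a real sender would first discover the endpoint from the target's rel="webmention" link):

```javascript
// Build the request a Webmention sender would make: a form-encoded POST
// with `source` (your page) and `target` (the page you linked to).
function buildWebmention(source, target) {
    const body = new URLSearchParams({ source, target });
    return {
        method: "POST",
        headers: { "Content-Type": "application/x-www-form-urlencoded" },
        body: body.toString(),
    };
}

// Placeholder URLs for illustration only.
const request = buildWebmention(
    "https://example.com/my-post/",
    "https://example.org/their-post/"
);
```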
\nMy blog is statically hosted and is built using GitLab CI. As the site is only\npublished when CI finishes, it would be impossible to include sending\nWebmentions as part of the same CI pipeline. In the future, I’ll probably look into\nusing Brid.gy or some other tool to send Webmentions by monitoring my web feeds.
\nPersonal websites can contain a variety of content. Long-form articles,\nTwitter-like notes, location check-ins, reviews, and replies.\nMicroformats2 is a way of marking up the content of\nweb pages so that machines can understand it better. This is a powerful thing\nwhen combined with Webmentions as it allows the receiving website to understand\nwhat is linking to it and why.
\nMicroformats2 works by adding classes to elements representing content:
\n<article class=\"h-entry\">\n <h2 class=\"p-name\">Hello world!</h2>\n <a href=\"/tags/a-tag/\" class=\"p-category\">\n A tag\n </a>\n <div class=\"e-content\">\n This is the article's content.\n </div>\n</article>
\nImplementing support for Microformats2 (mf2) was a\nhuge pain; I cannot overstate how much. The documentation was very fragmented\nand inconsistent, and the tools I found to test mf2 didn’t match the\ndocumentation.
\nThe most extreme problem I had was with\nauthorship - authorship is how you find out\nwho is the author of a piece of content. The documentation says that you should\nbe able to just include a link to the homepage in each piece of content, and\ntools should fetch the author info:
\n<article class=\"h-entry\">\n <div class=\"e-content\">This is an example note</div>\n <a href=\"https://rubenwardy.com\" class=\"u-author\"></a>\n</article>
\nHowever, this did not work at all. Most of the tools I found didn’t make further\nrequests and only looked at the current page. This makes sense I guess, but it’s\nannoying that the documentation said it was possible.
\nThe next thing I looked at was including the authorship information in the\nfooter of each page, and then referencing it from each piece of content like so:
\n<article class=\"h-entry\">\n <div class=\"e-content\">This is an example note</div>\n <a href=\"/\" class=\"u-author\"></a>\n</article>\n<footer>\n <a href=\"/\" class=\"h-card\">\n <img class=\"u-photo\" src=\"/me.jpg\">\n <span class=\"p-name\">Author Name</span>\n </a>\n</footer>
\nUnfortunately, this didn’t work with any of the tools either. The only thing I\nfound that worked was to include the authorship information in full in every\nsingle piece of content.
\n<article class=\"h-entry\">\n <div class=\"e-content\">This is an example note</div>\n <div class=\"p-author h-card d-none\">\n <a class=\"u-url p-name\" href=\"https://rubenwardy.com/\">rubenwardy</a>\n <img class=\"u-photo\" src=\"/me.jpg\">\n </div>\n</article>
\nThank you to users on the IndieWeb IRC channels for pointing me towards useful\ntools and documentation, and for answering my newbie questions. Without them, I\nwouldn’t have been able to implement support at all. I believe that they have\nimproved the documentation a bit based on my feedback, although the authorship\npage still mentions the methods I tried that didn’t work.
\nThe three main tools I used for testing Microformats2 were\nIndieWebify, pin13.net mf2,\nand Waterpigs mf2.
\nA Backfeed is a list of replies, likes, and mentions for the current page. For\nexample, you might see comments from Mastodon and Reddit at the bottom of a blog\npost, as well as replies from other IndieWeb websites.
\nMy blog shows likes from Mastodon and other social platforms using\nBrid.gy and Webmentions. When I post a link to my blog on\nsocial media, Brid.gy monitors activity and sends Webmentions. JavaScript on my\nblog fetches Webmentions and updates the counter. My blog also caches like\ncounts at build time. In the future, I’ll make it so that the JS only fetches\nactivity since the blog was last built, reducing the amount of work the\nWebmentions API needs to do.
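A minimal sketch of that client-side counter. Note that the webmention.io count endpoint used here is an assumption from memory, not verified against their current API docs:

```javascript
// Build the count-lookup URL for a given page. The endpoint path is an
// assumption - check the webmention.io API documentation.
function countUrl(target) {
    return (
        "https://webmention.io/api/count?target=" + encodeURIComponent(target)
    );
}

// Fetch the mention count and write it into a DOM element.
async function updateLikeCounter(el, pageUrl) {
    const res = await fetch(countUrl(pageUrl));
    const data = await res.json();
    // Fall back to 0 if the response shape is unexpected.
    el.textContent = String(data.count ?? 0);
}
```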
\nI decided not to implement the backfeeding of comments from social media as I’m\nconcerned about the privacy implications. Just because someone decides to reply\npublicly on social media silos doesn’t mean that they want their post and their\nprofile picture to appear on my website. Additionally, Webmentions can’t be\ndeleted, meaning that the comment may continue to appear on my website even after\nthe author deletes it on the silo.
\nTwo good articles discussing the ethics and privacy challenges of backfeeding\ninclude\n“The ethics of syndicating comments using WebMentions”\nand\n“The IndieWeb privacy challenge”.
\nI may reconsider this in the future. I’d need to make it sufficiently clear to\ncommenters and allow them to opt-out. I’d also need to make sure that deleting\nthe comment on the silo also deletes it from my website.
\nI added a comment form to the bottom of posts on my blog. My blog is statically\nhosted. To collect comments, I have a service running on another subdomain that\ncollects any comments and sends them to me. Users can also choose to send\ncomments by email or another method. All comments are moderated before showing\non my blog.
\nTo avoid spam, the comment form has a “username” field hidden using CSS. Most\nspam bots don’t bother applying the CSS, so they see the field and fill it in.\nThis is called a honeypot field and is surprisingly effective - I was receiving\nmultiple spam comments a day, but since adding the field I’ve only received a\nsingle spam comment.
\n<style>\n input[name=\"username\"] {\n display: none;\n }\n</style>\n<input type=\"text\" name=\"username\">
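The server-side half of the honeypot is then trivial - a hypothetical check, not the code of my actual comment service: if the hidden field has any value, a bot filled it in, so the submission is silently discarded.

```javascript
// Returns true when the hidden "username" honeypot field was filled in,
// which only a bot applying no CSS would do.
function isHoneypotSpam(formData) {
    const honeypot = formData.username;
    return typeof honeypot === "string" && honeypot.trim() !== "";
}

// A human leaves the hidden field empty; a bot fills it.
isHoneypotSpam({ username: "", comment: "Nice post!" }); // → false
isHoneypotSpam({ username: "Bob", comment: "BUY NOW" }); // → true
```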
\nYou can find\nthe source code behind commenting on GitLab.
\nIndieWeb standards are fairly obscure and don’t seem to have been adopted much\nyet. Of all the posts I’ve made since adding IndieWeb support, this is probably\nthe only one that links to websites that can actually receive\nWebmentions. If a popular Content Management System, like WordPress, added\nbuilt-in support for Webmentions and mf2, I could see IndieWeb suddenly becoming a lot\nmore popular.
\nWhilst Webmentions are pretty cool, Microformats2 is complicated and was\nannoying to implement. I know that the IndieWeb crowd will have strong\nopinions on this, but I quite like how simple it was to add support for JSON-LD,\nand that it’s just JSON.
\nAs for the community side of IndieWeb, personal websites have seen a big\nresurgence since the fall of Twitter. I think we’re in a new golden age for RSS\nand personal websites. I’ve been encouraging a lot of my friends to take up\nblogging.
\nI’m certainly a more technical user than the average blogger. I don’t mind being\nan early adopter of technology and appreciate the goals of IndieWeb. I like how\nthey try to focus on the people before the technology, even though Microformats2\nleaves much to be desired.
\nI currently only post blog posts on my website. I don’t plan to post notes on my\nwebsite as I prefer to use Mastodon directly. But I might start posting my\nphotography here.
\nI’m undecided as to whether I’ll stick with IndieWeb technology in the long\nterm, but I’ll certainly continue to own and publish on my own domain.
\n","url":"https://blog.rubenwardy.com/2023/10/10/hello-indieweb/","summary":"The collapse of Twitter last year got me thinking about closed platforms and reducing the hold that privately-owned silos have over the Internet.","date_published":"2023-10-10T00:00:00.000Z","image":"https://blog.rubenwardy.com/static/img/V34DGI5mGz-540.png","tags":["reviews","fullstack","blogging"]},{"id":"/2023/08/29/ghosts-live-in-the-ground/","language":"en","title":"Ghosts live in the ground: debugging an electronic circuit","content_html":"I’m making a device to water my basil plants. Whilst working on this, I\nencountered an issue where running the pumps whilst connected to WiFi would\ncrash the microcontroller. After investigation, I found the cause was poor\ngrounding and Electromagnetic Interference (EMI).
\nPreviously, I created a plant monitor that tracked\nsoil moisture and sent notifications when the plant needed watering. But wouldn’t\nit be nice if it could water the plant instead of sending a notification? This\nnew project will do exactly that.
\nI’m designing the plant watering system to have support for four plants, with a\nsoil moisture sensor and pump for each plant. The pumps will draw water from a\nwater tank, which contains a water level sensor. I’m using an ESP32\nmicrocontroller as the brains, as it comes with Internet connectivity and\nlow-power deep sleeping.
\nI designed a circuit and then made a prototype on a breadboard, which worked as\nexpected. Making a prototype allows me to find any issues with the circuit\ndesign before I bring out the soldering iron. Or, at least, it’s supposed to.
\n\nI started working on the production version, soldered on to stripboard.\nStripboard is a material for making circuits; it comes pre-formed with a grid of\nholes and parallel strips of copper traces. I use it because it’s an easy way to\nmake semi-permanent circuits for my projects.
\nI decided to work on the circuit in parts, so I could test the circuit as I\nwent. I started by soldering on the pump circuits and connecting them to the\nmicrocontroller. It is at this point that I discovered a big issue - when I ran\na pump, the microcontroller would crash. This would result in the pump staying\non until power was cut.
\n\n\nGuru Meditation Error: Core 0 panic’ed (IllegalInstruction). Exception was\nunhandled.
\n
I noticed that if I disconnected the microcontroller from WiFi, the crash would\nno longer happen.
\nI’m using power transistors to allow the microcontroller to turn the pumps on\nand off. This is required because the pumps would draw way too much current if\npowered directly by the microcontroller. I’m using two power supplies. The pumps\nare powered by a 5V USB plug, the kind commonly used to charge phones. The\nmicrocontroller is powered by a micro USB cable to my computer so that I can\nprogram and debug it.
\nAt first, I thought that the power transistors were still drawing too much\ncurrent from the microcontroller and causing the crash. The fact that the crash\nonly happened when the WiFi was running strongly hinted at this, as WiFi\nrequires a lot of current itself. To test this theory, I replaced the\ntransistors with a push button. The microcontroller still crashed, showing this\nwasn’t the cause.
\nAfter discussing the issue with a friend, I moved the capacitor closer to the\npump and added another, smaller, capacitor. This didn’t fix the issue.
\nAt this point, the only connection to the microcontroller was a ground rail. My\nfriend now suspected that the issue might be due to a difference in voltage\nbetween the grounds of the two power supplies causing current to flow along the\ncommon ground. I severed the ground rail but the issue still happened. This was\nvery strange as there was now no connection at all between the pump and the\nmicrocontroller?!
\nI decided to create the stripboard again from scratch to make sure it wasn’t a\nsoldering issue. I did this, and the problem still happened.
\nAt this point, I suspected the circuit was either haunted or there was some kind\nof EM interference. I broke the ground rail closer to the ESP32 and it started\nworking again. It turns out that the ground rail was acting like an antenna. A\ncurrent was being induced by the noisy pump and caused the ESP32 to crash\ndespite not being connected.
\n\nThe lesson here is that stripboards have pretty terrible grounding and\nElectromagnetic Compatibility (EMC) properties.
\nSomeone on the BPS Space Discord recommended\na video on proper grounding in\ncircuits. This was very interesting, I learned a lot from it and it will improve\nmy circuit designs in the future.
\nWhile watching that video in a coffee shop, a plant fell on me from a shelf\nabove, covering me in soil. I guess it wanted to teach me the difference between\ngrounding and earthing.
\nTo continue with this project, I have decided to bite the bullet and make a\nPrinted Circuit Board (PCB) for it. I have been looking for an excuse to make a\nPCB for a while. PCBs allow you to get vastly better grounding, which will avoid\nEMI issues like this in the future.
\nThanks to my friend and people on the BPS Discord for helping me with this\nissue.
\nWell, this was certainly an… educational problem to have. This problem, and\nthe switch to making a PCB, has delayed this project by a month at least. But I\nwill learn a lot more and gain more value from this project. I just hope my\nbasil can survive in the meantime.
\n","url":"https://blog.rubenwardy.com/2023/08/29/ghosts-live-in-the-ground/","summary":"Whilst working on a plant watering system, I spent hours debugging an issue that turned out to be Electromagnetic Interference (EMI)","date_published":"2023-08-29T00:00:00.000Z","image":"https://blog.rubenwardy.com/static/img/hxAMPXvDq8-540.png","tags":["electronics","projects"]},{"id":"/2023/08/16/minetest-ctf-is-10/","language":"en","title":"Ten years of Capture The Flag","content_html":"Ten years ago today, I uploaded the first version of Capture The Flag. Capture\nThe Flag is a multiplayer game where two teams of players battle to claim the\nother team’s flag whilst defending their own. Capture The Flag is played in a\ndestructible voxel environment, allowing players to build defences and place\ntraps.
\nCapture The Flag started life as a persistent kingdoms game but quickly pivoted\nto a match-based team game. It was developed iteratively, taking into account\nplayer feedback. I hosted a server for the game for many years and a community\nformed around it. In 2021, I handed the reins of CTF over to very capable hands;\nit remains Minetest’s most popular server to this\nday.
\nThis article covers the history of CTF, the lessons I learned, and the changes I\nmade along the way.
\nI originally created Capture The Flag to be used on a persistent map, where\nteams were countries and players worked together to defend castles. Any player\ncould create a new country and place flags. Flags were placed at the top of\ncastles and allowed a country to claim ownership of the surrounding land.
\nThere was a diplomacy system that allowed countries to declare war and peace,\nand form alliances. Players could place turrets that automatically attacked\nenemy players. If an enemy country claimed a flag, it would gain ownership of\nthe surrounding land - including any turrets.
\n\nI created a private server to test the game with friends. I hosted this at home,\nso it wasn’t online that much. I implemented the basics - countries, flags, land\nownership, and turrets - but never really got to the point where I felt like it\nwas ready to host publicly.
\nWhen I first started working on the game, I knew it would be running on a server\nand didn’t want it to crash. I added a lot of if-statements for things that\nshould never happen, and in the process caused a lot of bugs to become subtle\nbehaviour issues rather than crashes. The lesson I learned here is that it’s\nbetter to crash than to obscure a bug or end up in an unknown state.
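The two styles can be sketched like this - in JavaScript for illustration rather than the Lua the mods were actually written in, with hypothetical names:

```javascript
// Guarding "impossible" states silently turns bugs into subtle
// behaviour issues:
function getFlagOwnerSilent(flags, name) {
    if (!flags[name]) {
        return null; // "should never happen" - but callers now limp on
    }
    return flags[name].owner;
}

// Failing fast surfaces the bug at the point it occurs:
function getFlagOwner(flags, name) {
    if (!flags[name]) {
        throw new Error("unknown flag: " + name);
    }
    return flags[name].owner;
}
```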
\nFrom the start, I wanted the core mods to support being used in any team-based\ngames, even without flags. A lot of features were added in additional mods, for\nexample, flags were added by ctf_flags
. There were also a lot of configuration\noptions that were never really needed. I learned the principle of\nYou Aren’t Gonna Need It (YAGNI)\nthe hard way.
In July 2015, after a while of not working on the game, I decided to pivot to\nmaking a match-based game instead. I still planned to work on a persistent\nkingdoms game eventually, but doing a match-based game first would allow me to\ndevelop the core code and test it. I split the game into a reusable mod pack\ncalled CTF PvP Engine and a game called Capture The Flag.
\nAround this time, I played\nKing Arthur’s Gold -\na 2d sidescrolling capture-the-flag game where players build castles and then\nattack each other. The game’s structural integrity mechanic made sieges pretty\nawesome. I quite liked the defensive building part of the game and felt that a\nvoxel world was especially well-suited for this kind of gameplay. King Arthur’s\nGold was Capture The Flag’s main inspiration.
\n\nIn September 2015, I started the official Capture The Flag server, hosted on\nDigitalOcean. The server was mostly dead for a few months but then suddenly\nbecame popular - I guess it had reached the critical number of players that\nallowed the game to become fun.
\nBy this point, I had been making mods for Minetest for a couple of years but\nhadn’t created a public server before. One thing I learned was just how good\nrunning a server is at finding bugs - players do a lot of things that you don’t\nexpect. One example is when players discovered that dropping sand on flags\ncaused them to be dropped as items, allowing players to place multiple flags for\ntheir team. This made it much harder for the other team to win as they would\nhave needed to capture all the flags.
\nI found that another benefit of running a public server is getting access to a\ncommunity of players and feedback. From this point onwards, I was able to\niteratively develop the game based on how the players played it and what\nfeedback they gave. For example, the game contained team chests that allowed\nplayers to share weapons and items. I discovered that a lot of team members were\nintentionally destroying unused weapons rather than placing them in the team\nchest, to prevent newbie players from picking them up, dying, and delivering the\nweapons to the enemy team. To fix this issue, I made it so part of the chest was\nrestricted to players who had at least a certain amount of in-game experience.
\n\nI delegated the day-to-day running of the server to moderators. I wasn’t\ninterested in being a moderator or server owner, for me the fun was in creating\nan experience and seeing how the players react, rather than enforcing rules. I\nchose new moderators based on the recommendation of existing moderators - I felt\nthat the existing moderators were better placed to judge character. A lot of\npeople asked to become a moderator, but I rarely granted this as asking to\nbecome a moderator is a red flag.
\nThe DigitalOcean server I was using was quite underpowered. In April 2016, a\ncommunity member called Calinou offered to host the server and did so until they\nstopped it in November 2016.
\nIn October 2017, I decided to restart the server and development on Capture the\nFlag due to popular demand. This time, I hosted the server using a dedicated\nmachine on Kimsufi - a brand owned by OVH. With hindsight, hosting on a cheap\ndedicated server wasn’t good value for money, but it was faster than the\nDigitalOcean VPS.
\nMy aim for development at this point was to improve the variety of matches,\nincrease strategy, and incentivise co-op. It was too easy for a match to be won\nby a lone hero.
\nAfter months of work in secret, I released support for multiple maps. Instead of\nalways playing on the same map, the map was now different for each match.\nPlayers could use a dedicated tool to create new maps and then submit them for\ninclusion.
\nThe feature shipped with three maps - Two Hills and Two Lakes (the previous map\nused), The Bridge, and Caverns. Caverns was my favourite map. It’s set\nunderground, with three caverns and a small dungeon to explore. All of these\nwere disconnected; players needed to tunnel to reach the enemy team. I liked how\nthis added a stealth aspect to the game.
\nNotice how none of these maps came with prebuilt structures. The aim was still\nto encourage players to build their own defences and use the destructible\nenvironment to their advantage. But that doesn’t mean I was against it\ncompletely - some of the new maps contributed after the feature was released\nincluded entire towns.
\nApples were used to heal the player in CTF. Someone made a cheat that would eat\napples automatically when the player took damage, by emulating player input.
\nThe cheat was programmed quite poorly and was vulnerable to a\nRemote Code Execution (RCE)\nbug. This could have allowed the server and other players to run non-sandboxed\nshell commands, taking over the cheater’s computer. I didn’t exploit this for\nobvious reasons.
\nI wrote a mod to detect the use of this cheat. At the end of each match, a GUI\nwindow is shown with the match results. I dealt damage to all players when this\nwindow was shown and then listened to key presses using a hidden text field. If\na bunch of numbers were pressed (ie: 3333331), then I’d know that they were\nusing the apple cheat. The anti-cheat mod would wait a random period, 5-10\nminutes, and then ban them with an anti-cheat message.
\nThis anti-cheat wasn’t a permanent solution but was enough to catch and\nfrustrate the cheat writer, which was the main goal. We eventually replaced\napples with medkits and bandages. Players can use medkits to heal themselves but\nhave to remain still whilst it acts, leaving them vulnerable. Players can use\nbandages to instantly heal their teammates but not themselves.
\nIn 2020, the COVID pandemic meant that I was able to find time again for the\ngame. I worked on a classes feature inspired by King Arthur’s Gold. It allowed\nplayers to select a class at their team’s flag. Classes come with different\nabilities and initial items. The aim was to increase variety and allow other\nstyles of play to become more viable, as to this point the most successful\ntactic was to spam with a steel sword.
\nI also refactored a lot of the code to remove unused features left over from the\nCity vs City days. I released this and classes as Capture The Flag 2.0.
\n\nBy 2021, I had fully lost interest in working on Capture The Flag. I had limited\ntime and there were many other projects I wanted to be working on. I chose\nLandarVargan to become the new project lead\nas he had been significantly contributing to CTF for a while.
\nAfter taking over, LandarVargan added savilli to the development team. They\nworked together on Capture the Flag 3.0, a rewrite to get rid of eight years of\ntechnical debt. Many new gameplay features have been added to 3.0 including\nsupport for multiple game modes. These game modes are another way of increasing\nvariety.
\nDuring a testing session for the 3.0 rewrite, some players got ahold of\nnear-infinite stacks of grenades. This formed the inspiration for the Nade Fight\ngame mode added by savilli. In this mode, players fight with an unlimited\nsupply of 3 different grenades. Each one has a unique use and a cooldown after\nthrowing. Void grenades trap players, firecrackers damage them, and yellow\ngrenades allow the player to move quickly through the air.
\nUnlike me, LandarVargan plays existing competitive PvP games in his own time,\nincluding Albion Online and Brawlhalla. He was interested in what made those\ngames and CTF fun, and what attracted their player bases. In studying them, he got\na lot of ideas for how he could improve CTF.
\n\nI never ended up continuing my City vs City game, but someone else used the\nCTF PvP Engine to make a similar game called\nPersistent Kingdoms.
\nCapture The Flag feels slightly bittersweet to me. Whilst it has been popular\nand liked by a lot of players, I feel like it never became the game I wanted it\nto be. I was never that interested in first-person shooters; I wanted it to be a\ngame of strategy, castle building, and sieging. To get it there, I probably\nwould have had to implement some difficult features like structural integrity\nand pivot the game’s features more towards that goal. I’m not sure whether the player\nbase would have wanted this. Capture The Flag was ultimately developed by\nwatching how it was played.
\nWhilst I am no longer interested in continuing CTF myself, I am glad that the\ncommunity grew to a point where it was able to attract contributors and survive\nwithout me. I think this is the best indicator of the project’s success.
\nThe 8th anniversary of the server will be the 1st of September. Fancy\ncelebrating? From today until the 1st of September, there will be fireworks and\nparty hats on the server. You can play Capture The Flag by joining\nctf.rubenwardy.com:30001 using a Minetest client.
\n\n\n","url":"https://blog.rubenwardy.com/2023/08/16/minetest-ctf-is-10/","summary":"CTF is a PvP game made using Minetest. This article covers the history of CTF, the lessons I learned, and the changes I made along the way.","date_published":"2023-08-16T00:00:00.000Z","image":"https://blog.rubenwardy.com/static/img/Rhe2YDjcPg-540.png","tags":["minetest","projects","gamedev"]},{"id":"/2023/07/14/make-games-not-game-engines/","language":"en","title":"Make games not engines: why I'll be using Godot engine","content_html":"One of the reasons I learned how to program was to make games. Games are a\nunique form of creative medium, combining art, interactive storytelling, and\nvibrant worlds. But as a software engineer, it’s easy to lose sight of my goals\nand get trapped by the technical details. It’s common for software engineers in\ngame dev to roll their own engine, which I believe reduces productivity and is\nultimately a distraction to making a game.
\nNote that I’m not just referring to making reusable or generic game engines; for this\narticle, I count building games directly on low-level technology like OpenGL, SFML,\nor SDL as rolling your own game engine, even if the result is specific to one\ngame. It’s more manageable, but you still end up reinventing the wheel and\nhaving to solve many of the same problems.
\nThere are plenty of other articles about whether or not to make your own game\nengine. This article is personal to me; it’s an exploration of my journey in\ngame dev, a discussion of what motivates me, and a promise for the future.
\nOne of the first programming languages I learned was Scratch. I created a lot of\nsmall games, the most impressive of which was\nSpirit Walker, an adaptation of my\nfavourite book series at the time.
\nNot long afterwards, I was given a book called “Game Programming for Teens”,\nwhich taught me how to make games using the Blitz3D game engine. I created a series\nof competitive games, mostly with split-screen multiplayer, called\nPro Series.
\nNeither of these tools was perfect: they were pretty limited, and the code I\nwrote sucked almost as much as the art I made. But those tools let me see\nresults immediately, and that is very important for motivation. I was able to\ncreate a lot of impressive things (for my age) in a fairly short time.
\nIt wasn’t long before I was running up against the limitations of Scratch and\nBlitz3D. I briefly tried Unity, but I wasn’t a fan of how it worked.
\nI googled “programming languages for game dev” and found C++. Perhaps not the\nbest advice for a beginner. From there, I found\nIrrlicht, a 3d rendering library on top of\nwhich you can make your own engines. Whilst Irrlicht is largely obsolete and\ndead these days, at the time it had a thriving community and wasn’t too\noutdated. I discovered Minetest on the Irrlicht\nforums; I’ve been contributing to Minetest since this time, eventually making\ncore developer in 2015.
\nI made a few simple game prototypes with Irrlicht, including a zombie survival\ngame. I also made NodeBoxEditor, a\nvoxel 3d modelling program for Minetest.
\nWhilst these projects served as great learning exercises, I wasn’t able to\nfinish many projects as much of the focus was on the tech rather than on the\ngames. One exception during this time was my HTML5 projects: I was able to\ncomplete quite a few small projects, including\nPhysics Simulation Kit.
\nRuben’s Virtual World Project (RVWP) was a\nhybrid top-down shooter and base builder, where you manage a group of rebels\nhiding from a dystopian government. In “base builder” mode, the player would\nbuild a base and manage a group of characters similar to RimWorld. Then they\nwould switch to “Hero” mode, and go on missions in a top-down shooter.
\nRVWP was written using C++ and SFML; I started the project to learn low-level\nmultiplayer networking but transitioned to making a game after a while. I spent\nfive years (on and off) working on it. In the end, I implemented very basic\ntop-down gameplay, a build mode, and some\nNPC AI. The amount of time I spent on\ngameplay was massively dwarfed by the amount of time working on the engine; I\nimplemented multiplayer networking, voxel lighting, entities, behaviour trees,\nand Lua scripting. By the time I got to this point, I had mostly lost interest\nin the game idea.
\nWhen I first started making games, most of my time was spent implementing\ngameplay. As I became a better programmer, my solutions became increasingly\nprogramming-focused, with less time being spent on gameplay.
\nIt all comes down to priorities. Creating my own game engines taught me a lot\nabout programming and game development. I implemented scene graphs, entity\nsystems, sounds, multiplayer networking, physics, tools, and more. I became\nbetter at C++ and programming overall. But I’m no longer interested in\nprogramming as a learning exercise; I want to actually make games.
\nTime spent rolling my own tech is time I could be working on gameplay; how good\na game is depends a lot on the number of iterations and feedback cycles it goes\nthrough. Modern game engines make it much easier to prototype and get a game to\na playable state quickly. I’m not planning on starting an indie dev company, but\nif I were - time is money and using a game engine is a competitive advantage.
\nI’m not saying that no one should write game engines. Someone needs to, and they\nare amazing learning experiences. But if you want to make a game, make a game,\ndon’t make an engine as well. And if you seriously want to make a game engine,\nmake sure you have a team and a limited scope (ideally a game in mind).
\n\nSo why did I ever decide to write my own engines? Part of it is the learning\nexercise excuse: RVWP was started so I could learn about multiplayer networking\nat a low level.
\nAnother part is being able to know and own all my code. Before I started using\nC++ and Irrlicht, I tried making games with Unity. I ran into several bugs in\nthe engine - given that Unity is closed-source, I was frustrated at not being\nable to fix the bugs or have any visibility into why they were happening. This\nis not as much of a problem with Godot, as it is open-source.
Another part of it is dogma - I felt like existing engines were impure, buggy,\nand bloated. As a software engineer, it was much more fun to write my own tech,\nespecially to begin with when the project is new. It was also easy to justify -\nthis is just a learning exercise! The most important thing when making a game is\nto make the game. Players don’t care if the game is a few extra megabytes -\ngames are regularly in the gigabytes these days! Nowadays, I care more about the\noutcome than always aiming for ‘perfect’ tech. This quote comes to mind:
\n\nObviously, the above is taken to the extreme. I’m not planning to become a\nPastafarian, but I am\nnow willing to take a more pragmatic approach to creating technology. My goal\nfor the future is to focus less on the perfect tech and more on the thing I’m\ncreating. I will use established game engines and not be afraid to write\nimperfect code.
\nI tried out Godot quite a while ago, but it was still quite immature. Godot is a\nvery capable engine now, especially Godot 4. I’ve started using it for new game\nprojects. I really didn’t like Phaser.JS, so I ended up converting my\nTin Mining game jam entry from that to Godot\nas well.
\nGodot isn’t perfect. When I first used it, I wasn’t a fan of how it used\ninheritance rather than something more component-based. But this is an example\nof ‘purity’ again: there are plenty of ways to do composition in Godot and avoid\nthe issues with OOP-based designs.
\nPremade game engines can have a bit of a learning curve, but it’s worth it in\nthe long run. Godot’s\ndocumentation is very good\nand there are a lot of third-party tutorials and support available.
\n\nI’m a software engineer. It’s easy to fall into the trap of writing code for the\nfun of it, rather than to actually make something. But this is a distraction; I\nlike making things and want to actually finish a game.
\nI’ve been learning Godot and enjoy using it. One day, hopefully soon, I’d like\nto finish a game and publish it to Steam. I’m not expecting to make the next\nbestseller, just something that I’m proud of enough to publish even if niche.
\nIn 2020, the Minetest Discord community ran a mod-making competition with\n“combat” as the theme. Participants had one week to create a mod with all the\ncode written within the time, but pre-existing art was allowed. I made a\nReal-Time Strategy (RTS) minigame called\nConquer;\nit received first place.
\nIn this article, I will discuss some of the interesting challenges that Conquer\nneeded to solve. I believe that Conquer is a great example to learn from as it\ndemonstrates best practices for Minetest modding; it is well-structured and\nunit-tested.
\nConquer is a mod\nthat adds RTS gameplay. It allows players to start Conquer mini-games, where\nthey can place buildings, train units, and fight other players.
\nWhilst I created Conquer as a mod, it was designed with plans to eventually\nconvert it into a game, with custom map generation and more in-depth gameplay.
\nPlayers select units by left-clicking them. To issue commands, the player uses\nthe scroll wheel to select a command and then right-click to perform it.\nCommands include move, melee attack, and ranged attack. Archers automatically\naim at nearby enemies, and all units will defend themselves from melee attacks.
\nUnits can damage buildings using melee or ranged attacks. When a player’s keep\nis destroyed, that player is out of the game. The appearance of a building\nreflects how much damage it has taken.
\n\nI used GitLab projects to organise my tasks. I decided what the Minimum Viable\nProduct (MVP) was and created issues for the core features. During development,\nI created new issues to break down high-level goals into smaller steps.
\nI only had seven days to make quite a complicated mod. I needed to cut a lot of\nfeatures that would have been desirable, such as squad movement, walls, and\nsiege engines. I pushed features that weren’t needed in the MVP to the bottom of\nthe backlog and resisted implementing them.
\n\nUnits need to know how to do some tasks without being micromanaged by the\nplayer. They need to understand how to navigate the world, melee attack other\nunits, and fire arrows at moving targets. To tell the units how to do these\nthings, I chose to use Behaviour Trees.
\nExplaining behaviour trees in full is a bit out of scope for this article; I\nhighly recommend reading\n“Behavior trees for AI: How they work”\nby Chris Simpson. However, I will try to explain the basics.
\nBehaviour trees allow you to control an NPC’s decision-making by combining\nreusable nodes rather than writing code for specific states. They work best when\nthe nodes are small and specific, for example, you might have nodes to check\nconditions or walk to a position.
\nBehaviour trees are basically a programming language in themselves, but for game\nAI. Execution starts at the top of the tree, and then works downwards based on\nthe rules of different nodes. A node is either running, succeeded, or failed.
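As a concrete illustration (a minimal sketch, not Conquer's actual implementation), composite nodes like sequences and selectors can be built out of plain Lua functions that each return one of the three states:

```lua
-- Minimal behaviour tree sketch. Each node is a function(state)
-- returning "running", "success", or "failure".

-- Sequence: runs children in order, stops as soon as one doesn't succeed.
local function sequence(children)
	return function(state)
		for _, child in ipairs(children) do
			local result = child(state)
			if result ~= "success" then
				return result -- "running" or "failure"
			end
		end
		return "success"
	end
end

-- Selector: tries children in order, stops as soon as one doesn't fail.
local function selector(children)
	return function(state)
		for _, child in ipairs(children) do
			local result = child(state)
			if result ~= "failure" then
				return result -- "running" or "success"
			end
		end
		return "failure"
	end
end

-- Leaf nodes: small and specific, as described above.
local function has_target(state)
	return state.target and "success" or "failure"
end

local function attack_target(state)
	state.attacked = true -- stand-in for real attack logic
	return "success"
end

local function idle(_)
	return "running"
end

-- "Attack the target if there is one, otherwise idle."
local root = selector({
	sequence({ has_target, attack_target }),
	idle,
})
```

Running `root` on a state table with a `target` returns "success" after attacking; with no target, the selector falls through to `idle` and the tree reports "running".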
\n\nUsing behaviour trees for Conquer unit AI was probably overkill. Behaviour trees\nare great for structuring complex behaviour, but Conquer’s unit AI is fairly\nsimple. I decided to use behaviour trees as I wanted to make a Lua\nimplementation to use in other projects anyway - I ended up using it\nin another project to create worker NPCs that can build.
\nConquer implements a number of these nodes, covering condition checks, movement,\nand attacks.
\nArchers need to know what direction to fire an arrow to hit a moving target. I\noriginally tried to derive the maths for this myself, but ultimately found\nan equation on Wikipedia\nthat worked. This was by far the thing I found most challenging whilst creating\nConquer; it took a while to get it to work as expected.
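For reference, the equation in question is the "angle of reach" formula from projectile motion: the launch angle θ needed to hit a point at horizontal distance x and height y, with launch speed v under gravity g, is

```latex
\theta = \arctan\left(\frac{v^2 - \sqrt{v^4 - g\,(g x^2 + 2 y v^2)}}{g x}\right)
```

Taking the minus sign before the square root selects the lower, flatter of the two possible trajectories; the expression under the root is the discriminant, and a negative value means the target is out of range.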
\nHere’s the final code. It’s just the equation converted to Lua.
\nlocal function calculate_projectile_direction(from, to, initial_speed, gravity)\n -- Turn this into a 2D problem by considering just the plane\n local delta = to - from\n local x = math.sqrt(delta.x*delta.x + delta.z*delta.z)\n local y = delta.y\n\n local x2 = x*x\n local v2 = initial_speed*initial_speed\n\n -- If there's an imaginary number, no solution is possible\n local square = v2*v2 - gravity*(gravity*x2 + 2*y*v2)\n if square <= 0 then\n return nil\n end\n\n local t_x = gravity * x\n local t_y = v2 - math.sqrt(square)\n\n local factor = t_x / x\n local u_y = t_y\n return vector.normalize(vector.new(delta.x * factor, u_y, delta.z * factor))\nend
\nThe above function returns the direction to fire an arrow to hit a stationary\ntarget. To hit a moving target, I predict the target’s position at a set time\nin the future:
\nlocal guessed_time = vector.distance(from, target) * 0.11\nlocal offset_target = target + target_velocity * guessed_time
\nThis is a simple approach that works a lot of the time, but it is flawed:\nestimating the flight time from the distance alone isn’t accurate. It’s good\nenough for my purposes, however; archers occasionally missing could be\nconsidered a feature!
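One way to sharpen the estimate (a hypothetical refinement, not what Conquer does) is to iterate it: re-estimate the flight time from the horizontal distance to the predicted position, then re-predict where the target will be, and repeat a few times:

```lua
-- Hypothetical refinement: instead of a fixed 0.11 factor, iterate the
-- flight-time estimate. Positions and velocities are plain {x, y, z} tables.
local function predict_target(from, target, target_velocity, arrow_speed)
	local predicted = target
	for _ = 1, 3 do
		-- Estimate time-of-flight from the horizontal distance
		local dx = predicted.x - from.x
		local dz = predicted.z - from.z
		local horizontal_dist = math.sqrt(dx * dx + dz * dz)
		local time = horizontal_dist / arrow_speed

		-- Re-predict where the target will be after that time
		predicted = {
			x = target.x + target_velocity.x * time,
			y = target.y + target_velocity.y * time,
			z = target.z + target_velocity.z * time,
		}
	end
	return predicted
end
```

This converges quickly as long as the target moves slower than the arrow, though it still ignores the arc of the trajectory.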
\nIn Conquer, barracks produce new units, farms produce food, and the keep must be\ndefended at all costs. The appearance of each building changes based on its\nhealth. Buildings regenerate health when no enemy units are nearby.
\n\nAn object in Minetest is a moving thing with a position and velocity. For\nexample, both players and Conquer units are objects in the world. In contrast, a\nnode is a static thing in the voxel grid. Grass, keeps, and barracks are\nexamples of nodes.
\nTo interact with an object using the Lua API, you use an ObjectRef. ObjectRefs\nhave methods giving you access to the position, velocity, and hp. Nodes do not\nhave ObjectRefs as they are not objects.
\nConquer units need to be able to attack enemy units and nodes. Whilst I could\nhave implemented these abilities separately, I decided instead to create a Node\nObjectRef, a bit of Lua code that allows interacting with a building node\nas if it were an object.
\nThis is a nice abstraction that vastly simplifies the code. I intentionally only\nimplemented a subset of the ObjectRef interface: get_pos, get_velocity,\nget_hp, and punch.
local node_object = conquer.create_target(vector.new(3, 4, 5))\n\n-- Can now interact with the node like an object\nnode_object:get_pos()\nnode_object:get_velocity()\nnode_object:punch(unit_entity, 0.5, caps)
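For illustration, here is a sketch of how such a wrapper can be structured. The real `conquer.create_target` is more involved; the `get_node_hp`/`set_node_hp` helpers and the damage calculation below are hypothetical stand-ins:

```lua
-- Sketch of a "Node ObjectRef": wraps a static node position in an
-- object-like interface. get_node_hp/set_node_hp are hypothetical
-- callbacks for reading and storing the building's health.
local function create_node_object(pos, get_node_hp, set_node_hp)
	local self = {}

	function self:get_pos()
		return pos
	end

	function self:get_velocity()
		-- Nodes never move, so the velocity is always zero
		return { x = 0, y = 0, z = 0 }
	end

	function self:get_hp()
		return get_node_hp(pos)
	end

	function self:punch(puncher, time_from_last_punch, tool_capabilities)
		-- Placeholder damage calculation; the real thing would use the
		-- tool capabilities and update the node's appearance
		local damage = 1
		set_node_hp(pos, math.max(self:get_hp() - damage, 0))
	end

	return self
end
```

The unit AI can then call `punch` and `get_pos` without caring whether its target is a unit or a building.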
\nThe mod jam received 16 entries. After the deadline, the organisers set up a\nserver to play all the mods - it was chaos and a lot of fun. I ran a server just\nfor Conquer as a flat area was needed to play on. I also didn’t get around to\nimplementing persistence before the jam deadline, so other mods crashing the\nshared server would have caused ongoing sessions to be lost.
\n\nI continued to develop Conquer after the jam, adding multi-select and other\nquality-of-life features. The next big thing to implement is squad movement,\nallowing a selection of units to move as one and avoid occupying the same\nposition.
\nThanks to GreenXenith and others for organising\nthe mod jam. It was a pretty cool experience and had a good atmosphere to it. It\nwas a lot more successful than I expected.\nYou can find other jam entries on ContentDB.
\nAfter the success of the mod jam, GreenXenith and others arranged game jams in\n2021 and 2022. These were open to the wider Minetest community and proved even\nmore successful. I was a judge on the game jams and didn’t enter.
\n\n","url":"https://blog.rubenwardy.com/2023/05/31/conquer/","summary":"Conquer is an RTS minigame created for the 2020 mod jam. It received first place. This article will discuss some interesting challenges that it needed to solve.","date_published":"2023-05-31T00:00:00.000Z","image":"https://blog.rubenwardy.com/static/img/iYbsZN4AqX-540.png","tags":["projects","minetest","jams"]},{"id":"/2023/05/04/android/","language":"en","title":"Thoughts on native Android development","content_html":"I love making apps. I like making stuff and it’s even better when I can interact\nwith the stuff I make. App development feels especially real as I can hold and\ntouch what I made. So why don’t I have many Android hobby projects? This article\nis a reflection on the projects I’ve worked on, the problems I have, and what I\nwant to try in the future.
\nI learned native Android development in 2016 to create an\napp to install mods for Minetest. I published\nMinetest Mods to the Play Store and maintained it there for several years,\nunpublishing it only after I added built-in support for downloading mods inside\nMinetest.
\nI took great pride in ensuring the app followed\nMaterial Design and had a good\nuser experience, and was rewarded with an average rating of 4.3 stars. The\nrating was reduced by the fact that it only supported mods, and not games or\ntexture packs. There were also a lot of issues with flawed data;\nI created a new backend called ContentDB to solve this\nissue, ultimately leading to the obsolescence of the app.
\n\nMy first job was doing Android, iOS, and backend development at a small software\nhouse alongside my studies. It was awesome to get paid for doing the thing I\nloved! Compare that to my job before, where I worked in hospitality for not much\nmoney.
\nI worked on six different client projects, including two iOS apps and five\nAndroid apps. Most of the team had a background in web design, so I was able to\nplay a leading role in the technical direction of the projects including pushing\nfor the adoption of best practices like continuous integration and improved\nauthentication (OAuth2).
\n\nI did other hobby Android apps but ultimately didn’t complete them due to\nissues.
\nAnalytics for Monzo was an app to show pretty charts for Monzo bank users, but I\nwas unable to finish it due to a restricted API. A locked-down banking API? Who\nwould have thought?!
\nForumMate was a forum aggregator for\nold-style bulletin board forums, but problems with getting it to work with all\nforums and concerns about legality led me to abandon the project. I started the\nproject as an excuse to get back up to speed with the latest Android SDK\nconstructs, so it did serve its purpose.
\n\nMy current project at work is an Android app used by millions of people. It’s\nwritten mostly in Java with a mixture of modern and legacy APIs. We are slowly\nrewriting the app into Kotlin.
\nOne of the things that stands out about this project is that it’s the first time\nI’ve worked on something where I know people that use it. At previous jobs, I\nmostly worked on niche apps and internal tools for aerospace/transport.
\nThe large user base also allows small changes to make a huge difference to\nusers. I noticed a UX pain point in the app - an icon being used incorrectly. I\nproposed and implemented the fix for this, which increased the discoverability\nof an essential feature from a success rate of 26% to 92%. That’s a huge\ndifference from such a small change.
\nI’m very project-driven; I’ve rarely learned technology just for the sake of it,\nit’s always because I have an idea I want to make. I learned React to create\nRenewed Tab because I couldn’t find a New Tab\nextension that worked how I wanted. The fact is that there are not many things\nthat need an Android app rather than a web app. Web apps can be used on a much\nwider range of platforms, including phones. Web apps are also much easier to\nmake.
\nThis cross-platform issue is a big one - I rarely write applications for just\nLinux or Windows, I always prefer to use cross-platform toolkits. Even if I only\npublish to Linux, the platform I use, I still choose toolkits that are\ncross-platform. So why do I treat mobile app development differently?
\nAnother issue is that the Android SDK is especially painful to deal with. It’s\ncomplicated and there are so many ways to do things; it’s not always clear which\nshould be used. Java as a language leads to boilerplate code; the introduction\nof Kotlin made a big difference in cutting this down. I do enjoy native Android\ndevelopment; the costs are just high compared to other options.
\nCross-platform toolkits allow you to develop for Android and iOS from the same\ncodebase. Some even allow targeting desktop as well. This is appealing as it\nlowers the development effort when creating apps and reduces the time to market.
\nI have historically been quite sceptical of cross-platform app development\ntoolkits as I got the impression that they gave subpar results.
\nFirst, I saw these toolkits as being quite janky and hard to make a good user\nexperience in. This may just be because my exposure has been with thin wrappers\nover webviews, on badly coded apps. Modern toolkits like React Native and\nFlutter make it much easier to create good user experiences.
\nSecond, I didn’t think these toolkits would be able to match the appearance and\nfunctionality of native apps. This is something I’d like to look into, but I\nwill note that it is very common for Android apps to break Material guidelines\nand instead look like iOS apps. Companies like to avoid doing the same work\nmultiple times. Just because an app is native doesn’t mean that it provides a\nperfect native experience; it may be easier to provide a good experience using\ncross-platform toolkits.
\n\nI need to look out for use cases and ideas I have that would suit an app. I\ndon’t want to start making things just for the sake of it. Apps are best suited\nfor ideas that require deeper integration with the device or a user experience\nthat web apps are unable to provide.
\nIn 2019, I started the ForumMate project\nas an excuse to get back up to speed with the latest Android SDK constructs -\nincluding MVVM, Room, repositories, Dagger, data binding, and Jetpack\nnavigation. I chose this idea as it had enough complexity to be an interesting\nproject.
\nHopefully this article wasn’t too negative; it’s not meant to be. I do enjoy\nnative Android dev, but I am wary of the costs, especially when working on\nprojects by myself. Native development may still have its place on bigger\nprojects.
\nThe next time I have an idea for an app, I’ll use it as an opportunity to learn\na cross-platform toolkit like React Native or Flutter. If you have any thoughts\non React Native, Flutter, or other cross-platform toolkits, it would be great to\nsee replies.
\n","url":"https://blog.rubenwardy.com/2023/05/04/android/","summary":"This article is a reflection on the projects I've worked on, the problems I have, and what I want to try in the future.","date_published":"2023-05-04T00:00:00.000Z","image":"https://blog.rubenwardy.com/static/img/KyTAkYvG2k-540.png","tags":["android"]},{"id":"/2023/04/26/bing-ban/","language":"en","title":"Bing (and DuckDuckGo) shadow banned my website","content_html":"\n Bing has now unbanned my blog. My site is still penalised/low in the\n results, but at least it's appearing now. See the bottom of this article\n for their response.\n
\nMy domain was shadow banned by Bing and DuckDuckGo last year, and I don’t know\nwhy. Since then, my root domain (rubenwardy.com) has been unbanned but my blog\nremains banned. The event also negatively impacted the search placement of my\nroot domain; another site that reuploaded some of my content is appearing as the\nfirst result when searching for it.
\nLast year, I received a message from a user saying that my website wasn’t\nappearing on DuckDuckGo. Looking into it, I discovered that DuckDuckGo uses Bing\nand my website wasn’t appearing there either.
\n\nThe first thing I did was sign all my domains up to Bing Webmaster. The\ndashboards were empty, and using the URL inspection tool resulted in the\nfollowing generic error:
\n\nI’ve read through the Bing Webmaster Guidelines and I’m certain that my website\nfollows them.
\nrubenwardy.com hosts my portfolio, blog, and several\nopen-source projects - including\nthe book I wrote\nabout creating mods for Minetest. At this point last year, my best guess for the\ncause of the ban was that Bing incorrectly thought that I was hosting stolen\ncontent, as another domain had reuploaded the book. Unfortunately, the book is\nlicensed under CC BY-SA 4.0, which means I am unable to use copyright to take\nit down.
\nAt this point, I sent an email to Bing support explaining the situation\n(UCM000004402991). After a week, I got the following response:
\n\n\nThank you for writing to Bing Webmaster Support. I have reviewed your site,\nand I have now reached out to our Product Review Group for further analysis.
\nIn the meantime, you can go through our Bing Webmaster guidelines to check if\nyour site is violating any of the rules cited in Bing Webmaster Guidelines,\nespecially the section Things to Avoid, and thoroughly check your site for any\ndeliberately or accidentally employed SEO techniques that may have adversely\naffected your standing in Bing and Bing-powered search results.
\n
Googling this, I’ve found the exact same text sent to someone else, so it’s a\ncopy-paste response.
\nI once again went through the Bing Webmaster Guidelines, but could not find\nanything that I was doing wrong. After a month, I chased up the email and got\nthe following:
\n\n\nOur engineering team is still looking into this issue. I will keep you posted\non this case. Please allow some time on this request
\n
Without any update from Bing support, rubenwardy.com started appearing in the\nresults, albeit heavily penalised. My blog has since remained banned, with the\nsame error message.
\nGiven this, it seems like the problem may be with the blog rather than my root\ndomain and the reuploaded content issue. I’m not sure what my blog could be\ndoing wrong.
\nI followed up with Bing support, and they replied saying they have been\nfollowing up with the engineering team and will keep me posted, but I never\nreceived an update.
\nMy original idea was that it was due to another site reuploading my modding book\nand being considered the original version. But given that the root domain was\nunbanned and the blog domain wasn’t, this looks less likely.
\nI have now added canonical tags to most of my pages. Unfortunately, the owner of\nthe other site is malicious so I cannot ask them to fix the problem. The book is\nlicensed under CC BY-SA 4.0, which means I am unable to use copyright to take it\ndown.
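A canonical tag is a single line in a page's `<head>` telling search engines which URL is the authoritative version of the content (the URL here is purely illustrative):

```html
<link rel="canonical" href="https://example.com/original-page/">
```

When a crawler sees a copy of the page elsewhere, the canonical URL signals which version should be indexed and ranked.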
\nI’m quite active online and tend to put my website in my profiles and forum\nsignatures. This has resulted in a lot of inbound links, especially from\nminetest.net, which may have resulted in Bing considering it link spam.
\nI developed and host ContentDB, Minetest’s website to\nfind games, mods, and texture packs. I placed a link to rubenwardy.com in\nContentDB’s footer as a copyright attribution without realising the SEO\nimplications. As ContentDB has over 14,000 pages, this has resulted in a huge\nnumber of inbound links. I have now replaced that footer link with one to an\nabout page that links to my site;\nthis will result in a single quality link rather than thousands of poor-quality\nlinks. But these links were to rubenwardy.com, not blog.rubenwardy.com - this\ndoesn’t explain why my blog is still banned.
\n\nI ran an HTML5 validator on my blog and main website, and fixed a couple of\nsmall mistakes.
\nGoogle Search Console alerted me about a large number of Not Found (404) URLs.\nThese URLs are bizarre and look like spam. Another site must be creating these\nlinks to me; perhaps they’re hoping my site is badly programmed and will return\na soft 404 with the links present. After searching, I found some\nthreads which confirmed this.\nAs I correctly return a 404, there’s nothing for me to do here.
\n\nMy blog (blog.rubenwardy.com) remains banned, and the event has negatively\nimpacted the search placement of my root domain; another site that reuploaded\nsome of my content is appearing as the first result when searching for it. This\nis especially annoying as it’s the most popular thing on my website, many users\ngo looking for it and instead find an outdated version on a dodgy domain.
\nIt appears that Bing banning small sites may be a common issue; I’ve found other\npeople reporting that their site was similarly affected:
\n\nThere are potentially over a billion websites on the Internet; Bing needs to\nindex and rank them whilst avoiding abuse. Some websites contain illegal or\nharmful content; others attempt to manipulate the search results. Given the\nscale of the problem, it is understandable for Bing to sometimes make mistakes.
\nIn the end, Dave Rupert discovered that Bing was\nincorrectly classifying his site as a spam blog.\nA spam blog is a website created purely to link to and promote other websites,\noften with useless information. Dave was able to fix his problem by blogging\nabout it and raising awareness, so hopefully this post will help resolve\nmy problems too. I’d appreciate suggestions and help resolving this issue.
\nBing has now unbanned my blog. My site is still penalised/low in the results,\nbut at least it’s appearing now
\n\n\n","url":"https://blog.rubenwardy.com/2023/04/26/bing-ban/","summary":"My domain was shadow banned by Bing and DuckDuckGo last year, and I don't know why. My root domain has since been unbanned, but my blog remains banned.","date_published":"2023-04-26T00:00:00.000Z","image":"https://blog.rubenwardy.com/static/img/--jfaG77ez-540.png","tags":["fullstack","minetest"]},{"id":"/2023/03/16/lockbox/","language":"en","title":"3D-printed IoT lock box using ESP32","content_html":"Good day and thank you for your patience while we investigated the issue.
\nFirstly, I am happy to provide you information that our Product Review Group\nsucceeded in resolving the issue which prevented your site from showing in our\nindex. After submitting your site to be reviewed, the team has decided to lift\nthe block. Allow up to two to three weeks for your site to be crawled,\nindexed, and serving again.
\nI am unable to provide you the specifics of the block, as our Product Review\nteam does not share the details of the block. However, I wish to help you with\na few > links and help docs, which will not only help in avoiding blocks, but\nalso help you to rank better in the Bing index.
\n\n
\n- Why is My Site Not in the Index?
\n- 10 SEO Myths Reviewed
\n- Is SEO The Future? No, And Here’s Why
\n- Building Authority & Setting Expectations
\n- The Role of Content Quality in Bing Ranking
\nWhen Bing crawls and indexes pages on the Internet, it makes decisions about\nwhich pages will most likely resonate with searchers, so we look at these\nsignals relative to all other sites on the web as part of our selection\ncriteria. Always ensure you follow webmaster guidelines, especially the\nsection Things to Avoid.
\nLet me know if you have any follow-up questions or concerns regarding this\nissue and I would be more than happy to assist.
\nHave a wonderful day ahead!
\n
My eventual goal with electronics is to create autonomous robots and drones; I’d\nlike to make a quadcopter with my own flight controller that can take off, land,\nand follow a target. This will be quite an ambitious project, and I’m nowhere\nnear capable enough for that yet.
\nPreviously, I created a simple plant monitor that\nreported stats to an online dashboard. This allowed me to learn soldering,\nstripboards, and 3d printing. To work on future projects, I need to\nbe able to produce ever more complicated circuits and mechanical designs.
\nAfter watching some Lock Picking Lawyer on YouTube, I was inspired to think\nabout different locking mechanisms. A locking box would be a good experiment\nwith mechanical design, and if combined with IoT, would be good for electronics\ntoo.
\nIn this article, I will cover how I created my lock box - the problems I\nhad, the iterations I made, and the final design I settled on. It’s a bit of a\nrandom project; I didn’t have set goals in mind. I started by exploring\ndifferent locking mechanisms, and then moved to focusing on the electronics and\ngetting it working.
\nMy initial idea was to use a servo to move a pin to unlock the box. Whilst I\ncould have used the servo head as part of the lock mechanism directly, I didn’t\nwant the servo to be a load-bearing part of the lock.
\nMy first design was to use a pin, a paperclip pushrod, and a two-part slot. I\nfound it quite hard to get the pin to move straight without falling out at the\nend. The slightest bit of misalignment in the two latches also caused a lot of\nfriction on the pin. Due to these problems, I decided to change to a design that\ndidn’t need as much horizontal movement.
\n\nMy second design used a servo to move an E-shaped gripper. When in the locked\nposition, the gripper blocks the lid from being removed by colliding with tabs\non the side of the lid.
\nThe nice thing about this design is that it didn’t need much calibration, the\nservo just needed to be in the right place to move the gripper.
\nHowever, the servo I used was pretty unreliable - it would frequently do\n360° rotations and break the lock. I also had problems with removing the\nlid, there was way too much flex in the lid and base, causing it to get stuck at\nweird angles.
\nAnother problem is that I designed the case before the electronics, and\nunderestimated the amount of space I’d need for them. Increasing the size of the\ncase to fit the electronics would exacerbate the problems further; I needed to\ncome up with a design with fewer moving parts.
\nI had bought a\ncheap solenoid lock from PiHut\na few months ago, so I decided to redesign the box using it. I also decided to\nget the full electronic design done first and then design around that, allowing\nthe case to correctly fit the electronics.
\nThe electronics were soldered onto two stripboards, with 22-gauge wire\nconnecting them.
\nThe main board has the ESP32 microcontroller, Real-Time Clock, MicroSD card\nreader, and solenoid driver. I included an SDCard reader to store configuration\ndata and for logging; this is something I wanted to learn how to do as it will\nbe much more useful in future projects. The solenoid driver consists of a power\nMOSFET and a flyback diode.
\nI included a separate MicroUSB port to act as the power supply. Using the\nmicrocontroller’s MicroUSB port would limit the amount of current available to\nthe solenoid and would also allow the lock to be bypassed by reprogramming the\nmicrocontroller. The MicroUSB module is connected to the main board using\n22-gauge wire.
\nThe screen board contains an SSD1306 OLED screen and four push buttons.\n3D-printed button caps rest on the buttons and allow them to be pressed through\nthe case. The lid has holes for the screen and the four button caps.
\nCreating the screen board was by far the hardest part of the project - space was\ntight and once the screen was on it blocked access to the rest of the screen\nboard. At one point, the cheap soldering iron I was using overheated - it\nstarted glowing red hot, indicating it was over 200°C hotter than it should be!\nI bought a new soldering station, a Hakko FX-888D, which made soldering so much\neasier.
\nRather than making the lid part of the lock mechanism, I added a sliding drawer\nto hold the contents. Using a smaller moving part prevents the issues with the\nplastic flexing. The drawer has a slot on the bottom for the solenoid lock, and\na lip to prevent picking.
\nThe lid slides onto the base using tabs, locking it into place vertically. The\ndrawer slides into rails on the lid and base, locking the whole thing together.\nThe drawer prevents the lid from sliding horizontally and provides extra\nstrength vertically. Using the drawer to lock the lid in place is a simple and\nelegant way to prevent access to the electronics whilst locked - it avoids\nneeding to use screws.
\nThe case has a hole for the power cable and a small platform on the inside to\nmount the MicroUSB module using screws.
\nThe software is pretty minimal. The ESP32 uses the Arduino C++ SDK, so I was\nable to benefit from existing libraries for components such as the display\n(SSD1306), Real-Time Clock, and MicroSD card reader. By contrast, one of my\nother projects is built using the Pico C++ SDK and required me to write my own\nfirmware for the same hardware.
\nWhen connected to a WiFi network, the microcontroller will synchronise the\ncurrent time using the Network Time Protocol. The time is stored in the\nReal-Time Clock module, allowing the time to be known whilst offline.
\nThe user can lock the device using a PIN or set up a time-based lock. The device\ncould also be controlled via a REST API, but I didn’t get around to exploring\nthis.
\nOne thing that surprised me about working with the MicroSD card reader is just\nhow slow it was. It took 0.65 seconds to open a file, write a few bytes, and\nthen close it. Appending to a file was super fast, however, likely because the\ncards are optimised for sequential access. This is probably just a software\nproblem. I used SD.h from the Arduino SDK, which uses sdfatlib, with a 32GB\nMicroSD card formatted to FAT32.
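As a sanity check on numbers like these, the open-write-close cycle can be timed with a small helper. The sketch below is a desktop `std::chrono` illustration of the measurement pattern (the helper name and path are hypothetical), not the Arduino `SD.h` code used in the project:

```cpp
#include <chrono>
#include <fstream>
#include <string>

// Hypothetical helper (not from the project): time one open-write-close
// cycle in milliseconds. On the ESP32 with SD.h this pattern took ~650ms;
// on a desktop filesystem it is near-instant, which is what makes the SD
// card figure so surprising.
double timeOpenWriteClose(const std::string &path, const std::string &data) {
    auto start = std::chrono::steady_clock::now();
    {
        std::ofstream file(path, std::ios::app); // append, like a log file
        file << data;
    } // the file is closed here, when `file` goes out of scope
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(end - start).count();
}
```

Running the same measurement with and without the open/close step is a quick way to separate per-write cost from per-open cost.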
\nThe drawer is too small to be useful for much - it only really fits a key or\nnote, and not much else. This is just a prototype and a learning exercise, so it\ndoesn’t matter that much. A larger version could have quite a few use cases,\nsuch as rewarding good habits.
\nThe solenoid design is 15% larger than the servo design but has 70% less storage\nspace. Now, the old design didn’t really fit the electronics so it’s not a fair\ncomparison, but it highlights a problem with the size of the box vs the size of\nthe lockable storage space.
\n\nOne way to solve this would be to miniaturise the electronics. A Pi Pico W would\nhave a smaller footprint and also has a built-in RTC, so the external RTC could\nbe removed. Making a PCB could make the electronics smaller, especially if you\nembed the various breakouts onto it. Gluing the screen rather than using a\nholder plate to screw it in would also save space.
\nAnother option would be to make the whole thing bigger. The same amount of\nelectronics is required no matter the size, so a bigger box would have higher\nstorage efficiency.
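To make the fixed-overhead argument concrete: if the electronics occupy a fixed volume, the usable fraction of the case grows with its size. The numbers below are purely illustrative, not measurements from the actual box:

```cpp
// Usable-storage efficiency of a case: the fraction of its volume that is
// not taken up by the (fixed-size) electronics.
double storageEfficiency(double caseVolume, double electronicsVolume) {
    return (caseVolume - electronicsVolume) / caseVolume;
}

// With hypothetical numbers - 400 cm³ of electronics - a 600 cm³ case is
// only about one-third storage, while a 1200 cm³ case is about two-thirds.
```

The same relationship explains why the small prototype compares so poorly: the fixed electronics dominate at small case sizes.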
\n\nAnother possible future improvement would be to add a small li-ion battery,\nallowing the box to become more portable.
\nA limit switch could be used to automatically detect when the drawer is in the\nbox. This would be a small user-experience improvement.
\nIt would be good to get my custom mechanical designs based on servos working.\nWhilst there isn’t a huge amount of benefit to them, I am disappointed that I\nwasn’t able to get them working correctly.
\nWell, I definitely learned a lot in this project. I probably won’t continue to\nwork on it as I’ve already spent more time on it than I was planning and I don’t\nhave much use for it.
\nI’ve been working on a new robotic car platform using a Pi Pico. The aim is for\nit to be a testbed for experimenting with various robotics techniques, including\ndead reckoning and\nSimultaneous Localization and Mapping (SLAM).\nI’d also like to learn how to use LiPo batteries and RF transceivers.
\nAnother project is likely to be an automatic watering station for my basil,\nwhich needs to be watered daily. This would be a nice continuation of my\nplant monitor project.
\n","url":"https://blog.rubenwardy.com/2023/03/16/lockbox/","summary":"In this article, I will cover how I created my lock box - from the problems I had, the iterations I made, and the final design I settled on.","date_published":"2023-03-16T00:00:00.000Z","image":"https://blog.rubenwardy.com/static/img/1XWIOl7tCu-540.png","tags":["projects","electronics","3d_printing"]},{"id":"/2023/02/17/interesting-reads/","language":"en","title":"My favourite programming and game dev articles","content_html":"Articles are a great way to expand your knowledge as a programmer, by learning\nfrom the experiences of others. A good article is also interesting to read.\nHere is a collection of articles I recommend reading.
\nTynan Sylvester’s\nThe Simulation Dream\nexplains how to create a deep simulated world without super complex systems that\nwould confuse players. It makes the point that the player’s perception of the\ngame is more important than what the game is actually doing.
\nTynan is the creator of RimWorld, my favourite game\nof all time.
\n\nBehaviour Trees are a nice way to construct game AIs. They allow you to control\nan NPC’s decision-making by combining reusable nodes rather than creating new\nstates in a state machine. They’re great for making extensible\nworker NPCs. Below are the two articles\nI used to learn how behaviour trees work.
\nHere is an interesting article about how enemy NPCs work in Mafia III, with\nstealth, cover, perception, tactics, and combat.
\n\nThis is actually a talk rather than an article, but it was too good to leave out.\nThe creator of Box2D talks about how to implement a character controller with\nswept collision and an iterative solver. It’s dense but super interesting and\nthe best resource I’ve found about building your own physics engine.
\n\nThis is one of the most technically impressive things I’ve seen made in Minetest.\nIt’s a base-building game where you command minions to build up the base and\ndefend it from attackers. Without spoiling too much, they use TypeScript,\ntheir own JSX GUI API, data binding, and a hierarchical pathfinder.
\nI’m a big fan of RimWorld and have\nincorporated mechanics from it in my own games\nbefore, so this is pretty cool to see.
\n\nStack Overflow has a reputation for being unfriendly to those asking questions,\nbut it turns out that this is by design - SO’s primary purpose is to be a wiki\nfor programming answers, rather than a place to ask questions. This interesting\narticle by one of Stack Overflow’s founders explains the reasons behind the\nsite’s design.
\n\nWritten by the other founder of Stack Overflow, this article discusses the\ndangers of rewriting software from scratch.
\n\nYAML is a nice, simple format for configuration, right? Wrong. YAML has a\nshocking number of edge cases and foot guns.
\n\nI hope you found these interesting. Feel free to reply with any articles or\ntalks you liked or think I might like.
\nCover Image\n© Engin Akyurt
\n","url":"https://blog.rubenwardy.com/2023/02/17/interesting-reads/","summary":"Articles are a great way to expand your knowledge by learning from the experiences of others. Here is a collection of articles I recommend reading.","date_published":"2023-02-17T00:00:00.000Z","image":"https://blog.rubenwardy.com/static/img/fBiLy502MT-540.png","tags":["gamedev","blogging"]},{"id":"/2023/01/24/using_sdl_gamecontroller/","language":"en","title":"SDL_GameController: Making gamepads just work","content_html":"When implementing controller support in a game, it’s desirable for gamepads to\njust work without a lot of user configuration. Platform APIs are pretty useless\nfor this, the solution is an API like SDL_GameController that allows you to\ntarget a large number of gamepads without much effort.
\nEach operating system has its own API for gamepad input. Windows has XInput, and\nLinux has the joystick and evdev APIs. When a gamepad button is pressed,\napplications receive a button id. This is just a number; the OS provides no way\nto know which button id corresponds to which physical button. The ids are not\nthe same across different gamepads and platforms, making it very hard to support\nmore than a couple of devices.
\nif (SDL_JoystickGetButton(joystick, 8)) {\n std::cerr << \"no idea what button 8 is\" << std::endl;\n}
\nOne thing platforms do give you is the name, model, and manufacturer\nof the game controller. If you test with a large number of gamepads, you can\ncreate a database from gamepad name to layout. Luckily, SDL_GameController\nhas already done this for you. Instead of a random number, you can use a named\nbutton that will work no matter the gamepad and platform:
\nif (SDL_GameControllerGetButton(controller, SDL_GameControllerButton::SDL_CONTROLLER_BUTTON_X)) {\n std::cerr << \"X was pressed!\" << std::endl;\n}
\nSDL_GameController is an abstraction that allows you to program input based on\nan Xbox-like controller layout, and have it work with a huge variety of devices.\nIt’s a layer built on top of the raw SDL_Joystick API.
\nXbox-like controllers have a D-pad, two analog sticks, four buttons on the right\n(often called A/B/X/Y), shoulder buttons, and three buttons in the middle\n(start/back/logo). Examples include the PlayStation DualShock, Nintendo Switch,\nand Steam Deck (Neptune) controllers.
\nSDL2 comes with a database of game controllers, mapping from controller id to\nlayout information. Users can also provide custom mappings, which is supported\nwithout you needing to do anything.
\nSteam also comes with built-in support for SDL_GameController, allowing users to\nremap their controllers for your game in Steam. If Steam supports the\ncontroller, your game will too.
\nWhen initializing your SDL2 device, add the SDL_INIT_GAMECONTROLLER
flag:
if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_GAMECONTROLLER) < 0) {\n std::cerr << \"SDL could not initialize! SDL Error: \" << SDL_GetError() << std::endl;\n return 1;\n}
\nThere are events for controllers connecting and disconnecting, but they’re\nnot fired for controllers that are already connected. So, before starting\nthe game loop, you’ll need to check for existing controllers:
\nSDL_GameController *findController() {\n for (int i = 0; i < SDL_NumJoysticks(); i++) {\n if (SDL_IsGameController(i)) {\n return SDL_GameControllerOpen(i);\n }\n }\n\n return nullptr;\n}
\nFor simplicity, we’ll only be setting up and tracking a single controller at a\ntime. But it’s a similar process to handle multiple controllers.
\nSDL_GameController *controller = findController();
\nNext, you’ll need to listen for the SDL_CONTROLLERDEVICEADDED and\nSDL_CONTROLLERDEVICEREMOVED events in the SDL2 event handler:
\nswitch (event.type) {\ncase SDL_CONTROLLERDEVICEADDED:\n if (!controller) {\n controller = SDL_GameControllerOpen(event.cdevice.which);\n }\n break;\ncase SDL_CONTROLLERDEVICEREMOVED:\n if (controller && event.cdevice.which == SDL_JoystickInstanceID(\n SDL_GameControllerGetJoystick(controller))) {\n SDL_GameControllerClose(controller);\n controller = findController();\n }\n break;\n}
\nThe controller removed event gives us the Joystick Instance ID as which. To\ncheck whether this is the same controller, we first need to get the SDL_Joystick\nand then the ID for the controller.
\nThere are two ways of receiving input; you can either use events or you can\npoll for input state. The following events are available:
\nSDL_CONTROLLERDEVICEADDED
: A controller was added.SDL_CONTROLLERDEVICEREMOVED
: A controller was removed.SDL_CONTROLLERDEVICEREMAPPED
: A controller was remapped, you can mostly\nignore this event unless you use raw joysticks.SDL_CONTROLLERBUTTONDOWN
: A button was pressed on a controller.SDL_CONTROLLERBUTTONUP
: A button was released on a controller.SDL_CONTROLLERAXISMOTION
: An axis was moved, such as a thumbstick or analog trigger.Here’s an example of handling the X button using the event:
\ncase SDL_CONTROLLERBUTTONDOWN:\n if (controller && event.cdevice.which == SDL_JoystickInstanceID(\n SDL_GameControllerGetJoystick(controller))) {\n switch (event.cbutton.button) {\n case SDL_GameControllerButton::SDL_CONTROLLER_BUTTON_X:\n std::cerr << \"X pressed!\" << std::endl;\n break;\n }\n }\n break;
\nand by polling for input state:
\nif (SDL_GameControllerGetButton(controller, SDL_GameControllerButton::SDL_CONTROLLER_BUTTON_X)) {\n    std::cerr << \"X was pressed!\" << std::endl;\n}
\n\nUsing SDL_GameController, it’s possible to target a large number of gamepads on\ndifferent platforms without much effort. How to manage multiple controllers,\nkeyboard and mouse, and input binding is definitely a story for another time.
\nThe\ncomplete example code for SDL_GameController\nis available for\ndownload\nfrom GitLab.
\nImage\n© 2019 Stas Knop
\nLast month, ContentDB received a large spike in traffic.\nContentDB is the website\nI created that allows you to install mods/games/texture packs from within\nMinetest’s main menu. On Saturday 5th, Bobicraft,\na Spanish-language Youtube channel with 5M subscribers, posted\na video about Minetest.\nAt the time of writing, it has reached 1.3 million views and is now the\nmost-watched Minetest video.
\nIn this article, I will show the data about this increase in traffic. As the\ntools that I use to monitor ContentDB are ephemeral, this will keep the data\naround for future reference.
\nSaturday 5th saw an overall 3.4x increase in unique visitors and a 10x increase in\ntotal bandwidth used. At its peak at 15:18 UTC, ContentDB saw a 46x spike in\ndownloads.
\n\nContentDB usually serves around 2TB of downloads in a month, but that weekend\nalone used 0.86TB. If this level of activity were sustained for a whole month, it\nwould use 16TB. My hosting quota is around 100TB a month, so there’s still\nplenty of headroom.
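As a back-of-envelope check, the extrapolation is just scaling a short window of traffic up to 30 days. The sketch below assumes the 0.86TB covered a full two-day weekend; the article's figure may use a slightly different window, so the exact result depends on that assumption:

```cpp
// Back-of-envelope: scale a short traffic window up to a 30-day month,
// assuming the rate stays constant. With 0.86 TB over an assumed 2-day
// weekend this gives roughly 13 TB - the same order of magnitude as the
// monthly estimate above.
double extrapolateToMonth(double usageTB, double windowDays) {
    return usageTB * (30.0 / windowDays);
}
```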
\n\nA few days after this event, I implemented per-package statistics on ContentDB,\nshowing daily downloads. I ingested Nginx logs to get data covering the\nevent and several weeks prior. Package statistics are public, you can go to\nany package page and click “Statistics” to see them.
\n\nContentDB was easily able to handle the load; I didn’t notice any performance\ndegradation. ContentDB is hosted on a Virtual Dedicated Server (VDS) from\nnetCup (RS 2000 G9.5, 6-core 3.35GHz, 16GB RAM), and so has dedicated compute\nresources.
\nEarlier this year, we saw a huge spike after AntVenom released a video on Minetest.\nThis resulted in quite a big growth in our community, with the Discord server\ngaining a few hundred new users.
\n\nDespite the new video exceeding the AntVenom video’s view count, we haven’t seen\nanything like the same level of growth. Not many new users joined the Discord\nserver, and there was no noticeable uptick in online players. That being said,\nthere wasn’t an increase in online players after the AntVenom video either.
\n\nThe latter half of the video focused on ContentDB and the packages available\nfrom it, so perhaps a lot of the viewers decided to download and try them out in\nsingleplayer.
\nAnother reason may be that the video is in Spanish, while our communities are\nmostly English-speaking.
\nThe AntVenom video accurately portrayed Minetest and our values in a more\nserious manner, which may have led to the right kind of users finding Minetest\nand joining our communities.
\nNow that Minetest has been noticed by a couple of big YouTubers, we might start\nseeing more YouTubers and media sources checking out Minetest.
\nOne of the things we’ve been working towards is a redesign of the main menu.\nThis is vital to attract and retain users, as the current main menu doesn’t do a\nvery good job of promoting Minetest or its values as an open game creation\nplatform.
\nAnother thing we’ll be doing soon is listing Minetest on Steam. It’ll start in a\n“coming soon” state whilst we improve the main menu and other things, allowing\nplayers to wishlist it.
\n","url":"https://blog.rubenwardy.com/2022/12/08/contentdb-youtuber-finds-minetest/","summary":"A month ago, ContentDB received a large spike in traffic. Bobicraft, a Spanish-language Youtube channel with 5M subscribers, posted a video about Minetest.","date_published":"2022-12-08T00:00:00.000Z","image":"https://blog.rubenwardy.com/static/img/po1vDlLBwW-540.png","tags":["minetest"]},{"id":"/2022/12/02/minetest-steam-deck/","language":"en","title":"Minetest on Steam Deck: usage and review","content_html":"A few months ago, my Steam Deck arrived. So, obviously, one of the first things\nI did was play Minetest. Whilst Minetest isn’t available on Steam yet, it is\navailable on Flatpak and so can be installed from desktop mode.
\nMinetest plays very well on the Steam Deck: it performs great, and Steam Input\nmakes setting up controls easy. The Deck is an open console, so there’s full\nsupport for modding and you can cross-play with other platforms.
\nIn this article, I will explain how to set up Minetest on the deck, and review\nthe controls, performance, and experience. I’ve already written\nmy thoughts about the Deck\nin another article, and recommend checking that out.
\nAs Minetest isn’t available on Steam yet, you’ll need to boot into the Deck’s\nDesktop Mode and install Minetest from Flatpak. Press and hold the power button\nand then select Switch to Desktop Mode.
\nNext, open up the Discover app. To show the onscreen keyboard, you’ll need to\npress Steam and X at the same time. Search for Minetest and install it.
\nOnce installed, open up the start menu, find Minetest, and right-click on it by\npressing the left trigger (L2). Select “add to Steam” and wait for it to finish.
\nYou can now boot back into game mode. You’ll find Minetest in Library >\nNon-Steam Games.
\n\nThe Steam Deck’s UI shows art for each game. It’s likely that you’ll want to add\nart to Minetest so that it’s not just a grey square.
\nRun Minetest at least once in Game Mode, stop it, and return to Desktop Mode.
\nNow you need to download the cover art you’d like to use. I’ll be posting some\nexample images at some point. In the meantime, you’ll need to provide horizontal\n(460x215px) and vertical (600x900px) images in the correct resolutions.
\nLaunch Steam and go to the Library. Find Minetest’s tile on the right side of\nthe window, then right-click > Manage > Set Custom Art > Select the vertical\nimage.
\nScroll down until you see the blank landscape image of the app.\nRight-click > Manage > Select artwork > Choose the horizontal image.
\nIf you’re stuck, here’s the tutorial I followed\non YouTube.
\nUnfortunately, Minetest doesn’t have built-in support for\ngamepads.\nThere is some basic code for it, but it’s very broken and doesn’t support\nuser interfaces at all. I’m planning to fix this at some point, but in the\nmeantime, we’ll need to make do without it.
\nWhen a game doesn’t have built-in controller support, you can use Steam Input to\nbind the controls to fake mouse movement and key presses.
\nI’ve already made a config for Minetest. Select the gamepad icon, and then click\nthe layout button. Go to Community and select “Minetest Semi-official Steam\nDeck” by rubenwardy. You can now customise my layout as desired, or\nalternatively make your own layout based on Templates > Keyboard (WASD) and\nMouse.
\n\nIf, at any time, you’ve forgotten the controls or want to change them, you can\npress the Steam button and go to controller options. It shows you the controls\nand allows you to edit them whilst the game is running.
\nThe game plays well in classic creative/survival gameplay; it’s easy to place\nblocks and move around. Playing PvP or games that require parkour is harder, but\nthere are tricks to making it playable - you can use the touchpad for finer\nmovement or the gyro to aim.
\nOne problem with going from a keyboard+mouse game to a gamepad is that a gamepad\nhas way fewer buttons. Steam Deck comes with extra buttons on the back, which\nare great for extra actions. You can also use the left touchpad as a radial\nmenu, selecting options by dragging your finger to an angle and pressing to\nselect.
\n\nYou can use the right touchpad or thumbstick to move and click the mouse.\nBecause “place” is right-click, you can use the left trigger to right-click in\nthe inventory, allowing you to place a single item in a slot. You can also use\nsneak to shift+click.
\nSteam Deck does come with a touchscreen, but because Minetest runs as a regular\ndesktop app, touch input just emulates a mouse. You can’t use the touch gestures\navailable on the Android port.
\nTo type, you’ll need to manually open the keyboard by pressing Steam+X.
\nInteracting with Minetest’s User Interfaces is perfectly usable but is a bit\nclunky. It would be nice to have intelligent focus using the left thumbstick,\nbut this will require engine changes to fully support gamepads.
\nIt’s hard to get an objective measure of the performance whilst playing Minetest\nas it depends a lot on what you’re doing. How much of the world and what you can\nsee makes a big difference - viewing a large dense area of the map with lots of\ntrees, walls, and entities will drop the FPS lower than viewing a similar area\nof grassy plains.
\nStarting a brand new world, I got a stable 60FPS walking around a dense forest\nwith default settings (200 view range). Flying above the forest, the FPS drops\ndown to 25FPS, which is still perfectly playable.
\nWalking around my farm with default settings, I get 56-60FPS with occasional\ndrops down to 50FPS. The town has a lot of underground tunnels with mesecons and\npipeworks networks. There are also a lot of mobs roaming around, it is a farm\nafter all. Turning on medium shadows drops the FPS to 37-45.
\nMinetest is largely CPU bound. The Steam Deck has an AMD RDNA 2 GPU, which is\nquite good in terms of performance per watt; it certainly outperforms the iGPU\non my laptop. If Minetest were to receive the attention needed to make use of\nmodern rendering pipelines and offload more work to the GPU, I’m sure the Deck\nwould perform even better.
\nSteam Deck is an awesome new way to play Minetest. Its openness makes it the\nbest console to play Minetest on.
\nIn the future, I plan on adding full gamepad support to Minetest, which will\nmake using GUIs much easier. I also plan on releasing Minetest to Steam once\nwe have a new main menu; this will make it easier to install Minetest on the\nDeck.
\nThis article was originally going to be a YouTube video before I discovered that\nrecording the screen was painful, almost as painful as recording the voice-over.
\n","url":"https://blog.rubenwardy.com/2022/12/02/minetest-steam-deck/","summary":"In this article, I will explain how to set up Minetest on the deck, and review the controls, performance, and experience.","date_published":"2022-12-02T00:00:00.000Z","image":"https://blog.rubenwardy.com/static/img/PCDEf-Th5O-540.png","tags":["minetest","steamdeck","linux","reviews","tutorials"]},{"id":"/2022/11/16/thinkpad-x1-fingerprint-auth/","language":"en","title":"Setting up fingerprint auth on Kubuntu (Thinkpad X1)","content_html":"Using a fingerprint to unlock your computer or elevate privileges (sudo) is a\nnice convenience, saving time during daily use of a computer. This article will\nshow you how to set up fingerprint authentication on Ubuntu KDE. You’ll still\nneed to enter your password to login, but once logged in you will be able to use\na fingerprint to unlock or use sudo.
\nThis guide is based on a Thinkpad X1 running Kubuntu 22.04, but it should work\nfor most laptops with supported fingerprint sensors. You can check whether your\nfingerprint sensor is supported by searching for “Linux fingerprint” and your\ncomputer’s make/model. The Arch wiki is\nespecially useful\nfor this sort of thing, even if you don’t use Arch.
\nIt goes without saying that this comes without warranty; back up your data, and\nhave a LiveUSB ready. There are some pointers on recovering from mistakes at the\nbottom of this page. Luckily, I never needed to use a LiveUSB; I could easily\nrecover from a virtual console.
\nRun lsusb
and check that you can see your fingerprint sensor.\nFor a Thinkpad X1, this will be 06cb:00bd Synaptics, Inc. Prometheus MIS Touch Fingerprint Reader
.
If you don’t, then your firmware may be outdated or the sensor may not be supported.\nYou can use fwupdmgr
to update your firmware.
Next, install fprint: sudo apt install -y fprintd libpam-fprintd
You now need to enroll at least one finger so that fprint can recognise you.
\nYou can do that using the GUI, by going to Settings > Users >\nConfigure Fingerprint Authentication.
\nAlternatively, you can use the CLI: Run fprintd-enroll $USER
and swipe your\nfinger over the sensor 3 times.
Enable the fprint pam module by running sudo pam-auth-update
, checking Fingerprint,\nand selecting OK.
In theory, the above should be all that you need to do to use fingerprints for\nlogin and sudo auth, but I found that logins were broken. So we’ll need to do a\nfew more steps to fix that.
\nThe Pluggable Authentication Module (PAM) configuration files can be found at\n/etc/pam.d/
, and are used to determine how to authenticate the user. I needed\nto change the order of the authentication methods to get it to work.
First, comment out the fprint
line in /etc/pam.d/common-auth
if present, to\nprevent fprint from always being used.
To use fingerprints for sudo, add the following lines to /etc/pam.d/sudo
above\n@include common-auth
:
auth [success=2 default=ignore] pam_fprintd.so max_tries=2 timeout=10\n
\nThe [success=2 default=ignore] control tells PAM that, when the fingerprint\nmatches, it should jump over the next two auth modules; with Ubuntu’s stock\ncommon-auth, that skips pam_unix and pam_deny, so pam_permit then accepts the\nauthentication without asking for a password.\nTo use fingerprints to unlock, create a new file /etc/pam.d/kde
:
auth sufficient pam_unix.so try_first_pass likeauth nullok\nauth sufficient pam_fprintd.so max_tries=2 timeout=10\n
\nLock your screen, and check you can still use your password to unlock - this is\nuseful when the fingerprint sensor fails. Now check that the fingerprint works;\npress Enter on the password field and then touch the sensor.
\nWhen changing authentication settings, there’s always a possibility of being\nlocked out; this happened a few times before I figured out how to make this\nwork.
\nYou can switch to a Virtual Console by pressing Ctrl + Alt + F2. This is a\nfullscreen console that allows you to log into the system without going through\nthe lock screen. From here, you can edit the configuration files using nano or\nyour CLI editor of choice.
\nSee Linux Virtual Consoles Explained for more info.
\nAs a last resort, it’s possible to use a LiveUSB/LiveCD to edit the\nauthentication configuration files. This is possible for both unencrypted and\nencrypted drives, as you can\nmount encrypted partitions using cryptsetup.
\nImage\n© 2019 Olena Bohovyk
\nIn 2016, I created an app to install mods for Minetest’s Android port. It\nwas my first Android app; it taught me a great deal about Android development\nand also helped me get my first programming job.
\nMinetest is an open-source game engine with millions\nof downloads and thousands of weekly players. The project has a very active\nmodding community, with many games available to play. Before I created the app,\nusers had to manually install content by unzipping their files into a directory;\nthis was a poor user experience, especially on Android, so I created the app\nto make this easier.
\nMinetest now has ContentDB, a platform I created to\nbrowse and install mods and games within Minetest. Because of this, the app is\nnow obsolete and is no longer available for download. That doesn’t mean this app\nis fully gone - the lessons I learned live on in ContentDB.
\nFor the app to function, it needs a list of mods that can be installed.\nIn 2017, the closest thing to this list was\nKrock’s Mod Search, an index\nof mod topics on the forum, but this was super unreliable. It was common for\nthere to be no download links or git repo, and a lot of the metadata was plain\nwrong.
\nMy first challenge before I could even start on the app was to create a backend\nAPI with validated data. I did this by using Krock’s Mod Search as a base and\nthen performing extra checks. I fetched forum topics to extract more data and\nlinks. I fetched downloads and checked their contents, making sure that they\ncontained a valid mod with the correct mod name. I was able to manually override\nthe data where the automatic crawler failed.
\nOne of the most important parts of this validation system was good tooling. I\ncreated a simple web front-end that allowed you to find mods and see any problems\nthat were preventing them from appearing in the app.
\nThe app’s homepage had a list of mods, with installed mods appearing pinned at\nthe top. By tapping a mod you go to the mod’s detail screen, where you can see\nmore information about the mod and install it.
\nMod installation was done in a background service, allowing the user to navigate\naway from the mod page whilst it was being installed. A notification showed the\ncurrent download progress. Once a mod was installed, the app could then prompt\nthe user to install missing dependencies.
\n\nI published the app on Google Play and F-Droid. It was received with favourable\nreviews, averaging 4.3 stars.
\nI maintained it for 3 years, adding features to improve user experience such as\ndependency installation and world configuration, and rewriting parts as my\nknowledge of Android dev increased.
\nAs the app predated Android Kotlin and Jetpack by over a year, it was originally\nwritten in Java. Towards the end I rewrote parts of the app in Kotlin using\nmore modern techniques. With the introduction of ContentDB, I ended up\nsunsetting the app and removing it from the Play Store.
\nThe Minetest Mods app was my first Android app; it taught me a great deal about\nAndroid development. I’m happy with the result; I think it looked good and had a\ngood user experience. One of the reasons I like making user interfaces is being\nable to create quality user experiences that I can be proud of.
\nOne of the primary flaws in the app was the data. It was quite unreliable; a lot\nof the mods failed to install due to changes that the crawler didn’t pick up.\nI made a lot of changes to the backend to improve this situation, and they did\nhelp. But the problem was ultimately solved by ditching the crawler and using\nmods from ContentDB instead.
\nAnother flaw is that the app was designed just for mods, and not for other\ncontent like games and texture packs. Towards the end, I started to work on\nextending the app to support games and texture packs - but I was never happy\nwith the result, and so it never made it into a release.
\nThe app also got me my first programming job at a software house, creating\nAndroid and iOS apps for clients. Being able to show my app on Google Play in\nthe interview was a big selling point.
\n","url":"https://blog.rubenwardy.com/2022/10/22/Minetest-mods-app/","summary":"In 2016, I created an app to install mods for Minetest's Android port. It taught me a great deal about Android dev and helped me get my first programming job.","date_published":"2022-10-22T00:00:00.000Z","image":"https://blog.rubenwardy.com/static/img/KC4e_DF3Vc-540.png","tags":["projects","android","minetest"]},{"id":"/2022/10/16/steam-deck-upload-game/","language":"en","title":"Devkit CLI: Upload your game to Steam Deck from your dev computer","content_html":"Steam Deck is a new portable gaming\nconsole from Valve. The Deck is a very open platform as it’s a full Linux PC,\nmaking it very easy to use third-party games and stores. All Decks can act as a\ndevkit machine, and come with devkit tools preinstalled. There was a Steam Deck\nDevkit edition, but that was a preproduction version of the hardware to allow\ngame developers to test their games.
\nThis article will show you how to upload and run your games on the Steam Deck\nfrom your development computer, using a VSCode task or a script you can use in\nany editor. Whilst you could develop on the Deck directly, it would require\nsetting up a new dev environment. Being able to develop on your computer and\ntest on the Deck allows you to reuse your current dev environment.
\nYou’ll need to follow the “Install tools” and “Connecting to Deck” instructions\nfound in the\nSteamworks documentation.
\nBefore we continue with the process, it’s important to explain what the DevKit\nGUI does when you upload a game. The GUI is quite complicated and there’s no\nofficial CLI or library, so it would be nice if we could simplify the process.
\nIt performs the following steps:
\nSteps 1, 2, and 4 don’t need to be done every time. Once the game is registered\nusing the GUI, you only need to upload the files to upload a new version.
\nWith your Deck connected, go to Title Upload and fill in the fields:
\nMyGame
/home/user/dev/games/mygame/build/
mygame.x86_64
You should now go to Library > Non-Steam Games and run your game. If it fails\nto open, you should check the logs - you can do that using the “Device Logs” tab\nof the Devkit Client GUI.
\nThe script uses\nrsync\nto upload the files to the deck. rsync is a common program used to copy entire\ndirectories; in this case, we’re using it to upload over SSH. If you’re using\nWindows, you may need to use WSL.
\nSave the following as ./utils/deck.sh
in your project and customise the variables\nat the top.
#!/bin/bash\n\nset -e\n\n# Details as provided to the DevKit client\nGAME_NAME=\"MyGame\"\nBUILD_DIR=~/dev/games/mygame/build/\nEXE=\"mygame.x86_64\"\n\n# Deck IP address\nIP_ADDRESS=\"192.168.0.110\"\n\n# No need to edit these\nTARGET_PATH=\"/home/deck/devkit-game/$GAME_NAME/\"\nRSA_KEY_PATH=~/.config/steamos-devkit/devkit_rsa\n\n# Upload game files\nrsync -av --chmod=Du=rwx,Dgo=rx,Fu=rwx,Fog=rx \\\n -e \"ssh -o StrictHostKeyChecking=no -i $RSA_KEY_PATH\" \\\n --update --delete --delete-excluded --delete-delay \\\n --checksum \"$BUILD_DIR\" \"deck@$IP_ADDRESS:$TARGET_PATH\"
\nYou can add this as a VSCode task by adding the following to\n.vscode/tasks.json
:
{\n // See https://go.microsoft.com/fwlink/?LinkId=733558\n // for the documentation about the tasks.json format\n \"version\": \"2.0.0\",\n \"tasks\": [\n {\n \"label\": \"Run on Steam Deck\",\n \"type\": \"shell\",\n \"command\": \"./utils/deck.sh\",\n \"presentation\": {\n \"reveal\": \"always\",\n \"panel\": \"shared\",\n \"showReuseMessage\": true,\n \"clear\": true\n }\n }\n ]\n}
\nThe fact that the Deck is a Linux PC made this super easy to do. In the future,\nI’d like to make a devkit CLI that can create games as well as update them.\nI’d also like to be able to start and stop the game, which would be super useful\nduring testing.
\n","url":"https://blog.rubenwardy.com/2022/10/16/steam-deck-upload-game/","summary":"How to upload and run your games on the Steam Deck from your development computer, using a VSCode task or a script you can use in any editor.","date_published":"2022-10-16T00:00:00.000Z","image":"https://blog.rubenwardy.com/static/img/MZtU3zaEn7-540.png","tags":["steamdeck","linux","tutorials"]},{"id":"/2022/10/14/steam-deck-is-awesome/","language":"en","title":"Steam Deck is Awesome","content_html":"My Steam Deck arrived two months ago, and I’ve been having a blast ever since.\nI’m not much of a gamer; RimWorld and Skyrim are the only two games I’ve spent\nmore than 100 hours in. I find it hard to play a game without wanting to do game\ndev instead. But with the Deck, I’ve been playing the games that have been on my\nplay list for a while.
\nThis article contains my thoughts about the Steam Deck. It’s not meant to be\na review applicable to everyone; I’m not your average user nor do I claim to\nbe. I recommend reading a wide range of reviews before buying something\nexpensive.
\nI was given an Xbox 360 when it first came out and I sold my original Xbox.\nWhile I had over 30 games for the original Xbox, I only had three games for the\nXbox 360 and didn’t have money to buy more. Because of this, I didn’t play the\nXbox 360 much.
\nThe variety and cost of games are very important for a console. The Deck comes\nwith Steam by default, giving access to a huge number of varied and inexpensive\ngames. Steam regularly runs sales, making it even less expensive to get good\ngames. I recently bought The Witcher 3, with all DLCs, for £6; that’s a\nbrilliant game with easily more than 100 hours of content.
\nThe Steam store is the leading store for PC gamers. Over the years, I’ve bought\na lot of games for cheap on Steam sales but haven’t had time to play them. With\nthe Steam Deck, I am finally playing these games. Recent examples include Portal\n1 and The Stanley Parable.
\nSteam Deck is a full Linux PC. You can enter desktop mode and install apps using\nFlatpak. Don’t want to install from Steam? You can install other game stores,\nsuch as Heroic Games (for Epic Game Store), Itch.io, and GOG. You can even use\nXbox Game Pass.
\nI was able to get one of my games working in under 30 minutes. I exported my\ngame from Godot and used the SteamOS DevKit to upload it to the Deck. The DevKit\njust uses rsync, a standard Linux file transfer utility, so it’s perfectly\npossible to upload in the same way using a custom script. The trickier bit is\nadding the game to Steam and creating the shortcuts; the devkit has a few long\nscripts to do this.
\n\nOne of the main benefits of the Steam Deck is the amazing controls. You get\nfull-size thumbsticks with ergonomic triggers and grips.
\nSteam Input allows you to easily rebind controls and add gamepad support to\ngames that don’t otherwise support them.\nMinetest doesn’t\nsupport gamepads by default, but it was super easy to set up Steam Input to fake\nkey presses and mouse movement to control Minetest. The experience is largely\nindistinguishable from the game itself supporting gamepads.
\n\nAs well as the two thumbsticks, the Deck has two trackpads. The right trackpad\nis commonly used to control the mouse or move the right thumbstick more\nprecisely. The left trackpad is commonly used as a Virtual Menu / Radial Menu,\nallowing you to press and select options quickly. The radial menu feature is\ngreat in games that have a lot of keyboard shortcuts, such as RimWorld.
\n\nThe Deck also has 4 extra buttons on the back. Not many games use these\nbuttons - Aperture Desk Job and RimWorld were the only games I found\nthat did. You can set these buttons to perform your own actions, or just\nignore them.
\nI’ve been using Linux as my primary OS for over 10 years. Linux performs better,\nis more flexible, and provides an alternative to the current duopoly of\noperating systems.
\nWhen I first started using Linux, I dual-booted as most of my games didn’t work.\nValve’s work on Proton/Wine in preparation for Steam Deck has made Linux viable\nas a gaming platform. Now, all the games I’m interested in playing work on\nLinux.
\nSteamOS is a very user-friendly introduction\nto Linux; you don’t need to use the terminal or any desktop features, and the\nread-only distro makes updating unlikely to cause issues.
\nFrom Valve’s perspective, Linux provides an alternative to Windows that reduces\ntheir dependence on Microsoft. Valve started working on improving Linux as a\ngaming platform after Windows 8 required the Windows Store to install apps.
\n\nI work on my laptop. Having a dedicated machine makes gaming easier and puts me\nin the right mood for it. It’s portable but also easy to dock to a TV.
\nBut the Deck isn’t perfect. It’s still clearly a work in progress; I’ve found\nquite a few bugs in the Steam UI while using it. Valve releases updates pretty\nregularly, and all the bugs I’ve found have since been fixed.
\nAnother sticking point is that it’s not very clear what resolutions are being\nused when using an external display, but since I started writing this post Valve\nhas released an update to fix this.
\nThe Deck produces quite a lot of heat when playing demanding games, but this\ndoesn’t actually matter much as it doesn’t appear to be throttling or causing\nissues.
\nThe battery life varies depending on the game but generally tends to be from\n2.5 to 5 hours. This is fine for playing on the go, as I don’t tend to be\non public transport or in a cafe for long, and at home I tend to play docked to\nmy TV and charging.
\nThe Steam Deck is the perfect gaming device for me. It’s open, flexible, and fun\nto use. Even if you don’t use Linux, Valve’s work for the Deck will benefit all\nLinux gamers. It’s truly the year of Linux on the desktop bus.
I created Renewed Tab last year to fulfil my personal\nneeds, and have since expanded on it based on user feedback, focusing on rich\nwidgets, a clean user experience, and customisation.
\nI wrote\nan article on the Renewed Tab blog\nwhere I explained what led me to create a New Tab extension and what the\ndesign requirements were.
\n","url":"https://blog.rubenwardy.com/2022/07/21/renewed-tab/","summary":"I wrote an article on the Renewed Tab blog where I explained what led me to create a New Tab extension and what the design requirements were.","date_published":"2022-07-21T00:00:00.000Z","image":"https://blog.rubenwardy.com/static/img/BgkFRB4RA9-540.png","tags":["projects"]},{"id":"/2022/07/17/game-ai-for-colonists/","language":"en","title":"Creating worker NPCs using behavior trees","content_html":"I’m a huge fan of RimWorld, a base building game\nwhere you manage a group of colonists. Rather than directly controlling the\ncolonists, you place blueprints for buildings, mark trees for cutting, and\nanimals for hunting. NPCs will then decide what to do automatically, based on\ntheir skills and priorities.
\nI’ve made two games recently with similar mechanics. The first was\nRuben’s Virtual World Project (RVWP), a hybrid\nbasebuilder/topdown shooter. The second was\nTin Mining, a mining sim created as an entry to\nLudum Dare 48. Both of these games allowed placing building plans that NPC\nworkers would then build out.
\nIn this article, I will explain how I implemented the NPC AIs, and the problems\nI faced.
\nA common way to implement game AI is to use a finite state machine. Different\nthings an NPC could do would be modelled as states, and the NPC would transition\nbetween states based on conditions.
\n\nOne problem with state machines is that you need to program and design for all\nthe transitions. What if an NPC dies whilst performing work? It shouldn’t keep\nworking like a zombie until it’s finished.
\nState machines work very well for simple behaviour, like that of an animal or\na dumb guard. But my NPCs need to be able to perform complex work, often with\nmultiple steps. The states would end up being pretty complicated.\nI wanted a more capable framework.
\nAnother concern was reuse. Lots of work involves the same actions - moving to a\nlocation, picking up an object, running an animation. It’s possible for states\nto reuse behaviour using utility functions, but this can be a bit painful.\nI wanted my game AI framework to make reuse as easy as possible.
\nAfter researching game AI, the obvious choice for me was behavior trees.
\nExplaining Behavior Trees in full is a bit out of scope for this article;\nI highly recommend reading\n“Behavior trees for AI: How they work”\nby Chris Simpson.\nHowever, I will try to explain the basics.
\nBehaviour trees allow you to control an NPC’s decision making by combining\nreusable nodes rather than creating new states. They work best when the nodes\nare small and specific, for example, you might have nodes to check conditions or\nwalk to a position.
\nBehaviour trees are basically a programming language in themselves, but for game\nAI. Execution starts at the top of the tree, and then works downwards based on\nthe rules of different nodes. A node’s status is either running, succeeded, or\nfailed.
\n\nIn the above tree, there is a Sequence node with two children. Sequences will\nrun each child one after another, until either a child fails or they have all\nsucceeded. The first child finds a random position, and writes it to a variable\ncalled $target
. The second child walks the NPC to that position. If the NPC\nsuccessfully finds a position and walks to it, then the sequence succeeds. If\neither child fails, for example if there is something blocking the path, then\nthe sequence will also fail.
To implement logic, you can use decorators to check conditions. If the\ncondition is true, execution continues to the decorator’s child. If the\ncondition is false, the decorator marks itself as failed and returns to the\nparent. This is very powerful when combined with a selector, a node that runs\nits children until the first succeeds.
\n\nThe above tree will check the NPC’s needs, such as health and energy, before\ndoing work. This will be checked every time the tree updates, which\nmakes it possible for death or low energy to interrupt work.
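The node types described so far can be sketched in a few lines. Here’s a minimal Python sketch (the actual implementations were a Godot plugin and my own Lua code; the `rest`/`work` actions and the `energy` blackboard key are made up for illustration):

```python
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3

class Sequence:
    """Runs children in order; stops at the first child that doesn't succeed."""
    def __init__(self, *children):
        self.children = children
    def tick(self, blackboard):
        for child in self.children:
            status = child.tick(blackboard)
            if status != Status.SUCCESS:
                return status
        return Status.SUCCESS

class Selector:
    """Runs children in order until the first one succeeds (or is running)."""
    def __init__(self, *children):
        self.children = children
    def tick(self, blackboard):
        for child in self.children:
            status = child.tick(blackboard)
            if status != Status.FAILURE:
                return status
        return Status.FAILURE

class Condition:
    """Decorator-style leaf: fails unless a predicate on the blackboard holds."""
    def __init__(self, predicate):
        self.predicate = predicate
    def tick(self, blackboard):
        return Status.SUCCESS if self.predicate(blackboard) else Status.FAILURE

class Action:
    """Leaf node wrapping a function that returns a Status."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self, blackboard):
        return self.fn(blackboard)

def do(key, value):
    # Helper: record what the NPC is doing on the blackboard, then succeed
    def run(blackboard):
        blackboard[key] = value
        return Status.SUCCESS
    return run

# Rest when energy is low, otherwise work. Because the condition is
# re-evaluated every tick, low energy can interrupt work.
tree = Selector(
    Sequence(Condition(lambda bb: bb["energy"] < 20), Action(do("doing", "rest"))),
    Action(do("doing", "work")),
)
```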
\nIt’s possible for behavior trees to include other behavior trees. The\n“FindWork” node above does this in order to perform work; each work type has a\ntree. Here’s the tree for construction work:
\n\nI implemented behavior trees in Tin Mining using the\nBehavior Tree Godot\nplugin. I wrote my own implementation for RVWP as I couldn’t find a good Lua\nimplementation; it was fairly easy to write.
\n \nJust a quick mention of another framework that I considered.\nGOAP\nis an AI system that allows NPCs to work out how to achieve a goal based on\npossible actions. For example, the NPC knows that building a house requires\nwood, and cutting trees makes wood. GOAP allows it to put these two things\ntogether, and cut trees to get wood to build a house.
\nI think GOAP isn’t necessarily mutually exclusive with behavior trees; it\noperates at a higher level. You may use GOAP to decide what actions to do, and\nthen implement those actions using behavior trees.
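The wood-and-house example can be sketched as a tiny planner: actions with precondition and effect sets, and a breadth-first search over world states until the goal holds. This is just an illustration of the idea, not code from either game, and the actions (`get_axe`, `chop_tree`, `build_house`) are hypothetical:

```python
from collections import deque

# Each action has preconditions (facts that must hold) and effects (facts it adds)
ACTIONS = {
    "get_axe": {"pre": set(), "add": {"has_axe"}},
    "chop_tree": {"pre": {"has_axe"}, "add": {"has_wood"}},
    "build_house": {"pre": {"has_wood"}, "add": {"has_house"}},
}

def plan(state, goal):
    """Breadth-first search over world states; returns the shortest action list."""
    queue = deque([(frozenset(state), [])])
    seen = {frozenset(state)}
    while queue:
        facts, steps = queue.popleft()
        if goal <= facts:
            return steps
        for name, action in ACTIONS.items():
            if action["pre"] <= facts:
                new_facts = frozenset(facts | action["add"])
                if new_facts not in seen:
                    seen.add(new_facts)
                    queue.append((new_facts, steps + [name]))
    return None  # goal is unreachable with these actions
```

Starting with no facts and the goal `{"has_house"}`, the planner chains the actions together by itself, which is exactly the appeal of GOAP.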
\nNow that we have a framework for performing work, we need a way to decide what\nwork an NPC should pick up next.
\nWhen a worker isn’t currently working on a task, it periodically asks the\nWork Manager for an available task. The Work Manager allocates tasks based on\ndistance and a heuristic weighting of the task. For example, moving dropped\nitems to the stockpile is weighted higher than mining or building, to avoid\nitems building up and clogging the walkways.
\nIn order to avoid NPCs working on the same thing, the Work Manager has a\nreservation system that allows NPCs to lock a tile, entity, or piece of work.
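The allocation and reservation behaviour described above can be sketched like this (a Python sketch for illustration; the games are in Lua and GDScript, and the task types and multiplier values here are made up):

```python
import math

class WorkManager:
    """Hands the best-weighted unreserved task to a worker.
    Lower weight = higher priority; hauling gets a multiplier below 1
    so dropped items don't build up and clog the walkways."""
    TYPE_MULTIPLIER = {"haul": 0.8, "mine": 1.0, "build": 1.0}  # hypothetical

    def __init__(self):
        self.tasks = []          # (task_id, task_type, position)
        self.reserved = set()    # task_ids locked by a worker

    def add_task(self, task_id, task_type, pos):
        self.tasks.append((task_id, task_type, pos))

    def request_work(self, worker_pos):
        best, best_weight = None, math.inf
        for task_id, task_type, pos in self.tasks:
            if task_id in self.reserved:
                continue  # another NPC already locked this work
            dist_sq = (pos[0] - worker_pos[0])**2 + (pos[1] - worker_pos[1])**2
            weight = self.TYPE_MULTIPLIER[task_type] * dist_sq
            if weight < best_weight:
                best, best_weight = task_id, weight
        if best is not None:
            self.reserved.add(best)  # reserve so no other worker takes it
        return best
```

With a haul task and a mine task at the same distance, the haul task wins because of its multiplier, and a second worker asking for work gets the mine task instead of duplicating the first worker’s job.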
\nIt’s important that NPCs are only allocated to work that they can reach - this\nis called a reachability check, and is typically implemented using pathfinding.\nPathfinding tends to be quite expensive, but there are some quick checks that\ncan skip the effort in a lot of cases. My Tin Mining game uses a graph to\nrepresent all possible paths; if a tile isn’t walkable, it’s not on the graph.\nThis means that you can immediately rule out work that isn’t on the graph, an\nO(1)
check.
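As a sketch, the pre-check is just set membership (illustrative Python; the game itself is in GDScript):

```python
class NavGraph:
    """Walkable tiles live in a set, so membership is an O(1) pre-check.
    A tile missing from the set is definitely unreachable; a tile in the
    set might still be on a disconnected subgraph, so a full path query
    is still needed later."""
    def __init__(self):
        self.walkable = set()

    def set_walkable(self, tile):
        self.walkable.add(tile)

    def maybe_reachable(self, tile):
        # False = definitely unreachable; True = worth pathfinding to
        return tile in self.walkable
```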
Pathfinding is still needed because it’s possible to have multiple disconnected\nsubgraphs on the map. Pathfinding to every possible work would be quite expensive,\nso I defer pathfinding until after the work is allocated.
\nOne problem with this approach is that it doesn’t know how close the work\nreally is to the NPC. The NPC might be three tiles from the work but 100 tiles\nto walk there due to obstacles in the way. A future improvement will be to use\nthe actual walking distance rather than the direct distance.
\nWhen an NPC finishes mining a tile, it often reveals more tiles to be mined.\nIn the time between the NPC mining the tile and finishing the work, another NPC\nmay have been allocated to the neighbouring tile. This results in a lot of\ninefficiency, especially if the other worker is far away.
\n\nAt this time, I don’t have a good fix for this due to the simple nature of my\nwork allocator. My workaround in the meantime is to avoid gaps between the tile\nbeing mined and the worker looking for work - but this doesn’t work with\nmultiple-step work, such as digging a tile and then placing a ladder.
\nIn the future, I’d like to implement a global work allocator algorithm. Instead\nof considering each worker’s request independently, it should keep track of idle\nworkers in each frame and allocate them all in one go.
\nAnother option would be to add some form of work queue or reservation. This is\nhow RimWorld resolves this problem.
\nBecause the Work Manager uses a heuristic to allocate work, it’s possible to\nchange the heuristic to change how work is allocated.
\nIn my tin mining game, I was having an issue where the workers would keep mining\ntiles and leave all the rubble and ore on the ground. Ideally, workers mine for\na bit and then haul items back to the surface.
\nThe first change I made to fix this was to make the dropped entities appear\n20% closer than minable tiles, but also add an offset of 3 tiles so that\nworkers prioritise close tiles. This mostly worked, but when mining long\nsections you can still end up with a lot of dropped items.
\nThe second change was to make that 20% multiplier vary based on how many dropped\nitems there are. When there are not many dropped items, hauling items is a\nlower priority. The more dropped items there are per worker, the higher the\npriority becomes.
\n# Calculate multiplier based on number of dropped items\ntargetDroppedItems = min(3 * numberOfWorkers, 200)\ndroppedItemsMultiplier = 0.95\nif len(droppedItems) > 3*targetDroppedItems:\n droppedItemsMultiplier = 0.4\nelif len(droppedItems) > 2*targetDroppedItems:\n droppedItemsMultiplier = 0.65\nelif len(droppedItems) > 1*targetDroppedItems:\n droppedItemsMultiplier = 0.8\n\n# Calculate weight based on distance, the multiplier,\n# and an offset to prioritise nearby tiles\nweight = droppedItemsMultiplier * \\\n worker.position.distance_squared_to(item.global_position) + 3*3
\nOne of the hardest parts of designing complex systems is making it easy to\ndebug. You want to know what an NPC is ‘thinking’, and be able to trace why it\ndid something at a certain time. I use a combination of logging and UI debug\ntools to do this.
\nI have UI debug tools for the pathfinder, for inspecting work and locks on a\ntile, for showing the NPC behavior tree and current work, and more.
\nRVWP has an immediate mode based debug API that allows adding lines and labels\nto the world. This is very useful when designing game AIs, and can be seen in\nthe RVWP animation in the above section.
\nlocal debug = rvwp.get_debug()\ndebug:draw_line(from, next_pos, \"#fff\")\nfor i=self.path_i + 1, #self.path do\n debug:draw_line(self.path[i - 1], self.path[i], \"#999\")\nend
\nCreating complex systems from simple rules is a great way to get a lot of random\nbugs.
\nRunning the pathfinder every frame would be expensive, so instead paths are\ncached by the NPC. This resulted in NPCs not being aware of map changes, causing\nthem to fall down holes or get stuck behind new walls. The fix was to validate\nthe cached path against the navigation graph whilst moving.
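The validation step is cheap if the navigation graph keeps walkable tiles in a set, as in the reachability pre-check. A sketch (illustrative Python, not the game’s GDScript):

```python
def validate_path(path, walkable):
    """Re-check a cached path against the current navigation graph.
    If the map changed under the NPC (a new wall, a dug-out hole),
    some tile will no longer be walkable and the stale path should
    be thrown away and recomputed."""
    return all(tile in walkable for tile in path)
```

The NPC calls this each time it advances to the next tile, paying a small per-step cost instead of re-running the full pathfinder every frame.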
\n \nAnother problem I kept running into was NPCs just doing nothing. One of the\ntimes this happened was because NPCs kept being allocated to work that would\nfail, perhaps because it wasn’t reachable or wasn’t possible in some other way.\nWhilst I could make the work manager check every precondition, it wouldn’t be\nvery flexible. I’d rather preconditions be implemented by decorators on the\nbehavior tree for each work type. My solution was to introduce failure\nlockouts: if a piece of work fails, the NPC won’t retry it for 10 seconds.
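A failure lockout boils down to a map from work ID to an expiry time. A minimal sketch (illustrative Python; the `now` parameter just makes it testable without waiting):

```python
import time

class FailureLockout:
    """Tracks recently failed work so an NPC won't immediately retry it."""
    def __init__(self, duration=10.0):
        self.duration = duration
        self.locked_until = {}  # work_id -> expiry timestamp

    def record_failure(self, work_id, now=None):
        now = time.monotonic() if now is None else now
        self.locked_until[work_id] = now + self.duration

    def is_locked(self, work_id, now=None):
        now = time.monotonic() if now is None else now
        return self.locked_until.get(work_id, 0.0) > now
```

The work manager then skips any task for which `is_locked` is true, and the lock expires on its own without needing a cleanup pass.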
\nThe task of finding the nearest work by walking distance can be expensive.\nRimWorld has a fairly clever algorithm for doing this which I may investigate\nin the future.
\nRimWorld uses a system of “regions” to make looking for work based on walking\ndistance super fast. Regions are essentially a higher-level pathfinding system -\ninstead of pathfinding based on nodes, it groups the map into regions at most\n16x16 tiles in size, but further divided by walls. It remembers how regions are\nconnected. To find the closest work, you can iterate through the current and\nnearby regions. The developer of RimWorld made\nan excellent video explaining regions,\nI highly recommend watching it.
\n\nMy system is surprisingly effective despite being made out of simple rules and\ncomponents. While in the future I’ll probably implement more complicated work\nallocation algorithms and a region system, it works well enough for now, allowing\nme to focus on implementing other systems and game play mechanics.
\nOpenAI’s GPT-3 is a powerful new\nArtificial Intelligence model that uses deep learning to produce human-like\ntext. It was trained on a large body of text, with sources including websites,\nWikipedia, and books. It doesn’t just understand natural language, it can also\nwork with programming languages.
\nThis topic is especially relevant with the recent introduction of\nGitHub Copilot.\nCopilot is an “AI pair programmer” for your IDE that suggests code and entire\nnew functions. It’s based on Codex, a model derived from GPT-3 and optimised\nfor code. This article will use GPT-3 and Codex, as I wasn’t able to get GitHub\nCopilot, but the results should be very similar.
\nIn this article, I will ask GPT-3 to write Minetest code and explore how much it\nknows about modding, creating simple and advanced Minetest mods. I will finish\nby using it to convert Minecraft mods to Minetest.
\nSo, what is Minetest? Let’s ask GPT-3 to explain:
\n\nExplain what Minetest is and what its benefits are\n
\n\n\nMinetest is a free and open-source voxel game engine with easy modding and\ngame creation capabilities. It has a strong emphasis on community-created\ncontent and user-friendliness. Some of Minetest’s benefits include being able\nto run on a wide variety of operating systems and hardware, having a fully\ncustomizable game engine, and being able to create and share mods and other\ncontent easily.
\n
And how is GPT-3 able to write Minetest code?
\n\nExplain how GPT-3 knows and is able to write Minetest code\n
\n\n\nGPT-3 has been trained on a large number of code examples, including code for\nMinetest. As a result, it is able to identify patterns in the code and\ngenerate new code that is similar to the examples it has seen.
\n
GPT-3 has a random element, and so can generate a different answer each time.\nThis is expressed as temperature - the higher the temperature, the more\nrandomness and the more creative the answers will be. But higher temperatures\nare also less accurate.
\nA common bias when people assess the output of GPT-3 is to select the best\nanswers. I’ll be using the first answer in most cases, and will try to keep\ntemperature low.
\nA node in Minetest is a position in the 3D grid; it’s known as a Block in\nother games. For example, a node could be air, grass, dirt, or something else.
\nMost Minetest modding tutorials start with registering a type of node, which\nmeans that there will be a lot of documentation on the Internet about doing\nthis. It seems likely that some of this information found its way into GPT-3’s\ntraining data.
\n\nWrite Lua code that adds a node in Minetest\n
\n\nWrite Lua code that adds a node in Minetest's world\n\n
minetest.add_node({x=1, y=2, z=3}, {name=\"default:dirt\"})
\nHuh. It modified the prompt to be about adding a node to Minetest’s world,\nrather than registering a node. I’m using GPT-3 in autocomplete mode here, where\nit attempts to finish the prompt you give it. In this situation, it decided that\nchanging the request was the more likely completion. One way to fix this is by\nadding a full stop.
\nRunning the prompt again resulted in a valid node registration, so it can work\nout the meaning, but not very reliably. It doesn’t help that “add” is a bit\nambiguous. I changed “adds” to “registers” to be a bit more specific:
\n\nWrite Lua code that registers a node in Minetest\n
\nminetest.register_node(\"my_node\", {\n description = \"My Node\",\n tiles = {\"my_node.png\"},\n groups = {cracky=3}\n})
\nOK, cool, it can do that. The node name doesn’t follow conventions, but that’s\nforgivable.
\nBut how much does it actually understand? This code is very simple, and it\nwould be easy for it to just echo it from tutorials. Let’s change the spec a\nlittle bit, to make it more unique:
\n\nWrite Lua code that registers a node in Minetest that says quack when you punch it\n
\nminetest.register_node(\"quack\", {\n description = \"Quack\",\n tiles = {\"default_wood.png\"},\n on_punch = function(pos, node, player, pointed_thing)\n minetest.chat_send_player(player:get_player_name(), \"Quack!\")\n end,\n})
\n\nThat’s cool. It’s able to understand that “say” means a chat message, and has\nalso inferred a node name and description based on the prompt. The node name\nstill doesn’t match the naming conventions, but the description is correctly in\ntitle case. GPT-3 is designed for finding patterns, so this isn’t too\nsurprising.
\nThis is still quite a simple example, let’s see how it does with a more\ncomplicated problem:
\n\nWrite Lua code that registers a node in Minetest that says duck when you punch\nit once, and goose when you punch it twice\n
\nfunction duck_goose(self, clicker)\n if not clicker or not clicker:is_player() then\n return\n end\n if self.object:get_luaentity().ducks == 1 then\n self.object:set_properties({\n textures = {\"goose.png\"},\n })\n self.object:get_luaentity().ducks = 2\n elseif self.object:get_luaentity().ducks == 2 then\n self.object:set_properties({\n textures = {\"duck.png\"},\n })\n self.object:get_luaentity().ducks = 1\n end\nend\n\nminetest.register_node(\"example:duck\", {\n description = \"Duck/Goose\",\n tiles = {\"duck.png\"},\n groups = {choppy=2,dig_immediate=2},\n on_punch = duck_goose,\n on_rightclick = duck_goose,\n ducks = 1,\n})
\nOK, this is interesting: it’s cargo-culting on steroids.\nCargo-culting is when\na programmer copies and combines code without understanding it. It’s mixed up\nthe entity and node APIs big time, and has also gotten “duck” and “goose” the\nwrong way around. It’s also changing the texture rather than saying something.\nTheoretically, the code almost makes sense though.
\nA big part of the problem is how the prompt is phrased. Making the prompt a\nlot less ambiguous makes a big difference, unsurprisingly. I also switched to\nEdit mode, which I found worked a bit better.
\n\nWrite this into Lua code for Minetest
\nA node that alternatives between saying duck and goose when punched\n
minetest.register_node(\"duck_goose:duck_goose\", {\n description = \"Duck Goose\",\n tiles = {\"duck_goose.png\"},\n groups = {cracky=3},\n on_punch = function(pos, node, puncher)\n local meta = minetest.get_meta(pos)\n local say = meta:get_string(\"say\")\n if say == \"duck\" then\n minetest.chat_send_all(\"Goose!\")\n meta:set_string(\"say\", \"goose\")\n else\n minetest.chat_send_all(\"Duck!\")\n meta:set_string(\"say\", \"duck\")\n end\n end,\n})
\nThis works, although it isn’t entirely optimal. It would be better to use an\ninteger rather than a string to keep track.
\nAnd yes, I did make a typo in the prompt.
\nIt would be nice to find out just how well GPT-3 can understand the code. GPT-3\nis very good at noticing patterns and regurgitating, but is it able to explain\nwhat the code is doing?
\nUsing edit mode, I asked it to add comments to the first Quack node:
\n\nAdd comments to the code
\n<Quack node code>\n
-- create a duck block\nminetest.register_node(\"quack\", {\n -- set the description to 'Quack'\n description = \"Quack\",\n -- use the default wood texture\n tiles = {\"default_wood.png\"},\n -- when the player punches the block, the chat will send them the string 'Quack!'\n on_punch = function(pos, node, player, pointed_thing)\n minetest.chat_send_player(player:get_player_name(), \"Quack!\")\n end,\n})
\nThe results of this prompt varied quite a lot, from detailed comments to a\nsingle comment for the code. The above was the result of the first run, but in\nother runs GPT-3 was wrong (saying the node made a duck sound) or just unhelpful\n(writing “this is a comment”).
\nNext, I thought I’d try getting it to explain the code as prose:
\n\nExplain what this code does
\n<Quack node code>\n
\n\nThis code registers a new node type, “quack”, which has the default wood\ntexture and sends the player the message “Quack!” when punched
\n
Full marks there, I guess.
\nThe Quack node is a fairly simple example, exactly the sort of thing you’d find\nin a Minetest tutorial. How well does GPT-3 do on actual mod code?
\n\nI asked GPT-3 to explain the\ncode\nfor health bars in\nConquer, my RTS\nmod for Minetest.
\n\nExplain what this code does to a beginner
\n<Conquer code>\n
\n\nThis code creates an entity called a “healthbar” which is used to show a\nplayer’s health in the game “conquer”. The entity is a sprite, which is an\nimage that is displayed in the game, and it is immortal, meaning it can not be\nkilled. The healthbar is updated based on the health of the player it is\nattached to. If the player’s health changes, the healthbar will change to\nreflect that. If the player dies, the healthbar will be removed.
\n
This is almost entirely correct, the only inaccuracy is that it’s attached to\na unit entity and not a player. There’s absolutely no way it could have known\nthis though, given the code snippet doesn’t mention units at all.
\nLua is a fairly common programming language, used by many more games and\nprograms than Minetest. But what about formspecs, a domain-specific language for\ndefining GUIs in Minetest?
\n\nWrite a Minetest formspec with a field and two buttons\n
\nform = \"size[8,8]\" ..\n\n-- titlebar\n\"button_exit[0,0;1,1;exit;X]\" ..\n\n-- text input field\n\"field[0.5,1.5;7,1;text;Type your text here:;]\" ..\n\n-- buttons\n\"button[3,3;2,1;button1;Button 1]\" ..\n\"button[3,4;2,1;button2;Button 2]\"
\n\nIt took some creative liberty by adding a close button to the top, but it has\nfulfilled the requirements.
\nWhat about formspec code that doesn’t have surrounding text? I provided it with\nthe following code snippet:
\nsize[8,3]\nlabel[0.5,0.1;Hello everyone!]\nfield[0.5,1;3.5,1;name;Name;]
\nI started by wondering if it could detect the language:
\n\nWrite the name of the language in a comment
\n<formspec code>\n
// Minetest Formspec\n...
\nThat’s cool! Weird that it chose //
, given that formspecs are written inside\nLua which uses --
. I guess //
is more common.
What about manipulating the formspec?
\n\nAdd another text box for age twice the width of the name field
\n<formspec code>\n
size[8,3]\nlabel[0.5,0.1;Hello everyone!]\nfield[0.5,1;3.5,1;name;Name;]\nfield[0.5,2;7,1;age;Age;]
\n\nThat’s correct, it’s also placed it without overlap.\nI used “text box” in the prompt to avoid biasing it by saying “field”.
\nDoes it understand directions?
\n\nAdd another field for age to the left of the name field
\n<formspec code>\n
size[8,3]\nlabel[0.5,0.25;Hello everyone!]\nfield[0.5,1;1.5,1;age;Age;]\nfield[2,1;3.5,1;name;Name;]
\n\nIt even moved the name field out of the way, rather than going into the\nnegatives.
\nNext, I wondered if I could get it to generate a guessing game formspec, with\na formspec submission callback.
\n\nWrite into Lua as a Minetest mod
\nCreate a guessing game formspec\n
minetest.register_on_player_receive_fields(function(player, formname, fields)\n if formname ~= \"guess\" then\n return\n end\n if fields.quit then\n return\n end\n local guess = tonumber(fields.guess)\n if guess == nil then\n minetest.chat_send_player(player:get_player_name(), \"Please enter a number.\")\n return\n end\n if guess < 1 or guess > 100 then\n minetest.chat_send_player(player:get_player_name(), \"Please enter a number between 1 and 100.\")\n return\n end\n if guess == 42 then\n minetest.chat_send_player(player:get_player_name(), \"You win!\")\n else\n minetest.chat_send_player(player:get_player_name(), \"Sorry, try again.\")\n end\nend)\n\nminetest.register_chatcommand(\"guess\", {\n params = \"\",\n description = \"Play a guessing game\",\n func = function(name, param)\n local player = minetest.get_player_by_name(name)\n if player == nil then\n -- just a check to prevent the server crashing\n return false, \"Player not found\"\n end\n minetest.show_formspec(name, \"guess\",\n \"size[8,4]\" ..\n \"label[0,0;I'm thinking of a number between 1 and 100.]\" ..\n \"field[2.5,2;3,1;guess;Guess;]\" ..\n \"button_exit[2.5,3;3,1;quit;Quit]\")\n end,\n})
\n\nOK, now it’s scaring me.
\nIt’s missing a way to submit the guess - there’s only a quit button. It works if\nyou add this, though.
\nI found the “just a check to prevent the server crashing” comment quite\ninteresting. I wondered whether this was taken straight out of Minetest mods.\nA quick search\non GitHub revealed that this exact comment is used quite frequently by mods\nin chat commands.
\nGuessing games are fairly common, what about something more complicated?
\n\nWrite into Lua as a Minetest mod
\nCreate a formspec that shows and allows maintaining a list of fruit\n
minetest.register_on_player_receive_fields(function(player, formname, fields)\n if formname ~= \"fruit_list\" then\n return\n end\n if fields.add then\n local fruit = fields.fruit\n if fruit ~= \"\" then\n table.insert(fruit_list, fruit)\n end\n elseif fields.remove then\n local fruit = fields.fruit\n if fruit ~= \"\" then\n for i, v in ipairs(fruit_list) do\n if v == fruit then\n table.remove(fruit_list, i)\n break\n end\n end\n end\n end\n show_formspec(player)\nend)\n\nfunction show_formspec(player)\n local formspec = \"size[8,9]\" ..\n \"label[0,0;Fruit List]\" ..\n \"field[0.3,1;4,1;fruit;Fruit;]\" ..\n \"button[4.3,0.5;1,1;add;Add]\" ..\n \"button[5.3,0.5;1,1;remove;Remove]\"\n for i, v in ipairs(fruit_list) do\n formspec = formspec .. \"label[0,\" .. (i + 1) .. \";\" .. v .. \"]\"\n end\n minetest.show_formspec(player:get_player_name(), \"fruit_list\", formspec)\nend\n\nminetest.register_chatcommand(\"fruit\", {\n params = \"\",\n description = \"Show fruit list\",\n func = function(name, param)\n local player = minetest.get_player_by_name(name)\n if player == nil then\n return false, \"Player not found\"\n end\n show_formspec(player)\n return true, \"Fruit list shown\"\n end,\n})
\n\nWow. It’s just missing a definition of fruit_list
, and should be using\nminetest.formspec_escape
to handle the case where a fruit name has special\ncharacters. I think it’s interesting how it went with labels instead of a text\nlist.
One thing that could indicate deep understanding would be the ability to convert\nbetween different modding APIs. I found the following hello world example\nin a\nMinecraft Forge modding tutorial:
\npackage minecraftfun;\n\nimport net.minecraft.util.text.TextComponentString;\nimport net.minecraftforge.event.world.BlockEvent.PlaceEvent;\nimport net.minecraftforge.fml.common.eventhandler.SubscribeEvent;\n\npublic class BlockPlaceHandler {\n @SubscribeEvent\n public void onPlaceEvent(PlaceEvent event) {\n event.getPlayer().sendMessage(\n new TextComponentString(\"You placed a block : \" + event.getPlacedBlock().getBlock().getLocalizedName()));\n }\n}
\nThe first prompt I tried ended up just changing the package name in the first\nline to minetest
rather than rewriting it into Lua.
\nRewrite the following code into Minetest
\n<Java code>\n
package minetestfun;\n...
\nThe second time, I explicitly mentioned Lua:
\n\nWrite into Lua as a Minetest mod
\n<Java code>\n
--[[\n This is a Minetest mod that writes into a file when a block is placed.\n It is written in Lua.\n]]\nimport net.minecraft.util.text.TextComponentString;\nimport net.minecraftforge.event.world.BlockEvent.PlaceEvent;\nimport net.minecraftforge.fml.common.eventhandler.SubscribeEvent;\n\npublic class BlockPlaceHandler {\n @SubscribeEvent\n public void onPlaceEvent(PlaceEvent event) {\n event.getPlayer().sendMessage(\n new TextComponentString(\"You placed a block : \" + event.getPlacedBlock().getBlock().getLocalizedName()));\n }\n}
\nNo, adding a comment saying “It is written in Lua” doesn’t make it written in\nLua. The comment about writing to a file is also completely incorrect.
\nNext, I went back to saying “Rewrite”:
\nminetest.register_on_placenode(function(pos, newnode, placer, oldnode, itemstack, pointed_thing)\n if placer:is_player() then\n minetest.chat_send_player(placer:get_player_name(), \"You placed a block : \"..newnode.name)\n end\nend)
\nWoah! That’s exactly what I wanted. It even checks that placer
is a player,\nwhich is something that many modders forget to do.
The only thing it gets wrong is newnode.name
. It should get the node’s\ndescription from the node definition; this can be done either using\nItemStack(newnode.name):get_description()
or by looking up the node def from\nminetest.registered_nodes
. This mistake is understandable, though, given that\nthe Java code uses “name” for what Minetest calls “description”.
As mentioned, GPT-3 includes randomness; it doesn’t always produce the same\nresult. I ran it again, with a lower temperature, and it made up an API function\ncalled send_message
:
minetest.register_on_placenode(function(pos, newnode, placer, oldnode, itemstack, pointed_thing)\n placer:send_message(\"You placed a block: \"..newnode.name)\nend)
\nGPT-3/Codex is surprisingly intelligent and can produce working Minetest mods and\nformspecs. However, it frequently produces results that look like\ncargo-culting and making\nstuff up, and it can also produce vulnerable code. It also requires effort from\na human to write good prompts and identify problems.
\nGitHub’s Copilot uses another AI model from OpenAI called Codex. It’s very\nclosely related to GPT-3 but specialises in code rather than natural language.\nCopilot uses this model in IDEs, such as VSCode, to suggest code.
\nI’ll probably look into GitHub Copilot in the future, but I imagine it’ll\nproduce very similar results just with better IDE integration.
\nGPT-3/Codex learned how to write Minetest code by reading code on the Internet.\nThis code may or may not be open source, and may or may not be permissively\nlicensed. These models tend to regurgitate code, which leads to license\nlaundering - open source code being turned into proprietary code, without\ncredit. These products benefit from the unpaid labour of the open-source\ncommunity. So whilst this technology is interesting, I’m not sure how much I\nagree with it ethically.
\nYou can try out GPT-3 and Codex for free on\nOpenAI’s website, and\nGitHub Copilot is now available\npublicly.
\nIn 2018, I had the opportunity to create a\nweb app for University coursework, as a solo\nproject. I chose to create a package repository for Minetest, an open-source\nproject I help maintain.
\nMinetest is an open-source game engine with\nmillions of downloads and thousands of weekly players. The project has a very\nactive modding community, and many games available to run. There was one big\nissue - you had to manually install mods and games by unzipping their files into\na directory. This was a very poor user experience.
\nThe project aimed to make a website to hold the metadata of different types of\npackages. The website needed to have both an HTML front-end, and a JSON REST API\nto be used by the Minetest Engine and other software. Authors of packages should\nbe able to upload and maintain their packages, and Editors should be able to\nupload and maintain any package.
\nThe community commonly receives attacks from bots and malicious users, so the\nsystem needed sufficient moderator tools. New users should have any uploads or\nchanges checked before they’re published. There also needed to be moderation\ntools, such as banning.
\nThe website needed to be stable and easy-to-use for content creators and users\nalike. This means that uploading packages needed to be as frictionless as\npossible, which calls for interesting features such as importing metadata from\nGitHub and other VCSes.
\nCreating a package repository for Minetest was by no means a brand new idea;\nthere had been many prior attempts, a few of which were by me.
\nI created a PHP website called Minetest Extensions. It used a MySQL\ndatabase and implemented several proprietary APIs required by existing Minetest\npackage manager command-line tools. I wasn’t experienced with back-end\ndevelopment at the time, and so it suffered from bugs and security issues.
\nAnother user created a Python / Django package repository. It was made\nofficial and hosted by celeron55.
\nIt never had many mods due to a very manual data entry process, which could only\nbe performed by the mod author.
\nIt only supported mods and texture packs. Mods were required to fit the strict\ndefinition of a mod; mod packs were not allowed. This was a huge problem given\nthat many of the most important mods are structured as mod packs.
\nThe website went offline due to the database breaking and the host being unable\nto fix it. It didn’t use containerisation and was hard to set up and develop.\nThe developer was also only sporadically available.
\nAnother user created a package repository based on Bower, a Git-based\npackage manager for the web. Using it for Lua-based packages was a bit of an\nabuse of the tool, but it did work.
\nThe major issue was that it only supported content with a Git repository, and\nrequired the owner to update the repository to add a bower.json file. This\nresulted in even fewer mods being added than to the Minetest Mod Database.
\nI created an Android app to\ninstall mods. It used a NodeJS backend that crawled the forums to get mod\ninformation. This worked to a degree, but the data was flawed and required me to\nmanually override a lot of information that the crawler didn’t detect properly.
\nI wrote a prototype using NodeJS, Sequelize, and PostgreSQL for a package\nrepository that had goals similar to ContentDB. It had support for Asynchronous\nTasks (Git importing and automatic releases), a user login system, and a\nREST API.
\nThis project allowed me to assess the suitability of NodeJS for the task; I\ngot negative reactions from other core developers. I also wasn’t a fan of\nSequelize as Python’s SQLAlchemy was so much better.
\nAnother user created a package repository based on Python / Django.\nIt supported mods, games, texture packs, skins, and servers.
\nDevelopment faded out by the start of 2018.
\nThe client for this project was Perttu Ahola, aka celeron55, the original\ncreator of Minetest.
\nI started by interviewing the client to ascertain the requirements of such a\npackage repository. The result was the following high-level requirements:
\n…and the following specific features:
\nWith these requirements in mind, I created a design document. I made sure to\nkeep celeron55 in the loop; he signed off on the final version.
\nOne of the most important system in ContentDB is the permission system. There\nare many named permissions, such as EDIT_PACKAGE
, DELETE_RELEASE
, and\nAPPROVE_SCREENSHOT
. When performing an action, ContentDB will check whether\nthe user has the required permission on the object in question.
Each user has a rank. A rank determines the permissions of the user and\nincludes the permissions of lower ranks.
\nNew packages must be approved by an Editor.
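The rank-and-permission model described above can be sketched in Python. This is an illustration only, not ContentDB's actual code: the rank names and minimum-rank values are hypothetical, and the real system also checks permissions against the object in question.

```python
from enum import IntEnum

class Rank(IntEnum):
    # Hypothetical rank ladder; higher ranks inherit lower ranks' permissions
    NEW_MEMBER = 1
    MEMBER = 2
    EDITOR = 3
    MODERATOR = 4

# Minimum rank required for each named permission (illustrative values)
PERMISSION_MIN_RANK = {
    "EDIT_PACKAGE": Rank.MEMBER,
    "DELETE_RELEASE": Rank.EDITOR,
    "APPROVE_SCREENSHOT": Rank.EDITOR,
}

def has_permission(rank: Rank, permission: str) -> bool:
    # Because ranks are totally ordered, "includes the permissions of
    # lower ranks" reduces to a simple comparison
    return rank >= PERMISSION_MIN_RANK[permission]
```

Ordering the ranks with `IntEnum` means adding a new rank or permission is a one-line change.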
\nI chose to use Python/Flask with SQLAlchemy. I had prior experience with these\nframeworks, and Minetest already had a web app using Flask.
\nI decided to use a relational database to store the data required for ContentDB\nto function. As data integrity was a concern, I spent extra effort on designing\ndatabase-level validation using constraints.
\n\nI was given five months to develop the project for the university coursework\nmodule.
\nI started by setting up the database. I chose to use PostgreSQL, as it has\ngood support for validation constraints and migrations.
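To illustrate what database-level validation buys you, here is a minimal sketch of a CHECK constraint rejecting bad rows before application code ever sees them. The table and rule are hypothetical, and it uses SQLite for brevity; ContentDB itself uses PostgreSQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical package table: the CHECK constraint enforces non-empty,
# lowercase names at the database level, independent of application code
conn.execute("""
    CREATE TABLE package (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL CHECK (length(name) > 0 AND name = lower(name))
    )
""")

conn.execute("INSERT INTO package (name) VALUES ('mesecons')")  # accepted

try:
    conn.execute("INSERT INTO package (name) VALUES ('BadName')")
except sqlite3.IntegrityError:
    print("rejected by constraint")
```

Even if a bug in the web app skips its own validation, the database refuses to store invalid data.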
\nOnce the database was created, I worked on implementing a front-end template\nto allow further progress. I chose to use my own CSS to get the best possible\nscore. I wasn’t as good at CSS as I am today, and\nmaking it work was considered more important than making it pretty.
\nI used pre-existing libraries for user and login management for the coursework\nversion.
\nEdit Requests were a feature that existed in early versions of ContentDB.\nIt allowed users to create requests to edit a package, which could then be\naccepted or rejected by the package’s owner or an Editor. It was removed after\nthe project was submitted due to changing priorities and difficulties in\nmaintaining the implementation.
\nContentDB uses Celery to run asynchronous tasks, to avoid blocking the server\nthread. These tasks include importing metadata from Git, creating releases, and\nfetching user info from the forums.
\nGit support was the most tricky thing to implement, as it required acquiring a\nfairly in-depth knowledge of how Git works - and it’s not simple software. For\nexample, one bug I had was a very weird error when attempting to clone a\nparticular repo. It turned out that a tag existing on a commit that wasn’t on a\nbranch was the cause. Through the use of integration tests, I was able to nail\nthe implementation.
\nI deployed ContentDB using Docker onto my dedicated server.\nAs well as production, development is also done using Docker.
\nI implemented support for ContentDB in Minetest’s main menu, allowing\nit to install and update packages.
\n\nI added Prometheus support with a\nGrafana dashboard\nto show statistics, such as downloads and total users.
\nIn the four years since ContentDB was submitted as coursework, I have continued\nto develop it. One of the first changes I made was switching from custom CSS to\nBootstrap CSS, which made future development easier.
\n\nThe submitted version just had a read-only API for getting information about\npackages and updates. I added support for API access tokens and extended\nthe API to allow for editing packages\nand making releases.
\nOther improvements include:
\nOne of the main problems with ContentDB was ensuring maintainability as it grew.\nSince submission, I’ve done several refactors to improve this, such as\nseparating business logic from the app’s routes. This separation allows the\nfront-end, REST API, and async tasks to share the same code, which reduces risk.
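The separation of business logic from routes might look something like the sketch below. The function, rules, and names are hypothetical, not ContentDB's actual code; the point is that the front-end, REST API, and async tasks all call the same function, so validation and permission checks are written exactly once.

```python
from dataclasses import dataclass, field

@dataclass
class User:
    permissions: set = field(default_factory=set)

@dataclass
class Package:
    title: str = ""

def set_package_title(user: User, package: Package, title: str) -> Package:
    # Shared business logic: callable from an HTML route, an API route,
    # or an async task alike (hypothetical example, not ContentDB code)
    if "EDIT_PACKAGE" not in user.permissions:
        raise PermissionError("You may not edit this package")
    if not title.strip():
        raise ValueError("Title is required")
    package.title = title.strip()
    return package
```

A route then becomes a thin wrapper: parse the request, call the logic function, render the result.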
\nAnother problem was insufficient UI tests. I used Flask’s test context feature\nto check the response of queries. There are tests for core behaviour, such as\nuser logins and package pages, but they’re not particularly comprehensive. One\nreason is that the testing tools used were insufficient; I have since invested\ntime in learning how to better test web front-ends, using Selenium, and have\napplied that to more recent projects.
\nContentDB is one of the largest projects I’ve\ndeveloped, and one of the longest running.
\n","url":"https://blog.rubenwardy.com/2022/03/24/contentdb/","summary":"How I created ContentDB, the website that allows players to install mods, games, and texture packs from within Minetest.","date_published":"2022-03-24T00:00:00.000Z","image":"https://blog.rubenwardy.com/static/img/CCdENRY3Uf-540.png","tags":["projects","fullstack","minetest"]},{"id":"/2022/03/19/ld48-tin-mining/","language":"en","title":"Tin Mining - Ludum Dare 48 post-mortem","content_html":"In April 2021, I participated in my first game jam, Ludum Dare 48.\nLudum Dare is a popular online game jam; this event received over 3800\nsubmissions. The theme was “Deeper and Deeper,” and I created a game\nwhere you manage a tin mine.
\nThe year is 1790, and the Cornish tin industry is booming. You are a\nbusinessperson who has just secured investment to build a mine. The area is\nknown to be rich in tin, which is in high demand.
\nRather than controlling your workers directly, you drag out plans for tiles to\nbe mined and built. The workers will mine tunnels and build where ordered.\nThey will carry mined resources to the surface to be sold.
\n\nI started this a day late and had to work on Monday, so I was only able\nto implement the bare minimum viable gameplay. I had planned to add hazards,\nbuildings to prevent these hazards, and a tech tree. For example, gas explosions,\ntoxic gas, and flooding were huge problems in mining.
\nThe map is procedurally-generated, using a combination of multiple simplex\nnoises. The ores are randomly placed in veins, with the likelihood based on\ndepth.
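The depth-based vein placement can be sketched as below. The game itself is written in TypeScript; Python is used here purely for illustration, and the probability curve and numbers are hypothetical.

```python
import random

def ore_chance(depth: int) -> float:
    # Hypothetical curve: veins become more likely with depth,
    # capped so ore stays reasonably rare
    return min(0.25, 0.02 + 0.01 * depth)

def place_ores(width: int, height: int, seed: int = 0) -> list:
    # True marks a tile that seeds an ore vein; row index is depth
    rng = random.Random(seed)
    return [[rng.random() < ore_chance(y) for _ in range(width)]
            for y in range(height)]
```

In the real game this would be combined with simplex noise for the terrain itself.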
\nI made all the art myself, using GIMP and the\nEndesga 32 color palette.\nI chose to make the art super low resolution - 8x8 pixels - to make the art\neasier to make, and hopefully hide my lack of artistic skills.
\n\nI created the game using TypeScript and Phaser 3. Phaser 3 was an absolute pain\nto use. It was poorly documented, and I spent most of the time fighting against\nthe API. One of the main things I’ve learned from doing this game is that I\nnever want to use Phaser 3 again.
\nI chose to use\nbehaviour trees\nto implement the worker AI. Behaviour trees make it easier to implement extendable\nNPC behaviour, which will be useful when working on more complex behaviour in the\npost-jam version.
\nWhen a worker isn’t currently working on a task, it periodically asks the\nWorkManager for an available task. The WorkManager allocates tasks based on\ndistance and a heuristic weighting of the task. For example, moving dropped\nitems to the stockpile is weighted higher than mining or building, to avoid\nitems building up and clogging the walkways.
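The allocation heuristic can be sketched as a weighted-distance score. This is an illustration, not the game's actual code (the game is TypeScript), and the weights are hypothetical; the only constraint taken from the post is that hauling items to the stockpile outranks mining and building.

```python
from dataclasses import dataclass

@dataclass
class Task:
    kind: str
    pos: tuple

# Hypothetical weights: hauling dropped items to the stockpile beats
# mining and building, so items don't pile up and clog the walkways
PRIORITY = {"haul": 0.5, "mine": 1.0, "build": 1.0}

def pick_task(worker_pos, tasks):
    # Allocate the task with the lowest weighted Manhattan distance
    def score(task):
        dist = (abs(task.pos[0] - worker_pos[0])
                + abs(task.pos[1] - worker_pos[1]))
        return dist * PRIORITY[task.kind]
    return min(tasks, key=score, default=None)
```

A lower weight makes a task kind "feel closer" to every worker, which is a cheap way to express priority without a separate queue per task type.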
\nWorkers shouldn’t get stuck on work that isn’t possible, such as trying to mine\ntiles that are not yet exposed. The way I implemented this was a bit hacky:\nworkers have “work failure lockouts.” If a worker attempts work and fails, then\nthat worker won’t attempt that piece of work again for 10 seconds. The work\nmanager will allocate tasks without checking their accessibility, and the worker\nis responsible for triggering a lockout if the work isn’t reachable.
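The "work failure lockout" idea boils down to a per-task cooldown, sketched below. The game was written in TypeScript; this Python class and its names are for illustration only, with the 10-second cooldown taken from the post.

```python
class LockoutTracker:
    LOCKOUT_SECONDS = 10.0

    def __init__(self, clock):
        self._clock = clock       # callable returning the current time
        self._locked_until = {}   # task id -> time the lockout expires

    def record_failure(self, task_id):
        # Failed work (e.g. an unreachable tile) is ignored for a while
        self._locked_until[task_id] = self._clock() + self.LOCKOUT_SECONDS

    def can_attempt(self, task_id):
        return self._clock() >= self._locked_until.get(task_id, 0.0)
```

Injecting the clock as a callable keeps the cooldown logic testable without real sleeps.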
\nThere were many bugs in the AI system, some of which were only found after\nsubmission. Luckily, Ludum Dare allows releasing bug fixes after the deadline,\nso I was able to patch most of the issues. The remaining problems couldn’t be\nfixed due to architectural or performance limitations; the pathfinder was\npretty naive and slow.
\nThe main reason I entered a game jam is that I wanted to finish a game,\nat least to the minimum viable gameplay. It forces you to focus on making the\ngame, rather than making the code perfect, which is something I’ve not been\nthe best at in the past. This project certainly achieved these goals. Whilst\nI didn’t implement all the features I wanted, I still finished it sufficiently\nfor a small game.
\nI decided to learn a new framework for making games for the next game jam. I\nchose to learn the Godot Engine, and have since\nwritten several games in it, including a post-jam version of Tin Miner and an\nentry for Ludum Dare 49 - which I’ll write about at some point.
\nI look forward to working on the post-jam game in the future. I think this idea\nmay be a good game to work towards publishing on Steam, due to the smaller scope\ncompared to some of the other games I’ve made.
\nAs mentioned, Phaser 3 was an absolute pain to use. Therefore, I decided to\nrewrite the game using the Godot Engine. This was a\npleasant experience, as Godot’s API is very nice to use.
\n\nIt took a bit longer to make, as I focused more on doing things correctly.\nWhilst the differences were mostly in maintainability, I also snuck in some\nusability improvements:
\n\nThe pathfinder in the LD version reads the tiles directly, which is slow as\nthere needs to be a lot of navigation logic. The new Godot version stores a\nnavigation graph to represent possible routes, which increases performance by\na large factor.
\nOne nice thing about the new graph-based pathfinder is that it’s quick to check\nwhether a tile is on the network. This acts as a nice optimization for checking\nwhether a tile can be accessed to perform work.
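The accessibility pre-check falls out of the graph representation almost for free, as the sketch below shows. The post-jam game is written in Godot; this Python class and its names are hypothetical.

```python
class NavGraph:
    def __init__(self):
        self.edges = {}  # tile -> set of directly reachable neighbour tiles

    def connect(self, a, b):
        # Record a bidirectional walkable link between two tiles
        self.edges.setdefault(a, set()).add(b)
        self.edges.setdefault(b, set()).add(a)

    def on_network(self, tile) -> bool:
        # O(1) membership test - a cheap pre-check before running the
        # full pathfinder to see whether work is reachable at all
        return tile in self.edges
```

Tiles that were never connected simply aren't in the dictionary, so unreachable work can be skipped without any path search.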
\nAnother difference is that the new version contains better debug tools.\nFor example, a debug menu to edit the world, navigation graph debugging,\nand graphs.
\n\nYou can play the Ludum Dare version\nonline.\nThe post-jam version is currently in development and will be made available\nwhen ready.
\nI have a lot of houseplants, but I often forget to water them.\nI’ve been getting into electronics and thought this would be a great\nopportunity to make something.
\nI made a plant monitor, which measures soil moisture, temperature, and humidity,\nand reports these things to a cloud IoT service called Thinger.io.
\nI wanted the monitor to be subtle and hidden, so I designed a case to fit\nbetween the inner planter and the outer decorative pot. I printed it using my\nbrand new Prusa Mini+. It took many iterations to get it perfect. The\nelectronics fit snug in the case, with ports for sensors and a microUSB charger.
\nThe case was the first thing I made using Computer-Aided Design (CAD),\nand designing it was a particularly painful task as the case is curved to fit the 23cm\ndiameter pot.
\nI’ve put the files up on\nThingiverse and\nPrusaPrinters.
\n\nThe first problem I had was finding working soil moisture sensors. I bought\nsome capacitive soil moisture sensors from eBay, but they were very slow\nto react - they took over 3 minutes to respond to changes in moisture.\nAfter some research, I found out that some low-quality sensors from China\ncontain a capacitor in the wrong place, which smooths out the output\nvoltage to an unusable degree. I replaced the sensors with ones bought from\nthe Pi Hut, which worked considerably better - they respond immediately to\nchanges in moisture. This event put me off buying from eBay for a while.
\n\nI decided to use an ESP32 as the microcontroller, as they seemed easy-to-use,\nabundant, and cheap. I chose a cheap ESP32-WROOM-32 devkit from a trusted\neBay seller.
\nThis is my first proper electronics project, and the first time I’ve made\na ‘permanent’ circuit. The soldering was a bit messy but functional\n(I blame the flux for spitting at me, but really it’s just a lack of practice).
\nIn order to use stripboard, you need to be able to break the copper tracks to\nallow the circuit to be correctly formed. I bought a Stripboard Track Cutter\nfrom the Pi Hut for this job, and it turned out to be incredibly ill-suited.\nIt took 40 seconds of hard pressure to break a track. I’ve since bought a new\ntrack cutter from eBay that only takes a few seconds of basically no pressure,\nso I’ll definitely be using that for future projects.
\n\nThe soil moisture sensor is a simple analog 3-pin device - two for power, one for a\nvoltage reading. The output voltage decreases with moisture.\nThis sensor connects to the board using a 3-pin male header.
\nThe temperature/humidity sensor uses Inter-Integrated Circuit (I2C), a low-level\nserial communication bus, to communicate with the microcontroller. I2C requires\nfour pins - two for power, and two for the bus.
\nI included connectors for two I2C devices, with space on the board for up to four\ndevices. I only use one I2C device currently, but I’m likely to add more devices\nsuch as a light intensity (lux) sensor in the future.
\nI bought some 4-pin JST PH connectors for I2C, but ended up using\nfemale headers instead as JST PH’s 2mm spacing doesn’t fit the stripboard’s\n2.54mm spacing. Perhaps in the future I’ll either make a PCB or create an adapter,\nallowing JST PH to be used.
\n\nI was tempted to implement my own cloud IoT platform, but then I reminded myself\nthat the point of this project was to learn electronics and 3d printing, not\nto spend hours reinventing the wheel.
\nI did some research on existing IoT platforms, and chose Thinger.IO as it is\nopen source and seemed to have the right balance of simplicity vs capability.
\nIntegrating Thinger.IO with the microcontroller was as simple as including\ntheir SDK and writing a few lines to publish the sensor values.
\nSetting up the dashboard on Thinger.IO was a bit more complicated. It took a\nwhile to work out how to record data rather than just show it: I needed to use\na data bucket with manually posted values, and then write to it every 5 minutes\nfrom the microcontroller. I would have preferred that the microcontroller\njust expose the sensor values, and not care about what the IoT platform does\nwith it.
\n\nThe source code is pretty basic; this was definitely more of an electronics\nproject than a programming one!
\n#include <Arduino.h>\n#include <Adafruit_SHT31.h>\n\n\n#define THINGER_SERIAL_DEBUG\n\n#include <ThingerESP32.h>\n#include \"secrets.hpp\"\n\nThingerESP32 thing(USERNAME, DEVICE_ID, DEVICE_CREDENTIAL);\n\n#define MOIS_LOW 1.8f\n#define MOIS_HIGH 2.2f\n\n\nint moistureReadPin = 35;\nint moistureEnPin = 32;\n\nAdafruit_SHT31 sht31 = Adafruit_SHT31();\n\n\nfloat measureSoil() {\n Serial.println(\"Measuring soil\");\n\n digitalWrite(moistureEnPin, HIGH);\n delay(1000);\n\n int raw = analogRead(moistureReadPin);\n float moisture = raw * 3.3f / 4095.0f;\n digitalWrite(moistureEnPin, LOW);\n return 100.f - 100.f * (moisture - MOIS_LOW) / (MOIS_HIGH - MOIS_LOW);\n}\n\n\nvoid setup() {\n Serial.begin(9600);\n\n pinMode(moistureReadPin, INPUT);\n pinMode(moistureEnPin, OUTPUT);\n\n if (!sht31.begin(0x44)) {\n Serial.println(\"Couldn't find SHT31\");\n while (1) {\n delay(1);\n }\n }\n\n sht31.heater(false);\n\n thing.add_wifi(SSID, SSID_PASSWORD);\n\n thing[\"data\"] >> [](pson &out) {\n out[\"temperature\"] = sht31.readTemperature();\n out[\"humidity\"] = sht31.readHumidity();\n out[\"soil_moisture\"] = measureSoil();\n };\n\n thing.handle();\n thing.write_bucket(\"Plants\", \"data\");\n ESP.deepSleep(60 * 1000000);\n}\n\n\nvoid loop() {\n // Never executed due to deep sleep\n}
\nThis was a very nice project to learn electronics and 3d printing; I certainly\nlearned a lot. I still forget to water my plants, but at least I do so in a\nhigh-tech fashion!
\nI look forward to working on more complicated projects in the future, from\nelectronics to mechanical design.
\nIn the future, I’d like to add a light intensity (lux) sensor to the monitor. I\nneed to think about where to put the sensor, as putting it on top of the pot\nwill result in an inaccurate reading. This is due to sunlight being obstructed\nby the pot, and the difference in height to the leaves.
\nThis project has seeded many ideas for future projects, such as self-watering\nplants, a weather station, a monitored indoor greenhouse, and hydroponics. I\nplan to seed basil and a few other herbs soon; a mini indoor controlled\ngreenhouse with grow lights could be a cool project.
\n","url":"https://blog.rubenwardy.com/2022/03/17/plant-monitor/","summary":"I made a plant monitor, which measures soil moisture, temperature, and humidity, and reports these things to a cloud IoT service called Thinger.io.","date_published":"2022-03-17T00:00:00.000Z","image":"https://blog.rubenwardy.com/static/img/beRfMhh0s7-540.png","tags":["projects","electronics","3d_printing","cpp"]},{"id":"/2021/05/08/mozilla-bleach-markdown/","language":"en","title":"Securing Markdown user content with Mozilla Bleach","content_html":"Markdown is a common choice for rich text formatting due to its readability and\nease-of-use. Unlike a lot of markup, it aims to match natural text. It’s even\neasy for beginner users, and there are WYSIWYG editors available.
\nWe will be using the Python Markdown\nlibrary to convert Markdown to HTML. Markdown doesn’t have a well-defined\nstandard. The library aims to comply with what little is defined by the\nMarkdown syntax specification,\nmeaning that it is also often stricter than other parsers.
\nTo convert Markdown to HTML:
\nfrom markdown import Markdown\nmd = Markdown(extensions=[\"fenced_code\", \"tables\"], output_format=\"html5\")\nhtml = md.convert(source)
\nYou can use another library to interpret Markdown, if you wish.\nThe rest of the code will be dealing with the HTML output, so is independent of\nMarkdown parsers.
\nWhen allowing user submitted content, it’s important to sanitise it to avoid\nCross-Site Scripting attacks (XSS). If you don’t sanitise user input, then\nan attacker will be able to add HTML tags to run JavaScript when other users\nview your website. This can be used to steal login credentials, run bitcoin\nmining malware, or deface your website. So not ideal.
\nBleach, by Mozilla, is a library to sanitise\nuntrusted HTML. It works based on a whitelist of tags and their attributes.\nI have based my list on the\nmdx_bleach extension,\nwhich you could use directly with the markdown library - but I prefer to use\nthe Bleach library directly after generating the HTML as dependencies have a\ntendency to break.
\nAnother thing that Bleach does is safely linkify text - it can convert text\nresembling a URL into a link.
\n# List of allowed HTML tags\nALLOWED_TAGS = [\n \"h1\", \"h2\", \"h3\", \"h4\", \"h5\", \"h6\", \"hr\",\n \"ul\", \"ol\", \"li\", \"p\", \"br\",\n \"pre\", \"code\", \"blockquote\",\n \"strong\", \"em\", \"a\", \"img\", \"b\", \"i\",\n \"table\", \"thead\", \"tbody\", \"tr\", \"th\", \"td\",\n]\n\n# A map of HTML tags to allowed attributes\n# If a tag isn't here, then no attributes are allowed\nALLOWED_ATTRIBUTES = {\n \"h1\": [\"id\"], \"h2\": [\"id\"], \"h3\": [\"id\"], \"h4\": [\"id\"],\n \"a\": [\"href\", \"title\"],\n \"img\": [\"src\", \"title\", \"alt\"],\n}\n\n# Allowed protocols in links.\nALLOWED_PROTOCOLS = [\"http\", \"https\", \"mailto\"]\n\nmd = Markdown(output_format=\"html5\")\n\n\ndef render_markdown(source):\n html = md.convert(source)\n\n cleaner = Cleaner(\n tags=ALLOWED_TAGS,\n attributes=ALLOWED_ATTRIBUTES,\n protocols=ALLOWED_PROTOCOLS,\n filters=[partial(LinkifyFilter, callbacks=bleach.linkifier.DEFAULT_CALLBACKS)])\n\n return cleaner.clean(html)
\nThe CodeHilite extension for Python-Markdown uses\nPygments to provide syntax highlighting.\nYou can enable the extension by adding it to the extensions list.
\nmd = Markdown(extensions=[\"fenced_code\", \"tables\", \"codehilite\"], output_format=\"html5\")
\nYou will also need to provide the .css
files for the\nstyle\n(demo) you choose.\nI ended up going with Darcula due to\npersonal preference.
Bleach will strip attributes that aren’t whitelisted, including the class names\nneeded by code highlighting. It’s important that you don’t just allow any\nclass
attribute values to be used, as this would allow malicious users to use\nany CSS class and deface your website. Instead, we will provide a function to\nthe ALLOWED_ATTRIBUTES
dictionary, which will check whether the provided\nvalues are allowed.
ALLOWED_TAGS = [\n # ...\n\n \"div\", \"span\",\n]\n\nALLOWED_CSS_CLASSES = [\n \"highlight\", \"codehilite\",\n \"hll\", \"c\", \"err\", \"g\", \"k\", \"l\", \"n\", \"o\", \"x\", \"p\", \"ch\", \"cm\", \"cp\", \"cpf\", \"c1\", \"cs\",\n \"gd\", \"ge\", \"gr\", \"gh\", \"gi\", \"go\", \"gp\", \"gs\", \"gu\", \"gt\", \"kc\", \"kd\", \"kn\", \"kp\", \"kr\",\n \"kt\", \"ld\", \"m\", \"s\", \"na\", \"nb\", \"nc\", \"no\", \"nd\", \"ni\", \"ne\", \"nf\", \"nl\", \"nn\", \"nx\",\n \"py\", \"nt\", \"nv\", \"ow\", \"w\", \"mb\", \"mf\", \"mh\", \"mi\", \"mo\", \"sa\", \"sb\", \"sc\", \"dl\", \"sd\",\n \"s2\", \"se\", \"sh\", \"si\", \"sx\", \"sr\", \"s1\", \"ss\", \"bp\", \"fm\", \"vc\", \"vg\", \"vi\", \"vm\", \"il\",\n]\n\ndef allow_class(_tag, name, value):\n return name == \"class\" and value in ALLOWED_CSS_CLASSES\n\nALLOWED_ATTRIBUTES = {\n # etc\n \"code\": allow_class,\n \"div\": allow_class,\n \"span\": allow_class,\n}\n\nmd = Markdown(output_format=\"html5\")
\nAnd there you are! You can now render untrusted user markdown safely, with\ncode highlighting and linkify.
\n","url":"https://blog.rubenwardy.com/2021/05/08/mozilla-bleach-markdown/","summary":"When accepting user content, it's important to sanitize user input to avoid Cross Site Scripting (XSS) and other attacks.","date_published":"2021-05-08T00:00:00.000Z","image":"https://blog.rubenwardy.com/static/img/uJ0aif-HGT-540.png","tags":["tutorials","fullstack"]},{"id":"/2020/09/13/return-to-android-dev/","language":"en","title":"ForumMate: My return to Android app development","content_html":"I worked as an Android developer just over two years ago, creating native apps\nfor clients using Java and Kotlin. During that time, Kotlin was gaining\nprominence and had just been made official by Google. Google also introduced\nArchitecture Components that year, later renamed to JetPack. Since then, the\nAndroid ecosystem has changed significantly, with Kotlin and JetPack gaining\nsignificant maturity and development. Out with Realm, Activities, and\nModel-View-Presenter (MVP), in with Room, fragment-based architecture, and MVVM.\nData-binding and\nMVVM are pretty awesome and breathe a whole new life into Android app development.
\nIn order to get back up to speed with Android development, I needed a project to\nwork on. I decided to make a forum/bulletin board aggregator app, which uses\nHTML parsing to get data from legacy forum websites and display it using a nice\nuser interface.
\nForumMate allows the user to log into\na phpBB forum using a WebView, where the app captures the session cookies. The\nuser can either select a forum from a list fetched using a REST API, or log into\nany forum by navigating to it on Google search.
\nThese session cookies are stored and attached to future HTTP requests; the\nHTML responses are parsed using JSoup. Results are stored in a Room database, with the interface\nsubscribing to LiveData.
\nThe user interface is still a work in progress, but it works.
\nThe app is architected using MVVM, with\ntwo-way data-binding,\ndependency injection, and Navigation components.
\nThe code has 5 main zones: Views (ie: fragments), ViewModels, Repositories,\nDatabase (ie: Room), and the Backend. The backend consists of interfaces with an\nimplementation per forum software, used to interact with forums. The\nForumLoginStrategy interface is used to detect the forum software using the\nStrategy Pattern and obtain\naccess information.
\n\nI’ve decided not to publish ForumMate for two reasons:
\nI’ve achieved what I intended to do with ForumMate - I’ve gotten back up to\nspeed with Android development.
\n","url":"https://blog.rubenwardy.com/2020/09/13/return-to-android-dev/","summary":"I created a forum aggregator app to relearn Android devevelopment, including databinding, View Models, LiveData, and more.","date_published":"2020-09-13T00:00:00.000Z","image":"https://blog.rubenwardy.com/static/img/4gitHYSQ7m-540.png","tags":["projects","android"]},{"id":"/2020/07/26/sol3-script-sandbox/","language":"en","title":"Writing a Lua sandbox using sol2 / sol3","content_html":"Sandboxing can protect the user’s computer from malicious or buggy scripts.\nBut sandboxes are difficult to get right; you need to be very careful with what\nyou expose, and make sure you test for vulnerabilities.\nThe Sandboxes on the Lua wiki is required\nreading, as it contains very helpful advice.
\nAn environment is a table that stores the global variables available to a function.\nEach function will have an environment assigned to it, and we can use this to sandbox\ncode. To manage environments in sol3, you will use\nsol::environment.
\n// lua is a `sol::state` or `sol::state_view`\n\n// Create new blank environment\nauto env = sol::environment(lua, sol::create);\n\n// Set global variable for globals\nenv[\"_G\"] = env;
\nTo sandbox, you will want to create an environment with only whitelisted,\nsafe functions. Let’s list all the safe global functions:
\nconst std::vector<std::string> whitelisted = {\n \"assert\",\n \"error\",\n \"ipairs\",\n \"next\",\n \"pairs\",\n \"pcall\",\n \"print\",\n \"select\",\n \"tonumber\",\n \"tostring\",\n \"type\",\n \"unpack\",\n \"_VERSION\",\n \"xpcall\",\n\n // These functions are unsafe as they can bypass or change metatables,\n // but they are required to implement classes.\n \"rawequal\",\n \"rawget\",\n \"rawset\",\n \"setmetatable\",\n};
\nNow, let’s copy the whitelisted globals into the environment:
\nfor (const auto &name : whitelisted) {\n env[name] = lua[name];\n}
\nNext, you’ll want to define and copy whitelisted modules.\nWe didn’t include these in the above list, as we want to copy the tables themselves.\nThis prevents changes that untrusted code makes to modules from affecting trusted code.
\nstd::vector<std::string> safeLibraries = {\n \"coroutine\", \"string\", \"table\", \"math\"};\n\nfor (const auto &name : safeLibraries) {\n sol::table copy(lua, sol::create);\n for (auto pair : lua[name]) {\n // first is the name of a function in module, second is the function\n copy[pair.first] = pair.second;\n }\n env[name] = copy;\n}
\nFinally, you’ll want to partially copy modules that contain unsafe functions:
\nsol::table os(lua, sol::create);\nos[\"clock\"] = lua[\"os\"][\"clock\"];\nos[\"date\"] = lua[\"os\"][\"date\"];\nos[\"difftime\"] = lua[\"os\"][\"difftime\"];\nos[\"time\"] = lua[\"os\"][\"time\"];\nenv[\"os\"] = os;
\nLet’s first revise what each function does:
\nstring
as a function.nil, error_msg
.file
as a function.nil, error_msg
.dofile
.We need to provide safe implementations of each of these functions.\nWe will do this be making sure that the following things are checked:
\nloadstring
will set the\nglobal environment on the returned function, which allows escaping the sandbox.std::tuple<sol::object, sol::object> LuaSecurity::loadstring(\n const std::string &str, const std::string &chunkname) {\n if (!str.empty() && str[0] == LUA_SIGNATURE[0]) {\n return std::make_tuple(sol::nil,\n sol::make_object(lua, \"Bytecode prohibited by Lua sandbox\"));\n }\n\n sol::load_result result = lua.load(str, chunkname, sol::load_mode::text);\n if (result.valid()) {\n sol::function func = result;\n env.set_on(func);\n return std::make_tuple(func, sol::nil);\n } else {\n return std::make_tuple(\n sol::nil, sol::make_object(lua, ((sol::error)result).what()));\n }\n}
\nLUA_SIGNATURE
is the marker character used to indicate that a chunk is precompiled\nbytecode rather than source.
env.set_on(func)
is used to set the environment.
std::tuple<sol::object, sol::object> LuaSecurity::loadfile(\n const std::string &path) {\n if (!checkPath(path)) {\n return std::make_tuple(sol::nil,\n sol::make_object(\n lua, \"Path is not allowed by the Lua sandbox\"));\n }\n\n std::ifstream t(path);\n std::string str((std::istreambuf_iterator<char>(t)),\n std::istreambuf_iterator<char>());\n return loadstring(str, \"@\" + path);\n}
\ncheckPath is a method that will be used later to verify that path
is allowed;\nfor now, it can be defined to always return true.
sol::object LuaSecurity::dofile(const std::string &path) {\n std::tuple<sol::object, sol::object> ret = loadfile(path);\n if (std::get<0>(ret) == sol::nil) {\n throw sol::error(std::get<1>(ret).as<std::string>());\n }\n\n sol::unsafe_function func = std::get<0>(ret);\n return func();\n}
\ndofile
will need to check the load result, and run the function in unsafe mode.
Don’t forget to actually set them on the environment!
\nenv.set_function(\"loadstring\", &LuaSecurity::loadstring, this);\nenv.set_function(\"loadfile\", &LuaSecurity::loadfile, this);\nenv.set_function(\"dofile\", &LuaSecurity::dofile, this);
\nThe easiest way to run a script safely is to pass the environment into script_file
:
lua.script_file(\"mods/mymod/init.lua\", security->getEnvironment());
\nIn order to safely execute our scripts, we need to remember to set the safe\nenvironment. Wouldn’t it be nicer to change the default environment in Lua?
\nThe default environment in Lua is stored in a registry value, and so can be assigned\nlike so:
\n#if LUA_VERSION_NUM >= 502\n // Get environment registry index\n lua_rawgeti(lua, LUA_REGISTRYINDEX, env.registry_index());\n\n // Set the global environment\n lua_rawseti(lua, LUA_REGISTRYINDEX, LUA_RIDX_GLOBALS);\n#else\n // Get main thread\n int is_main = lua_pushthread(lua);\n assert(is_main);\n int thread = lua_gettop(lua);\n\n // Get environment registry index\n lua_rawgeti(lua, LUA_REGISTRYINDEX, env.registry_index());\n\n // Set the global environment\n if (!lua_setfenv(lua, thread)) {\n throw ModException(\n \"Security: Unable to set environment of the main Lua thread!\");\n };\n lua_pop(lua, 1); // Pop thread\n#endif
\nUnfortunately, the preprocessor directives needed to support multiple versions of Lua\nmake this ugly. If you’re only targeting a specific version, you can remove the unused branch.
\nWe can now safely load scripts directly, without specifying the environment:
\nlua.script_file(\"mods/mymod/init.lua\");
\nIdeally, you’d not allow any file system access to untrusted scripts. You can\nuse virtual file systems to load all allowed resources into memory, and then only\nread from memory.
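As a sketch of that idea (the class and its names are illustrative, not from the article): load every permitted file into a map up front, and have `loadfile` serve reads exclusively from it:

```cpp
#include <cassert>
#include <map>
#include <optional>
#include <string>
#include <utility>

// Illustrative sketch: preload allowed sources into memory at startup and
// serve loadfile/dofile from this map, so no disk access happens at script
// run time and path tricks such as symlinks become irrelevant.
class VirtualFS {
public:
    void mount(const std::string &path, std::string contents) {
        files[path] = std::move(contents);
    }

    // Returns the file contents, or std::nullopt when the path isn't mounted
    std::optional<std::string> read(const std::string &path) const {
        auto it = files.find(path);
        if (it == files.end())
            return std::nullopt;
        return it->second;
    }

private:
    std::map<std::string, std::string> files;
};
```

A `loadfile` built on this would call `read(path)` instead of opening an `std::ifstream`, and return `nil, error_msg` when the lookup fails.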
\nHowever, sometimes the scripts aren’t totally untrusted, and you would like to allow\nsome access to the file system. To do this, you can check the path to make sure it’s\nin an allowed location. Note that this isn’t completely safe: symlinks can be used\nto escape the allowed path - however, if scripts can’t make symlinks then\nany symlinks are the user’s own doing.
\nC++17’s filesystem provides useful path-parsing methods:
\nbool LuaSecurity::checkPath(const std::string &filepath) {\n if (basePath.empty()) {\n return false;\n }\n\n auto base = std::filesystem::absolute(basePath).lexically_normal();\n auto path = std::filesystem::absolute(filepath).lexically_normal();\n\n auto [rootEnd, nothing] =\n std::mismatch(base.begin(), base.end(), path.begin());\n\n return rootEnd == base.end();\n}
\nYou may wish to extend this to add multiple base paths, and also add a way\nto make some paths readonly.
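One possible shape for that extension (the struct and function names are mine, not the article's): keep a list of allowed roots, each with a read-only flag, and accept a path if any root matches:

```cpp
#include <algorithm>
#include <cassert>
#include <filesystem>
#include <string>
#include <vector>

namespace fs = std::filesystem;

// Illustrative extension: several allowed roots, each optionally read-only.
struct AllowedPath {
    fs::path base;
    bool readOnly;
};

bool isPathAllowed(const std::vector<AllowedPath> &roots,
        const std::string &filepath, bool forWrite) {
    auto path = fs::absolute(filepath).lexically_normal();
    for (const auto &root : roots) {
        if (forWrite && root.readOnly)
            continue; // this root only permits reads

        auto base = fs::absolute(root.base).lexically_normal();
        // Four-iterator mismatch: safe even when path is shorter than base
        auto [baseEnd, pathIt] = std::mismatch(
            base.begin(), base.end(), path.begin(), path.end());
        (void)pathIt;
        if (baseEnd == base.end())
            return true;
    }
    return false;
}
```

The same `lexically_normal` + `std::mismatch` prefix check as before is just run once per root, with writes rejected for read-only roots.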
\nYou can use lua_sethook
to run a callback after a number of instructions,\nand then raise a Lua error.
https://stackoverflow.com/questions/2777527/stopping-a-runaway-lua-subprocess
\nI hope you found this article useful. This doesn’t cover all possible exploits -\nuntrusted code may still crash or freeze the program - but it aims to at least\nprotect the host from the code.
\nI’d like to finish by reminding you to add sandbox unit tests to make sure that\nit’s working correctly, and you don’t accidentally break it. This can be as\nsimple as some assert
s in a builtin Lua file somewhere.
Many APIs in my game push Vector3
s to and from Lua. It’s such a common operation\nthat most of my functions used to look like this:
sol::table add(sol::table tPos) {\n Vector3f pos = TableToPos(tPos);\n\n // do something\n return PosToTable(pos);\n}
\nOne of the benefits of sol is that it is able to bind Lua arguments to C++ function\narguments, converting types implicitly. Having to convert from a table to a vector\nourselves is quite annoying. It would be much nicer to have sol do it for us.\nLuckily, sol allows you to customise how types are retrieved and pushed to Lua\nusing Customisation Points.
\nWhen trying to convert a type from Lua to C++, sol will call certain templated functions.\nWe will be customising sol’s behaviour using a technique called template specialisation,\nwhich allows us to specialise a specific instance of the templated functions and structs.\nBy the end of this article, we’ll be able to use Vector3
directly when using sol,\nallowing the above code to be turned into this:
Vector3f add(Vector3f pos) {\n // do something\n\n return pos;\n}
\nnamespace sol {\n\ntemplate <typename T>\ninline Vector3<T> sol_lua_get(sol::types<Vector3<T>>, lua_State *L, int index,\n sol::stack::record &tracking) {\n int absoluteIndex = lua_absindex(L, index);\n\n sol::table table = sol::stack::get<sol::table>(L, absoluteIndex);\n T x = table[\"x\"];\n T y = table[\"y\"];\n T z = table[\"z\"];\n\n tracking.use(1);\n\n return { x, y, z };\n}
\nsol_lua_get
is the function used to convert from Lua to C++.
The first argument of this function is a dummy argument used by the meta-programming\nto select the correct function to use. The tracking record argument is used to tell\nsol what you did in this function. In this particular case, we only read a single\nargument from the stack, so we call:
\ntracking.use(1);
\nThe example in the sol tutorial uses two arguments, and so calls use(2)
.
Note that it’s important to do this specialisation inside the same namespace\nas the original templated function. This is something that’s missed inside the\nsol tutorial.
\ntemplate <typename T>\ninline int sol_lua_push(sol::types<Vector3<T>>, lua_State *L, const Vector3<T> &pos) {\n lua_createtable(L, 0, 3);\n\n lua_getglobal(L, \"Vector\");\n lua_setmetatable(L, -2);\n\n sol::stack_table vec(L);\n vec[\"x\"] = pos.x;\n vec[\"y\"] = pos.y;\n vec[\"z\"] = pos.z;\n\n return 1;\n}
\nsol_lua_push
is the function used to convert from C++ to Lua.
Notice how stack_table is used to modify the table created by lua_createtable
.\nThis code also sets the global Vector as a metatable on the table; this is useful\nif you have a Lua Vector class.
template <typename T>\nstruct lua_type_of<Vector3<T>>\n : std::integral_constant<sol::type, sol::type::table> {};
\nThis is a type trait\nused to tell sol that the Lua type should be a table.
\ntemplate <typename Handler, typename T>\ninline bool sol_lua_check(sol::types<Vector3<T>>, lua_State *L, int index,\n Handler &&handler, sol::stack::record &tracking) {\n int absoluteIndex = lua_absindex(L, index);\n if (!stack::check<sol::table>(L, absoluteIndex, handler)) {\n tracking.use(1);\n return false;\n }\n\n sol::stack::get_field(L, \"x\", absoluteIndex);\n bool x = sol::stack::check<float>(L, -1);\n\n sol::stack::get_field(L, \"y\", absoluteIndex);\n bool y = sol::stack::check<float>(L, -1);\n\n sol::stack::get_field(L, \"z\", absoluteIndex);\n bool z = sol::stack::check<float>(L, -1);\n\n sol::stack::pop_n(L, 3);\n\n tracking.use(1);\n return x && y && z;\n}
\nsol_lua_check
is the function used to determine whether a stack value is\nof the correct type, and can be converted. In this case, we check that it’s\na table and it has the required fields.
It should all be defined in a header after you include sol.hpp
.
I like to have a Lua.hpp
header with the following content:
#pragma once\n\n#define SOL_ALL_SAFETIES_ON 1\n#include <sol/sol.hpp>\n\n#include <types/Vector3.hpp>\n\nnamespace sol {\n\n// All the specialisations here:\n// sol_lua_get, sol_lua_push sol_lua_check, lua_type_of\n\n}\n
\nInstead of including sol.hpp
in other files, I include Lua.hpp
. This makes sure\nthat sol receives the same defines and the same specialisations each time.
Drop shadows are created using a\nGaussian blur\ndrawn underneath the original element. You can do this using either a fixed\ntexture or a post-processing fragment shader.
\nIf you do not need to have dynamic drop shadows, you should consider baking the\nshadow into a texture and directly rendering that. However, a static shadow is\nnot suitable for things like text drop shadows, which will change drastically\ndepending on user input.
\nThe blur is done in two passes - horizontal and vertical. A pass blurs each\npixel using the weighted sum of its neighbours along a direction, up to some\nradius. Each pass is done using the same shader, which is given different values\nfor direction.
\n\nuniform sampler2D source;\nuniform vec4 mask; // shadow color\nuniform vec2 direction; // step size\n\nvoid main() {\n\tvec2 textureCoordinates = gl_TexCoord[0].xy;\n\n\tvec4 color = vec4(0.0);\n\tcolor += texture2D(source, textureCoordinates - 4.0 * direction) * 0.0162162162;\n\tcolor += texture2D(source, textureCoordinates - 3.0 * direction) * 0.0540540541;\n\tcolor += texture2D(source, textureCoordinates - 2.0 * direction) * 0.1216216216;\n\tcolor += texture2D(source, textureCoordinates - direction) * 0.1945945946;\n\tcolor += texture2D(source, textureCoordinates) * 0.2270270270;\n\tcolor += texture2D(source, textureCoordinates + direction) * 0.1945945946;\n\tcolor += texture2D(source, textureCoordinates + 2.0 * direction) * 0.1216216216;\n\tcolor += texture2D(source, textureCoordinates + 3.0 * direction) * 0.0540540541;\n\tcolor += texture2D(source, textureCoordinates + 4.0 * direction) * 0.0162162162;\n\tcolor = mask * vec4(100.0, 100.0, 100.0, color[3]) / 100.0;\n\n\tgl_FragColor = color;\n}
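The nine tap weights above aren't arbitrary: they are row 12 of Pascal's triangle with the two smallest coefficients dropped from each end and the remainder normalised, a standard binomial approximation to a Gaussian kernel. A quick check:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// The blur kernel above is a binomial approximation to a Gaussian:
// C(12, k) for k = 2..10, normalised so that the weights sum to 1.
std::vector<double> blurWeights() {
    std::vector<double> coeffs = {66, 220, 495, 792, 924, 792, 495, 220, 66};
    double sum = 0.0;
    for (double c : coeffs)
        sum += c; // 4070
    for (double &c : coeffs)
        c /= sum;
    return coeffs;
}
```

The centre weight comes out as 924/4070 ≈ 0.2270270270, matching the shader's constant, and similarly for the other taps.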
\nThis needs to be done using post-processing. The way to do this in SFML\nis to render your scene to a RenderTexture, which you then render to the screen\nusing a fragment shader like so:
\n// This is an example, `texture` is a RenderTexture\nsf::Sprite sprite(texture.getTexture());\n\nshadowShader->setUniform(\"mask\", sf::Glsl::Vec4(0, 0, 0, 50));\nshadowShader->setUniform(\"direction\", sf::Vector2f(0.f, pixelSize));\nsf::RenderStates states = baseStates;\nstates.shader = shadowShader.get();\nstates.transform.translate(pos);\ntarget.draw(sprite, states);
\nThe source drawable should be rendered to a temporary RenderTexture. This\nRenderTexture is then blurred in one direction to another RenderTexture.\nThis final RenderTexture is then blurred in the other direction to the screen.\nThe reason that you render the drawable to a RenderTexture before you first\nblur it is to make the fragment shader act like a post-processing shader - you\ndon’t want to act on the source drawable’s faces, but on the resulting pixels.
\nFull example source code can be found here, with a helper class:
\n","url":"https://blog.rubenwardy.com/2020/05/12/sfml-drop-shadow/","summary":"Drop shadows are created using a Gaussian blur drawn underneath the original element. There's two methods to do this in SFML, depending on your requirements.","date_published":"2020-05-12T00:00:00.000Z","image":"https://blog.rubenwardy.com/static/img/6dQ-80Q1rs-537.png","tags":["tutorials","sfml","cpp","gamedev"]},{"id":"/2020/05/06/android-complete-recyclerview-data-binding/","language":"en","title":"Android: Complete, generic data-binding RecyclerView adapter","content_html":"Data binding greatly reduces the amount of code you need to connect user-interfaces\nwith ViewModels. It keeps Activity and Fragment code small, and makes it easier\nto manage lifecycles.
\n<EditText\n android:id=\"@+id/username\"\n android:layout_height=\"wrap_content\"\n android:layout_width=\"match_parent\"\n android:text=\"@={viewModel.username}\"/>
\nI discovered that there was no attribute to bind the elements in a RecyclerView,\nbecause a RecyclerView needs an adapter to create\nelement views. It would also be nice to automatically use data binding to create\nthe viewholders. There are a number of guides to do both of these halves, but I\nnow present the code to do the whole.
\nThis guide assumes you have at least a beginner’s knowledge of data binding,\nand have it enabled for your project.
\nThe BindingRecyclerAdapter is a generic class which allows binding item\nholder views to the item ViewModel, and has a property to set the contained data.\nThis property will be used by data binding later.
\nclass BindingRecyclerAdapter<T, V>(val viewModel: V, @LayoutRes val layout: Int): RecyclerView.Adapter<BindingRecyclerAdapter.ViewHolder>() {\n var data: List<T> = emptyList()\n set(v) {\n field = v\n notifyDataSetChanged()\n }\n\n class ViewHolder(val binding: ViewDataBinding) : RecyclerView.ViewHolder(binding.root) {}\n\n override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): ViewHolder {\n val layoutInflater = LayoutInflater.from(parent.context)\n val binding = DataBindingUtil.inflate(layoutInflater, layout, parent, false)\n return ViewHolder(binding)\n }\n\n override fun getItemCount(): Int {\n return data.size\n }\n\n override fun onBindViewHolder(holder: ViewHolder, position: Int) {\n holder.binding.setVariable(BR.item, data.getOrNull(position))\n holder.binding.setVariable(BR.viewModel, viewModel)\n }\n}
\nNow, we need to create a binding adapter to handle the data
attribute:
@BindingAdapter(\"data\")\nfun <T> setRecyclerViewProperties(recyclerView: RecyclerView, data: List<T>?) {\n if (data == null) {\n return\n }\n\n if (recyclerView.adapter is BindingRecyclerAdapter<*, *>) {\n (recyclerView.adapter as BindingRecyclerAdapter<T, *>).data = data\n }\n}
\nFirst, you will need to add the ViewModel to the layout data section.\nLayouts with data-binding need to have <layout>
as the outer-most view.
<layout xmlns:android=\"http://schemas.android.com/apk/res/android\"\n xmlns:app=\"http://schemas.android.com/apk/res-auto\"\n xmlns:tools=\"http://schemas.android.com/tools\">\n <data>\n <variable\n name=\"viewModel\"\n type=\"com.example.app.viewmodels.MyListViewModel\" />\n </data>\n\n <androidx.recyclerview.widget.RecyclerView\n android:id=\"@+id/my_list\"\n android:scrollbars=\"vertical\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\"\n app:data=\"@{viewModel.items}\" />\n</layout>
\nNext, let’s set up our RecyclerView to use the new BindingRecyclerAdapter:
\noverride fun onViewCreated(view: View, savedInstanceState: Bundle?) {\n super.onViewCreated(view, savedInstanceState)\n\n binding.apply {\n myList.apply {\n setHasFixedSize(true)\n adapter = BindingRecyclerAdapter<Item, MyListViewModel>(myListViewModel, R.layout.item)\n layoutManager = LinearLayoutManager(context)\n }\n\n // This is needed to subscribe to LiveData updates\n lifecycleOwner = this@MyListFragment\n viewModel = myListViewModel\n invalidateAll()\n }\n}
\n<layout xmlns:android=\"http://schemas.android.com/apk/res/android\"\n xmlns:app=\"http://schemas.android.com/apk/res-auto\"\n xmlns:tools=\"http://schemas.android.com/tools\">\n\n <data>\n <import type=\"android.view.View\" />\n <variable\n name=\"item\"\n type=\"com.example.app.models.My\" />\n <variable\n name=\"viewModel\"\n type=\"com.example.app.viewmodels.MyListViewModel\" />\n </data>\n\n <TextView\n android:id=\"@+id/user\"\n style=\"?attr/titleTextAppearance\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"wrap_content\"\n android:lines=\"1\"\n android:layout_marginStart=\"16dp\"\n android:text=\"@{item.author}\"\n android:textStyle=\"bold\"\n tools:text=\"Username\" />\n\n </androidx.constraintlayout.widget.ConstraintLayout>\n</layout>
\nYou may find it useful to add an empty view.
\nAdd the following property to BindingRecyclerAdapter
\nclass BindingRecyclerAdapter<T, V>(val viewModel: V, @LayoutRes val layout: Int): RecyclerView.Adapter<BindingRecyclerAdapter.ViewHolder>() {\n var emptyView: View? = null\n set(v) {\n field = v\n updateEmptyView()\n }\n\n var data: List<T> = emptyList()\n set(v) {\n field = v\n notifyDataSetChanged()\n\n updateEmptyView()\n }\n\n private fun updateEmptyView() {\n emptyView?.visibility = if (data.isEmpty()) View.VISIBLE else View.GONE\n }
\n","url":"https://blog.rubenwardy.com/2020/05/06/android-complete-recyclerview-data-binding/","summary":"Data binding greatly reduces the amount of code you need to connect user-interfaces with ViewModels.","date_published":"2020-05-06T00:00:00.000Z","tags":["snippet","tutorials","android"]},{"id":"/2020/05/06/why-i-switched-to-gitlab/","language":"en","title":"Why I switched from GitHub to GitLab","content_html":"Since switching to GitLab, a few people have asked me why. There are several\ndraws towards GitLab, and several pushes from GitHub.
\nI don’t usually like it when services are bloated by having a large amount of\nfunctionality that would be better as separate programs. But I am a\npragmatist, and the ease of use of GitLab more than makes it up for me.\nThe ability to use Docker images\nin CI is so much better than what I was doing before.
\nI’ve experimented with a few different CI/CD and GitLab Pipelines are by far the\neasiest I’ve found to set up and get working. It is also a lot faster than\nTravis, and allows you to host your own private Runners for projects, which is\nsuper nice.
\nGitLab Pages are published using their Pipelines system. This means that you\nhave a complete Linux container available to generate your static website,\nallowing custom plugins and code far beyond what GitHub Pages allows you. I have\na dedicated server, which means that I don’t need GitLab/GitHub pages as a host,\ninstead, I use them as a nice way to generate and make updating my static sites\neasy.
\nGitLab is free and open source; GitHub is not. I prefer to use free software\nwhere possible, which is reason enough. With GitLab, I can export my information\nin full from GitLab and move it to my own instance. Whilst GitHub does allow\nthis using its API, it would not be able to migrate to a self-hosted instance of\nthe same software.
\nA common argument against migrating is centered around GitHub being perceived as\neasier to use. Whilst the GitLab interface is slightly more cluttered, I think\nthat the perceived difference in use is much more due to users being used to the\nGitHub interface. This is no longer an issue for me as I will be getting used to\nit at my job.
\nMy main website and blog are in private repositories, and I wanted to keep it\nthat way. This would require paying for the GitHub Pro plan, but I do not want\nto directly contribute to Microsoft’s wealth. This was the final push towards\nGitLab.
\n","url":"https://blog.rubenwardy.com/2020/05/06/why-i-switched-to-gitlab/","summary":"Since switching to GitLab, a few people have asked me why. There are several draws towards GitLab, and several pushes from GitHub.","date_published":"2020-05-06T00:00:00.000Z","tags":["reviews"]},{"id":"/2020/01/21/creating-game-guis-in-sfml/","language":"en","title":"A Comparison of SFML GUI Libraries: TGUI vs SFGUI vs ImGui","content_html":"SFML is an excellent library that can be used to create 2D games and similar\napplications in C++. It’s an abstraction over OpenGL and various system APIs,\npresenting a consistent and easy-to-use interface.
\nProviding a Graphical User Interface (GUI / UI) API is out of scope for SFML.\nGUIs are complicated, and there’s no single good way to implement them.\nThe S in SFML stands for Simple but GUI code rarely is.
\nThere are many different options to choose from when making GUIs. This article\nis an in-depth comparison of the options for making GUIs in SFML, discussing\ntheir pros and cons.
\nA full GUI library is likely to be overkill for most simple uses, such as those\nconsisting of buttons and text boxes. These can be implemented rather simply\nusing SFML’s graphics and input APIs. SFML already provides a nice\ncross-platform API for text input and clipboards.
\n\n\tCreating a GUI system with SFML\n\nImGui is a very easy-to-use library designed for use in prototyping and tools,\nbut not for GUIs which are used by a typical end user, such as those that appear\nduring gameplay.
\nIt enables fast iteration, and prefers ease-of-use and simplicity over\nperformance and customisability.
\nThe Immediate Mode GUI\npattern combines the rendering and the event handling of a GUI element. This is\nin contrast to event-based GUI libraries, where you first set up the elements\nand then subscribe to receive events on them.
\nImGui::Text(\"Hello, world %d\", 123);\nif (ImGui::Button(\"Save\"))\n MySaveFunction();\nImGui::InputText(\"string\", buf, IM_ARRAYSIZE(buf));\nImGui::SliderFloat(\"float\", &f, 0.0f, 1.0f);
\n\n\tImGui Tutorial\n\tGitHub\n
\nSFGUI provides a fully-functional automatic layouting system, with element\nbounds being allocated using an approach using\nrequisition and allocation.\nPut simply: child elements request a minimum size called a requisition, and then\nparents allocate the final size for their children. This approach makes making\nresponsive GUIs much easier.
\nSFGUI doesn’t provide much documentation other than the tutorials. The source\nhas Doxygen documentation, but this isn’t hosted anywhere. Additionally, SFGUI\nonly receives the bare-minimum maintenance to remain working. The last update\nwhich implemented a feature or fixed a bug was in June 2018. Admittedly, SFGUI\nis pretty stable with very few noticeable bugs, but has some gaping omissions\nsuch as tooltip and copy+paste support.
\nI didn’t get far enough to investigate the theming capabilities of SFGUI.
\n\n GitHub\n
\nWhile Texus GUI does also provide some container-based formatting, such as a\ngrid element, it is predominantly a constraint-based method of implementing\nGUIs. The position and size of elements is controlled using a custom\ndomain-specific language (DSL).
\nwidget->setPosition({\"ButtonName.right + 50\", \"ButtonName.top\"});\nwidget->setSize({\"min(&.w, &.h * 4/3)\", \"min(&.h, &.w * 3/4)\"});
\nStyling is done using stylesheets written using another custom domain-specific\nlanguage. These stylesheets set up\nrenderers to contain properties\nabout the widget’s appearance.
\nButton {\n Texture = \"button.png\" Part(2, 104, 190, 49) Middle(10, 10, 170, 29);\n TextureHover = \"button.png\" Part(2, 155, 190, 45) Middle(10, 10, 170, 25);\n TextureDown = \"button.png\" Part(2, 202, 190, 45) Middle(10, 10, 170, 25);\n TextureFocused = \"button.png\" Part(2, 155, 190, 45) Middle(10, 10, 170, 25);\n TextureDisabled = \"button.png\" Part(2, 2, 195, 49) Middle(10, 10, 175, 29);\n TextColor = rgb(250, 250, 250);\n TextColorDisabled = rgb(100, 100, 100);\n}
\nTGUI does appear to lack some code reuse for each element, which can make\ntheming hit and miss. For example, I discovered that not all elements which have\nbackgrounds support the same style properties to customise the background.
\nTGUI also only has a single active maintainer, the original creator Texus, who\nis very responsive to bug and feature requests.
\nTGUI follows the source code approach to documentation - the docs are generated\nfrom Doxygen comments in the source code, detailing what each thing does.
\n\n Website\n GitHub\n Doxygen\n Tutorials\n Discord\n
\nCEGUI is a library which isn’t specific to SFML, and because of this has a much\nlarger user base than any of the other options. It has a team of maintainers,\nrather than a single person.
\nCEGUI uses outdated C++, and it shows in the API. There’s a lot of manual\npointer use and a lot of ugly APIs. This was an immediate killer for me, so I\ndidn’t go further than this when investigating this library.
\n\nIt’s possible to make use of a much larger ecosystem by looking for GUI\nlibraries that target OpenGL. You can use a sf::RenderTexture
as a target for\nthe raw OpenGL calls, and then render that texture to the screen.
Whilst I prefer the container-based requisition-allocation approach of SFGUI,\nthe benefits of a better-maintained library are apparent which is why I use\nTGUI. TGUI feels badly designed and incomplete.
\nI am considering switching either switching back to SFGUI, and maintaining one\nmyself, or switching the graphics backend completely in order to use a\nbetter-maintained library like nanogui, but fear that this is too much of a\ndistraction from making my game.
\nIn this post, I tried to collect and convey the merits of each approach but I\nmay have missed some things out. Feel free to comment below.
\n","url":"https://blog.rubenwardy.com/2020/01/21/creating-game-guis-in-sfml/","summary":"An in-depth comparison of the options for making Graphical User Interfaces (GUIs/UIs) in SFML, discussing their pros and cons.","date_published":"2020-01-21T00:00:00.000Z","image":"https://blog.rubenwardy.com/static/img/HoaaJMJEti-540.png","tags":["tutorials","reviews","sfml","cpp","gamedev"]},{"id":"/2019/04/24/rendering-multi-z-layer-topdown-map-using-SFML/","language":"en","title":"Rendering a topdown world with layers and z-levels using SFML","content_html":"Ruben’s Virtual World Project\nis a game I’ve been working on for almost 4 years now.\nRecently I rewrote the rendering code to support voxel lighting and multiple\nz-level - heights of the map.
\n\nEach z-level has two layers - a tile layer and a floor/terrain layer.\nEach of these layers has a mesh (VertexArray) which are created in slightly\ndifferent ways.\nThe floor layer is totally populated, meaning that every position has a quad\nrepresenting it. The tile layer is sparsely populated, meaning that only positions\nwhich have a tile have a matching mesh quad.
\nMultiple z-levels are rendered one after another, with hidden z-levels not\nrendered at all.
\nLighting is performed by a shader on each mesh, and takes in a\nthree different textures - diffuse, normal, and lightmap.\nThe mesh contains UV co-ordinates which are used to index the diffuse map\nand normal map, as both are dependent on the type of tile and not the position.\nThe lightmap is indexed using the position.
\n\nEach position has 3x3 pixels in the lightmap, representing the lighting above\nand from each of the four sides.
\nA vertex shader is needed to export a relative position to the fragment shader:
\nvarying vec4 relativePosition;\n\nvoid main() {\n relativePosition = gl_Vertex;\n\n gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;\n gl_FrontColor = gl_Color;\n}
\nThe bulk of the work is done in the fragment shader:
\nuniform sampler2D source; // diffuse map\nuniform sampler2D lightmap;\nuniform sampler2D normalmap;\nuniform float factor; // Lower z-levels from cross-section have higher factors\nvarying vec4 relativePosition;\n\n// From a normal, calculate how much comes from each direction\nvec4 getComponents(vec4 normal) {\n vec2 rot = normal.xy*2.f - 1.f;\n return vec4(\n clamp(-rot.y, 0.f, 1.f),\n clamp(rot.x, 0.f, 1.f),\n clamp(rot.y, 0.f, 1.f),\n clamp(-rot.x, 0.f, 1.f)\n );\n}\n\nvoid main() {\n // Get light values for each direction\n vec2 rel = floor(relativePosition.xy / 64.f) / 16.f;\n vec4 lightAbove = texture2D(lightmap, rel + 0.5f / 16.f);\n vec4 lightUp = texture2D(lightmap, rel + vec2(0.5f, 0.f) / 16.f);\n vec4 lightRight = texture2D(lightmap, rel + vec2(0.8f, 0.5f) / 16.f);\n vec4 lightDown = texture2D(lightmap, rel + vec2(0.5f, 0.8f) / 16.f);\n vec4 lightLeft = texture2D(lightmap, rel + vec2(0.f, 0.5f) / 16.f);\n\n // Get normal and weighting for each direction\n vec4 normal = texture2D(normalmap, gl_TexCoord[0].xy);\n vec4 rot = getComponents(normal);\n\n // Leak sides to the above, to make underground wall tops visible\n lightAbove = lightAbove + clamp(lightUp + lightRight + lightDown + lightLeft, 0.f, 0.6f);\n lightAbove = clamp(lightAbove, 0.f, 1.f);\n\n // Calculate final light level\n vec4 lightV = rot[0]*lightUp + rot[1]*lightRight + rot[2]*lightDown + rot[3]*lightLeft + (normal.z*2.f - 1.f)*lightAbove;\n\n // Just support 1D lighting for now\n float light = lightV[0];\n\n // Calculate color\n vec4 color = texture2D(source, gl_TexCoord[0].xy);\n if (color[3] < 0.1) {\n gl_FragColor = color;\n } else {\n float u = 1.0 - clamp(factor, 0.0, 1.0);\n const vec4 BLUE = vec4(0, 0.75, 1.0, 1.0);\n gl_FragColor = mix(BLUE, color, u) * u * u * light * light;\n gl_FragColor[3] = color[3];\n }\n}
\nThis probably isn’t the best way to do it. This is one of the first shaders\nI’ve ever written, and graphics isn’t my thing.
\n","url":"https://blog.rubenwardy.com/2019/04/24/rendering-multi-z-layer-topdown-map-using-SFML/","summary":"A topdown game with Z-levels is 3d despite being rendered in 2d. This technique has 3d lighting and normal mapping, with multiple layers per tile for effects.","date_published":"2019-04-24T00:00:00.000Z","image":"https://blog.rubenwardy.com/static/img/OTyw-uIQ7E-400.png","tags":["tutorials","sfml","cpp","gamedev"]},{"id":"/2019/04/16/youtube-music-are-you-still-listening-fix/","language":"en","title":"Getting rid of YouTube Music's \"Are you still listening?\" dialog","content_html":"YouTube Music is a great way to listen to music for free, and with no adverts\nif you use an adblocker. There is one annoying problem however: after listening\nfor a while, YouTube will keep pausing the music to show a dialog which says\n“Are you still listening?”. This article will show how to automatically confirm\nthe dialog.
\nFirst, you need to install a plugin to allow running custom JavaScript on\nweb pages. I used Greasemonkey/Tampermonkey\n(Firefox |\nChrome),\nit seems to be the most popular plugin for this task.
\nOpen up the plugin’s menu, and click “new user script”.\nOpen up the menu again, select the script and click “edit”.\nPaste the following code as the source:
\n// ==UserScript==\n// @name Youtube Music Listening Clicker\n// @version 1\n// @grant none\n// ==/UserScript==\n\nsetInterval(() => {\n if (!document.querySelector(\"paper-dialog.ytmusic-popup-container\").getAttribute(\"aria-hidden\"))\n document.querySelector(\".ytmusic-you-there-renderer .style-blue-text .size-default\").click();\n}, 1000);\n\nconsole.log(\"Youtube Music Listening Clicker loaded\")
\nSave and go back to the plugin’s menu, select the script, and click “User script options”.\nAdd “music.youtube.com” to the script’s include textbox and check “Only use these” below.
\nReload YouTube Music if you have it open, and enjoy the absence of annoying dialogs!
\nPlease note: the music may still pause for a second or so if the tab isn’t\nfocused, due to Firefox/Chrome limiting JavaScript in background tabs. There’s\nno real fix for this. One way to reduce how often the dialog appears is\nto play another playlist when it first shows up, otherwise it will\njust keep asking.
\n","url":"https://blog.rubenwardy.com/2019/04/16/youtube-music-are-you-still-listening-fix/","summary":"After listening for a while, YouTube will keep pausing the music. You can suppress this using a browser plugin and some JavaScript","date_published":"2019-04-16T00:00:00.000Z","tags":["tutorials"]},{"id":"/2019/02/17/cpp-self-registering-test-macros/","language":"en","title":"C++: Self-registering functions using macros for test libraries","content_html":"Google’s C++ testing library has a nice syntax for registering tests,\nwithout needing to remember to add the tests to some central index.\nThis article will show how to use macros to allow the creation of tests\nusing only the following code:
\nTest(IntegerComparisonTest) {\n int a = 3;\n assert(a == 3);\n}
\nWhilst this gives the appearance of GTest adding a new keyword,\nTest()
is actually a macro. After the preprocessor stage, the code\nwill look something like this:
class Test_MyTest {\npublic:\n void execute();\n\n static Test_MyTest create() { return Test_MyTest(); }\n static bool registered = TestFactory::Register(\"MyTest\", &Test_MyTest::create);\n};\n\nvoid Test_MyTest::execute() {\n int a = 3;\n assert(a == 3);\n}
\nThe thing to note here is that the test body isn’t actually part of the macro,\nand that it utilises a split declaration and definition.
\nNow, this may be necessary in GTest’s use case to store some additional metadata\nin the class, but it’s not always needed. It would be much cleaner to cut\nout the static create function and the unnecessary class.\nThere’s a solution to this: function pointers!
\nbool test_MyTest();\nstatic bool test_MyTest_registered = TestFactory::Register(\"MyTest\", &test_MyTest);\nbool test_MyTest() {\n int a = 3;\n assert(a == 3);\n}
\n// ---- .hpp ----\n\n#include <functional>\n#include <map>\n#include <string>\n\n#define Test(name) \\\n bool test_##name(); \\\n static bool test_##name##_registered = TestFactory::Register(#name, &test_##name); \\\n bool test_##name()\n\nclass TestFactory {\npublic:\n static bool Register(std::string name, std::function<bool()> func);\nprivate:\n static std::map<std::string, std::function<bool()>> Tests;\n};\n\n// ---- .cpp ----\n\nstd::map<std::string, std::function<bool()>> TestFactory::Tests;\n\nbool TestFactory::Register(std::string name, std::function<bool()> func) {\n auto it = Tests.find(name);\n if (it == Tests.end()) {\n Tests[name] = std::move(func);\n return true;\n }\n return false;\n}
\nRecently I had to reinstall Windows to debug a hardware issue.\nI decided to try to make the most of this by trying to build\nmy game on Windows.
\nFirst, you need to clone vcpkg.\nI installed the Git Bash program from the Git website.
\ngit clone https://github.com/Microsoft/vcpkg.git\ncd vcpkg\n./bootstrap-vcpkg.bat
\nNext, you need to decide whether to install 32-bit (x86) or 64-bit (x64) dependencies,\nor both, then run one or both of the following commands:
\nVCPKG_DEFAULT_TRIPLET=x86-windows ./vcpkg.exe install DEPS\nVCPKG_DEFAULT_TRIPLET=x64-windows ./vcpkg.exe install DEPS
\nwhere DEPS
is a space-delimited list of package names.
You can search for packages like so:
\n./vcpkg.exe search query
\nIf you’re unable to find a dependency, you may have to write your own “port”\nfile to add it to vcpkg.
\nFinally, to let Visual Studio know about the dependencies, run this:
\n./vcpkg.exe integrate install
\nImport your program into Visual Studio as a cmake project.\nWhen generating the CMake cache, you’ll probably receive “not found” errors.
\nRight click on CMakeLists.txt and select “Open CMake Settings”.\nA file called CMakeSettings.json should appear, with content like the following:
\n{\n \"configurations\": [\n {\n \"name\": \"x64-Release\",\n \"generator\": \"Ninja\",\n \"configurationType\": \"RelWithDebInfo\",\n \"inheritEnvironments\": [\n \"msvc_x64_x64\"\n ],\n \"buildRoot\": \"${env.USERPROFILE}\\\\CMakeBuilds\\\\${workspaceHash}\\\\build\\\\${name}\",\n \"installRoot\": \"${env.USERPROFILE}\\\\CMakeBuilds\\\\${workspaceHash}\\\\install\\\\${name}\",\n \"cmakeCommandArgs\": \"\",\n \"buildCommandArgs\": \"-v\",\n \"ctestCommandArgs\": \"\"\n }\n ]\n}
\nAdd the vcpkg toolchain file to cmakeCommandArgs
\"cmakeCommandArgs\": \"-DCMAKE_TOOLCHAIN_FILE=\\\"C:\\\\Users\\\\ruben\\\\vcpkg\\\\scripts\\\\buildsystems\\\\vcpkg.cmake\\\"\"
\nClear the CMake cache by doing CMake > Delete Cache Folders > CMakeLists.txt,\nthen click the Generate button on the CMake changes detected banner.
\nThis should be enough to get Visual Studio to find the dependencies, but I\nfound that this wouldn’t work when you have custom CMake find files which\nprobably don’t support the toolchain.
\nI’ve yet to find out how to modify the find modules to use vcpkg’s toolchain\ncleanly, but in the meantime I set the directories manually by adding\nCMake flags and environment variables.
\nYou should prefer more general settings to individual include/lib settings.\nFor example, findX.cmake files may allow you to specify a single environment\nvariable to the install root instead of separate X_INCLUDE_DIR
and X_LIBRARY
\nCMake variables. This is shown below, with ENETDIR
and SFML_ROOT
.\nDon’t be too afraid to modify any embedded find scripts to support environment\nvariables.
{\n \"environments\": [\n {\n \"ENETDIR\": \"C:\\\\Users\\\\ruben\\\\vcpkg\\\\installed\\\\x64-windows\",\n \"SFML_ROOT\": \"C:\\\\Users\\\\ruben\\\\vcpkg\\\\installed\\\\x64-windows\",\n \"SFGUI_ROOT\": \"C:\\\\Users\\\\ruben\\\\vcpkg\\\\installed\\\\x64-windows\",\n \"THOR_ROOT\": \"C:\\\\Users\\\\ruben\\\\vcpkg\\\\installed\\\\x64-windows\",\n \"LUA_ROOT\": \"C:\\\\Users\\\\ruben\\\\vcpkg\\\\installed\\\\x64-windows\"\n }\n ],\n \"configurations\": [\n {\n \"cmakeCommandArgs\": \"-DCMAKE_TOOLCHAIN_FILE=\\\"C:\\\\Users\\\\ruben\\\\vcpkg\\\\scripts\\\\buildsystems\\\\vcpkg.cmake\\\"\"\n }\n ]\n}
\nNote that you may need to clear the CMake cache and regenerate for changes to\ntake effect.
\nBy default, MSVC will compile your program as a console program.\nThis mode results in Windows allocating and showing a console for you when starting\nthe program up. This console will require a redraw on every std::cerr or std::cout print,\nresulting in massive performance issues.
\nIf your program shows a graphical window, then you should change it to a Windows\nprogram.
\nThere are three methods to do this.
\nThe first option is to set the executable type to WIN32 in CMake:
\nif(WIN32)\n add_executable(${EXECUTABLE_NAME} WIN32 ${SRC})\nelse()\n add_executable(${EXECUTABLE_NAME} ${SRC})\nendif()
\nSecond, by setting linker flags using CMake:
\nif(WIN32)\n set(CMAKE_EXE_LINKER_FLAGS \"${CMAKE_EXE_LINKER_FLAGS} /SUBSYSTEM:WINDOWS /ENTRY:mainCRTStartup\")\nendif()
\nThe final option is to set the linker flags using a pragma directive:
\n#ifdef _WIN32\n #pragma comment(linker, \"/SUBSYSTEM:WINDOWS /ENTRY:mainCRTStartup\")\n#endif
\nThe ENTRY parameter allows you to use the standard main()
entrypoint instead\nof the non-standard WinMain()
.\nYou may need to clear and regenerate the CMake cache.
Hopefully that should be enough to get it working.\nPlease contact me if you know of any ways to make this cleaner or more robust.
\n","url":"https://blog.rubenwardy.com/2019/02/15/porting-cpp-to-windows-vcpkg/","summary":"Recently I had to reinstall Windows to debug a hardware issue. I decided to try to make the most of this by trying to build my game on Windows.","date_published":"2019-02-15T00:00:00.000Z","tags":["tutorials","cpp"]},{"id":"/2019/01/25/rimworld-progress-renderer-to-video/","language":"en","title":"RimWorld: Create timelapse from Progress Renderer on Linux","content_html":"Progress Renderer\n(Steam |\nGitHub)\nis a brilliant mod that takes a screenshot once a day, and dumps it into a folder.
\nThe following bash script can be used to generate an mp4 timelapse of the screenshots.\nSimply specify a region to cut out of the screenshots, and some other parameters,\nand run the script in the working directory.
\n#!/bin/bash\n\n###########################\n\n# Video resolution (must be even)\nSIZE=2272x1472\n\n# Video FPS\nFPS=10\n\n# Region to export\nCROP=4544x2944+4064+3360\n\n# Output file\nOUT=~/Videos/rimworldvid.mp4\n\n# TMP Location\nTMP=/tmp/rimworldvid\n\n###########################\n\necho \"================= CONFIG =====================\"\necho \"Region: $CROP\"\necho \"Output: $OUT\"\necho \"Resolu: $SIZE @ $FPS fps\"\necho \"Tmp: $TMP/\"\necho \"=============================================\"\necho \"\"\n\nmkdir -p $TMP\n\nfor filename in *.jpg; do\n if [ ! -f \"$TMP/$filename\" ]; then\n echo \"Cropping $filename\"\n convert $filename -crop $CROP -resize $SIZE \"$TMP/$filename\" || exit $?\n fi\ndone || exit $?\n\necho \"Rendering mp4\"\n\ncd $TMP\nffmpeg -framerate $FPS -pattern_type glob -i '*.jpg' \"$OUT\"
\n","url":"https://blog.rubenwardy.com/2019/01/25/rimworld-progress-renderer-to-video/","summary":"Progress Renderer is a brilliant mod that takes a screenshot once a day, and dumps it into a folder. Here's a script to turn that into a video.","date_published":"2019-01-25T00:00:00.000Z","tags":["tutorials"]},{"id":"/2018/09/07/raytracer-rasteriser/","language":"en","title":"Raytracer and Rasteriser","content_html":"I wrote a raytracer and a rasteriser as part of my university course.\nThe raytracer supported features such as indirect lighting, reflection,\nrefraction, and a photon mapper capable of simulating the final positions of\n60,000,000 photons in a few minutes (and quite a few GBs of RAM).
\nOnce anti-aliasing is implemented, the triangle normal need only be randomised for each sample, in order to acquire roughness. For each sample, the rotation of the triangle normal - for the triangle which the ray intersects with - is given a random offset. The magnitude of this random offset is dictated by the roughness attribute of the triangle. This gives a rough, blurred, look to rough mirrors. It also makes specularity look more realistic.
\nReflections can be implemented simply by firing another ray from the intersection\nposition in the right direction, and using the colour it returns as the pixel colour.\nThis colour is then blended with the diffuse color depending on material properties.\nYou get rough mirrors for free by having roughness.
\nRefraction is achieved by firing rays through the material, accounting for the difference\nin refractive index. The amount of light which is refracted vs diffused depends on the\ntransparency of the material.
\nIf a ray hits a transparent triangle on the opposite side to the normal, then the\nrefractive index is 1/R
, where R is the index of the material. Otherwise it’s just\nR
.
Fresnel’s Law determines how much of a ray is reflected when it hits a change\nin refractive index. Above a certain angle - the critical angle - the rays totally\nreflect, causing Total Internal Reflection.
\nBeer’s Law describes a property of semi-transparent materials where colour is absorbed from\nlight per length it passes through the material. The application of Beer’s law gives a jelly-like\neffect.
\nrefractedColor = incomingColor * exp(-absorbedColor * lengthInMaterial)
\nMillions of photons are simulated after being emitted from the light,\nand then their final positions are recorded. The record of all the final\npositions is called a photon map. I used an OctTree\n(like a quad tree, but 2x2x2 rather than 2x2) to enable efficient querying\nof photon positions.
\nHitting any surface will result in a certain percentage of the photon’s\ncolor being reflected, which results in indirect lighting. Photons will keep\nbouncing until their color is low enough to be negligible, at which point they’ll\nstop. The percentage of color which isn’t reflected will be stored in the\nphoton map as a final photon position (ie: diffuse) or split into colors\nand refracted if the triangle is transparent.
\nWarning: the default settings for the photon map require 4GB of free RAM.\nReduce the photon count in drawRaytracer() if you don’t have that available.
\nI wrote a Bash script to sort git commits into buckets, to be used as the first\nstep of making a change log. It supports rewording commit messages, can be\nstopped and resumed, and supports automatic filtering based on keywords.
\nLicense: MIT
\n$ commitsorter ~/Documents HEAD~10..HEAD\n\nOutputting to /home/ruben/Documents/commitsorter\n\n[bugfix / feature / csm / ssm / other / delete / reword]\n\nFix something (''rubenwardy'')\n: b (auto)\n\nAdd foo (''rubenwardy'')\n: f\n\nAdd bnrre mispelt (''rubenwardy'')\n: r\n\nNew message: Add bar (''rubenwardy'')\n\nAdd bar (''rubenwardy'')\n: f\n\nSummary:\n- Bugfixes: 1\n- Features: 2\n- CSM : 0\n- SSM : 0\n- Others : 0\n- Deleted : 0\n
\n#!/bin/bash\n\nif [[ $# -ne 2 ]]; then\n\techo \"Usage: commitsorter output REF1..REF2\"\n\texit 1\nfi\n\n### Define new results here ... ###\nmakebuckets() {\n\techo -e \"## Bug fixes and Improvements ##\\n\" > \"$1/b.txt\"\n\techo -e \"## Features ##\\n\" > \"$1/f.txt\"\n\techo -e \"## Client Modding ##\\n\" > \"$1/c.txt\"\n\techo -e \"## Server Modding ##\\n\" > \"$1/m.txt\"\n\techo -e \"## Other / Misc ##\\n\" > \"$1/o.txt\"\n\techo -e \"## Deleted ##\\n\" > \"$1/d.txt\"\n}\n\n### ... and also add them to help ###\nhelp() {\n\techo -e \"[\\e[4mb\\e[0mugfix / \\e[4mf\\e[0meature / \\e[4mc\\e[0msm / ss\\e[4mm\\e[0m / \\e[4mo\\e[0mther / \\e[4md\\e[0melete / \\e[4mr\\e[0meword]\"\n}\n\noutput=\"$1/commitsorter\"\necho \"Outputting to $output\"\n\n### Uncomment to backup and start again instead of resuming ###\n# if [ -d \"$output\" ]; then\n# \tDATE=$(date +%Y_%m_%d_%H_%M_%S)\n# \tbackup=\"${output}_$DATE\"\n# \techo \"Backing up $output to $backup\"\n# \tmv \"$output\" \"$backup\"\n# fi\n\n# Create directories and buckets\nmkdir -p \"$1\"\nmkdir -p \"$output\"\nmkdir -p \"$output/cache\"\nif [ ! -f $output/b.txt ]; then\n\tmakebuckets $output\nfi\n\n# Get log\nentries=`git log --pretty=format:\"%h %s (''%an'')\" \"$2\"`\n\n# Print help\necho \"\"\nhelp\n\n# Loop\nIFS=$'\\n'\nfor line in $entries\ndo\n\thash=$(echo $line | cut -c1-7)\n\tmessage=$(echo $line | cut -c 9- )\n\n\techo \"\"\n\techo \"$message\"\n\n\tif [[ -f $output/cache/$hash ]] ; then\n\t\tinput=$(cat $output/cache/$hash)\n\t\techo -e \": $input (already sorted)\"\n\telif echo $message | grep -iq \"fix\\(ed\\)\\? 
\"; then\n\t\techo \": b (auto)\"\n\t\techo $message >> $output/b.txt\n\t\techo b > $output/cache/$hash\n\telif echo $message | grep -iq \"translated using weblate\"; then\n\t\techo \": d (auto)\"\n\t\techo $message >> $output/d.txt\n\t\techo d > $output/cache/$hash\n\telse\n\t\twhile true; do\n\t\t\tread -p \": \" -rsn1 input\n\t\t\techo $input\n\n\t\t\tLC_ALL=C # Turn off locale.\n\t\t\tif [[ $input == r ]] ; then\n\t\t\t\techo \"\"\n\t\t\t\tread -p \"New message: \" message2\n\t\t\t\tif [[ ! -z $message2 ]] ; then\n\t\t\t\t\tmessage=$message2\n\t\t\t\tfi\n\t\t\t\techo \"\"\n\t\t\t\techo $message\n\t\t\telif [[ $input == [a-z] ]] ; then\n\t\t\t\tif [ -f $output/$input.txt ]; then\n\t\t\t\t\techo $message >> $output/$input.txt\n\t\t\t\t\techo $input > $output/cache/$hash\n\t\t\t\t\tbreak\n\t\t\t\telse\n\t\t\t\t\thelp\n\t\t\t\tfi\n\t\t\telse\n\t\t\t\thelp\n\t\t\tfi\n\t\tdone\n\tfi\ndone\n\n\necho \"\"\necho \"Summary:\"\nfor filename in $output/*.txt; do\n\ttitle=$(head -n1 $filename | sed 's/^## \\([^#]*\\) ##$/\\1/')\n\tcount=$(cat $filename | tail -n+2 | sed '/^\\s*$/d' | wc -l)\n\techo \"- $title: $count\"\ndone
\n","url":"https://blog.rubenwardy.com/2018/08/24/commit-sorter-changelog/","summary":"I wrote a Bash script to sort git commits into buckets, to be used as the first step of making a change log.","date_published":"2018-08-24T00:00:00.000Z","tags":["snippet","linux","projects"]},{"id":"/2018/05/07/mingw-copy-dlls/","language":"en","title":"Finding and copying DLLs from MinGW-W64 directories to bin directory","content_html":"After cross-compiling your project for Windows, you find that it crashes due\nto missing DLLs. I will show how to identify any required DLLs using objdump,\nand copy them to your build directory.
\nmingw32-objdump can be used to analysis an executable or object. Among other\nthings, it lists the .DLLs which are required by the .EXE. Call it by passing\nthe path to the executable and the -p
flag:
x86_64-w64-mingw32-objdump -p project/bin/project.exe
\nThis will output a ton of content, so you’ll want to pass the result\nthrough grep and sed to get just the DLL names:
\nx86_64-w64-mingw32-objdump -p project/bin/project.exe | grep 'DLL Name:' | sed -e \"s/\\t*DLL Name: //g\"
\nThe output will look a bit like this:
\nlibgcc_s_seh-1.dll\nKERNEL32.dll\nmsvcrt.dll\nlibwinpthread-1.dll\nlibstdc++-6.dll\nWSOCK32.dll\nsfgui.dll\nsfml-graphics-2.dll\nsfml-network-2.dll\nsfml-system-2.dll\nsfml-window-2.dll\nlibthor.dll\n
\nNow we want to iterate over the result of this, and try and find the DLL\nin a number of search paths. Here’s the full script to do that:
\n#!/bin/bash\n\nBINDIR=\"project/bin\"\nEXE=\"$BINDIR/project.exe\"\nPREFIX=\"x86_64-w64-mingw32\"\n\npaths=(\"/usr/local/mingw64/bin\"\n \"/usr/local/mingw64/bin/x64\"\n \"/usr/$PREFIX/bin\"\n \"/usr/lib/gcc/$PREFIX/7.3-posix\"\n \"/usr/$PREFIX/lib\")\n\nfunction findAndCopyDLL() {\n for i in \"${paths[@]}\"\n do\n FILE=\"$i/$1\"\n if [ -f $FILE ]; then\n cp $FILE $BINDIR\n echo \"Found $1 in $i\"\n copyForOBJ $FILE\n return 0\n fi\n done\n\n return 1\n}\n\nfunction copyForOBJ() {\n dlls=`$PREFIX-objdump -p $1 | grep 'DLL Name:' | sed -e \"s/\\t*DLL Name: //g\"`\n while read -r filename; do\n findAndCopyDLL $filename || echo \"Unable to find $filename\"\n done <<< \"$dlls\"\n}\n\ncopyForOBJ $EXE
\nThe output will look like this:
\nFound libgcc_s_seh-1.dll in /usr/x86_64-w64-mingw32/bin\nUnable to find KERNEL32.dll\nUnable to find msvcrt.dll\nFound libwinpthread-1.dll in /usr/x86_64-w64-mingw32/bin\nFound libstdc++-6.dll in /usr/x86_64-w64-mingw32/bin\nUnable to find WSOCK32.dll\nFound sfgui.dll in /usr/local/mingw64/bin\nFound sfml-graphics-2.dll in /usr/local/mingw64/bin\nFound sfml-network-2.dll in /usr/local/mingw64/bin\nFound sfml-system-2.dll in /usr/local/mingw64/bin\nFound sfml-window-2.dll in /usr/local/mingw64/bin\nFound libthor.dll in /usr/local/mingw64/bin\n
\nSome DLLs won’t be found as they’re provided as part of Windows.
\n","url":"https://blog.rubenwardy.com/2018/05/07/mingw-copy-dlls/","summary":"After cross-compiling your project for Windows, you find that it crashes due to missing DLLs. I will show how to detect and copy required DLLs.","date_published":"2018-05-07T00:00:00.000Z","tags":["snippet","tutorials","cpp"]},{"id":"/2018/04/05/simple-kernel-in-c/","language":"en","title":"Rubix: My Simple Kernel written in C for arm","content_html":"During the second year of university, I created a kernel for the ARMv7\ninstruction set. I went above and beyond what was required on this project,\nachieving a clean design and features such as a blocked process queue, piping,\nkill, and a simple filesystem. This was my favourite coursework so far. I found\nit very interesting to learn about and implement the things that we take for granted\nas programmers.
\nI tried to stick to POSIX as much as possible, and stuck to the Linux method of\nhaving everything as either a file or process. Because pipes and standard in/out\nwere both “files”, I was able to implement both popen and piping of the output\nof a process to another process.
\nAll system calls conform to the equivalent POSIX standard, except for\nclose
which will close the appropriate resource (no need for fclose
or pclose
)\nand set_nonblocking
which is custom.
yield
- ends the current time slice.write
- writes to an open file descriptor.read
- reads from an file descriptor. Returns length of read, 0 on EOF.\nMay blocking - see set_nonblocking
.close
- closes a file descriptor.dup2
- duplicates fd from old
to new
. new
is closed if it already exists.pipe
- creates a pipe. fd[0] is read, fd[1] is write.fopen
- open file. Not quite POSIX, as it’s non-blockingfork
- clones process.\nReturn value is 0 if child, PID of child if parent, -1 if error.exec
- replaces the process with another program. PID is kept.\nStack and FDs (except in/out/err) are destroyed.exit
- exits with exit code.wait
- waits for a child program to exit, and gives exit code.kill
- sends a kill signal to a process. Killed processes will not return an exit code.\nsignal
is not yet implemented.setpriority
- set priority of child process.set_nonblocking
- is not POSIX, unfortunately. Set pipe non-blocking.The following functions use system calls to provide a higher function:
\npopen
- opens a process and returns a FD. Uses fork
, pipe
, exec
, and dup2
.wait
/waitpid
- both use the wait
syscall.The kernel allows the use of file descriptors to refer to resources. They\nare implemented under the hood using function pointers, which means that the main\npart of the kernel doesn’t even know what type of file they are. Can be blocking or not.
\nTypes:
\nShara and I exhibited on behalf of Minetest at Freenode #Live. Our stall had a\ncomputer where visitors could play Minetest. We talked about it and handed out\nleaflets.
\nIf you’d like to support Minetest, then please get involved\nand consider donating:
\nMaterials copyright © 2017 to rubenwardy and Shara.\nLicense: CC-BY-SA 4.0
\nThe following are the copyright owners of the screenshots,\nnot the mods involved.
\nHave I missed you out? Please contact me.
\n","url":"https://blog.rubenwardy.com/2017/10/29/freenode-live/","summary":"I ran an exhibit for Minetest at Freenode Live. Our stall had a computer where visitors could play Minetest. We talked about it and handed out leaflets.","date_published":"2017-10-29T00:00:00.000Z","image":"https://blog.rubenwardy.com/static/img/T3VooHHCPq-500.png","tags":["minetest"]},{"id":"/2017/09/30/install-autodesk-maya-ubuntu-17-04/","language":"en","title":"Install Autodesk Maya on Ubuntu 17.04","content_html":"Whilst Maya claims to have Linux support, it only officially supports RedHat\nand CentOS Linux. It is, however, still possible to install it on Debian and\nUbuntu based systems with some leg work.
\nTo make this work, you need to:
\nIf you have already downloaded maya, then cd
to the containing folder.\nIf you haven’t, don’t worry. This script will download it.
Don’t worry if the Autodesk installer claims the installation was unsuccessful,\nit may still have worked.
\nRun the following script as root:
\n#!/bin/bash\n\n# Make sure we’re running with root permissions.\nif [ `whoami` != root ]; then\n echo \"Please run this script using sudo\"\n echo \"Just type “sudo !!”\"\n exit 1\nfi\n\n# Check for 64-bit arch\nif [uname -m != x86_64]; then\n echo \"Maya will only run on 64-bit linux.\"\n echo \"Please install the 64-bit ubuntu and try again.\"\n exit 1\nfi\n\n# Detect existing download, else download from scratch\nif [ -f \"setup\" ]; then\n echo \"Found existing Maya installer\"\nelse\n echo \"No Maya installer in current directory\"\n echo \"Press [ENTER] to download and extract Maya 2017, or ctrl+C to cancel\"\n read -n 1 -s -r -p \"\"\n\n ## Create Download Directory\n mkdir -p maya2017Install\n cd maya2017Install\n\n ## Download Maya Install Files\n wget https://edutrial.autodesk.com/NET17SWDLD/2017/MAYA/ESD/Autodesk_Maya_2017_EN_JP_ZH_Linux_64bit.tgz\n tar xvf Autodesk_Maya_2017_EN_JP_ZH_Linux_64bit.tgz\nfi\n\n# Install Dependencies\nwget https://launchpadlibrarian.net/183708483/libxp6_1.0.2-2_amd64.deb\napt-get install -y libssl1.0.0 gcc libssl-dev libjpeg62 alien csh tcsh libaudiofile-dev libglw1-mesa elfutils libglw1-mesa-dev mesa-utils xfstt ttf-liberation xfonts-100dpi xfonts-75dpi ttf-mscorefonts-installer libfam0 libfam-dev libgstreamer-plugins-base0.10-0\n\n# Fix .so libs\nwget https://mirrors.kernel.org/ubuntu/pool/main/libp/libpng/libpng12-0_1.2.54-1ubuntu1_amd64.deb\nln -s /usr/lib/x86_64-linux-gnu/libtiff.so /usr/lib/x86_64-linux-gnu/libtiff.so.3\nln -s /usr/lib/x86_64-linux-gnu/libcrypto.so /usr/lib/x86_64-linux-gnu/libcrypto.so.10\nln -s /usr/lib/x86_64-linux-gnu/libssl.so /usr/lib/x86_64-linux-gnu/libssl.so.10\n\n# Convert .rpm to .dev\nalien -cv *.rpm\n\n# Install .deb\ndpkg -i *.deb\n\n# Run installer (setup executable)\nmv /usr/bin/rpm /usr/bin/rpm_backup\nln -s /bin/true /usr/bin/rpm\nchmod +x ./setup\n./setup\nrm /usr/bin/rpm\nmv /usr/bin/rpm_backup /usr/bin/rpm\n\n# Make tmp dir for Maya\nmkdir -p /usr/tmp\n\n# This is fine, as it's 
just another /tmp dir (which is 777 too)\nchmod 777 /usr/tmp\n\n# Maya Camera Modifier Key\ngsettings set org.gnome.desktop.wm.preferences mouse-button-modifier \"<Super>\"\n\n# Ensure that Fonts are Loaded\nxset +fp /usr/share/fonts/X11/100dpi/\nxset +fp /usr/share/fonts/X11/75dpi/\nxset fp rehash\n\necho \"Maya was installed successfully.\"
\nThen run the following script for each user you want to run Maya:
\n#!/bin/bash\n\nif [ `whoami` == root ]; then\n echo \"Please run this script using the user you want to run Maya with.\"\n echo \"I really hope that isn't root!\"\n exit 1\nfi\n\nmkdir -p ~/maya/2017/\n\n# Fix Segmentation Fault Error\necho \"MAYA_DISABLE_CIP=1\" >> ~/maya/2017/Maya.env\necho \"LC_ALL=C\" >> ~/maya/2017/Maya.env
\nThe above scripts are heavily dependent on your OS version and state,\nso you’re likely to have one of the following errors.
\nMake sure you’re running Maya with the following environment variables.\nYou can do this by adding the following lines to ~/maya/2017/Maya.env
:
MAYA_DISABLE_CIP=1\nLC_ALL=C
\nThis means the an compile lib object file cannot be found when running it.\nTo fix this:
\nCheck for an existing file with the same first segment in /usr/lib/x86_64-linux-gnu
.\nFor example, if it complains about libssl.so.12
then look for libssl.so
in the\ndirectory.
If the file exists, then create a soft symlink like so:
\nln -s /usr/lib/x86_64-linux-gnu/EXISTING.so /usr/lib/x86_64-linux-gnu/NOTFOUND.so
\nFor example,
\nln -s /usr/lib/x86_64-linux-gnu/libssl.so /usr/lib/x86_64-linux-gnu/libssl.so.12
\nDon’t worry about this, it may have still worked. Check it by running maya
-\nbut make sure to click finish on the installer and run the user script first.
For the last two years, I have been working on a\nvery ambitious game.\nThe game is a top-down sandbox with multiplayer support. I’m aiming towards a\ncity-based game, where players can wander around a procedurally generated city.\nOne of the main reasons I started creating this game is to learn about multiplayer\nnetworking at a low level - client-side prediction, server-side reconcilliation,\ncheat preventation, and reducing the visual effect of latency.
\n\nThe project is written in C++ using SFML. It’s the biggest project I’ve ever created,\nwith 9,000 lines of code and 64 classes.
\nI decided to split the project into several milestones, each one iterating on the\nlast and moving it towards a more complete game.
\nMS1 adds a foundation for future work, including: networking,\nMVC based architecture, basic entities, raytracer, materials to specify how to\ndraw something, collision, and loads more.
\nI started off by writing the basic world model and spatial partitioning. I then\nwrote the client/server handshake.
\n\nAfter that, I added rendering so I could see the world, and start adding\nactual game features.
\n\nThe next step is to add player movement and entity synchronisation, and then\nnode placement:
\n\nI also decided to switch to Kenney’s texture pack during this time.
\n\nAdds the ability to open doors, highlight entities, and damage them.
\nNext I needed a way to interact with the inventory, so I added an MVC framework\nto switch between windows and the game, and an inventory window to use the inventory.
\n\nI also added the ability for entities and players to die, and for players to be\nrespawned.
\nI allowed items to provide a ‘tool spec’. A tool spec is a struct which\nspecifies what functionalities an item has - for example, whether it can\ndamage tiles and entities, whether it can shoot bullets, and how much damage it\ngives.
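A tool spec might look something like this - a minimal sketch where the field names are my guesses for illustration, not the project's actual struct:

```cpp
// Hypothetical sketch of a tool spec: the names here are illustrative only.
struct ToolSpec {
    bool canDamageTiles    = false;
    bool canDamageEntities = false;
    bool canShoot          = false; // whether the item fires bullets
    int  damage            = 0;     // damage dealt per hit or per bullet
};

struct Item {
    const char *name;
    ToolSpec tool;
};

// Example: a pistol that can hurt entities and shoot, but not break tiles.
const Item PISTOL = { "pistol", { false, true, true, 7 } };
```

The interaction code can then branch on the spec's capabilities rather than hard-coding behaviour per item.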
\nThis was quite a big milestone. I added the following features:
\nSince the last milestone, I have been working on increasing code coverage.\nThe world logic is now 97% covered; however, the client (rendering, input, packet handling)\nand the server (packet handling, profiles, interaction logic) still need tests.
\nThe client and server are both designed in an MVP style, which means that it\nshouldn’t be too hard to isolate and test them using dependency injection,\nwhich is something I’d like to get around to.
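A minimal sketch of what that isolation could look like - the interface and class names are hypothetical, not the project's: the server only talks to the network through an interface, so tests can inject a fake.

```cpp
#include <string>
#include <utility>
#include <vector>

// The server sends packets only through this interface...
struct IPacketSender {
    virtual void send(int peer, const std::string &data) = 0;
    virtual ~IPacketSender() = default;
};

class Server {
    IPacketSender &sender;
public:
    explicit Server(IPacketSender &s) : sender(s) {}
    void kick(int peer) { sender.send(peer, "KICK"); }
};

// ...so a test can inject a fake that records packets
// instead of touching real sockets.
struct FakeSender : IPacketSender {
    std::vector<std::pair<int, std::string>> sent;
    void send(int peer, const std::string &data) override {
        sent.push_back({peer, data});
    }
};
```

A test then constructs `Server` with a `FakeSender` and asserts on the recorded packets.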
\nAs for actual game progress, I’d like to work on making the server more\nscalable. I’m thinking of implementing a distributed architecture with\nchunk workers - processes that are assigned to be the “truth” of a group\nof chunks, to update them based on logic, and to respond to any requests\nrelating to them.
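One way to make "which worker owns this chunk" unambiguous is a deterministic mapping from chunk coordinate to worker. Here's a sketch of the idea (my own, since the design isn't settled yet):

```cpp
#include <cstdint>
#include <functional>

struct ChunkPos { int32_t x, y; };

// Every process computes the same answer for the same chunk, so there is
// never a disagreement about which worker is the "truth" for it.
std::size_t workerFor(const ChunkPos &pos, std::size_t workerCount)
{
    std::size_t h = std::hash<int32_t>{}(pos.x);
    h ^= std::hash<int32_t>{}(pos.y) + 0x9e3779b9 + (h << 6) + (h >> 2); // hash mix
    return h % workerCount;
}
```

The downside of a fixed mapping is that it can't rebalance load; a lookup table held by a coordinator would allow reassigning hot chunks at the cost of an extra hop.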
\nAnother thing I need to investigate is lighting, as it’s a good example of an\nalgorithm that needs to propagate through chunk boundaries and so will help\nwith making sure my workers can handle it.
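To make the boundary problem concrete, here's a toy flood-fill of light on a single grid (a sketch, not the game's code): each tile outward is one level dimmer, and wherever the fill reaches a chunk edge with brightness left over, the neighbouring chunk's worker would have to continue it.

```cpp
#include <cstdint>
#include <queue>
#include <utility>
#include <vector>

using Grid = std::vector<std::vector<uint8_t>>;

// Breadth-first light propagation from a single source tile.
void propagateLight(Grid &light, int sx, int sy, uint8_t brightness)
{
    const int h = light.size(), w = light[0].size();
    std::queue<std::pair<int, int>> frontier;
    light[sy][sx] = brightness;
    frontier.push({sx, sy});
    while (!frontier.empty()) {
        auto [x, y] = frontier.front();
        frontier.pop();
        uint8_t cur = light[y][x];
        if (cur <= 1)
            continue; // too dim to spread any further
        const int dirs[4][2] = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        for (const auto &d : dirs) {
            int nx = x + d[0], ny = y + d[1];
            // Each neighbour is one level dimmer. If (nx, ny) fell outside
            // this chunk, the owning worker would have to carry on the fill.
            if (nx >= 0 && ny >= 0 && nx < w && ny < h && light[ny][nx] < cur - 1) {
                light[ny][nx] = cur - 1;
                frontier.push({nx, ny});
            }
        }
    }
}
```

In the distributed version, the "continue the fill" step becomes a message to the neighbouring chunk's worker, which is exactly the coordination I want to stress-test.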
\n\n\tPart of a series of blog posts about my game,\n\tRVWP\n
\n","url":"https://blog.rubenwardy.com/2017/08/13/multiplayer-topdown-sandbox-game/","summary":"The game is a top-down sandbox with multiplayer support. I'm aiming towards a city-based game, where players can wander around a procedurally generated city.","date_published":"2017-08-13T00:00:00.000Z","image":"https://blog.rubenwardy.com/static/img/zf-Dk4eMTR-540.png","tags":["projects","sfml","cpp","gamedev"]},{"id":"/2017/03/07/mingw-w64-cmake-unrecognised-option-rdynamic-ubuntu/","language":"en","title":"Mingw-w64 and CMake: unrecognised option -rdynamic on Ubuntu","content_html":"I had an issue where CMake was failing on a compiler test with the\nfollowing error:
\nerror: unrecognized option '-rdynamic'\n
\nThe problem was that CMake caches settings such as compiler flags\nin CMakeCache.txt, so you need to clear the cache when changing the\ntarget platform. Do this by deleting the CMakeFiles directory and CMakeCache.txt.
\n","url":"https://blog.rubenwardy.com/2017/03/07/mingw-w64-cmake-unrecognised-option-rdynamic-ubuntu/","summary":"How to solve \"error: unrecognized option '-rdynamic'\" on Ubuntu.","date_published":"2017-03-07T00:00:00.000Z","tags":["snippet","tutorials","cpp"]},{"id":"/2016/09/30/opencv-combine-images-into-strip/","language":"en","title":"OpenCV: compile images (Mats) into a strip","content_html":"I created a very short C++ snippet to accumulate a series of Mat
s into\na single Mat
strip. It works like acc = acc + m
- each new Mat is appended to the\naccumulator, and the result becomes the new accumulator.
#pragma once\n#include <opencv2/opencv.hpp>\n#include <opencv2/core/core.hpp>\n#include <opencv2/imgcodecs.hpp>\n#include <assert.h>\n\nusing namespace cv;\n\nclass MatStrip\n{\npublic:\n Mat current;\n float scale;\n\n MatStrip(Mat start, float scale=0.3f):\n scale(scale)\n {\n resize(start, current, Size(start.cols * scale, start.rows * scale), 1.0, 1.0, INTER_CUBIC);\n }\n\n void add(const Mat &in_o)\n {\n Mat in;\n resize(in_o, in, Size(in_o.cols * scale, in_o.rows * scale), 1.0, 1.0, INTER_CUBIC);\n\n assert(in.type() == current.type());\n Size sz1 = current.size();\n Size sz2 = in.size();\n Mat im3(sz1.height, sz1.width+sz2.width, current.type());\n Mat left(im3, Rect(0, 0, sz1.width, sz1.height));\n current.copyTo(left);\n Mat right(im3, Rect(sz1.width, 0, sz2.width, sz2.height));\n in.copyTo(right);\n current = im3;\n }\n};
\n","url":"https://blog.rubenwardy.com/2016/09/30/opencv-combine-images-into-strip/","summary":"I created a very short C++ snippet to accumulate a series of Mats into a single Mat strip.","date_published":"2016-09-30T00:00:00.000Z","image":"https://blog.rubenwardy.com/static/img/yixVzf1FwF-540.png","tags":["snippet","tutorials","cpp"]},{"id":"/2016/09/12/sfml-thor-emit-a-single-particle/","language":"en","title":"How to emit a single particle using SFML's Thor Particle System","content_html":"Use case: shells dropping in sync with firing, fake bullets, etc.
\nYou must use a particle emitter to create particles; however, this doesn’t mean\nit’s impossible to create single particles on command. You can create a particle\nemitter which simply adds particles from a queue to the system.
\n#pragma once\n#include <SFML/Graphics.hpp>\n#include <Thor/Math.hpp>\n#include <Thor/Graphics.hpp>\n#include <Thor/Particles.hpp>\n#include <vector>\n\nnamespace sfext {\n\nclass SingleParticleEmitter\n{\n std::vector<thor::Particle> *particles;\npublic:\n SingleParticleEmitter();\n SingleParticleEmitter(const SingleParticleEmitter& that);\n void operator() (thor::EmissionInterface& system, sf::Time dt);\n\n void push(thor::Particle particle);\n void push(sf::Vector2f position, sf::Vector2f velocity, float lifetime, float rot=0, float rotsp=0);\n};\n\n}; // end namespace sfext
\nThe above allows you to use the push functions to emit particles.\nHere is an example of it in use:
\nSingleParticleEmitter single_emitter;\nthor::ParticleSystem particle_system;\nparticle_system.addEmitter(single_emitter);\n\n// When you need to emit a shell:\n\nsingle_emitter.push(origin, velocity, 1.0f);\n// lifetime of the particle in seconds --^
\nHere is the cpp file:
\n#include \"singleparticleemitter.hpp\"\n\nusing namespace sfext;\n\nSingleParticleEmitter::SingleParticleEmitter()\n{\n particles = new std::vector<thor::Particle>();\n}\n\nSingleParticleEmitter::SingleParticleEmitter(const SingleParticleEmitter& that)\n{\n particles = that.particles;\n}\n\nvoid SingleParticleEmitter::operator() (thor::EmissionInterface& system, sf::Time dt)\n{\n for (auto particle : *particles) {\n system.emitParticle(particle);\n }\n particles->clear();\n}\n\nvoid SingleParticleEmitter::push(thor::Particle particle)\n{\n particles->push_back(particle);\n}\n\nvoid SingleParticleEmitter::push(sf::Vector2f position, sf::Vector2f velocity, float lifetime, float rot, float rotsp)\n{\n thor::Particle particle(sf::seconds(lifetime));\n particle.position = position;\n particle.velocity = velocity;\n particle.rotation = rot;\n particle.rotationSpeed = rotsp;\n particle.scale = sf::Vector2f(1.f, 1.f);\n particle.color = sf::Color::White;\n particle.textureIndex = 0u;\n push(particle);\n}\n
\n","url":"https://blog.rubenwardy.com/2016/09/12/sfml-thor-emit-a-single-particle/","summary":"You must use a particle emitter to create particles, however this doesn't mean it's impossible to create single particles on command.","date_published":"2016-09-12T00:00:00.000Z","tags":["snippet","tutorials","sfml","cpp","gamedev"]},{"id":"/2016/08/16/sfgui-center-window-container/","language":"en","title":"How to center a SFGUI window (sfg::Window)","content_html":"Simply get the SFGUI window size using GetAllocation,\nthe sfml window size using getSize, then do this arithmetic:
\nauto window = sfg::Window::Create();\nauto win_rect = window->GetAllocation();\nsf::Vector2f size(win_rect.width, win_rect.height);\nwindow->SetPosition(((sf::Vector2f)rwindow->getSize() - size) / 2.0f);
\n","url":"https://blog.rubenwardy.com/2016/08/16/sfgui-center-window-container/","summary":"Simply get the SFGUI window size using GetAllocation, the sfml window size using getSize, then do some arithmetic.","date_published":"2016-08-16T00:00:00.000Z","tags":["snippet","tutorials","sfml","cpp","gamedev"]},{"id":"/2016/07/21/rimworld-setup-monodevelop/","language":"en","title":"Setting up a MonoDevelop solution and project for RimWorld modding","content_html":"This tutorial will show you how to create a new MonoDevelop solution and project,\nand how to properly connect it to RimWorld. You’ll need to have an installation\nof Mono that supports .NET 3.5 -\nhere is a tutorial on how to install it.
\nname
field.Some of this information was learned from\nthis wiki page.
\n","url":"https://blog.rubenwardy.com/2016/07/21/rimworld-setup-monodevelop/","summary":"This tutorial will show you how to create a new MonoDevelop solution and project, and how to properly connect it to RimWorld.","date_published":"2016-07-21T00:00:00.000Z","tags":["linux","tutorials"]},{"id":"/2016/07/20/rimworld-install-monodevelop-with-dot-net-3.5/","language":"en","title":"Install Monodevelop to build for .NET 3.5 (using Mono 3.14) - RimWorld","content_html":"This tutorial will show you how to install Mono and Monodevelop in order to\ndevelop .NET 3.5 projects. This is useful when writing C# assembly mods\nfor Unity engine based games, such as RimWorld, as they tend to require\n.NET 3.5.
\nIf you don’t use .NET 3.5, you may get errors including but not limited to:
\nSystem.TypeLoadException: Could not load type ‘System.Action’ from assembly ‘ModName’.
\nThat error is commonly thrown when there is a version mismatch, for example\nin the targeted .NET Framework version or in referenced DLLs.
\nMono 4.0 and later don’t support .NET 3.5.\nTherefore, we’ll need to install an older version.
\nI couldn’t find many posts about this - it seems that not many people develop\nRimWorld mods on Linux.
\nAfter this tutorial, MonoDevelop will support the following target frameworks:\nMono / .NET 2.0, Mono / .NET 3.0, .NET 3.5 Client Profile, Mono / .NET 3.5,\n.NET 4.0 Client Profile, Mono / .NET 4.0, Mono / .NET 4.5, .NET Framework 4.5.1
\nUnfortunately I do not know of a way to keep multiple versions of Mono and\nMonodevelop installed at the same time, so you’ll need to uninstall them:
\nsudo apt-get remove mono-complete mono-devel monodevelop\nsudo apt autoremove
\nNow check that mono is not installed:
\nmono --version\n# you should get something like command not found
\nWe need to add a snapshot PPA, as described\nin the mono-project docs.
\nsudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF\necho \"deb https://download.mono-project.com/repo/debian wheezy/snapshots/3.12.0 main\" | sudo tee /etc/apt/sources.list.d/mono-xamarin.list\nsudo apt-get update
\nNow we need to install Mono 3.14 and Monodevelop 5.7. We also need to lock the\nversions of the installed packages, to stop apt-get upgrade from installing Mono 4.0+\nand reversing our efforts. We use apt-mark hold to do this.
\nsudo apt-get install build-essential cli-common libgtk2.0-cil-dev libglade2.0-cil-dev libgnome2.0-cil-dev libgconf2.0-cil-dev\nsudo apt-get install -t wheezy/snapshots/3.12.0 mono-complete\nsudo apt-mark hold mono-complete\nsudo apt-mark hold mono-devel\nsudo apt-mark hold libmono*\nsudo apt-get install -t wheezy/snapshots/3.12.0 monodevelop\nsudo apt-mark hold monodevelop
\nFor more information, see this thread about installing from a specific repository\nand this thread about locking package versions.
\nmono --version
\nShould give something like:
\nMono JIT compiler version 3.12.1 (tarball Fri Mar 6 19:12:47 UTC 2015)\nCopyright © 2002-2014 Novell, Inc, Xamarin Inc and Contributors. www.mono-project.com
\nAnd monodevelop should be in the start menu.\nNote that you may need to restart your computer, depending on your window manager.
\nNext: Setting up a MonoDevelop solution and project for RimWorld modding.
\nHere is an error you may get when implementing a JobDriver using .NET 4.0+
\nException in SetupToil(pawn=Kimmy, job=Accident): System.TypeLoadException: Could not load type ‘System.Action’ from assembly ‘toilets’.\nat Verse.AI.JobDriver.SetupToils()\nVerse.Log:Error(string)\nVerse.AI.JobDriver:SetupToil()\nVerse.AI.Pawn_JobTracker:StatJob(Job, JobCondition, ThinkNode, Boolean, Boolean, ThinkTreeDef)\nVerse.AI.Pawn_JobTracker:TryFindAndStartJob()\nVerse.AI.Pawn_JobTracter:EndCurrentJob(JobCondition)
\n","url":"https://blog.rubenwardy.com/2016/07/20/rimworld-install-monodevelop-with-dot-net-3.5/","summary":"This tutorial will show you how to install Mono and Monodevelop in order to develop .NET 3.5 projects.","date_published":"2016-07-20T00:00:00.000Z","tags":["linux","tutorials"]},{"id":"/2016/06/02/move-private-repos-from-bitbucket-to-github/","language":"en","title":"Move private repos from Bitbucket to GitHub","content_html":"Now that Github supports unlimited private repos in all plans, you might as well\nkeep things all together (although definitely have backups elsewhere incase\nGithub is DDOS’d again, dies or goes evil). Simply change “rubenwardy” to your\nusername and “XXXX” to a\npersonal access token with “repo” checked.\nAlso make sure you have an SSH key for Github.
\n# Clone from bitbucket\ngit clone git@github.com:samkuehn/bitbucket-backup.git\nmkdir bk\ncd bitbucket-backup\n./bitbucket-backup -u rubenwardy --mirror -l ../bk\n\n# Upload to GitHub\ncd ../bk\nfor D in *;\ndo\n echo $D\n cd $D\n data=\"{\\\"name\\\": \\\"$D\\\", \\\"auto_init\\\": false, \\\"private\\\": true }\"\n curl -i -H 'Authorization: token XXXX' -d \"$data\" https://api.github.com/user/repos\n git push --mirror git@github.com:rubenwardy/$D.git\n cd ../\ndone\ncd ../
\n","url":"https://blog.rubenwardy.com/2016/06/02/move-private-repos-from-bitbucket-to-github/","summary":"Now that Github supports unlimited private repos in all plans, you might as well keep things all together","date_published":"2016-06-02T00:00:00.000Z","tags":["snippet","tutorials"]},{"id":"/2016/05/24/xubuntu-xfce-sync-whisker-menu-favourites/","language":"en","title":"Xubuntu / Xfce sync whisker menu favourites","content_html":"I use a dual monitor setup. For each monitor I have a panel, and each panel has\na whisker menu. I found that the favourites section of each of these panels\nis not synchronised. To fix this, I wrote a simple script.
\nFirst, you have to decide which whisker config to keep.
\n\nThe favourites menu is stored at the top of each file, like this:
\n\nFind the one you want to keep. From this point on,\nreplace whiskermenu-1.rc
with the config you want to keep and\nwhiskermenu-9.rc
with the one you want to ditch.
The script is as simple as this:
\n#! /bin/bash\n\nrm -f ~/.config/xfce4/panel/whiskermenu-9.rc\ncp ~/.config/xfce4/panel/whiskermenu-1.rc ~/.config/xfce4/panel/whiskermenu-9.rc\nxfce4-panel -r
\nIf you save that to /usr/local/bin/syncwhisker, then you can use “syncwhisker”\nin the terminal to keep your menus the same.
\n","url":"https://blog.rubenwardy.com/2016/05/24/xubuntu-xfce-sync-whisker-menu-favourites/","summary":"When using multiple Xfce panels, it's annoying that the favourites aren't synchronised. Here's a script to fix that","date_published":"2016-05-24T00:00:00.000Z","tags":["snippet","linux","tutorials"]},{"id":"/2016/05/21/dual-boot-ubuntu-on-asus-x555la-uefi-laptop/","language":"en","title":"Dual boot Ubuntu on ASUS X555LA UEFI laptop alongside Windows 10","content_html":"I recently successfully dual booted Ubuntu on an ASUS X555LA laptop,\nthe X555LAB variety. This guide will work on most Ubuntu derivatives,\nI installed Xubuntu using these steps. The only things that are different\nbetween them are window managers and preinstalled software.
\nSpecs:
\n\n\nIntel® Core™ i5-5200U CPU @ 2.20GHz, max: 2.70GHz
\n
\nIntel® HD Graphics 5500
\n8GB RAM, 1TB HDD
\n1920x1080 screen, 2xUSB3.0, 1xUSB2.0
Here’s a step by step guide to what I needed to do.
\nThe only option should be: “Windows Boot Manager”.
\nHere are some problems.
\nsudo service network-manager restart
These problems are made up for by the fact that 1. you now have a real terminal\nand 2. double tapping doesn’t break like on Windows (damn ASUS Smart Touch).
\nThanks for reading this. For more info, see:\nubuntu.com/community/UEFI/.
\n","url":"https://blog.rubenwardy.com/2016/05/21/dual-boot-ubuntu-on-asus-x555la-uefi-laptop/","summary":"I recently successfully dual booted Ubuntu on an ASUS X555LA laptop, the X555LAB variety.","date_published":"2016-05-21T00:00:00.000Z","tags":["linux","tutorials"]},{"id":"/2016/05/07/2100-link-external-phpbb-forum/","language":"en","title":"Linking a user account to an external phpBB forum","content_html":"This article will show you how to verify a user’s identity by letting them\nassociate their account with an external third party phpBB account.\nI used Python and Flask to achieve this, however any language and framework\nshould work, and shouldn’t be too hard to port to.
\nI wrote a Python library to parse phpBB profiles\nwhich I will be using throughout this tutorial.
\nFirst, you’ll need to install some packages:
\npip install beautifulsoup4 phpbb-parser
\nI’m using Flask, Flask-User and Flask-SQLAlchemy.
\npip install flask flask-user Flask-SQLAlchemy
\nYou need to add a forum
field to the user model:
class User(db.Model, UserMixin):\n # ...\n\n # Forum\n forum = db.Column(db.String(100), nullable=True)
\nThis is a page that gives the user the option to link to a forum.
\n\n<div class=\"box box_grey\">\n <h2>Link forum account</h2>\n\n <form method=\"post\" action=\"{{ url_for('link_forum_page') }}\">\n <p>\n Enter your forum username here:\n </p>\n\n <input type=\"text\" name=\"forum_username\"\n value=\"{{ forum_username }}\"\n required placeholder=\"Forum username\">\n\n <p>\n Go to\n <a href=\"{{ forum_url }}/ucp.php?i=profile&mode=signature\">\n User Control Panel > Profile > Edit signature\n </a>\n </p>\n <p>\n Paste this into your signature:\n </p>\n\n <input type=\"text\" value=\"{{ key }}\" readonly size=32>\n\n <p>\n Click next so we can check it.\n </p>\n <p>\n Don't worry, you can remove it after this is done.\n </p>\n\n <input type=\"submit\" value=\"Next\">\n </form>\n</div>\n
\nI used a memcache in order to temporarily associate an IP with a random token
\nfrom werkzeug.contrib.cache import SimpleCache\ncache = SimpleCache()
\n# Project components\nfrom app import app\nfrom models import *\n\n# Dependencies\nfrom flask import *\nfrom flask_user import *\nimport phpbb_parser as parser\n\n# GET link_forum_page\ndef link_forum_get():\n # Used to automatically fill out the forms in the template with a forum username\n forum_username = request.args.get(\"user\") or \"\"\n\n # Create random token\n import uuid\n key = uuid.uuid4().hex\n cache.set(\"forum_claim_key_\" + request.remote_addr, key, 5*60)\n\n return render_template('link_forum.html', key=key,\n forum_username=forum_username, forum_url=\"https://example.com\")\n\n# POST link_forum_page\ndef link_forum_post():\n forum_username = request.form[\"forum_username\"]\n\n # Get profile\n profile = parser.get_profile(\"https://example.com\", forum_username)\n if not profile:\n flash(\"Unable to find your forum user.\", \"error\")\n return redirect(url_for(\"link_forum_page\", user=forum_username))\n\n # Get stored key\n stored_key = cache.get(\"forum_claim_key_\" + request.remote_addr)\n if not stored_key:\n flash(\"Failed to get key\", \"error\")\n return redirect(url_for(\"link_forum_page\", user=forum_username))\n\n # Get token in signature\n if not stored_key in profile.signature.text:\n flash(\"Could not find the key in your signature!\", \"error\")\n return redirect(url_for(\"link_forum_page\", user=forum_username))\n\n # Get user from database\n user = User.query.filter_by(username=current_user.username).first()\n if not user:\n flash(\"Could not find a user of that name!\")\n return redirect(url_for(\"link_forum_page\", user=forum_username))\n\n user.forum = forum_username\n db.session.commit()\n\n # SUCCESS! Redirect to index page\n return redirect(url_for(\"index_page\"))\n\n# Register link_forum_page\n@app.route(\"/user/claim/\", methods=[\"POST\", \"GET\"])\n@login_required\ndef link_forum_page():\n if request.method == \"GET\":\n return link_forum_get()\n else:\n return link_forum_post()
\nFunction split up to make it easier to read.
\nThe above is only good for linking, and not for actually signing in.\nTo allow signing in, you could read public profile data off of the forum\nprofile and then use OAuth to sign in with those.
\nFor example, if the user has their Github listed on their forum profile,\nyou can sign them in using that Github account.
\nI recently wrote and released a python module to allow fetching of profile data.
\nYou can install it using pip:
\npip install beautifulsoup4 phpbb-parser
\nHere’s how you import and use a profile:
\nimport phpbb_parser as parser\n\nusername = \"rubenwardy\"\nprofile = parser.get_profile(\"https://forum.minetest.net\", username)\n\nif profile:\n signature = profile.signature.text\n location = profile.get(\"location\") or \"unknown\"\n github = profile.get(\"github\") or \"none\"\n\n print(username + \" from \" + location + \" has github \" + github)\n print(\"Signature: \" + signature)\nelse:\n print(\"Could not get profile!\")
\nprofile.signature is a beautifulsoup4 object.
\nNext: Linking a user account to an external phpBB forum.
\n","url":"https://blog.rubenwardy.com/2016/05/07/1600-python-phpbb-parser/","summary":"I recently wrote and released a python module to allow fetching of profile data.","date_published":"2016-05-07T00:00:00.000Z","tags":["snippet","projects","tutorials","fullstack"]},{"id":"/2016/02/24/university/","language":"en","title":"1st Semester of University","content_html":"To get more than 50% on our coursework, you had to submit extensions.\nHere are some of my favourite ones.
\nI created a Pong clone using C and SDL.
\nI created a Chess game and Chess minimax implementation in C.
\nFeatures:
\nI was contacted by a client to create a system which calculates the workload\nfor employees based on their assignment to tasks and appointments.
\nThe system needs to solve two problems:\nFirstly, different staff members work different numbers of hours, which\nmakes it hard to allocate tasks fairly and proportionally.\nSecondly, the client wanted to use the system to analyse past workloads\nand to anticipate future workload, in order to improve her system of work.
\nThe solution contains an algorithm to suggest new employees to tasks, based on\nthe type of tasks they are currently assigned to, how many hours they should\nwork a week, and how what they are doing at the time.
\nThe solution was written as a web application, using Python, Flask and SQLAlchemy.
\n\nOriginally I created the graphs using a HTML5 canvas. However, during the\nfirst client feedback sesson, I found out they used Internet Explorer with\nJavaScript disabled. This meant that I needed to move all of the graph\nrendering to the server. I used the Python Imaging Library (PIL) to do this.
\n","url":"https://blog.rubenwardy.com/2015/12/10/etms/","summary":"I was contacted by a client to create a system which calculates the workload for employees based on their assignment to tasks and appointments.","date_published":"2015-12-10T00:00:00.000Z","image":"https://blog.rubenwardy.com/static/img/pDfv1W1JnM-540.png","tags":["projects","fullstack"]},{"id":"/2015/04/07/convert-folder-of-images-to-pdf-with-filenames/","language":"en","title":"Convert folder of images to PDF with filenames","content_html":"Here is a shell script specific for GNU/Linux based operating systems to use.\nOn different operating systems the convert commands will be the same, but\nthe for loop will be different due to a different batch file syntax.
\nrm /tmp/imageex -r\nmkdir /tmp/imageex\nfor filename in *.png; do\n echo \"Processing $filename\"\n convert $filename -background White \\\n label:$filename -gravity Center \\\n -append -pointsize 14 \\\n /tmp/imageex/$filename.png\ndone\necho \"Exporting to PDF...\"\nconvert /tmp/imageex/*.png output.pdf\necho \"Done.\"
\n","url":"https://blog.rubenwardy.com/2015/04/07/convert-folder-of-images-to-pdf-with-filenames/","summary":"Here is a shell script specific for GNU/Linux based operating systems to use. On different operating systems the convert commands will be the same.","date_published":"2015-04-07T00:00:00.000Z","tags":["snippet","linux","tutorials"]},{"id":"/2015/01/08/3d-projection/","language":"en","title":"3D Projection","content_html":"Hello 2015! Recently I have created an implementation of the\n3D projection algorithm.\nIt is just wireframe models. It works pretty well, except it doesn’t do frustum culling.\nYou still see things that are behind you, but upside down.
\nThe source code of this implementation is available under the WTFPL or CC0 licenses -\nyou can choose which one you want to use.\nUse WASD to move, arrow keys to rotate, space to ascend and shift to descend.
\n","url":"https://blog.rubenwardy.com/2015/01/08/3d-projection/","summary":"Recently I have created an implementation of the 3d projection algorithm. It is just wireframe models. It works well, except it doesn't do frustum culling.","date_published":"2015-01-08T00:00:00.000Z","tags":["computer_science","projects","fullstack","gamedev"]},{"id":"/2014/12/16/lotka-volterra/","language":"en","title":"Lotka Volterra - Predator Prey","content_html":"Hi all! I’m back again for another post.\nThis time I am going to show off a project I have had for quite a while -\nit is a cellular automaton which simulates the Lotka Volterra equations.
\n\n\nOrange for foxes, brown for rabbits.
\nThe Lotka Volterra equations, also known as the predator prey equations,\nare used to model the populations of two species over time - a predator\nand a prey.
\n\n","url":"https://blog.rubenwardy.com/2014/12/16/lotka-volterra/","summary":"This time I am going to show off a project I have had for quite a while - it is a cellular automaton which simulates the Lotka Volterra equations.","date_published":"2014-12-16T00:00:00.000Z","tags":["projects","fullstack","gamedev"]},{"id":"/2014/11/07/lichess-widgets/","language":"en","title":"Lichess Widgets","content_html":"Just a short post this time - I have created some widgets for the chess website\nLichess. I was quite surprised that these did not exist yet, in any form.\nLuckily Lichess exposes an API to use. You can have a look at them by clicking\nthe link below.
\n","url":"https://blog.rubenwardy.com/2014/11/07/lichess-widgets/","summary":"Just a short post this time - I have created some widgets for the chess website Lichess.","date_published":"2014-11-07T00:00:00.000Z","tags":["projects","fullstack"]},{"id":"/2014/10/26/lichess/","language":"en","title":"Lichess","content_html":"Lichess is an online chess game and community. It is free and open source,\nad-free and subscription-free. You can play against your friends, random\nstrangers and the computer. You can create teams to compete in tournaments.\nThere are tournaments going on all the time.
\nThere are different game modes available, but I don’t really touch these.\nChess960, King of the Hill, Three-check, From position.\nYou can request that your game is analysed by a chess engine, it gives you\nsuggestions on what you should have done.\nI like how it shows a graph of how much of an advantage one player has over\none another.
\nLichess is also good for becoming better at chess.\nChess puzzles are available in the training section,\nwhere you have to choose the best move in a given situation,\nand follow it through to checkmate.\nPlayers can create their own puzzles.
\nIt is truly better than the alternatives.\nChess.com is cluttered with ads and has an uglier design - minimalism is the way forward.\nLichess is free and open source, and always will be.
\n","url":"https://blog.rubenwardy.com/2014/10/26/lichess/","summary":"Lichess is an online chess game and community. It is free and open source, ad-free and subscription-free.","date_published":"2014-10-26T00:00:00.000Z","image":"https://blog.rubenwardy.com/static/img/ktL-TIXRBI-540.png","tags":["reviews"]},{"id":"/2014/10/22/bytecode-and-compilers/","language":"en","title":"Bytecode, Compilers and Interpreters","content_html":"Recently I have been looking at languages and compilation: VMs, parse trees,\nlexers, and interpreters. Nand to tetris is a\npretty awesome guide to how the CPU executes programs - from logic gates to high\nlevel languages.
\nI created an assembler and Virtual Machine to run assembly style instructions.\nI haven’t released it as it isn’t as complete as I want it yet.
\nTurns a string such as “( 0 - (6) + ( 6 ^ 2 - 4 * 1 * 5 ) ^ (1 / 2) ) / ( 2 * 1)”\ninto a binary syntax tree, and then into Reverse Polish Notation, and then executes it.
\nI have also experimented with a simple Lexical Analysisor, however it’s not at a state\nthat I’d like to release.
\nBefunge is an esoteric programming language -\na language which isn’t meant for pratical use, but rather to test programming\nuse. The language consists of a two dimensional grid of characters. Each\ncharacter is an individual instruction. I was disappointed that the authors\ndidn’t take it seriously enough, and didn’t add input and output to it. So, I\ncreated my own! I called it ‘Rufunge’. It’s not finished yet, but can currently\nrun most Befunge programs.
\nMesecode is a language which compiles to Lua. It’s purpose it to make\ndefinitions in Minetest easier to read.
\nInstead of:
\nminetest.register_node(\"default:stone\", {\n description = \"Stone\",\n tiles = {\"default_stone.png\"},\n is_ground_content = true,\n groups = {cracky=3, stone=1},\n drop = 'default:cobble'\n})
\nYou can write:
\nmod default\n\nnode Stone\n is ground, cracky=3, stone\n drops Cobble\n
\n","url":"https://blog.rubenwardy.com/2014/10/22/bytecode-and-compilers/","summary":"Recently I have been looking at languages and compilation: VMs, parse trees, lexers, and interpreters","date_published":"2014-10-22T00:00:00.000Z","tags":["computer_science","projects","cpp","minetest"]}]}