Faster Integer Division With Floating Point

Multiplication on a common microcontroller is easy. But division is much more difficult. Even with hardware assistance, a 32-bit division on a modern 64-bit x86 CPU can take between 9 and 15 cycles. SIMD (single instruction, multiple data) instruction sets like AVX or NEON often don't offer integer division at all (although the RISC-V vector extension does). However, many processors support floating point division. Does it make sense to use floating point division to replace integer division? According to [Wojciech Mula] in a recent post, the answer is yes.

The plan is simple: cast the 8-bit numbers to 32-bit integers and then to floating point numbers. These can be divided in bulk via SIMD instructions, and the quotients converted back to 8-bit results. You can find several code examples on GitHub.
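Here's a minimal sketch of the trick using AVX2 intrinsics (our illustration, not code from [Wojciech Mula]'s repository). Eight bytes are widened to 32-bit integers, converted to floats, divided in a single instruction, truncated, and narrowed back down. Since any 8-bit value fits exactly in a float's 24-bit mantissa, the truncated quotient matches what integer division would produce:

```c
#include <immintrin.h>
#include <stdint.h>

/* Divide eight unsigned 8-bit numerators by eight 8-bit divisors.
 * Assumes all divisors are nonzero. Compile with -mavx2. */
void div_u8_avx2(const uint8_t *a, const uint8_t *b, uint8_t *out)
{
    __m128i va8 = _mm_loadl_epi64((const __m128i *)a);   /* 8 bytes    */
    __m128i vb8 = _mm_loadl_epi64((const __m128i *)b);

    __m256i va32 = _mm256_cvtepu8_epi32(va8);            /* u8 -> u32  */
    __m256i vb32 = _mm256_cvtepu8_epi32(vb8);

    __m256 fa = _mm256_cvtepi32_ps(va32);                /* u32 -> f32 */
    __m256 fb = _mm256_cvtepi32_ps(vb32);

    /* One SIMD divide, then truncate toward zero. */
    __m256i q = _mm256_cvttps_epi32(_mm256_div_ps(fa, fb));

    /* Narrow 8 x u32 back to 8 x u8 with saturating packs. */
    __m128i lo = _mm256_castsi256_si128(q);
    __m128i hi = _mm256_extracti128_si256(q, 1);
    __m128i w  = _mm_packus_epi32(lo, hi);               /* -> 8 x u16 */
    __m128i n  = _mm_packus_epi16(w, w);                 /* -> 8 x u8  */
    _mm_storel_epi64((__m128i *)out, n);
}
```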

Continue reading “Faster Integer Division With Floating Point”

Better C Strings, Simply

If you program in C, strings are just in your imagination. What you really have is a character pointer, and we all agree that a string is every character from that point up until one of the characters is zero. While that's simple and useful, it is also the source of many errors, such as writing a 32-byte string to a 16-byte array or failing to terminate a string with a zero byte. [Thasso] has been experimenting with a different way to represent strings that is still fairly simple but helps keep things straight.

Like many other languages, this setup uses counted strings and string buffers. You can read and write to a string buffer, but strings are read-only. In either case, there is a length for the contents and, in the case of the buffer, a length for the entire buffer.
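As a rough illustration of the pattern (not [Thasso]'s exact code), the two types might look something like this in C, with the append function refusing to overflow rather than silently scribbling past the end of the buffer:

```c
#include <stddef.h>
#include <string.h>
#include <stdio.h>

/* Read-only counted string: no mutation, no hidden NUL dependency. */
typedef struct {
    const char *data;
    size_t      len;
} str;

/* Writable string buffer: tracks content length and total capacity. */
typedef struct {
    char  *data;
    size_t len;   /* bytes currently used  */
    size_t cap;   /* bytes available total */
} strbuf;

#define STR(lit) ((str){ (lit), sizeof(lit) - 1 })

/* Append a string to a buffer; report failure instead of overflowing. */
int strbuf_append(strbuf *b, str s)
{
    if (b->len + s.len > b->cap)
        return -1;                 /* would overflow: caller decides */
    memcpy(b->data + b->len, s.data, s.len);
    b->len += s.len;
    return 0;
}

int main(void)
{
    char backing[16];
    strbuf b = { backing, 0, sizeof backing };

    if (strbuf_append(&b, STR("Hello, ")) == 0 &&
        strbuf_append(&b, STR("Hackaday!")) == 0)
        printf("%.*s\n", (int)b.len, b.data);  /* no NUL needed */
    return 0;
}
```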

Continue reading “Better C Strings, Simply”


WASM-4: Retro Game Dev Right In Your Browser

Have you ever dreamt of developing games that run on practically anything, from a modern browser to a microcontroller? Enter WASM-4, a minimalist fantasy console where constraints spark creativity. Unlike intimidating behemoths like Unity, WASM-4's stripped-back specs challenge you to craft games within its 160×160 pixel display, four-color palette, and 64 KB of memory. Yes, you'll curse at times, but as every tinkerer knows, limitations are the ultimate muse.

Born from the WebAssembly ecosystem, this console accepts “cartridges” in .wasm format. Any language that compiles to WebAssembly—be it Rust, Go, or AssemblyScript—can build games for it. The console’s emphasis on portability, with plans for microcontroller support, positions it as a playground for minimalist game developers. Multiplayer support? Check. Retro vibes? Double-check.
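To give a feel for how small the API surface is, here's roughly what a C cartridge looks like, modeled on the project's own template (the wasm4.h header ships with the w4 command-line tool): you export an update() function that the console calls at 60 FPS, and everything else is memory-mapped registers plus a handful of drawing calls.

```c
#include "wasm4.h"

/* An 8x8 1-bit-per-pixel sprite, straight from the template. */
const uint8_t smiley[] = {
    0b11000011,
    0b10000001,
    0b00100100,
    0b00100100,
    0b00000000,
    0b00100100,
    0b10011001,
    0b11000011,
};

void update(void) {
    *DRAW_COLORS = 2;                  /* pick from the 4-color palette */
    text("Hello from C!", 10, 10);

    uint8_t gamepad = *GAMEPAD1;       /* input is memory-mapped too */
    if (gamepad & BUTTON_1) {
        *DRAW_COLORS = 4;
    }

    blit(smiley, 76, 76, 8, 8, BLIT_1BPP);
    text("Press X to blink", 16, 90);
}
```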

Entries from the 2022 WASM-4 Game Jam showcase this quirky console's charm. From pixel-perfect platformers to byte-sized RPGs, the creativity is staggering. One standout, "WasmAsteroids," demonstrated real-time online multiplayer within these confines—proof that you don't need sprawling engines to achieve cutting-edge design. This isn't just about coding—it's about coding smart. WASM-4 forces you to think like a retro engineer while indulging in modern convenience.

WASM-4 is a playground for anyone craving pure, unadulterated experimentation. Whether you’re a seasoned programmer or curious hobbyist, this console has the tools to spark something great.

BASIC Co-Inventor Thomas Kurtz Has Passed Away

It’s with sadness that we note the passing of Thomas E. Kurtz, on November 12th. He was co-inventor of the BASIC programming language back in the 1960s, and though his creation may not receive the attention in 2024 that it would have done in 1984, the legacy of his work lives on in the generation of technologists who gained their first taste of computer programming through it.

A BBC Micro BASIC program that writes "HELLO HACKADAY!" to the screen multiple times.
For the 1980s kids who got beyond this coding masterpiece, BASIC launched many a technology career.

The origins of BASIC lie in the Dartmouth Timesharing System, which, like similar timesharing operating systems of the day, was designed to allow the resources of a single computer to be shared across many terminals. In this case the computer was at Dartmouth College, and BASIC was designed as a language with which software could be written by average students who perhaps didn't have a computing background. In the decade that followed it proved ideal for the new microcomputers, and few were the home computers of the era which didn't boot into some form of BASIC interpreter. Kurtz continued his work as a distinguished academic and educator until his retirement in 1993, but throughout he remained the guiding hand of the language.

Should you ask a computer scientist their views on BASIC, you'll undoubtedly hear about its shortcomings, and no doubt mention will be made of the GOTO statement and how it makes larger projects very difficult to write. This is all true, but at the same time it misses the point of it being a readily understandable language for first-time users of machines with very little in the way of resources. It was the perfect programming start for a 1970s or 1980s beginner, and once its limitations had been reached it provided the impetus for a move to higher things. We've not written a serious BASIC program in over three decades, but we're indebted to Thomas Kurtz and his collaborator [John Kemeny] for what they gave us.

Thanks [Stephen Walters] for the tip.

Making Sense Of Real-Time Operating Systems In 2024

The best part about real-time OS (RTOS) availability in 2024 is that we developers are positively spoiled for choice, but as a corollary this also makes it a complete pain to determine what the optimal choice for a project is. Beyond simply opting for a safe choice like FreeRTOS for an MCU project and figuring out any implications later during the development process, it can pay off massively to invest some time up-front matching the project requirements with the features offered by these various RTOSes. A few years ago I wrote a primer on the various levels of ‘real-time’ and whether you may even just want to forego an RTOS at all and use a simple Big Loop™ & interrupt-based design.
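As a refresher, that Big Loop design is nothing more exotic than interrupt handlers setting flags that a forever-loop polls and dispatches on. A bare-bones illustration, with the hardware-specific bits stubbed out as comments:

```c
#include <stdbool.h>
#include <stdint.h>

/* Flags set by interrupt handlers, consumed by the main loop. */
static volatile bool     uart_rx_ready;
static volatile uint32_t tick_ms;

void uart_isr(void)  { uart_rx_ready = true; }  /* hooked to UART IRQ  */
void timer_isr(void) { tick_ms++; }             /* hooked to 1 ms tick */

int main(void)
{
    uint32_t next_blink = 0;

    for (;;) {                        /* the Big Loop */
        if (uart_rx_ready) {
            uart_rx_ready = false;
            /* handle_uart(); */
        }
        if (tick_ms >= next_blink) {  /* cooperative 500 ms task */
            next_blink = tick_ms + 500;
            /* toggle_led(); */
        }
        /* sleep until the next interrupt, e.g. __WFI() on ARM */
    }
}
```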

With such design parameters in mind, we can then look more clearly at the RTOS options available today, which is the focus of this article. Obviously it won't be an exhaustive comparison, and projects like FreeRTOS in particular have been customized to various degrees by manufacturers like ST Microelectronics and Espressif, among others. This also brings to the forefront less pleasant considerations, such as expected support levels, as illustrated by e.g. Microsoft's Azure RTOS (formerly ThreadX) recently getting moved to the Eclipse Foundation as the Eclipse ThreadX open source project. On one hand this could make it a solid open-source licensed RTOS; on the other, it may have been open sourced because Microsoft has moved on to something else and cleared out its cupboard.

Thus without further ado, let’s have a look at RTOSes in 2024 and which ones are worth considering, in my opinion.

Continue reading “Making Sense Of Real-Time Operating Systems In 2024”

The End Of Ondsel And Reflecting On The Commercial Prospects For FreeCAD

Within the world of CAD there are the well-known big commercial players along with more niche offerings, and there are projects like FreeCAD that seek to bring an OSS solution to the CAD world. As with other OSS projects like the GIMP, these OSS takes on commercial software do not always follow established user interaction (UX) conventions, which is where Ondsel sought to bridge the gap by giving commercial CAD users a more accessible FreeCAD experience. However, this effort is now at an end, with a blog post by Ondsel core team member [Brad Collette] providing the details.

The idea of commercializing OSS is by no means novel, as this is what Red Hat and many others have done for decades now. In our article on FOSS development bounties we touched upon the different funding models for FOSS projects, with the Linux kernel enjoying strong commercial support. The trick, of course, is to attract such commercial support and the associated funding, which is where the work on the UI/UX and feature set of the core FreeCAD code base was key. Unfortunately the business case was not strong enough to attract such commercial partners, and Ondsel has been shut down.

As also discussed on the FreeCAD forum, the Ondsel codebase will likely be at least partially merged into the FreeCAD code, ending for now the prospect of FreeCAD playing in the big leagues with the likes of AutoCAD.

Thanks to [Brian Harrington] for the tip.

All You Need For Artificial Intelligence Is A Commodore 64

Artificial intelligence has been with us for a long time, with [Timothy J. O'Malley]'s 1985 book on AI projects for the Commodore 64 being one example of this. With AI defined as the theory and development of systems that can perform tasks normally requiring human intelligence (e.g. visual perception, speech recognition, decision-making), this book is a good introduction to the many ways that computer systems have, for decades now, been able to learn, make decisions, and in general become more human-like, even if there's no electronic personality behind the actions.

In the book's first chapter, [Timothy] isn't afraid to toss in some opinions about the true nature of intelligence and thinking. Starting with the concept that intelligence is based around storing information and being able to derive meaning from connections between stored pieces of information, the idea of a basic AI as one would use in a game for the computer opponent arises. A number of ways of implementing such an AI are explored in the first and subsequent chapters, using Towers of Hanoi, chess, Nim, and other games.
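Nim is the classic example of how far a simple evaluation rule can take a game AI: the optimal move falls straight out of the XOR (the "nim-sum") of the heap sizes. The book's examples are in BASIC; here's the same well-known rule as a small C sketch of our own:

```c
#include <stdio.h>

/* Optimal Nim move: XOR all heap sizes (the "nim-sum"). If it is
 * nonzero, shrink some heap so the new nim-sum becomes zero. */
int nim_move(int heaps[], int n)
{
    int nimsum = 0;
    for (int i = 0; i < n; i++)
        nimsum ^= heaps[i];

    if (nimsum == 0)
        return -1;                  /* losing position: no winning move */

    for (int i = 0; i < n; i++) {
        int target = heaps[i] ^ nimsum;  /* size that zeroes the sum */
        if (target < heaps[i]) {
            heaps[i] = target;
            return i;                    /* heap we just played on */
        }
    }
    return -1;                      /* unreachable for nonzero nim-sum */
}

int main(void)
{
    int heaps[] = { 3, 4, 5 };  /* nim-sum = 3^4^5 = 2: a winning spot */
    int i = nim_move(heaps, 3);
    printf("take from heap %d, leaving %d %d %d\n",
           i, heaps[0], heaps[1], heaps[2]);
    return 0;
}
```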

After this we look at natural language processing – referencing ELIZA as an example – followed by heuristics, pattern recognition and AI for robotics. Although much of this may seem outdated in this modern age of LLMs and neural networks, it's important to realize that much of what we consider 'bleeding edge' today has its roots in AI research performed in the 1950s and 1960s. As [Timothy] rightfully states in the final chapter, there is no real limit to how far you can push this type of AI as long as you have more hardware and storage to throw at the problem. This is how we ended up with datacenters full of GPU-equipped systems churning through vector-space calculations for the sake of today's LLM and diffusion model take on 'AI'.
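For flavor, ELIZA-style "understanding" amounts to little more than keyword spotting plus canned responses. This toy sketch (ours, not the book's) captures the whole pattern:

```c
#include <stdio.h>
#include <string.h>
#include <ctype.h>

/* Keyword -> canned response, the heart of an ELIZA-style chatbot. */
static const struct { const char *key, *reply; } rules[] = {
    { "mother",  "Tell me more about your family."      },
    { "always",  "Can you think of a specific example?" },
    { "because", "Is that the real reason?"             },
    { NULL,      "Please, go on."                       },  /* fallback */
};

const char *respond(const char *input)
{
    char lower[256];
    size_t n = strlen(input);
    if (n >= sizeof lower) n = sizeof lower - 1;
    for (size_t i = 0; i < n; i++)      /* case-fold for matching */
        lower[i] = (char)tolower((unsigned char)input[i]);
    lower[n] = '\0';

    int i = 0;
    for (; rules[i].key; i++)
        if (strstr(lower, rules[i].key))
            break;
    return rules[i].reply;   /* falls through to the catch-all */
}

int main(void)
{
    printf("%s\n", respond("I argue with my mother a lot"));
    return 0;
}
```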

Using a Commodore 64 to demonstrate the (lack of) validity of claims is not a new idea; recently a group of researchers used one of these breadbin marvels to run an Ising model with a tensor network, outperforming IBM's quantum processor. As they say, just because it's new and shiny doesn't necessarily mean that it's actually better.