Hardware Built For Executing Python (Not Pythons)

Lots of microcontrollers will accept Python these days, with CircuitPython and MicroPython becoming ever more popular. However, there’s now a new player in town. Enter PyXL, a project to run Python directly in hardware for maximum speed.

What’s the deal with PyXL? “It’s actual Python executed in silicon,” notes the project site. “A custom toolchain compiles a .py file into CPython ByteCode, translates it to a custom assembly, and produces a binary that runs on a pipelined processor built from scratch.” Currently, there isn’t a hard silicon version of PyXL, which is no surprise given what it costs to make a chip from scratch. For now, it exists as logic running on a Zynq-7000 FPGA on an Arty Z7-20 devboard. An ARM CPU handles setup and memory tasks, but the Python code is executed entirely in dedicated hardware.
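If you want to see what that starting point looks like, CPython itself will show you: the standard library’s dis module disassembles any compiled function into the bytecode a toolchain like PyXL’s begins from. The toy function below is just for illustration; PyXL’s own translator and assembler aren’t public Python modules.

# Peek at the CPython bytecode a toolchain like PyXL's would start from.
# The function is a made-up stand-in; dis is part of the standard library.
import dis

def toggle(pin_state: int) -> int:
    return pin_state ^ 1   # toy stand-in for flipping a GPIO line

dis.dis(toggle)
# On recent CPython versions this prints something along the lines of:
#   LOAD_FAST    pin_state
#   LOAD_CONST   1
#   BINARY_OP    ^       (BINARY_XOR on older interpreters)
#   RETURN_VALUE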

The headline feature of PyXL is speed. A comparison video demonstrates this with a measurement of GPIO round-trip latency. In the test, PyXL runs at 100 MHz and achieves a latency of 480 nanoseconds, while MicroPython running on a PyBoard at 168 MHz manages a much slower 15,000 nanoseconds. Based on this result, the project site claims PyXL is roughly 30x faster than MicroPython, or about 50x faster once the clock speed difference is taken into account.
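As a quick sanity check, the arithmetic behind those two figures works out (numbers taken straight from the demo):

# Back-of-the-envelope check of the speedup figures quoted above.
pyxl_ns, pyxl_mhz = 480, 100            # PyXL round-trip latency and clock
upython_ns, pyboard_mhz = 15_000, 168   # MicroPython on a PyBoard

raw = upython_ns / pyxl_ns                    # ~31x faster outright
per_clock = raw * (pyboard_mhz / pyxl_mhz)    # ~52x faster per clock cycle

print(f"raw speedup: {raw:.0f}x, clock-normalized: {per_clock:.0f}x")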

Python has never been the most real-time of languages, but efforts like this push it in that direction. The hope is that it may finally be possible to write performance-critical code in Python from the outset. We’ve taken a look at Python in the embedded world before, too, albeit in very different contexts.

Continue reading “Hardware Built For Executing Python (Not Pythons)”

Vibing, AI Style

This week, the hackerverse was full of “vibe coding”. If you’re not caught up on your AI buzzwords, this is the catchy name coined by [Andrej Karpathy] that refers to basically just YOLOing it with AI coding assistants. It’s the AI-fueled version of typing what you want into StackOverflow and picking the top answer. Only, with the current state of LLMs, it’ll probably work after a while of iterating back and forth with the machine.

It’s a tempting vision, and it probably works for a lot of simple applications, in popular languages, or generally where the ground is already well trodden. And where the stakes are low, as [Al Williams] pointed out while we were talking about vibing on the podcast. Can you imagine vibe-coded ATM software that probably gives you the right amount of money? Vibe-coding automotive ECU software?

While vibe coding seems very liberating and hands-off, it really just shifts the burden from writing the code yourself to making sure that the LLM is giving you what you want, and refining your prompts when it doesn’t. It’s more like editing and auditing code than authoring it. And while we have no doubt that a stellar programmer like [Karpathy] can verify that he’s getting what he wants, write the correct unit tests, and so on, we’re not sure it’s the panacea being proclaimed for folks who don’t already know how to code.

Vibe coding should probably be reserved for people who are already expert coders, and for trivial projects. Just as you wouldn’t let grade-school kids use calculators until they’ve mastered the basics of math by themselves, you shouldn’t let junior programmers vibe code: it simultaneously demands too much knowledge to corral the LLM, while side-stepping any of the learning that would come from doing it yourself.

And then there’s the security side of vibe coding, which opens up a whole attack surface. If the LLM isn’t up to industry standards on simple things like input sanitization, your vibed code probably shouldn’t be anywhere near the Internet.
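To make “input sanitization” concrete, here’s the textbook case of the kind of thing a human reviewer still has to catch. This is a generic Python/SQLite illustration, not taken from any particular vibe-coded project:

# Illustrative only: the classic injection bug versus the parameterized fix.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
user_input = "Robert'); DROP TABLE users; --"

# What an unchecked assistant might hand you: user input spliced into the SQL.
# conn.executescript("INSERT INTO users (name) VALUES ('%s')" % user_input)

# What you actually want: a placeholder, so the driver handles the quoting.
conn.execute("INSERT INTO users (name) VALUES (?)", (user_input,))
conn.commit()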

So should you be vibing? Sure! If you feel competent overseeing what [Dan] described as “the worst summer intern ever”, and the stakes are low, then it’s absolutely a fun way to kick the tires and see what the tools are capable of. Just go into it with reasonable expectations.

Programmer’s Macro Pad Bangs Out Whole Functions

Macro pads are handy for opening up your favorite programs or executing commonly used keyboard shortcuts. But why stop there?

That’s what [Jeroen Brinkman] must have been thinking while creating the Programmer’s Macro Pad. Based on the Arduino Pro Micro, this hand-wired pad is unique in that a single press of any of its 16 keys can virtually “type” out multiple lines of text. In this case, that capability is used to save the user from having to manually type out commonly used functions, declarations, and conditional statements.

For example, in the current firmware, pressing the “func” key will type out a boilerplate C function:

int () { //
;
return 0;
}; // f 

It also sends the appropriate cursor-movement keystrokes to put the cursor where it needs to be, so you can type in the function name right away. The other keys, such as “array” and “if”, work the same way, saving the user from having to enter (and potentially even remember) the correct syntax.
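For a rough idea of what’s going on under the hood, here is a minimal sketch of the same trick written against CircuitPython’s USB HID libraries rather than [Jeroen]’s actual Arduino firmware; the text and key counts are purely illustrative.

# Not the real firmware: a CircuitPython-flavored sketch of how one keypress
# can type a whole snippet and then walk the cursor back to the right spot.
import usb_hid
from adafruit_hid.keyboard import Keyboard
from adafruit_hid.keycode import Keycode
from adafruit_hid.keyboard_layout_us import KeyboardLayoutUS

kbd = Keyboard(usb_hid.devices)
layout = KeyboardLayoutUS(kbd)

def func_macro():
    # Type the boilerplate C function...
    layout.write("int () { //\n;\nreturn 0;\n}; // f\n")
    # ...then move the cursor back up to where the function name belongs.
    for _ in range(4):
        kbd.send(Keycode.UP_ARROW)
    kbd.send(Keycode.END)
    for _ in range(7):              # step back over "() { //"
        kbd.send(Keycode.LEFT_ARROW)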

The firmware is kept as simple as possible, meaning that the functionality of each key is currently hardcoded. Some kind of tool that would let you add or change macros without having to manually edit the source code and flash it back to the Arduino would be nice… but hey, it is a Programmer’s Macro Pad, after all.

Looking to speed up your own day-to-day computer usage? We’ve covered a lot of macro pads over the years, and we’re confident at least a few of them will catch your eye.

Moving Software Down To Hardware

In theory, any piece of software could be built out of discrete pieces of hardware, provided there are enough transistors, passive components, and time available. In general, though, we’re much more likely to reach for a programmable computer or microcontroller for all but the simplest tasks, for several reasons: cost, effort, complexity, and sanity. [Igor Brichkov] was working with I2C and decided he wanted to see just where the line between hardware and software should be drawn, by implementing the protocol directly in hardware.

One of the keys to “programming” a communications protocol in hardware is getting the timing right, and the first step is initiating communications between this device and another on the bus. [Igor] builds up the signal in parts and then ORs them together. The first part is a start condition, generated by one oscillator and a counter. This also creates a pause, at which point a second oscillator takes over and sends data out. The first thing I2C needs is an address, which is handled by a shift register and a counter pre-set to shift the correct bits out onto the communications lines.

To build up the rest of the signal, including the data from the rotary encoder [Igor] is using for his project, pairs of shift registers and counters pass data out over the I2C lines in sequence. You can think of the main loop of this “hardware program” as a counter that steps through each function in turn, sending out the contents of the shift registers one by one. We saw a similar project over a decade ago, but rather than automating the task of sending data over I2C, it let the user key in data manually instead.
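To see what those counters and shift registers actually have to produce on the bus, here is a purely software model of the bit sequence, not [Igor]’s circuit: a start condition, then the 7-bit address plus R/W bit shifted out MSB-first, one data bit per clock pulse, with the ACK slots simply left released.

# Software model of an I2C write, as a sequence of (SDA, SCL) line states.
# This only illustrates the ordering the hardware has to generate.
def i2c_write_sequence(address, data):
    yield (1, 1)          # bus idle
    yield (0, 1)          # start condition: SDA falls while SCL is high
    yield (0, 0)          # pull the clock low before the first data bit

    for byte in [(address << 1) | 0, *data]:   # 7-bit address + write bit, then data
        for bit in range(7, -1, -1):           # shift out MSB-first
            sda = (byte >> bit) & 1
            yield (sda, 0)                     # set data while the clock is low
            yield (sda, 1)                     # pulse SCL to clock it out
            yield (sda, 0)
        yield (1, 0)                           # release SDA for the ACK slot
        yield (1, 1)
        yield (1, 0)

    yield (0, 0)
    yield (0, 1)
    yield (1, 1)          # stop condition: SDA rises while SCL is high

# e.g. list(i2c_write_sequence(0x3C, [0x00])) gives every line state in order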

Continue reading “Moving Software Down To Hardware”

Simulating Embedded Development To Reduce Iteration Time

There’s something that kills coding speed—iteration time. If you can smash a function key and run your code, then watch it break, tweak, and smash it again—you’re working fast. But if you have to first compile your code, then plug your hardware in, burn it to the board, and so on… you’re wasting a lot of time. It’s that problem that inspired [Larry] to create an embedded system simulator to speed development time for simple projects.

The simulator is intended for emulating Arduino builds on iPhone and Mac hardware. For example, [Larry] shows off a demo on an old iPhone, which is simulating an ESP32 playing a GIF on a small LCD display. The build isn’t intended for timing-critical stuff, nor anything involving advanced low-level peripherals or sleep routines and the like. For that, you’re better off with real hardware. But if you’re working on something like a user interface for a small embedded display, or just making minor tweaks to some code… you can understand why the simulator might be a much faster way to work.

For now, [Larry] has kept the project closed source, as he’s found that it wouldn’t reasonably be possible for him to customize it for everyone’s unique hardware and use cases. Still, it’s a great example of how creating your own tools can ease your life as a developer. We’ve seen [Larry]’s great work around here before, like this speedy JPEG decoder library.
Continue reading “Simulating Embedded Development To Reduce Iteration Time”

How To Use LLMs For Programming Tasks

[Simon Willison] has put together a list of how, exactly, one goes about using a large language model (LLM) to help write code. If you have wondered just what the workflow and techniques look like, give it a read. It’s full of examples, strategies, and useful tips for effectively using AI assistants like ChatGPT, Claude, and others to do useful programming work.

It’s a very practical document, with [Simon] emphasizing realistic expectations and the importance of managing context: both giving the LLM clear direction, and staying mindful of how much the model can fit in its ‘head’ at once. It is useful to picture an LLM as a capable and obedient but over-confident programming intern or assistant, albeit one that never gets bored or annoyed. Useful work can be done, but testing is crucial and human oversight simply cannot be automated away.

Even if one has no interest in using LLMs to help in writing production code, there’s still a lot of useful work they can do to speed up the process of software development in general, especially when learning. They can help research options, interactively explore unfamiliar codebases, or prototype ideas quickly. [Simon] provides useful strategies for all these, and more.

If you have wondered how exactly glorified chatbots can meaningfully help with software development, [Simon]’s writeup hopefully gives you some new ideas. And if this is all leaving you curious about how exactly LLMs work, in the time it takes to enjoy a warm coffee you can learn how they do what they do, no math required.

Why AI Usage May Degrade Human Cognition And Blunt Critical Thinking Skills

Any statement regarding the potential benefits and/or hazards of AI tends to be automatically divisive and controversial, as the world tries to figure out what the technology means to it, and how to make the most money off it in the process. Whether you take AI to mean Artificial Inference or Artificial Intelligence, it has so far mostly been used as a way to ‘assist’ people. Whether in the form of a chat client to answer casual questions, or to generate articles, images, and code, its proponents claim that it’ll make workers more efficient and remove tedium.

A recent paper from researchers at Microsoft and Carnegie Mellon University (CMU), however, reports survey findings that the effect is mostly negative. The general conclusion is that by coming to rely on external tools for basic tasks, people become less capable of, and less prepared for, doing such things themselves should the need arise. Emanuel Maiberg offers a related example in his commentary on the study, noting how simple things like memorizing phone numbers and routes within a city are now deemed irrelevant; but what happens if you end up without a working smartphone?

Does so-called generative AI (GAI) turn workers into monkeys who mindlessly regurgitate whatever falls out of the Magic Machine, or is there true potential for removing tedium and increasing productivity?

Continue reading “Why AI Usage May Degrade Human Cognition And Blunt Critical Thinking Skills”