A black and white device sits on a beige table. A white rotary knob projects out near the base of its rectangular shape, nearest the camera. Beside it is a black rectangular section of the enclosure with six white dots protruding through holes to form a braille display. A ribbon cable snakes out of the top of the enclosure and over the far edge of the device, presumably connecting to a camera on the other side.

This Polaroid-esque OCR Machine Turns Text To Braille In The Wild

One of the practical upsides of improved computer vision systems and machine learning has been the ability of computers to translate text from one language or format to another. [Jchen] used this to develop Braille Vision, which can turn inaccessible text into braille on the go.

Built around a headless Raspberry Pi 4 or 5 running Tesseract OCR, the device uses a microswitch shutter to take a picture of a poster or other object. It processes any text it finds and gives the user an audible cue when it’s finished. A rotary knob on the back of the device then steps the braille display pad through the text one character at a time; when the end of the message is reached, it cycles back to the beginning.
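Conceptually, the capture pipeline is simple: wait for the shutter switch, grab a still, and hand it to Tesseract. Here’s a minimal sketch of that loop in Python, assuming a Pi camera module, the pytesseract wrapper, and a microswitch on GPIO 17 (the pin choice and file name are our own placeholders, not details from the build):

```python
# Minimal capture-and-OCR loop: shutter press -> photo -> Tesseract text.
# The GPIO pin and file name are illustrative, not the project's wiring.
from gpiozero import Button
from picamera2 import Picamera2
from PIL import Image
import pytesseract

shutter = Button(17)            # microswitch acting as the shutter
camera = Picamera2()
camera.start()

while True:
    shutter.wait_for_press()            # block until the user takes a picture
    camera.capture_file("frame.jpg")    # grab a still of the poster or sign
    text = pytesseract.image_to_string(Image.open("frame.jpg"))
    if text.strip():
        print(text)                     # hand off to the braille driver
        # an audible "done" cue would be triggered here
```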

Development involved breadboarding an Arduino hooked up to some MOSFETs to drive the solenoids for the braille display until the system worked well enough to solder together with wires and perfboard. Everything is housed in a 3D printed shell that appears similar in size to an old Polaroid instant camera.
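Driving the cell itself boils down to mapping each character onto a six-dot pattern and energizing the matching solenoids through those MOSFETs. Here’s a sketch of that lookup, assuming the Pi’s GPIO pins drive the MOSFET gates directly; the pin numbers and the handful of letters shown are illustrative only:

```python
# Map characters to six-dot braille patterns and raise the matching pins.
# Dots are numbered 1-6 in the standard braille layout; the GPIO pin
# assignments are placeholders, not the project's actual wiring.
from gpiozero import DigitalOutputDevice

DOT_PINS = [5, 6, 13, 19, 26, 21]            # one MOSFET gate per dot
dots = [DigitalOutputDevice(pin) for pin in DOT_PINS]

# Dot numbers for a few Grade 1 braille letters: a=1, b=1+2, c=1+4, ...
BRAILLE = {"a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5}}

def show_char(ch: str) -> None:
    pattern = BRAILLE.get(ch.lower(), set())
    for i, dot in enumerate(dots, start=1):
        dot.on() if i in pattern else dot.off()
```

The rotary knob then just calls show_char() with the next or previous character of the OCR result.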

We’ve seen a vibrating braille output prototype for smartphones and how blind makers are using 3D printing, and we’re still wondering whatever happened to “tixel” displays. If you’re new to braille, try 3D printing your own trainer out of TPU.

Continue reading “This Polaroid-esque OCR Machine Turns Text To Braille In The Wild”

Homebrew Traffic Monitor Keeps Eyes On The Streets

How many cars go down your street each day? How fast were they going? What about folks out on a walk or people riding bikes? It’s not an easy question to answer, as most of us have better things to do than watch the street all day and keep a tally. But at the same time, this is critically important data from an urban planning perspective.

Of course, you could just leave it to City Hall to figure out this sort of thing. But what if you want to get a speed bump or a traffic light added to your neighborhood? Being able to collect your own localized traffic data could certainly come in handy, which is where TrafficMonitor.ai from [glossyio] comes in.
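We haven’t dug into TrafficMonitor.ai’s internals, but the core idea of a roll-your-own traffic counter can be sketched in a few lines of OpenCV: subtract the background, find moving blobs, and count the ones that cross a virtual trip line. This toy version is our own illustration, not the project’s code:

```python
# Toy traffic counter: background subtraction plus a virtual trip line.
# Purely illustrative; not TrafficMonitor.ai's actual pipeline.
import cv2

cap = cv2.VideoCapture("street.mp4")     # or a live camera index
backsub = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=32)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
TRIP_Y = 400                             # y-coordinate of the counting line
count = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = backsub.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # clean up noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h > 2000 and TRIP_Y <= y + h // 2 <= TRIP_Y + 5:
            count += 1                   # blob's center crossed the line

print(f"objects counted: {count}")
```

A real deployment needs object tracking to avoid double counting and to estimate speed, but the trip-line idea is at the heart of most DIY counters.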

Continue reading “Homebrew Traffic Monitor Keeps Eyes On The Streets”

Custom Drone Software Searches, Rescues

When a new technology first arrives in people’s hands, it often takes a bit of time before its full capabilities are realized. In much the same way that many early Internet users simply treated it as a replacement for snail mail, or early smartphone owners used their devices for little more than the messaging and calling of their flip-phone cousins, autonomous drones were initially put to work as drop-in replacements for things like aerial photography. A group of mountain rescue volunteers in the United Kingdom realized the machines could be used in more efficient ways suited to their unique abilities, and they have been behind a bit of a revolution in the search-and-rescue community.

The first search-and-rescue groups using drones generally used them to search the same way a helicopter would have been used in the past, only at less expense. But the effort involved was still the same: a human still needed to do the searching. The group in the UK devised an improved system that takes the human effort out of the equation by sending a drone to fly autonomously over a piece of mountainous terrain, photographing the ground in such a way that any one spot appears in many individual images. From there, the drone flies back to its base station, where an operator downloads the images and runs them through a computer program that analyzes them for outliers in the colors of the individual pixels. Humans tend to stand out against their backgrounds in ways that computers are good at spotting but other humans might miss entirely, and in the group’s first effort to locate a missing person, the system found them almost immediately.
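We don’t have the group’s source in front of us, but the color-outlier idea itself is easy to demonstrate: score every pixel by how far its color sits from the image’s average, then flag the extremes. A rough stand-in, with a threshold picked out of thin air:

```python
# Flag pixels whose color is a statistical outlier for the image, a rough
# stand-in for the "human against terrain" check described above.
import cv2
import numpy as np

img = cv2.imread("aerial.jpg")
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB).astype(np.float32)

pixels = lab.reshape(-1, 3)
mean = pixels.mean(axis=0)
std = pixels.std(axis=0) + 1e-6
z = np.abs((lab - mean) / std).sum(axis=2)   # per-pixel color deviation

ys, xs = np.nonzero(z > 8.0)                 # threshold chosen by eye
if len(xs):
    print(f"{len(xs)} outlier pixels centered near ({xs.mean():.0f}, {ys.mean():.0f})")
```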

Although it’s built on a mapping system somewhat unique to the UK, the group has not attempted to commercialize it; MR Maps, the software underpinning this new capability, is free for anyone to use. For those just starting out in this field, it’s also worth pointing out that the location services offered by modern devices can be misleading in rugged terrain like this, and won’t be as straightforward a solution to the problem as one might think.

Drive For Show, Putt For Dough

Any golfer will attest that the most impressive-looking part of the game—long drives—isn’t where the game is won. To really lower one’s handicap, the most important skills to develop are in the short game, especially putting. Even a two-inch putt to close out a hole counts the same as the longest drive, making these skills not only difficult to master but incredibly valuable. To shortcut some of the skill development, though, [Sparks and Code] broke most of the rules around the design of golf clubs to construct this robotic putter.

The putter’s goal is to help the golfer with some of the finesse required to master the short game. It varies its striking force by using an electromagnet to lift the club face a certain amount, depending on the distance needed to sink the putt. Two servos lift the electromagnet and club; when the appropriate height is reached, the electromagnet releases and the club swings down to strike the ball. The two servos can also oppose each other to aim the shot, allowing the club to strike at an angle rather than straight on. The club also has built-in rangefinding and a computer vision system, so it can identify the hole automatically and determine exactly how it should hit the ball. The only thing the user needs to do is press a button on the shaft of the club.
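The relationship between lift height and putt distance makes for a nice back-of-the-envelope calculation. The sketch below uses a crude energy model with made-up constants; the real club almost certainly relies on empirical calibration rather than physics this naive:

```python
# Guesswork putt model: how high must the club be lifted so the ball
# rolls a given distance? All constants are assumptions for illustration.
import math

G = 9.81            # gravity, m/s^2
DECEL = 0.6         # rolling deceleration on a green, m/s^2 (assumed)
EFFICIENCY = 0.3    # fraction of club energy reaching the ball (assumed)
MASS_RATIO = 0.35   # ball mass / effective club-head mass (assumed)

def lift_height(putt_distance_m: float) -> float:
    ball_speed = math.sqrt(2 * DECEL * putt_distance_m)   # v^2 = 2*a*d
    # club PE -> ball KE with losses: m_club*g*h*eff = 0.5*m_ball*v^2
    return 0.5 * ball_speed**2 * MASS_RATIO / (G * EFFICIENCY)

for d in (0.5, 1.0, 2.0, 4.0):
    print(f"{d:>4} m putt -> lift ~{100 * lift_height(d):.1f} cm")
```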

Even the most famous golfers have problems putting from time to time, so if you’re willing to skirt the rules a bit, the club might be useful to have around. If not, it’s at least a fun project to show off on the golf course and build one’s credibility with other robotics enthusiasts who also happen to be golfers. If you’re looking for something that’s more of a coach or aide than an outright cheat, though, this golf club helps analyze and perfect your swing instead of doing everything for you.

Continue reading “Drive For Show, Putt For Dough”

Dog Poop Drone Cleans Up The Yard So You Don’t Have To

Sometimes you instantly know who’s behind a project from the subject matter alone. So when we saw this “aerial dog poop removal system” show up in the tips line, we knew it had to be the work of [Caleb Olson].

If you’re unfamiliar with [Caleb]’s oeuvre, let us refresh your memory. [Caleb] has been on a bit of a dog poop journey, starting with a machine-learning system that analyzed security camera footage to detect when the adorable [Twinkie] dropped a deuce in the yard. Not content with just knowing when a poop event has occurred, he automated the task of locating the packages with a poop-pointing robot laser. Removal of the poop remained a manual task, one which [Caleb] was keen to outsource, hence the current work.

The video below, from a lightning talk at a conference, is pretty much all we have to go on, and the quality is a bit potato-esque. And while [Caleb]’s PoopCopter is clearly still a prototype, it’s easy to get the gist. Combining data from the previous poop-adjacent efforts, [Caleb] has built a quadcopter that can (or will, someday) be guided to the approximate location of the offending package, home in on it using a downward-looking camera, and autonomously whisk it away.
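The homing step is a classic visual-servoing problem: find the target in the downward camera frame and turn its pixel offset into a lateral velocity command. A toy version might look like this, with the color threshold and control gain entirely made up:

```python
# Toy homing step: locate the blob in the downward camera frame and
# convert its pixel offset into a velocity command. Illustrative only.
import cv2
import numpy as np

K_P = 0.002   # meters/second per pixel of error (assumed gain)

def homing_command(frame: np.ndarray) -> tuple[float, float]:
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (10, 60, 40), (30, 255, 200))  # brownish target
    M = cv2.moments(mask)
    if M["m00"] < 1:                  # nothing detected: hold position
        return 0.0, 0.0
    cx, cy = M["m10"] / M["m00"], M["m01"] / M["m00"]
    h, w = mask.shape
    return K_P * (cx - w / 2), K_P * (cy - h / 2)  # vx, vy toward target
```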

The retrieval mechanism is the high point for us; rather than a complicated, servo-laden “sky scoop” or something similar, the drone has a bell-shaped container on its belly with a series of geared leaves on the open end. The leaves are open when the drone descends onto the payload, and then close as the drone does a quick rotation around the yaw axis. And, as [Caleb] gleefully notes, the leaves can also open in midair with a high-torque yaw move in the opposite direction; the potential for neighborly hijinx is staggering.

All jokes and puns aside, this looks fantastic, and we can’t wait for more information and a better video. And lest you think [Caleb] only works on “Number Two” problems, never fear — he’s also put considerable work into automating his offspring and taking the awkwardness out of social interactions.

Continue reading “Dog Poop Drone Cleans Up The Yard So You Don’t Have To”

Mothbox Watches Bugs, So You — Or Your Grad Students — Don’t Have To

To the extent that one has strong feelings about insects, they tend toward the extremes of a spectrum that runs from complete fascination with their diversity and the specializations they’ve evolved to exploit unique and ultra-narrow ecological niches, to “Eww, ick! Kill it!” It’s pretty clear that [Dr. Andy Quitmeyer] and his team tend toward the former, and while they love their bugs, spending all night watching them is a tough enough gig that they came up with Mothbox, the automated insect monitor.

Insect censuses are valuable tools for assessing the state of an ecosystem, especially given insects’ vast numbers, short lifespans, and proximity to the base of the food chain. Mothbox is designed to be deployed in insect-rich environments and automatically recognize and tally the moths it sees. It uses an Arducam and Raspberry Pi for image capture, plus an array of UV and visible LEDs, all in a weatherproof enclosure. The moths are attracted to the light and fly between the camera and a plain white background, where an image is captured. YOLO v8 locates all the moths in the image, crops them out, and sends them to BioCLIP, a vision model for organismal biology that appears similar to something we’ve seen before. The model automatically sorts the moths by taxonomic features and keeps a running tally of which species it sees.
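That detect-then-classify flow is easy to picture in code. The sketch below uses the ultralytics YOLO API for the detection half and stubs out the species call, since we’re guessing at the model file name and Mothbox’s actual BioCLIP glue will differ:

```python
# Detect moths with YOLO, crop each one, and hand it to a classifier.
# The weights file and classify() stub are placeholders, not Mothbox's code.
from ultralytics import YOLO
import cv2

detector = YOLO("moth_yolov8.pt")      # hypothetical fine-tuned weights

def classify(crop) -> str:
    # A BioCLIP embedding plus nearest-taxon lookup would go here.
    return "unidentified"

def tally_moths(image_path: str, tally: dict) -> None:
    img = cv2.imread(image_path)
    results = detector(img)
    for box in results[0].boxes.xyxy.tolist():
        x1, y1, x2, y2 = map(int, box)
        species = classify(img[y1:y2, x1:x2])
        tally[species] = tally.get(species, 0) + 1
```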

Mothbox is open source and the site has a ton of build information if you’re keen to start bug hunting, plus plenty of pictures of actual deployments, which should serve as nightmare fuel to the insectophobes out there.

40,000 FPS Omega camera captures Olympic photo-finish

Olympic Sprint Decided By 40,000 FPS Photo Finish

Advanced technology played a crucial role in determining the winner of the men’s 100-meter final at the Paris 2024 Olympics. In a historically close race, American sprinter Noah Lyles narrowly edged out Jamaica’s Kishane Thompson by just five-thousandths of a second. The final decision relied on an image captured by an Omega photo finish camera that shoots an astonishing 40,000 frames per second.

This cutting-edge technology, originally reported by PetaPixel, ensured the accuracy of the result in a race where both athletes recorded a time of 9.78 seconds. If SmartThings’ shot pourer from the 2012 Olympics were still around, it could once again fulfill its intended role of celebrating US medals.

Omega, the Olympics’ official timekeeper for decades, has continually innovated to enhance performance measurement. The Omega Scan ‘O’ Vision Ultimate, the camera used for this photo finish, is a significant upgrade from its 10,000 frames per second predecessor. The new system captures four times as many frames per second and offers higher resolution, providing a detailed view of the moment each runner’s torso touches the finish line. This level of detail was crucial in determining that Lyles’ torso touched the line first, securing his gold medal.
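A little back-of-the-envelope arithmetic from those figures shows why the result was beyond dispute:

```python
# Frame interval at 40,000 FPS, and frames spanning the winning margin.
FPS = 40_000
MARGIN_S = 0.005                       # five-thousandths of a second

print(f"{1e6 / FPS:.0f} us per frame")           # 25 microseconds
print(f"{MARGIN_S * FPS:.0f} frames in margin")  # 200 frames
```

Two hundred frames between the two torsos is a comfortable margin; the older 10,000 FPS system would have had only fifty to work with.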

This camera is part of Omega’s broader technological push for the Paris 2024 Olympics, which includes AI-driven computer vision systems and high-definition cameras that track athletes in real time. For a closer look at how technology decided this historic race, watch the video by Eurosport that captured the event.

Continue reading “Olympic Sprint Decided By 40,000 FPS Photo Finish”