
New multispectral analysis of Voynich manuscript reveals hidden details

[Image: A folio from the Voynich manuscript alongside its multispectral counterpart. Medieval scholar Lisa Fagin Davis examined multispectral images of 10 pages from the Voynich manuscript. (credit: Lisa Fagin Davis)]

About 10 years ago, several folios of the mysterious Voynich manuscript were scanned using multispectral imaging. Lisa Fagin Davis, executive director of the Medieval Academy of America, has analyzed those scans and just posted the results, along with a downloadable set of images, to her blog, Manuscript Road Trip. Among the chief findings: three columns of lettering added to the opening folio could be an early attempt to decode the script. And while questions have long swirled about whether the manuscript is authentic or a clever forgery, Fagin Davis concluded that it is unlikely to be a forgery and is a genuine medieval document.

As we've previously reported, the Voynich manuscript is a handwritten medieval text dated between 1404 and 1438, purchased in 1912 by a Polish book dealer and antiquarian named Wilfrid Voynich (hence its moniker). Along with the strange handwriting in an unknown language or code, the book is heavily illustrated with bizarre pictures of alien plants, naked women, strange objects, and zodiac symbols. It's currently kept at Yale University's Beinecke Library of rare books and manuscripts. Possible authors include Roger Bacon, the Elizabethan astrologer/alchemist John Dee, or even Voynich himself, perhaps as a hoax.

There are so many competing theories about what the Voynich manuscript is (most likely a compendium of herbal remedies and astrological readings, based on the bits reliably decoded thus far), and so many claims to have deciphered the text, that it's practically its own subfield of medieval studies. Both professional and amateur cryptographers (including codebreakers in both World Wars) have pored over the text, hoping to crack the puzzle.


New camera design can ID threats faster, using less memory

[Image: View out the windshield of a car, with other vehicles highlighted by computer-generated brackets. (credit: Witthaya Prasongsin)]

Elon Musk, back in October 2021, tweeted that "humans drive with eyes and biological neural nets, so cameras and silicon neural nets are only way to achieve generalized solution to self-driving." The problem with his logic has been that human eyes are way better than RGB cameras at detecting fast-moving objects and estimating distances. Our brains have also surpassed all artificial neural nets by a wide margin at general processing of visual inputs.

To bridge this gap, a team of scientists at the University of Zurich developed a new automotive object-detection system that brings digital camera performance much closer to that of human eyes. "Unofficial sources say Tesla uses multiple Sony IMX490 cameras with 5.4-megapixel resolution that [capture] up to 45 frames per second, which translates to perceptual latency of 22 milliseconds. Comparing [these] cameras alone to our solution, we already see a 100-fold reduction in perceptual latency," says Daniel Gehrig, a researcher at the University of Zurich and lead author of the study.
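As a rough check on those numbers, the sketch below (not code from the study; the helper name is made up) assumes a frame-based camera's worst-case perceptual latency is roughly one frame interval, which reproduces the quoted figures: 1000/45 is about 22 ms, and a 100-fold reduction would land near 0.2 ms.

```python
# Back-of-envelope check of the latency figures quoted above.
# Assumption: worst-case perceptual latency of a frame-based camera is
# roughly one frame interval (the wait until the next frame is captured).

def perceptual_latency_ms(frames_per_second: float) -> float:
    """Worst-case delay before a new stimulus shows up in a frame, in ms."""
    return 1000.0 / frames_per_second

frame_cam_ms = perceptual_latency_ms(45)              # ~22.2 ms at 45 fps
print(f"45 fps camera: ~{frame_cam_ms:.1f} ms")       # matches the quoted 22 ms
print(f"100x reduction: ~{frame_cam_ms / 100:.2f} ms")  # ~0.22 ms
```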

Replicating human vision

When a pedestrian suddenly jumps in front of your car, multiple things have to happen before a driver-assistance system initiates emergency braking. First, the pedestrian must be captured in images taken by a camera. The time this takes is called perceptual latency: the delay between the existence of a visual stimulus and its appearance in the readout from a sensor. Then, the readout needs to get to a processing unit, which adds a network latency of around 4 milliseconds.
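To see how those two delays stack up, here is a minimal illustration: the 4 ms network latency comes from the article, while the perceptual-latency values reuse the frame-interval assumption from the earlier sketch (the function name is hypothetical).

```python
# Illustrative budget for the first steps of the pipeline described above.
# The 4 ms network latency is from the article; the perceptual latencies
# are the frame-interval estimates from the previous sketch.

NETWORK_LATENCY_MS = 4.0  # sensor readout -> processing unit

def delay_until_processing_ms(perceptual_latency_ms: float) -> float:
    """Time from a stimulus appearing to its data reaching the processor."""
    return perceptual_latency_ms + NETWORK_LATENCY_MS

print(delay_until_processing_ms(22.2))  # frame-based camera: ~26.2 ms
print(delay_until_processing_ms(0.22))  # claimed 100x faster sensor: ~4.2 ms
```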

