The technology could lead to advanced artificial intelligence that instantly understands what it sees, with applications in robotics and self-driving cars.
Researchers at the University of Central Florida (UCF) have built a device for artificial intelligence that replicates the retina of the eye.
The research might result in cutting-edge AI that can identify what it sees right away, such as automated descriptions of photos captured with a camera or a phone. The technology could also be used in robots and self-driving vehicles.
The technology, which is described in a recent study published in the journal ACS Nano, also performs better than the eye in terms of the range of wavelengths it can perceive, from ultraviolet to visible light and on to the infrared spectrum.
The device is also unique in that it combines three different operations into one. Currently available intelligent imaging technology, such as that found in self-driving cars, requires separate components for sensing, memory, and data processing.
The researchers claim that by integrating the three procedures, the UCF-designed device is much faster than existing technology. With hundreds of the devices fitting on a one-inch-wide chip, the technology is also quite compact.
“It will change the way artificial intelligence is realized today,” says study principal investigator Tania Roy, an assistant professor in UCF’s Department of Materials Science and Engineering and NanoScience Technology Center. “Today, everything is discrete components and running on conventional hardware. And here, we have the capacity to do in-sensor computing using a single device on one small platform.”
The technology expands upon previous work by the research team that created brain-like devices that can enable AI to work in remote regions and space.
“We had devices, which behaved like the synapses of the human brain, but still, we were not feeding them the image directly,” Roy says. “Now, by adding image sensing ability to them, we have synapse-like devices that act like ‘smart pixels’ in a camera by sensing, processing, and recognizing images simultaneously.”
For self-driving vehicles, the versatility of the device will allow for safer driving in a range of conditions, including at night, says Molla Manjurul Islam ’17MS, the study’s lead author and a doctoral student in UCF’s Department of Physics.
“If you are in your autonomous vehicle at night and the imaging system of the car operates only at a particular wavelength, say the visible wavelength, it will not see what is in front of it,” Islam says. “But in our case, with our device, it can actually see in the entire condition.”
“There is no reported device like this, which can operate simultaneously in ultraviolet range and visible wavelength as well as infrared wavelength, so this is the most unique selling point for this device,” he says.
Key to the technology is the engineering of nanoscale surfaces made of molybdenum disulfide and platinum ditelluride to allow for multi-wavelength sensing and memory. This work was performed in close collaboration with YeonWoong Jung, an assistant professor with joint appointments in UCF’s NanoScience Technology Center and Department of Materials Science and Engineering, part of UCF’s College of Engineering and Computer Science.
The researchers tested the device’s accuracy by having it sense and recognize a mixed-wavelength image: an ultraviolet number “3” and its infrared mirror image, superimposed so that together they form an “8.” They demonstrated that the technology could discern the patterns and identify them both, as a “3” in ultraviolet and an “8” in infrared.
“We got 70 to 80% accuracy, which means they have very good chances that they can be realized in hardware,” says study co-author Adithi Krishnaprasad ’18MS, a doctoral student in UCF’s Department of Electrical and Computer Engineering.
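The recognition task in that test can be sketched in software. The following is a minimal toy simulation, assuming each wavelength channel is read out as a binary 5x7 pixel grid and matched against stored digit templates by Hamming distance; the bitmaps and the nearest-template classifier are illustrative assumptions, not the device's actual in-sensor computation.

```python
# Toy illustration (not the authors' method): classify the two spectral
# channels of a mixed-wavelength test image with a nearest-template matcher.
import numpy as np

def bitmap(rows):
    """Convert a list of '0'/'1' strings into a binary numpy array."""
    return np.array([[int(c) for c in row] for row in rows])

# Reference digit templates the classifier knows about (5x7 bitmaps).
TEMPLATES = {
    "3": bitmap(["11111", "00001", "00001", "01111", "00001", "00001", "11111"]),
    "8": bitmap(["11111", "10001", "10001", "11111", "10001", "10001", "11111"]),
    "0": bitmap(["11111", "10001", "10001", "10001", "10001", "10001", "11111"]),
}

def classify(image):
    """Return the template label with the smallest Hamming distance."""
    return min(TEMPLATES, key=lambda label: np.sum(image != TEMPLATES[label]))

# Ultraviolet channel: a "3"; infrared channel: its horizontal mirror image.
uv_channel = TEMPLATES["3"]
ir_channel = np.fliplr(uv_channel)

# Superimposing the two channels yields the "8" seen in the mixed image.
mixed_image = np.logical_or(uv_channel, ir_channel).astype(int)

print(classify(uv_channel))   # -> "3"
print(classify(mixed_image))  # -> "8"
```

In the UCF device this separation happens in the sensor itself, since each pixel responds to and remembers the different wavelengths, rather than in a downstream processor as in this software sketch.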
The researchers say the technology could become available for use in the next five to 10 years.
Reference: “Multiwavelength Optoelectronic Synapse with 2D Materials for Mixed-Color Pattern Recognition” by Molla Manjurul Islam, Adithi Krishnaprasad, Durjoy Dev, Ricardo Martinez-Martinez, Victor Okonkwo, Benjamin Wu, Sang Sub Han, Tae-Sung Bae, Hee-Suk Chung, Jimmy Touma, Yeonwoong Jung and Tania Roy, 25 May 2022, ACS Nano.
DOI: 10.1021/acsnano.2c01035
The work was funded by the U.S. Air Force Research Laboratory through the Air Force Office of Scientific Research, and the U.S. National Science Foundation through its CAREER program.