Herzliya, Israel, March 20, 2024 (GLOBE NEWSWIRE) -- Beamr Imaging Ltd. (NASDAQ: BMR), a leader in video optimization technology and solutions, today announced the results of a case study showing how Beamr technology accelerates machine learning training by using significantly smaller video files, with no negative impact on the video artificial intelligence (AI) process.
Machine learning for video is becoming an increasingly significant technology for businesses, but players in this expanding arena face critical pain points, such as storage and bandwidth bottlenecks and the difficulty of reaching acceptable training and inference speeds.
In this case study, Beamr’s R&D team showed that training an AI network on video files compressed and optimized with Beamr’s Content-Adaptive Bitrate technology produced results as good as training the network on the original, larger files. The network was trained for the task of action recognition, such as distinguishing between people who are walking, running, dancing or performing many other day-to-day actions.
Beamr CTO Tamar Shoham explained: “It was important to us to define a test case that really uses the fact that the content is video, instead of an image. When viewing individual frames, it is not possible to differentiate between frames captured during walking and running, or between someone jumping or dancing. Therefore, in order to classify videos according to the action they show, the temporal component is needed, which is why 'action recognition' was our task of choice.”
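Beamr has not published the code behind this experiment, but a minimal sketch of such a comparison, under assumed conditions, could look like the following: the same small off-the-shelf 3D convolutional network (torchvision's r3d_18) is trained once on the original clips and once on the optimized clips, and the two validation accuracies are compared. A 3D model is used because, as the quote above notes, action recognition needs the temporal component rather than individual frames. The directory names and the <root>/<class_name>/<clip>.mp4 layout are hypothetical stand-ins, not Beamr's actual setup.

```python
# Illustrative sketch only -- not Beamr's published training code. Assumes short
# labeled clips of at least `frames` frames, stored as <root>/<class_name>/<clip>.mp4
# under the hypothetical directories "clips_original/" and "clips_optimized/".
from pathlib import Path

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset, random_split
from torchvision.io import read_video
from torchvision.models.video import r3d_18


class ClipFolder(Dataset):
    """Loads fixed-length clips stored as <root>/<class_name>/<clip>.mp4."""

    def __init__(self, root: str, frames: int = 16, size: int = 112):
        self.paths = sorted(Path(root).glob("*/*.mp4"))
        self.classes = sorted({p.parent.name for p in self.paths})
        self.frames, self.size = frames, size

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, i):
        path = self.paths[i]
        video, _, _ = read_video(str(path), pts_unit="sec")               # (T, H, W, C), uint8
        clip = video[: self.frames].permute(3, 0, 1, 2).float() / 255.0   # (C, T, H, W)
        clip = nn.functional.interpolate(clip, size=(self.size, self.size))  # resize each frame
        return clip, self.classes.index(path.parent.name)


def train_and_eval(root: str, epochs: int = 5, device: str = "cpu") -> float:
    """Trains an action-recognition model on the clips under `root` and returns
    validation accuracy, so the two versions of the data set can be compared."""
    data = ClipFolder(root)
    n_val = max(1, len(data) // 5)
    train_set, val_set = random_split(data, [len(data) - n_val, n_val])
    train_loader = DataLoader(train_set, batch_size=4, shuffle=True)
    val_loader = DataLoader(val_set, batch_size=4)

    # A 3D CNN sees a stack of frames at once, so it can exploit the temporal component.
    model = r3d_18(weights=None, num_classes=len(data.classes)).to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(epochs):
        model.train()
        for clips, labels in train_loader:
            opt.zero_grad()
            loss = loss_fn(model(clips.to(device)), labels.to(device))
            loss.backward()
            opt.step()

    model.eval()
    correct = total = 0
    with torch.no_grad():
        for clips, labels in val_loader:
            pred = model(clips.to(device)).argmax(dim=1).cpu()
            correct += (pred == labels).sum().item()
            total += labels.numel()
    return correct / total


if __name__ == "__main__":
    # Same model, same training recipe; only the input files differ.
    for name, root in [("original", "clips_original"), ("optimized", "clips_optimized")]:
        print(f"{name} clips -> validation accuracy: {train_and_eval(root):.3f}")
```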
The video files used for machine learning training were optimized by Beamr Cloud, reducing file sizes by 24%-67%. Such a reduction is beneficial when storing video files for future use and when performing further manipulations on them. The recently launched Beamr Cloud is an optimization and modernization software-as-a-service (SaaS) offering that enables automated, efficient and fast video processing through no-code workflows or customized pipelines tailored to specific user needs.
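As a rough illustration (the Beamr Cloud service itself is not shown here), the reported per-file reduction corresponds to the relative difference in size between each original clip and its optimized counterpart. The directory names below are the same hypothetical ones used in the training sketch above.

```python
# Illustrative sketch only -- this is not Beamr Cloud's API. Assumes every clip in the
# hypothetical "clips_original/" directory has an optimized counterpart with the same
# file name under "clips_optimized/", and prints the per-file size reduction.
from pathlib import Path

original = {p.name: p.stat().st_size for p in Path("clips_original").rglob("*.mp4")}
optimized = {p.name: p.stat().st_size for p in Path("clips_optimized").rglob("*.mp4")}

for name, size in sorted(original.items()):
    if name in optimized and size > 0:
        reduction = 100 * (1 - optimized[name] / size)
        print(f"{name}: {reduction:.1f}% smaller")  # the case study reports 24%-67% per file
```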
Training performed with the smaller video files optimized by Beamr technology produced results equivalent to those obtained with the larger, non-optimized files (for more details about the experiment, see the full case study).
The case study is part of Beamr’s ongoing commitment to accelerating adoption and increasing the accessibility of machine learning for video and video analysis solutions. A previous case study focused on the AI network inference stage, the phase in which conclusions are drawn from a network that has already been trained. That experiment found that video files downsized by 40% on average streamlined machine learning processes, allowing significant savings in storage and costs.
The current case study covers the more challenging task of training a neural network for action recognition in video. In the coming months, the Beamr R&D team plans to expand the initial experiment described above to large-scale testing, including neural networks that run in the cloud on GPUs.
Accelerate the Adoption of Machine Learning for Video
The AI Network Was Trained to Distinguish Between People Who Are Doing Many Day-to-Day Actions
About Beamr
Beamr (Nasdaq: BMR) is a world leader in content adaptive video solutions. Backed by 53 granted patents, and winner of the 2021 Technology and Engineering Emmy® award and the 2021 Seagate Lyve Innovator of the Year award, Beamr's perceptual optimization technology enables up to a 50% reduction in bitrate with guaranteed quality. www.beamr.com
Forward-Looking Statements
This press release contains “forward-looking statements” that are subject to substantial risks and uncertainties. Forward-looking statements in this communication may include, among other things, statements about Beamr’s strategic and business plans, technology, relationships, objectives and expectations for its business, the impact of trends on and interest in its business, intellectual property or product and its future results, operations and financial performance and condition. All statements, other than statements of historical fact, contained in this press release are forward-looking statements. Forward-looking statements contained in this press release may be identified by the use of words such as “anticipate,” “believe,” “contemplate,” “could,” “estimate,” “expect,” “intend,” “seek,” “may,” “might,” “plan,” “potential,” “predict,” “project,” “target,” “aim,” “should,” “will,” “would,” or the negative of these words or other similar expressions, although not all forward-looking statements contain these words. Forward-looking statements are based on the Company’s current expectations and are subject to inherent uncertainties, risks and assumptions that are difficult to predict. Further, certain forward-looking statements are based on assumptions as to future events that may not prove to be accurate. For a more detailed description of the risks and uncertainties affecting the Company, reference is made to the Company’s reports filed from time to time with the Securities and Exchange Commission (“SEC”), including, but not limited to, the risks detailed in the Company’s annual report filed with the SEC on March 4, 2024 and in subsequent filings with the SEC. Forward-looking statements contained in this announcement are made as of the date hereof and the Company undertakes no duty to update such information except as required under applicable law.
Investor Contact: