
3 Ways to Build AI Video Applications for Driver Safety

Wednesday June 15, 2022, 06:33 PM, from eWeek
Monitoring video feeds have helped identify potential safety risks on our roads. But video is usually a reactive tool, used primarily to train employees after a potentially dangerous situation occurs rather than to avoid issues in the moment.
Country roads, busy intersections, and fast-moving interstates are high-risk work environments that need artificial intelligence to detect roadway hazards and prevent accidents in real time.
AI video combines rich visual data, collected via video feeds, with readings from in-vehicle sensors. Together, these inputs make it possible to identify problematic behavior or unsafe environments as they happen. Drivers are alerted in-cab and coached to take appropriate corrective action.
From how we train the AI to the tech stack we pair it with, here are three best practices to keep in mind when building out AI video technology for safety applications:
1) Pull Data from Multiple Sources
A single source of data, such as visual cues from a camera, doesn't provide enough insight to detect and predict outcomes. While video is a great source of data to pull into the AI algorithm, it often does not paint the full picture of an event.
When it comes to fleet safety, a video feed alone is not enough to prevent distracted and dangerous driving. For fleets, AI video systems are most effective when they combine readings from multiple sensors, capturing speed, engine, and vehicle data alongside imagery of the roadway and the operator.
This broader array of data and sensor points makes the AI-based technology more precise and relevant, allowing it to accurately determine whether driver distraction or a roadway hazard is likely to cause an accident.
As AI developers continue to collect data and increase the number of sensor points, they must fine-tune or “train” their AI-based technology to deliver the most accurate and meaningful information. By combining real-time sensor and video inputs with a continuous feed of footage from diverse drivers, roadways and vehicles, the AI and machine vision algorithm is optimized, improving its ability to detect and prevent safety incidents.
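As an illustration of this kind of fusion, here is a minimal sketch of combining a vision model's distraction score with telematics readings into a single risk assessment. All field names, thresholds, and scores are hypothetical, chosen for illustration rather than taken from any production system:

```python
from dataclasses import dataclass


@dataclass
class SensorFrame:
    # One synchronized snapshot of in-cab video analysis and vehicle telematics.
    distraction_score: float     # 0.0-1.0, from an in-cab vision model (hypothetical)
    speed_mph: float             # from the vehicle's speed sensor
    following_distance_s: float  # time gap to the vehicle ahead, in seconds


def assess_risk(frame: SensorFrame) -> str:
    """Fuse video and telematics signals into a single risk level.

    A camera alone can't tell whether a glance away from the road is
    dangerous; combined with speed and following distance, it can.
    """
    if frame.distraction_score > 0.8 and frame.speed_mph > 45:
        return "critical"  # distracted at highway speed
    if frame.following_distance_s < 1.5 and frame.speed_mph > 30:
        return "warning"   # following too closely in moving traffic
    if frame.distraction_score > 0.8:
        return "warning"   # distracted, but at low speed
    return "ok"
```

The point of the sketch is the shape of the decision, not the thresholds: the same distraction score maps to different risk levels depending on what the vehicle sensors report, which is exactly why a single data source falls short.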
2) Build a Tech Stack That Enables Real-Time Reactions 
Many safety incidents occur in areas that lack reliable cell signals, like a construction site or a remote highway. It is critical that an AI video system can detect and communicate risks to the driver in real time, regardless of environmental challenges. Solving this requires building the AI directly into the video hardware, enabling data processing at the edge.
Edge processing does not require data to be transmitted across cellular networks; it works by analyzing the data as close to its source as possible. Building edge processing and local storage into your system is crucial when using AI for safety events. It eliminates the lag that could otherwise occur between the instant a risk is detected and the moment the driver receives an in-cab alert and audible coaching to take corrective action.
This immediacy in responding to events, like a driver distracted by a cell phone or following too closely in traffic, prevents accidents and saves lives.
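The edge pipeline described above can be sketched as a simple on-device loop. The `analyze` and `alert` callables and the queue interface are placeholder assumptions for illustration, not a real product API:

```python
import queue
import time


def run_edge_loop(frames, analyze, alert, upload_queue):
    """Analyze each video frame on the device itself.

    No round trip to the cloud: the in-cab alert fires the moment a
    risk is detected, and the event is queued locally so it can sync
    to the cloud whenever a connection becomes available.
    """
    for frame in frames:
        risk = analyze(frame)  # inference runs on local hardware
        if risk != "ok":
            alert(risk)  # immediate in-cab alert, no network needed
            upload_queue.put((time.time(), risk))  # deferred cloud sync
```

The design choice is that alerting and uploading are decoupled: the safety-critical path (detect, alert) never waits on the network, while the reporting path (upload) tolerates delay.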
AI video solutions that rely on transmitting data to the cloud for processing suffer from alerting and coaching delays that can be catastrophic. For example, imagine losing cell service while driving through a tunnel. If the AI data is being processed in the cloud, the driver will not be alerted to a risk until communication is restored and the information can be transferred back to the cab.
Solutions that depend on cloud processing can also cause data to fall out of sync when there are gaps in cell communication. Once out of sync, the data cannot be relied upon because there is no longer a single source of truth. This is another important reason data processing must happen at the point of collection: it keeps the video analysis in sync with advanced telematics and preserves accuracy.
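One common way to keep edge-collected events accurate despite connectivity gaps is a store-and-forward buffer: each event is timestamped at the point of capture, held locally, and flushed in order once the network returns. A hypothetical sketch (the class and its interface are illustrative assumptions, not a specific vendor's design):

```python
import collections
import time


class EventBuffer:
    """Store safety events locally, stamped at capture time, and flush
    them in order once connectivity returns, so cloud records stay in
    sync with what actually happened in the cab."""

    def __init__(self):
        self._events = collections.deque()

    def record(self, event: str) -> None:
        # Timestamp at the point of collection -- the single source of truth.
        self._events.append((time.time(), event))

    def flush(self, send) -> int:
        """Try to upload buffered events oldest-first.

        `send(ts, event)` returns True on success; on failure (network
        still down) we stop and keep the remaining events for a retry.
        Returns the number of events successfully sent.
        """
        sent = 0
        while self._events:
            ts, event = self._events[0]
            if not send(ts, event):
                break  # connection dropped again; retry later
            self._events.popleft()
            sent += 1
        return sent
```

Because events carry their capture-time timestamps, the cloud record reflects when things happened rather than when connectivity happened to return.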
3) Focus on Employees When Developing Your AI Video
Some solutions bolt on AI simply to seem cutting-edge, without thoroughly analyzing how the AI will benefit the driver. The Big Brother depiction of cameras and the “science fiction” sense of AI has led many employees to feel apprehensive about working with AI video systems.
It is an understandable hesitation, and one that businesses can counteract by more thoughtfully defining and monitoring safety events. AI should promote safe driving without being intrusive, ensuring a smooth workday and a safe return home at night. AI video should exist to proactively identify safety risks as a preventative measure, not capture every minute of the workday.
Conclusion
Understanding how AI video increases safety on our roads is critical to creating work environments that reduce dangerous conditions and prioritize driver safety. Central to an effective AI safety program is developing an understanding of how the AI and machine vision system is designed and trained to optimize safety performance across diverse driving environments.
This includes assessing which AI features and capabilities are contributing to increased safety versus those that could be unnecessary add-ons, merely using AI to deploy new tech for its own sake.
About the Author: 
Ryan Wilkinson, CTO, IntelliShift
The post 3 Ways to Build AI Video Applications for Driver Safety appeared first on eWEEK.
https://www.eweek.com/big-data-and-analytics/ai-video-applications-safety/