CronAI - Experts & Thought Leaders
Latest CronAI news & announcements
Sparsh CCTV, a pioneer in innovative security solutions, and Cron AI, a pioneer in deep learning-based 3D perception technology, are thrilled to announce a strategic partnership designed to redefine video surveillance, Intelligent Transportation Systems (ITS), and infrastructure automation. By integrating edge computing, LiDAR, 3D perception, and camera technologies, this collaboration aims to advance safety, efficiency, and automation in transportation hubs, smart cities, border security, mobile sensing, and critical infrastructure.

Intelligent video surveillance

Through this partnership, Cron AI’s senseEDGE technology will be integrated into Sparsh’s cutting-edge CCTV systems. This advanced solution will enable real-time tracking, object detection, velocity measurement, and classification, resulting in an adaptable, intelligent video surveillance system suited for dynamic environments. Designed collaboratively in India and the UK and manufactured in India, the solution will serve both Indian and global markets.

ITS and infrastructure automation

Extending beyond conventional security applications, the partnership focuses on innovative solutions for ITS and infrastructure automation. This technology will enable enhanced flow management at high-traffic venues like airports, railway stations, and parking lots, improve safety and efficiency in smart cities, and strengthen monitoring capabilities in critical infrastructure. Unique features like people detection, flow management, intrusion detection, and border security will support complex environments where precise movement management and crowd density insights are critical. Here’s how LiDAR, camera, and 3D perception will come together.
Revolutionising Industries with Combined Camera, LiDAR, and 3D Perception Technologies

Smart City Traffic Flow and Safety: By combining camera, LiDAR, and 3D perception, this solution provides precise, real-time data on traffic patterns, vehicle and pedestrian movement, and potential safety risks. For example, the integration can detect jaywalking or sudden vehicle stops, enabling city systems to respond instantly and improve traffic light coordination, optimise traffic flow, and enhance pedestrian safety. This fusion of technologies supports proactive interventions and incident prevention in bustling urban environments.

Advanced Roadway Hazard Detection for Autonomous Vehicles: Cameras and LiDAR with 3D perception enable autonomous systems to classify objects accurately, recognise obstacles, and assess road conditions. This setup is particularly valuable for autonomous vehicles navigating complex environments where they may encounter unexpected obstacles or adverse weather conditions. Real-time data from this integrated system can improve decision-making and response times, ensuring safe navigation through dense urban streets or highways.

Critical Infrastructure Surveillance and Security: In sensitive areas like power plants, airports, or government facilities, integrating high-performance cameras, LiDAR, and 3D perception allows for highly accurate monitoring and threat detection. The system can detect intrusions, identify objects, and differentiate between authorised personnel and potential threats. For example, it could trigger alerts upon recognising unauthorised vehicles or persons approaching restricted zones, while ignoring routine authorised activity.

Automated Warehouse and Industrial Safety: Combining cameras, LiDAR, and 3D perception in warehouses and industrial facilities provides precise object tracking and spatial awareness, improving both safety and productivity.
The system can monitor for obstructions in high-traffic areas, detect human presence in machinery zones, and prevent accidents by pausing equipment when people are nearby. This setup offers real-time spatial intelligence, allowing companies to safeguard workers and automate processes efficiently.

Urban Infrastructure and Public Space Management: By integrating these technologies, cities can gain a detailed understanding of how public spaces are used, identifying crowded areas, traffic bottlenecks, or improperly parked vehicles. The system can monitor and analyse usage patterns across parks, public squares, and streets, delivering insights to optimise space, manage flow, and enhance the safety and usability of shared urban areas.

Vision for LiDAR, camera, and 3D perception integration

“At Sparsh CCTV, we have always championed solutions that surpass traditional boundaries,” said Sanjeev Sehgal, Managing Director of Sparsh CCTV. “Our partnership with Cron AI blends state-of-the-art technology with deep expertise in video surveillance and 3D perception, creating products that will set new standards in intelligent transport systems, infrastructure automation, and security solutions.”

“This launch propels us toward our mission of making India a global pioneer in technology-driven security. As ITS gains momentum worldwide, our collaboration offers transformative advancements for smart cities, secure infrastructure, and traffic management, with 3D perception and edge AI enabling real-time analysis and faster, more informed decision-making.”

Cron AI’s 3D perception

“We are immensely excited to partner with Sparsh CCTV on this journey,” shared Tushar Chhabra, CEO of Cron AI. “Cron AI’s 3D perception technology adds an entirely new dimension to safety and automation, providing unmatched accuracy and efficiency in detecting, analysing, and responding to real-world scenarios.”
“With deep learning, edge computing, and LiDAR at its core, this collaboration marks a significant milestone for industries globally.”

Traffic Infra India 2024

Sparsh CCTV and Cron AI introduced this breakthrough technology at Traffic Infra India 2024 and followed with an exclusive demonstration event at the India Habitat Centre, attended by India’s pioneering system integrators.
Insights & Opinions from thought leaders at CronAI
Martyn’s Law (also known as ‘Protect Duty’) could forever change the landscape of event security if changes to legislation are passed. Some would argue it already has. In 2017, just as concertgoers were leaving the Manchester Arena, a terrorist detonated an improvised explosive device in a suicide attack, killing 22 and injuring more than 250. The mother of one of the victims, Martyn Hett, has tirelessly campaigned for tighter security and a duty of care to be placed upon venues to protect their patrons. As a result, Martyn’s Law (‘Protect Duty’) has been proposed in UK legislation to protect the public from terrorism. At the same time, other global trends have indicated the need for action on this front.

Labour-intensive task

The Global Terrorism Index 2020, for instance, reported a steep increase in far-right attacks in North America, Western Europe, and Oceania, stating a 250% rise since 2014, with a 709% increase in deaths over the same period. But how do we implement the measures proposed by Martyn’s Law without intruding on our lives through mass surveillance?

Traditionally, cameras and CCTV have been the go-to solution for monitoring. However, maintaining a comprehensive view of locations with complex layouts or venues that host large crowds and gatherings can be a challenging and labour-intensive task for operatives. Camera outputs have been designed to be interpreted by people, which, in turn, requires a significant human resource that’s liable to inconsistent levels of accuracy in complex environments where getting things wrong can have a catastrophic impact.

Highly accurate insights

Fortunately, technology is evolving.
AI-based perception strategies are being developed alongside advancements in 3D data capture technologies – including lidar, radar, and ToF cameras – that are capable of transforming surveillance with enhanced layers of autonomy and intelligence. As a result, smart, automated systems will be able to work alongside the security workforce to provide an always-on, omniscient view of the environment, delivering highly accurate insights and actionable data. And, with the right approach, this can be achieved without undue impact on our rights as private citizens.

While much of this innovation isn’t new, it has been held back from at-scale adoption due to the gaps that remain between the data that’s captured and the machine’s ability to process it into an actionable insight.

High traffic environments

In security, for example, this gap is most present when it comes to addressing occlusion (in other words, recognising objects that move in and out of view of the sensors scanning a space). For security systems to provide the high levels of accuracy required in high traffic environments, such as concert venues, it’s crucial that they are able to detect all individuals and track their behaviour as they interact with a space and those within it. This, of course, is possible using multiple sensor modes. However, without the right perception platform to interpret the data being captured, the risk of missing crucial events as a result of the machine misinterpreting a partially concealed individual as an inanimate object, for instance, is significant.
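The occlusion problem described above – keeping a track alive while a person is temporarily hidden from the sensors – can be illustrated with a minimal sketch. This is not Cron AI's implementation; the `Track` class, the coasting logic, and the `MAX_MISSED_FRAMES` threshold below are illustrative assumptions only.

```python
# Minimal occlusion-aware track management sketch (illustrative, not any
# vendor's actual algorithm): when an object disappears from view, its track
# is "coasted" for a few frames instead of being dropped immediately, so a
# partially concealed person is not mistaken for an object that has left.

MAX_MISSED_FRAMES = 5  # assumed tolerance before a track is abandoned

class Track:
    def __init__(self, track_id, position):
        self.track_id = track_id
        self.position = position   # (x, y) in metres, sensor frame
        self.missed = 0            # consecutive frames without a detection
        self.active = True

    def update(self, detection):
        """Feed one frame: a (x, y) detection, or None if occluded."""
        if detection is not None:
            self.position = detection
            self.missed = 0
        else:
            # Occluded: keep the track alive ("coast") on its last position.
            self.missed += 1
            if self.missed > MAX_MISSED_FRAMES:
                self.active = False  # object assumed to have left the scene

track = Track(1, (0.0, 0.0))
track.update((0.5, 0.1))  # seen
track.update(None)        # briefly occluded: the track stays active
print(track.active, track.missed)  # True 1
```

A real platform would also predict the occluded object's motion rather than freezing its last position, but the keep-alive idea is the core of handling objects that move in and out of view.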
Identifiable personal data

This gap is narrowing. Thanks to the first wave of sensor innovators, the shift in dependence from video read by people to 3D data point clouds read by machines has meant that we are now able to capture much richer information and data sets that can precisely detect and classify objects and behaviours – without capturing biometric and identifiable personal data. But what we need to fully close the gap are perception strategies and approaches that can adapt to the ever-changing nature of real-world environments.

Until now, this has been a lengthy and costly process, requiring those implementing or developing solutions to start from scratch in developing software, algorithms, and training data every time the context or sensor mode is changed. But, by combining proven 3D sensor technologies like lidar with a deep learning-first approach, this no longer needs to be the case.

Edge processing platform

That’s why we are developing an adaptive edge processing platform for lidar that’s capable of understanding the past and present behaviour of people and objects within a given area. Through deep learning, it can predict the near-future behaviour of each object with some degree of certainty, thereby accurately and consistently generating real-time data and tracking the movement of people in the secured environment at scale.

This approach has value beyond security. Facilities teams, for example, can extract a wealth of information beyond the primary function of security to support other priorities such as cleaning (tracking facility usage so that schedules can be adjusted), while retailers can optimise advertising and display efforts by identifying areas of high footfall. Likewise, health and safety teams can gather much deeper insights into the way spaces are used to enhance processes and measures to protect their users.
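The "predict the near-future behaviour" capability mentioned above can be illustrated, under heavy simplification, with the most basic motion model there is: constant-velocity extrapolation from the last two observations. A deep learning platform would learn far richer behavioural models; the function below is a hypothetical sketch of the underlying idea only.

```python
# Illustrative near-future prediction via constant-velocity extrapolation.
# A learned model would capture far richer behaviour; this sketch only shows
# the idea of projecting a track's recent motion forward in time.

def predict_position(p_prev, p_curr, dt_obs, dt_future):
    """Extrapolate a future position from the last two observed positions.

    p_prev, p_curr: (x, y) positions observed dt_obs seconds apart.
    dt_future: how far ahead, in seconds, to predict.
    """
    vx = (p_curr[0] - p_prev[0]) / dt_obs
    vy = (p_curr[1] - p_prev[1]) / dt_obs
    return (p_curr[0] + vx * dt_future, p_curr[1] + vy * dt_future)

# A pedestrian who covered 1 m along x in the last 0.1 s is predicted
# a further 2 m ahead after 0.2 s:
print(predict_position((0.0, 0.0), (1.0, 0.0), 0.1, 0.2))  # (3.0, 0.0)
```

Even this crude model conveys why prediction matters for security: the system can flag where an object is heading (e.g. toward a restricted zone) before it arrives.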
Programming limitless scenarios

As we’ve explained, perception is reaching new levels of sophistication through deep learning. By continually programming limitless scenarios, our approach can provide consistently accurate and rich data that users can trust. This will ultimately change the way we manage environments at a time when liability comes with ever-increasing consequences.

For venue providers, Martyn’s Law will leave them with no option but to rethink their approach to security and safety. But, with new, smarter, more accurate tools at their disposal that will enable them to predict and protect, rather than just react, risks – both human and commercial – can be addressed. Meanwhile, the public can take comfort in knowing that measures to keep them safe needn’t mean sacrificing their privacy.
Like most industries, the fields of security, access and safety have been transformed by technology, with AI-driven automation presenting a clear opportunity for players seeking growth and leadership when it comes to innovation.

In this respect, these markets know exactly what they want. They require solutions that accurately (without false positives or false negatives) classify and track people and/or vehicles, as well as the precise location of and interactions between those objects. They want access to accurate data generated by best-in-class solutions irrespective of the sensor modality. And they need to be able to easily deploy such solutions, at the lowest capex and opex, with the knowledge that they can be integrated with preferred VMSs and PSIMs, be highly reliable, have low install and maintenance overheads, and be well supported.

With these needs in mind, camera and computer vision technology providers, solutions providers and systems integrators are forging ahead and have created exemplary ecosystems, with established partnerships helping to accelerate adoption. At the heart of this are AI and applications of convolutional neural networks (CNNs), an architecture often used in computer vision deep learning algorithms, which are accomplishing tasks that were extremely difficult with traditional software.

But what about 3D sensing technologies and perception? The security, safety and access markets have an additional crucial need: they must mitigate risk and make investments that deliver for the long term. This means that if a systems integrator invests in a 3D sensing data perception platform today, it should support their choice of sensors, perception strategies, applications and use cases over time, without their having to constantly reinvest in alternative computer hardware and perception software each time they adopt new technology or systems. This raises the question: if the security industry knows what it needs, why is it yet to fully embrace 3D sensing modalities?
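The requirements listed above – accurate classification, tracking, precise location, and easy VMS/PSIM integration – effectively boil down to a compact per-object output record. The field names below are assumptions chosen for illustration, not an actual Cron AI, VMS, or PSIM schema.

```python
# Illustrative per-object output record for a perception platform feeding a
# VMS/PSIM. Field names and units are invented for this sketch.
from dataclasses import dataclass, asdict

@dataclass
class ObjectOutput:
    track_id: int          # stable identity across frames
    category: str          # e.g. "person" or "vehicle"
    position_m: tuple      # (x, y) in metres, site coordinate frame
    velocity_mps: tuple    # (vx, vy) in metres per second
    confidence: float      # classifier confidence in [0, 1]

obj = ObjectOutput(track_id=7, category="person",
                   position_m=(12.4, -3.1), velocity_mps=(0.8, 0.0),
                   confidence=0.97)

# asdict() yields a plain dict, ready to serialise as JSON for downstream
# video management or incident management systems.
print(asdict(obj)["category"])  # person
```

The point of such a record is that it is sensor-agnostic: whether the detection came from a camera, lidar, or a fused pipeline, the downstream security software consumes the same structure.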
Perception strategy

Well, one problem facing security, safety and access solutions providers, systems integrators and end-users when deploying first-generation 3D sensing-based solutions is the current approach. Today, intelligent perception strategies have yet to evolve beyond the status quo, which sees designers lock everything down at the design phase, including the choice of sensor(s), off-the-shelf computer hardware, and any vendor-specific or third-party perception software, algorithms, deep learning or artificial intelligence. This approach not only builds in constraints for future use cases and developments, it also hampers the level of perception developed by the machine. Indeed, the data used to develop or train the perception algorithms for security, access and safety use cases at design time is typically captured for a narrow and specific set of scenarios or contexts, and the algorithms are subsequently developed or trained in the lab.

Technology gaps

As those in this industry know too well, siloed solutions and technology gaps typically block the creation of productive ecosystems and partnerships, while a lack of commercial whole products can delay market adoption of new innovation. Perception system architectures today do not support the real-time adaptation of software and computing engines in the field. They remain the same as those selected during the design phase and are fixed for the entire development and deployment stages. Crucially, this means that the system cannot deal with the unknowns of contextually varying real-time situations where contexts are changing (e.g. being able to reflex to security situations they haven’t been trained for) and where the autonomous system’s perception strategies need to dynamically adjust accordingly.
Ultimately, traditional strategies rely on non-scalable and non-adaptable competing computing architectures that were not designed to process the next generation of algorithms, deep learning and artificial intelligence required for 3D sensor mixed workloads. What this means for industries seeking to develop or deploy perception systems, like security, access and safety, is that the available computing architectures are generic and designed for either graphic rendering or data processing. Solutions providers, therefore, have little choice but to promote these architectures heavily into the market. Consequently, the resulting computing techniques are defined by the computing providers and not by the software developers working on behalf of the customer deploying the security solution.

Context… we don’t know what we don’t know

To be useful and useable in the security context and others, a perception platform must have the ability to adjust to changes in context, self-optimise and, crucially, self-learn, thereby improving its performance post-deployment. The combinations of potential contextual changes in a real-life environment, such as an airport or military base, are innumerable, non-deterministic, real-time, often analogue and unpredictable. The moment sensors, edge computing hardware and perception software are deployed in the field, myriad variables such as weather and terrain, as well as sensor mounting location and orientation, all represent a context shift where the perception system’s solution is no longer optimal. For example, it might be that a particular sensor system is deployed in an outdoor scenario with heavy foliage. Because the algorithm development or training was completed in the lab, the moving foliage, bushes or low trees and branches are classified as humans or some other false-positive result.
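One common field mitigation for the foliage problem described above – detections flickering in and out as branches move in the wind – is a temporal consistency check: a detection must persist across several consecutive frames before it is confirmed. A minimal sketch of that generic post-deployment heuristic follows; it is not Cron AI's self-learning approach, and the window sizes are arbitrary assumptions.

```python
# Illustrative temporal-consistency filter: a "person" detection is confirmed
# only if it appears in at least `required` of the last `window` frames.
# Wind-blown foliage tends to produce intermittent hits that never persist.
from collections import deque

class PersistenceFilter:
    def __init__(self, window=5, required=4):
        self.history = deque(maxlen=window)  # recent per-frame hit flags
        self.required = required

    def observe(self, detected: bool) -> bool:
        """Record one frame; return True once the detection is confirmed."""
        self.history.append(detected)
        return sum(self.history) >= self.required

f = PersistenceFilter()
# Flickering misdetections (on/off/on/off/on) never reach 4 hits in 5 frames:
flicker = [f.observe(d) for d in [True, False, True, False, True]]
print(any(flicker))  # False
```

A steady real person, by contrast, is detected frame after frame and is confirmed as soon as the hit count reaches the threshold. The trade-off is a few frames of added latency, which is exactly the kind of tuning that field deployments end up doing by hand.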
Typically, heavy software customisation and onsite support by solutions vendors then ensue, with each and every sensor configuration needing to be hand-cranked to deliver something that is acceptable to the end customer.

A new approach for effective perception strategies

Cron AI is building senseEDGE, which represents a significant evolution in the development of sensing-to-information strategy. It is a 3D sensing perception and computer vision platform built from the ground up to address and remove the traditional deployment and performance bottlenecks we’ve just described.

The entire edge platform is built around a real-time, scalable and adaptable computing architecture that’s flexible enough for algorithms and software to scale and adapt to different workloads and contexts. What’s more, it has real-time contextual awareness, which means that the entire edge platform is, at any time, aware of the external context, the sensor and sensor architecture, and the requirements of the user application. Furthermore, when it produces the object output data, it is also aware of the user application’s reaction plan, which could be triggering an alarm or turning on a CCTV camera when a specific action is detected.

This approach turns traditional perception strategies on their head: it is a software-defined, programmable perception and computing architecture, not a hardware-defined one. It is free from the constraints imposed by traditional CPU or GPU compute dictated by hardware architecture providers, and it is not limited to the perception defined during design time. And, being fully configurable, it can be moved from one solution to another, providing computation for different modalities of sensors designed for different use cases or environments, and lowering the risk of adoption and migration for those developing the security solution.
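The reaction plan idea above – the platform knowing that a given detection should trigger an alarm or power up a CCTV camera – can be pictured as a rule table mapping detected events to user-configured actions. The event and action names below are invented for illustration and are not senseEDGE's actual interface.

```python
# Illustrative reaction plan: map detected events to user-defined actions,
# e.g. sounding an alarm or activating a CCTV camera. All names are invented
# for this sketch and do not reflect a real product API.

REACTION_PLAN = {
    "intrusion_detected": ["sound_alarm", "activate_cctv"],
    "loitering_detected": ["activate_cctv"],
}

def react(event: str) -> list:
    """Return the configured actions for an event (empty list if none)."""
    return REACTION_PLAN.get(event, [])

print(react("intrusion_detected"))  # ['sound_alarm', 'activate_cctv']
```

The significance of the platform being aware of this plan is that the mapping from perception output to action lives at the edge, next to the sensor, rather than in a remote server – shortening the path from detection to response.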
Future perception requirements

senseEDGE is also able to scale to future perception requirements, such as algorithms and workloads produced by future sensors, as well as computational techniques and neural networks that have yet to be invented. Meanwhile, the latency-versus-throughput trade-off is entirely software-defined and not limited by providers of computing architecture. Finally, being contextually aware, it is fully connected to the real world, where its reflexes adapt to even the subtlest changes in context – which makes all the difference in time and accuracy in critical security situations.

This is how Cron AI sees the future of perception. It means that security and safety innovators can now invest, with low risk, in a useable and scalable perception solution that can truly take advantage of current and future 3D sensor modalities.