Integrated AI: Unlocking Unprecedented Potential in Security
The Evolving Landscape of Security Analytics
The security industry is undergoing a significant transformation, driven by advancements in Artificial Intelligence (AI). Historically, the widespread adoption of video analytics was hampered by concerns over affordability and accuracy. However, modern visual AI solutions have made substantial strides, significantly improving on both these fronts. This progress is a result of innovation across the entire technology ecosystem, from silicon manufacturers to AI model developers and solution builders. The collective effort is geared towards creating more cost-effective and reliable solutions suitable for mission-critical applications.
Advancements in AI Capabilities
AI technology is moving beyond traditional video analytics, which relies on pre-trained models for video processing. Newer innovations enable both the creation of, and robust interaction with, information extracted directly from video data. These advancements include enhanced contextual understanding, the automation of workflows, and the capability to detect complex activities alongside very specific objects or attributes. This evolution is crucial for elevating security from a reactive posture to a proactive one, allowing valuable information to be processed continuously in the background. The benefits include significant time savings and expanded capabilities for security operators and managers. Furthermore, it opens up new avenues for leveraging security video data for operational and business-critical use cases, repositioning security teams from a perceived cost center to a recognized value creator within an organization.
Architectural Flexibility in AI Deployments
The architecture of AI deployments in security is a critical consideration, offering various options to meet diverse operational needs. Traditionally, an AI model must be trained before it can be used for analysis (AI inference). While AI model training is predominantly conducted in cloud environments, AI inference can occur at the edge, on-premise, or in the cloud. On-premise AI deployments can range from "near edge" infrastructure, such as AI appliances and servers, to "far edge" AI-enabled devices, including smart cameras. The choice of AI architecture is influenced by a multitude of constraints, including form factors, the availability of capital versus operating budgets, real-time response requirements, network bandwidth and reliability, installation locations, data security policies, and local or industry regulations.
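To make the constraint-driven choice above concrete, here is a minimal sketch of a placement helper. All names and thresholds are hypothetical illustrations, not from any vendor tool; a real decision would weigh many more of the constraints listed above.

```python
from dataclasses import dataclass

@dataclass
class DeploymentConstraints:
    """Hypothetical constraints influencing where AI inference runs."""
    realtime_required: bool      # are split-second decisions critical?
    uplink_mbps: float           # usable upstream network bandwidth
    data_must_stay_onsite: bool  # data security policy or regulation
    capex_available: bool        # capital budget vs. operating budget

def choose_inference_location(c: DeploymentConstraints) -> str:
    """Return a coarse placement: 'far-edge', 'near-edge', or 'cloud'."""
    if c.realtime_required or c.data_must_stay_onsite:
        # Latency- or privacy-bound workloads stay on-premise; capital
        # budget determines appliance/server vs. smart-camera inference.
        return "near-edge" if c.capex_available else "far-edge"
    if c.uplink_mbps < 10:
        # Too little bandwidth to stream video to the cloud reliably.
        return "near-edge"
    return "cloud"

# A real-time deployment with capital budget lands on near-edge hardware.
print(choose_inference_location(DeploymentConstraints(
    realtime_required=True, uplink_mbps=100,
    data_must_stay_onsite=False, capex_available=True)))
```

In practice these constraints interact (e.g., regulations can rule out the cloud even when bandwidth is plentiful), which is why the helper checks privacy and latency before cost.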
Near-Edge AI: Total Cost of Ownership Advantages
The AI capabilities of edge appliances and servers have improved dramatically over time, and many advanced visual AI applications now run effectively at the edge. Compared to cloud-only or smart-camera-only AI deployments, near-edge architectures present distinct advantages when specific requirements are present:
- Real-time analytics use cases where split-second decisions are critical
- Multi-modal analytics that ingest data from video alongside other sensors
- Running multiple analytics concurrently on a single video stream
- Scenarios with limited or unreliable upstream network bandwidth
- Future-proof flexibility in the choice of video analytics
- Environments with data privacy regulations or concerns

The price per video channel is a significant factor in AI adoption, and near-edge solutions frequently demonstrate a comparatively low total cost of ownership (TCO). Several factors contribute:
- Efficient utilization of compute budgets, as edge servers and appliances can process multiple video streams and be sized for near-full utilization
- A one-time purchase model as opposed to recurring service fees
- The ability to leverage existing camera and recorder infrastructure
- Remote manageability software tools that reduce the need for on-site maintenance
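The one-time-purchase versus recurring-fee distinction can be illustrated with back-of-the-envelope arithmetic. All figures below are invented for illustration only; real TCO comparisons would also include power, networking, and staffing costs.

```python
def edge_tco_per_channel_year(appliance_cost: float, channels: int,
                              annual_maintenance: float, years: int) -> float:
    """One-time appliance purchase plus maintenance, per channel per year."""
    total = appliance_cost + annual_maintenance * years
    return total / (channels * years)

def cloud_tco_per_channel_year(monthly_fee_per_channel: float) -> float:
    """Recurring per-channel service fee, per channel per year."""
    return monthly_fee_per_channel * 12

# Illustrative inputs only: a $6,000 appliance analyzing 16 streams over
# 5 years, versus a $15/channel/month cloud analytics subscription.
print(edge_tco_per_channel_year(6000, channels=16,
                                annual_maintenance=300, years=5))  # 93.75
print(cloud_tco_per_channel_year(15))                              # 180.0
```

The sketch also shows why sizing for near-full utilization matters: the appliance's fixed cost is amortized across every channel it processes, so idle capacity directly raises the per-channel figure.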
The Integrated AI Alternative: Beyond Discrete GPUs
Discussions about AI technology often prominently feature Graphics Processing Units (GPUs). However, discrete GPU cards represent just one type of AI accelerator, and they come with several compromises, including cost, power consumption, system form factor limitations, and availability issues. Fortunately, there are now more capable AI acceleration alternatives to GPU cards than ever before, providing greater choice for edge AI hardware, often at lower cost and with fewer trade-offs.

For instance, certain Central Processing Units (CPUs) and Systems on a Chip (SoCs) now incorporate "Integrated AI" technologies, such as integrated GPUs (iGPUs) and Neural Processing Units (NPUs), packaged together with the CPU. Edge AI appliances and servers built on these CPUs can be described as having integrated AI capabilities. Just as GPU cards and iGPUs were originally designed for graphics and later repurposed for AI, NPUs have proven to deliver highly power-efficient AI inference. NPUs are available from multiple silicon providers across various architectures. The "AI PC" standards, for example, require a certain level of TOPS (Trillions of Operations Per Second) from the NPU to run specific AI applications, and that same capacity can be put to work on other edge AI workloads relevant to security use cases.

Over time, these integrated AI technologies have become viable alternatives to discrete GPU cards, enabling highly capable, low-cost near-edge AI solutions and far-edge AI devices such as smart cameras. Integrated AI solutions typically operate at a lower power budget, allow much smaller system form factors, and are improving in size and capability more rapidly than discrete GPU cards. For light to medium AI use cases, inference on an iGPU is often sufficient. Selecting a CPU with both an iGPU and an NPU further extends AI capability, enabling heavier models and/or a greater number of video streams. Finally, advancements in CPU technology include AI instruction sets that make CPU-only AI solutions a reality, offering a low total cost of ownership (TCO).
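The CPU → iGPU → NPU → discrete-GPU progression described above can be sketched as a simple selection helper. The function name, the "GOPS per stream" demand metric, and every threshold are hypothetical and for illustration only; real sizing depends on the specific model, resolution, frame rate, and silicon.

```python
def pick_accelerator(model_gops: float, streams: int,
                     has_igpu: bool, has_npu: bool, has_dgpu: bool) -> str:
    """Map a workload to the lowest-cost adequate accelerator (illustrative)."""
    demand = model_gops * streams  # crude aggregate compute demand
    if demand < 50:
        return "CPU"     # AI instruction sets handle light models CPU-only
    if demand < 400 and has_igpu:
        return "iGPU"    # often sufficient for light-to-medium use cases
    if demand < 1500 and has_npu:
        return "NPU"     # power-efficient headroom for heavier workloads
    if has_dgpu:
        return "dGPU"    # fall back to a discrete GPU card
    raise ValueError("workload exceeds available integrated accelerators")
```

The ordering encodes the article's point: try the cheapest, lowest-power option that meets demand first, and reach for a discrete GPU card only when integrated AI cannot keep up.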
Looking Ahead: The Future of AI in Security
What AI-related developments should the security industry anticipate in 2026 and beyond? Several key trends are emerging:
- Expanded Hardware Platform Choice: While vendor-specific methods for running AI inference exist, a growing number of vendor-neutral approaches are becoming available, supporting a wider range of architectures. Open-source standards and frameworks like PyTorch, TensorFlow, ONNX, OpenVINO, and OPEA simplify the process for AI solution developers to support multiple hardware platforms from different silicon vendors; developers often only need to add a few lines of code to support new platforms. Video processing frameworks built on GStreamer, such as NVIDIA’s DeepStream and Intel’s open-source DLStreamer (included with the Metro AI Suite), are examples of this trend.
- Increased Flexibility: The industry continues to innovate with edge-only, cloud-only, and hybrid architectures, offering integrators and end-users a wide array of deployment options to suit their specific needs and environments.
- Generative AI Features: Beyond traditional video analytics, generative AI features are expected to be seamlessly incorporated into security applications, enhancing productivity and usability for security professionals.
- More Performance for Less Cost: Historical technological progression demonstrates a consistent trend of expanding capabilities coupled with declining costs. AI is no different, and this pattern is expected to continue, making advanced AI capabilities more accessible and affordable for the security industry.
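To illustrate the GStreamer-based approach mentioned in the trends above, here is a sketch of a DLStreamer-style pipeline description. The RTSP URL and model path are placeholders, and the exact elements and properties vary by DLStreamer version and target device, so treat this as a shape of the approach rather than a runnable command.

```shell
# Decode an RTSP camera stream, run detection inference on the iGPU,
# overlay the results, and display them (placeholder URL and model path).
gst-launch-1.0 rtspsrc location=rtsp://camera.local/stream ! decodebin ! \
  gvadetect model=/opt/models/person-detection.xml device=GPU ! \
  gvawatermark ! videoconvert ! autovideosink
```

The vendor-neutral point is visible in the `device=GPU` property: retargeting the same pipeline to a different accelerator is typically a one-token change rather than a rewrite.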
These advancements signal a future where integrated AI will play an increasingly vital role in enhancing security operations, providing greater efficiency, deeper insights, and more proactive defense mechanisms.
AI Summary
This article delves into the transformative impact of integrated Artificial Intelligence (AI) on the security industry, as explained by David Raske, Safety & Security Business Lead at Intel Corporation. Historically, video analytics faced barriers in affordability and accuracy, but modern visual AI solutions have overcome these challenges. The entire technology ecosystem, from silicon providers to AI solution builders, is focused on creating lower-cost, reliable solutions for mission-critical applications.

AI is advancing beyond traditional video analytics, which uses specifically trained models, to enable deeper contextual understanding, automated workflows, and the detection of complex activities and specific attributes. This shift allows security to move from a reactive to a proactive stance, processing valuable information in the background, saving time, and expanding capabilities for security operators. It also presents new opportunities to leverage security video for operational and business-critical use cases, transforming security teams from a cost center to a value creator.

The article discusses architectural flexibility, highlighting the trade-offs between cloud and edge deployments. On-premise AI can be deployed in "near edge" infrastructure (appliances, servers) or "far edge" devices (smart cameras). Key architectural decisions depend on factors like form factors, budget, real-time requirements, network bandwidth, data security, and regulations. Near-edge AI architectures offer advantages for real-time analytics, multi-modal data ingestion, running multiple analytics on a single stream, environments with limited network bandwidth, future-proofing flexibility, and data privacy concerns. Near-edge solutions often provide a lower total cost of ownership (TCO) due to efficient compute utilization, one-time purchases, leveraging existing infrastructure, and reduced on-site maintenance.

The concept of "Integrated AI" is presented as an alternative to discrete GPU cards. CPUs and Systems on a Chip (SoCs) now feature integrated AI accelerators like integrated GPUs (iGPUs) and Neural Processing Units (NPUs). Edge AI appliances and servers leveraging these components are described as having integrated AI capabilities. NPUs, in particular, offer power-efficient AI inference, with "AI PC" standards requiring significant TOPS (Trillions of Operations Per Second) for AI tasks. These integrated AI technologies provide viable, lower-cost alternatives to discrete GPUs, enabling capable near-edge AI solutions and far-edge devices. Integrated AI solutions often operate at lower power budgets, allow for smaller form factors, and are evolving rapidly. For lighter AI use cases, iGPU inference is sufficient, while CPUs with both iGPU and NPU extend AI capabilities for heavier models or more video streams. Advances in CPU technology also include AI instruction sets, making CPU-only AI solutions a cost-effective reality.

Looking ahead to 2026 and beyond, the security industry can expect expanded hardware platform choices due to vendor-neutral AI inference methods and open-source standards. Increased flexibility in deployment options (edge-only, cloud-only, hybrid) will emerge. Generative AI features will be seamlessly incorporated into security applications for enhanced productivity. Finally, the trend of increasing performance while decreasing cost, a historical pattern in technology, will continue with AI.