Tuesday, March 17, 2026

From Sky To Strategy: How India Is Writing The Rules For Autonomics

Cdr Rahul Verma (r)


“An autonomous system is only as trustworthy as the foundation upon which it is built.”

Autonomics isn’t a typo; it’s a reality. The age of isolated automation is ending. Across factory floors, contested battlefields, congested airways, and urban landscapes, autonomous systems are replacing humans in tasks that demand split-second decisions under radical uncertainty. From India’s indigenous Ghatak stealth UCAV to the AI-driven AkashTeer air defence system, from swarm drones protecting our borders to autonomous underwater vehicles securing our maritime approaches, the machines we build today will determine not just operational outcomes but the lives and livelihoods of millions tomorrow. Yet despite this proliferation, we lack a rigorous, common foundation for developing these systems with the trustworthiness they demand: a shared language, architecture, and engineering discipline. We term that missing foundation autonomics, and its absence threatens to constrain the very transformation India seeks to lead. In a 2019 paper published in Proceedings of the National Academy of Sciences (PNAS), David Harel and colleagues introduced the term “autonomics” as a proposed new foundation for engineering complex autonomous systems, arguing that such a discipline is required to bridge the gap between autonomy ambition and trustworthy autonomous systems.

This article, written in the context of Dubai Airshow 2025’s focus on next-generation aviation and defence technologies, proposes an architectural pathway for India to position itself not merely as a consumer or even producer of autonomous systems, but as a global standard-setter in autonomics: the engineering foundation that ensures these systems can be trusted to do what we intend, and nothing that we don’t.

From Automation to Autonomy: Why Foundations Matter Now

India’s defence and aerospace sectors stand at an inflection point. With ₹1.27 lakh crore in indigenous defence production, 30-fold growth in defence exports, and ambitious programmes like Make in India driving capability development, the nation is rapidly indigenizing platforms from Tejas fighters to Arjun tanks. Yet the next wave, already upon us, demands more than platforms. It demands autonomy: systems that perceive, decide, plan, act, and learn with minimal human intervention across environments we cannot fully predict at design time.

Consider the operational reality, which runs well beyond the storylines. The Technology Perspective and Capability Roadmap (TPCR) 2025 outlines requirements for 90-100 stealth UCAVs for the Army and 40-50 for the Air Force, each capable of Manned-Unmanned Teaming (MUM-T), AI-driven programmable flight profiles, and autonomous aerial refuelling. The recently approved Collaborative Long Range Target Saturation/Destruction System (CLRTS/DS) features autonomous take-off, navigation, detection, and payload delivery. The Indian Navy’s future vision includes Autonomous Carrier On Board Delivery, deck-based Collaborative Combat Aircraft (CCA) like Shield AI’s X-Bat, extra-large unmanned underwater vehicles, and shipborne laser weapons operating in contested electromagnetic environments where communication with human operators may be degraded or denied entirely. These are not science-fiction aspirations. They are funded programmes of record. Yet we face a fundamental gap: we are building autonomous systems faster than we are building the engineering foundations needed to ensure they can be trusted.

What Is Autonomics? A Conceptual Framework Beyond Autonomy

Autonomy describes what a system does, manifesting agency through five interacting functions: perception (interpreting stimuli), model update (building run-time representations of the environment), goal management (choosing relevant objectives), planning (computing action sequences), and self-adaptation (learning and adjusting over time). Any system embodying these functions with minimal human intervention can be considered autonomous.
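The five functions above can be pictured as a single sense-decide-act loop. The sketch below is a minimal, illustrative arrangement of that loop; the class, method names, and toy goals are assumptions for exposition, not drawn from any fielded system.

```python
class AutonomousAgent:
    """Toy arrangement of the five interacting functions of autonomy."""

    def __init__(self):
        self.world_model = {}   # run-time representation of the environment
        self.goals = []

    def perceive(self, stimuli):
        """Perception: interpret raw stimuli into usable observations."""
        return {k: v for k, v in stimuli.items() if v is not None}

    def update_model(self, observations):
        """Model update: fold observations into the run-time world model."""
        self.world_model.update(observations)

    def manage_goals(self):
        """Goal management: keep only objectives relevant to the model."""
        self.goals = [g for g in ("hold_station", "avoid_obstacle")
                      if g != "avoid_obstacle" or "obstacle" in self.world_model]

    def plan(self):
        """Planning: compute an action sequence for the active goals."""
        return ["evade"] if "avoid_obstacle" in self.goals else ["loiter"]

    def adapt(self, outcome):
        """Self-adaptation: record outcomes to adjust future behaviour."""
        self.world_model["last_outcome"] = outcome

    def step(self, stimuli):
        """One pass through the loop: sense, decide, act."""
        self.update_model(self.perceive(stimuli))
        self.manage_goals()
        actions = self.plan()
        self.adapt(actions[-1])
        return actions


agent = AutonomousAgent()
print(agent.step({"obstacle": (12.0, 4.5)}))   # -> ['evade']
```

The point of the arrangement is that each function is separately inspectable, which is precisely what the rest of the article argues a trustworthy foundation requires.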

Autonomics, by contrast, is the engineering foundation that enables us to specify, analyze, build, and validate such systems across their lifecycle. It is to autonomy what aerodynamics is to flight: the underlying science and engineering discipline that transforms aspiration into reliable, repeatable capability. Where autonomy is the property we seek in our systems, autonomics is the body of knowledge, methods, architectures, and tools that allow us to achieve that property systematically and at scale.

Why does this distinction matter? Because the challenges facing next-generation autonomous systems, like those India is fielding, cannot be addressed by merely adding sensors, upgrading processors, or expanding test matrices. They require fundamentally new approaches to three core challenges identified by leading researchers in the field:

  1. Specifying behaviour in the face of unpredictability
  2. Analysing system performance in rich, human-inhabited environments
  3. Combining model-based and data-driven approaches (classical software engineering and machine learning)

From Kill-Chain to Cognitive Web: The Autonomics Evolution

Traditional automated systems, from autopilots to early UAVs, operated on pre-programmed logic within tightly bounded environments. The “kill-chain” metaphor captures this linearity: sensor detects, operator decides, weapon engages. Each step waits for the previous one to complete. Each function is isolated, validated separately, and certified in controlled conditions.

Next-generation autonomous systems embody a logic entirely different from that of traditional systems. Consider a scenario increasingly common in India’s operational environment: a satellite flags suspicious maritime activity off our western coast. A MALE UAV refines the contact, distinguishing between fishing vessels and potential hostile craft. A naval destroyer integrates this intelligence with AIS data, coastal radar, and electromagnetic intelligence from shore stations. An AI-driven combat management system recommends engagement options, prioritizing loitering munitions over ship-launched missiles to minimize collateral risk in congested shipping lanes. The loitering munition autonomously adjusts its flight path to account for wind shear, updates target coordinates based on real-time movement, and executes terminal guidance using onboard electro-optical sensors, all while maintaining ROE compliance through continuous human oversight at the supervisory level.

This is not a kill-chain. It is a dynamic, distributed, and self-organizing cognitive web. Autonomics provides the architectural framework to validate not the individual platforms — satellite, UAV, ship, or loitering munition — but the emergent intelligence of the integrated system operating under uncertainty, denial, and stress.

Three Core Autonomics Challenges: An Indian Context

Challenge I: Specifying Behaviour Under Unpredictability

How do you specify what an autonomous convoy protection vehicle should do when it encounters a child’s ball rolling into its path on a crowded base road? Traditional requirements engineering assumes bounded, enumerable scenarios. But next-generation systems operate in “open-world” environments where the combinatorial explosion of possible situations, objects, agents, events, and their interactions defies exhaustive pre-specification.

India’s answer must lie in domain-specific ontologies: structured knowledge representations that capture not just objects and their properties but also their action semantics (what they do, what can be done with them, and how our systems should respond). For defence and aerospace applications, this means developing Indian standards for mission-relevant concepts: what constitutes a “hostile act,” how to balance force protection with civilian safety, and how to reconcile conflicting goals such as mission accomplishment and legal compliance in ambiguous scenarios.

The autonomics foundation India builds must enable systems to:

  • Query ontology servers when encountering novel situations (e.g., an unknown object in Bharat Forge Limited’s Omega One’s flight path triggering cloud-based identification before local engagement logic is applied with its Omega Nine wingman loitering munitions)
  • Explain decisions in terms accessible to human operators and post-mission investigators (critical for legal accountability in autonomous weapons employment)
  • Handle dynamic specification updates, allowing commanders to adjust ROE or mission parameters mid-execution as operational situations evolve
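The three capabilities above can be sketched as a single query-explain-update pattern. Everything in this sketch is an illustrative assumption: the ontology entries, the ROE names, and the stand-in “cloud” lookup are invented for exposition, not taken from any real system.

```python
# Local ontology: labels with action semantics (hypothetical entries).
LOCAL_ONTOLOGY = {
    "fishing_vessel":    {"hostile": False, "response": "monitor"},
    "fast_attack_craft": {"hostile": True,  "response": "escalate"},
}

def cloud_identify(track_signature):
    """Stand-in for a query to a remote ontology server for novel objects."""
    return {"hostile": False, "response": "shadow_and_report"}

class MissionLogic:
    def __init__(self, roe="weapons_tight"):
        self.roe = roe          # dynamic specification: adjustable mid-mission
        self.decision_log = []  # supports post-mission explainability

    def classify(self, label, signature=None):
        entry = LOCAL_ONTOLOGY.get(label)
        if entry is None:                  # novel situation: query the server
            entry = cloud_identify(signature)
        return entry

    def decide(self, label, signature=None):
        entry = self.classify(label, signature)
        action = entry["response"]
        if entry["hostile"] and self.roe == "weapons_tight":
            action = "request_human_authorisation"  # ROE overrides local logic
        self.decision_log.append((label, self.roe, action))  # explainable trail
        return action

    def update_roe(self, new_roe):
        """Dynamic specification update from the commander."""
        self.roe = new_roe


m = MissionLogic()
print(m.decide("fast_attack_craft"))    # -> request_human_authorisation
m.update_roe("weapons_free")
print(m.decide("fast_attack_craft"))    # -> escalate
print(m.decide("unknown_contact", signature="radar_blip_7"))  # -> shadow_and_report
```

Note how the decision log ties every action to the label and the ROE in force at the time, which is the accountability property the second bullet demands.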

Challenge II: Analysis in Human-Rich Environments

Indian autonomous systems will not operate in sterile test ranges. They will function amid the chaos of actual military operations, the crush of pedestrians and vehicles around airbases, the electromagnetic clutter of commercial shipping lanes, the dust, heat, altitude extremes, and marginal infrastructure of border regions. More critically, they will interface with humans who are not users or operators —civilians, friendly forces, adversaries, or neutrals — whose behaviour our systems cannot control but must anticipate and accommodate.

The autonomics foundation must provide:

  • Environment Modeling at Scale: Domain-specific simulation libraries capable of representing the three-dimensional complexity of Indian operational environments, not generic Western urban settings, but the specific challenges of subcontinental geography, demographics, and infrastructure. This includes monsoon weather effects on autonomous flight, extreme-altitude operations in the Himalayas, and tropical maritime conditions affecting sensor performance, all of which bear on platforms such as the BFL-Windracers Ultra MK II autonomous logistics carrier.
  • State-Aware Testing Infrastructure: Current testing validates that systems perform specified functions. Autonomics-based testing must validate that systems perceive the environment as intended. Did that convoy protection vehicle classify the child’s ball correctly, or did it coincidentally take the correct action based on a sensor misclassification? State-aware infrastructure monitors not just behaviour but the internal reasoning that drives behaviour, which is critical for AI-heavy systems where “black box” neural networks, like those in Tardid’s BrainBox, make life-and-death decisions.
  • Behavioural Coverage Metrics: India’s autonomous systems will be deployed in large numbers across diverse environments. The autonomics foundation must move beyond “miles driven” as a coverage metric to structured frameworks that measure composite state coverage. Have we tested this drone swarm’s coordination algorithm under realistic jamming, in dust storms, against adversarial spoofing, with partial communication loss, and while operating alongside manned aircraft? Autonomics provides the mathematics and tooling to systematically quantify and improve this coverage, and with it the quality of the automated lookout in Sagar Defence’s USVs.
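The idea of composite state coverage can be made concrete with a few lines of code. The sketch below treats the environmental conditions listed above as dimensions, enumerates their cross-product, and reports what fraction of combinations a test campaign has actually exercised; the dimension names and values are illustrative assumptions.

```python
from itertools import product

# Hypothetical test dimensions drawn from the conditions named in the text.
DIMENSIONS = {
    "jamming":  ["none", "partial", "full"],
    "weather":  ["clear", "dust_storm", "monsoon"],
    "comms":    ["nominal", "degraded", "lost"],
    "spoofing": ["absent", "present"],
}

def composite_coverage(test_log):
    """Fraction of the full condition cross-product exercised by test_log.

    test_log: iterable of dicts mapping each dimension name to the value
    that held during one test run.
    """
    all_states = set(product(*DIMENSIONS.values()))        # 3*3*3*2 = 54 states
    seen = {tuple(run[d] for d in DIMENSIONS) for run in test_log}
    return len(seen & all_states) / len(all_states)

runs = [
    {"jamming": "none", "weather": "clear",      "comms": "nominal", "spoofing": "absent"},
    {"jamming": "full", "weather": "dust_storm", "comms": "lost",    "spoofing": "present"},
]
print(f"composite state coverage: {composite_coverage(runs):.1%}")  # -> 3.7%
```

Two runs cover 2 of 54 composite states; “miles driven” would register both as equally valuable, while the composite metric exposes the untested 52. Real campaigns would weight states by operational likelihood and risk, but the counting principle is the same.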

Challenge III: The Hybrid Architecture Imperative

India’s autonomous systems will inevitably combine model-based components (traditional software engineering with explicit, human-authored logic that can be verified and certified) and data-driven components (machine learning models trained on vast datasets that excel at pattern recognition but resist formal verification). Neither approach alone suffices.

Pure model-based systems cannot handle the perceptual complexity of real-world environments: distinguishing a hostile gesture from a friendly wave, identifying concealed threats in cluttered backgrounds, or adapting to novel tactics that enemy forces haven’t used before. Pure ML systems lack the explainability, decomposability, and formal trustworthiness required for safety-critical decisions. We cannot simply “train” our way to ROE compliance or casualty minimization without explicit human-authored constraints. This coordination is the secret sauce in Tardid’s BrainBox.

The autonomics foundation India builds must therefore pioneer hybrid architectures that:

  • Use ML for perception and environment modelling (where pattern recognition excels)
  • Use model-based logic for goal management, planning, and decision accountability (where formal methods provide trustworthiness)
  • Provide architectural interfaces allowing both paradigms to coexist, with protective wrappers ensuring ML components fail safely when their confidence drops or when they encounter out-of-distribution inputs
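The three bullets above reduce to a small, checkable pattern: an ML perception component behind a protective wrapper that hands control to explicit, human-authored logic, and that fails safely when confidence drops or the input looks out-of-distribution. The sketch below is a minimal illustration under assumed names; the classifier, labels, and thresholds are stand-ins, not any real system’s values.

```python
CONFIDENCE_FLOOR = 0.80                      # assumed acceptance threshold
KNOWN_CLASSES = {"vehicle", "person", "animal"}  # the training distribution

def ml_classifier(frame):
    """Stand-in for a trained perception model returning (label, confidence)."""
    return frame.get("label", "unknown"), frame.get("confidence", 0.0)

def model_based_policy(label):
    """Explicit, human-authored decision logic: verifiable and certifiable."""
    return "halt_and_alert_operator" if label == "person" else "proceed"

def guarded_decide(frame):
    label, confidence = ml_classifier(frame)     # ML handles perception
    # Protective wrapper: distrust the ML output when it is unsure, or when
    # the label falls outside the distribution the model was trained on.
    if confidence < CONFIDENCE_FLOOR or label not in KNOWN_CLASSES:
        return "fail_safe_stop"                  # fail safely, as required
    return model_based_policy(label)             # accountable decision layer


print(guarded_decide({"label": "person",  "confidence": 0.95}))  # -> halt_and_alert_operator
print(guarded_decide({"label": "vehicle", "confidence": 0.40}))  # -> fail_safe_stop
print(guarded_decide({"label": "drone",   "confidence": 0.99}))  # -> fail_safe_stop (OOD)
```

The design choice to place the wrapper between perception and policy, rather than inside either, is what keeps the model-based layer certifiable on its own terms while still benefiting from ML perception.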

This is not mere academic theorizing. The AkashTeer system, already in service, embodies this hybrid approach, fusing ISRO satellite imagery, NAVIC navigation, ground radars, and AI-directed drone swarms into a decentralized “combat cloud.” The Autonomics Foundation formalizes this pattern, enabling it to be replicated, certified, and scaled across India’s autonomous systems portfolio.

India’s Autonomics Opportunity: From Consumer to Standard-Setter

The Strategic Stakes

The development of autonomous systems is accelerating globally. But the development of autonomics lags. No universally accepted framework exists for specifying, analyzing, and certifying next-generation autonomous systems—particularly for defence applications, where the stakes include not just commercial success but also human lives and strategic advantage.

India can either adopt frameworks developed elsewhere—accepting specifications, standards, and architectures that reflect foreign operational assumptions, legal frameworks, and threat perceptions—or we can lead in defining the autonomics foundation itself, shaping global standards to reflect Indian requirements, values, and capabilities.

The Dubai Airshow 2025 offers a strategic window. As global aerospace and defence leaders gather to showcase next-generation platforms, India can position itself as the thought leader on what makes those platforms trustworthy—on the autonomics foundation that separates genuine capability from mere technological demonstration.

The Future of Autonomous Systems: Built on Indian Foundations

The trajectory is clear. Autonomous systems will dominate future aerospace and defence operations—not because they are fashionable, but because the operational pace, electromagnetic complexity, and geographic scale of modern conflict exceed human cognitive capacity. Defence Minister Rajnath Singh captured this succinctly: “The wars of tomorrow will be fought with algorithms, autonomous systems and artificial intelligence.”

But wars and peace will also be governed by the foundations upon which those systems are built. If those foundations are opaque, proprietary, or misaligned with Indian operational requirements and democratic values, we cede strategic advantage regardless of platform performance. If those foundations are transparent, open, and shaped by Indian requirements, we lead a global shift toward autonomous systems that are not just capable but trustworthy, that amplify human judgment rather than replace it, that can explain their actions in terms we understand, and that fail safely when they encounter the inevitable unexpected.

Autonomics is that foundation. The question facing India at Dubai Airshow 2025 and beyond is not whether we will field autonomous systems; we already are. The question is whether we will lead in defining the engineering discipline that makes those systems worthy of trust. In that answer lies not just operational capability, but industrial advantage, strategic leverage, and moral leadership in the most consequential technological transformation of our era.

The foundation is ours to build. We need only begin.

Final Word from the Flight Deck

I have seen firsthand how naval aviators make split-second decisions amid shifting threat pictures, degraded sensors, and the fog of combat. That adaptability, born of training, experience, and institutional knowledge, is what we now seek to encode in autonomous systems. But human judgment draws on foundations we rarely articulate: intuitions about risk, context-dependent priorities, and ethical constraints embedded through culture and profession.

Autonomics is the attempt to make those foundations explicit, formal, and reproducible in machines. It is not about replacing the pilot or the commander. It is about ensuring that when we do entrust decisions to autonomous systems, because the mission demands it, because human reaction times are inadequate, because lives hang in the balance, those systems embody not just capability but the wisdom to use it well.

That is the autonomics foundation India must build. Not for the machines. For ourselves.

Cdr Rahul Verma (r), former Cdr (TDAC) in the Indian Navy, brings 21 years of experience as a naval aviator across diverse aircraft. A Sea King pilot and RPAS flying instructor, he has core competencies spanning product and innovation management, aerospace law, UAS, and flight safety. The author is an Emerging Technology and Prioritization Scout for a leading Indian multinational corporation, focusing on advancing force modernization through innovative technological applications and operational concepts. Holding an MBA and professional certificates from institutions including Olin Business School, NALSAR, Axelos, and IIFT, he is passionate about contributing to aviation, unmanned technology, and policy discussions. Through writing for various platforms, he aims to leverage his domain knowledge to advance unmanned and autonomous systems and create value for Aatmanirbhar Bharat and the Indian aviation industry.
