Independent research tools, governance frameworks and advisory resources for leaders navigating complex, fast-moving decisions. Built for practice. Grounded in evidence.
Single-file HTML tools — complete a short survey and receive the file by email. Works fully offline once downloaded.
MeitY's AI Governance Guidelines distilled into board-ready reading, risk tools, maturity assessment, and evidence frameworks. Grounded in published policy, built for practice.
EU AI Act, NIST AI RMF, and India Guidelines mapped side-by-side to practical controls and board evidence requirements.
Interactive implementation of the GAIQ framework — qualifying GenAI use cases across PVI, TFR and ERC with NEXA/NOVA investment instruments.
Full papers available on ResearchGate. Expanded interactive tools built from each framework are in development.
"Technology is moving faster than governance, compliance and funding can keep up."
A design-science-based, gate-driven model for qualifying GenAI use cases. Use cases that are strategically attractive but weak in ethics or technology cannot advance. Validated on simulated enterprise scenarios — more consistent, auditable, and business-aligned than generic AI maturity models.
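The gate rule described above — a use case that is strategically attractive but weak on ethics or technology cannot advance — can be sketched as a simple veto check. This is a minimal illustration only: the dimension names (PVI, TFR, ERC) come from the framework, but the 1–5 scale, the threshold, and the all-gates rule here are assumptions, not the published GAIQ scoring.

```python
# Hypothetical sketch of a gate-driven qualification check.
# Dimension names (PVI, TFR, ERC) are from the GAIQ framework;
# the scale and threshold below are illustrative assumptions.

from dataclasses import dataclass

GATE_THRESHOLD = 3  # assumed minimum score on a 1-5 scale, per dimension


@dataclass
class UseCase:
    name: str
    pvi: int
    tfr: int
    erc: int


def qualifies(uc: UseCase) -> bool:
    # Veto logic: every gated dimension must clear the threshold.
    # A high score on one dimension cannot compensate for a weak one.
    return all(score >= GATE_THRESHOLD for score in (uc.pvi, uc.tfr, uc.erc))


strong = UseCase("invoice triage", pvi=5, tfr=4, erc=4)
weak_ethics = UseCase("customer profiling bot", pvi=5, tfr=5, erc=2)

print(qualifies(strong))       # True: clears every gate
print(qualifies(weak_ethics))  # False: strategic appeal does not override the ERC gate
```

The veto (`all(...)`) rather than a weighted average is what makes the model gate-driven: averaging would let a strong strategic score mask an ethics failure.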
Short, sharp, data-grounded. Each Signal surfaces a blind spot, names the tension, and asks the question that matters.
95% of enterprise GenAI pilots deliver no measurable P&L impact. The gap is not technical.
MIT and McKinsey point in the same direction: only 1% of companies are truly mature in GenAI deployment. The real challenge is governance, change management, operating models, and accountability — not model selection or vendor choice.
DTC brands cut out middlemen. AI agents are becoming the new middlemen — with zero brand influence over them.
Adobe confirms GenAI-driven retail traffic surged 693% YoY in 2025. Over one-third of shoppers now use AI assistants for purchase decisions. Every dollar spent on website optimisation and influencer content risks becoming a stranded investment.
Inference costs dropped 280-fold in two years. Experimentation is now cheap. Accountability for outcomes is not keeping pace.
Stanford's AI Index confirms the reset in deployment economics. Yet governance frameworks for measuring actual business impact haven't evolved. The blind spot: optimising cost per experiment rather than sustained value delivery at scale.
India hosts 50% of the world's GCCs. Cost and scale are no longer the differentiators — and AI is absorbing the execution layer GCCs were built to scale.
NASSCOM 2025: 50% of centres have evolved into transformation hubs. ER&D capabilities growing 1.3x faster than overall GCC growth. Boards expect innovation value beyond cost arbitrage — but measurement frameworks remain underdeveloped.
Only 39% of consumers believe organisations handle their data responsibly. Every personalisation strategy depends on those same consumers sharing more.
Three forces converging: regulatory compression (170+ countries tightening), end of third-party tracking, and Gen Z as dominant consumer by 2027. Organisations have an 18–24 month window to adapt on their own terms.
Third-party involvement in breaches doubled to 30%. As internal defences strengthen, threat actors systematically target the vendor ecosystem.
Verizon DBIR 2025: vulnerability exploitation jumped 34% YoY. The real challenge is designing operations that assume third-party failures will occur — and building resilience accordingly.
Confidence is rising. Protection is not. The risk is not buying insurance — it is believing it changes the survival curve.
Munich Re: modelled accumulation potential USD 20–46B against USD 16.3B in premiums. IAIS warns systemic cyber events could destabilise insurers themselves. The uninsured layer is where operational survival actually sits.
When decisions scale faster than responsibility. Average Responsible AI maturity: 2.0 out of 4 across 750+ leaders in 38 countries.
PwC 2025: 56% place Responsible AI under first-line teams, yet half still struggle to turn principles into repeatable processes. In 2025, a major consulting firm refunded fees after AI fabrications appeared in a government report — undisclosed until errors surfaced.
#DK360 is an independent research and advisory practice by Deepak Kota — focused on helping leaders navigate governance, strategy and emerging technology through structured foresight, clear POV, and actionable frameworks.
All tools are made freely available to practitioners, boards, and policy professionals. Research is published for the community. DK360 Signals are published on LinkedIn.
All resources are published for research, learning and informational purposes only. They do not constitute legal, regulatory or professional advice. Users should seek qualified advice before making governance or compliance decisions. © 2026 Deepak Kota. All rights reserved. CC BY-NC-ND 4.0.