Security in the Time of AI
Tripty Arya
May 3, 2025
C-Suite Leaders,
AI promises to transform multifamily operations — from leasing to maintenance, from compliance to resident experience. But with that opportunity comes an urgent challenge: how do you protect the integrity, privacy, and security of your data in an AI-driven environment?
For many executives, the concern is less about the value of AI and more about where their data goes, who has access, and whether existing controls are still enough. This is no longer just an IT issue. When AI is embedded into workflows that touch leases, financials, and personal communications, data security becomes a board-level concern.
The Real Risk Isn’t the AI — It’s the Architecture
The problem isn’t that AI is inherently insecure. It’s that most organizations are applying outdated models of security to fundamentally new technology. Certifications like SOC 2 and ISO 27001 are useful, but they were designed for static, server-based applications. AI introduces non-deterministic behavior, cross-functional data flow, and model-driven systems that learn and adapt across business functions.
Here’s why that matters:
Software solutions that simply “add AI” often rely on middleware, external providers, and data infrastructure that the vendor does not control. This creates hidden handoffs, where your data moves across systems you can’t audit. In these setups, legacy platforms become responsible for retrofitting security layers onto architectures never designed for them.
Non-native AI doesn’t “speak property management.” It lacks the domain-specific taxonomy and workflow context needed to recognize where data is sensitive or risk-prone.
These platforms may appear to offer quick wins — but they can be costly to secure, hard to scale, and risky to trust. When AI is not natively integrated, organizations must rely on documentation and external agreements — rather than enforceable, embedded infrastructure. That’s not governance. That’s hope.
AI-native platforms can enforce security at every layer — including data ingestion, model interaction, output delivery, and access control. Wrapped AI tools, by contrast, often obscure these layers behind proprietary integrations or third-party dependencies — making governance far more complex.
A Secure Deployment Framework: What, How, Who
To avoid these pitfalls and move forward confidently, multifamily leaders should apply a combined lens: strategy, architecture, and governance working in concert. Here’s a streamlined framework to assess and implement AI securely across your organization:
1. [What] Data AI Will Touch
Not all data is equal — and fear should not be a blocker. PII, for example, is not the same as conversational data (e.g., messages). But your ability to scrub PII from all conversations is a clear marker of security maturity.
When working internally or with a vendor, start by asking: What data do we have? Understanding the different types of data you hold is crucial so each can be classified as operational, sensitive, or regulated. Your internal teams or preferred vendors can then architect around this or a similar taxonomy.
This is also why domain understanding is critical in AI solutions — far more so than it was in SaaS, where security remained largely a technical function.
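For technically minded readers, here is a minimal sketch of what a classification taxonomy and PII scrubbing can look like in practice. The field names, categories, and regex patterns are illustrative assumptions, not a production-grade scrubber:

```python
import re
from enum import Enum

class DataClass(Enum):
    OPERATIONAL = "operational"  # e.g., unit availability, work orders
    SENSITIVE = "sensitive"      # e.g., resident messages, contact details
    REGULATED = "regulated"      # e.g., SSNs, payment data

# Hypothetical taxonomy: map each field your systems hold to a class.
TAXONOMY = {
    "unit_number": DataClass.OPERATIONAL,
    "resident_message": DataClass.SENSITIVE,
    "ssn": DataClass.REGULATED,
}

# Simple regex-based PII scrubbing for conversational text.
PII_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),
]

def scrub_pii(text: str) -> str:
    """Replace recognizable PII with placeholder tokens before any model sees it."""
    for pattern, token in PII_PATTERNS:
        text = pattern.sub(token, text)
    return text
```

A real deployment would use far more robust detection, but even this toy version shows the principle: classify first, scrub by policy, and only then let AI touch the data.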
2. [How] Data Is Transformed and Processed
Push beyond checkboxes. Ask vendors about the ability to track and intervene in their pipelines. A platform built for AI will make it easy to implement controls at multiple levels. But wrappers or API-based “AI solutions” are often difficult to audit or govern. Architecture diagrams should clearly show redaction, deletion, and logging as built-in functions — not bolted-on processes. Wrapped tools may depend on policies they can’t enforce at the infrastructure level.
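To make “built-in, not bolted-on” concrete, the toy pipeline below treats redaction and audit logging as fixed stages that every input must pass through. The stage functions and hashing scheme are assumptions for illustration only:

```python
import hashlib
import re

def redact(text: str) -> str:
    # Placeholder redaction: mask long digit runs that could be identifiers.
    return re.sub(r"\d{4,}", "[REDACTED]", text)

class Pipeline:
    """Toy AI pipeline where redaction and logging are stages, not afterthoughts."""

    def __init__(self):
        self.audit_trail = []
        # Redaction always runs before the model stage; it cannot be skipped.
        self.stages = [("redact", redact), ("model", self._call_model)]

    def _call_model(self, text: str) -> str:
        return "summary of: " + text  # stand-in for a real model call

    def run(self, text: str) -> str:
        for name, fn in self.stages:
            text = fn(text)
            # Log a content digest per stage: traceable without storing raw data.
            digest = hashlib.sha256(text.encode()).hexdigest()[:12]
            self.audit_trail.append({"stage": name, "digest": digest})
        return text
```

Notice that redaction and logging here are part of the pipeline’s structure, not a policy document. That is the difference between enforceable infrastructure and documentation you hope vendors follow.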
3. [Who] Has Access to Information
Security breakdowns often stem from unclear access rights. Access control in AI isn’t just about logins and views — it’s about traceable accountability.
Who can use the system? Who can approve data flows? Who owns the outcome? In secure AI environments, these roles are not just assigned — they’re codified and learnable. The user enablement layer goes beyond screen-level permissions. It requires identity management and domain awareness built into the AI’s data pathways. Deploying tools like single sign-on (SSO), role-based access control (RBAC), and endpoint monitoring ensures your AI operating model is not just intuitive — but also secure.
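A minimal sketch of traceable accountability, with hypothetical roles and actions (a real deployment would integrate SSO and an identity provider rather than a hard-coded policy table):

```python
from datetime import datetime, timezone

# Hypothetical RBAC policy: each role maps to the actions it may take.
PERMISSIONS = {
    "leasing_agent": {"ask_model"},
    "compliance_officer": {"ask_model", "approve_data_flow", "view_audit_log"},
}

access_log = []  # every decision is recorded, allowed or denied

def authorize(user: str, role: str, action: str) -> bool:
    """Allow or deny an action, and log the decision either way."""
    allowed = action in PERMISSIONS.get(role, set())
    access_log.append({
        "user": user,
        "role": role,
        "action": action,
        "allowed": allowed,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return allowed
```

The key design choice: denials are logged just as carefully as approvals, so every question of “who did what, and who was stopped” has an answer.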
Security is an AI Enabler
Too often, security is seen as a brake on AI adoption. But experience across real estate, infrastructure, and utilities shows the opposite: well-secured environments unlock faster, broader, and safer AI deployment.
When data is trusted, access is governed, and usage is trackable, teams are empowered to experiment and scale — because the guardrails are already in place. Security doesn’t restrict innovation — it enables it.
Until another Saturday, next month.
Best,
Tripty
About This Email Series
This email is part of an ongoing Strategy Saturday series written for C-suite leaders and focused on the strategic shifts required to lead effectively in an AI-driven world. The insights and perspectives shared are intended to support strategic reflection and informed decision-making, rather than prescribe specific actions.