The Algorithm That Let in a Ghost Tenant
The Challenge
In fall 2024, Dominion Lease Partners launched a new AI-powered tenant approval tool intended to streamline the leasing process. Designed to improve efficiency and reduce bias, the tool evaluated incoming applications using pattern-matching logic trained on historical data, with no human review of its approvals. What the executive team did not anticipate was how quickly fraudsters would exploit that gap. Within weeks, an application built entirely on fabricated documents passed through the automated process undetected, and a lease was granted to a ghost tenant with a fake name, a stolen social insurance number, and forged pay stubs.
An internal team raised concerns when mail began piling up at the assigned unit and utilities were never activated. A manual review revealed that the documentation had never been verified by human eyes. The AI, trained to prioritize document formatting and keyword presence, was blind to contextual red flags. The vendor contract for the tool included no specific obligations for fraud detection, risk analysis, or adversarial testing. Worse, the executive board had not been consulted before the rollout, and there was no AI governance policy in place.
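The failure mode described above can be illustrated with a minimal, hypothetical sketch: a scorer that rewards formatting and keyword presence will happily pass a forged pay stub, because nothing in the score depends on whether the underlying facts check out. The keywords, threshold, and function names here are illustrative assumptions, not the vendor's actual logic.

```python
# Hypothetical sketch of a formatting/keyword-based document scorer.
# It approves anything that *looks* like a pay stub; nothing verifies
# that the employer, income, or identity actually exist.

KEYWORDS = {"pay stub", "gross pay", "net pay", "employer", "pay period"}

def keyword_score(text: str) -> float:
    """Fraction of expected keywords present in the document."""
    lowered = text.lower()
    return sum(kw in lowered for kw in KEYWORDS) / len(KEYWORDS)

def auto_approve(text: str, threshold: float = 0.8) -> bool:
    return keyword_score(text) >= threshold

forged = (
    "PAY STUB - Acme Corp (Employer)\n"
    "Pay period: 2024-09-01 to 2024-09-15\n"
    "Gross pay: $4,200.00   Net pay: $3,100.00"
)

print(auto_approve(forged))  # a well-formatted forgery sails through: True
```

A check like this measures only surface plausibility, which is exactly why the fabricated application cleared it.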
It became clear that the technology had been deployed without adequate safeguards, and that automation had become a blind spot rather than a benefit. The reputational damage to Dominion was compounded by coverage in real estate trade media that raised questions about the industry's growing reliance on artificial intelligence.
Our Solution
Our firm was brought in to conduct a rapid response assessment of the AI implementation and its surrounding governance. First, we recommended that all automated approvals be suspended until a thorough review was complete. Next, we worked with the client to develop a comprehensive AI governance framework, including risk classification, executive oversight, and fraud resistance testing.
We reviewed the algorithm’s training data, vendor onboarding protocols, and decision logic thresholds. A dual-approval workflow was introduced to ensure that all high-risk applications passed through human review. Contracts were revised to define security obligations, reporting thresholds, and liability in the event of downstream fraud. The executive board participated in a tailored AI risk workshop to better understand how these tools fit into broader regulatory and ethical expectations.
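The dual-approval workflow can be expressed as simple routing logic: applications above a risk threshold never receive an automated decision and are instead queued for human review. The risk score source, threshold value, and names below are illustrative assumptions, not Dominion's production configuration.

```python
from dataclasses import dataclass

@dataclass
class Application:
    applicant_id: str
    risk_score: float  # 0.0 (low) to 1.0 (high), from an upstream model

# Illustrative threshold; the real value would be set by governance policy.
REVIEW_THRESHOLD = 0.3

def route(app: Application) -> str:
    """Dual-approval routing: only clearly low-risk files stay automated;
    everything else is queued for a second, human approval."""
    if app.risk_score >= REVIEW_THRESHOLD:
        return "human_review"
    return "auto_approve"

queue = [Application("A-101", 0.12), Application("A-102", 0.67)]
print([route(app) for app in queue])  # ['auto_approve', 'human_review']
```

The design choice is deliberately conservative: the automated path is an exception granted to low-risk files, not the default for all of them.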
The Value
Dominion regained control over its leasing process and rebuilt confidence with stakeholders. Fraud prevention metrics improved significantly, and the board now receives quarterly reports on all emerging technology deployments. Clients and applicants expressed stronger trust in the process after public disclosures showed that Dominion was accepting accountability and leading with transparency.
Implementation Roadmap
1. Suspend automated approval processes and launch governance review
2. Implement dual-layer approval for flagged applications
3. Develop and adopt AI governance policy with executive oversight
4. Rework contracts with technology vendors to define security standards
5. Deliver board-level training on emerging technology risks and responsibilities
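The adversarial testing called for in the roadmap can be sketched as a loop that perturbs known-fraudulent documents and records how many variants the approval check still passes. Everything here is an illustrative assumption: the stand-in `approves` function, its keyword weakness, and the mutation strategy are hypothetical, not the vendor's system.

```python
import random

# Hypothetical stand-in for the vendor's approval check; in practice this
# would call the real model. It naively passes any document containing
# the phrase "gross pay" -- an assumed, illustrative weakness.
def approves(document: str) -> bool:
    return "gross pay" in document.lower()

def mutate(document: str, rng: random.Random) -> str:
    """Produce a crude adversarial variant: shuffle the lines and inject
    a keyword the model is suspected to rely on (keyword stuffing)."""
    lines = document.splitlines()
    rng.shuffle(lines)
    lines.append("Gross pay: $9,999.00")
    return "\n".join(lines)

def adversarial_pass_rate(seed_docs: list, trials: int = 100) -> float:
    """Fraction of mutated fraudulent documents the check still approves."""
    rng = random.Random(42)  # fixed seed for reproducible audits
    passed = sum(
        approves(mutate(rng.choice(seed_docs), rng))
        for _ in range(trials)
    )
    return passed / trials

fraud_seeds = ["Name: Jane Ghost\nEmployer: Nonexistent Inc."]
print(adversarial_pass_rate(fraud_seeds))  # 1.0: every forged variant passes
```

A pass rate near 1.0 on known forgeries is the kind of finding this testing is meant to surface before deployment, not after.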
Info Sheet
Industry Sector: Real Estate and Rental and Leasing
Applicable Legislation:
- PIPEDA
- Canadian Anti-Fraud Centre Guidelines
- Canadian AI Ethics and Use Recommendations
Necessary Action Type: AI Risk Governance and Fraud Detection Reform
Steps to Be Taken:
- Conduct adversarial testing of AI approval systems
- Pause automated approvals until risk frameworks are in place
- Establish AI governance policy with ethical use guidelines
- Train leadership on automation risks and mitigation strategies
- Integrate identity verification layers into automated workflows
Involved Third Parties:
- Automated leasing technology vendor
- External fraud analytics and machine learning consultants

