The AI Now Institute, People Power Media, and the Anti-Eviction Mapping Project today launched Landlord Tech Watch, a crowdsourced map examining where landlords are deploying surveillance and AI technologies in ways that could disempower tenants and community members. The site invites tenants to self-report the types of tech being installed in their residences and neighborhoods, and it aims to serve as an educational resource on the widespread use and harms of these technologies.
Currently, there's little legislation governing the collection and use of data in the context of real estate. Owners and landlords typically purchase and install tech products and platforms without discussing potential harms with their tenants, and sometimes without notifying them at all.
For instance, in New York City, rent-stabilized tenants at the Atlantic Plaza Towers in Brownsville were subjected to a facial recognition security system from a third-party vendor. Elsewhere in the city, an elderly tenant in Hell’s Kitchen charged that a keyless system installed by his landlord was too complicated, and feared that his movements would be tracked through the technology.
Residents and local elected officials were quick to rail against the systems, and last October, the City Council proposed legislation that would force landlords to provide tenants with traditional metal keys to enter their buildings and apartments. The Hell's Kitchen tenant, along with his neighbors, secured the right to physical keys in May after suing the landlord.
Landlord Tech Watch aims to give tenants and researchers a better sense of the scope and scale of landlord technology currently in use, such as camera, payment, and screening systems. It includes examples of different types of tech and the specific harms associated with each, along with a deployment map indicating where such tech is being used and a survey encouraging people to share their experiences with the technologies being installed in their buildings and neighborhoods.
Residents at 406 West 129th Street in Manhattan have already used Landlord Tech Watch to report that intercoms from GateGuard have been installed at buildings without permission. (CNET recently reported GateGuard has been pitching its technology to landlords in New York as a way to sidestep rent-control regulations.) At 61 Wyckoff Ave in Brooklyn, a tenant claims the landlord recently replaced buzzers with new camera-equipped electronic buzzers.
“Facial and movement recognition cameras made by the Israeli-based FST21 [have been installed in our building],” a resident of New York’s 10 Monroe Street wrote. “This came after Hurricane Sandy inflicted damage on the building. The landlord then installed this without our consent … We don’t know what happens with the data being collected about us. It also doesn’t work well, and we all have to do humiliating dances to be recognized by it.”
The Landlord Tech Watch website notes that tech can be used to perform potentially prejudicial background, income, and credit checks on prospective tenants; while there's no registry of all tenant screening companies, it's estimated that there are over 2,000. (Last year, the U.S. Department of Housing and Urban Development began circulating rules that would make it harder for tenants to sue landlords when algorithms disproportionately deny housing to people of color.) Virtual property management platforms might prevent tenants from communicating with their actual landlord, resulting in neglect and less responsive management. And AI security systems could target and potentially endanger certain tenants depending on their ethnicity and skin color.
Consider facial recognition, which countless studies have shown to be susceptible to bias. A study last fall by University of Colorado, Boulder researchers showed that AI from Amazon, Clarifai, Microsoft, and others maintained accuracy rates above 95% for cisgender men and women but misidentified trans men as women 38% of the time. Separate benchmarks of major vendors' systems by the Gender Shades project and the National Institute of Standards and Technology (NIST) suggest that facial recognition exhibits racial and gender bias and can be wildly inaccurate, misclassifying people upwards of 96% of the time.