
Algorithmic Bias In Smart Cities

Recently I posted about a bill passed by New York City that calls for the creation of a task force to monitor algorithms used by municipal agencies. It applies to any city agency that uses algorithms for targeting “services to persons, imposing penalties, or policing”.

According to a New Yorker piece, the bill came into being after New York City Council member James Vacca became frustrated at not knowing the “criteria and formula” behind police staffing decisions in his Bronx district. New York City already has a Mayor’s Office of Data Analytics, which uses algorithms for a variety of tasks, from identifying high-risk illegal building conversions to conducting risk-based fire inspections to granting parole. The New York Police Department has also budgeted $45 million over the next five years for predictive policing, a system driven by algorithms. But the bill, which is waiting on Mayor de Blasio’s desk to be signed into law, will have repercussions that go beyond crime.

For example, it could affect the city’s public schools if stakeholders demand transparency into the process by which the city allocates students to schools in the country’s largest school district. Similarly, teachers could demand to know more about the evaluation criteria used to assess their performance.

A precedent for the latter case already exists. In May this year, teachers filed a case against the Houston Independent School District (HISD), charging a violation of the Fourteenth Amendment, which guarantees “due process” of law before any decision that abridges an individual’s rights or immunities. Because the algorithm used to evaluate teachers (in this case, the Education Value-Added Assessment System, or EVAAS) is classified as a trade secret by HISD’s vendor, the teachers have no visibility into the process used to judge them.
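
To see why that opacity matters, here is a deliberately simplified sketch of what a generic “value-added” calculation can look like: each student’s actual test score is compared with a score predicted from prior performance, and the teacher’s rating is the average gap. The coefficients, field names, and data below are hypothetical; this is not HISD’s or its vendor’s actual EVAAS model, only an illustration of the kind of computation the teachers were unable to inspect.

```python
# Illustrative sketch of a generic "value-added" teacher score.
# NOT the proprietary EVAAS formula; the model, weights, and data are made up.

from statistics import mean

def predict_score(prior_score, slope=0.9, intercept=8.0):
    """Predict a student's current-year score from last year's score.
    The slope/intercept are placeholders; a real system would fit them
    from district-wide data."""
    return slope * prior_score + intercept

def value_added(students):
    """Average gap between each student's actual score and the score
    predicted from prior performance. A positive number is read as the
    teacher 'adding value' above expectation."""
    residuals = [s["actual"] - predict_score(s["prior"]) for s in students]
    return mean(residuals)

# Hypothetical classroom: prior-year and current-year scores per student.
classroom = [
    {"prior": 70, "actual": 75},
    {"prior": 82, "actual": 80},
    {"prior": 65, "actual": 74},
]

print(round(value_added(classroom), 2))  # 3.23 with these made-up numbers
```

Even in this toy version, the rating hinges entirely on how the prediction is built; without access to that model, a teacher cannot tell whether a low score reflects their teaching or the vendor’s assumptions.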

Other cities have also used algorithms to make city processes more efficient.

Chicago, the country’s third-largest city, has used code extensively to analyze city data. For example, it implemented analytics in its Department of Streets and Sanitation to aid rodent-baiting efforts; based on reports, that measure resulted in a 20% increase in staff productivity. In 2015, it also launched a pilot program to optimize its food inspection process.
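
As a rough illustration of what “analytics to aid rodent baiting” can mean in practice, the sketch below ranks hypothetical city blocks by recent rodent-related 311 complaints so crews can be dispatched to the worst spots first. The data, block identifiers, and function are invented for illustration; Chicago’s actual system is reported to be more sophisticated.

```python
# Hypothetical sketch: ranking city blocks for rodent-baiting crews by
# recent 311 complaint volume. Fields and data are invented.

from collections import Counter

# Each record is a (block_id, complaint_type) pair pulled from a 311 feed.
service_requests = [
    ("block-14", "rodent"), ("block-14", "rodent"), ("block-14", "sanitation"),
    ("block-07", "rodent"),
    ("block-22", "rodent"), ("block-22", "rodent"), ("block-22", "rodent"),
]

def baiting_priority(requests, top_n=2):
    """Count rodent-related complaints per block and return the blocks
    with the most complaints, i.e. where baiting crews go first."""
    counts = Counter(block for block, kind in requests if kind == "rodent")
    return counts.most_common(top_n)

print(baiting_priority(service_requests))
# [('block-22', 3), ('block-14', 2)]
```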

But algorithms are a double-edged sword.

Even as they streamline processes and help analyze reams of data, algorithms may also introduce bias. While several cases of such bias already exist, it is still difficult to define algorithmic bias and to ascertain its causes. To a large extent, that depends on the variables used to evaluate the algorithm. For example, Chicago’s food-inspections app used new metrics, such as the number of burglaries in a given area, in addition to standard variables such as an establishment’s inspection history. While the app has resulted in a 15% rise in the reported number of critical violations, the number of illnesses in a given neighborhood has remained flat. Consequently, it is difficult to evaluate the app’s efficacy, since an increase in the number of violations does not necessarily translate into efficiency or safety gains.
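
To make the point about variable choice concrete, here is a minimal, hypothetical sketch of a feature-weighted inspection risk score. It shows how a seemingly neutral signal such as nearby burglary counts can shift which establishments, and therefore which neighborhoods, get inspected most often. The weights, function names, and data are assumptions for illustration, not Chicago’s actual model.

```python
# Illustrative sketch of a feature-weighted risk score for scheduling food
# inspections. Features echo those mentioned above (nearby burglaries, an
# establishment's violation history), but weights and data are invented.

def inspection_risk(nearby_burglaries, past_critical_violations,
                    days_since_last_inspection,
                    weights=(0.2, 1.5, 0.01)):
    """Higher score = inspect sooner. Weights are arbitrary placeholders;
    a real system would learn them from historical inspection outcomes."""
    w_burglary, w_history, w_staleness = weights
    return (w_burglary * nearby_burglaries
            + w_history * past_critical_violations
            + w_staleness * days_since_last_inspection)

establishments = {
    "diner-a": inspection_risk(nearby_burglaries=4, past_critical_violations=2,
                               days_since_last_inspection=200),
    "cafe-b": inspection_risk(nearby_burglaries=12, past_critical_violations=0,
                              days_since_last_inspection=30),
}

# Sort so the riskiest establishments are inspected first.
for name, score in sorted(establishments.items(), key=lambda kv: -kv[1]):
    print(name, round(score, 2))
```

Note how the burglary weight, however it is chosen, ties inspection frequency to neighborhood crime statistics rather than to food safety alone, which is exactly the kind of design choice a transparency requirement would surface.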

At a recent conference, Michael Garris, a scientist with the National Institute of Standards and Technology, said algorithms were an area ripe for standardization. Bias inherent in algorithms is being studied at places like the Massachusetts Institute of Technology.

According to Kate Crawford from the AI Now Initiative, these are still “early days” for understanding algorithmic bias. “Just this year we’ve seen more systems that have issues, and these are just the ones that have been investigated,” she told MIT Tech Review in an email.

But even that might be difficult. Investigating the source code of algorithms used by municipal agencies is currently not possible, because vendors can invoke “proprietary software” as a defense against revealing the logic of their algorithms. In the New Yorker article, a city official argues that agencies cannot force vendors to reveal how an algorithm works because doing so would violate the terms of the agreement with the vendor. There is also the danger that such requirements would dissuade companies from working with city agencies, since disclosure would erode their competitive advantage.

When the Brennan Center for Justice at the NYU School of Law filed a FOIA request asking the NYPD to reveal the source code of its predictive policing algorithm, the department refused, arguing that disclosure might help criminals avoid areas where police patrols are being conducted. But the Center says that patrol officers are required to spend only “a fraction” of their time in a given area during their patrols, and that revealing the source code would not disclose the location of each and every officer. As of this writing, the consensus seems to be that the solution might lie in “qualified transparency”, which divulges only select portions of code to the public.
