New York City moves to establish algorithm-monitoring task force
A while back, I wrote a post about the relationship between inequality and smart cities. Algorithms, which will play a key role in the functioning of the smart city ecosystem, are also susceptible to bias. So, in a move that is the first of its kind, New York City plans to institute a task force to evaluate algorithms used by municipal agencies. The task force, which will be formed three months after the bill's signing, is responsible for authoring a report with recommendations on the following questions:
How can people know whether or not they or their circumstances are being assessed algorithmically, and how should they be informed as to that process?
Does a given system disproportionately impact certain groups, such as the elderly, immigrants, people with disabilities, or minorities?
If so, what should be done on behalf of an affected group?
How does a given system function, both in terms of its technical details and in how the city applies it?
How should these systems and their training data be documented and archived?