University of Twente, The Netherlands
Generative AI is a form of artificial intelligence (AI) that produces content based on patterns it observes in large datasets. Although AI has been used extensively in many settings (e.g. in scientific work and in the classroom), excitement and concern around AI seemed to reach a crescendo at the beginning of 2023. For instance, teachers began sharing reflections on the implications these language models could have for their profession as students began turning in assignments generated by ChatGPT, while creatives – writers, graphic designers, musicians – expressed alarm over AI’s seemingly indiscriminate use of their intellectual property. Others voiced grave concern about the potential consequences AI misuses and abuses could have for human rights and democratic institutions. Accordingly, some governments have taken stringent action in response to the outcry.
Amidst these concerns, cities’ numerous and pluriform experiments with AI have been largely sidelined. In this contribution to EURA Conversations I want to draw attention to the potential role that cities could play in ensuring that AI benefits, rather than harms, people and societies. Since Amsterdam launched its digital city in 1994, cities have become key players in ICT innovation, developing innovative governance paradigms to mitigate the risks and harms related to data processes like AI while shaping national and European policy innovation on these technologies. I reflect on some lessons learned and open questions in what follows.
AI experiments in cities
AI shapes urban social and economic organization in both visible and invisible ways (Leszczynski, 2020), with important implications for a city’s workers, residents, and visitors. Today, AI is being used for many critical urban services like waste management, mobility, and telecommunications, and cities are exploring whether AI can be leveraged to find effective climate change mitigation and adaptation strategies and to spur on economic development (e.g. AI for circular economies).
Critics have, however, raised concerns about a widening digital divide that threatens to amplify entrenched urban inequalities. Moreover, AI is being applied in domains where discernment is especially important, where the datafication of human beings can be dangerous, and where AI may double down on a city’s bias-driven injustice (e.g. AI for predictive policing). One example is the Rotterdam social fraud detection platform, SyRI, which demonstrated how cities might automate injustice when the AI they use is trained on biased and inaccurate data.
Adopting AI can also be costly for cities, as it requires significant investments in relevant skills, abilities, and processes, as well as the resources to implement upgrades and stay abreast of technological developments. Not all local governments have this kind of economic power, and given the economic advantages AI can confer, AI deployment may increase inequalities both within and between cities. This raises the question: are cities adopting AI to embrace new opportunities, or rather to avoid being left behind?
AI does more than reproduce the past. The technology sets in motion new problematic dynamics, including concerns about people’s rights vis-à-vis the collection, accessing, sharing, use, storage, and ownership of their personal data. Relatedly, intensifying deployment of AI technology deepens the dependency between government and technology firms, potentially weakening the former’s ability to regulate the latter.
But local authorities are not without power when it comes to AI. They have both the authority and mandate to govern the contexts within which AI is deployed, and thus to regulate AI. For instance, cities have begun to use the procurement process as a window of opportunity to shape AI design. It is at this point that cities can set the conditions for AI development and deployment. Here, Amsterdam has also been at the forefront through its use of an array of diverse contractual clauses, a framework which is being used as a model by the EU in developing its own standards.
How to govern AI in European cities?
AI in Europe is currently governed through a patchwork of policies which largely aim to encourage and enable AI development (allowing Europeans to draw benefits from these systems) while instituting safeguards against AI-related risks. These national and supranational policies are often flanked by initiatives aiming to address AI risks at the design phase, spearheaded by local activists, academics (e.g. Sasha Costanza-Chock) and practitioners (e.g. the Distributed AI Research Institute), but also by some technologists. Many of these initiatives fall under the banner of responsible AI, emphasizing mutually reinforcing design features: human-centered AI ethics, multi-stakeholder co-creation, and regulatory capacity building.
Cities are actively engaging with the question of AI ethics, and they are informing their engagement with their experience in this domain. Rather than being simply needs-based, human-centered ethics place emphasis on human values and principles such as fairness and transparency. For instance, UNESCO identifies four key values that can stand as pillars of ethics for general-purpose AI: human rights, peace, diversity and inclusion, and environmental sustainability.
Establishing and using AI ethics is pursued, in part, by engaging citizens, particularly those from affected communities, in AI development. This is done to ensure that AI systems embed values and assumptions that align with the public interest (e.g. Kitchin, 2016). Non-Governmental Organizations (NGOs) and other civil society actors are also seen as important stakeholders in urban AI systems. Alongside citizens, they are engaged in the monitoring and evaluation of these systems as a way to ensure that AI systems work as intended.
I end these remarks by raising a question for those wishing to bring a responsible AI approach to AI development and governance. Cities can also suffer from blind spots, and I am concerned that when discussions about AI risks and harms center solely on human rights, they may trigger legalistic and bureaucratic governance impulses (the EU Artificial Intelligence Act emphasises risk reduction, which local governments enforce through impact assessments and contracts) without catalyzing the deep societal transformations that AI misuse clearly demonstrates are needed. Moreover, to the extent that cities are critically embedded in tightly coupled regional and global networks, it is an open question whether attempts to solve problems at a city level might shift the consequences outward to another jurisdiction, further fueling global injustices. Insofar as AI harms have implications for other species and future generations, I wonder whether moving beyond human rights and centering on a more-than-human approach (Brown, 2016) to urban AI governance might enable us to add to important questions of “justice” other questions relating to kindness, stewardship, and caring.
References:
Brown, F. L. (2016). The city is more than human: An animal history of Seattle. University of Washington Press.
Kitchin, R. (2016). The ethics of smart cities and urban science. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083), 20160115.
Leszczynski, A. (2020). Glitchy vignettes of platform urbanism. Environment and Planning D: Society and Space, 38(2), 189–208.
The expansion of AI and the rise of public awareness have certainly fired people’s imagination. Long points out the potential of cities as laboratories for experimenting with AI. Urban infrastructure can support the proliferation of sensors needed to provide real-time data for AI models. Discovered patterns may provide the basis for feedback to control systems of various kinds embedded in the physical fabric of the city. AI may enable an efficient city, but also a city where human behaviour is moulded to meet the commercial interests of private sponsors, in particular by nudging behaviour in desired directions and managing the attention of the citizen.
Rather than bureaucratic regulations, which might be skirted or nullified in the courts, what is required is transparency and openness to citizens as to what AI is doing, how it is doing it, and why. Decision-making, procurement, and data sources should all be transparent.
Long calls for a ‘more-than-human approach’ and points toward human-centred ethics. But what does that mean? The corporate focus on AI is often one of rational efficiency, data-driven decisions, and cost savings. AI predicts common patterns, identifying the average and the standard. As such, an urban AI application may tend towards promoting sameness. Human-centred AI in the urban environment must engage with difference, enabling diversity, bringing colour and innovation, and providing knowledge that empowers citizens to both collaborate and create.
Endless governance reports, regulations, government acts, risk assessments, frameworks, and standards will have little traction if we don’t address the anonymisation and alienation of urban environments. The real risk is that the proliferation of AI in cities becomes an instrument for further alienation.
Edited on: 5 November 2023
Thank you for your comment.
I agree wholeheartedly that transparency must be a cornerstone of AI ethics generally, and of urban AI ethics in particular. One of the core challenges tied to this is indeed control, as AI development and AI deployment are not necessarily coupled processes. Like any other tool, governments may deploy AI systems without contributing greatly to their design and/or development due to practical constraints (mainly in-house expertise). This is why not just the deployment but also the development of AI should be seen as a political act, and why it is important that cities contribute to these debates in order to minimize and mitigate AI-related risks.

The point I would like to converse with you further on is the issue of information sharing about AI. While I agree that government plays an important role in informing and “forming” intelligent AI citizens, I believe that local governments play a role just as critical in informing national and international governing bodies about the possibilities and the risks related to AI. From the production and use of the data feeding into AI models, to the design of the algorithms themselves, to their eventual use (e.g. as decision models), cities can share their experiences with and concerns about AI from the vantage point of those sitting closest to the ground. For instance, even the very collection of “big data” has important data justice and environmental implications, and there have been numerous documented instances of AI design failures. To make good use of these powerful new technologies, local governments must insist on having a hand in their design in order to safeguard local democratic and environmental resilience.