Building AI for the Global South


Harm wrought by AI tends to fall most heavily on marginalized communities. In the United States, algorithmic harm may lead to the false arrest of Black men, disproportionately reject female job candidates, or target people who identify as queer. In India, those harms can fall on already marginalized populations like Muslim minority groups or people oppressed by the caste system. And algorithmic fairness frameworks developed in the West may not transfer directly to India or other countries in the Global South, where algorithmic fairness requires an understanding of local social structures, power dynamics, and the legacy of colonialism.

That’s the argument behind “De-centering Algorithmic Power: Towards Algorithmic Fairness in India,” a paper accepted for publication at the Fairness, Accountability, and Transparency (FAccT) conference, which begins this week. Other works that seek to move beyond a Western-centric focus include Shinto- or Buddhism-based frameworks for AI design and an approach to AI governance based on the African philosophy of Ubuntu.

“As AI becomes global, algorithmic fairness naturally follows. Context matters. We must take care to not copy-paste the Western normative fairness everywhere,” the paper reads. “The considerations we identified are certainly not limited to India; likewise, we call for inclusively evolving global approaches to Fair-ML.”

After conducting 36 interviews with researchers, activists, and lawyers working with marginalized Indian communities, the paper’s coauthors concluded that conventional measurements of algorithmic fairness rest on assumptions rooted in Western institutions and infrastructure. Among the five coauthors, three are Indian and two are white, according to the paper.

Google research scientist Nithya Sambasivan, who previously worked to create a phone broadcasting system for sex workers in India, is the lead author. Coauthors include Ethical AI team researchers Ben Hutchinson and Vinodkumar Prabhakaran. Hutchinson and Prabhakaran were listed as coauthors of a paper titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” that was also accepted for publication at FAccT this year, though the version submitted to FAccT does not include their names. That paper, which concludes that extremely large language models harm marginalized communities by perpetuating stereotypes and biases, was the subject of debate at the time former Google AI ethics co-lead Timnit Gebru was fired. Organizers of the conference told VentureBeat this week that FAccT has suspended its sponsorship relationship with Google.

The paper about India identifies factors commonly associated with algorithmic harm in the country, including models overfit to digitally rich profiles (which in India usually means middle-class men) and a lack of ways to interrogate AI systems.

As a major step toward progress, the coauthors point to the AI Observatory, a project to document harm from automation in India that launched last year with support from the Mozilla Foundation. The paper also calls for reporters to go beyond business reporting and ask tech companies tough questions, stating, “Technology journalism is a keystone of equitable automation and needs to be fostered for AI.”

“While algorithmic fairness keeps AI within ethical and legal boundaries in the West, there is a real danger that naïve generalization of fairness will fail to keep AI deployments in check in the non-West,” the paper reads. “We must take pains not to develop a general theory of algorithmic fairness based on the study of Western populations.”

The paper is part of a recent surge in efforts to build AI that works for the Global South.

A 2019 paper about designing AI for the Global South describes the term “Global South” as similar to the term “third world,” carrying a shared history of colonialism and development goals. The Global South is not simply the Southern Hemisphere: Northern Hemisphere nations like China, India, and Mexico are generally included, while Australia lies in the Southern Hemisphere but is considered part of the Global North. China seems to be set aside since its AI ambitions and results instill fear in politicians in Washington, D.C. and executives in Big Tech alike.

“The broad concern is clear enough: If privileged white men are designing the technology and the business models for AI, how will they design for the South?” the 2019 paper reads. “The answer is that they will design in a manner that is at best an uneasy fit, and at worst amplifies existing systemic harm and oppression to horrifying proportions.”

Another paper accepted for publication at FAccT this week and covered by VentureBeat examines common hindrances to data sharing in Africa. Written primarily by AI researchers who grew up or live in Africa, the paper urges approaches to data that build trust and account for historical context, as well as for current trends of Big Tech companies expanding their operations in Africa. Like the Google paper, that work draws its conclusions from interviews with local experts.

“In recent years, the African continent as a whole has been considered a frontier opportunity for building data collection infrastructures. The enthusiasm around data sharing, and especially in machine learning or data science for development/social good settings, has ranged from tempered discussions around new research avenues to proclamations that ‘the AI invasion is coming to Africa (and it’s a good thing).’ In this work, we echo previous discussions that this can lead to data colonialism and significant, irreparable harm to communities.”

The African data industry is expected to see steady growth in the coming years. Microsoft’s Azure and Amazon’s AWS opened their first African data centers in 2019 and 2020, respectively. Such trends have led to examination of data practices around the world, including in the Global South.

Last year, MIT hosted a three-day summit in Boston to discuss AI from a Latin American perspective. The winner of a pitch competition at that event was a predictive model for attrition rates in higher education in Mexico.

Above: The 2020 Global AI Readiness Index comparing preparedness and capacity across 33 different metrics

As part of the summit, Latinx in AI founder Laura Montoya gave a presentation about the Global AI Readiness (GAIR) score of Caribbean and Latin American countries, alongside factors like unemployment rates, education levels, and the cost of hiring AI researchers.

The inaugural Government AI Readiness Index ranked Mexico highest among Latin American nations, followed by Uruguay and Colombia. Readiness rankings were based on around a dozen factors, including skills, education levels, and governance. Cuba ranked last in the region. When coauthors introduced GAIR in 2019, they questioned whether the Global South would be left out of the fourth industrial revolution. That concern was echoed in the 2020 report.

“If inequality in government AI readiness translates into inequality in AI implementation, this could entrench economic inequality and leave billions of citizens across the Global South with worse quality public services,” authors of the report said.

In the 2020 GAIR, Uruguay inched ahead of Mexico. At #42 in the world, Uruguay is the highest-ranking country in Latin America. The top 50 nations in the AI readiness index are almost entirely in the Global North, and the report’s authors stress that having the capacity to advance isn’t the same thing as successful implementation.

Montoya insists that Caribbean and Latin American nations must consider factors like unemployment rates, and she warns that brain drain can be significant, leading to a lack of mentors for future generations.

“Overall, Latin American and Caribbean do have fairly high education levels, and specifically they actually develop more academic researchers in the area of AI than other regions globally, which is of note, but oftentimes those researchers with high technological skills will leave their country of origin in order to seek out potential job opportunities or resources that are not available in their country of origin,” she said.

Leda Basombrio is the data science lead at a center of excellence established by Banco de Credito del Peru. Speaking as part of a panel on the importance of working with industry, she described the difficulty of trying to recruit Latinx AI talent away from Big Tech companies like Facebook or Google in the United States. The majority of AI Ph.D. graduates in the U.S. today are born outside the United States, and about four out of five stay in the U.S. after graduation.

And solutions built elsewhere don’t simply transfer without consideration of local context and culture, she said. Americans or Europeans are likely unfamiliar with the financial realities in Peru, like microfinance loans or informal economic activity.

“The only people that are capable of solving and addressing [problems] using AI as a tool or not are ourselves. So we have to start giving ourselves more credit and start working on those fields because if we expect resolutions will come from abroad, nothing will happen, and I see that we do have the talent, experience, everything we can get,” she said.

AI policy: Global North vs. the Global South

Diplomats and national government leaders have met on several occasions to discuss AI investment and deployment strategies in recent years, but those efforts have almost exclusively involved Global North nations.

In 2019, OECD member nations and others agreed to a set of principles in favor of the “responsible stewardship of trustworthy AI.” More than 40 nations signed the agreement, but only five were from the Global South.

Later that year, the G20 adopted AI principles based on the OECD principles calling for human-centered AI and the need for international cooperation and national policy to ensure trustworthy AI. But that organization only includes six Global South nations: Brazil, India, Indonesia, Mexico, South Africa, and Turkey.

The Global Partnership on AI (GPAI) was formed last year in part to counter authoritarian governments’ efforts to implement surveillance tech and China’s AI ambitions. The body of 15 nations includes the U.S., but Brazil, India, and Mexico are its only members from the Global South.

Last year, the United States Department of Defense brought together a group of allies to consider artificial intelligence applications in the military, but that was primarily limited to U.S. allies from Europe and East Asia. No nations from Africa or South America participated.

Part of the lack of Global South participation in such efforts may have to do with the fact that several countries still lack national AI strategies. In 2017, Canada became the first country in the world to form a national AI strategy, followed by nations in western Europe and the U.S. An analysis released this week found national AI strategies are under development in parts of South America, like Argentina and Brazil, and parts of Africa, including Ethiopia and Tunisia.

Above: A global map of national AI policy initiatives according to the 2021 AI Index at Stanford University

An analysis published in late 2020 found a growing gap or “compute divide” between businesses and universities with the compute and data resources for deep learning and those without. In an interview with VentureBeat earlier this year about an OECD project to help nations understand their compute needs, Nvidia VP of worldwide AI initiatives Keith Strier said he expects a similar gap to form between nations.

“There’s a clear haves and have-nots that’s evolving, and it’s a global compute divide. And this is going to make it very hard for tier two countries in Africa, in Latin America and Southeast Asia and Central Europe. [I] mean that the gap in their prosperity level is going to really accelerate their inability to support research, support AI startups, keep young people with innovative ideas in these fields in their country. They’re all going to flock to big capitals — brain drain,” Strier said.

The OECD AI Policy Observatory maintains a database of national AI policies and is helping nations put ethical principles into practice. OECD AI Policy Observatory administrator Karine Perset told VentureBeat in January that some form of AI strategy is underway in nearly 90 nations, including Kenya and others in the Global South.

There are other encouraging signs of progress in AI in Africa.

The machine learning tutorial project fast.ai found high growth in cities like Lagos, Nigeria, in 2019, the same year the African Union formed an AI working group to tackle common challenges and GitHub ranked a number of African and other Global South nations among the fastest growing in contributions to open source repositories. In education, the African Master’s in Machine Intelligence program was established in 2018 with support from Facebook, Google, the African Institute for Mathematical Sciences, and prominent Western AI researchers from industry and academia.

The Deep Learning Indaba conference has flourished in Africa, but AI research conferences are generally held in North America and Europe. The International Conference on Learning Representations (ICLR) was scheduled to take place in Ethiopia in 2020 and would have been the first major machine learning conference held in Africa, but it was moved online due to the COVID-19 pandemic.

The AI Index released earlier this week found that Brazil, India, and South Africa have some of the highest levels of hiring in AI around the world, according to LinkedIn data.

Analysis included in that report finds that attendance at major AI research conferences roughly doubled in 2020. COVID-19 forced major AI conferences to move online, which led to greater access worldwide. AI researchers from Africa have faced challenges when attempting to reach conferences like NeurIPS on numerous occasions in the past. Difficulty faced by researchers from parts of Africa, Asia, and Eastern Europe led the Partnership on AI to suggest that more governments create visas for AI researchers to attend conferences, akin to the visas some nations have for athletes, doctors, and entrepreneurs.

Building a lexicon for AI in the Global South

In late January, Ranjit Singh, a member of the AI on the Ground team at Data & Society, launched a project to map AI in the Global South over the course of the year. As part of that project, he will collaborate with members of the AI community, including the AI Now Institute, which is working to build a lexicon for conversations about AI in the Global South.

“The story of how public-private partnerships are imagined in the context of AI, especially in the Global South, and the nature of these relationships that are emerging, I find that to be quite a fascinating part of this study,” Singh said.

Singh said he focuses on conversations about AI in the Global South because identifying key words can help people understand critical issues and provide information needed for governance, policy, and regulation.

“So I want to basically move from what the conversation and keywords that scholarly research, as well as practitioners in the space, talk about and use to then start thinking about, ‘OK, if this is the vocabulary of how things work, or how people talk about these things, then how do we start thinking about governance of AI?’” he said.

A paper published at FAccT and coauthored by Singh and the Data & Society AI on the Ground team considers how environmental, financial, and human rights impact assessments are used to measure commonalities and quantify impact.

Global South AI use cases

Rida Qadri is a Ph.D. candidate at MIT who grew up in Pakistan and studies urban information systems and the Global South. The papers about data and AI in India and Africa published at FAccT this week emphasize that the narrative around AI in the Global South often panders to particular ethics topics and to communities shaped by legacies of colonialism. Qadri agrees with this assessment.

“They’re thinking about those kinds of ethical concerns that now Silicon Valley is being critiqued for. But what’s interesting is they position themselves as homegrown startups that are solving developing world problems. And because the founders are all from the developing world, they automatically get a lot of legitimacy. But the language that they’re speaking is just directly what Silicon Valley would be speaking — with some sort of ICT for development stuff thrown in, like empowering the poor, like educating farmers. You have ethics washing in the Global North, and in the developing world we have development washing or empowerment speak, like poverty porn,” she said.

Qadri also sees ways AI can improve lives and says that building innovative AI for the Global South could help solve problems that plague businesses and governments around the world, particularly when it comes to working in lean or resource-strapped environments.

Trends she’s watching around AI in the Global South include security and surveillance, census and population counts using satellite imagery, and predictions of poverty and socio-economics.

There are also numerous efforts related to creating language models or machine translation. Qadri follows Matnsāz, a predictive keyboard and developer tool for Urdu speakers. There’s also the Masakhane open source project to provide machine translation for thousands of African languages to preserve local languages and enable commerce and communication. That project focuses on working with low-resource languages, those with less text data for machine translation training than languages like English or French.

Final thoughts

Research published at FAccT this week frequently expresses concerns about data colonialism from the Global North. If AI can build what Ruha Benjamin calls the New Jim Code in the United States, it seems critically important to consider trends of democratization, or the lack thereof, and how AI is being built in nations with a history of colonialism.

It’s also true that brain drain is a major factor for businesses and governments in the Global South and that a number of international AI coalitions have been formed largely without these nations. Let’s hope those coalitions expand. Doing so could involve reconciling issues between countries largely known for colonization and those that were colonized. And enabling the responsible development and deployment of AI in Global South nations could help combat issues like data colonialism in other parts of the world.

But issues of trust remain a constant, from the OECD and G20 agreements to the papers published at FAccT this week by researchers from Africa and India.

There can be a temptation to view AI ethics purely as a human rights issue, but the fair and equitable deployment of artificial intelligence is also essential to adoption and business risk management.

Above: Global AI adoption rates according to McKinsey

Jack Clark was formerly director of policy at OpenAI and is part of the steering committee for the AI Index, an annual report on the progress of AI in business, policy, and use case performance. He told VentureBeat earlier this week that the AI industry is industrializing rapidly, but it badly needs benchmarks and ways to test AI systems to move technical performance standards forward. As businesses and governments increase deployments, benchmark challenges can also help AI practitioners measure progress toward shared goals and rally people around common causes — like preventing deforestation.

The idea of common interests comes up in Ranjit Singh’s work at Data & Society. He said his project is motivated by a desire to map and understand the distinct language used to discuss AI in Global South nations, but also to recognize global concerns and encourage people to work together on solutions. These might include attempts to understand when a baby’s cough is deadly, as the startup Ubenwa is doing in Canada and Nigeria; seeking public health insights from search engine activity; and fueling local commerce with machine translation. But whatever the use case, experts in Africa and India stress that equitable and successful implementation depends on involving local communities from the inception.
