Black communities and AI: Helpful or harmful?

  • Feb 4
  • 6 min read
A black woman using a laptop in a dim room, illuminated by the screen light. Background features computer servers with blue glow.
Image by Christina Morillo from Pexels

Calling artificial intelligence (AI) a hot topic is an understatement - almost all industries are adopting AI in their business practices, and its global market value is projected to reach $1.81 trillion by 2030 (1). With increasing applications in health, education, retail and more, it’s likely that AI will be a mainstay in our lives for years to come. Whilst this technology has the potential to improve our lives by, for example, making infrastructure resilient against natural disasters or designing new drugs against superbugs, a lack of governance or policy could lead to unethical use and possible harm. In recent years we have already seen examples of AI having negative impacts on the health, environment and social wellbeing of predominantly Black areas. This blog post will discuss AI ventures that impact Black communities and what we can do to use these tools positively.



Our environment

Agricultural companies have adopted AI to predict optimal conditions for farming in African countries. This has proven particularly useful for farmers in Nigeria; Crop2Cash and Ignitia are using the technology to provide tailored recommendations based on weather, farm-level data and environmental data (2). Amini - a tech company rooted in the Global South - also combines information on soil quality, climate patterns and topography with satellite data to predict long-term sustainability and evaluate reforestation efforts in Kenya (3). There is clear potential for AI in optimising agriculture, but the effects of this technology on air quality and the climate cannot be ignored. Because AI models like ChatGPT are typically powered by large clusters of processors (or even supercomputers), they require large amounts of electricity for power and water for cooling. Training a single large language model was estimated in 2019 to produce 626,000 pounds of carbon dioxide - roughly equivalent to 250 round-trip car journeys between Glasgow and London (4,5). The Grok chatbot on X (formerly Twitter) has proven particularly harmful to a Black community in the USA: Grok is powered by the xAI supercomputer located near Boxtown, a historically Black neighbourhood in Memphis, Tennessee. xAI uses an estimated 300 megawatts of power (equivalent to the demand of 250,000 households), and more than 30 methane gas turbines have been installed for its operation. Whilst anonymous fliers mailed to Boxtown residents claimed xAI had low emissions, the turbines are known to release harmful pollutants (6,7). As more AI models are developed, it is crucial that their locations and environmental impact are monitored, especially as research demonstrates that Black individuals are often the first to experience the harms of pollution (8).


Our social wellbeing

A human hand overlaid with robotic features reaches upwards against a turquoise background.
Image by Thisisengineering from Pexels

In recent years new tools have been developed to identify and address disparities on local and national scales. Microsoft’s AI for Good Lab has developed a spatiotemporal machine learning model that detects malnutrition hotspots in Kenya, identifying the regions most vulnerable to food insecurity (9). Moreover, the AI Ethics Initiative launched by the NHS AI Lab has supported projects exploring care inequalities in marginalised groups, including an investigation into adverse incidents during birth and maternity care for Black mothers (10). Nevertheless, there is a danger that AI will reinforce bias. For example, an AI-based mortgage lending system was found to charge higher rates for Black borrowers compared to white borrowers (11), and a 2024 study of large language models (LLMs) demonstrated that they often suggest speakers of African American Vernacular English (AAVE) be assigned less prestigious jobs and receive harsher criminal sentences (12). AI models run the risk of perpetuating racial bias if their training data isn’t carefully curated and audited to remove it.


Our health

Portrait of a person with curly hair and calm expression. Red laser lines cross their face. Plain background, white shirt.
Image by cottonbro studio from Pexels

AI has a diverse range of applications in healthcare, with usage in diagnosis, treatment and disease monitoring. For example, a machine learning algorithm has been developed to predict vaso-occlusive crises (painful episodes of blood vessel blockages in sickle cell disease) based on physiological data captured by a smartwatch. An early alert system for these crises could allow patients to seek preventive treatments faster and avoid severe organ damage (13). Researchers are also using models to mitigate diagnostic inequalities in Black patients for conditions such as skin cancer and heart failure (14,15). Unfortunately, general health technology that utilises AI is often under-tested on Black patients, leading to disparities in outcomes. One model widely used to scan chest X-rays for disease was unable to identify serious diseases in marginalised groups, with high rates of misdiagnosis observed in Black women (16). Health technology must benefit all patients regardless of their ethnicity, and AI tools should not be an exception.



AI and machine learning have seen unprecedented spread, and it may become more difficult to control when and how they are used. Still, there are small changes we can make to our personal usage of and response to these tools to minimise their environmental impact and ensure they are used for our collective benefit:


  • Use prompt engineering - “the skill of crafting clear, effective inputs” - to get a better and faster answer from generative AI models. This saves computing time and electricity/water usage. One prompt engineering technique to try is ROCKS (17):

    • Role - state your role (“I am a PhD student teaching Year 1 Physics students”)

    • Objective - state your aim (“I would like to make my seminar on velocity more engaging”)

    • Community - describe your audience (“The seminar will be given to 100 students”)

    • Key/tone - describe the tone/style you want (“I would like an inclusive and enthusiastic tone that uses active participation”)

    • Shape - describe what kind of output you would like from the model (“I would like suggestions on a group work sheet and a live quiz for this seminar”)

  • Remove the Google AI overview - sometimes you’ll search a question or topic on Google and the AI overview appears first, potentially giving unreliable answers and using up energy. If you don’t want to see the overview, add “-AI” after your query (e.g. “What is the most dangerous metal -AI”)

  • Make your voice heard and amplify others - the more models and tools are developed, the more data centres and supercomputers must be built. It’s crucial that they are built ethically and sustainably, with the consent of the communities close to them. Keep an eye out for new constructions, related meetings and petitions. You can also support initiatives like the Black Tech Agenda and the petition for the residents of Boxtown.
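To make the ROCKS structure above concrete, here is a minimal sketch of a helper that assembles the five elements into a single prompt. The function name and example values are illustrative (not from the ROCKS source or any specific AI tool) - the point is simply that a well-structured prompt can be built once and reused, reducing back-and-forth with the model:

```python
def build_rocks_prompt(role, objective, community, key_tone, shape):
    """Combine the five ROCKS elements into one clear prompt string."""
    return (
        f"Role: {role}\n"
        f"Objective: {objective}\n"
        f"Community: {community}\n"
        f"Key/tone: {key_tone}\n"
        f"Shape: {shape}"
    )

# Example values taken from the ROCKS checklist above
prompt = build_rocks_prompt(
    role="I am a PhD student teaching Year 1 Physics students",
    objective="I would like to make my seminar on velocity more engaging",
    community="The seminar will be given to 100 students",
    key_tone="An inclusive and enthusiastic tone that uses active participation",
    shape="Suggestions for a group work sheet and a live quiz for this seminar",
)
print(prompt)
```

A prompt structured this way gives the model everything it needs in one request, which is exactly how prompt engineering saves computing time compared to several vague follow-up prompts.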



There are numerous advantages and disadvantages to AI; responsible use is key. As new models and tools emerge, it’s important to be aware of who can benefit and who may be negatively affected. We hope that this blog post has demonstrated that AI can be useful, but like many technologies, moderation and consideration are crucial.


By Vanessa Ankude, Blog Writer


References

  1. Lukan E. AI statistics 2025: top trends, usage data and insights. 2025 Aug 29. Available at: https://www.synthesia.io/post/ai-statistics

  2. Humeau E. AI for Africa: use cases delivering impact - Nigeria deep dive. 2024. Available at: https://www.gsma.com/solutions-and-impact/connectivity-for-good/mobile-for-development/wp-content/uploads/2024/07/NIGERIA_AIforAfrica.pdf 

  3. Humeau E. AI for Africa: use cases delivering impact - Kenya deep dive. 2024. Available at: https://www.gsma.com/solutions-and-impact/connectivity-for-good/mobile-for-development/wp-content/uploads/2024/07/KENYA_AIforAfrica.pdf 

  4. Strubell E et al. Energy and policy considerations for deep learning in NLP. 2019 Jun 5. Available at: https://arxiv.org/pdf/1906.02243

  5. Keep Scotland Beautiful. The carbon footprint of our day-to-day life. 2020. Available at: https://www.keepscotlandbeautiful.org/media/dg5jy3tq/carbon-footprint-everyday-life.pdf 

  6. Belanger A. Thermal imaging shows xAI lied about supercomputer pollution, group says. 2025 Apr 25. Available at: https://arstechnica.com/tech-policy/2025/04/elon-musks-xai-accused-of-lying-to-black-communities-about-harmful-pollution/ 

  7. Kerr D. Elon Musk’s xAI accused of pollution over Memphis supercomputer. 2025 Apr 25. Available at: https://www.theguardian.com/technology/2025/apr/24/elon-musk-xai-memphis 

  8. Djanegara NDT et al. Exploring the impact of AI on Black Americans: considerations for the Congressional Black Caucus’s policy initiatives. 2024. Available at: https://hai.stanford.edu/assets/files/2024-02/Exploring-Impact-AI-Black-Americans.pdf 

  9. Microsoft. AI in Africa: meeting the opportunity. 2024. Available at: https://blogs.microsoft.com/wp-content/uploads/prod/sites/5/2024/01/AI-in-Africa-Meeting-the-Opportunity.pdf 

  10. NHS England. NHS Lab AI Ethics Initiative: striving for health equity. Available at: https://digital.nhs.uk/services/ai-knowledge-repository/nhs-lab-ai-ethics-initiative/striving-for-health-equity 

  11. Mcllwain C. AI has exacerbated racial bias in housing. Could it help eliminate it instead? MIT Technology Review. 2020. Available at: https://www.technologyreview.com/2020/10/20/1009452/ai-has-exacerbated-racial-bias-in-housing-could-it-help-eliminate-it-instead/ 

  12. Hofmann V et al. AI generates covertly racist decisions about people based on their dialect. Nature. 2024;633:147-154. Available at: https://www.nature.com/articles/s41586-024-07856-5 

  13. Summers K et al. Predicting vaso-occlusive crises in sickle cell disease through digital, longitudinal tracking of wearable metrics and patient-reported outcomes. Blood. 2023;142(Suppl 1):1059. Available at: https://www.sciencedirect.com/science/article/abs/pii/S0006497123076620 

  14. Khatun N et al. Technology innovation to reduce health inequality in skin diagnosis and to improve patient outcomes for people of color: a thematic literature review and future research agenda. Frontiers of Artificial Intelligence. 2024;7:1394386. Available at: https://pubmed.ncbi.nlm.nih.gov/38938325/ 

  15. Wu J et al. Artificial intelligence methods for improved detection of undiagnosed heart failure with preserved ejection fraction. European Journal of Heart Failure. 2024;26(2):302-310. Available at: https://pubmed.ncbi.nlm.nih.gov/38152863/ 

  16. Yang Y et al. Demographic bias of expert-level vision-language foundation models in medical imaging. Science Advances. 2025;11(13). Available at: https://www.science.org/doi/10.1126/sciadv.adq0305 

  17. Armstrong A. Think before you prompt: reduce your AI carbon footprint with ROCKS. 2025 May 15. Available at: https://altc.alt.ac.uk/blog/2025/05/think-before-you-prompt-reduce-your-ai-carbon-footprint-with-rocks/#gref 
