
The Rise of AI

Artificial intelligence (AI) has been taking the world by storm. As society tries to embrace it, many people are divided on whether increasing the use of AI in our daily lives will help or hinder humanity. In this post, we explore what AI is, its risks and benefits, and what living with AI means for us as Black women.


What is AI?

In the simplest terms, AI is a technology that can take in information from different sources, process it, and use it to solve problems and come up with new ideas, in a similar way to humans. AI uses machine learning and deep learning to develop algorithms that mimic the decision-making processes of the human brain. Machine learning is a branch of computer science that uses computational models to estimate patterns in large amounts of organised (also known as ‘structured’ or ‘labelled’) data (e.g. selecting all the ‘cats’ from a list of words). Deep learning is a subset of machine learning that uses more complex processing methods to identify patterns in ‘unstructured’ data (e.g. selecting all the cats from a group of images of different pets by looking for cat-specific features like pointy ears and whiskers). Both machine learning and deep learning allow AI to ‘learn’ from existing data to make increasingly accurate predictions and classifications over time.
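To make the idea of ‘learning from labelled data’ a little more concrete, here is a minimal, purely illustrative sketch in Python using the scikit-learn library. The tiny pet-description dataset is made up for this example; real systems are trained on vastly more data.

```python
# A minimal, illustrative sketch of machine learning on labelled ('structured') data.
# The tiny dataset below is invented for this example only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each description has been labelled by a human as 'cat' or 'not cat'.
descriptions = [
    "pointy ears, whiskers, purrs when stroked",           # cat
    "whiskers, climbs the curtains, chases laser dots",    # cat
    "wagging tail, fetches sticks, barks at the postman",  # not cat
    "long floppy ears, hops around the garden",            # not cat
]
labels = ["cat", "cat", "not cat", "not cat"]

# The model turns each description into word counts and fits a simple classifier.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(descriptions, labels)

# Predict the label of a description the model has never seen before.
print(model.predict(["sharp claws, whiskers, purrs on the sofa"]))  # should print ['cat']
```

The same principle scales up: the more (and more varied) labelled examples a model ‘learns’ from, the more accurate its predictions tend to become.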


AI can be categorised into four types depending on its capabilities:

  1. Reactive machines – AI systems that are designed and trained to complete a specific task. This type of AI can’t store ‘memories’ and therefore can’t build on any new things that it ‘learns’. This is also known as ‘weak AI’. Industrial robots and virtual assistants like Siri use this type of AI.

  2. Limited memory – These AI systems are able to remember some of the previous actions that they performed, so they can use previous experience to ‘decide’ future actions. Self-driving cars use this type of AI for some of their decision-making processes.

  3. Theory of mind – This type of AI would have the ability to understand emotions and use that understanding to predict and interpret human behaviour and intentions. This would allow AI to work as part of human teams and behave more like humans than other forms of AI. Scientists are currently trying to develop this type of AI.

  4. Self-awareness – These AI systems would have consciousness and be able to understand themselves. This could allow AI to improve itself without any direct human input. This type of AI does not yet exist, but some scientists are trying to create it.


Today, AI is used in a number of different ways. Digital assistants like Apple’s Siri, Amazon’s Alexa, and Samsung’s Bixby use AI to recognise your speech patterns and find answers to your questions. OpenAI’s ChatGPT, which I’m sure you’ve all heard of, is an example of generative AI, which ‘creates’ new text from prompts using natural language processing (NLP). OpenAI’s DALL·E 2 is another example of generative AI; it can ‘create’ images and ‘artwork’ from text prompts. All applications of AI use ‘knowledge’ gained from human-generated training datasets to ‘create’ their outputs. Although developing AI algorithms relies on data that humans provide, once these models are tested, computers and machines can use AI to independently complete some tasks that previously only humans could do. This includes things like reading a map and rerouting when someone goes off-course, as the Global Positioning System (GPS) guidance in apps like Google Maps, Apple Maps, and Waze does.


What are the pros & cons of AI today?

The increasing use of AI in our daily lives has divided the population. Many people are excited about the advancements and wholeheartedly embrace AI, whilst others are less thrilled and warn against its dangers.


AI is everywhere. AI is generally considered to be more impartial, precise, and accurate than humans, and less likely to make processing mistakes. This means that AI can handle larger amounts of data and sustain detail-oriented tasks for longer than any one human can. In computer science, AI can help make many processes more efficient (1). AI can automate workflows and carry out repetitive tasks, which can help free up humans for other tasks that require more nuance. There are even AI tools that can generate new code from a text prompt. Although setting up your systems to use AI can be expensive, once it is up and running, using AI for these tasks can often be more cost-effective than hiring several people to do the same job. Some people believe that this use of AI will eventually result in humans being pushed out of their jobs in favour of computers. This may come to pass; however, it does not necessarily mean the end of human involvement in jobs that require repetitive tasks. People can work in tandem with AI. For example, humans may still be needed to perform quality checks on AI output or to help train AI models and algorithms.


AI is also being implemented in many fields outside of computer science. In healthcare, for example, AI has been used to help identify women who could benefit from extra mammogram screenings due to their high risk of developing breast cancer (2). The use of AI in healthcare can help doctors make faster, and potentially more accurate, medical diagnoses, which can ultimately improve patient outcomes and reduce long-term care-associated costs (3). AI has also been used for things such as scheduling appointments and predicting and understanding pandemics like COVID-19 (4). In Human Resources, AI has been used to assist with tasks like selecting candidates for interviews, helping to streamline the hiring process. In business, AI has been used to help companies understand their customers by identifying and analysing patterns in browsing and purchasing. AI has also been used to answer common questions, resolve issues, and escalate more complex requests to humans, which helps clients and customers get faster solutions to their problems at any time of day (1). In many of these applications, sensitive personal data is collected, which can raise ethical and privacy concerns if the data is not handled securely.


Over the years, AI has become increasingly integrated into our daily lives. This can benefit some people but seriously disadvantage others.


What does this mean for us?

Developing (training) AI algorithms requires human input, which means that the process is open to human error and bias. Unfortunately, this means that AI outputs can also be racially discriminatory, sexist, and ableist, just like humans can (5). For example, in hiring, AI has been shown to routinely reject applicants with minority-ethnic-sounding names (6). In policing, AI often overestimates the risk of offending for Black people (7). AI has also been shown to discriminate against people based on gender and personality (8,9). AI facial recognition algorithms have been used to determine a candidate’s ‘personality’ and subsequent ‘alignment’ with company culture (10). This doesn’t always favour Black women, as our neutral resting faces can often be misconstrued as displaying negative emotions. As Black women, we can be doubly disadvantaged by racial and gender-based discrimination from AI (11). These biases can come from the use of limited datasets to train the AI’s algorithm and/or from the designer of the algorithm (12). You can hear more about racial bias in AI from African AI and Technology Ethics and Policy researcher Favor Borokini in our podcast episode, which is available wherever you get your podcasts.
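To illustrate the point about limited datasets, here is a small, entirely synthetic sketch (not taken from any of the studies cited above). A simple model is trained on data in which one group is heavily under-represented, and its accuracy is then measured separately for each group; the under-represented group typically ends up with worse results.

```python
# A synthetic sketch of dataset-imbalance bias. All numbers are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two classes of 2-D points; 'shift' moves where this group's true boundary sits.
    X = rng.normal(size=(n, 2)) + shift
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Group A dominates the training data; group B is badly under-represented.
Xa, ya = make_group(1000, shift=0.0)
Xb, yb = make_group(20, shift=3.0)   # only 20 examples from group B
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb])
)

# Evaluate on fresh, equally sized test sets from each group.
Xa_test, ya_test = make_group(500, shift=0.0)
Xb_test, yb_test = make_group(500, shift=3.0)
print("accuracy for group A:", model.score(Xa_test, ya_test))
print("accuracy for group B:", model.score(Xb_test, yb_test))  # typically much lower
```

This is, in miniature, why the diversity of training data matters: a model can only be as fair as the data it ‘learns’ from.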


Current versions of AI are unable to recognise when they are being biased and also lack the ability to correct themselves. Until AI can both recognise and correct for bias, it is up to humans to ensure that the development of AI systems is as fair and ethical as possible (13). This can look like increasing the diversity of the datasets that AI is trained with. Introducing legislation to regulate AI can also help to reduce bias. In the EU, non-discrimination laws have been introduced to help reduce bias in AI development and outputs (14). The Artificial Intelligence Act, passed in March 2024, is the world’s first comprehensive AI law. Under the AI Act, AI systems would be overseen by humans to prevent harmful outcomes. Since the Act was only recently passed, it is too soon to tell how well it works. We can only hope that the law works as intended and helps to ensure fairer AI systems by reducing bias and discrimination.


In conclusion, AI is fast becoming an integral part of society and is already embedded in many areas, including healthcare, education, and law enforcement. Because AI systems are developed using human input, they are vulnerable to racial and gender-based bias, just like humans. Some work is being done to minimise this bias, including the introduction of regulatory laws. There is still work to be done to ensure that AI is fair to everyone, and a lot to learn about what AI is capable of. In the meantime, we can take advantage of the extremely useful tools that are available to us.


By Esther Ansah, Blog Writer


References

1. Duggal N. Advantages and Disadvantages of Artificial Intelligence [AI]. Simplilearn: AI & Machine Learning. 2024.

2. Gastounioti A, Eriksson M, Cohen EA, Mankowski W, Pantalone L, Ehsan S, et al. External Validation of a Mammography-Derived AI-Based Risk Model in a U.S. Breast Cancer Screening Cohort of White and Black Women. Cancers (Basel). 2022 Oct 1;14(19):4803.

3. Patil S, Shankar H. Transforming Healthcare: Harnessing the Power of AI in the Modern Era. International Journal of Multidisciplinary Sciences and Arts. 2023 Jul 10;2(1):60–70.

4. Syrowatka A, Kuznetsova M, Alsubai A, Beckman AL, Bain PA, Craig KJT, et al. Leveraging artificial intelligence for pandemic preparedness and response: a scoping review to identify key use cases. npj Digital Medicine. 2021 Jun 10;4(1):1–14.

5. Chen Z. Ethics and discrimination in artificial intelligence-enabled recruitment practices. Humanities and Social Sciences Communications. 2023 Sep 13;10(1):1–12.

6. Intahchomphoo C, Gundersen OE. Artificial Intelligence and Race: a Systematic Review. Legal Information Management. 2020 Jun;20(2):74–84.

7. Browning M, Arrigo B. Stop and Risk: Policing, Data, and the Digital Age of Discrimination. American Journal of Criminal Justice. 2021 Apr;46(2):298–316.

8. Leavy S. Gender bias in artificial intelligence: The need for diversity and gender theory in machine learning. Proceedings - International Conference on Software Engineering. 2018 May 28;14–6.

9. Avery M, Leibbrandt A, Vecci J. Does Artificial Intelligence Help or Hurt Gender Diversity? Evidence from Two Field Experiments on Recruitment in Tech. SSRN Electronic Journal. 2023 Feb 14.

10. Raso F, Hilligoss H, Krishnamurthy V, Bavitz C, Kim LY. Artificial Intelligence & Human Rights: Opportunities & Risks. SSRN Electronic Journal. 2018 Sep 25.

11. Schelenz L. Artificial Intelligence Between Oppression and Resistance: Black Feminist Perspectives on Emerging Technologies. 2022;225–49.

12. Panch T, Mattie H, Atun R. Artificial intelligence and algorithmic bias: implications for health systems. J Glob Health. 2019;9(2).

13. Adams R. Can artificial intelligence be decolonized? Interdisciplinary Science Reviews. 2021 Mar 1;46(1–2):176–97.

14. Wachter S, Mittelstadt B, Russell C. Why fairness cannot be automated: Bridging the gap between EU non-discrimination law and AI. Computer Law & Security Review. 2021 Jul 1;41:105567.


 

