DLC518066: AGI Fundamentals for Business

Artificial Intelligence (AI): Its Impact and Our Responsibility (10 min read)

How AI gets built is currently decided by a small group of technologists. Because this technology is transforming our lives, it is in all of our interests to become informed and engaged. Transformative artificial intelligence is defined by the impact it would have on the world. A small number of people at a few tech firms working directly on AI do understand how extraordinarily powerful this technology is becoming. If the rest of society does not become engaged, it will be this small elite who decides how the technology changes our lives.

To change this status quo, Max Roser, founder and director of Our World in Data, sets out to answer three questions in his article:

  • Why is it hard to take the prospect of a world transformed by AI seriously?
  • How can we imagine such a world?
  • What is at stake as this technology becomes more powerful?

For AI, the spectrum of possible outcomes, from the most negative to the most positive, is extraordinarily wide. AI systems can also cause unintended harm when they fail or act differently than intended: the AI does what we told it to do, but not what we wanted it to do. Making sure that the development of artificial intelligence goes well is not just one of the most crucial questions of our time, but likely one of the most crucial questions in human history. This requires public resources: public funding, public attention, and public engagement. Currently, almost all resources dedicated to AI aim to speed up the development of the technology, while efforts to increase the safety of AI systems do not receive the resources they need.

Roser can only summarize some of the risks of AI in this short article. For a deeper treatment, he recommends the book The Alignment Problem by Brian Christian and Benjamin Hilton’s article ‘Preventing an AI-related catastrophe’, which cover some of the very worst risks of AI systems and what we can do now to reduce them.