AI-first World
Source: Dilbert
Amazon, Facebook, Google, IBM, and Microsoft have joined hands to create an organization, the Partnership on AI, to set ground rules for protecting humans (and their jobs) in the face of rapid advances in artificial intelligence.
The group released eight tenets that are evocative of Isaac Asimov's original "Three Laws of Robotics," which first appeared in a 1942 science fiction story.
Asimov’s rules are:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
The Partnership on AI's eight tenets are:
- We will seek to ensure that AI technologies benefit and empower as many people as possible.
- We will educate and listen to the public and actively engage stakeholders to seek their feedback on our focus, inform them of our work, and address their questions.
- We are committed to open research and dialog on the ethical, social, economic, and legal implications of AI.
- We believe that AI research and development efforts need to be actively engaged with and accountable to a broad range of stakeholders.
- We will engage with and have representation from stakeholders in the business community to help ensure that domain-specific concerns and opportunities are understood and addressed.
- We will work to maximize the benefits and address the potential challenges of AI technologies, by:
  * Working to protect the privacy and security of individuals.
  * Striving to understand and respect the interests of all parties that may be impacted by AI advances.
  * Working to ensure that AI research and engineering communities remain socially responsible, sensitive, and engaged directly with the potential influences of AI technologies on wider society.
  * Ensuring that AI research and technology is robust, reliable, trustworthy, and operates within secure constraints.
  * Opposing development and use of AI technologies that would violate international conventions or human rights, and promoting safeguards and technologies that do no harm.
- We believe that it is important for the operation of AI systems to be understandable and interpretable by people, for purposes of explaining the technology.
- We strive to create a culture of cooperation, trust, and openness among AI scientists and engineers to help us all better achieve these goals.