AI and comedians can impress with their wit or bamboozle you with bullshit

Two of the well-known problems in A.I. research are about maintaining “alignment” and avoiding “hallucinations.” Alignment involves an A.I.’s ability to carry out the goals of its human creators — in other words, to resist causing harm in the world. Hallucinations are about adhering to the truth; when A.I. systems get confused, they have a bad habit of making things up rather than admitting their difficulties. 

One primary criticism of systems like ChatGPT, which are built using a computational technique called “deep learning,” is that they are little more than souped-up versions of autocorrect — that all they understand is the statistical connections between words, not the concepts underlying words. 
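The "souped-up autocorrect" critique can be made concrete with a toy bigram model, which captures only which word statistically tends to follow which, with no concept behind the words. The sketch below is purely illustrative (the corpus and function names are invented here; systems like ChatGPT use deep neural networks, not bigram counts, though both are trained to predict likely next words):

```python
from collections import Counter, defaultdict

# Invented mini-corpus for illustration only.
corpus = (
    "the comedian tells a joke . the audience laughs . "
    "the model tells a story . the audience claps ."
).split()

# Count how often each word follows each preceding word --
# the "statistical connections between words" the critics describe.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation of `word`, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "audience" follows "the" most often in this corpus
print(predict_next("."))    # sentences here most often resume with "the"
```

The model "knows" that "audience" tends to follow "the" in this corpus, but it has no notion of what an audience is; the criticism is that deep learning systems differ from this only in scale and sophistication, not in kind.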
