Two of the well-known problems in A.I. research are maintaining “alignment” and avoiding “hallucinations.” Alignment refers to an A.I.’s ability to carry out the goals of its human creators, and thus to resist causing harm in the world. Hallucinations concern adherence to the truth: when A.I. systems get confused, they have a bad habit of making things up rather than admitting their difficulties.
A primary criticism of systems like ChatGPT, which are built using a computational technique called “deep learning,” is that they are little more than souped-up versions of autocorrect: all they grasp are the statistical connections between words, not the concepts the words stand for.
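To make the “souped-up autocorrect” criticism concrete, here is a minimal sketch, in Python, of the simplest possible statistical word model: a bigram counter that predicts the next word purely from how often word pairs co-occur in a toy corpus. The corpus and function names are illustrative assumptions, not drawn from ChatGPT or any real system; the point is only that such a model captures statistical connections between words while having no notion of the concepts behind them.

```python
from collections import Counter, defaultdict

# Toy "autocorrect" illustration: count which word follows which,
# then predict the statistically most frequent follower.
corpus = "the cat sat on the mat the cat ate the fish".split()

bigram_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    bigram_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the most common word observed after `word`, or None if unseen."""
    followers = bigram_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # 'cat' -- the most frequent pair in this corpus
print(predict_next("dog"))  # None -- the model knows nothing outside its counts
```

Modern deep-learning systems are vastly more sophisticated than this sketch, but the criticism quoted above holds that they differ in scale rather than in kind: prediction from observed patterns, not understanding.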