THE more powerful A.I. gets, the less humans understand about it, but perhaps we shouldn't judge ourselves too harshly for that.
As it turns out, A.I. doesn't even understand itself. Researchers from Carnegie Mellon University and the Massachusetts Institute of Technology asked A.I. models to explain how much they were thinking about problems as they worked through them.
The models were pretty bad at introspection. Explaining their chain of thought, step by step, made them worse at some tasks, Emmy Liu, a doctoral candidate at Carnegie Mellon on the research team, told me.
But then, that's how people are, too.
Mr. Wolfram wrote: "Show yourself a picture of a cat, and ask, 'Why is that a cat?' Maybe you'd start saying, 'Well, I see its pointy ears,' etc. But it's not very easy to explain how you recognized the image as a cat."
It's one thing to honor the mystery of the human brain, but quite another to admit that artificial intelligence, both creative and conniving, is slipping away from our understanding.
The World Students Society thanks Peter Coy for his Opinion.