Saving this, from danhon’s s2e16 newsletter to come back to:

There’s a somewhat complicated paper/presentation from Geoffrey Hinton, who’s incredibly smart and knows what he’s talking about, about something called thought vectors[1], which I think is moderately readable. But if you want the main gist of it, it’s this: you don’t necessarily need to know what it is that you’re *thinking*; rather, if you can get a description of what’s inside your head, you can try to document or describe it as a vector. And it turns out that computers – at least, the ones we have so far – aren’t that bad at doing things to vectors. And if you can turn a sentence into a thought vector (which Hinton shows), what you’re doing is taking a sentence in a language and turning it into the underlying *thought* that the sentence expresses. And once you’ve done *that*, you’re a hop-skip-and-a-jump (here I really *do* expect Borenstein to jump in) away from predicting what thought vector follows any given thought vector. Because we can do that with sentences.
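
(If it helps to see the shape of the idea in code: here’s a minimal toy sketch, *not* Hinton’s actual model – his work uses recurrent encoder-decoder networks trained on real text – just random word embeddings standing in for learned ones, a made-up three-pair corpus, and a least-squares linear map as a stand-in for “predict the next thought vector.”)

```python
import numpy as np

# Toy sketch of the "thought vector" idea. A sentence's vector is just
# the average of its word embeddings (random here, learned in reality),
# and a linear map is fit to predict the vector of the sentence that
# tends to follow it. Corpus and dimensions are made up for illustration.

rng = np.random.default_rng(0)
DIM = 8  # embedding dimension, arbitrary for this sketch

# Hypothetical corpus of (sentence, following sentence) pairs.
corpus = [
    ("the cat sat down", "then it fell asleep"),
    ("the dog barked loudly", "then it chased the cat"),
    ("rain fell all day", "so we stayed inside"),
]

# Random word embeddings stand in for learned ones.
vocab = {w for a, b in corpus for w in (a + " " + b).split()}
emb = {w: rng.normal(size=DIM) for w in vocab}

def thought_vector(sentence: str) -> np.ndarray:
    """Encode a sentence as the mean of its word vectors."""
    return np.mean([emb[w] for w in sentence.split()], axis=0)

# Stack (current, next) thought-vector pairs.
X = np.stack([thought_vector(a) for a, _ in corpus])
Y = np.stack([thought_vector(b) for _, b in corpus])

# Fit W so that X @ W approximates Y: the crude "what thought
# follows this thought" predictor.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

predicted = thought_vector("the cat sat down") @ W
print("predicted next-thought vector:", np.round(predicted, 2))
```

The point isn’t the (deliberately dumb) model – it’s that once a sentence is a vector, “what comes next” becomes ordinary arithmetic on vectors, which is exactly the kind of thing computers are good at.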