How do alien minds perceive the world? It’s an old and oft-debated question in philosophy. And it now turns out to also be a question that rises to prominence in connection with the concept of the ruliad that’s emerged from our Wolfram Physics Project.

I’ve wondered about alien minds for a long time, and tried all sorts of ways to imagine what it might be like to see things from their point of view. But in the past I’ve never really had a way to build my intuition about it. Now, in AI, we finally have an accessible form of alien mind.

We typically go to a lot of trouble to train our AIs to produce results like those we humans would produce. But what if we take a human-aligned AI and modify it? Well, then we get something that’s in effect an alien AI: an AI aligned not with us humans, but with an alien mind.

So how can we see what such an alien AI, or alien mind, is “thinking”? A convenient way is to try to capture its “mental imagery”: the image it forms in its “mind’s eye”. Let’s say we use a typical generative AI to go from a description in human language, like “a cat in a party hat”, to a generated image. At the beginning it’s still a very recognizable picture of “a cat in a party hat”. But it soon becomes more and more alien: the mental image in effect diverges further from the human one, until it no longer “looks like a cat” and in the end looks, at least to us, rather random.

There are many details of how this works that we’ll be discussing below. But what’s important is that, by studying the effects of changing the neural net, we now have a systematic “experimental” platform for probing at least one kind of “alien mind”.

We can think of what we’re doing as a kind of “artificial neuroscience”, probing not actual human brains but neural net analogs of them. And we’ll see many parallels to neuroscience experiments. For example, we’ll often be “knocking out” particular parts of our “neural net brain”, a little like how injuries such as strokes can knock out parts of a human brain. We know that when a human brain suffers a stroke, this can lead to phenomena like “hemispatial neglect”, in which a stroke victim asked to draw a clock will end up drawing just one side of it, a little like the way our pictures of cats “degrade” when parts of the “neural net brain” are knocked out. Of course, there are many differences between real brains and artificial neural nets. But most of the core phenomena we’ll observe here seem robust and fundamental enough that we can expect them to span very different kinds of “brains”: human, artificial and alien. And the result is that we can begin to build up intuition about what the worlds of different, and alien, minds can be like.

How does an AI manage to create a picture, say of a cat in a party hat? Well, the AI has to be trained on “what makes a reasonable picture”, and on how to determine what a picture is of. Then, in some sense, what the AI does is to start generating “reasonable” pictures at random, in effect continually checking what the picture it’s generating seems to be “of”, and tweaking it to guide it towards being a picture of what one wants.
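The “generate at random, check, tweak” loop described above can be sketched in miniature. The toy below hill-climbs a flat vector of “pixel” values under a hypothetical scoring function (`cat_in_party_hat_score` and its target pattern are invented stand-ins for the AI’s learned sense of what a picture is “of”); real generative AIs realize the same idea with learned models such as diffusion networks steered by a text–image scorer, not random tweaks.

```python
import random

def generate_guided(score, dim=16, steps=2000, step_size=0.1, seed=0):
    """Start from a random 'image' (a flat vector of pixel values) and
    repeatedly apply small random tweaks, keeping only those tweaks that
    make the image score higher under the guide function."""
    rng = random.Random(seed)
    image = [rng.uniform(0.0, 1.0) for _ in range(dim)]
    best = score(image)
    for _ in range(steps):
        i = rng.randrange(dim)                    # pick one "pixel" to tweak
        tweaked = list(image)
        tweaked[i] += rng.uniform(-step_size, step_size)
        s = score(tweaked)
        if s > best:                              # keep the tweak only if the
            image, best = tweaked, s              # image now "looks more like" the target
    return image, best

# Hypothetical "what is this a picture of?" guide: closeness of the vector
# to a fixed pattern standing in for "a cat in a party hat".
TARGET = [(i % 4) / 3 for i in range(16)]

def cat_in_party_hat_score(img):
    return -sum((a - b) ** 2 for a, b in zip(img, TARGET))

final_image, final_score = generate_guided(cat_in_party_hat_score)
```

The design point the toy preserves is that generation never consults the target directly: it only ever asks the scorer “what does this look like?”, which is why swapping in a differently trained (or deliberately modified) scorer steers the same loop toward very different “mental images”.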