I’ve read that the things you perceive in a dream, both visually and aurally, produce the same activity patterns in your occipital and temporal lobes, respectively, as they would if you were awake and actually saw and heard them. (I came across the visual and the auditory findings separately, at different times.) If that’s the case, then it should be possible to record dreams by reading those patterns from the brain and interpreting them correctly.
Of course, there are a couple of problems with this approach.
1. How can you record the neural impulses without opening the brain? And even if you did open it, how would you measure activity in all three dimensions?
2. Once you had that data, how could you possibly figure out how to interpret it correctly? The mapping is probably even different for each individual.
The answer to (1) lies in the fact that neural impulses are electrical: each impulse emits a tiny electromagnetic pulse. That electromagnetic wave must extend beyond the brain, however weakly, because electromagnetic waves attenuate as they propagate but never drop to exactly zero intensity (until the distance is so great that the odds of receiving even a single photon become negligible), and the existence of EEGs shows that such signals can make it through the skull. So the trick would be to surround the sleeping person’s head with many, many extremely sensitive sensors.
But how do you determine which synapse a given impulse came from? The answer is triangulation. Arrange the sensors in a dense, three-dimensional pattern around the head, then resolve the slew of differently timed readings at different sensors into individual impulses and their locations. Any given three sensors (the minimum number required to triangulate the source of a signal) would be receiving signals from many different synapses at once, so the data from all the sensors really has to be processed and resolved as a whole.
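The localization step can be sketched as time-difference-of-arrival (TDOA) multilateration: comparing *differences* in arrival times across sensors cancels out the unknown emission time of the impulse. Everything below is made up for illustration, including the sensor positions, the unit propagation speed, and the use of five sensors (rather than the three-sensor minimum) for a better-determined solution; a real system would need a far finer search and a realistic head model.

```python
import numpy as np

C = 1.0  # hypothetical propagation speed, in arbitrary units

# five hypothetical sensor positions around a unit "head"
sensors = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
true_src = np.array([0.3, 0.6, 0.2])  # the impulse location we want to recover

# arrival time at each sensor (the solver never sees the emission time)
arrivals = np.linalg.norm(sensors - true_src, axis=1) / C

def tdoa_residual(p):
    """Mismatch between observed and predicted pairwise arrival-time differences."""
    d = np.linalg.norm(sensors - p, axis=1) / C
    return np.sum((np.subtract.outer(d, d) - np.subtract.outer(arrivals, arrivals)) ** 2)

# brute-force grid search over the unit cube (a crude stand-in for a real solver)
grid = np.linspace(0, 1, 21)
best = min(((x, y, z) for x in grid for y in grid for z in grid),
           key=lambda p: tdoa_residual(np.array(p)))
print(best)  # recovers a point at (or very near) the true source
```

In practice one would use a gradient-based solver instead of a grid search, and the hard part the post glosses over, separating overlapping signals from thousands of simultaneous impulses, happens before this step.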
This would take massive computing power, far more than we currently have, but fortunately computing power has historically doubled roughly every eighteen months to two years. We also might have completely new computing technologies on the horizon, such as quantum computers and optical computers.
Also, it’s not necessarily true that we need to explicitly calculate the triangulations. We’re going to use artificial neural networks (as described below), and the networks may figure out for themselves how to process the raw sensor data, with or without inventing their own intermediate triangulation step.
The answer to (2) is to use an artificial neural network. ANNs learn by being shown many input instances paired with their desired outputs. For example, to train optical character recognition, you’d show the network many different pictures of an A and “tell it” that the ASCII or Unicode representation of “A” is the desired output, and do the same for every other letter, digit, punctuation mark, and so on.
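This input-to-desired-output training loop can be shown with the smallest possible “network”: a single neuron learning to tell two tiny bitmaps apart by gradient descent. The 3×3 “images,” the noise level, and the learning rate are all arbitrary toy values, not anything from the post; a real OCR system would use a much deeper network, but the training principle is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# two hypothetical 3x3 "glyphs" standing in for pictures of letters
PLUS = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], float).ravel()
DOT  = np.array([[1, 0, 0], [0, 0, 0], [0, 0, 0]], float).ravel()

# training set: 50 noisy copies of each glyph, with the desired label attached
X = np.array([p + rng.normal(0, 0.1, 9) for p in (PLUS, DOT) for _ in range(50)])
y = np.array([1] * 50 + [0] * 50)  # 1 = PLUS, 0 = DOT

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# one neuron: 9 weights (one per pixel) plus a bias, trained on cross-entropy loss
w, b = np.zeros(9), 0.0
for _ in range(500):
    err = sigmoid(X @ w + b) - y          # prediction error per example
    w -= 0.1 * X.T @ err / len(y)         # gradient step on the weights
    b -= 0.1 * err.mean()                 # gradient step on the bias

# after training, a fresh noisy PLUS should score above 0.5
score = sigmoid((PLUS + rng.normal(0, 0.1, 9)) @ w + b)
print(score > 0.5)
```

The “show it inputs and tell it the outputs” description in the text is exactly this loop, just scaled up enormously in network size and training data.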
Similarly, we can show our subject particular images and sounds, record the brain’s activity via the method above, and feed the ANN the occipital and temporal lobes’ activity as the input, with the images shown and sounds played as the expected outputs for their respective lobes’ activity. (This would, of course, require an eye-tracking device.) That way, when you feed it the EM pulses recorded while the subject is dreaming, it already knows from its training how to interpret them. And you can create and store as many trained models as you want, one per individual.
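The decoding idea, pairing recorded activity with known stimuli and then matching dream activity against those pairs, can be sketched without any neural network at all, using nearest-neighbor lookup. The stimulus names, the 16-dimensional activity vectors, and the noise levels are all invented stand-ins for real recordings; the point is only the shape of the train-then-decode pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical "waking" phase: each shown stimulus evokes a characteristic
# activity vector (random prototypes standing in for recorded lobe activity)
stimuli = ["apple", "face", "house"]
prototypes = {s: rng.normal(size=16) for s in stimuli}

# recordings made while the subject viewed each stimulus: prototype plus noise
train = [(s, prototypes[s] + rng.normal(0, 0.3, 16))
         for s in stimuli for _ in range(20)]

def decode(activity):
    """Interpret a new activity vector as the nearest training recording's stimulus."""
    return min(train, key=lambda pair: np.linalg.norm(pair[1] - activity))[0]

# hypothetical "dream" recording: a noisy reactivation of the 'face' pattern
dream = prototypes["face"] + rng.normal(0, 0.3, 16)
print(decode(dream))
```

A trained ANN would replace `decode` with something that generalizes far better, and would have to reconstruct full images and sounds rather than pick from a fixed list, but the calibrate-on-known-stimuli-then-decode structure is the one the paragraph describes.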
So, that’s the idea. Now we just have to wait for the technology to catch up.
See also: Until the End of the World (1991)