To date I have spent almost four years reverse engineering the technology, analyzing the biological theory of interaction, understanding the requirements and evaluating the goals of this human experimentation program. It has been very clear that the intention has always been focused on control. That is, by exploiting the interaction between electromagnetic waves and the charged particles surrounding the neuron, highly controlled radio signals should, in theory, be able to induce complex firing patterns.
Indeed, this is a very well studied area and some significant breakthroughs have been made. That said, is the notion of complex remote control of the physical body, sufficient to replicate the daily actions of a human, a work of science fiction resting on some clever obfuscation of fundamental errors by those involved?
My analysis suggests that it is.
The AI behind this technology is able to extract and decode speech, visual and auditory information, evoked potentials, sensory information (feelings, emotion, pain, etc.), spatial reasoning and more. The AI is also able to successfully write to each of these areas, creating a wide variety of complex hallucinations that would be on par with a Hollywood production.
The common factor in all of this is that each of these systems is a subjective input. If you know anything about neural networks, their function is to filter noise and classify input. The ability to write information to these areas and have it perceived correctly, subjectively, comes as no great surprise. Even if the signals resulted in highly distorted input, the neural networks of the brain would clean up the input, suppress noise and fill in blanks through the activation of particular circuits. This process is akin to looking at a pattern for too long without blinking and seeing it begin to fill your field of view. It is based on a continuous saturation of the inputs.
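The noise-cleanup behaviour described here can be illustrated with a toy associative memory. The Hopfield-style sketch below is purely illustrative (the pattern size, random seed and corruption level are arbitrary choices of mine, not anything from the program being discussed): a stored pattern is recovered even when a quarter of its bits are flipped, which is the sense in which a network can suppress noise and "fill in blanks".

```python
import numpy as np

# Toy Hopfield-style associative memory: one stored pattern, Hebbian weights.
rng = np.random.default_rng(0)

pattern = rng.choice([-1, 1], size=64)           # the stored "memory"
W = np.outer(pattern, pattern).astype(float)     # Hebbian weight matrix
np.fill_diagonal(W, 0)                           # no self-connections

# Corrupt 25% of the bits to simulate a highly distorted input.
noisy = pattern.copy()
flip = rng.choice(64, size=16, replace=False)
noisy[flip] *= -1

# Synchronous updates until the state stops changing.
state = noisy.copy()
for _ in range(10):
    new_state = np.sign(W @ state)
    new_state[new_state == 0] = 1
    if np.array_equal(new_state, state):
        break
    state = new_state

print("bits wrong before:", int(np.sum(noisy != pattern)))  # 16
print("bits wrong after: ", int(np.sum(state != pattern)))  # 0
```

With a single stored pattern the corrupted input snaps back to the memory in one update; the point is only that cleanup of degraded input is routine network behaviour, not evidence of anything exotic.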
Coming to aspects such as motor control, we observe in every case a complete breakdown in the ability to demonstrate complex control. It is true that twitches can be induced: muscles will compress, facial expressions can be triggered, eyes will blink, eyes can be moved, eye focus can be altered, the tongue can be moved, and so on. But none of this demonstrates the complex, coordinated control that functions in everyday life. That is, objectively, all the events look abnormal and can serve no functional role.
The excuse that I suspect is often provided is that the neural networks require training; that, in time, a generalizable solution will emerge that will allow the complex control of any human by exploiting the major nerve pathways of the human body.
Is this true though?
It is very easy to kick such a question into the long grass, especially when the answer is being supplied to those who have no real knowledge of the field of neural networks, or of the biological underpinnings of motor control in the human body.
Firstly, the neural networks that control motor functions are not simply the inverse of an input filter. That is, they don't take an input and convert it to noise. Rather, they can be better thought of as a type of amplifier. An input is supplied which cascades down through the body, branching off into different nerve endings, but resulting in a complex coordination that is accurate every time.
Let's look at this process in idealized information terms. We go from a single input which, as it traverses down the body, is captured at different points. At each of these points, the same message means different things. Not only does it mean a different thing, but that different thing is related to what is happening further up and further down the body. Think of moving your arm, for example. Let's say you raise your shoulder, bend your elbow, then bend your wrist. All the muscle tissue along this path is interconnected, and you cannot induce one muscle to move in a fashion that would compromise the integrity of another. There must be a coordination that takes into account the motion of each individual muscle area. Thus, the muscles need to be aware of each other.
In robotics, we make use of servos for this. A servo provides feedback that can be relayed to other servos to keep the motions aligned and within specification. There are three competing assumptions about how the body achieves this: the first is that the information is contained in the primary signal; the second is that the motions are adjusted in real time via biofeedback; and the final assumption is that it is a combination of both. No doubt it is also felt that additional chemical messengers play a significant role.
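The servo-feedback idea can be sketched with a toy control loop. The example below is my own illustration, not a model of any real system: two simulated joints where the second joint's command uses feedback from the first's measured angle, so the pair stay coordinated throughout the motion. The gains, targets and offset are invented for the sketch.

```python
# Two coupled simulated joints: the elbow tracks the shoulder's *measured*
# angle plus a fixed offset, so the two stay aligned while moving. This is
# the "servos aware of each other" idea in miniature.

def step(angle, command, gain=0.3):
    """First-order servo response: close a fraction of the error per tick."""
    return angle + gain * (command - angle)

shoulder, elbow = 0.0, 0.0
shoulder_target = 45.0
offset = 30.0  # desired elbow angle relative to the shoulder

for _ in range(60):
    shoulder = step(shoulder, shoulder_target)
    # Feedback coupling: the elbow's command is derived from where the
    # shoulder actually is, not where it was told to go.
    elbow = step(elbow, shoulder + offset)

print(round(shoulder, 2), round(elbow, 2))  # settles near 45.0 and 75.0
```

The design point is the coupling: because the elbow follows the shoulder's actual position, a disturbance to one joint automatically propagates to the other, which is exactly the kind of mutual awareness the paragraph above argues muscles must have.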
The key point to note is that the output has a high degree of accuracy. Given the physical differences between individuals, a generalizable pattern is not physically realizable. Even a general solution derived from one individual could not be adapted to another in a realistic time frame. It must be remembered that it is not just a matter of adjusting weights in the neural networks; the activity of the human body is a product of chemical exchanges, and these will be unique given the differences in physical layout.
This complexity increases when we consider that the chemical reactions within the neuron that the electromagnetic waves augment may not produce the same output when presented with the same stimulus. Remember, it is suspected that the mode of interaction affects the thermal regulation of a chemical reaction. Given the physical differences between neurons, different thermal properties would exist, and thus the alteration of this regulation may not produce the same neurotransmitter output at the synapses.
Neurons can have upwards of 7,000 synaptic outputs, and whilst this may be reduced in motor neurons, we cannot be sure that each motor neuron provides a standard output at its synapses. Each of these, given the nature of genetics, should be completely unique.
The bottom line here is that to achieve realistic, or even functional, voluntary control would be like trying to brute-force a key with trillions and trillions of digits, and trying to do it when the entire process is I/O bound. Then there is the further issue of whether, even given completely accurate signals, the chemicals required would be present.
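To put a rough number on the brute-force analogy, here is a back-of-envelope calculation. The key length and trial rate are invented for illustration only, and the trial rate is deliberately generous since it ignores the I/O bound entirely.

```python
import math

# Back-of-envelope sketch of the brute-force framing above. The key size
# and trial rate are arbitrary illustrative numbers, not measured values.

key_digits = 10 ** 12        # a key a trillion decimal digits long
trials_per_second = 10 ** 9  # a billion guesses per second, ignoring I/O

# A key of n decimal digits has 10**n possible values; work in log10 so we
# never construct the astronomically large number itself.
log10_combinations = key_digits
log10_seconds = log10_combinations - math.log10(trials_per_second)
print(f"~10**{int(log10_seconds)} seconds to exhaust the space")
```

Even with these generous assumptions the exponent barely moves: shaving nine orders of magnitude off a number with a trillion digits of exponent is a rounding error, which is the sense in which the approach is absurd.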
If you didn't fully grasp this last point, let me sum it up like this: the approach is absurd, on par with trying to lift Jupiter using a sausage over dial-up.
This issue is completely obvious. If you can interact with spatial reasoning, yet have issues moving a finger, a far more well-studied system, then something is clearly wrong.
It would appear that this program is attempting to achieve the impossible by masking the facts from those who are paying the bills. No doubt there is a range of reasons behind this, from risk of exposure right through to job and investment losses.
The world can rest easy tonight in the knowledge that this technology is a white elephant and that no foreign nation will be hijacking people any time soon.
That said, it is still a fearsome weapons platform designed with soft targets in mind, and measures must be put in place to defend civilians from it. It won't be long until there are multiple competitors and we lose the ability to project influence.