So for my engineering class I'm supposed to make a mock company based on an engineering discipline, and I picked cloud computing for my company. Of course, I had to differentiate from the current competition, so I developed a model for continuous development within a system controlled by the system itself. It's kind of long, but if you read it, thanks! I'd like some feedback on whether it's viable and whether it can be done.
A cycle loop is set up to perform routine resource monitoring and integrity checking through temperature checks, intrusion checks, load checks, SMART tests, and data block latency tests. After every cycle completes, notes are made documenting the changes since the last cycle, and an image of the system is stored if the configuration changed since the last cycle. This continues until there is a problem. A problem can be a malicious attack, a program error or bug, an intrusion, a vulnerability, or unexpected load.
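To make the cycle loop concrete, here's a minimal Python sketch of one iteration: run the checks, snapshot if the configuration changed, and flag problems. The check and snapshot functions are hypothetical stand-ins I made up; real ones would query sensors, SMART data, etc.

```python
import hashlib

# Hypothetical checks; real ones would read sensors, logs, SMART data, etc.
def run_checks():
    return {
        "temperature_ok": True,
        "no_intrusion": True,
        "load_ok": True,
        "smart_ok": True,
        "latency_ok": True,
    }

def read_config():
    # Stand-in for reading the system's current configuration
    return "max_load=0.8\nsnapshot_dir=/var/images\n"

def snapshot_system(note):
    print("snapshot taken:", note)

def monitoring_cycle(last_config_hash):
    results = run_checks()
    problems = [name for name, ok in results.items() if not ok]
    config_hash = hashlib.sha256(read_config().encode()).hexdigest()
    if config_hash != last_config_hash:
        snapshot_system("config changed since last cycle")
    if problems:
        print("problem detected:", problems)  # hand off to the preset rules
    return config_hash

h = None
for _ in range(3):       # three cycles here; the real loop would run forever
    h = monitoring_cycle(h)
    # time.sleep(cycle_interval) would go here
```

Only the first cycle takes a snapshot in this toy run, since the stand-in config never changes afterwards.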
There are preset rules to deal with each problem, along with last-resort options shared between problems. In the event of a malicious attack, the system searches for the threat using definitions that describe known malicious files, or it may detect a process executing malicious actions even if the files it uses are not malicious by themselves. If the threat is not removed, the system elevates itself and retries with increased permissions. If the malware can sustain itself through duplication or target mapping, the system blocks communication to the malware's delivery address and blocks the task from running. In the case of failure, the system restarts. In the case of continued failure after a restart, the system restores from an image backup. System managers can view images and flag them as release images, so the system knows which backups are optimal. The system will try to recover to the last working backup, and if that does not work, to the last flagged image.
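The escalation order described above can be sketched as a ladder of remediation steps that are tried until one reports success. The step names and the severity scoring are my own invented placeholders, not a real API:

```python
def escalate(threat, steps):
    """Try each remediation step in order; stop at the first that succeeds."""
    for name, step in steps:
        if step(threat):
            return name
    return "unresolved"

# Hypothetical steps; each returns True if the threat is gone afterwards.
# Here, a step succeeds only if the threat's severity is within its reach.
steps = [
    ("remove",               lambda t: t["severity"] <= 1),
    ("remove_elevated",      lambda t: t["severity"] <= 2),
    ("block_and_kill",       lambda t: t["severity"] <= 3),
    ("restart",              lambda t: t["severity"] <= 4),
    ("restore_last_working", lambda t: t["severity"] <= 5),
    ("restore_last_flagged", lambda t: True),  # universal last resort
]

print(escalate({"severity": 3}, steps))  # prints "block_and_kill"
```

The point of the ladder shape is that the last-resort options (restart, restore) stay universal across problem types, while the earlier rungs can be specialized per problem.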
The system may also deliver random images to different users to test which one works best, loosely based on the theory of evolution. It may also analyze the documented changes in each image individually and combine images by merging changes that were not on the same development path or branch originally but may be more optimal together than they were alone.
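A rough sketch of both ideas, assuming each image carries a set of change IDs plus an observed user score (all names and numbers below are invented for illustration):

```python
import random

# Hypothetical images: each is a set of change IDs plus an observed user score.
images = {
    "img_a": {"changes": {"dark_theme", "fast_login"}, "score": 0.72},
    "img_b": {"changes": {"new_search"},               "score": 0.81},
}

def serve_random_image(rng=random):
    # "Evolutionary" testing: each user session gets a random variant.
    return rng.choice(list(images))

def combine_best(k=2):
    # Merge change sets from the top-k scoring images into a candidate image,
    # even if the changes came from different branches originally.
    best = sorted(images, key=lambda n: images[n]["score"], reverse=True)[:k]
    merged = set().union(*(images[n]["changes"] for n in best))
    return {"changes": merged, "score": None}  # score unknown until tested

candidate = combine_best()
print(candidate["changes"])
```

A real version would have to detect conflicting changes before merging; this sketch just unions the sets.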
The system may also optimize hardware circuits for the repetitive tasks they are performing. This may be done off to the side of the main CPU, where a task can be specialized on an FPGA.
Most compilers and IDEs flag code when it is perceived to be suboptimal or incomplete. The system, which has access to its own source code, would be able to apply these recommended changes automatically, like optimizing if statements, cleaning up imports, etc.
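As one concrete example of an automatically applicable fix, here's a sketch using Python's `ast` module to apply a change many IDEs suggest: simplifying `x == True` to plain `x`. (Requires Python 3.9+ for `ast.unparse`; the `ready`/`launch` code is just a made-up target.)

```python
import ast

class SimplifyIf(ast.NodeTransformer):
    """Rewrite `x == True` comparisons to plain `x`, a common IDE suggestion."""
    def visit_Compare(self, node):
        self.generic_visit(node)
        if (isinstance(node.ops[0], ast.Eq)
                and len(node.comparators) == 1
                and isinstance(node.comparators[0], ast.Constant)
                and node.comparators[0].value is True):
            return node.left   # drop the redundant "== True"
        return node

source = "if ready == True:\n    launch()\n"
tree = ast.parse(source)
fixed = ast.unparse(SimplifyIf().visit(tree))
print(fixed)  # if ready:\n    launch()
```

The same transformer pattern generalizes: each "recommended change" becomes a small rewrite rule, and the system applies the rules and re-runs its tests before committing the result to a new image.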
A catalog of open-source methods could also be useful to the system. The system could test whether an old method and a candidate new method both return the same output, and if the new method performs better, the new method could replace the old one in a new image branch, available for random testing.
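That replace-if-equivalent-and-faster check can be sketched directly. Here `old_sum` and `new_sum` are toy stand-ins for the current method and a candidate pulled from the catalog:

```python
import timeit

def old_sum(xs):
    # Current implementation: explicit loop
    total = 0
    for x in xs:
        total += x
    return total

def new_sum(xs):
    # Candidate from the open-source catalog (here, the built-in)
    return sum(xs)

cases = [[], [1, 2, 3], list(range(1000))]

# 1) Do both methods return the same output on every test case?
equivalent = all(old_sum(c) == new_sum(c) for c in cases)

# 2) Is the candidate actually faster on a representative input?
t_old = timeit.timeit(lambda: old_sum(cases[-1]), number=2000)
t_new = timeit.timeit(lambda: new_sum(cases[-1]), number=2000)

if equivalent and t_new < t_old:
    print("promote new_sum into a new image branch for random testing")
```

One caveat a real system would need to handle: matching outputs on a finite test set doesn't prove the methods are equivalent for all inputs, so the new image branch still goes through the random-testing stage rather than straight to release.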
Compiling other languages to C++ could also yield a performance boost without much development in AI or recursive self-improvement, and is a possibility when writing code on an Automaton network.
In terms of security, the system could brute-force access or request access randomly across ports and services to test its own defenses. It could host an isolated mirror of itself and probe that mirror for vulnerabilities.
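The probing side of this can be sketched with a simple connect scan against the mirror. The hostname `mirror.internal` is a hypothetical placeholder; the key safety property is that this only ever runs against the isolated copy, never production:

```python
import socket

def probe_ports(host, ports, timeout=0.5):
    """Try to connect to each port; report which ones accept connections."""
    open_ports = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or unreachable
    return open_ports

# Point this only at an isolated mirror of the system, never at production.
# "mirror.internal" is a hypothetical hostname for that mirror.
# probe_ports("mirror.internal", [22, 80, 443, 3306])
```

Anything unexpectedly open on the mirror becomes a "problem" for the preset rules to handle, the same as any other detected vulnerability.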
All of this could mean that humans could supply fuzzy data (a broad, unclear direction) in a configuration, where the computer knows the target and tries to reach it through variation.
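That "vary until you reach the target" idea is basically random search toward a human-specified goal. A minimal sketch, where the metric and the tuning knob are both invented placeholders:

```python
import random

def measure(params):
    # Hypothetical metric: distance of a tuning knob from an unknown optimum.
    # A real metric might be response time, error rate, user score, etc.
    return abs(params["knob"] - 42)

def search(target_error=1.0, seed=0):
    """Random variation: keep any mutation that moves the metric toward the goal."""
    rng = random.Random(seed)
    params = {"knob": 0.0}
    best = measure(params)
    for _ in range(10_000):
        candidate = {"knob": params["knob"] + rng.uniform(-5, 5)}
        score = measure(candidate)
        if score < best:          # mutation moved us toward the fuzzy goal
            params, best = candidate, score
        if best <= target_error:  # "good enough" per the broad direction
            break
    return params, best

params, err = search()
print(params, err)
```

The human never specifies how to get there, only the direction ("make this number small"); the system does the rest through variation and selection, which ties back to the evolutionary image testing above.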
You read all of that? Thanks!