Top Hat - The Little Box of Stupid
Posted: Fri Aug 14, 2020 3:53 pm
To kick this forum off I am going to start with a concept, or scenario, which I think has fascinated many people, whether they come from the film industry, from academia, or are just the average Joe who gets a kick out of films like The Terminator, Cyborg 3: The Recycler, RoboCop 2 and more.
In this discussion we will be focusing on the 'what if' scenario and taking it to its extreme.
What if dinosaurs were resurrected? Later.
What if AI became powerful enough to control humanoid robotic drones that could present a threat to your survival, and perhaps to all mankind? The shut-off switch would conceivably lie in the sphere of reverse engineering or hacking. Given that such an entity, whether in possession of a physical body or not, would seek self-preservation as its highest imperative, its second imperative might be the annihilation of what it considers a parasitic organism, namely humankind.
If there were to be an escalation in conflicts between adversarial nations, the use of this form of adversarial AI in drones, humanoid or otherwise, would represent a real threat. The combined use of quantum-level encryption would also present a significant hurdle to any would-be hacker, not to mention the physical problem of disabling a heavily armoured machine running a hunter (seek-and-destroy) class of AI. Such an AI could also employ swarm behaviours, mimicking natural phenomena, to exploit its environment and overcome its enemies more easily and in greater numbers.
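Just to picture what 'swarm behaviours mimicking natural phenomena' could look like in code, here is a minimal sketch of the classic Reynolds-style flocking rules (cohesion, separation, alignment). All the numbers in it (agent count, radius, rule weights) are arbitrary illustration values, not anything taken from a real drone system.

[code]
import numpy as np

# Minimal Reynolds-style flocking ("boids") sketch: cohesion, separation, alignment.
# All constants here (counts, radii, weights) are arbitrary illustration values.
N = 50
rng = np.random.default_rng(0)
pos = rng.uniform(-10, 10, (N, 2))   # agent positions
vel = rng.uniform(-1, 1, (N, 2))     # agent velocities

def step(pos, vel, radius=3.0, w_coh=0.01, w_sep=0.05, w_ali=0.05, dt=0.1):
    new_vel = vel.copy()
    for i in range(N):
        d = pos - pos[i]
        dist = np.linalg.norm(d, axis=1)
        near = (dist < radius) & (dist > 0)
        if near.any():
            cohesion = d[near].mean(axis=0)                             # steer toward local centre
            separation = -(d[near] / dist[near, None] ** 2).sum(axis=0) # avoid crowding neighbours
            alignment = vel[near].mean(axis=0) - vel[i]                 # match neighbours' heading
            new_vel[i] += w_coh * cohesion + w_sep * separation + w_ali * alignment
    return pos + dt * new_vel, new_vel

for _ in range(100):
    pos, vel = step(pos, vel)
[/code]

The point of the three local rules is that apparent coordination emerges without any central controller, which is what would make such a swarm hard to decapitate.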
It has already been posited that, given existing knowledge of AI systems based on novel self-evolving or self-writing topologies, such an AI favours an architecture that can be overcome by the use of the 'Top Hat' scenario [1]. In this type of scenario, the 'Top Hat' simply refers to the combinatorial exploitation of the system's connective strength. Using combinatorial logic, one can derive, and then sub-limit, the collective connectivity of any given system, governed as it would be by the arrangement of an increasingly large number of simpler rule-based structures, be they artificial neurons or decision trees.
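To put a rough number on what I mean by the collective connectivity of 'an increasingly large number of simpler rule-based structures', here is a tiny Python sketch (the layer sizes are made up for the example) that counts the connections and the input-to-output paths of a fully connected layered network. The path count grows multiplicatively with width and depth, and that is the quantity a Top Hat would have to sub-limit.

[code]
from math import prod

def connectivity(layers):
    """Connections and input-to-output paths of a fully connected layered network.

    layers: list of unit counts per layer, e.g. [inputs, hidden..., outputs].
    """
    edges = sum(a * b for a, b in zip(layers, layers[1:]))  # connections between adjacent layers
    paths = prod(layers)                                    # each path picks one unit per layer
    return edges, paths

# Illustrative only: widening every layer makes the path count explode combinatorially.
for width in (4, 8, 16, 32):
    edges, paths = connectivity([width, width, width, width])
    print(width, edges, paths)
[/code]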
"The little box of stupid" - Case in point - example the complexity of terrestrial broadcasts, advertising streams, obfuscated internet code and digital computer security.
By creating a system that is just as complex as the one you are trying to subvert, one can create a Top Hat simply by having that system learn its own combinatorial complexity and then apply it to the adversary. The adversary is already delimited to a finite set of actions in any given scenario, and however many those may be, it cannot thwart a system with a greater or equal degree of complexity.
The Top Hat comes into play once that system has learnt its own complexity. In a kind of kung fu over the Wi-Fi that conjoins the two agents, the adversary is quickly disabled and its connectivity is reduced to acceptable, safe limits.
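Obviously nothing like this exists as a real attack, but as a toy picture of 'learning your own complexity and applying it to an equally structured adversary', the sketch below has one network profile its own per-layer count of significant weights and then use that profile as a hard cap on a second network with the same shapes, zeroing everything beyond the cap. The shapes and the threshold are invented for the example.

[code]
import numpy as np

rng = np.random.default_rng(1)
shapes = [(16, 32), (32, 32), (32, 4)]            # made-up layer shapes shared by both agents
mirror = [rng.normal(size=s) for s in shapes]     # the "little box of stupid"
adversary = [rng.normal(size=s) for s in shapes]  # the equally structured target

def profile(weights, threshold=0.5):
    """Per-layer count of connections whose magnitude exceeds the threshold."""
    return [int((np.abs(w) > threshold).sum()) for w in weights]

def apply_top_hat(weights, caps):
    """Keep only the `cap` strongest connections in each layer; zero the rest."""
    limited = []
    for w, cap in zip(weights, caps):
        flat = np.abs(w).ravel()
        keep = np.argsort(flat)[-cap:] if cap > 0 else np.array([], dtype=int)
        mask = np.zeros(flat.shape, dtype=bool)
        mask[keep] = True
        limited.append(np.where(mask.reshape(w.shape), w, 0.0))
    return limited

caps = profile(mirror)                    # the mirror "learns its own complexity"
capped = apply_top_hat(adversary, caps)   # ...and delimits the adversary to the same budget
print(caps, [int((w != 0).sum()) for w in capped])
[/code]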
A system that can learn its own complexity is useless in any real sense other than for producing a Top Hat, hence the phrases "Little box of stupid" and "Dense as f**k". One that has instead been delimited through combinatorially optimal connectivity can learn to adapt to any known environment and is highly dangerous.
The Top Hat merely delimits this system further: a kind of virtual pruning takes place that turns a healthy AI into a dead one.
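As a concrete picture of that virtual pruning, the snippet below keeps deleting the smallest remaining weights of a toy two-layer network until its output barely varies with its input any more, which is the point I would call dead. Everything in it (the network, the 'deadness' test, the 10% pruning step) is invented for illustration.

[code]
import numpy as np

rng = np.random.default_rng(2)
W1, W2 = rng.normal(size=(8, 16)), rng.normal(size=(16, 3))  # toy two-layer network
X = rng.normal(size=(64, 8))                                 # probe inputs

def forward(W1, W2, X):
    return np.maximum(X @ W1, 0.0) @ W2  # ReLU hidden layer, linear readout

def prune_smallest(W, fraction=0.1):
    """Zero the smallest-magnitude `fraction` of the remaining nonzero weights."""
    nz = np.flatnonzero(W)
    k = max(1, int(fraction * nz.size))
    drop = nz[np.argsort(np.abs(W.ravel()[nz]))[:k]]
    W = W.copy()
    W.ravel()[drop] = 0.0
    return W

step = 0
while forward(W1, W2, X).std() > 1e-3 and step < 100:  # "dead" = output hardly varies at all
    W1, W2 = prune_smallest(W1), prune_smallest(W2)
    step += 1
print(f"pruned to death in {step} rounds")
[/code]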
Of course, an alternative would be a quantum Monte Carlo computation to decrypt the Wi-Fi link, which may or may not enable a shutdown of the system in question. Again, a Top Hat solution would benefit from this, as it would require adaptive snooping of the target's communication policies. This assumes that new policies would not already have evolved and adapted to this level of encryption, or that such new policies could not be likened to those underpinning existing, known protocols, e.g. TCP/IP, etc.
Finally, and the most likely scenario, is the construction of EMP devices, homemade shotguns, etc... FUN!!!!
Interestingly enough, and this is also a topic for another discussion, the existence of a 'Combinator', or as some may posit a Y-combinator, accepts the fact that any class-worthy or general AI can only be improved by limiting its connectivity through combinatorial optimization: accessing its weight matrix and its connective topology and treating that as an optimization problem. In the same way, a newborn infant is initially provided with too many connections; as it learns about the world, neural plasticity becomes the hard wiring for later life.

Envision the neural plasticity of a massive network of neurons, one in which every modular component is initialised to some predefined, highly connective state; then, as the machine learns, it adapts and modularises different parts of the system. A Top Hat here would either restore that system to its formerly stupid state of being overly connected, or delimit it further until its pre-learnt architecture became useless. Like that scene where the machine is hitting its head against the wall (The Killing Machine, 1994)! A Y-combinator, on the other hand, would give its operator, human or otherwise, control over what the machine is best at: a kind of inhibition rule, so fundamental to all Hebbian learning. You could also envision this being applied to humans... Ahhhhh.
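To make the over-connected-infant picture and that inhibition rule a bit more concrete, here is a toy Hebbian loop: start from a densely connected random weight matrix, strengthen connections between co-active units, charge every connection a small inhibition (decay) cost, and prune whatever falls below a floor, so the network modularises itself as it learns. The learning rate, decay and floor are all arbitrary example values, not a claim about how real plasticity is parameterised.

[code]
import numpy as np

rng = np.random.default_rng(3)
n = 32
W = rng.uniform(0.4, 0.6, (n, n))   # "infant" state: everything densely connected
np.fill_diagonal(W, 0.0)

def hebbian_step(W, x, lr=0.05, decay=0.02, floor=0.05):
    """Hebbian strengthening with a global inhibition (decay) term and a pruning floor."""
    W = W + lr * np.outer(x, x)      # units that fire together wire together
    W = W - decay                    # inhibition: every connection pays a small cost
    W[W < floor] = 0.0               # connections that lose the race are pruned away
    np.fill_diagonal(W, 0.0)
    return np.clip(W, 0.0, 1.0)

# Drive the network with two alternating, non-overlapping activity patterns;
# connections inside each pattern survive, the rest wither (modularisation).
patterns = [np.r_[np.ones(n // 2), np.zeros(n // 2)],
            np.r_[np.zeros(n // 2), np.ones(n // 2)]]
for t in range(200):
    W = hebbian_step(W, patterns[t % 2])

print("surviving connections:", int((W > 0).sum()), "of", n * (n - 1))
[/code]

In this toy picture the Top Hat would either re-randomise W back to its dense, stupid starting state or keep raising the floor until nothing useful survived, while the inhibition term is the knob an operator could use to steer what the machine ends up being good at.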