Not proof, but observations. The "fact" that "reality" fits into formulas further illustrates that it is a matter of linguistics.
If you get ridiculous, nothing is provable, not even a priori matters. You might as well argue that this proof that the square root of two is irrational is incorrect. After all, all proofs rest on assumptions; even this one assumes that there are no contradictions in mathematics.
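For reference, here is that proof in its standard form (a textbook sketch, not tied to any particular source):

```latex
Suppose toward a contradiction that $\sqrt{2} = p/q$ with $p, q$
integers and the fraction in lowest terms. Squaring both sides gives
\[
  p^2 = 2q^2,
\]
so $p^2$ is even, hence $p$ is even; write $p = 2k$. Substituting,
\[
  4k^2 = 2q^2 \quad\Longrightarrow\quad q^2 = 2k^2,
\]
so $q$ is even as well. Then $p$ and $q$ share the factor $2$,
contradicting lowest terms. Hence $\sqrt{2}$ is irrational.
```

The only assumptions in play are integer arithmetic and the absence of contradictions, which is exactly the point.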
If you're going to accept that nonsense, then the word "proof" becomes meaningless, and you end up unable to do jack shit like formally verify the correctness of the code running a nuclear reactor. So you don't, and the reactor goes critical, killing millions. Best not to be ridiculous.
A computer thinking about high-level concepts means it has a model of these concepts.
Not necessarily. It can pick up nuances without structures. A concept is a matter of language. It does not require a model.
Mathematics is about taking certain things as granted.
Evidence still has a place in life. It alters the expected payoff of any bet or speculation.
Even formally verified systems can fail due to factors outside of that formal system. At some point, you have to draw a line and accept some unknowns.
Nor are you going to act on the possibility that the only way to save the universe from popping out of existence is to ...
To me, the universe going out of existence has the same negative payoff as me being struck by lightning. I still walk in the rain.
It can pick up nuances without structures. A concept is a matter of language. It does not require a model.
Vacuous pontification.
Explain how you can represent concepts, including language, in the memory of a computer without structures.
Remember, any set of axioms is either incomplete or inconsistent.
Gödel's theorem is restricted to mathematical first-order logic in its common formulation.
All it shows is that there are paradoxes, and they arise only for knowledge about knowledge itself. Paradoxes don't happen in the physical universe, nor in arithmetic.
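For reference, the common formulation is roughly the following (a standard paraphrase, not a quotation from any particular text):

```latex
\textbf{First incompleteness theorem.} Let $T$ be a consistent,
effectively axiomatizable first-order theory that interprets
elementary arithmetic. Then there is a sentence $G_T$ such that
\[
  T \nvdash G_T \qquad\text{and}\qquad T \nvdash \neg G_T,
\]
i.e.\ $T$ is incomplete.
```

The theorem only bites for theories strong enough to encode statements about their own provability, which fits the point about knowledge turned on itself.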
It can pick up nuances without structures. A concept is a matter of language. It does not require a model.
Vacuous pontification.
Explain how you can represent concepts, including language, in the memory of a computer without structures.
Data structures are probably needed to implement various AI algorithms and supporting systems.
However, concepts are not necessarily modeled in formal constructs.
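As a minimal sketch of that distinction (the corpus, window, and dimensions are made-up toy assumptions, not any particular library's method), the data structures below are formal, but the resulting "concept" is just a learned vector of usage statistics, not a designed construct:

```python
import numpy as np

# Toy corpus; every detail here is illustrative.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat and a dog are animals",
]
tokens = [s.split() for s in corpus]
vocab = sorted({w for sent in tokens for w in sent})
index = {w: i for i, w in enumerate(vocab)}

# Count co-occurrences within a +/-2 word window.
counts = np.zeros((len(vocab), len(vocab)))
for sent in tokens:
    for i, w in enumerate(sent):
        for j in range(max(0, i - 2), min(len(sent), i + 3)):
            if j != i:
                counts[index[w], index[sent[j]]] += 1

# Factor the counts; each row becomes a word vector. Nobody designed
# a structure for "cat-ness" -- the vector is distilled usage.
u, s, _ = np.linalg.svd(counts, full_matrices=False)
vectors = u[:, :3] * s[:3]

def similarity(a, b):
    va, vb = vectors[index[a]], vectors[index[b]]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-9))

# Words used in similar contexts end up with similar vectors.
print(similarity("cat", "dog"), similarity("cat", "on"))
```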
Paradoxes don't happen in the physical universe, nor in arithmetic.
Only because the modernist/reductionist/positivist understanding of reality does not allow them.
However, concepts are not necessarily modeled in formal constructs.
You are aware that everything in the memory of a computer is a formal construct, right?
However, concepts are not necessarily modeled in formal constructs.
You are aware that everything in the memory of a computer is a formal construct, right?
There are:
1. Constructs designed by human intelligence to implement AI, such as chips, memory, arrays, data structures
2. Constructs formed by AI to perceive/comprehend/reason/speculate
(2) are not necessarily formal structures.
For example, we need to use "formal constructs" to implement algorithms, e.g. a denoising autoencoder.
However, the machine does not need a model of those features in order to extract information about them from an image.
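A minimal sketch of that point in plain numpy (the sizes, data, and hyperparameters are toy assumptions, not a reference implementation): the only thing formally specified is "reconstruct the clean input from a corrupted copy"; whatever feature representation appears in the hidden layer is learned, not modeled in advance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "images": 64-dim binary patterns drawn from four prototypes.
prototypes = rng.random((4, 64)) < 0.3
data = prototypes[rng.integers(0, 4, size=500)].astype(float)

# One hidden layer: encode 64 inputs to 16 units, decode back to 64.
n_in, n_hid = 64, 16
W1 = rng.normal(0, 0.1, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.1, (n_hid, n_in)); b2 = np.zeros(n_in)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.5
for epoch in range(200):
    # Corrupt the input by zeroing random pixels; the target is clean.
    noisy = data * (rng.random(data.shape) > 0.3)

    h = sigmoid(noisy @ W1 + b1)   # encode
    out = sigmoid(h @ W2 + b2)     # decode / reconstruct

    # Cross-entropy gradient (sigmoid shortcut), backpropagated by hand.
    d_out = (out - data) / len(data)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * noisy.T @ d_h; b1 -= lr * d_h.sum(axis=0)

print("final reconstruction error:", np.mean((out - data) ** 2))
```

The hidden activations end up encoding the recurring patterns in the data, even though no description of those patterns appears anywhere in the code.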
2. Constructs formed by AI to perceive/comprehend/reason/speculate
(2) are not necessarily formal structures.
You don't know much about programming, do you?
2. Constructs formed by AI to perceive/comprehend/reason/speculate
(2) are not necessarily formal structures.
You don't know much about programming, do you?
What kind of AI are you building?
What kind of AI are you building?
One based on programming structures, like other programs.
This is why we must live at the frontier of its development. He who controls AI controls human destiny.
Not even sure the frontier is needed. The current development of personal digital assistants already offers plenty of implementation arcs where, in effect, an organization can add more work while also laying people off. In the past, that strategy usually failed because customers noticed that support efforts, and the general quality of work, went downhill; losing, say, 25% of a firm's staff usually meant a loss of output. In only a few short years, a 25% layoff will not only add to the bottom line but also increase the output of those left behind. Add a few more product generation cycles and soon we'll have a very limited white-collar workspace.
What kind of AI are you building?
One based on programming structures, like other programs.
IMO, there are several types (stages) of Artificial Intelligence:
1. Programs (written in a programming language): you give the machine exact instructions to perform a task
2. Supervised Machine Learning: you teach a machine what things are
3. Unsupervised Machine Learning: the machine teaches itself, with or without your guidance
Obviously, (3) is the most interesting because machines can move beyond their programming. This is where true emergence can occur (a toy sketch follows below).
Which one are you talking about?
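To make the distinction concrete, here is a toy sketch (the data, rule, and models are illustrative assumptions, using scikit-learn for (2) and (3)):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
points = np.vstack([rng.normal(0, 1, (50, 2)),   # group A
                    rng.normal(5, 1, (50, 2))])  # group B

# (1) A program: exact instructions, no learning.
def classify_by_rule(p):
    return 0 if p[0] < 2.5 else 1

# (2) Supervised learning: we tell the machine what things are.
labels = np.array([0] * 50 + [1] * 50)
model = LogisticRegression().fit(points, labels)

# (3) Unsupervised learning: the machine groups the data itself.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(points)

p = np.array([4.0, 4.5])
print(classify_by_rule(p), model.predict([p])[0], clusters[:5])
```

In (1) the human supplies the decision rule, in (2) the human supplies labeled examples, and in (3) the machine finds the grouping on its own.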
There are other interesting views on intelligence. Some equate it with entropy maximization. Perhaps the universal will-to-power is all about maximizing future possibilities?
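Presumably this refers to proposals like Wissner-Gross and Freer's "causal entropic forces," in which intelligent behavior is modeled as a drive to keep future options open:

```latex
\[
  \mathbf{F}(X_0) \;=\; T_c \left. \nabla_X S_c(X, \tau) \right|_{X_0},
\]
where $S_c(X, \tau)$ is the entropy of the accessible paths through
configuration space over time horizon $\tau$, and $T_c$ sets the
strength of the push toward states with more reachable futures.
```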
So exciting! :-)
This video is intriguing:
Add a few more product generation cycles and soon we'll have a very limited white-collar workspace.
Pretty much all jobs will cease to exist. Soon enough, machines will be able to do practically anything a human can do, only better and cheaper. Moreover, self-replicating robotic law enforcement could maintain peace in ANY environment, effectively and without moral confusion.
It is going to be interesting. :-)
Obviously, (3) is the most interesting because machines can move beyond their programming. This is where true emergence can occur.
Which one are you talking about?
(2) and (3) are always included in (1). They are programs like any other.
(2) and (3) are both necessary for intelligence. They just play different roles.
Obviously, (3) is the most interesting because machines can move beyond their programming. This is where true emergence can occur.
Which one are you talking about?
(2) and (3) are always included in (1). They are programs like any other.
(2) and (3) are both necessary for intelligence. They just play different roles.
Yes, they build on one another. (2) and (3) are programs, but not in the same sense as (1). You as the programmer are further removed from the problem (as its solver) in (2) and (3) than in (1).
In either case, it is because Modernism is scared. In reality, it is reductionism fighting against the unknown and any possible emergence.
Science, as it stands today, is pathetic.
https://www.yahoo.com/tech/s/stephen-hawking-artificial-intelligence-could-150024478.html
Stephen Hawking seems to be afraid. Alas, who cares for a theory of everything?