
Can experience be used to predict the future?

Ren said:
Also, how do you define truth? You know, from a Pragmatist's point of view, your abstract ideal of truth is just that, an ideal, and it is more useful to take truth as "what is so extremely likely to happen that it would be silly to claim the contrary". Common sense is actually taken very seriously in the philosophy of causation. From the viewpoint of this particular school and its definition of truth, induction is not in danger.

So yes - this makes me really aware of the fact that we can't take the meaning of truth for granted here. Let's agree on a definition :) Also, why would inductive reasoning have to be 'true', supposing we arrive at that definition? Can't we just settle for the possibility of its being valid? In which case, the formal approach I sketched above might help us achieve validity; while what is not taken into account in the 'extremely likely to happen' might belong to pure, impossible-to-causally-define future contingency. It's quite possible that induction cannot achieve more than that, but I don't see this as weakening induction to any great extent, in practice.

This still poses a problem. As I stated above, how do we know an inductive inference is even 'probably true'? Now, you proposed a way around this problem by supposing that induction is about drawing 'valid conclusions'. Maybe induction is not really about truth at all, but rather validity. But this poses a further problem. If induction is just about 'validity', how does one distinguish between valid and invalid inference? Maybe an inference is valid if a correlation has been observed enough times. Or maybe the more correlations that are observed, the more 'valid' your inference will be. Or perhaps the more observations you make, the more certain you can be of your inference. Although any one of these interpretations might be true, they all face essentially the same problem: how do we know that we can extrapolate valid observations from observed correlations? Each interpretation leaves that question unanswered. There may be an answer, but none has been found yet.

So let me propose a possibility. It seems that no matter which way we define induction, we run into a variant of the same problem. Perhaps this suggests that we are thinking about knowledge and prediction completely the wrong way? Maybe the relationship between observation and prediction isn't what we think it is? Whenever our current way of thinking fails to lead to results, maybe this is an indication that we need to change the way we think? :) David Deutsch proposed a solution, but that will take us too far afield!
 
wolly.green said:
This still poses a problem. As I stated above, how do we know an inductive inference is even 'probably true'? Now, you proposed a way around this problem by supposing that induction is about drawing 'valid conclusions'. Maybe induction is not really about truth at all, but rather validity. But this poses a further problem. If induction is just about 'validity', how does one distinguish between valid and invalid inference? Maybe an inference is valid if a correlation has been observed enough times. Or maybe the more correlations that are observed, the more 'valid' your inference will be. Or perhaps the more observations you make, the more certain you can be of your inference. Although any one of these interpretations might be true, they all face essentially the same problem: how do we know that we can extrapolate valid observations from observed correlations? Each interpretation leaves that question unanswered. There may be an answer, but none has been found yet.

So let me propose a possibility. It seems that no matter which way we define induction, we run into a variant of the same problem. Perhaps this suggests that we are thinking about knowledge and prediction completely the wrong way? Maybe the relationship between observation and prediction isn't what we think it is? Whenever our current way of thinking fails to lead to results, maybe this is an indication that we need to change the way we think? :) David Deutsch proposed a solution, but that will take us too far afield!
Actually, I would be very interested in knowing more about the David Deutsch solution. I'd love it if you could share it, or if that'd take too much time, suggest an article or something.

I will return to your other points later on when I have more time, but I'm liking this exchange a lot even if we may not arrive at a satisfactory answer!
 
Ren said:
Actually, I would be very interested in knowing more about the David Deutsch solution. I'd love it if you could share it, or if that'd take too much time, suggest an article or something.

I will return to your other points later on when I have more time, but I'm liking this exchange a lot even if we may not arrive at a satisfactory answer!

Hi.

I'm enjoying this too. I'll try to find a way to summarise his ideas and then post it. This might take a bit of time, so I'll get back to you when it's done. Also, I'd be glad to hear your criticisms.
 
Right! This highly exciting post can have my full attention once again :) Since we can’t seem to arrive at a conclusion regarding inference, I think I will slightly update my approach and shift towards open musings that may lead you/others/myself towards insights useful for the investigation.

When you ask, “How do we know that we can extrapolate valid observations from observed correlations?” I would be tempted to give you the Pragmatist’s answer once again: we know this simply because experience has taught us that correlation repeated through time has tended to yield valid observation. This approach dispenses with the notion of truth altogether. It simply says: “If I empty a bucket of water over a small fire, the fire is extinguished 100 times out of 100. If I repeat the experiment another 900 times, I find that the same result is observed. Therefore, I will consider that it is true that emptying a bucket of water over a small fire will extinguish the fire”. This is only a pragmatic definition of truth, if you like. But why reject it, if it works within that paradigm? Does this not grant at least a weak form of validity to inference? I believe it does, based on my simple common sense.
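To put a number on "extremely likely", here is a toy sketch in Python (entirely my own illustration, using Laplace's classical rule of succession; it is not something the Pragmatists themselves propose):

```python
# Toy sketch: Laplace's "rule of succession" turns the Pragmatist's
# "it worked n times out of n" into a number.

def rule_of_succession(successes: int, trials: int) -> float:
    """Estimated probability that the next trial succeeds, given
    `successes` successes observed in `trials` past trials."""
    return (successes + 1) / (trials + 2)

# The bucket-of-water experiment: 1000 trials, 1000 extinguished fires.
print(rule_of_succession(1000, 1000))  # ~0.9990
# Note what this does NOT deliver: certainty. It only quantifies the
# "extremely likely to happen" of the pragmatic definition of truth.
```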

This is why I asked you to define truth (which, by the way, you haven’t done yet). My feeling is that you have quite a stringent definition of truth. You haven’t verbalised it yet, but I suspect it would defeat any "common sense" attempt at articulating inference in a way that is satisfying to you. And yet, the Pragmatist would answer your question in the following way: “But why, my dear @wolly.green, do we even need to articulate a theory of how we can extrapolate valid observations from observed correlations, if the results themselves are obvious?”

Maybe this is the strongest objection to your ‘hard’ definition of inference: the extreme emphasis you seem to put on being able to establish pure causation, and to explain it, isn’t actually useful to the way inference is successfully used in a plethora of observations and experiments.

Let me throw in here that I am not at all a Pragmatist. I just thought mentioning their argument would be interesting. I am myself not decided on the issue.
 
Just following the discussion, I think it's helpful to clarify, very briefly, one's concept of experience. Probably the line between experience and reason is not all that rigid, as per some of the stuff Ren has said -- the mind is already doing quite a bit of work to make propositional claims: discerning the contents of a given experience using language/logic, formulating its 'form' from otherwise unfiltered data hitting the senses, and so on.

I think it might help to call these statements descriptive ones (not a framework I'm endorsing so much as vocabulary to ease formulating the problem), while claims of a more explanatory nature might be called explanatory statements. One way to view Hume's problem is as the question of how to go from descriptive to explanatory propositions. Descriptive ones would simply involve giving categories to parts of an experience and relating these definitions. Explanatory ones would actually aim to recognize that we're seeing the same phenomenon again and again, and really tell us what's going on above and beyond what the direct content of the experience seems to say.

I think Hume would probably accept that we can recognize two experiences both satisfy certain descriptive propositions (e.g. a glowing ball ascended into the sky in the morning on two separate occasions), but would wonder how we can then say "these really are instances of the same phenomenon" -- that is, conclude that this is just a phenomenon called a sunrise that always happens under certain conditions. Once we DO say two experiences are the "same phenomenon", I think it's safe to say there's no leap in saying it for any further N experiences.


I made that clarifying vocabulary to just note that it seems Ren was trying to go the route of saying the mind already had to do some work to make descriptive statements, so, he might submit, perhaps we're already exiting and/or never were in the realm of "pure experience" (whatever that would be -- some unfiltered bunch of data that the brain can't make sense of).
Still, you might (as I guess you indeed did) express that there's more to say, because this will at most let you describe your experiences, not recognize that they involve the same phenomenon.

I think my interpretation of Hume really is that he was in a way agreeing we need a creative leap, not just experience, to gain scientific knowledge, but I am guessing he was not comfortable with the idea of a creative leap. I'm guessing he would be OK with formal logic, and that for similar reasons, he could live with "descriptive" statements, since really they involve defining one's terms and relating them precisely. However, explanatory ones seemed to him to involve a kind of scary leap.

But in any case, I'd agree just to declare two experiences instances of the same phenomenon, we really do have to have some kind of creative leap/imagination.
In that sense, the error-prone-ness of reason might basically be what Hume was reacting to.
 
Huh. You are good at philosophy. Have you studied it? I like the way you put your question together. I'd like to take a shot at answering it. I love epistemology. Plus procrastination from work is key :m114:


wolly.green said:
This thread is about induction. So is induction possible? Is it possible to derive scientific knowledge – or any knowledge for that matter – from inductive inferences? Sunrise is the famous example used to illustrate induction, so we will start with that. Don’t worry if you’re sketchy on the details, it will all make sense soon.

We have all experienced a sunrise. It’s that time of day when the sun ascends above the horizon and into the sky. Over the years, we have come to not only know what a sunrise is, but to actively expect it. Morning after morning, we all expect the sun will rise, even if we cannot see it beyond a cloudy sky. And surely enough, morning after morning our expectations are verified. But how did we come to know and expect that the sun will rise? Induction says that we “know” the sun will rise because we have extrapolated it from experience. Day after day, we have exactly the same experience of the sun ascending above the horizon, and thus extrapolate from those experiences that it will ascend again in the future. However, is this really how we come to “know”? Do we really gain knowledge about what to expect by extrapolating from experience? I want to argue that this cannot be the case. One problem to consider is: how do we ever know when two experiences are the same?

How do I know when two experiences are identical? I may have experienced a sunrise today, but how do I know that I have experienced a sunrise in the past? I may label these two experiences with the same name – I may call them both ‘a sunrise’ – but how do I know they are the same? One answer is: I can say two experiences are identical if I experience them under the same conditions. For example, I know that one condition for a ‘sunrise’ is that it must happen in the morning. I know another is that it cannot happen without a sun. But this leads to a further question: how do you know which conditions are related to which experience? The answer, I think, is that our explanations tell us so. The fact that a particular experience is related to some set of conditions is itself a conjecture: a creative leap of imagination. But if we come to know that two things are related through conjecture, then our prediction that the sun will rise in the future has nothing to do with extrapolation, which implies that knowledge cannot be derived through induction.
Heh, I like what you did here. I think we can reach a solution with a proper theory of reference. You are right that connecting sense perception to what I take to be identity conditions does require a cognitive leap, but I don't think it is unfounded. This is done naturally by assigning identities. For example, I think you were hinting towards a descriptivist theory: sense perception X is identical to abstract conception Y just in case the properties representing X and Y map onto each other sufficiently. (I'm sticking to a naive version to outline my idea; I can give you a better theory if you want it. Descriptivism actually doesn't work as a theory of reference, but it is the easiest to outline.) When we connect these identities, we can work with the concept represented by the sense perception.
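To make that naive version concrete, here is a toy sketch in Python (the property sets and the 0.8 threshold are invented purely for illustration):

```python
# Naive-descriptivist toy: percept X is identified with concept Y just in
# case their properties overlap "sufficiently".

def sufficiently_similar(x_props: set, y_props: set, threshold: float = 0.8) -> bool:
    """Jaccard overlap of property sets, standing in for 'the properties
    representing X and Y map onto each other sufficiently'."""
    overlap = len(x_props & y_props) / len(x_props | y_props)
    return overlap >= threshold

sunrise_concept = {"morning", "sun", "ascends", "horizon"}
todays_percept = {"morning", "sun", "ascends", "horizon", "clouds"}
print(sufficiently_similar(todays_percept, sunrise_concept))  # True (overlap 4/5)
```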

wolly.green said:
What about a law? There could be a law of induction that tells us when two experiences are identical. If we had such a law, we could use it to make inductive inferences, because it answers our question: how do we know two experiences are the same? However, the problem with this approach is that no one has yet discovered such a law. No one has yet formulated one that is useful for deriving knowledge from experience. Therefore, until it has been discovered, it cannot be used to ‘justify’ induction.

In short, experience cannot be used to make predictions about the future. Nor can it be used to derive knowledge about anything. Thoughts?
Now, correct me if I'm wrong, but I think this is a second question. Rather than "how" induction can be applied, you're asking where the rationalization for induction derives from. Specifically, how to connect it to knowledge. This is a much harder question, and depends on what you mean by "knowledge". If you take a skeptic's "you only know something if it cannot be doubted", then you never have knowledge with induction, and arguably lose it in everything but some kinds of self-knowledge. If you mean knowledge = True Justified Belief (or a similar theory), then we can get this from induction. The tricky bit is the justification, but I won't try to go into that unless you ask. I think this is one of the questions you want to discuss, but I feel lazy right now :m066:
 
@charlatan Thanks for providing this distinction between descriptive and explanatory statements via Hume. This is useful :) You’re also right about what I tried to convey. Now that you’ve made this terminological distinction between descriptive and explanatory propositions available to us, I feel like the conversation may continue with renewed strength. I am neither an expert in epistemology nor logic, but I will try to answer what you said and in the process refine my own understanding of this topic.

You say that explanatory propositions are meant to go “beyond what the direct content of the experience seems to say.” What exactly do you mean by this direct content? Are you referring to the experience’s unique content, or to something more akin to its form or substance? Let me attempt a schematization of sorts, which will probably be faulty but which might still prove helpful through being corrected and improved by other people.

One day, I observe the sun set at 6pm from Lake Geneva, and it is partly hidden by the Mont Blanc. A month later, I observe that it sets at 7pm from the top of the Mont Blanc. (This is merely for the sake of illustration).

If we take the sun as the subject (or ontologically, the substance) S of both propositions, “setting” as the predicate (or ontologically, the property) P of substance S, and 6pm/partly hidden and 7pm/from the mountaintop as accidents A1/A2 and A3/A4 of substance S, we could maybe formalise the two propositions as:

First: There is an S such that P and A1 A2 = SP A1A2
Second: There is an S such that P and A3 A4 = SP A3A4

We see here that in both propositions, “SP” occurs. And if there were a third and a fourth, with accidents A5/A6 and A7/A8, “SP” would still occur in them. Maybe what links S and P together in such a way as to appear in all propositions about observing a sunset is the form F. My claim, which is likely one that Hume would reject, is that the human mind has the ability to recognise F when presented with at least two instances of the proposition: SP AxAy and SP AwAz. It would recognise both propositions as formally sharing SP = the sun is setting.
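If it helps, here is the schematisation rendered very literally as a toy Python sketch (the data structures are my own invention, just to show the recurring SP being picked out):

```python
# Each observation: a subject S, a predicate P, and a set of accidents.
# The "form F" would then be whatever (S, P) pair recurs across observations.

observations = [
    {"S": "sun", "P": "setting", "accidents": {"6pm", "partly hidden"}},    # SP A1A2
    {"S": "sun", "P": "setting", "accidents": {"7pm", "from mountaintop"}}, # SP A3A4
]

forms = {(obs["S"], obs["P"]) for obs in observations}
if len(forms) == 1:
    print("shared form F recognised:", forms.pop())  # ('sun', 'setting')
```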

Maybe you are right @charlatan – maybe here, we are still at the level of description, not explanation. There is no reason to believe, based on just observing this, that the two occurrences of SP will ever appear again, formally or otherwise. But hopefully my schematisation will provide a modest basis for moving from the observation of recurrence to the hope for the formulation of a law. In fact, as per one of my previous posts, Pragmatism would probably hold something like: “If SP is observed enough times, then we may just as well settle for the commonsensical conclusion that if S, then P.”

Thoughts?
 
Ren said:
When you ask, “How do we know that we can extrapolate valid observations from observed correlations?” I would be tempted to give you the Pragmatist’s answer once again: we know this simply because experience has taught us that correlation repeated through time has tended to yield valid observation. This approach dispenses with the notion of truth altogether. It simply says: “If I empty a bucket of water over a small fire, the fire is extinguished 100 times out of 100. If I repeat the experiment another 900 times, I find that the same result is observed. Therefore, I will consider that it is true that emptying a bucket of water over a small fire will extinguish the fire”. This is only a pragmatic definition of truth, if you like. But why reject it, if it works within that paradigm? Does this not grant at least a weak form of validity to inference? I believe it does, based on my simple common sense.

Hi again. I'm sorry I forgot to respond.

This is actually a circular argument. What you have basically said is that we know induction is reliable because "experience has taught us that correlation repeated through time has tended to yield valid observation". But this is an inductive inference; you are using induction to justify induction! Which means we still have no reason to think induction is possible.

Ren said:
“But why, my dear @wolly.green, do we even need to articulate a theory of how we can extrapolate valid observations from observed correlations, if the results themselves are obvious?”

Karl Popper actually explained why induction is impossible in his book Conjectures and Refutations. His argument is almost exactly the same as mine, except he focuses more on how induction leads to a logical contradiction. For the moment, I will not present his argument, since it does not answer the pragmatist's question: 'why should anyone bother to find an explanation in the first place?' If induction works, what value is there in looking for a theory?

The reason to want a theory that explains induction is exactly the same as the reason to want one that explains anything at all: because theories explain reality. This seems like an innocuous statement, but it is quite significant. Suppose that we do accept Karl Popper's refutation as true, what then? Well, if we are to take his refutation seriously, the next question to ask is: 'if one does not reason according to induction, then how exactly does one reason?' The answer to this question is obviously valuable, because it will show us -- among other things -- how to make predictions that are even more reliable than before. This is essentially the answer to your pragmatist question.
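To give a cartoon of the alternative -- this is only my own sketch of conjecture-and-refutation, not Popper's formalism -- the idea is that conjectures survive only until an observation contradicts one of their deduced consequences:

```python
# Conjectures survive only until some observation falsifies them.

def surviving(theories, observations):
    """Keep the conjectures that no observation has falsified yet.
    A 'theory' here is just a predicate that must hold of every observation."""
    return [name for name, t in theories if all(t(obs) for obs in observations)]

# Two rival conjectures about water and fire:
def always_extinguishes(obs):
    return not obs["water"] or obs["fire_out"]

def never_extinguishes(obs):
    return not obs["water"] or not obs["fire_out"]

trials = [{"water": True, "fire_out": True}] * 1000
print(surviving([("always", always_extinguishes),
                 ("never", never_extinguishes)], trials))  # ['always']
# The survivor is not thereby proven true -- it has merely not been refuted yet.
```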
 
Now, correct me if I'm wrong, but I think this is a second question. Rather than "how" induction can be applied, you're asking where the rationalization for induction derives from. Specifically, how to connect it to knowledge. This is a much harder question, and depends on what you mean by "knowledge". If you take a skeptic's "you only know something if it cannot be doubted", then you never have knowledge with induction, and arguably lose it in everything but some kinds of self-knowledge. If you mean knowledge = True Justified Belief (or a similar theory), then we can get this from induction. The tricky bit is the justification, but I won't try to go into that unless you ask. I think this is one of the questions you want to discuss, but I feel lazy right now :m066:

This is exactly the question I'm asking. So please, go ahead. How exactly is justification possible from induction?
 
wolly.green said:
This is actually a circular argument. What you have basically said is that we know induction is reliable because "experience has taught us that correlation repeated through time has tended to yield valid observation". But this is an inductive inference; you are using induction to justify induction! Which means we still have no reason to think induction is possible.
You are absolutely correct! I actually felt like something wasn't quite right when I wrote this, but I chose to unpack it anyway, thinking you might point out the fallacy better than I could. Oh well... I have tried to give induction a good fight, but it appears I've been knocked down by its big circular weapon. I have to admit defeat: I do not know how to solve your paradox using the resources of our current language.


wolly.green said:
The reason to want a theory that explains induction is exactly the same as the reason to want one that explains anything at all: because theories explain reality. This seems like an innocuous statement, but it is quite significant. Suppose that we do accept Karl Popper's refutation as true, what then? Well, if we are to take his refutation seriously, the next question to ask is: 'if one does not reason according to induction, then how exactly does one reason?' The answer to this question is obviously valuable, because it will show us -- among other things -- how to make predictions that are even more reliable than before. This is essentially the answer to your pragmatist question.
I don't think this is an innocuous statement! It is certainly vital to seek a scientific theory that explains as much of reality as possible. I think the Pragmatist would agree with that, too. Maybe the conclusion we have to draw from this is indeed that we need to rethink our "theory" of induction from the ground up, or to rethink induction itself. Since Popper seems to show that induction leads to a logical contradiction, maybe a new kind of logic, or an already existing form of deviant logic developed since Popper, would accommodate it? How is the David Deutsch summary coming along? :)
 
Ren said:
What exactly do you mean by this direct content?

Ah, I really just meant the descriptive content. Basically, the crux of my point is this: if we accept that there's no such thing as (talking about) "raw" experience while remaining in the realm of experience, and we interpret the OP to be referring to "raw" experience, then the conclusion of the OP is trivially true. That is, if we say that to even talk of what happened in a given experience is to go beyond its reach, then we definitely can't gain knowledge through experience alone.

I gather that isn't the idea the OP is going for, so I make the distinction between talking about an experience and relating two experiences; the OP seems to be saying something more along the lines that relating two experiences will involve going beyond what's involved in merely talking about an experience. I wanted to note that, to the extent relating them involves only logical relations between our descriptive statements, this isn't so. To the extent it involves saying how we could have known the two are instances of the same phenomenon, or that we can say things about future experiences without having them, we're going beyond description.

BTW descriptive/explanatory was vocab I coined, not from Hume; I noticed something like that was relevant to clarifying the discussion between you/wolly.
 
Yeah, I think it can. The problem is that we are too constrained in our knowledge of all the variables to do so accurately. That does not mean it is logically impossible, as Hume suggested (i.e. that inductive logic is deductively invalid) -- just that it is practically impossible. (I can go into great detail on this if you are interested -- basically I agree with Leibniz.)

I suspect that as computing technology gets better and better, our ability to predict outcomes will improve dramatically and this practical problem will be lessened somewhat. We will probably never have it 100%, though, because of weird stuff going on at quantum levels... nor is it clear why we would want to go through such trouble in the first place. Physicists will probably never be happy, but most other areas (esp. those of practical concern like medicine and the social sciences) should be able to pull off some amazing things via statistical modelling. Big data and analytics are the beginning of this trend IMO.

Nobody likes being reduced to a statistic though.
 
@Faye I would be very interested in your Leibnizian perspective if you care to develop it a little bit :)
 
Faye said:
Yeah, I think it can. The problem is that we are too constrained in our knowledge of all the variables to do so accurately. That does not mean it is logically impossible, as Hume suggested (i.e. that inductive logic is deductively invalid) -- just that it is practically impossible. (I can go into great detail on this if you are interested -- basically I agree with Leibniz.)

I'm with Ren here. It would be helpful if you could clarify your position a bit. At the moment, there is very little substance to sink my thoughts into. No pressure though! :blush:

Faye said:
I suspect that as computing technology gets better and better, our ability to predict outcomes will improve dramatically and this practical problem will be lessened somewhat. We will probably never have it 100%, though, because of weird stuff going on at quantum levels... nor is it clear why we would want to go through such trouble in the first place. Physicists will probably never be happy, but most other areas (esp. those of practical concern like medicine and the social sciences) should be able to pull off some amazing things via statistical modelling. Big data and analytics are the beginning of this trend IMO.

This is really tricky. You said that we will probably never be able to predict outcomes with 100% accuracy; but what exactly do you mean by this? When you say 'predict outcomes', do you mean that we will never be able to predict the outcomes of experiments with complete accuracy, or that we will never be able to predict the future with complete accuracy? Although this might seem like logic chopping, these two actually have completely different meanings.
 
It's worth noting that (at least so it seems to me) neither Hume nor Popper concludes that we can attain scientific explanations through induction, but they differ in where they go from there. Hume, I think, is somewhat more pessimistic about that negative answer. Popper's view, on the other hand, is more along the lines that our best theories should be highly falsifiable and have survived till now. They aren't derived from experience so much as we simply say what the theory is, and then deduce its consequences -- and then note that, if one of those consequences is in fact false, the theory has been falsified.

I would guess that Hume would still not be happy, because he'd probably say the fact that they've not been falsified till now does not do anything to tell me how to act in the future, since I should simply say that they've not been falsified till now, not that they will be useful in the future (yet in practice, we don't think airplanes will fall due to the laws of physics suddenly winding up being falsified).

On the other hand, significantly, the same can be said of our mathematics: we cannot know for sure that there won't be an inconsistency found in mathematics some day. We just know that it doesn't seem to be the case, and reason (error-prone) says that the present foundations are good. We may need to revise them. When we formulate axioms, we do so "intuitively" (e.g. in set theory), and then try our best to ensure the formalism doesn't lead to paradox, even though it may in fact do so.

Also, for those who suggest that the fact something hasn't been falsified till now does not imply much confidence about the future, Popper is still addressing that. He's saying that, if our theory is highly falsifiable, we'd find that it can be destroyed quite easily. So, one might surmise, if it hasn't been destroyed yet, it's proven its use {BTW, in reference to some of Ren's discussions, here's where some sliver of pragmatism creeps into a theory that's nonetheless aiming for truth, not just to be useful}.
However, the skeptic may reply, this last line is where one might get off the train -- the part about utility.

I think at the end of the day, Hume's sentiments resemble those of someone who'd want to just formalize mathematics completely and avoid the error-prone-ness of reason. Yet the truth is that the formalization program fails to avoid the error-prone-ness. All we know is that, by our error-prone methods, we've managed to note that there's never been an inconsistency that couldn't be resolved till now -- that we expect the same of the future is a leap of imagination; it's not something we know for sure. It's a matter of present conceivability, and it could be wrong.

So in a manner of speaking, I think Popper does solve the problem, in the sense that he shows us there's no escaping this error-prone aspect anywhere you go... that is, even if we stuck to descriptive, not explanatory, statements, there may be an inconsistency in the way we formulate our descriptions, even if we haven't observed one yet. When we describe a certain experience, it may seem we're staying in the realm of logic and definitions, and thus we're safe. However, we've made a leap even here: that our framework is reasonable, won't have some gross inconsistency, and so on, is a leap, and ultimately one based on the hope that we're uncovering truth by our descriptions. So all these endeavors are deductive ones resting on the hope of eventually not ending up in contradiction.

The error-prone-ness of reason is probably not unrelated to Gödel-type arguments, which suggest that intelligence basically requires the ability to err and revise. Not surprisingly, this is why statistics is related to the AI world.
 
charlatan said:
The error-prone-ness of reason is probably not unrelated to Gödel-type arguments, which suggest that intelligence basically requires the ability to err and revise. Not surprisingly, this is why statistics is related to the AI world.

Hi Charlatan!

I don't think this is the reason statistics is used in AI. As far as I can tell, AI researchers use statistics because they are empiricists. They all believe it is possible to derive true explanations from data, and that it is possible to use statistics to make these derivations. There are a LOT of misconceptions here. First, that an explanation is the same as a formula or a pattern -- it is not. Second, that human-like intelligence amounts to nothing more than sophisticated pattern recognition -- it does not.
 
wolly.green said:
I don't think this is the reason statistics is used in AI. As far as I can tell, AI researchers use statistics because they are empiricists.

Well, I wasn't so much interested in the data part of stats (you're right about that having to do with empiricism) as in the fact that it involves uncertainty! The idea that intelligent machines have to be able to make mistakes and proceed by trial and error goes back to Turing and Gödelian ideas.

There's a difference between programming a more "mechanical", less "intelligent" thing and building AI.


Now, obviously I don't think this is necessarily sufficient for human intelligence or anything like that. Just that it's the smallest thing to add in.
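As a toy illustration of what "the smallest thing to add in" might look like -- my own sketch, not a claim about any real AI system -- here is a searcher that proposes mostly wrong answers and keeps only the ones that survive the test:

```python
import random

def trial_and_error(score, start, steps=20_000, step=0.5):
    """Propose a random change; keep it only if it scores better.
    Most proposals are errors and get discarded -- that is the point."""
    x = start
    for _ in range(steps):
        candidate = x + random.uniform(-step, step)
        if score(candidate) > score(x):
            x = candidate
    return x

def objective(x):  # a deterministic target the searcher knows nothing about
    return -abs(x - 3)

print(round(trial_and_error(objective, start=-10.0), 1))  # ~3.0
```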
 
charlatan said:
Well, I wasn't so much interested in the data part of stats (you're right about that having to do with empiricism) as in the fact that it involves uncertainty! The idea that intelligent machines have to be able to make mistakes and proceed by trial and error goes back to Turing and Gödelian ideas.

There's a difference between programming a more "mechanical", less "intelligent" thing and building AI.

Now, obviously I don't think this is necessarily sufficient for human intelligence or anything like that. Just that it's the smallest thing to add in.

Ok this makes sense. However, what do you mean when you say that uncertainty is "the smallest thing to add in?" What I think you are trying to say is: 'conjecture is essentially random'. Am I correct?