
Pure Math, Or, How to Think Like a Pro

patricky
MBTI: ENTP
This morning I managed to pin down a few people and yell at them about why pure math is so amazing, and to my surprise they actually ended up being interested. I'm posting this thread to relate my experience with pure math and recommend a few resources to anyone interested in trying it out.

First of all, pure math is like no math you've ever seen before. Unless you knew exactly what pure math was before you clicked this thread, I can feel confident saying that you've never done it. Pure math is not crunching numbers, not solving weird algebraic equations using memorized formulas, and not anything resembling high school calculus. And before you think "but I'm not some robot Ti/Te, hence pure math is not for me", read on! It is for everyone who gives it a chance!

Which is not to say that it is easy. Like any other field, it takes patience, practice, and honesty with yourself if you're studying it independently. However, I've never met someone who couldn't pick up the basics in 20 minutes and see why the basics lead to interesting problems. I knew people in my pure math program who were highly skilled at it yet terrible at high school style math, or at university engineering/science style math. They are worlds apart.

Pure math is like philosophy on crack. But better than being interesting, it is highly practical in a holistic, general intelligence kind of way. There aren't many places in life where you'll write up a proof to a problem facing you; however, having practiced writing proofs in pure math will influence the way that you approach and solve abstract problems. It gives you little mental tools, highlights the details of circumstances, and colors the junctions in logic leading from what you know to what you can conclude. I don't plan on applying any of the theorems I prove in pure math directly in real life, but the experience of the first 2.5 years of curriculum has convinced me to graduate with at least a minor, maybe even a major, in pure math. It is like lifting weights for your brain. The point isn't to be able to lift weights really well; the point is that you get an overall improvement in performance at a variety of tasks.

So now that most of you have decided that this is still lame and left, and some of you have stopped hyperventilating at the idea of math, and maybe even a few of you are interested in these magical powers I seem to be promising, what's the deal with this pure math stuff? Pure math is the study of writing proofs, generally about sets of objects that have some kind of structure. Proofs, you say? No, not quite like those nasty proofs in high school geometry, with a bunch of memorized, abbreviated rules--although closer to that style than, say, high school algebra. High school algebra is a sin against math education. In fact...

High school algebra is a sin against math education. High school calculus, too. Even AP BC Calc. It follows the same general pattern of forcing you to memorize a bunch of disparate formulas and rules so that you can, ultimately, compute a bunch of numbers. These classes would more appropriately be called "Beginner Mathematics Algorithms". Algorithms are those things that computers run in order to take some input data and turn it into output. Gross, right?

High school calculus is particularly bad about the computational aspect, and it also relies heavily on visual intuition. After being shown how to do the same class of problems over and over, you're expected to "just see" how to set up the problem (and setting up the problem is 90% of the work). This kind of "observe and generalize" approach is the result of teachers who don't really know the material that deeply (since you also won't get a good treatment of it in university unless you know which rocks to turn over or you get lucky) combined with standardized testing that rewards material that can be automatically marked via multiple choice scantrons.

High school algebra has the opposite problem: it's not visual at all. How do these algebraic relations turn into that graph? Why do I have to fiddle with this equation like an awkward teenager trying to get to second base in order to get the answer? There are two fundamental mistakes in the high school approach: (1) Like calculus, an "observe and generalize" approach is taken, relying on the student to make the real leap in understanding rather than guiding them. (2) In most applied algebra, they never really tell you what the equals sign means. Seriously. You can usually sort of "figure it out" in individual cases, but in general they do not tell you. More on this later.

Now that we're done discussing sins, let's get back to proofs. First of all, while there will be some algebraic manipulation and certainly some variables, all of the heavy lifting in pure math is done using English. Words. Letters that don't mean "multiply us all together" when you clump them together. There is some terminology involved, but in whatever you spend time doing on a daily basis there's also lingo that isn't immediately obvious at first. The point is that pure math is not a page full of insane looking equations--in fact, the more equations you use, the more likely you are to be compensating for a lack of understanding of the actual proof.

There's that word again--what is a proof? A proof is a series of statements that transform a premise (P) into a conclusion (Q), using a small, simple set of rules in order. This is frequently a very creative process, as stodgy and linear as it might sound at first. It's also easy (once you've tried a few) to look at an exercise and say to yourself, "Oh, it's obvious why. I don't even need to write down the proof." OMFG Wrong. Every single time, wrong. Even on the simple ones that take a few steps, totally and utterly wrong. Writing out that proof is the entire point of studying pure math. Just because you know you can curl that weight doesn't mean that you don't bother to do it, and when you use proper form it's usually harder than it looks. The proof is the exercise for your brain: it forces those problem solving parts of your brain to activate, whether they feel like it or not. Knowing that you can prove something doesn't activate them; actually writing out the proof using small, verifiable steps leads you to the real crux of the problem, the point where getting from point A to point B just isn't obvious. That's when your creative leaps take charge.

I mentioned a small set of rules: these are technically called first-order predicate logic, but the space station's toilet is technically called a 'liquid waste receptacle'. Don't let the name fool you: the rules are pretty much obvious. To begin with, there is a notion of an assertion, otherwise known as a predicate. A predicate is a fancy way of saying something about something else. Seriously, it can be nearly anything: all that matters is that we give a unique name to the predicate that says something. For example, let's say:

Code:
Drunk(a)
is a predicate that simply means "I am valid if person a is drunk". Let's suppose that I just drank 8 shots of tequila and we're letting p be shorthand for "Patrick" (that is, me). Then

Code:
Drunk(p)
is valid. Simple as that! It's either valid or it's not, and my breath is providing very conclusive evidence. Now let's suppose that j represents Jester and he's been keeping sober for a while now. Then naturally

Code:
Drunk(j)
is not valid. This is getting a little pedantic, isn't it? Well, the idea of 'being valid' comes into play when we start gluing these predicates together. For example, let's say we want to know if either Patrick is drunk OR Jester is drunk--we don't care which one.

Code:
Drunk(p) U Drunk(j)
Encodes this question! (Read 'U' as 'OR'). Assuming we're talking about the same night of drinking, then this statement would be valid, since Patrick is drunk, and OR means only one of us needs to be puking in the toilet.

Similarly, we can ask is Patrick drunk AND is Jester drunk. This is only true when we're BOTH drunk. So we'd encode it as:

Code:
Drunk(p) ^ Drunk(j)
And it would be invalid, since Jester is not drunk and both sides of an AND (which is how you read '^') must be valid in order for the whole expression to be valid.
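If it helps to see the same idea outside the notation, here's a rough Python sketch (entirely my own illustration, not part of the formal system): each predicate becomes a function returning True or False, and the connectives become plain Python 'or' and 'and'.

Code:
# Toy model of the Drunk predicate and the OR / AND connectives.
# The people and their states are made up to match the example above.
def drunk(person):
    """Drunk(a): valid (True) exactly when person a is drunk."""
    return {"p": True, "j": False}[person]   # Patrick is drunk, Jester is sober

print(drunk("p") or drunk("j"))    # Drunk(p) U Drunk(j)  -> True  (valid)
print(drunk("p") and drunk("j"))   # Drunk(p) ^ Drunk(j)  -> False (invalid)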

Okay, so we've been gluing together drunken questions like a rowdy night on tinychat, but where is this really leading? There's another connector called the implies that looks like this: '=>'. Let's make up an additional predicate, called Sick(a), which is valid when the person a is going to get sick anytime in the next day. Now we can say:

Code:
Drunk(a) => Sick(a)
That reads, "If a is drunk, then a will get sick sometime in the next day". Not true for some of you iron gutted ethanol fish, but for now let's take it on faith that getting drunk means you're going to get sick. Remember, this is tequila we're talking about.

Note: While this is related to a real-life, physical example, usually the assumptions we're given relate to the relationships of abstract mathematical objects with very well-defined behaviors. So there would be none of the ambiguity or hand waving that I used to justify why this must always be true. Obviously it isn't.

Since we're assuming this implication to be true, let's try to make a leap of logic. We know:

Code:
Drunk(p)
and we also know

Code:
Drunk(a) => Sick(a)
by substituting p in for a we get:

Code:
Drunk(p) => Sick(p)
and by the rule for evaluating the implication symbol, we can say:

Code:
Sick(p)
I am going to be sick! I just snuck a term by you there: rule. There are five such rules in the most common formulation of propositional logic, and there are a few other connective symbols I didn't mention. Briefly:

NOT (or '~'): Valid exactly when the predicate to the right is not valid, and not valid exactly when it is. Very intuitive meaning.

There exists (or 'E' except flipped to point left): I'm going to cheat here and call it %, and I'm making that completely up so don't remember it. This symbol precedes a variable and a predicate like this:
Code:
%x Drunk(x)
This simply says "there exists some x such that x is drunk". It has a rule where, given a statement like this as an assumption, you are then allowed to talk about some specific x (though you might not know which) that is drunk as though it is true. Don't worry if this is a little weird.

For all (or upside-down 'A'. You think I'm the only mathematician that drinks when they choose notation? Let's pretend it's called 'A*'): Like 'there exists', this guy precedes a variable and a predicate and means that for EVERY choice of variable, that predicate holds. So
Code:
A* x Drunk(x)
Means that everyone is drunk!

And that's it. Those are the only connectors you need (some others exist for convenience), and the rules to manipulate statements using those connectors are pretty straightforward, though I won't go into them here.
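If you like to poke at these things concretely, here's another throwaway Python sketch (mine, and only a finite toy--a real 'there exists' or 'for all' can range over infinitely many things, which any() and all() obviously can't handle):

Code:
# Toy model of the implies rule and the two quantifiers over a tiny crowd.
def implies(a, b):
    """a => b is invalid only when a is valid and b is not."""
    return (not a) or b

drunk = {"p": True, "j": False}   # Drunk(p) holds, Drunk(j) does not
sick = dict(drunk)                # assume the rule Drunk(a) => Sick(a)

# Modus ponens: Drunk(p) together with Drunk(p) => Sick(p) gives Sick(p).
assert drunk["p"] and implies(drunk["p"], sick["p"]) and sick["p"]

print(any(drunk.values()))   # %x Drunk(x): someone is drunk    -> True
print(all(drunk.values()))   # A* x Drunk(x): everyone is drunk -> False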

Unfortunately, this example doesn't illustrate why pure math requires creativity. This is sort of like telling you what the intervals are in the major scale before you've ever heard a piece of music played. But this sort of stuff gets knocked down in the first week or two of studying, and you move on to much more interesting and abstract proofs. Here's a quick theorem for you, which, unlike the Drunk => Sick theorem above, is actually always true. No matter what choice of predicates P and Q, this statement will hold:

Code:
(P => Q) => (~Q => ~P)
DON'T PANIC. I'll be gentle. This has nested implies symbols, so it's not really obvious what it's saying at first glance. In plain English: "If P implies Q, then not Q implies not P".

Suppose P means "is a duck" and Q means "is a bird". P => Q, hence "is a duck" implies "is a bird". Makes sense, right? All ducks are birds. Now, ~Q => ~P: "is not a bird" implies "is not a duck". If something is not a bird, then it is not a duck.

Think about it for a second: the example is a little obvious, but it's not so obvious at first why the general statement is true.

Done thinking? Positive? Okay. Think a little more.

Alright. First we take the left half of the theorem, P=>Q. It is our goal to show, using only the knowledge that a duck is a bird, that something which is not a bird is not a duck. Keep that in mind while we look at the right side.

On the right side, given what we already know, we want to show ~Q => ~P. Well, let's consider what happens when ~Q is true. What happens when it is not a bird? Let's make one further assumption: P. Since we've assumed P, and we know from the left half that P=>Q, we can conclude Q. Since it is a duck, it is a bird. But we already assumed ~Q before. We already assumed it is not a bird. This is a contradiction! This contradiction happened because we assumed P, and everything else we assumed (P=>Q, and also ~Q) was given to us. So we must conclude ~P! Hence, ~Q => ~P, given that P=>Q. In other words:

Code:
(P=>Q) => (~Q => ~P)
Once again, a little pedantic, but here it's important to note that we didn't need to talk about ducks: it was just a tool to help us visualize a concrete scenario. Our logic, however, was solid without any concrete objects. All we needed to know was that P=>Q. Didn't matter at all what P and Q actually meant or referred to, and yet we can still say with certainty that ~Q=>~P.
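If you'd rather convince yourself by brute force first, here's a quick Python sketch (my own, and not a proof in the pure math sense--just a sanity check) that tries all four combinations of truth values for P and Q:

Code:
# Check that (P => Q) => (~Q => ~P) comes out valid in every case.
from itertools import product

def implies(a, b):
    return (not a) or b

for P, Q in product([True, False], repeat=2):
    print(P, Q, implies(implies(P, Q), implies(not Q, not P)))
# Prints True in all four rows.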

While this style of proof by contradiction is reasonably common in pure math, the wacky nots, implies, and other connectives are not actually that common. You do see the for-all and exists floating around because they're very convenient, but generally the basic =>, ~, ^, and U get expressed in plain English, like our duck talk. I used them here simply to demonstrate that everything you do in proof theory has a concrete reason and justification. Sometimes you make leaps of faith to figure out the solution, but once you get it, the answer is incontrovertible.

After propositional logic, the only other thing pure math depends on is set theory. I won't go into it right now, but honestly it is nearly as easy as thinking about putting kittens in buckets.

[Image: a little kitten in a wooden bucket]


Seriously, I've had this analogy used quite successfully. Look at the kitty.


Well, I originally started this post to say that pure math is awesome and to give some references. I sort of derailed myself and gave an example of low level logic that isn't really used in every day proofs, but gives you a taste of the precision and lack of algebraic magic that pure math actually involves. If you didn't understand this, don't take that as a sign that you're not capable of understanding it. The beginning stuff always seems much more intimidating than it really is, and the later stuff always seems much easier than it really is. Please reply to this post if you're having trouble understanding but want to.

Finally, if you liked this post and want to study pure math on your own, here are some cheap but excellent books to help you. It's important to remember that getting frustrated with a problem is normal, and the only way to get better is to do every single exercise, written out on paper in good, easy-to-follow language, and to study 20-40 minutes per day at least 3 days a week. It's that easy.

An Introduction to Algebraic Structures ($10):
Amazon.com: An Introduction to Algebraic Structures (9780486659404): Joseph Landin: Books

Starts at a nice easy pace for beginners, covers the basics of set theory and then works its way up to group theory and beyond.

Calculus - Michael Spivak (~$100 for the newest edition, but you'll be fine with an edition back, which you can probably hunt down for under $30):

Absolute Bible of analysis for beginners. This will teach you real calculus, but it's designed for people without a strong mathematical background. It has balls, but Spivak is very lucid and conversational. His demonstrations are very thorough and easy to follow, and the exercises provide a great resource.



SUPER DUPER FINALLY:

If anyone has a proof from a book, the internets, or your own musings and you want it reviewed, feel free to post it here or PM it to me! As long as it isn't your homework.
 
What if the bucket is too small?
 
What if the bucket is too small?

The bucket is never too small. Sometimes you have to be a little careful if you want to put an infinite number of kittens into the bucket, though--but only if it's as many kittens as there are numbers on the real line. It's perfectly okay to have as many kittens as there are integers.
 
You're using logic, which is still math, but the math of arguments. Still awful :m071:
 
You had me at "Pure math is like philosophy on crack."
 
Note, though, that the reversed form is not always true.
Code:
(P => Q) => (~P => ~Q)



Let's consider the following terms:
Code:
P = "NumLock light is on"
Q = "Keyboard is on"


The following is true: If the light is on, the keyboard must be on.
Code:
P => Q : NumLock on => Keyboard on


The following is false, since the keyboard could be on while the NumLock light is off:
Code:
~P => ~Q : NumLock off => Keyboard off


So (P => Q) does not always imply (~P => ~Q).


The following is true for our considered terms, however.
Code:
(P => Q) => (~Q => ~P)
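The same kind of brute-force table makes the difference obvious (just a quick Python sketch, nothing formal):

Code:
# Compare both directions over every assignment of truth values to P and Q.
from itertools import product

def implies(a, b):
    return (not a) or b

for P, Q in product([True, False], repeat=2):
    reversed_form = implies(implies(P, Q), implies(not P, not Q))
    contrapositive = implies(implies(P, Q), implies(not Q, not P))
    print(P, Q, reversed_form, contrapositive)
# reversed_form is False when P is False and Q is True
# (NumLock light off, keyboard on); contrapositive is True in every row.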
 
I loved these tautology problems until I started working with fuzzy logic (because let's face it, it's not the same whether you are coma drunk or just drunk enough to, say, sing some embarrassing karaoke). :)

But I don't understand--isn't this part of the regular school program? Sorry if this is a stupid question, but I don't know much about math programs in the USA.

And although I already knew everything you wrote, I must applaud you for a very fine presentation. :) I think you should continue; you are good at presenting things in an interesting fashion.
 
Let's consider the following terms:
Code:
P = "NumLock light is on"
Q = "Keyboard is on"

Is it logical to address the state of two objects with that equation rather than the objects themselves? Still trying to figure out what applies and what doesn't apply.

Is this equation supposed to apply to anything you throw at it?
 
Mathematics is a sin :p

I've always been pretty crap at maths and the mere sight of equations befuddles me somewhat. But I have a lot of respect for it and realise we owe it alot.
 
Is it logical to address the state of two objects with that equation rather than the objects themselves? Still trying to figure out what applies and what doesn't apply.

Is this equation supposed to apply to anything you throw at it?

Yeah John, I debated how to address this in the example and decided I would go with the less symbolically tricky version. P and Q are simply placeholders for an expression, where an expression is a combination of those logical connectors and other valid predicates. So we could say S = (P => Q), and then S would be a single expression denoting the implication (P => Q). It's possible to talk about these formulations without ever pinning down an 'object' they refer to, but for the sake of simplicity you can just assume that every single expression boils down to either valid or invalid (true or false). However, that's generally not the most useful way to go about solving a problem. This requires a sort of inductive mindset where you assume P is a 'phrase' that means something, and then talk about the implications of logical statements about P without worrying about what P really means.

Ultimately, it would be valid (perhaps even more rigorous) to put a "For all a" quantifier in front of the whole expression, and have all of the Ps and Qs be P(a) and Q(a). e.g.

Code:
A* a ((P(a) => Q(a)) => (~Q(a) => ~P(a)))
Where once again 'A*' is a ghetto way of writing upside-down A, "for all". I do think it's time for me to request LaTeX support on the forum ;)
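And if you fake that quantifier over a tiny finite domain, it looks something like this in Python (again just my own toy--a real "for all" isn't limited to three animals):

Code:
# Finite sketch of A* a ((P(a) => Q(a)) => (~Q(a) => ~P(a))) using the duck example.
def implies(a, b):
    return (not a) or b

is_duck = {"donald": True, "tweety": False, "rex": False}   # P(a)
is_bird = {"donald": True, "tweety": True,  "rex": False}   # Q(a)

print(all(
    implies(implies(is_duck[a], is_bird[a]),
            implies(not is_bird[a], not is_duck[a]))
    for a in is_duck
))   # True for every animal in this little domain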
 
But I don't understand, isn't this a part of regular school program? Sorry if this is a stupid question but I don't know much about math programs in USA.

Beyond the geometric proofs I learned in 9th grade, more or less no. And I'm not sure whether even that is only in honors math. Even then, students complained about it being "impractical" and whatnot.

I agree with patricky. If you show most people Euler's Identity, it means nothing to them. But if you go through the proof, and they understand why it works and what it means, it's mind-blowing. I wish we did more proofs in math.

(Also, to avoid confusion between Americans and non-Americans on this thread, British maths and American math are the same thing; American College is the same as a British University)
 
When I took euclidean geometry in high school, proofs saved my GPA. I've never been a fan of math, mostly because I just don't care and make so many "careless mistakes," but I must dorkily admit that I really enjoyed doing proofs once I'd become pretty familiar with all of the postulates and theorems we were expected to use to solve them.
 
So pure math is propositional logic?

Solve this:

Heh, this is a sticking point between two different schools of philosophy. The first says that we can take it to be axiomatic that (~H OR H), and this relies on the assumption that H must evaluate to either true or false. However, this latter statement (H is TRUE or H is FALSE) isn't always stated axiomatically (when propositions aren't defined explicitly in terms of boolean primitives), so generally it's taken to be an axiom that (~H OR H) => True; that is, it's a definition as much as it is a conclusion. (The G in your example is unnecessary; the idea is frequently used to invoke ~H OR H and then prove that whatever conclusion you're trying to reach is implied by either ~H or H.)

However, there are complications with taking this seemingly obvious statement as an axiom, especially when dealing with logic systems where the concept of a proposition having an unknown value is treated seriously.
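To make the sticking point concrete, here's a throwaway Python sketch (mine alone, and only an illustration--the three-valued "unknown" below isn't any standard system): in two-valued logic you can brute-force excluded middle, but once a proposition is allowed an unknown value, the brute-force check no longer settles every case.

Code:
# Two-valued logic: (~H OR H) is valid for both possible values of H.
for H in (True, False):
    print(H, (not H) or H)   # True both times

# With an "unknown" value (sketched as None), the same check can come out unknown.
def not3(x):
    return None if x is None else (not x)

def or3(a, b):
    if a is True or b is True:
        return True
    if a is None or b is None:
        return None
    return False

for H in (True, False, None):
    print(H, or3(not3(H), H))   # True, True, None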