Saturday, March 28, 2015

Musings on my faith going into Holy Week 1/n

As I stand on the precipice of Holy Week, the holiest, most important time of the year for Christians (and one the rest of the world kind of mercifully ignores, having only managed to co-opt the Easter egg and candy part of things, which is literally the least important part), I have been reflecting, as I ought, on what my faith means. A kind of all-encompassing musing on what it is I believe, why I bother to believe it, going all the way to "What do I call myself, since 'Christian Scientist' means something other than what I am?" I'm going to try to write as much of it as I can on this blog, because I feel it is important, but these being musings I can't promise they will be thesis-like. They may ramble a bit. Some may be long and some may be short. If you come here for physics posts, sorry not sorry for the theological interlude.

Holy Week, particularly in the liturgical tradition, throws into sharp relief a lot of doctrinal points that Christians tend to go 'yeah, yeah, I know' at and non-Christians think we are crazy for believing. It can also bring up, if you run in the right circles, friendly debates about atonement vs. redemption theology, the sufficiency of Christ's sacrifice, and even the purpose of baptism, getting into the paedobaptism vs. believer baptism debate. The practice of Holy Week is designed to remind us, in case Lent did not, that we are broken, and that Christ died to heal that brokenness, and rose again to usher in the coming of wholeness.

That we are broken is something of which I have no doubt. I don't see how anyone can disagree with it. As my father observed, "The doctrine of total depravity has never lacked for outside proof" [ETA: This is apparently a quotation from G.K. Chesterton]. That Christ died to heal that brokenness I also have no doubt, though this is where a lot of the people I know think I've jumped the shark, so to speak. A fair number of my peers (and superiors and inferiors, I have no doubt) think that my faith is odd, nutty, a bit of a relic, or even 'something [I'll] outgrow'. I have no problem with the ones who think the first two; the third I can understand, though not agree with; and the fourth I find unbearably patronizing. But that is neither here nor there. Christianity *is* weird. And a lot of humans have horribly twisted and corrupted it, and I desperately wish we could make those corruptions a thing of the past, though there is something to be said for the devil you know.

So let's get something out of the way before I get any farther into recording my theological thoughts. Consider this the first post.

My faith is not just a comfort in bad times (though it is that), or I'll-go-someplace-nice-when-I-die wishful thinking, or a philosophy, or a way to connect with a larger community. It is in a very real sense *everything* to me. It defines the universe, my place in the universe, the purpose of the universe and of myself; it defines my relationship to God, between myself and my family, between myself and my husband, between myself and every human I will ever encounter; it determines my responsibilities to this world, and to everyone and everything in it; it is the entire framework on which my life is built. If you stripped everything else away, my faith would remain.

"How can you be a scientist and a Christian?" is a question I have heard a (frankly) irritating number of times. From both directions, actually. Scientists who are atheists look askew at my ability to trust science if I also believe in a man-god, and Christians with whom I have strong doctrinal disagreements don't trust my soul to be saved if I think we came from monkeys. The question makes as much sense to me as "how can you be a scientist if you are a woman?". If I really believe that God created the universe, and he created us, how can I *not* believe that this universe would be designed in such a way that we, striving to understand it as we follow our natural, God-given curiosity and using the minds He gave us, could understand? How could I not jump at the opportunity to study a master-craftsman's work? If you think I'm crazy for believing in a Creator, or for believing in a Triune God, or a Savior or whatever particulars of my doctrine baffle you  to the extent you doubt my science, you are welcome to check my math. If you think I'm going to Hell because  when the math and science say the universe is 14 billion give-or-take years old, I trust that it's right,  please point me to the passage in the New Testament where this is named as a salvific issue. I'll wait.

That I am a scientist is not a stumbling block to my faith, and my faith is not a stumbling block to my science. Though I won't go quite so far as Kepler to say that math is the language of God, or even as far as the Belgic Confession in favor of natural theology, I will say with the psalmist that "the heavens declare the glory of God" and with Maltbie D. Babcock that "this is my Father's world".


Tuesday, December 16, 2014

Accepting that I'm qualified to do things

An interesting thing happened last week. Through a very long email forwarding chain, it came to my attention that one of the small, religiously affiliated schools in the area (actually halfway between the university and my new city) was looking for an adjunct physics professor to teach an algebra-based physics 2 course during the spring semester. I jumped at the chance, after getting my adviser's blessing, because it's a chance to hone my teaching skills, it would look good on a resume, it's a foot in the door, a little more money coming in, etc.

But I considered it a long shot. The listing required a master's degree, preferably a PhD, and while I could have my master's by now, I've never bothered with the paperwork and the fee, so officially I have a bachelor of science and 3 years of grad school. I emailed the contact on the listing to indicate my intention to apply. After writing up my CV (I had resumes but no CVs on tap), filling out the application, and a phone interview, I have the job, pending the OK from HR. Turns out, I was the only applicant, and they need someone NOW because the person they hired for the entire year bailed after the fall semester.

So as I'm talking about it with people, I've been saying that I got the job because I was the only applicant. That, essentially, I got lucky.

Which is interesting, because my husband and I were just reading an article in the Wall Street Journal on Sunday about how women communicate differently in the workplace, and will frequently say that they "got lucky" instead of taking credit for something. Women are socially trained to be self-deprecating, men are trained to brag: that was what that part of the article boiled down to.

As I was walking back from submitting my transcript request, and thinking to myself how I only got the job because they were desperate and I was the only choice, it suddenly hit me that I was doing the self-deprecating thing.

I am perfectly qualified to do the job as advertised. I love teaching. I prepare before classes, I know what I'm teaching, and I'm not afraid to say "I don't know, I'll get back to you" when a question comes up that I hadn't prepared for. I've taught college classes for 3 years; I've done lab work and prep work and grading. There's nothing I'm going to learn in my last year or two of research that will help me teach basic physics to non-physics/engineering majors. The only thing my students and my supervisors agree on is that I'm a good teacher.

When I texted a friend who had helped me with my CV, to say I had got the job and to thank him for his help, he texted back "Congrats! I doubt I had anything to do with it! You totally deserve the job."

So I'm going to stop saying that I only got the job because I was the only candidate. There is every chance I would have gotten the job if I had had competition. I am a dedicated, knowledgeable, and tested teacher. And I'm going to prove it. 

Friday, December 12, 2014

Basic Physics: Part 0, Section 5: Derivatives--Exponential Function

In the last post we covered all the rules we needed for calculating derivatives, but I mentioned that there were two special case functions that weren't really special cases that we needed to cover. They are usually thought of as special cases because the way we habitually use and write them hides what's actually going on when we take their derivatives. The first case is the exponential function, and the second is the trig functions sine and cosine. 

The exponential function is something you may or may not have run across directly, depending on how far you got in math or how nerdy your friends are. But it affects or models nearly everything in your life, from population growth to radioactive decay, and is integral to oscillatory functions and nearly every branch of mathematically describable knowledge. The 'natural' number \(e\) is a transcendental number (meaning it's not the root of any integer polynomial) and it's irrational--meaning its decimal expansion never repeats a sequence and it never ends: $$ e = 2.718281828459045235360287471352662497757247093699959574966967... $$ More importantly, it seems to be kinda built into the fabric of the universe, because like \(\pi\) it shows up everywhere. Also, it is integral to one of the most beautiful equations ever--Euler's Identity: $$e^{i \pi} + 1 = 0$$ What's so special about this, you ask? Well, it includes 5 of the most basic and important concepts in math and relates them all in an absurdly simple, beautiful way*. The natural number (\(e\)), \(\pi\) and the complex number \(i\) are the most important numbers you've never heard of or used. One and zero are so fundamental that explaining why they are fundamental usually leads you in circles. That's also a bit off topic, so I'll leave you with a quick computer check of Euler's Identity below, and then it's back to derivatives.
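If you want to see the identity hold on your own machine, Python's cmath module can do it in one line. (The code is my addition, not anything you need for the physics; floating point arithmetic leaves a crumb of error around \(10^{-16}\), so you get something vanishingly close to zero rather than zero itself.)

```python
import cmath

# e^(i*pi) + 1 should be exactly 0; floating point leaves a tiny crumb
print(cmath.exp(1j * cmath.pi) + 1)   # 1.2246467991473532e-16j
```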

The most basic form of the exponential function is $$ e^{x} $$ though you can put other stuff (sometimes a lot of stuff) in that exponent. It's usually cited as a special case for derivatives because, if you write it the way most people do, it seems to be its own derivative:
$$f(x) = e^x$$
$$\frac{df}{dx}= e^x$$
which is weird and uncomfortable and shouldn't be possible, should it?

This seems to completely violate all the rules we set out, except that it actually follows all the rules and you can demonstrate it three ways. Firstly, if you have enough time and a graphing calculator capable of finding tangents to curves, you can manually (and tediously) show that \(e^x\) is a weird curve whose slope is described by itself. I'm going to skip this way.

Another way to go is to use the power series expansion. This way feels like cheating to me, because in the strictest sense of things, it's an approximation unless I expand the series to infinite terms, but it's also the clearest way for a lot of people.

Let's first start by discussing what a series expansion is. Have you ever wondered how mathematicians can calculate things like thousands of digits of pi or of the natural number when they aren't simple fractions? The answer is power series. Series let you express something very complicated or long in the compact format of a bunch of additions. The catch is that, unless you take it out to infinite terms, it's only an approximation. The good news is, you don't need to take it out to infinity for most intents and purposes, because you don't need an infinite amount of precision. You just need 5 or 10 or 100 decimal places worth of precision, which you can get with far fewer than infinite terms.

The power series expansion for \( e^x \) is
$$e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!}$$
Which looks like a lot of gibberish, but is just mathematician shorthand for
$$e^x = 1 + \frac{x}{1} + \frac{x^2}{1*2} + \frac{x^3}{1*2*3}+\frac{x^4}{1*2*3*4}+....$$
and on and on forever. That ellipsis at the end indicates that it just keeps going like that.
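If you'd like to watch the approximation sharpen as the terms pile up, here's a little Python sketch of my own (nothing canonical about it) that sums the first several terms of the series at \(x = 1\) and compares them to the true value of \(e\):

```python
from math import e, factorial

def exp_series(x, terms):
    """Sum the first `terms` terms of the power series for e^x."""
    return sum(x**n / factorial(n) for n in range(terms))

# Watch the partial sums close in on e itself (x = 1)
for terms in (1, 2, 5, 10, 15):
    print(f"{terms:2d} terms: {exp_series(1, terms):.15f}")
print(f"  math.e: {e:.15f}")
```

By 10 terms you already have 6 decimal places; by 15, about a dozen. That's why nobody actually needs infinite terms.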

Fortunately, this is something that we know how to deal with, using the rules we learned in section 4.
$$e^x = 1 + x + \frac{1}{2} x^2+ \frac{1}{6}x^3+\frac{1}{24}x^4+....$$
$$\frac{d}{dx} e^x = 0 + 1 + \frac{1}{2} x^1 * 2 +\frac{1}{6}x^2 *  3 + \frac{1}{24}x^3 * 4 + ...$$
$$\frac{d}{dx} e^x = 1 + x+\frac{1}{2}x^2 + \frac{1}{6}x^3  + ...$$

Which is right back where we started! This is a neat and useful property of the exponential function.

The third way is to engage a rule that we didn't discuss last time because it doesn't show up very much. It's very similar to the chain rule, and it's usually called the exponential rule (not to be confused with the power rule from section 4). Having shown I wasn't lying about the things in section 4, I hope you can just trust me on this one. It's a little messy at the beginning, but just hang with me until the end.

The exponential rule goes like this. For a function that has the form of [constant] to the power of [function of variable], like \( f(x) = a^{u_x}\), where \(u_x\) just denotes that \(u\) is a function of \(x\) (yes, a function within a function; it's perfectly legal, and it doesn't look as weird as it sounds when you say it out loud...er, write it out verbally?), the derivative is
$$\frac{df}{dx} = a^{u_x} *\ln{a} * \frac{du}{dx} $$
Which...looks pretty awful, doesn't it? Just hang with me a little longer. Let's look at a test case. Let's let our \(f(x) = 3^{4x^2}\).
$$\frac{df}{dx} = 3^{4x^2} *\ln{3} * (4*2x) $$
$$\frac{df}{dx} =(8 x) (3^{4x^2}) *\ln{3}  $$
$$\frac{df}{dx} = 8x \ln{3} 3^{4x^2}$$

So, what happens when we apply this to \(e^x\)? Let's see:
$$\frac{d}{dx}(e^x) = e^x * \ln{e} *1$$
Now, the natural log (\(\ln\)) and the natural number are inverses of sorts, so \( \ln(e) =  1\). So that just leaves us with
$$\frac{d}{dx}(e^x) = e^x$$
A perfectly law-abiding, if funky-looking, function. In reality, it mainly looks weird when we take the derivative because we leave out that \( \ln{e} \) step. Think of it like how a native speaker will use contractions.

Let's test this out on a few more examples to get the hang of it. How about \( g(x) = e^{2x}\)?
$$\frac{dg}{dx} = e^{2x} * \frac{d}{dx} (2x)$$
$$\frac{dg}{dx} = e^{2x} * 2$$
$$\frac{dg}{dx} =2 e^{2x} $$

Still a little weird looking, but not bad from an execution standpoint. Let's do one more for practice.
$$h(x) = 3 e^{4x^2} $$
$$\frac{dh}{dx} = 3 e^{4x^2} * \frac{d}{dx}(4x^2) $$
$$\frac{dh}{dx} = 3 e^{4x^2} * 4*2*x$$
$$\frac{dh}{dx} = 24 x e^{4x^2} $$
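If you'd rather have a computer check my algebra than take my word for it, the sympy library for Python can differentiate all of these symbolically. (This is purely a checking aid and my own addition; it assumes you have Python and sympy installed.)

```python
import sympy as sp

x = sp.symbols('x')

# The worked examples from this post, plus the 3^(4x^2) test case
for f in (sp.exp(x), sp.exp(2*x), 3*sp.exp(4*x**2), 3**(4*x**2)):
    print(f, "->", sp.diff(f, x))
```

sympy reports \(24 x e^{4x^2}\) for the last exponential example and \(8 \cdot 3^{4x^2} x \ln 3\) for the test case, matching what we got by hand.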

Hopefully this has helped you to see that even things that are "special cases" are not exceptions to the rules. If anything is still unclear, please let me know in the comments! Next time, we'll deal with one more "special case", that of trig function derivatives.

*And one of the reasons I will never, ever support tau replacing pi. I don't care if it removes a factor of two from some calculations--it ruins the beauty of Euler's identity to have to divide the exponent by 2.

Updated 12/12/14 8 pm: Corrected the first example. Thanks to @Lacci for alerting me to the problem. 

Wednesday, September 10, 2014

Publishing Research and other stuff

So I know the next installment of Basic Physics is several weeks overdue, but there has been so much going on I haven't had time to do it justice. So here's a post on what's been going on!

Firstly, my first paper got accepted for publication! This is a research project that I had been fighting with for well over a year, and the results were/are really cool. It's also my first first-author paper, which is a really big deal in the sciences (at least in my branch of the sciences). I don't know of an equivalent outside of research circles.

I'm working on new but related research projects, which will hopefully bear fruit soon.

DH got a job in a city that is just far enough away to make commuting 5 days a week untenable, so we are slowly transitioning our lives to an apartment in the new city, with me getting the house ready to rent out in our old city. So, ya know, that's a bit time- and energy-consuming.

I'm teaching half time this semester, which is great, but eats my Thursdays between prep and teaching and seminar and teaching and then eats a couple hours not on Thursdays for grading and getting lesson plans and weekly tests ready for myself and the other two TAs to use.

I have to write, present, and get approval for a prospectus/research plan by the end of the semester or get kicked out. It is the vaguest, most important piece of writing I have had to do to date.

I'm also attending the Frontiers in Optics conference in October! Which is going to be fantastic and exhausting and in Arizona! It's also forcing me to actually get some more 'professional' looking clothes, which for me basically means I didn't make them and/or I couldn't rake leaves in them. I am not going to be removing my earrings unless my advisor specifically says otherwise though. They are a part of me, and besides my hair provides decent camouflage.

I also seem to be morphing into a high classical-Christianity Anglican instead of a good Calvinist-Presbyterian, and I have to write a separate post on that.

So, as you can see, there is A LOT going on in my life right now, so if the postings are a bit thin on the ground, hopefully you can forgive me.

Cheers!

Tuesday, August 19, 2014

Basic Physics: Part 0, Section 4: Derivatives

Previously in this series, we covered algebra, trigonometry, vectors and vector multiplication. Now (after more delay than I would have liked) it's time to tackle the elephant in the room--calculus.

No, please don't close this tab! I swear, it's not as scary as you've been told. If you made it through trig and vectors (which, if you are reading this I assume you have) you've really made it through more mind bending stuff than we will need to cover here.

Why cover calculus at all? Aren't there algebra-based physics courses at every university? Yes, yes there are. And anyone who has taught physics with only algebra, trig and vectors will tell you it's actually harder to teach physics without reference to derivatives and integrals. Newton invented/discovered calculus so he could describe his theory of gravity and motion (his notation was abysmal, though). Calculus is the mathematics of change. Algebra is the mathematics of stability. And physics is really boring if nothing ever moves.

Now, depending on when you went to school, learning calculus may have been reserved for the students who were good at math, or who hadn't been told that "math wasn't for them". I am here to tell you this is like telling students who are going to live in another country that they don't need to learn the past or future tense, they can get along just fine with the present tense. Technically, this is true in a lot of cases, but it limits their ability to get everything out of their trip. Try to think of calculus in this way--not as some strange new kind of math, but just a different tense in this language.

We'll begin where most calculus textbooks begin: with derivatives. Calculus has a very intuitive explanation of derivatives: they are the slopes of lines. That's it. What makes derivatives interesting is that they give you the slope at any point along a line*. You will generally hear the included caveat that the line must be smooth and continuous, but this isn't a calc class and I'm not going to show you any equations that are not differentiable (capable of having their derivative taken), so we aren't going to worry about that here.

Let's start with the simplest case, a straight line going through the origin of our coordinate system:

In this case, the slope of the line is going to be the same everywhere, and we can find the slope using the tried and true "rise over run" method. In moving 4 units to the right, the line has moved 3 units up, so our slope \(a\) is $$ a = 3/4 = .75 $$ So far so good. Nothing scary or uncomfortable to date. A little algebra, that's all, and a little reading off a plot. Now, what if we were just given the equation for this line, in the slope-intercept form encountered in algebra class: $$ y = ax$$ $$y = .75 x$$
Still not too bad. And if I had presented this to you first, you probably could have told me the slope of this line just from this--the coefficient of \(x\) gives the slope, so \(.75\). Congratulations, you just took your first derivative without knowing it!

So, if derivatives are that easy, you ask your computer suspiciously, why is there an entire semester of calculus dedicated to it, hmm? Well, two reasons. First of all, because there are way more complicated kinds of lines than straight lines, and second of all, no one dedicates an entire semester to derivatives. They usually also teach limits (proto-derivatives) and numerical integration (proto-integrals) in the same semester. Derivatives are usually 4-5 weeks of the semester, a lot of that learning special cases.

What if I gave you the line with the equation $$ y = x^2 + 3, $$ would you know what its slope is? It looks similar to the linear equation in slope-intercept form, but you probably have a feeling that the \(x\) being squared complicates things. And it does, since \(x^2\) is a parabola.


Parabola!

Now, you could draw tangent lines at a sampling of points along the parabola, find the slope of those tangent lines, plot those slope values, and approximate the slope of \(y = x^2 + 3\), and you would find that it came close to \(2x\). I can't speak for everyone here, but I find doing that unbelievably boring. An algebra teacher made me do it once, and it was tiresome to say the least.

But calculus and the tool of differentiation give us a much better way. Remember, mathematicians do not "invent" new kinds of math to torture students and non-mathematicians. They develop new techniques because the old way was inefficient or tedious or just didn't work all that well. Calculus is a great example of this. Rather than calculating a bunch of individual slope points and extrapolating what we think the slope is, we can find the exact slope with a few simple rules and a little new notation.

Let's look at our parabola again. We have the equation $$y = x^2 +3$$ which describes the line itself. If we want to say that we are looking at the equation for the slope of that line, we can write it in Leibniz notation as $$ \frac{dy}{dx}$$ which is nice and concise (there are also Lagrange notation and Newton notation). But Leibniz notation is nice to begin with because it has a nice math-to-English translation: the change in \(y\) over the change in \(x\). This is the more formal way to say "rise over run" and is more generally applicable. Also, now that we are finding the slope of the parabola everywhere, we call it a "derivative", and we find it by the process of "differentiation".

To find this, we need two rules. The first rule is formally known as the "elementary power rule" but I just learned it as "this is how you do it". For a function \(f(x)\) that has the form (i.e., it looks like or follows the pattern of) $$ f(x) = c x^n $$ where \(c\) is a constant, \(x\) is the variable and \(n\) is a real number (usually integer, but not necessarily) the derivative can always be found in the following manner: $$\frac{df}{dx} = c*n*x^{n-1} $$ If you are wondering what that \(f(x)\) is doing here, since I kinda just started using it, think of it as a way to label a generic equation. You could keep saying \(y=\) such and such, but then it's not always clear which \(y\) you're talking about. If you instead use the notation of Letter(variable) it lets you label both the equation uniquely (function f, function g, function h) and specify which letter is acting as your variable (x, y, z). Neat, huh?

That's it. That is the most basic rule and definition of the derivative. For the special case where there is no variable, just a constant, the derivative of a constant is \(0\). So, to summarize,

  1. Given a function \( f(x) = c x^n\), the derivative is \(\frac{df}{dx} = c*n*x^{n-1}\).
  2. Given a constant function \(f(x) = c\), the derivative is \( \frac{df}{dx} = 0 \)

So, let's apply these rules to the equation for our parabola.
$$y = x^2 + 3$$
$$\frac{dy}{dx} = (2) x^{(2-1)} + 0$$
$$\frac{dy}{dx} = 2x^1 = 2x$$

And so we find in three lines of calculus the same answer that a bunch of line drawing and measuring and plotting got you. Let's try another one, one that's a little longer.
And really funky looking on a graph.
$$f(x) = 3 x^{5} - 2 x^{2} + x^{-3} $$
$$\frac{df}{dx} = 3*5 x^{(5-1)} - 2*2 x^{(2-1)} + -3 x^{(-3-1)} $$
$$\frac{df}{dx} = 15 x^{4} -4 x - 3 x^{-4} $$
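As an aside, if you have Python with the sympy library handy (my tooling choice, not part of the lesson), you can have it check both of these worked examples:

```python
import sympy as sp

x = sp.symbols('x')

# The parabola and the funky polynomial from above
print(sp.diff(x**2 + 3, x))                 # 2*x
print(sp.diff(3*x**5 - 2*x**2 + x**-3, x))  # 15*x**4 - 4*x - 3/x**4
```

Same answers, no tangent lines drawn.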

That second example was longer, but still not too bad, right? See, I told you calculus wasn't the terror it was made out to be. One more rule and we've knocked out all the differential calculus we'll need for both physics 1 and physics 2. This rule is called the "chain rule" and it covers almost every other situation we could face outside of a calculus book or more advanced physics. What it is, really, is a shortcut for when your variable of interest is buried inside a parenthetical expression, so you don't have to bother separating it out by algebra (if it can be separated by algebra at all).

Let's start with something that we could mess around with algebraically and get into a form our first two rules apply to. Let's begin with the equation $$g(x) = (x+2)^2$$ By using the FOIL method, we find that this could also be stated as $$g(x) = x^2 + 4 x + 4$$ Using the two rules laid out above, we find that its derivative is $$\frac{dg}{dx}= 2x + 4$$ Now we have something to check the chain rule against.
Displaced parabola!

The chain rule is a way to approach these things methodically, working from the outside in. You start by treating everything inside the parentheses as a block. It does not matter how complicated it is inside the parentheses, or how simple. Treat it all as though it were just the variable. So step one of the chain rule gives us $$\text{ Step 1: } \frac{dg}{dx} = 2 (x+2)^{2-1}$$
Now you take the derivative of what's inside the parentheses, and multiply that result by the result of Step 1. $$\text{Step 2: } \frac{dg}{dx} = 2(x+2)^{1} (1+0) = 2x+4$$
Lo and behold, it's the same result. Now, for something this simple, is using the chain rule worth it? Maybe, maybe not. But what about something that I don't know how to FOIL, like $$h(x)= (x+2)^{-1/2} =\frac{1}{\sqrt{x+2}} $$
How do you FOIL a square root?! Tell me!
Let's try the chain rule on this and see if it doesn't save us having to look that one up in an obscure algebra text.
Step 1: Ignore what's inside parentheses, take the derivative as if (blah blah) = variable.
$$\frac{dh}{dx} = (-.5)(x+2)^{(-.5 - 1)} $$
Step 2: Take the derivative of what's inside the parentheses, multiply it by Step 1.
$$ \frac{dh}{dx} = -.5(x+2)^{-1.5} (1)$$
Step 3: Simplify if necessary
$$\frac{dh}{dx} = -.5 (x+2)^{-\frac{3}{2}} = \frac{-1}{2 (x+2)^{\frac{3}{2}}}$$

I can guarantee that that was easier than trying to FOIL a square root. But what about something really nasty, like THIS
Honestly had no idea what this would look like before I graphed it
Behold, the rollercoaster that is $$ k(x) = (x^3 + 2)^{-.5}$$ Surely my nasty, terrifying calculuses gets horrifying and complicated now, heh? Stupid physicistses.

Um, nope. Not really. Let's take a look, shall we?
Step 1: Ignore what's inside parentheses, take the derivative as if (blah blah) = variable.
$$\frac{dk}{dx} = (-\frac{1}{2})(x^3+2)^{(-.5 - 1)} $$
Step 2: Take the derivative of what's inside the parentheses, multiply it by Step 1.
$$ \frac{dk}{dx} = (-\frac{1}{2})(x^3+2)^{-1.5} (3x^2)$$
Step 3: Simplify if necessary
$$\frac{dk}{dx} = \left(\frac{-3x^2}{2}\right)(x^3+2)^{-1.5} = \frac{-3x^2}{2 (x^3+2)^{\frac{3}{2}}}$$

Still just three bite-sized steps.
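And if you'd like a machine to double-check the chain rule work (again, Python with sympy, purely as a checking aid):

```python
import sympy as sp

x = sp.symbols('x')

# g, h, and k from the chain rule examples above
for f in ((x + 2)**2,
          (x + 2)**sp.Rational(-1, 2),
          (x**3 + 2)**sp.Rational(-1, 2)):
    print(f, "->", sp.simplify(sp.diff(f, x)))
```

The printed derivatives match the hand-worked results above.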

Ah ha, you say, but what if there are parentheses inside the parentheses? What if I have a Russian nesting doll of a problem?

You just repeat step 2 until you run out of parentheses inside parentheses. I honestly can't say that I've ever seen that happen, but just in case, here's a walkthrough.
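Take this function, which I cooked up purely for illustration: $$m(x) = \left( (x^2+1)^3 + 5 \right)^2$$ Step 1 treats the whole outer parenthetical as the variable: $$\frac{dm}{dx} = 2\left((x^2+1)^3 + 5\right)^{1} * [\text{derivative of the inside}]$$ Step 2 takes the derivative of the inside, \((x^2+1)^3 + 5\), which needs the chain rule again, giving \(3(x^2+1)^2 * 2x\). Multiplying everything together and simplifying: $$\frac{dm}{dx} = 2\left((x^2+1)^3 + 5\right) * 3(x^2+1)^2 * 2x = 12x(x^2+1)^2\left((x^2+1)^3 + 5\right)$$ Same three bite-sized steps, with step 2 applied twice.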

And that's nearly all you really need to know about differential calculus to conquer introductory physics! Hopefully you can see, at least a little bit, why physicists and mathematicians love it. It's like upgrading from a hand drill to a power drill. Or a sheet of sandpaper to a power sander. It might take a little getting used to, but it is a very powerful tool in our toolbox and one that will open up the rules of the physical universe to us in a way that algebra just can't. Because as I said in the beginning, the physical world is dynamic and changing, and algebra is the math of the static and stable.

There are two "special cases" that aren't really special cases that we will need, and they are very easy to use, but a bit lengthy to explain, so I'll cover them in a separate section, partly because they are both really cool, and partly because this post is already pretty long.

If anything is still unclear, or even a little foggy, let me know in the comments and I'll do my best to explain! And I hope to see you next time for integration!

* There are a few significant exceptions to this, which we don't have to be concerned with here. If you are interested in knowing more about these exceptions, Brownian motion is a particularly interesting case, being continuous everywhere and differentiable nowhere.

Thursday, July 24, 2014

Kill the myth of "stupid"

For about a month now, I've been plugging away at a series called "Basic Physics", trying to go through a first year physics curriculum in a way that is understandable to people who aren't in STEM, and may not have even looked at 'math' in years. My mother has kindly been acting as one of the guinea pigs for this experiment, reading through posts and giving me feedback on what is or isn't clear, is or isn't helpful. The last post, on vector multiplication, was particularly difficult for the both of us. It's hard to explain simply, and she really wanted to understand it in the same way she understood the trig section (after some rewrites at her suggestion). Every time we spoke and she said she still didn't get it, she would apologize "for being so stupid".

Now, stupid isn't a word I would use to describe my mother, and I sincerely doubt she has ever honestly been accused of it in her life. I reassured her that these were not easy topics, and pointed out that I had been complaining to her for at least 2.5 years now that my students, who nominally should walk into my classroom knowing this stuff, don't get it. I added a paragraph of encouragement at the top of the post, which seemed to help, because I got this as a response:
ok I realized that I was trying too hard.
I get it now because I accept your math without trying to do it in my head every step.
Bring on the next chapter.
I called her up later in the day to thank her, because I realized that she probably hadn't been looking to learn this stuff before I asked for her help. She is a very gracious woman, and said she was always open to learning, but again apologized for being "stupid".

After we hung up, I realized that this is a refrain I have heard over and over when teaching: "I'm sorry I'm being so stupid". I've heard it from students in class, in office hours, in tutoring sessions back in college, and now from my mother. The general sentiment always seems to be that if they can't get it on the first go round, they are stupid and incapable rather than the reality that the topic is difficult. My students have gone so far as to tell me that I must be far more intelligent than them to understand this stuff.

There is an article in the New York Times today whose headline is "Why Do Americans Stink at Math?" and I can't help but think that the refrain of "I'm sorry I'm so stupid" and headlines like this are connected. Connected because they reinforce this idea that people "stink" at math in bulk. There is this weird perception that math is something only special people are good at, that you have to have some innate ability to do it and understand it. That people who are good at math always use the Feynman method of problem solving: write the problem down, think about it, write down the solution. The idea that math people look at a new math topic and go "Oh, of course! Obviously this is true" and run off and use it seems to be weirdly pervasive, both consciously and unconsciously.

Of course, it would be lovely if this were true. I could have whole years of my life back if this were true. And of course it feels nice to be on the math people side of this, because it makes one feel smart and talented when in your work you frequently feel frustrated. It's like payback for the mockery, real or perceived, for being STEM types with all the cultural baggage that goes with it.

But I think it is also incredibly toxic. If math is something only special people can do, then why should ordinary people try? If we ignore or hide away our own struggles with understanding, we encourage this myth and scare people away who, even if they aren't in STEM, might enjoy seeing the beauty of it all. And it is beautiful. Being able to see the world with math and science at your back is awe inspiring, adding a whole new dimension to everything you can look at and experience.

I know very, very few people who haven't struggled to grasp every math and physics concept when it was first introduced. I think I've known two in my entire life. I was on the 'elevated' math track in school, which means I got all the way through AP Calc BC in high school. And I still struggled, and struggle, with math. What my students (and my mother) never saw was me with Wikipedia on my laptop and my calc book open as I desperately tried to understand different kinds of integrals, or tests for convergence. They never saw the early mornings between classes and the late nights in the physics lounge with scratch paper everywhere, chalk-covered hands, asking anyone who entered the room, "Can you explain this? What is a [cross product, wave equation, probability density, etc.]?" The extended arguments that eventually ended up with the stuffed monkey Harold on one of the professors' doors in a plea for help. They will never know how much help I got from professors, from other students, from older students as I struggled to learn this stuff that I now seem so natural at. I'm not smarter than them. I was just persistent. When my students see me reduce a fraction on the board, or quickly do a cross product, they assume it's just natural to me, like music is natural to my dad. What it really is is 7 years more experience and work.

Now, is there some natural inclination involved? Sure. But not nearly as much as people seem to think. Being good at anything, regardless of natural inclination, requires work above all else. My sister is more naturally inclined than I towards languages; she also studied more and is therefore far more fluent than I am (as in, actually fluent). No matter what your natural talent and inclination, if you never work at it, it will wither and dry up. And while you may never be a prodigy, hard work can get a person far in pretty much anything that's not sports.

People don't seem to believe me when it comes to math and science, so here's an analogy. I enjoy cooking. At this point in my life I am pretty good at it. I can make recipes up on the fly and nine times out of ten they work. I can tell if a cake is done by appearance and a light poke; I know if my steak is done to my liking by touch. Now, is there some natural inclination at work? Maybe. My mother is an excellent cook, and let me mess around in the kitchen at an early age. But mostly it's because I've been cooking for over half my life. Because I read cookbooks and watched masters and purposefully worked on my techniques, my understanding of the underlying food chemistry, the physics of different methods of cooking. Anything I am good at is maybe 5% natural talent, 95% work. Five percent alone gets you absolutely nowhere. Ninety-five percent alone can get you pretty far.

This is something that we need to work on emphasizing more. We need fewer Sheldon Coopers and Charlie Eppeses, boy geniuses grown up and solving MATH. We need to make it clear that what we do is not magic, not the result of some fluke of genetics that gave us special math powers. Something sparked an interest and we pursued it to the best of our abilities. We weren't destined to become mathematicians/physicists/chemists/what-have-you any more than non-STEM people were destined to be librarians/writers/bankers/secretaries/what-have-you. We chose to be what we are, and we worked hard to get here. Of course, this means admitting that we aren't special beings with math vision. But if we want to encourage people to engage with STEM, we need to kill this myth of "stupid".

Wednesday, July 23, 2014

Basic Physics: Part 0, Section 3: Vector Multiplication

Welcome back to my Basic Physics series! In previous sections, I covered some basic algebra topics, the necessary trig functions, coordinate systems, and vectors. The last post was getting rather long, and since vector multiplication is a bit tricky, I broke that topic into its own section, which you have before you! And I'm going to further preface this section with this: if you read through it and you aren't sure you understand what is going on, don't give up! It's not easy to understand, and it looks downright weird; I think my reaction upon first introduction was something along the lines of 'what new devilry is this?!'. I'm pretty sure I was using vector multiplication for at least 4 years before really understanding what it is. It didn't stop me from getting a B.Sc. in physics, and it shouldn't discourage you from continuing with this series, because it is possible to use these things without really understanding them. Think of it like a car--you use a car every day, you can make it do what you want, but almost everything under the hood is a mystery. That doesn't mean you can't use it to get where you want to go! They also get easier with physical examples, which we will get to in coming posts.

There are two types of vector multiplication. Each has its own uses and peculiarities. This section is going to cover what they are, their peculiarities, and how to do them. They pop up repeatedly in physics, so we'll discover their myriad uses along the way. Let's bring back our two arbitrary vectors from the last post 
$$\vec{v} = a \hat{x} + b \hat{y} + c \hat{z}$$
$$\vec{w} = d \hat{x} + e \hat{y} + f \hat{z}$$
and see what weird things we can do with them!

The first type of multiplication is called the "dot product" (remember that a "product" is whatever results from a multiplication), and we get the new symbol \(\cdot \) to indicate that we are combining two vectors using this method. It is fairly straightforward: you multiply the \(\hat{x}\) components together, you multiply the \(\hat{y}\) components together, you multiply the \(\hat{z}\) components together, and then you add up the results.
$$\vec{v} \cdot \vec{w} = (a \hat{x} + b \hat{y} + c \hat{z})\cdot(d \hat{x} + e \hat{y} + f \hat{z})$$
$$\hspace{15 pt} = (a*d) + (b*e) + (c*f) = ad+be+cf$$
Interestingly, a dot product of two vectors does not yield a new vector with magnitude and direction, but rather a "scalar" which just has magnitude. This little quirk becomes very important when we get to electromagnetism. It should also be noted that the dot product is commutative, which is to say that 
$$ \vec{v} \cdot \vec{w} = \vec{w} \cdot \vec{v} $$
The same cannot be said of the other type of vector multiplication, which we will get to shortly. 

So, what does a dot product tell you? Speaking geometrically, it gives you the length of the projection of one vector onto another, multiplied by the length of the vector being projected onto. That's about as clear as mud, so let's look at some pictures. Here we have two vectors, \(\vec{A}\) (red), and \( \vec{B}\) (green), with an angle \( \theta \) between them.


If we draw a line between the end of \(\vec{A}\) and the point on \( \vec{B}\) where that line meets \( \vec{B}\) at a right angle

we now have a right triangle. And we know from two weeks ago in the trigonometric section what to do with right triangles. We are given the angle, so by using the cosine function we can find the length of the projection of \(\vec{A}\) onto \(\vec{B}\), kind of like finding the length of your shadow. Let's label that projection \(A_B\) since it's the shadow of \(\vec{A}\) on \( \vec{B}\). So
$$ A_B = |\vec{A}| \cos{(\theta)} $$
with the vertical bars indicating that we are using the total length, the magnitude, of \(\vec{A}\) and not the vector form. The direction information is not helpful when dealing with triangles. So now we have the length of the projection of \(\vec{A}\) onto \(\vec{B}\). Assuming we have the length of \(\vec{B}\), we can now find the geometric value of the dot product
$$ \vec{A} \cdot \vec{B} = |\vec{A}| \cos{(\theta)} |\vec{B}|$$
or, more prettily and more commonly,
$$ \vec{A} \cdot \vec{B} = |\vec{A}|  |\vec{B}|\cos{(\theta)}$$
This is another way to calculate the dot product, and is handy if you have been given magnitudes and angles, and not the component form for your vectors. 

But what does this mean, I can imagine you asking. Recall the idea of orthogonality from the previous section. The dot product allows you to relate two vectors based on the degree to which they are orthogonal to each other. If two vectors are perfectly orthogonal, the angle between them is \(90^{\circ}\), there is no projection, no 'shadow' of one vector onto the other, the cosine is zero, and the dot product is zero. The vectors are completely unrelated to one another, and they have no multiplicative interaction. If, on the other hand, the angle between them is \( 0^{\circ}\), then the cosine between them is 1 and they are parallel. They are both going in the same direction and can have the largest multiplicative effect on one another. (If you happen to dot a vector with itself, the angle will be zero, the magnitudes will be identical, and you will get the square of the magnitude. This is in fact how the magnitude of a vector is defined--the square root of the vector dotted with itself.) For any angle in between, the effect is proportionately diminished. This will be easier to see when we can give some physical examples.
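To make this concrete in the meantime, here's a quick numerical check using Python's numpy library (my choice of tool, with two made-up vectors; \(\vec{B}\) lies along the x-axis on purpose, so the angle between the vectors is just \(\vec{A}\)'s angle from the x-axis):

```python
import numpy as np

A = np.array([3.0, 4.0, 0.0])
B = np.array([5.0, 0.0, 0.0])   # along the x-axis on purpose

# Component form: ad + be + cf
print(np.dot(A, B))             # 15.0

# Geometric form: |A||B|cos(theta), with theta found independently
theta = np.arctan2(A[1], A[0])
print(np.linalg.norm(A) * np.linalg.norm(B) * np.cos(theta))  # ~15.0
```

Both forms give 15: \(3 \cdot 5 + 4 \cdot 0 + 0 \cdot 0\) on one hand, \(5 \cdot 5 \cdot 0.6\) on the other.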

The second type of vector multiplication is the "cross product", and the \( \times \) symbol which you probably learned to use for regular old multiplication back in elementary school gets reserved for this particular operation from here on in. Regular multiplication is usually indicated either by abutting parentheses, by an asterisk, or in the case of variable/coefficient terms just writing them next to each other without a space, as in the dot product example above. However, it is not generally as straightforward a calculation as the dot product, because the result of a cross product is still a vector. 

In order to calculate the cross product from scratch, as it were, we need to borrow a tool from linear algebra, namely the determinant*. The determinant is a way to arrange vectors so that you can easily calculate the cross product, no matter the size of your vectors. It is basically an organizational tool.  Once again, let's use the general vectors \(\vec{v}\), \(\vec{w}\), and calculate the cross product of \( \vec{v} \times \vec{w}\). 
$$ \vec{v} \times \vec{w} =\begin{vmatrix}\hat{x}& \hat{y} & \hat{z} \\ a & b & c \\ d & e & f  \end{vmatrix}$$
First, let's dissect what this thing is, line by line. 

The first row is a label row of sorts. Its entries label each column, and they will help label the results. All the \(\hat{x}\) components go in the column under the \(\hat{x}\); if there is no \(\hat{x}\) component to a particular vector, that column gets a 0 for an entry in that vector's row. The same goes for the rest of the directions, namely \( \hat{y}\) and \(\hat{z}\).


The second row is where the elements from the first vector, in this case \(\vec{v}\), are placed in their respective columns. Remember, if a vector is lacking an element, it is entered as a zero; the column is not deleted entirely, even if all it contains is the label and zeros.

The third row is treated in the same manner as the second row, except that it contains the second vector, in this case \(\vec{w}\). It's rather like filling out a spreadsheet. 

Now what do we do with this thing? 

You first calculate the \(\hat{x}\) component. To do this, you imagine blocking out everything that shares the \(\hat{x}\) column and row, leaving you with a square of components that are not the \(\hat{x}\) component

With those remaining 4 elements, you multiply the diagonal elements, and subtract the lower left/upper right pairing from the upper left/lower right pairing. So the results of this step are 
$$\hat{x} (bf-ec) $$
Note that the \(\hat{x}\) component of the product contains every element except the \(\hat{x}\) components of the  original vectors. Cool, right?

The second step is a little strange, because by blocking out everything in the \(\hat{y}\) element's column and row, we get a kind of split square, or two rectangles. 

What this means is that this component gets subtracted in the final result. You multiply the elements from the two rectangles as though they formed one square, so this component of the product gives us
$$-\hat{y}(af - cd)$$
So far, so good. 

The last component is almost identical to the first. 

And you find the results the same way you did for the \(\hat{x}\) portion of the product. So the final bit is 
$$\hat{z}(ae-db)$$
Weird, but not too horribly complicated.  So the final result is 
$$\vec{v} \times \vec{w} = \hat{x} (bf-ec) -\hat{y}(af - cd) + \hat{z}(ae-db) $$

Some people are able to simply memorize the result given above, and don't bother to use the determinant. I am not one of those people gifted in memorizing formulae.  It's easier for me to remember a compact method or tool than to memorize lines of elements. 

It should be noted that the cross product is not commutative--the order in which things are multiplied matters, unlike the dot product. That is to say, \( \vec{v} \times \vec{w} \neq \vec{w} \times \vec{v}\). It is, however, anticommutative, which means that reversing the order negates the result: \( \vec{v} \times \vec{w} = - \vec{w} \times \vec{v}\).

So, what does the cross product give you? This is a little easier to state than the dot product: the cross product calculates the area of the parallelogram whose sides are defined by the two vectors.
So what's all that vector information doing? Well, hold on to your socks: it turns out that area is a vector quantity. Yep, and the direction of that area is normal to the surface whose area it measures. So if you imagine standing on that parallelogram, whichever direction is straight out of your head is the direction that area points! This also makes clearer an important point about the cross product, which is that the resulting vector from a cross product is necessarily orthogonal to its two parent vectors. The two vectors must lie in a plane to form a parallelogram, and the normal to that parallelogram must be normal to the two vectors defining it.

The fact that the cross product calculates the area of a parallelogram leads us to our last point. What if you don't need to know which way the area is pointing, for some reason? You just need the magnitude of the result, not the whole thing. Well, if you remember your geometry you might recall that the area of a parallelogram can be found by multiplying the lengths of the sides and the sine of the angle between the sides. The same formula works for the cross product in a way. Assuming you have the magnitudes of each vector and the angle between them, you can find the magnitude of the cross product, at the cost of the direction information. 
$$|\vec{v} \times \vec{w} | = |\vec{v}| |\vec{w}| \sin{(\theta)}$$
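If you'd like to verify the determinant machinery, the anticommutativity, and this magnitude formula without grinding through the algebra by hand, here's a numpy sketch (two made-up vectors, my own checking aid):

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])   # a, b, c
w = np.array([4.0, 5.0, 6.0])   # d, e, f

# The determinant result from above: (bf - ec, -(af - cd), ae - db)
by_hand = np.array([2*6 - 5*3, -(1*6 - 3*4), 1*5 - 4*2])
print(by_hand)                  # [-3  6 -3]
print(np.cross(v, w))           # [-3.  6. -3.]
print(np.cross(w, v))           # [ 3. -6.  3.]  (anticommutative!)

# Magnitude check: |v x w| = |v||w|sin(theta)
theta = np.arccos(np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w)))
print(np.linalg.norm(np.cross(v, w)))                         # ~7.3485
print(np.linalg.norm(v) * np.linalg.norm(w) * np.sin(theta))  # ~7.3485
```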

That wraps up what you need to know about vectors. I know it may seem like a lot, but remember that these are tools in our tool kit for physics. We will be using them frequently, and like any skill they become easier with use, and you get to build more incredible things the better you become!

As always, I hope that I have explained things clearly. If I haven't, please let me know in the comments, and I'll do my best to clarify!

*DH objected to this section, because what physicists call a determinant is technically a 'formal determinant', as it has the right form but does not adhere to the strict definition used by mathematicians. If you should show this to a mathematician, they will twitch, and possibly rant about physicists. This is the normal reaction of mathematicians to physicist notation. 

However, the math editor of this blog, sirluke777, objects to DH's objection, and says it's perfectly fine. His background is math, physics and chemistry, so make of that what you will. If there is an outcome to this math geek debate, I will update here.