Wednesday, September 10, 2014

Publishing Research and other stuff

So I know the next installment of Basic Physics is several weeks overdue, but there has been so much going on I haven't had time to do it justice. So here's a post on what's been going on!

Firstly, my first paper got accepted for publication! This is a research project that I had been fighting with for well over a year, and the results were/are really cool. It's also my first first-author paper, which is a really big deal in the sciences (at least my branch of the sciences). I don't know of an equivalent outside of research circles.

I'm working on new but related research projects, which will hopefully bear fruit soon.

DH got a job in a city that is just far enough away to make commuting 5 days a week untenable, so we are slowly transitioning our lives to an apartment in new city, with me getting the house ready to rent out in our old city. So, ya know, that's a bit time and energy consuming.

I'm teaching half time this semester, which is great, but eats my Thursdays between prep and teaching and seminar and teaching and then eats a couple hours not on Thursdays for grading and getting lesson plans and weekly tests ready for myself and the other two TAs to use.

I have to write, present, and get approved a prospectus/research plan by the end of the semester or get kicked out. It is the vaguest, most important piece of writing I have had to do to date.

I'm also attending the Frontiers in Optics conference in October! Which is going to be fantastic and exhausting and in Arizona! It's also forcing me to actually get some more 'professional' looking clothes, which for me basically means I didn't make them and/or I couldn't rake leaves in them. I am not going to be removing my earrings unless my advisor specifically says otherwise though. They are a part of me, and besides my hair provides decent camouflage.

I also seem to be morphing into a high classical-christianity Anglican instead of a good Calvinist-Presbyterian, and I have to write a separate post on that.

So as you can see, there is A LOT going on in my life right now, so if the postings are a bit thin on the ground, hopefully you can forgive me.


Tuesday, August 19, 2014

Basic Physics: Part 0, Section 4: Derivatives

Previously in this series, we covered algebra, trigonometry, vectors and vector multiplication. Now (after more delay than I would have liked) it's time to tackle the elephant in the room--calculus.

No, please don't close this tab! I swear, it's not as scary as you've been told. If you made it through trig and vectors (which, if you are reading this I assume you have) you've really made it through more mind bending stuff than we will need to cover here.

Why cover calculus at all? Aren't there algebra-based physics courses at every university? Yes, yes there are. And anyone who has taught physics with only algebra, trig and vectors will tell you it's actually harder to teach physics without reference to derivatives and integrals. Newton invented/discovered calculus so he could describe his theory of gravity and motion (his notation was abysmal, though). Calculus is the mathematics of change. Algebra is the mathematics of stability. And physics is really boring if nothing ever moves.

Now, depending on when you went to school, learning calculus may have been reserved for the students who were good at math, or who hadn't been told that "math wasn't for them". I am here to tell you this is like telling students who are going to live in another country that they don't need to learn the past or future tense, they can get along just fine with the present tense. Technically, this is true in a lot of cases, but it limits their ability to get everything out of their trip. Try to think of calculus in this way--not as some strange new kind of math, but just a different tense in this language.

We'll begin where most calculus textbooks begin: with derivatives. Calculus has a very intuitive explanation of derivatives: they are the slopes of lines. That's it. What makes derivatives interesting is that they give you the slope at any point along a line*. You will generally hear the included caveat that the line must be smooth and continuous, but this isn't a calc class and I'm not going to show you any equations which are not differentiable (capable of having their derivative taken), so we aren't going to worry about that here.

Let's start with the simplest case, a straight line going through the origin of our coordinate system:

In this case, the slope of the line is going to be the same everywhere, and we can find the slope using the tried and true "rise over run" method. In moving 4 units to the right, the line has moved 3 units up, so our slope \(a\) is $$ a = 3/4 = .75 $$ So far so good. Nothing scary or uncomfortable  to date. A little algebra, that's all, and a little reading off a plot. Now, what if we were just given the equation for this line, in the slope-intercept form encountered in algebra class: $$ y = ax$$ $$y = .75 x$$
Still not too bad. And if I had presented this to you first, you probably could have told me the slope of this line just from this--the coefficient of \(x\) gives the slope, so \(.75\). Congratulations, you just took your first derivative without knowing it!

So, if derivatives are that easy, you ask your computer suspiciously, why is there an entire semester of calculus dedicated to it, hmm? Well, two reasons. First of all, because there are way more complicated kinds of lines than straight lines, and second of all, no one dedicates an entire semester to derivatives. They usually also teach limits (proto-derivatives) and numerical integration (proto-integrals) in the same semester. Derivatives are usually 4-5 weeks of the semester, a lot of that learning special cases.

What if I gave you the line with the equation $$ y = x^2 + 3, $$ would you know what its slope is? It looks similar to the linear equation in slope-intercept form, but you probably have a feeling that the \(x\) being squared complicates things. And it does, since \(x^2\) is a parabola.


Now, you could draw tangent lines at a sampling of points along the parabola, find the slope of those tangent lines, plot those slope values, and approximate the slope of \(y = x^2 + 3\), and you would find that it came close to \(2x\). I can't speak for everyone here, but I find doing that unbelievably boring. An algebra teacher made me do that once, and it was tiresome to say the least.

But calculus and the tool of differentiation gives us a much better way.  Remember, mathematicians do not "invent" new kinds of math to torture students and non-mathematicians. They develop new techniques because the old way was inefficient or tedious or just didn't work all that well. Calculus is  a great example of this. Rather than calculating a bunch of individual slope points and extrapolating what we think the slope is, we can find the exact slope with a few simple rules, and a little new notation.

Let's look at our parabola again. So we have the equation $$y = x^2 +3$$ which describes the line itself. If we want to say that we are looking at the equation for the slope of that line, we can write it in Leibniz notation as $$ \frac{dy}{dx}$$ which is nice and concise (there is also Lagrange notation and Newton notation). But Leibniz notation is a good place to begin because it has a straightforward math-to-English translation: the change in \(y\) over the change in \(x\). This is the more formal way to say "rise over run" and is more generally applicable. Also, now that we are finding the slope of the parabola everywhere, we call it a "derivative", and we find it by the process of "differentiation".

To find this, we need two rules. The first rule is formally known as the "elementary power rule" but I just learned it as "this is how you do it". For a function \(f(x)\) that has the form (i.e., it looks like or follows the pattern of) $$ f(x) = c x^n $$ where \(c\) is a constant, \(x\) is the variable and \(n\) is a real number (usually integer, but not necessarily) the derivative can always be found in the following manner: $$\frac{df}{dx} = c*n*x^{n-1} $$ If you are wondering what that \(f(x)\) is doing here, since I kinda just started using it, think of it as a way to label a generic equation. You could keep saying \(y=\) such and such, but then it's not always clear which \(y\) you're talking about. If you instead use the notation of Letter(variable) it lets you label both the equation uniquely (function f, function g, function h) and specify which letter is acting as your variable (x, y, z). Neat, huh?

That's it. That is the most basic rule and definition of the derivative. For the special case where there is no variable, just a constant, the derivative of a constant is \(0\). So, to summarize,

  1. Given a function \( f(x) = c x^n\), the derivative is \(\frac{df}{dx} = c*n*x^{n-1}\).
  2. Given a constant function \(f(x) = c\), the derivative is \( \frac{df}{dx} = 0 \)

So, let's apply these rules to the equation for our parabola.
$$y = x^2 + 3$$
$$\frac{dy}{dx} = (2) x^{(2-1)} + 0$$
$$\frac{dy}{dx} = 2x^1 = 2x$$
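If you happen to have Python handy, you can even check this result numerically (this is an optional aside, not part of the post's toolbox; the function names are just my own labels). The sketch below compares the power rule's answer, \(2x\), against a "rise over run" measured directly from the curve over a tiny run.

```python
# Numerical sanity check of the power rule for y = x^2 + 3,
# using only the Python standard library.

def y(x):
    return x**2 + 3

def dy_dx(x):
    # The power rule's answer: 2x (the constant 3 differentiates to 0)
    return 2 * x

def slope_numeric(f, x, h=1e-6):
    # "Rise over run" over a tiny run of 2h centered at x
    return (f(x + h) - f(x - h)) / (2 * h)

for x in (-2.0, 0.0, 1.5, 4.0):
    assert abs(dy_dx(x) - slope_numeric(y, x)) < 1e-4
```

The loop checks several different points, and the rule agrees with the measured slope at all of them, which is exactly the "slope at any point" promise of the derivative.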

And so we find, in three lines of calculus, the same answer that a bunch of line drawing and measuring and plotting got us. Let's try another one, that's a little longer.
And really funky looking on a graph.
$$f(x) = 3 x^{5} - 2 x^{2} + x^{-3} $$
$$\frac{df}{dx} = 3*5 x^{(5-1)} - 2*2 x^{(2-1)} + -3 x^{(-3-1)} $$
$$\frac{df}{dx} = 15 x^{4} -4 x - 3 x^{-4} $$

Longer, but still not too bad, right? See, I told you calculus wasn't the terror it was made out to be. One more rule and we've knocked out all the differential calculus we'll need for both physics 1 and physics 2. This rule is called the "chain rule" and it covers almost every other situation we could face outside of a calculus book or more advanced physics. What it is, really, is a short cut when your variable of interest is buried inside a parenthetical expression, instead of having to bother to separate it out by algebra (if it can be separated by algebra at all).

Let's start with something that we could mess around with algebraically to get into a form where our first two rules apply. Let's begin with the equation $$g(x) = (x+2)^2$$ By using the FOIL method, we find that this could also be stated as $$g(x) = x^2 + 4 x + 4$$ Using the two rules laid out above, we find that its derivative is $$\frac{dg}{dx}= 2x + 4$$ Now we have something to check the chain rule against.
Displaced parabola!

The chain rule is a way to approach these things methodically, working from the outside in. You start by treating everything inside the parentheses as a block. It does not matter how complicated it is inside the parentheses, or how simple. Treat it all as though it were just the variable. So step one of the chain rule gives us $$\text{ Step 1: } \frac{dg}{dx} = 2 (x+2)^{2-1}$$
Now you take the derivative of what's inside the parentheses, and multiply that result by the result of Step 1. $$\text{Step 2: } \frac{dg}{dx} = 2(x+2)^{1} (1+0) = 2x+4$$
Lo and behold, it's the same result. Now, for something this simple, is using the chain rule worth it? Maybe, maybe not. But what about something that I don't know how to FOIL, like $$h(x)= (x+2)^{-1/2} =\frac{1}{\sqrt{x+2}} $$
How do you FOIL a square root?! Tell me!
Let's try the chain rule on this and see if it doesn't save us having to look that one up in an obscure algebra text.
Step 1: Ignore what's inside parentheses, take the derivative as if (blah blah) = variable.
$$\frac{dh}{dx} = (-.5)(x+2)^{(-.5 - 1)} $$
Step 2: Take the derivative of what's inside the parentheses, multiply it by Step 1.
$$ \frac{dh}{dx} = -.5(x+2)^{-1.5} (1)$$
Step 3: Simplify if necessary
$$\frac{dh}{dx} = -.5 (x+2)^{-\frac{3}{2}} = \frac{-1}{2 (x+2)^{\frac{3}{2}}}$$
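As with the power rule earlier, you can sanity-check this chain-rule result numerically if you like. This Python sketch is an optional aside, not part of the post's required toolkit:

```python
# Chain-rule check for h(x) = (x + 2)^(-1/2).

def h(x):
    return (x + 2) ** (-0.5)

def dh_dx(x):
    # Step 1 times Step 2: -1/2 * (x + 2)^(-3/2) * 1
    return -0.5 * (x + 2) ** (-1.5)

def slope_numeric(g, x, eps=1e-6):
    return (g(x + eps) - g(x - eps)) / (2 * eps)

for x in (-1.0, 0.0, 3.0):   # need x > -2 so the square root is real
    assert abs(dh_dx(x) - slope_numeric(h, x)) < 1e-6
```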

I can guarantee that that was easier than trying to FOIL a square root. But what about something really nasty, like THIS
Honestly had no idea what this would look like before I graphed it
Behold, the rollercoaster that is $$ k(x) = (x^3 + 2)^{-.5}$$ Surely my nasty, terrifying calculuses gets horrifying and complicated now, heh? Stupid physicistses.

Um, nope. Not really. Let's take a look, shall we?
Step 1: Ignore what's inside parentheses, take the derivative as if (blah blah) = variable.
$$\frac{dk}{dx} = (-\frac{1}{2})(x^3+2)^{(-.5 - 1)} $$
Step 2: Take the derivative of what's inside the parentheses, multiply it by Step 1.
$$ \frac{dk}{dx} = (-\frac{1}{2})(x^3+2)^{-1.5} (3x^2)$$
Step 3: Simplify if necessary
$$\frac{dk}{dx} = (\frac{-3x^2}{2})(x^3+2)^{-1.5}  = \frac{-3x^2}{2 (x^3+2)^{\frac{3}{2}}}$$

Still just 3 bite sized steps.
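And the same quick numerical check (again, an optional Python aside) confirms the three steps work just as well on the "nasty" one:

```python
# Chain-rule check for k(x) = (x^3 + 2)^(-1/2).

def k(x):
    return (x**3 + 2) ** (-0.5)

def dk_dx(x):
    # Step 1: -1/2 * (x^3 + 2)^(-3/2); Step 2: times the inner derivative, 3x^2
    return -0.5 * (x**3 + 2) ** (-1.5) * (3 * x**2)

def slope_numeric(g, x, eps=1e-6):
    return (g(x + eps) - g(x - eps)) / (2 * eps)

for x in (0.0, 1.0, 2.0):   # keep x^3 + 2 positive so k is real
    assert abs(dk_dx(x) - slope_numeric(k, x)) < 1e-6
```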

Ah ha, you say, but what if there are parentheses inside the parentheses? What if I have a Russian nesting doll of a problem?

You just repeat step 2 until you run out of parentheses inside parentheses. But I honestly can't say that I've ever seen that happen.

And that's nearly all you really need to know about differential calculus to conquer introductory physics! Hopefully you can see, at least a little bit, why physicists and mathematicians love it. It's like upgrading from a hand drill to a power drill. Or a sheet of sandpaper to a power sander. It might take a little getting used to, but it is a very powerful tool in our toolbox and one that will open up the rules of the physical universe to us in a way that algebra just can't. Because as I said in the beginning, the physical world is dynamic and changing, and algebra is the math of the static and stable.

There are two "special cases" that aren't really special cases that we will need, and they are very easy to use, but a bit lengthy to explain, so I'll cover them in a separate section, partly because they are both really cool, and partly because this post is already pretty long.

If anything is still unclear, or even a little foggy, let me know in the comments and I'll do my best to explain! And I hope to see you next time for integration!

* There are a few significant exceptions to this, which we don't have to be concerned with here. If you are interested in knowing more about these exceptions, Brownian motion is a particularly interesting case, being continuous everywhere and differentiable nowhere.

Thursday, July 24, 2014

Kill the myth of "stupid"

For about a month now, I've been plugging away at a series called "Basic Physics", trying to go through a first year physics curriculum in a way that is understandable to people who aren't in STEM, and may not have even looked at 'math' in years. My mother has kindly been acting as one of the guinea pigs for this experiment, reading through posts and giving me feedback on what is or isn't clear, is or isn't helpful. The last post on vector multiplication was particularly difficult for the both of us. It's hard to explain simply, and she really wanted to understand it in the same way she understood the trig section (after some rewrites at her suggestion). Every time we spoke and she said she still didn't get it, she would apologize "for being so stupid".

Now, stupid isn't a word I would use to describe my mother, and I sincerely doubt she has ever honestly been accused of that in her life. I reassured her that these were not easy topics, and pointed out that I had complained to her for at least 2.5 years now that my students, who nominally should walk into my classroom knowing this stuff, don't get it. I added a paragraph of encouragement at the top of the post, which seemed to help because I got this as a response:
ok I realized that I was trying too hard.
I get it now because I accept your math without trying to do it in my head every step.
Bring on the next chapter.
I called her up later in the day to thank her, because I realize that she probably hadn't been looking to learn this stuff before I asked for her help. She is a very gracious woman, and said she was always open to learning, but again apologized for being "stupid".

After we hung up, I realized that this is a refrain I have heard over and over when teaching: "I'm sorry I'm being so stupid". I've heard it from students in class, in office hours, in tutoring sessions back in college, and now from my mother. The general sentiment always seems to be that if they can't get it on the first go round, they are stupid and incapable rather than the reality that the topic is difficult. My students have gone so far as to tell me that I must be far more intelligent than them to understand this stuff.

There is an article in the New York Times today whose headline was "Why Do Americans Suck at Math?" and I can't help but think that the refrain of "I'm sorry I'm so stupid" and headlines like this are connected. Connected because they reinforce this idea that people "suck" at math in bulk. There is this weird perception that math is something only special people are good at, that you have to have some innate ability to do it and understand it. That people who are good at math always use the Feynman method of problem solving: write the problem down, think about it, write down the solution. The idea that math people look at a new math topic and go "Oh, of course! Obviously this is true" and run off and use it seems to be weirdly pervasive, both consciously and unconsciously.

Of course, it would be lovely if this were true. I could have whole years of my life back if this were true. And of course it feels nice to be on the math people side of this, because it makes one feel smart and talented when in your work you frequently feel frustrated. It's like payback for the mockery, real or perceived, for being STEM types with all the cultural baggage that goes with it.

But I think it is also incredibly toxic. If math is something only special people can do, then why should ordinary people try? If we ignore or hide away our own struggles with understanding, we encourage this myth and scare people away who, even if they aren't in STEM, might enjoy seeing the beauty of it all. And it is beautiful. Being able to see the world with math and science at your back is awe inspiring, adding a whole new dimension to everything you can look at and experience.

I know very, very few people who haven't struggled to grasp every math and physics concept when they were first introduced. I think I've known two in my entire life. I was on the 'elevated' math track in school, which means I got all the way through AP Calc B in high school. And I still struggled and struggle with math. What my students (and my mother) never saw was me with Wikipedia on my laptop and my calc book open as I desperately tried to understand different kinds of integrals, or tests for convergence. They never saw the early mornings between classes and late nights in the physics lounge with scratch paper everywhere, chalk-covered hands, asking anyone who entered the room, "Can you explain this? What is a [cross product, wave equation, probability density, etc]?" The extended arguments that eventually ended up with the stuffed monkey Harold on one of the professors' doors in a plea for help. They will never know how much help I got from professors, from other students, from older students as I struggled to learn this stuff that I now seem so natural at. I'm not smarter than them. I was just persistent. When my students see me reduce a fraction on the board, or quickly do a cross product, they assume it's just natural to me, like music is natural to my dad. What it really is, is 7 years more experience and work.

Now, is there some natural inclination involved? Sure. But not nearly as much as people seem to think. Being good at anything, regardless of natural inclination, requires work above all else. My sister is more naturally inclined than I towards languages; she also studied more and is therefore far more fluent than I am (as in, actually fluent). No matter what your natural talent and inclination, if you never work at it, it will wither and dry up. And while you may never be a prodigy, hard work can get a person far in pretty much anything that's not sports.

People don't seem to believe me when it comes to math and science, so here's an analogy. I enjoy cooking. At this point in my life I am pretty good at it. I can make recipes up on the fly and nine times out of ten they work. I can tell if a cake is done by appearance and a light poke; I know if my steak is done to my liking by touch. Now, is there some natural inclination at work? Maybe. My mother is an excellent cook, and let me mess around in the kitchen at an early age. But mostly it's because I've been cooking for over half my life. Because I read cookbooks and watched masters and purposefully worked on my techniques, my understanding of the underlying food chemistry, the physics of different methods of cooking. Anything I am good at is maybe 5% natural talent, 95% work. Five percent alone gets you absolutely nowhere. Ninety-five percent alone can get you pretty far.

This is something that we need to work on emphasizing more. We need to emphasize fewer Sheldon Coopers and Charlie Eppeses, boy geniuses grown up and solving MATH. We need to make it clear that what we do is not magic, not the result of some fluke of genetics that gave us special math powers. Something sparked an interest and we pursued it to the best of our abilities. We weren't destined to become mathematicians/physicists/chemists/what-have-you any more than non-STEM people were destined to be librarians/writers/bankers/secretaries/what-have-you. We chose to be what we are, and we worked hard to get here. Of course, this means admitting that we aren't special beings with math vision. But if we want to encourage people to engage with STEM, we need to kill this myth of "stupid".

Wednesday, July 23, 2014

Basic Physics: Part 0, Section 3: Vector Multiplication

Welcome back to my Basic Physics series! In previous sections, I covered some basic algebra topics, necessary trig functions, coordinate systems, and vectors. The last post was getting rather long, and since vector multiplication is a bit tricky, I broke that topic into its own section, which you have before you! And I'm going to further preface this section with this: if you read through this and you aren't sure you understand what is going on, don't give up! It's not easy to understand, and it looks downright weird; I think my reaction upon first introduction was something along the lines of "what new devilry is this?!". I'm pretty sure I was using vector multiplication for at least 4 years before really understanding what it is. It didn't stop me from getting a B.Sci in physics, and it shouldn't discourage you from continuing to read this series, because it is possible to use these things without really understanding them. Think of it like a car--you use a car every day, you can make it do what you want, but almost everything under the hood is a mystery. That doesn't mean you can't use it to get you where you want to go! These operations also get easier with physical examples, which we will get to in coming posts.

There are two types of vector multiplication. Each has its own uses and peculiarities. This section is going to cover what they are, their peculiarities, and how to do them. They pop up repeatedly in physics, so we'll discover their myriad uses along the way. Let's bring back our two arbitrary vectors from the last post 
$$\vec{v} = a \hat{x} + b \hat{y} + c \hat{z}$$
$$\vec{w} = d \hat{x} + e \hat{y} + f \hat{z}$$
and see what weird things we can do with them!

The first type of multiplication is called the "dot product" (remember that "product" is whatever results from a multiplication), and we get the new symbol \(\cdot \) to indicate that we are combining two vectors using this method. It is fairly straightforward: you multiply the x-hat components together, the y-hat components together, the z-hat components together, and then you add up the results.
$$\vec{v} \cdot \vec{w} = (a \hat{x} + b \hat{y} + c \hat{z})\cdot(d \hat{x} + e \hat{y} + f \hat{z})$$
$$\hspace{15 pt} = (a*d) + (b*e) + (c*f) = ad+be+cf$$
Interestingly, a dot product of two vectors does not yield a new vector with magnitude and direction, but rather a "scalar" which just has magnitude. This little quirk becomes very important when we get to electromagnetism. It should also be noted that the dot product is commutative, which is to say that 
$$ \vec{v} \cdot \vec{w} = \vec{w} \cdot \vec{v} $$
The same cannot be said of the other type of vector multiplication, which we will get to shortly. 
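For readers who like to poke at things in code, here is the dot product spelled out in Python (an optional aside; the plain tuples are my own stand-in for the \(\hat{x}\), \(\hat{y}\), \(\hat{z}\) components):

```python
# Component form of the dot product for two 3-vectors,
# represented as plain (x, y, z) tuples.

def dot(v, w):
    # ad + be + cf
    return v[0] * w[0] + v[1] * w[1] + v[2] * w[2]

v = (1, 2, 3)
w = (4, -1, 2)

assert dot(v, w) == 1 * 4 + 2 * (-1) + 3 * 2   # = 8, a plain number (a scalar)
assert dot(v, w) == dot(w, v)                   # commutative, as claimed
```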

So, what does a dot product tell you? Speaking geometrically, it gives you the length of the projection of one vector onto another, multiplied by the length of the vector being projected onto. That's about as clear as mud, so let's look at some pictures. Here we have two vectors, \(\vec{A}\) (red), and \( \vec{B}\) (green), with an angle \( \theta \) between them.

If we draw a line from the end of \(\vec{A}\) to the point on \(\vec{B}\) where that line meets \(\vec{B}\) at a right angle,

we now have a right triangle. And we know from two weeks ago in the trigonometric section what to do with right triangles. We are given the angle, so by using the cosine function we can find the length of the projection of \(\vec{A}\) onto \(\vec{B}\), kind of like finding the length of your shadow. Let's label that projection \(A_B\) since it's the shadow of \(\vec{A}\) on \( \vec{B}\). So
$$ A_B = |\vec{A}| \cos{(\theta)} $$
with the vertical lines around \(\vec{A}\) indicating that we are using the total length, the magnitude, of \(\vec{A}\) and not the vector form. The direction information is not helpful when dealing with triangles. So now we have the length of the projection of \(\vec{A}\) onto \(\vec{B}\). Assuming we have the length of \(\vec{B}\), we can now find the geometric value of the dot product
$$ \vec{A} \cdot \vec{B} = |\vec{A}| \cos{(\theta)} |\vec{B}|$$
or, more prettily and more commonly,
$$ \vec{A} \cdot \vec{B} = |\vec{A}|  |\vec{B}|\cos{(\theta)}$$
This is another way to calculate the dot product, and is handy if you have been given magnitudes and angles, and not the component form for your vectors. 

But what does this mean, I can imagine you asking. Recall the idea of orthogonality from the previous section. The dot product allows you to relate two vectors based on the degree to which they are orthogonal to each other. If two vectors are perfectly orthogonal, the angle between them is \(90^{\circ}\), there is no projection, no 'shadow' of one vector onto the other, the cosine is zero, and the dot product is zero. The vectors are completely unrelated to one another, and they have no multiplicative interaction. If, on the other hand, the angle between them is \( 0^{\circ}\), then the cosine between them is 1 and they are parallel. They are both going in the same direction and can have the largest multiplicative effect on one another. (If you happen to dot a vector with itself, the angle will be zero, the magnitudes will be identical and you will get the square of the magnitudes. This is in fact how the magnitude of a vector is defined--the square root of the vector dotted with itself.) For any angle in between the effect is proportionately diminished. This will be easier to see when we can give some physical examples. 
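The orthogonal/parallel behavior described above is easy to see numerically, too. This optional Python sketch uses the \(|\vec{A}||\vec{B}|\cos{(\theta)}\) form to recover the angle between two vectors:

```python
import math

def dot(v, w):
    return sum(vi * wi for vi, wi in zip(v, w))

def magnitude(v):
    # |v| = sqrt(v . v), the definition noted above
    return math.sqrt(dot(v, v))

# Perpendicular vectors: no "shadow", so the dot product is zero
assert dot((1, 0, 0), (0, 5, 0)) == 0

# Parallel vectors: cos(0) = 1, so v . w = |v| |w|
v = (2.0, 0.0, 0.0)
w = (3.0, 0.0, 0.0)
assert math.isclose(dot(v, w), magnitude(v) * magnitude(w))

# In between, v . w = |v| |w| cos(theta) lets us recover the angle
a = (1.0, 1.0, 0.0)
b = (1.0, 0.0, 0.0)
theta = math.acos(dot(a, b) / (magnitude(a) * magnitude(b)))
assert math.isclose(theta, math.pi / 4)   # 45 degrees
```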

The second type of vector multiplication is the "cross product", and the \( \times \) symbol which you probably learned to use for regular old multiplication back in elementary school gets reserved for this particular operation from here on in. Regular multiplication is usually indicated either by abutting parentheses, by an asterisk, or in the case of variable/coefficient terms just writing them next to each other without a space, as in the dot product example above. However, it is not generally as straightforward a calculation as the dot product, because the result of a cross product is still a vector. 

In order to calculate the cross product from scratch, as it were, we need to borrow a tool from linear algebra, namely the determinant*. The determinant is a way to arrange vectors so that you can easily calculate the cross product, no matter the size of your vectors. It is basically an organizational tool.  Once again, let's use the general vectors \(\vec{v}\), \(\vec{w}\), and calculate the cross product of \( \vec{v} \times \vec{w}\). 
$$ \vec{v} \times \vec{w} =\begin{vmatrix}\hat{x}& \hat{y} & \hat{z} \\ a & b & c \\ d & e & f  \end{vmatrix}$$
First, let's dissect what this thing is, line by line. 

 The first row is a label row of sorts. They label each column and they will help label the results. All the \(\hat{x}\) components go in the column under the \(\hat{x}\); if there is no \(\hat{x}\) component to a particular vector, that column gets a 0 for an entry for that vector's row. The same goes for the rest of the directions, namely \( \hat{y}\) and \(\hat{z}\).

 The second row is where the elements from the first vector, in this case \(\vec{v}\) are placed in their respective columns. Remember, if a vector is lacking an element, it is entered as a zero; the column is not deleted entirely, even if all it contains is the label and zeros.

The third row is treated in the same manner as the second row, except that it contains the second vector, in this case \(\vec{w}\). It's rather like filling out a spreadsheet. 

Now what do we do with this thing? 

You first calculate the \(\hat{x}\) component. To do this, you imagine blocking out everything that shares the \(\hat{x}\) column and row, leaving you with a square of components that are not the \(\hat{x}\) component.

With those remaining 4 elements, you multiply the diagonal elements, and subtract the lower left/upper right pairing from the upper left/lower right pairing. So the results of this step are 
$$\hat{x} (bf-ec) $$
Note that the \(\hat{x}\) component of the product contains every element except the \(\hat{x}\) components of the  original vectors. Cool, right?

The second step is a little strange, because by blocking out everything in the \(\hat{y}\) element's column and row, we get a kind of split square, or two rectangles. 

What this means is that this result gets subtracted in the final product. You multiply the elements from the two rectangles as though they were one square, so this component of the product gives us
$$-\hat{y}(af - cd)$$
So far, so good. 

The last component is almost identical to the first. 

And you find the results the same way you did for the \(\hat{x}\) portion of the product. So the final bit is
$$\hat{z}(ae-db)$$
Weird, but not too horribly complicated. So the final result is
$$\vec{v} \times \vec{w} = \hat{x} (bf-ec) -\hat{y}(af - cd) + \hat{z}(ae-db) $$
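If you'd rather not draw the blocks by hand, here is that expansion coded up in Python (optional, and the tuple representation is my own convenience, not the post's notation):

```python
# Cross product from the determinant expansion:
# x-hat(bf - ec) - y-hat(af - cd) + z-hat(ae - db)

def cross(v, w):
    a, b, c = v
    d, e, f = w
    return (b * f - e * c,
            -(a * f - c * d),
            a * e - d * b)

# A classic check: x-hat crossed with y-hat gives z-hat
assert cross((1, 0, 0), (0, 1, 0)) == (0, 0, 1)

# A messier pair, checked against a hand computation
assert cross((1, 2, 3), (4, 5, 6)) == (-3, 6, -3)
```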

Some people are able to simply memorize the result given above, and don't bother to use the determinant. I am not one of those people gifted in memorizing formulae.  It's easier for me to remember a compact method or tool than to memorize lines of elements. 

It should be noted that the cross product is not commutative--the order in which things are multiplied matters, unlike the dot product. That is to say, \( \vec{v} \times \vec{w} \neq \vec{w} \times \vec{v}\). It is however anticommutative, which means that reversing the order negates the result: \( \vec{v} \times \vec{w} = - \vec{w} \times \vec{v}\).

So, what does the cross product give you? This is a little easier put than the dot product. The cross product calculates the area of the parallelogram whose sides are defined by the two vectors. 
So what's all that vector information doing? Well, hold on to your socks, it turns out that area is a vector quantity. Yep, and the direction of that area is normal to the surface which the area is a measure of. So if you can imagine standing on that parallelogram, the direction straight out of the top of your head is the direction that that area points! This also makes clearer an important point about the cross product, which is that the resulting vector from a cross product is necessarily orthogonal to its two parent vectors. The two vectors must lie in a plane to form a parallelogram, and the normal to that parallelogram must be normal to the two vectors defining it.

The fact that the cross product calculates the area of a parallelogram leads us to our last point. What if, for some reason, you don't need to know which way the area is pointing? You just need the magnitude of the result, not the whole thing. Well, if you remember your geometry, you might recall that the area of a parallelogram can be found by multiplying the lengths of the sides by the sine of the angle between them. The same formula gives the magnitude of the cross product. Assuming you have the magnitudes of each vector and the angle between them, you can find the magnitude of the cross product, at the cost of the direction information. 
$$|\vec{v} \times \vec{w} | = |\vec{v}| |\vec{w}| \sin{(\theta)}$$
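Here's a numerical check of that magnitude formula, using two perpendicular vectors in the x-y plane (another Python sketch; `norm` is my own helper for a vector's length):

```python
import math

def cross(v, w):
    # Cross product by components, as derived above.
    a, b, c = v
    d, e, f = w
    return (b * f - e * c, -(a * f - c * d), a * e - d * b)

def norm(v):
    # Magnitude (length) of a vector.
    return math.sqrt(sum(vi * vi for vi in v))

# Sides of length 3 and 2 at 90 degrees: the parallelogram (a rectangle
# here) has area 3 * 2 = 6.
v = (3.0, 0.0, 0.0)
w = (0.0, 2.0, 0.0)
lhs = norm(cross(v, w))
rhs = norm(v) * norm(w) * math.sin(math.pi / 2)
print(lhs, rhs)  # both come out to 6.0
```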

That wraps up what you need to know from vectors. I know it may seem like a lot, but remember that it's a tool in our tool kit for physics. We will be using these tools frequently, and like any skill it becomes easier with use, and you get to build more incredible things the better you become!

As always, I hope that I have explained things clearly. If I haven't, please let me know in the comments, and I'll do my best to clarify!

*DH objected to this section, because what physicists call a determinant is technically a 'formal determinant', as it has the right form but does not adhere to the strict definition used by mathematicians. If you should show this to a mathematician, they will twitch, and possibly rant about physicists. This is the normal reaction of mathematicians to physicist notation. 

However, the math editor of this blog, sirluke777, objects to DH's objection, and says it's perfectly fine. His background is math, physics and chemistry, so make of that what you will. If there is an outcome to this math geek debate, I will update here. 

Wednesday, July 16, 2014

Almond Biscotti (Low Carb, Gluten Free)

I love almond biscotti. I learned to make it when I was young (like, middle school?) and somehow never got cut on grater boxes. I also haven't had one in about 2 years, because traditionally they aren't very low carb.

But I have recently discovered the power of XANTHAN GUM. And yes, it sounds like a space alien, but it's 'all natural' and more importantly it acts like the protein/binder gluten, which DH can't have and which isn't found naturally in any of my low carb flours, especially not my preferred one, almond flour.

The first step is to make almond paste that isn't full of sugar. Fortunately, if you have almond flour, egg whites, and your preferred granulated sweetener (I like Splenda, because I don't have to convert the volume measurement, and erythritol makes my tongue break out, so goodbye Truvia) you can make almond paste! Just throw equal quantities of almond flour and sweetener into a food processor, pulse to combine, then add a couple of egg whites. Start with one and keep adding until it is roughly the consistency of play dough. Voila! Almond paste. If you want marzipan, add more sweetener. A nice extra touch is to add about 1/2 tsp of almond extract per cup of almond flour.

After that, it follows standard biscotti procedure. How many it makes depends on how you cut them--if you like your biscotti thick, there will be fewer. At about a 3/4 inch slice, I got about 18 large biscotti. They stay slightly moist on the inside. You could leave them in a warm oven, or slice them thinner, if you want them fully crisp.


 1 batch almond paste (1 cup almond flour, '1 cup' sugar sub, 2 egg whites, 1/2 tsp almond extract)
 4 tablespoons cold butter, cut into small pieces

 1 3/4 cup almond flour + 1 1/2 tsp xanthan gum
 1/2 cup sweetener (3 tbsp honey also works, but raises carb count)
 1/2 tsp baking powder
 heavy pinch salt (to taste)
 2 eggs
 1/2 tsp vanilla

Pulse almond paste and butter together in food processor (or stand mixer). Add almond flour, xanthan gum, sweetener, baking powder and salt. Pulse/mix until well blended. Add eggs and vanilla and pulse/mix until mixture is uniform. Form mixture into a large loaf measuring about 6 inches by 18 inches and 1 inch high for large biscotti, or two loaves about 4 inches by 12 inches by an inch for small biscotti, on a baking sheet lined with a silpat or parchment paper. Bake at 350 F about 30 minutes. It should be lightly golden and springy to the touch. Gently transfer to a cutting board and let cool 5 minutes. Use a large, sharp knife to cut into even slices, 1/2 inch to 1 inch thick. Lay them cut side down on the baking sheet, bake another 10 minutes. Flip slices gently, and bake another 10 minutes. Cool on tray, then store in a dry place. May soften over time; if that happens, place in a warm oven (even a toaster oven works!) for 10 minutes, and the crunch should be restored.

Monday, July 14, 2014

Basic Physics: Part 0, Section 2: Vectors and Coordinate systems

In the previous sections, I covered some basic algebra topics and necessary trig functions.

In this section, I'm going to lay out some basics of coordinate systems and vectors. I'm pairing these concepts because vectors without coordinate systems are a little esoteric for this series, and coordinate systems are necessary, but easily dealt with, at least compared to things like trig functions.

These two concepts are needed because when we talk about a problem and set out to solve it, we need a way to describe where things are, where they are going and how they are getting there. The combination of vectors and coordinate systems allows us to know exactly what we are referring to and what its relation is to anything else that might be relevant to the problem. Without this tool, problems in more than one dimension can easily become a hopeless jumble.

Let's start with coordinate systems. There are many possible coordinate systems, of which 11 are commonly used, and only 1 of which we need right now. That coordinate system would be the Cartesian coordinate system, apocryphally realized by M. Rene Descartes (he of "I think therefore I am" fame) as he lay in bed watching a fly buzzing above him. It was also realized by M. Fermat, though he failed to publish it. If you ever had to plot things by hand in a math class, you have used the two-dimensional Cartesian coordinate system! The Cartesian coordinate system can be thought of as a grid system in 3 dimensions that lets you specify a location based on 3 numbers, one for each dimension. You can think of it as giving someone a latitude, longitude and altitude: you have given them all the information they need to locate a particular spot on planet earth. (I will officially note that the earth is NOT a Cartesian system, since the lines of longitude are not parallel but intersect. But for small distances, say NYC, it is a decent approximation.)

Formally, a location in this coordinate system is the intersection of three orthogonal planes. Orthogonal, for our purposes, means that the lines/planes intersect at \( 90^{\circ}\) to each other. Think of the walls in your house. Your walls (hopefully!) intersect your floors at right angles, and most of your walls will intersect each other at right angles, unless you have a very interesting house. So your walls are orthogonal to each other, and they are all orthogonal to the floor.

An example is probably the easiest way to show this. Let's start with a basic three-dimensional (3D) cartesian coordinate system:
In the picture above, I have drawn the same coordinate system from two slightly different perspectives. In the top one, you are staring down the barrel, so to speak, of the z-axis, looking at the x-y plane straight on. In the bottom one, the picture has been rotated \(45^{\circ}\) about the y-axis so you can see along the z-axis as well. The bottom view becomes very useful if you are talking about things in 3D, while the top one is fine if you are only worried about two dimensions.

Another word about terminology and notation. What is an axis, and why are those letters wearing hats? As with a lot of math-stuff, it comes down to the dual needs for conciseness and precision. Let's start with the hats. When you draw a coordinate system for a problem and you label the axes, you are defining your directions. It's as if you are creating a mini universe and saying "this is East/West, this is North/South and this is Up/Down". But rather than label things in a poetic Victorian manner as "easterly direction", mathematicians and their ilk like to label things with letters. So "easterly direction" becomes "x-direction", but that's still too wordy. So 'direction' becomes 'axis', and that can get further shortened with vector notation as \( \hat{x} \), said "x-hat". So an axis defines a direction of your coordinate system, but it also serves as a point of reference, much like the equator, the Greenwich meridian, and sea level serve as reference points for finding places on the earth. So if you are on the x-axis, you are not moving in a y-direction or a z-direction. If you want to give a location in the coordinate system, you can notate it either as \( \left\langle a, b, c \right\rangle \) or you can use vector notation, to get a little ahead of ourselves: \( a \hat{x} + b \hat{y} + c \hat{z} \). The latter notation is preferable simply because it is more flexible, as we shall see.

Getting back to our example. Let's say we want to find a point \( \left\langle 3, 2, 2 \right\rangle\) (\( 3 \hat{x} + 2 \hat{y} + 2 \hat{z} \) ). For the moment we don't care what the units are. We start by locating the \( x = 3\) plane, that is, the plane that contains every point of the form \( \left\langle 3, b, c \right\rangle \) where \( b \) and \(c\) range over all real numbers. Then our diagram looks like this

with the red dot marking the point where the plane intersects the x-axis in the bottom view, since it's a little hard to see. Next, we locate the \( y = 2\) plane.

The blue dot marks its intersection with the y-axis in the bottom diagram because, again, it's hard to tell. It's much easier to see in the top image, but there's a reason why the bottom diagram is actually preferable in some ways. This is most easily seen when we try to add in the last piece, the \( z = 2\) plane, to give us our three-plane intersection.
Amazing what you can do with a basic drawing program and a little insanity.
The top image doesn't really allow us to visualize that last necessary dimension. You can mentally add it, but you can't draw it into the top one. In the bottom one, you can see the last plane and pinpoint their intersection (marked with a black dot here). 

That's pretty much all there is to coordinate systems. They let you pick a frame of reference so you can locate things in a mini-universe for the purpose of problem solving. What I find particularly neat is that you can place your coordinate system anywhere you like and the problem will still be solvable. It may be easier to solve from a computational standpoint if you center it nicely, but you don't have to. Why this is the case is something that I'll get into more when we start doing physics properly. 

Now, on to vectors. If a coordinate system gives you a frame of reference, vectors are what let you move around in that frame of reference and deal with more than just static problems. Now, what they are precisely requires linear algebra and is way outside the scope of this series, so we are going to stick to just definitions and not get into the nitty gritty. So, here goes.
A vector pairs a quantity with the direction that quantity is located in, moving in, or pointing to. A vector has both "magnitude" (quantity) and "direction". So long as you can describe a quantity as having these two qualities, you can express it as a vector*. We've already shown how we can describe position as a vector. You can also describe velocity as a vector. "He's going 80 miles per hour" gives you a speed (a magnitude). "He's going 80 miles per hour due north" gives you a magnitude and a direction. We'll get more deeply into which physical quantities can be described using vectors in the first section of Part 1. 

For now, let's just work with two arbitrary vectors and see what we can do with them. As discussed in the algebra post, we'll use letters to stand in for numbers that we can plug in later. 
$$\vec{v} = a \hat{x} + b \hat{y} + c \hat{z}$$
$$\vec{w} = d \hat{x} + e \hat{y} + f \hat{z}$$
The little arrow above \(v\) and \(w\) indicates that they are vectors. In math everything has its own shorthand because you never know when you will want to deal with something in its entirety, or just don't want to write out the whole thing for the umpteenth time. 

So, what can we do to these things? Well, we can add them. The trick is that you can only combine things attached to like 'hats'. So you combine all the x-hat components, all the y-hat components and all the z-hat components, but you can't combine x-hat components with non-x-hat components. So 
$$\vec{v} + \vec{w} = a \hat{x} + b \hat{y} + c \hat{z}+d \hat{x} + e \hat{y} + f \hat{z}$$
$$\hspace{10 pt} = (a+d) \hat{x} + (b+e) \hat{y} + (c+f) \hat{z}$$
Subtraction works the same way:
$$\vec{v} - \vec{w} = a \hat{x} + b \hat{y} + c \hat{z}-d \hat{x} - e \hat{y} - f \hat{z}$$
$$\hspace{10 pt} = (a-d) \hat{x} + (b-e) \hat{y} + (c-f) \hat{z}$$
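If it helps to see the 'combine like hats' rule in action, here's a small Python sketch (the tuple positions stand in for the x-hat, y-hat and z-hat components; the helper names are mine):

```python
def vec_add(v, w):
    # Add matching components: x-hat with x-hat, y-hat with y-hat, etc.
    return tuple(vi + wi for vi, wi in zip(v, w))

def vec_sub(v, w):
    # Subtraction works the same way, component by component.
    return tuple(vi - wi for vi, wi in zip(v, w))

v = (1, 2, 3)  # a x-hat + b y-hat + c z-hat
w = (4, 5, 6)  # d x-hat + e y-hat + f z-hat
print(vec_add(v, w))  # -> (5, 7, 9)
print(vec_sub(v, w))  # -> (-3, -3, -3)
```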

At this point you are probably wondering about multiplication and division, since addition and subtraction have been relatively straightforward. The answer is that there are two types of multiplication for vectors, and no valid division. Why this is starts getting into linear algebra and "outside the scope of this course". So I'm going to ask you to trust me on this one, because it's absolutely true even if I can't show you right now why it's true. The two multiplications are also rather more involved than vector addition/subtraction, so I am going to move them to their own post so we can really take our time with them. 

I hope that this all was clear. If it wasn't, please let me know in the comments!

*There are also a few things that we'll get to over the course of this series that you wouldn't think you could describe as vectors, but they behave identically to the ones we deal with here. 

Sunday, July 6, 2014

Shifting Cooking Gears

Cooking has always been about experimenting for me. Going by the recipe is fine for some things, like candy making, or if I know I enjoy this person's particular formulation of the dish, but by and large I like to tweak recipes. Non-baking recipes I'll usually just make up as I go along. Baking recipes are more chemistry dependent and therefore harder to do on the fly.

Lately though, I feel like I am having to reinvent the wheel, knowing what a wheel looks like but having to make it with very limited and somewhat unsuitable materials. It's exhausting trying to do that for every dinner.

Lately I've been having PCOS flare ups, partly just because and partly because there has been some stress in my life, so I've had to switch back to a stricter low-carb/low-GI diet. This would be annoying but par for the course, if we hadn't started a gluten-free diet for DH. He's had 'stomach problems' all his life, which I have been trying to solve for the 3 years we've been married and I've been in charge of procuring his food. Gluten-free was the last on a long list of things we've tried, and so far seems to be the most successful. We'll look into having proper testing done at some point, but since he just took a new job in a new city, the timing is not right for finding a specialist in our current area.

So in short, I am facing the challenge of cooking both low-carb/low-GI (LCLGI) and gluten-free. Lots of LCLGI food is gluten-free because if you aren't using any grain-flours you aren't going to be including gluten. It's also rarely recognizable as analogous to its carb-loaded counterparts, and to a certain extent just requires recognizing that there is no substitute for pasta or bread. Gluten-free foods, of which there are TONS on the market right now all nicely labeled, are rarely LCLGI because they are made with rice starch, tapioca starch, potato starch, etc. Pretty much everything I can't eat. Thus I am faced with the choice to make two different dinners, or to try and find food that lies in the overlap that we both find palatable.

Of course, some things don't really change. Meat is gluten-free and low-carb. Vegetables, pace potatoes, ditto. But there is something so fundamental to having some kind of starchy thing, and that's mostly where the problem lies. DH can have rice, but I can't. There are both low-GI and gluten-free pastas on the market, but of course they occupy opposite ends of the spectrum. I can have rye or spelt bread in small amounts, but he can't. I can make risotto with rice for him and risotto with barley for me, but that seems absurd. The best LCLGI and gluten-free recipes feature coconut flour, which has a noticeable taste for me that I don't always want. Most of the recipes I've come up with use almond flour which is unavoidably gritty, or oat flour which is gritty and whole-wheat tasting unless you really work to hide it.

Cookbooks are typically one or the other, and if they are both, they are typically one of the crazier diet fads, like paleo. While that is the closest to what we are eating, I just can't say we are going paleo. The whole diet is based on bad or non-existent science, cheese is something I rely on, and I can't get over the absurdity whenever I see a paleo recipe call for things like bananas or Brussels sprouts. Those yellow bananas you get in the grocery store have existed for less than 200 years, and look nothing like a paleolithic banana. Brussels sprouts only popped into existence in the 1300s. Coconut flour also did not exist in paleolithic times.

In short, food posts are probably going to be a little less "here's a recipe I made up last week" and more musing on what works and doesn't work as I try to reformulate, replace and otherwise revamp my repertoire of foods in the coming months. A journal of successes and failures, so I hopefully don't have to repeat the latter too often. They will also most likely be shorter interludes as I work on my Basic Physics series. And if you happen to follow me on Twitter (@PhysicsGal1701), now you know what all the food posting is about.