Before we begin: I tried to make this as language-independent as possible because I have no idea what language(s) you want to learn. Also, I don't even claim to be a particularly good programmer. This is just what I've figured out is a good basis for programming in certain areas.
A BASIC introduction to programming:
One of the fundamental principles of programming is that you are giving a series of instructions to a machine to be executed.1 In most languages, these instructions are executed serially; the order in which they are written is the order in which they are executed. As an example, let us say that you are getting bored of doing addition by hand. It just takes too long. If you were to write a program to automate this, you would write a series of steps that the computer can understand to produce the result. With addition, this might look something like:
a = 5
b = 6
Answer = a + b
This is a program. What the program is doing is storing the value "5" to the variable "a" and the value "6" to the variable "b." After executing those pieces of code, the computer looks at the next line in the program and sees that it needs to perform the "+" operation on the values of the variables "a" and "b," then set the variable "Answer" equal to the result. As it happens, the "+" operation is exactly the same as the addition operation we wanted to automate, so the computer just adds the variables together and stores the result. We've now automated addition.
At this point, you may be asking "what is a variable?" The answer is actually quite simple, especially if you've ever taken a course on Algebra that had equations like y=x^2. Variables here are almost exactly the same. To put it succinctly, a variable is an area of memory that can hold any piece of data you place in it. The variable will [ideally] "remember" that data until you store something else to it.
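For instance, in Python (most languages with variables behave much the same way):
x = 5          # store 5 in the variable x
print(x)       # prints 5; x "remembered" the value
x = "hello"    # storing something new replaces the old value
print(x)       # prints hello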
Now that we've written our first program, the question arises: "How does the computer know that I wanted it to add two numbers and not, say, subtract them?" This too is an important question, and the answer is surprisingly complex. However, I'll try to explain in the simplest way possible. The first thing is that what we call computers are actually collections of different components that work together to collect and process information. A computer such as a laptop contains many individual pieces: a Central Processing Unit (commonly called a CPU or processor) that performs most of the computations and controls the rest of the components, a screen to communicate information to the user, a screen controller to translate the commands of the CPU into pixels on the screen, different types of memory (RAM, the many types of ROM, a hard drive) to store data for the CPU to access, a "Bus" to transfer information between the CPU and memory, a keyboard and mouse to get input from the user, etc... All of these parts work together in a system called a computer. It needs to be reiterated that the CPU is the main component of the computer. All other parts of a computer in some way work to allow the CPU to perform computations for the user.
The second thing is that, somewhat surprisingly, computers can't actually understand most of what are termed computer languages. Computers really only understand one language,2 called machine code. Machine code is composed of a series of 1's and 0's.3 As one might expect, machine code is difficult for humans to read or write. Here is the same program we wrote earlier, but in machine code for a Renesas SuperH 3 processor.
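(The listing below is a hand-assembled sketch rather than real tool output: I'm assuming the three lines above boil down to two 16-bit "load immediate" instructions and one "add" on the SuperH.)
1110000100000101
1110001000000110
0011001000011100
It looks fun to program in, doesn't it?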
Because even the simplest things are very difficult to do in machine code, programmers invented something called Assembly Language (often called ASM or Assembly) to make programming easier. To create Assembly, each instruction the processor was capable of executing was given a short mnemonic to help programmers remember it. These mnemonics were written down in place of the machine code and run through another program called an Assembler. The Assembler read the Assembly and, for each mnemonic, output the corresponding machine code.
The machine code from above looks like this in Assembly:
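(Again a sketch under the same assumptions; the "!" marks a comment.)
mov #5, r1    ! store 5 in register r1 (our "a")
mov #6, r2    ! store 6 in register r2 (our "b")
add r1, r2    ! add r1 to r2, leaving the result in r2 (our "Answer")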
While Assembly is definitely an improvement over machine code, it's still very difficult to do a lot of "simple" things in it. For example, the Assembly necessary to print out the words "Hello World" on the screen involves a lot of advanced code and tricks that are difficult to program.
After getting frustrated with spending hours doing even the simplest things in Assembly, people started looking for other ways to write code. The first way developed was something called a "Compiler," created by a woman named Grace Hopper in the early 1950s. The Compiler was a lot like an Assembler. It took a file containing code as an input and produced code in another language as output. However, Hopper's Compiler didn't read Assembly. Instead, it read a much higher level language. (Fortran, one of the first widely used high-level languages, was developed along the same lines by John Backus's team at IBM a few years later.) When I say "higher level," I mean that the programmer didn't have to worry about the individual instructions on the computer. Instead, if someone put in a single line like PRINT "Hello World", the screen would display "Hello World". The concept of a Compiler revolutionized programming, and a lot of extremely popular languages use Compilers to produce code for the computer. Examples include Axe, C, C++, Fortran, etc...
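To make the idea concrete, here is a toy compiler sketch in Python (entirely hypothetical, nothing like Hopper's actual system): it reads lines of our little addition language and emits made-up assembly-like mnemonics instead of running anything.
def compile_line(line):
    # Split "target = expression" into its two halves.
    target, expression = [part.strip() for part in line.split("=")]
    if "+" in expression:
        left, right = [part.strip() for part in expression.split("+")]
        return ["LOAD " + left, "ADD " + right, "STORE " + target]
    # A bare number: load it directly.
    return ["LOADI " + expression, "STORE " + target]

program = ["a = 5", "b = 6", "Answer = a + b"]
for line in program:
    for instruction in compile_line(line):
        print(instruction)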
There were a few problems with compilers, though. For one thing, compilers took a long time to operate. You could make a change to one line of code among thousands, but it might take several minutes or hours of compilation before you could test your code. Secondly, the machine code produced wasn't portable between different types of computers. These problems were solved by what are probably the most common types of languages today: interpreted languages. These use a program called an Interpreter, itself written in Assembly or a compiled language, to translate or "interpret" code written in a universal language into machine code. To put it in simpler terms, the Interpreter translates code from a language recognized by all Interpreters for that language into CPU-specific machine code. To run interpreted code on a new system, all you had to have was an Interpreter for the language the code was written in. Also, since interpreted code was not [originally] compiled, you could test changes as soon as you made them. Example languages include TI- and Casio-BASIC, BBC BASIC, Python, Java, Groovy, Lua, etc...4
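For contrast with the compiler sketch above, here is a matching toy interpreter (again hypothetical): instead of emitting instructions for later, it executes each line the moment it reads it, keeping the variables in a dictionary.
def interpret(program):
    variables = {}
    for line in program:
        target, expression = [part.strip() for part in line.split("=")]
        if "+" in expression:
            left, right = [part.strip() for part in expression.split("+")]
            variables[target] = variables[left] + variables[right]
        else:
            variables[target] = int(expression)
    return variables

print(interpret(["a = 5", "b = 6", "Answer = a + b"]))
# {'a': 5, 'b': 6, 'Answer': 11}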
To summarize the relationships between the different types of languages: a compiler takes source code and produces "Compiled Assembly." Almost all modern compilers produce Assembly as an intermediate output for portability and optimization reasons, not raw machine code. This "Compiled Assembly" is then run through a system-specific Assembler, which produces the machine code the CPU actually executes. An Interpreter, by contrast, performs this translation on the fly as the program runs.
To answer the question that provoked this long discussion, the computer knows what to do with your code because there's another program that converts your program to the machine instructions that the computer knows how to execute. The details of how this all is done are numerous enough to fill several college courses and a library's worth of books.
To get back to a more relevant subject, the meat of programming isn't the programming paradigm or language syntax, it's algorithms. The syntax of any language is generally well documented by its designers. Similarly, the point of any programming paradigm is to make some set of tasks easier. These are merely aesthetic aspects of programming. However, underlying all non-trivial programs are algorithms. Indeed, as one may learn, the whole art of programming is solving problems. The logical procedures used to solve these problems are what we call algorithms. More simply, an algorithm is a series of steps that can be taken to solve a particular problem. Earlier, we wrote a program to perform addition for us. That program can easily be translated into an algorithm by observing the steps we took:
Step 1: Load Value1 into variable "a"
Step 2: Load Value2 into variable "b"
Step 3: Add variable "a" to variable "b"
Step 4: Load answer from Step 3 into variable "Answer"
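Those four steps translate almost mechanically into any language. Here's a minimal Python version (the function name and the final print are just my own illustration):
def add_two_numbers(value1, value2):
    a = value1      # Step 1: load Value1 into variable "a"
    b = value2      # Step 2: load Value2 into variable "b"
    answer = a + b  # Steps 3 and 4: add and store the result
    return answer

print(add_two_numbers(5, 6))  # prints 11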
While this may seem obvious, other, more complex pieces of code are most certainly easier to comprehend when stated as an algorithm. Consider some perfectly valid Axe code that, stated step by step, does the following:
Stores the bottom 8 bits of A in HL, pushes HL, takes the value of X mod 3 and puts it in HL, attempts to find the address of the symbol port but resolves to nothing, adds 9, adds $30 to the .data section of the program, reallocates the standard variables so that A corresponds to W, B corresponds to X, etc., pushes HL, gets the address of the var r1*6, puts 2 in HL, adds $23 to the .data section of the program, pops HL, adds the address of the var theta, takes the state of the link port and stores it to the byte pointed to by the short pointed to by the value of A.
The explanation is admittedly still confusing, but at least it's somewhat readable by normal humans.
However, aside from readability, algorithms have another use: code portability. Imagine trying to write the example code given above in another language. It'd be very difficult, because syntax is often language-specific. You couldn't just copy and paste the code. However, the algorithmic description allows you to understand the logical flow of the program. It's then much easier to write the code in a completely different language.
But even more than the previous two reasons, the main application of algorithms is saving effort. So many programs have been written and algorithms discovered that one can often create impressive programs just by arranging implementations of existing algorithms properly. There is little reason to spend hours or days searching for a way to solve a particular problem when there is a good chance someone else has encountered the problem before and solved it. Cryptographic encryption algorithms are a good example. An encryption algorithm takes data and a key as input and produces data that is apparently unrelated to the input as output. When the key is reapplied to the output, the original input is recovered and can easily be read. Good encryption algorithms are notoriously difficult to design, though. There are numerous examples of people attempting it on their own, only to lose millions of dollars when it's discovered that the algorithm doesn't protect data as well as it was thought to. To fix this, many people have spent years designing better and better encryption algorithms. That's a rather dramatic example, but it illustrates the value of commonly known algorithms and the dangers of not building on the work of others to improve your own projects.
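To see the key idea (pun intended) in miniature, here is a toy XOR "cipher" in Python. It is deliberately weak, exactly the kind of homemade scheme that gets broken, but it shows how reapplying the same key undoes the scrambling:
def xor_cipher(data, key):
    # XOR every byte of the data with the corresponding byte of the
    # (repeating) key. XORing twice with the same key restores the input.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"Hello World"
scrambled = xor_cipher(message, b"secret")   # looks unrelated to the input
restored = xor_cipher(scrambled, b"secret")  # the key undoes the scrambling
print(scrambled)  # gibberish bytes
print(restored)   # b'Hello World'
A repeating-key XOR like this can be broken with pencil and paper, which is precisely why you should reach for well-studied algorithms such as AES instead of inventing your own.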
By this point, you should have a basic understanding of the art and science of programming, if not that of any particular language. In order to continue your education, here are some excellent links that I have found useful in the past.
TI-BASIC Developer: The best online resource for TI-BASIC
Lua homepage (documentation and interpreter/development environment downloads)
Python Software Foundation (documentation and downloads)
A large collection of Python tutorials
Ruby in Twenty Minutes
This was the first language I learned. I still don't understand how to really program in it.
Just BASIC homepage
Just BASIC tutorials and help
This isn't much of a beginner's language unless you come from a highly mathematical background. However, the documentation is some of the best I've ever seen for a language and the language is enormously powerful for a lot of mathematical tasks. On the downside, it's extremely expensive, so unless you can get a copy from a school or employer, I wouldn't bother trying it.
The best language of all
Learn it, love it, live it. It's the world's most powerful language when you need information to solve a problem in one of the other languages. Always go to Google first before asking for help.
Important Algorithms and functions to get you started
Even if you don't learn everything there is to know about all of these, it's an excellent idea to learn what they are and when [or when not] to use them.
Sorting algorithms: Quicksort, Heapsort, and Flashsort are generally considered among the fastest (see the quicksort sketch after this list).
Marching cubes algorithm
MD5 hash algorithm
Boolean Logic: Learn this. It's used in almost every programming language out there.
Bitwise Operators: Again, pretty important
Floating point vs Integer arithmetic: Too many programmers don't learn this and screw up as a result. LEARN IT. (See the short demonstration after this list.)
Fourier Transform: One of the most well-researched and important functions in programming.
More root finding
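As promised above, a minimal quicksort sketch in Python (not the fastest possible implementation, but it shows the essential idea: pick a pivot, partition, recurse):
def quicksort(items):
    if len(items) <= 1:
        return items  # zero or one element is already sorted
    pivot = items[len(items) // 2]
    smaller = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    larger = [x for x in items if x > pivot]
    return quicksort(smaller) + equal + quicksort(larger)

print(quicksort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]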
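And the floating point surprise mentioned above, in a few lines of Python (floating point numbers are approximations, while integers are exact):
import math

print(0.1 + 0.2)         # 0.30000000000000004, not 0.3
print(0.1 + 0.2 == 0.3)  # False! Never compare floats with ==
print(1 + 2 == 3)        # True; integer arithmetic is exact
print(math.isclose(0.1 + 0.2, 0.3))  # True; compare with a tolerance instead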
1 This isn't quite true when dealing with fifth generation languages such as Prolog, but the differences are beyond the scope of this introduction.
2 Machine code is actually a family of languages and most modern processors such as x86-64 understand several different (but similar!) variants of machine code.
3 As it turns out, different types of memory have different ways of storing this information. A CD may use a bump on a surface to represent a "1" while a hard drive will use the polarity of a tiny ferromagnet to represent that same 1. However, *all* types of memory currently in common use have some way of representing 1's and 0's. Any piece of memory that can contain exactly one "1" or "0" is termed a bit. A bit is the smallest possible unit of memory and can be used to create any larger piece of memory. In most computers, bits are commonly collected in groups of 8, which are called bytes. Bytes are the smallest *addressable* unit of memory in a computer.
4 Java and Groovy are actually hybrid languages that use both compilers and interpreters. The compiler compiles the source code into an Assembly-like language called a byte-code. The byte-code is then executed by an interpreter (or more properly, a Virtual Machine). The details of this are more advanced than this tutorial, though. Lua is a special language that has officially supported byte-code compilers and interpreters, so it's a hybrid of the hybrids.