Titles are overrated

Warning: The entire blog is centered around (dah dah dah!) ME. It's self-serving, self-indulgent, and self-centered. Deal.

Monday, July 12, 2004

Okay, so I'm still working on this damn computer science project. I swear I'll get it done one of these days. Right now, though, the problems I'm having with it occur at a very high level of detail. So, since it runs somewhere around 20 vertices/second, and I have to run it over and over again to test the changes I make, and the smallest model that has problems has almost 25000 vertices, I have a lot of time to sit and surf the internet. So, I've run into something cool. It's both exciting and disappointing, but it's pretty nifty to me. Note: anyone who isn't interested in how calculus and neural networks are related should go elsewhere now.

So, since I've been desperately trying to find topics in computer science that are capable of keeping my interest while this project sinks my passion for the subject to all-time lows, I've been spending some time looking at neural network theory. After delving a bit into the subject, I'm actually kinda disappointed. The mystique is gone. I understand them (more or less) now, and they were a lot cooler when they were just a black box. Anyway, the nitty-gritty is this: A neural network is essentially a set of nodes in a layered graph of sorts. You have some input nodes through which you feed values. These values are passed en masse to each node farther down the line, where all the incoming values are multiplied by their respective weights and added together to form a single value (usually squashed through a nonlinear activation function like a sigmoid), which is then passed on, along with a bunch of other values from other nodes, to nodes farther down the line. The process repeats until you run out of nodes. To train the network, you take a set of data for which everything is known: the input is known, as well as what you would like the output of the network to be. You plug the input values into the network, run it, and look at the output. Then you use what's called the backpropagation algorithm (there are others, but this is the most common and arguably the most effective) to modify the weights so that the network becomes better at recognizing the trained pattern and giving the desired output (and, theoretically, giving close to the same output for similar inputs).
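To make that concrete, here's a minimal sketch in Python of the kind of network I'm describing: two inputs, two hidden nodes, one output, trained with plain backpropagation. The starting weights, the learning rate, and the two training patterns are all made up just for illustration.

```python
import math
import random

random.seed(0)  # reproducible made-up starting weights

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# A tiny 2-input, 2-hidden, 1-output feed-forward net.
# w_h[j][i] connects input i to hidden node j; w_o[j] connects
# hidden node j to the lone output node.
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(2)]

def forward(inputs):
    # Each node: weighted sum of everything coming in, squashed by the sigmoid.
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs))) for ws in w_h]
    output = sigmoid(sum(w * h for w, h in zip(w_o, hidden)))
    return hidden, output

def train_step(inputs, target, rate=0.5):
    # One backpropagation step: nudge every weight downhill on the squared error.
    hidden, output = forward(inputs)
    err = output - target
    delta_o = err * output * (1 - output)   # sigmoid' = y * (1 - y)
    # Hidden deltas must use the *old* output weights.
    delta_h = [delta_o * w_o[j] * hidden[j] * (1 - hidden[j]) for j in range(2)]
    for j in range(2):
        w_o[j] -= rate * delta_o * hidden[j]
        for i in range(2):
            w_h[j][i] -= rate * delta_h[j] * inputs[i]

# Train it to answer 1 for the pattern (0, 1) and 0 for (1, 0).
for _ in range(5000):
    train_step([0, 1], 1.0)
    train_step([1, 0], 0.0)
```

After training, the outputs for the two patterns have pulled apart toward 1 and 0 respectively, which is all "recognizing the trained pattern" really amounts to.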

This is all well and good, until you look at it the way I do. You can, without loss of functionality and with a significant gain in control and precision, convert each output node of the neural network into a function of N variables (the input values). Then, you can calculate the ability of the neural network to recognize similar input patterns by taking the partial derivative of the set of output functions with respect to each input variable. The larger the magnitude of the partial derivative around each trained input, the smaller the network's ability to recognize similar patterns. You can calculate the absolute error involved in the network's ability to recognize trained inputs by simply subtracting the calculated output from the desired output. Furthermore, you can train the function by calculating the partial derivatives of the error function of each output with respect to the WEIGHT that you're modifying. This is what the backpropagation algorithm does, but it numerically approximates the errors and uses a fraction of the error as the approximate partial derivative.
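Here's what I mean, boiled down to a single neuron (the smallest possible "network"), with arbitrary made-up weights and inputs. Treat the output as an explicit function of its inputs and weights, take the exact partial derivatives with ordinary calculus, and check them against numerical finite differences:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One output node as an explicit function of its two inputs.
# The weights are made-up numbers, just for illustration.
w = [0.8, -0.3]

def f(x1, x2):
    return sigmoid(w[0] * x1 + w[1] * x2)

# Exact partial derivatives via the chain rule.  Since
# sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z)):
def df_dx1(x1, x2):    # sensitivity to an input -- pattern recognition
    y = f(x1, x2)
    return w[0] * y * (1 - y)

def df_dw0(x1, x2):    # sensitivity to a weight -- what training needs
    y = f(x1, x2)
    return x1 * y * (1 - y)

# Check both against central finite differences.
h = 1e-6
x1, x2 = 0.5, 1.5

numeric_x = (f(x1 + h, x2) - f(x1 - h, x2)) / (2 * h)
assert abs(numeric_x - df_dx1(x1, x2)) < 1e-8

w[0] += h
plus = f(x1, x2)
w[0] -= 2 * h
minus = f(x1, x2)
w[0] += h
numeric_w = (plus - minus) / (2 * h)
assert abs(numeric_w - df_dw0(x1, x2)) < 1e-8
```

The same chain-rule bookkeeping extends through as many layers as you like; the symbolic partials just pick up one factor per layer.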

Two things: I'm a bit disappointed with the whole idea of neural networks now that I realize they're just intuitive models for creating nonlinear functions for analyzing data. There's nothing special about them, they're just convenient facades to simplify the nitty-gritty. (Note that I'm just talking about feed-forward neural networks. For nets that allow connection loops, the math gets really complicated because you have to take into account the circular dependencies of nodes, and for calculation and training you have to iteratively calculate the value of the system until it converges to some value. I don't have a clue what the math would be, but aside from trying to simplify it into infinite series and use Fourier analysis, I'm not sure how you could do the transformation I'm talking about.) On the other hand, I'm excited because I haven't been able to find notes about anyone doing what I'm talking about -- simplifying the neural network architecture into a collection of nonlinear functions for precise symbolic processing rather than the numerical approximation upon which the network model depends. If I can study stuff like this, I may want to get a master's in computer science after all... On the other hand, if they make me study compilers, operating systems, and databases, I'm calling it quits after the first semester. (c;
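For the loopy case, that iterate-until-it-settles idea looks something like this toy sketch: two nodes feeding each other, relaxed to a fixed point. The weights and the external input are made up, and it only converges this nicely because the sigmoid keeps everything contracted.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Two nodes wired in a loop: a feeds b, and b feeds back into a.
# Hypothetical weights; the point is the relaxation loop, not the numbers.
w_ab, w_ba, w_in = 0.5, -0.7, 1.2
x = 1.0  # external input into node a

a, b = 0.0, 0.0
for _ in range(1000):
    new_a = sigmoid(w_in * x + w_ba * b)
    new_b = sigmoid(w_ab * a)
    if abs(new_a - a) < 1e-9 and abs(new_b - b) < 1e-9:
        a, b = new_a, new_b
        break  # the system has settled to a fixed point
    a, b = new_a, new_b
```

With weights this tame it settles in a handful of iterations; crank the weights up and you can get oscillation instead, which is exactly why the math gets hairy.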

Thanks to anyone who managed to read all the way through. Hope it didn't bore you too much.

2 Comments:

  • At 6:46 AM, Blogger esunasoul said…

    Okay... so... what I'm wondering is what you can DO with all that once it's been created? Make THINKING computers? Or is it on a lower level than that? If you do your master's on that..... will you build me a robot afterwards? ^_~

  • At 1:46 PM, Blogger Dathan said…

    Theoretically, neural networks could be used to create "thinking" computers. However, at the heart of the neural net implementation lies the problem that it's built on a binary system. Hence, at any given point you're either training the network or using its calculations. In order to make a thinking computer, you'd have to use a _very_ complex neural network, and then immerse it in a responsive environment, so it could get constant feedback and learn from _every_ calculation it makes, just like organisms do.

    On a smaller scale, though, scientists have "exactly" recreated the brain of the roundworm as a neural network. The roundworm's brain isn't differentiated like ours into cerebrum, cerebellum, brain stem, etc. It only has 302 neurons, as compared to our billions. I say "exactly" because there's always a little bit of uncertainty inherent in the fact that true brains are electrochemical machines, and an organism's diet can affect the functioning of its brain. For the roundworm, the effects on brain function are minimal, so the neural net approximation is pretty accurate. It's a _long_ step from there to any sort of sophisticated brain recreation. Hell, even goldfish have millions of neurons.


