Monday, November 17, 2008

Fortran vs C++ vs Java

Here are the latest results from the lab. I had a few surprises myself. Not trying to make fun of anyone - but I will anyway - Visual C++ was a disappointment. It is amazing how many different scenarios you can get out of this kind of lab testing. Here is what I did:

1) Visual C++ vs. Java - Windows Forms application
Implementing the exact same algorithm in a Windows Forms application, the Visual C++ code ran more than twice as slow as the Java version. I implemented the exact same user interface in both environments, but my guess is that the control code Visual Studio adds to the Forms application drags the performance down badly. Java (120 s) vs. C++ (260 s)

2) Visual C++ vs. Java - command line
The command-line interface unleashes the real power of the C++ language. The game turns completely in favor of C++: the C++ code runs almost twice as fast as the equivalent Java implementation. Java (95 s) vs. C++ (52 s)

3) C++ vs. Java vs. FORTRAN (yes, I did learn FORTRAN :-) )
Here there is another catch. I was not very surprised by my initial results: after translating my C++ code to FORTRAN, it ran ten (REALLY 10) times slower than C++. Then I realized that there is an entire community that loves FORTRAN, so that result could not possibly be right.

The catch I was telling you about is the memory layout of arrays. This algorithm manipulates very large arrays to reach its result. Array layout is something I had never paid attention to before, but it made a huge difference.

Even though arrays look like multi-dimensional entities, we need to remember that computer memory is, in fact, one-dimensional. An array is allocated as a single contiguous string of memory slots. This means that when you access scattered positions of the array, there is effectively a pointer jumping back and forth along that string. If the next position is nearby, the read is really fast; but if you have to read from a part that has been pushed out of the cache, or worse, swapped to disk, it can get veeerrrryy slow.

This detail was exactly what was causing the slowness in my code. The way FORTRAN lays out that string of memory is different from C. For a 2x2 array it looks like this:

In C++ the array is stored row by row (row-major): a[0,0] a[0,1] a[1,0] a[1,1]
In FORTRAN the array is stored column by column (column-major): a[0,0] a[1,0] a[0,1] a[1,1]
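
To make the difference concrete, here is a minimal C++ sketch (an illustration only, not the lab code) of the address arithmetic behind the two layouts: in row-major storage element (i,j) lives at offset i*cols + j, while in column-major storage it lives at offset j*rows + i.

```cpp
#include <cstdio>

int main() {
    // Hypothetical 2x3 matrix viewed as one flat block of memory.
    const int rows = 2, cols = 3;

    for (int i = 0; i < rows; ++i) {
        for (int j = 0; j < cols; ++j) {
            int row_major_offset = i * cols + j;  // C/C++ convention
            int col_major_offset = j * rows + i;  // FORTRAN convention
            std::printf("(%d,%d): row-major slot %d, column-major slot %d\n",
                        i, j, row_major_offset, col_major_offset);
        }
    }
    return 0;
}
```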

Since my program was iterating over the second dimension first, the system was jumping back and forth along that string just to read the very first elements. After modifying the software to swap rows and columns (so the loops walk the array in the order FORTRAN actually stores it), the FORTRAN power came into place.
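
The kind of change involved is easy to reproduce. Here is a small C++ sketch (the matrix size and the timing mechanism are my own choices, not from the original experiment) that sums a large row-major matrix with the two loop orders; walking the array in storage order is dramatically friendlier to the cache.

```cpp
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    const int n = 4000;                                    // illustrative size
    std::vector<double> a(static_cast<size_t>(n) * n, 1.0);

    auto time_sum = [&](bool storage_order) {
        auto t0 = std::chrono::steady_clock::now();
        double sum = 0.0;
        for (int x = 0; x < n; ++x)
            for (int y = 0; y < n; ++y)
                sum += storage_order
                           ? a[x * static_cast<size_t>(n) + y]   // contiguous walk
                           : a[y * static_cast<size_t>(n) + x];  // strided walk
        auto t1 = std::chrono::steady_clock::now();
        std::printf("%s: sum=%.0f, %.0f ms\n",
                    storage_order ? "storage-order loops" : "swapped loops", sum,
                    std::chrono::duration<double, std::milli>(t1 - t0).count());
    };

    time_sum(true);   // loops match the memory layout
    time_sum(false);  // loops fight the memory layout
    return 0;
}
```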

That was my surprise. After this very simple change, the FORTRAN code ended up even faster than the C++ code: Java (95 s), C++ (52 s), FORTRAN (50 s).

As soon as I find somewhere to host the code, I will make the different versions available for download.

-Luciano

Tuesday, November 4, 2008

Mathematical Methods vs Simulation Methods on Radiative Transfer

Hi Folks

This time I'm going a little slower on the posts because I have just gotten into one of those crazy phases at work. As usual, there are not enough hours in the day :-)

After implementing the initial algorithm in Java, I was asked to implement the same algorithm in C and FORTRAN. The objective of those implementations is to compare the speed of the algorithm across the different languages.

This algorithm is very CPU-intensive. I have noticed that it does not benefit much from parallel processing because it is basically a chain of accumulating operations: each cycle depends on values generated by the previous one, so I could not see any way to make it parallel.
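
To show what I mean by a chain of accumulating operations, here is a tiny C++ sketch (the update rule is made up; it is not the actual radiative transfer algorithm). Each iteration needs the value produced by the one before it, so the loop cannot simply be split across threads.

```cpp
#include <cstdio>

int main() {
    // Hypothetical loop-carried dependency: step k needs the result of step k-1,
    // so the iterations have to run in order.
    const int steps = 1000000;
    double state = 1.0;
    for (int k = 1; k <= steps; ++k) {
        state = 0.999 * state + 1.0 / k;  // made-up accumulating update
    }
    std::printf("final accumulated value: %f\n", state);
    return 0;
}
```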

At this point, my expectation is that Java and C will have about the same processing speed. I ran a few comparisons between these languages in the past (good old undergraduate days :-) ).

As you may know, Java compiles to bytecode that the virtual machine then compiles just-in-time at run time. Based on previous experience, the only speed issue I have ever had with Java was when the garbage collector got triggered in the middle of the processing. After keeping the garbage collector out of the way and making sure enough memory was available for the whole run, the speed was pretty much the same as the C/C++ code.

I have never been a big FORTRAN fan, but it seems it is still widely used by the academic community. I will implement (and learn) FORTRAN and compare the results as well.

Anyway, the thing I found really interesting during the algorithm discussions was when we started talking about other methods for solving the Radiative Transfer Equation. We got into a discussion about mathematical methods versus simulation methods. The mathematical methods attack the equation directly, applying analytical or numerical techniques to the equation itself. The simulation methods use equations to build a simulated test field through which simulated photons actually cross the simulated liquid.

I do not think I will have time to implement this other set of methods during the course, but I believe it is worth telling you a little about how they work.

In general, the simulation methods use the computer to simulate the path of each photon across the liquid. They use equations to define a set of behaviors that represent the same situations the photon would find in the real liquid. Those behaviors are chained together as if they formed a tunnel: the photon enters the tunnel, is transformed by one equation, and whatever remains of the photon is used as input to the next equation, and so on.

At each equation in that test tunnel there are random events that may affect different photons in different ways. Each photon passing through the tunnel records a new system history.

To calculate the final result of the equation, a statistical analysis is run to consolidate the thousands or millions of recorded histories. This way, the system reaches an acceptable level of accuracy for solving the equation.
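
As a rough picture of how such a simulation looks in code, here is a toy C++ sketch. It is not a real radiative transfer solver: the absorption and scattering probabilities, the number of layers, and the fixed seed are all invented for illustration. Each photon walks through the layers, a random event decides its fate at every step, and the recorded histories are consolidated into one statistic at the end.

```cpp
#include <cstdio>
#include <random>

int main() {
    const int photons = 1000000;   // number of simulated histories
    const int layers  = 50;        // made-up depth of the simulated liquid
    std::mt19937 rng(42);          // fixed seed so the run is reproducible
    std::uniform_real_distribution<double> u(0.0, 1.0);

    long transmitted = 0;
    for (int p = 0; p < photons; ++p) {
        bool alive = true;
        for (int layer = 0; alive && layer < layers; ++layer) {
            double event = u(rng);       // random event at this step of the "tunnel"
            if (event < 0.01)      alive = false;   // absorbed (invented probability)
            else if (event < 0.02) alive = false;   // scattered out (invented probability)
            // otherwise the photon survives into the next layer
        }
        if (alive) ++transmitted;        // this history ended in transmission
    }

    // Statistical consolidation of all recorded histories.
    std::printf("estimated transmission fraction: %.4f\n",
                static_cast<double>(transmitted) / photons);
    return 0;
}
```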

I have always been a big fan of simulations. You do not need to guess much about which method I would prefer :)

==================================

Just a side note for anyone who may be interested in working in this field. I heard that people use pseudo-random algorithms to simulate the events that interfere with the system. As you may know, pseudo-random generators are not really random. They offer decent randomness, but they are not truly random. I wonder if it would make sense to try optimizing the histories generated by the system by running a pattern analysis on the stream of pseudo-random numbers.

I'm not even close to being an expert on simulation methods, so I apologize in advance if I'm saying something completely wrong. The pseudo-random numbers could easily be pre-generated by reusing the same generator seed. If the numbers were used as actual values to add/subtract/multiply/divide in the equations, this would not help. However, if the pseudo-random numbers are used to take decisions, those decisions could be predicted by analysing the pseudo-random stream ahead of time. That way, a lot of processing could be saved by knowing in advance at which stage a photon would be lost, for example.
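
To illustrate the point about pseudo-randomness (a minimal C++ sketch, not tied to any real simulation code): two generators started from the same seed produce exactly the same stream, so any decision driven by that stream could, in principle, be reproduced or inspected ahead of time. The "photon lost" decision rule below is invented purely for the example.

```cpp
#include <cstdio>
#include <random>

int main() {
    // Two pseudo-random generators with the same seed: identical streams.
    std::mt19937 first(12345);
    std::mt19937 second(12345);

    for (int i = 0; i < 5; ++i) {
        unsigned long a = static_cast<unsigned long>(first());
        unsigned long b = static_cast<unsigned long>(second());
        // Any yes/no decision taken from the stream repeats exactly as well.
        const char* decision = (a % 100 < 2) ? "photon lost" : "photon continues";
        std::printf("draw %d: %lu %lu  identical=%s  decision=%s\n",
                    i, a, b, (a == b ? "yes" : "no"), decision);
    }
    return 0;
}
```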

As my professor said, fully understanding these methods would require a PhD of its own. Based on the little I know about them, I believe that is true. It is where I would focus my research if I were to take up this subject.

In the next post, I hope to have more news about the comparison of the algorithm implementations in the different languages.

-Luciano