This time I'm going a little slower on the posts because I've just hit one of those crazy phases at work. As usual, there are not enough hours in the day :-)
After implementing the initial algorithm in Java, I was asked to implement the same algorithm in C and FORTRAN. The objective of those implementations is to compare the speed of the algorithm across the different languages.
This algorithm is very CPU intensive. I have noticed that it does not benefit much from parallel processing because it is basically accumulating operations. I could not see any way to parallelize it, because each iteration depends on values generated by the previous one.
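To illustrate what I mean by that kind of dependence, here is a minimal Java sketch (the formula is made up, not the actual algorithm): each iteration needs the accumulator value produced by the one before it, so the work cannot be handed out to separate threads without changing the result.

```java
public class SequentialAccumulation {

    // Each term depends on the previous accumulator value, so this
    // loop-carried dependence prevents splitting the loop across threads.
    static double accumulate(int steps) {
        double acc = 1.0;
        for (int i = 1; i <= steps; i++) {
            acc = acc + acc / i; // the next value needs the current one
        }
        return acc;
    }

    public static void main(String[] args) {
        System.out.println(accumulate(10)); // prints 11.0
    }
}
```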
At this point, my expectation is that Java and C will have about the same processing speed. I have run a few comparisons between these languages in the past (good old undergraduate days :-) ).
As one might know, Java compiles to bytecode and relies on a JIT compiler at runtime. Based on previous experience, the only speed issue I have ever found with Java was when the Garbage Collector got triggered in the middle of some processing. After disabling the Garbage Collector and ensuring that enough memory was available for the processing, the speed was pretty much the same as the C/C++ code.
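For reference, on a modern JVM (11+) that "no GC" setup can be reproduced with flags like the ones below. The heap sizes and the class name are just illustrative; the Epsilon collector is a real but experimental no-op GC, so the run fails if the heap fills up.

```shell
# Pre-size the heap so no resizing happens mid-run, and use the no-op
# Epsilon collector so the GC never interrupts the computation.
# "MyAlgorithm" is a placeholder for the actual main class.
java -Xms4g -Xmx4g \
     -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC \
     MyAlgorithm
```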
I have never been a big FORTRAN fan, but it seems that it is still widely used by the academic community. I will implement (and learn) this in FORTRAN and compare the results as well.
Anyway, what I found really interesting during the algorithm discussions was when we talked about other methods of solving the Radiative Transfer Equation. We got into a discussion about mathematical methods versus simulation methods. The mathematical methods attack the equation directly, applying analytical techniques to solve it. The simulation methods use equations to build a simulated test field through which simulated photons actually cross the simulated liquid.
I do not think I will have time to implement this other set of methods during the course, but I do believe it would be very nice to tell you a little bit about how they work.
In general, the simulation methods use computers to simulate the path of each photon across the liquid. They use equations to define a set of behaviors representing the same situations the photon would find in the real liquid. Those behaviors are linked together as if they formed a tunnel: the photon enters the tunnel, is transformed by one equation, and whatever remains of it is used as input to the next equation, and so on.

For each equation in that test tunnel, there are random events that may affect different photons in different ways. Each photon passing through the test tunnel records a new system history.

To calculate the final result of the equation, a statistical analysis consolidates the thousands or millions of recorded histories. This way, the system reaches an acceptable level of accuracy for solving the equation.
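As a rough illustration of the idea, here is a minimal Java sketch of such a simulation: each photon random-walks through a slab of the medium, and the "tunnel" decides at every step whether it escapes, is absorbed, or scatters, until many recorded histories are consolidated into one statistical estimate. All the names, constants, and the simplified physics here are assumptions of mine, not the actual method.

```java
import java.util.Random;

public class PhotonWalk {
    static final double SLAB_DEPTH = 1.0; // optical depth of the medium (invented value)
    static final double ALBEDO = 0.9;     // scattering probability per interaction (invented value)

    // Follow one photon's history; return true if it escapes out the far side.
    static boolean transmitted(Random rng) {
        double depth = 0.0;
        double direction = 1.0; // +1 forward, -1 backward
        while (true) {
            double freePath = -Math.log(rng.nextDouble()); // exponential free-path length
            depth += direction * freePath;
            if (depth >= SLAB_DEPTH) return true;          // escaped through the slab
            if (depth <= 0.0) return false;                // backscattered out the front
            if (rng.nextDouble() > ALBEDO) return false;   // absorbed by the medium
            direction = rng.nextBoolean() ? 1.0 : -1.0;    // crude 1D scattering
        }
    }

    // Consolidate many histories into a statistical estimate of transmittance.
    static double estimateTransmittance(long histories, long seed) {
        Random rng = new Random(seed);
        long escaped = 0;
        for (long i = 0; i < histories; i++) {
            if (transmitted(rng)) escaped++;
        }
        return (double) escaped / histories;
    }

    public static void main(String[] args) {
        System.out.println(estimateTransmittance(100_000, 42L));
    }
}
```

The more histories you run, the smaller the statistical noise in the estimate, which is exactly the accuracy/CPU-time trade-off these methods live on.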
I have always been a big fan of simulations, so you do not need to guess much about which method I'd prefer :)
Just a side note for anyone who may be interested in working in this field. I heard that people use pseudo-random algorithms to simulate the events that interfere with the system. As one may know, pseudo-random generators are not really random. They offer a good appearance of randomness, but they are not truly random. I wonder if it would make sense to try optimizing the histories generated by the system by running a pattern analysis on the stream of pseudo-random numbers.

I'm not even close to being an expert on the simulation methods, so I apologize in advance in case I'm saying something completely wrong. I mean, the pseudo-random numbers could easily be pre-generated by using the same generator seed. If the numbers were used as actual values to add/sub/mul/div in the equations, this would not help much. However, if the pseudo-random numbers were used to make decisions, those decisions could be predicted by analyzing the pseudo-random stream of numbers. Lots of processing could then be saved by knowing in advance at which stage the photon would be lost, for example.
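A tiny Java sketch of the point about seeds: two generators created with the same seed produce exactly the same stream, so every decision driven by that stream is knowable in advance. The 0.9 threshold is just an invented example of a "scatter or absorb" decision.

```java
import java.util.Random;

public class SeedReplay {

    // Record the first n "scatter or absorb" decisions a seeded generator makes.
    static boolean[] decisions(long seed, int n) {
        Random rng = new Random(seed);
        boolean[] out = new boolean[n];
        for (int i = 0; i < n; i++) {
            out[i] = rng.nextDouble() < 0.9; // hypothetical decision threshold
        }
        return out;
    }

    public static void main(String[] args) {
        // Two runs with the same seed take exactly the same decisions,
        // so the whole decision stream could be analyzed ahead of time.
        boolean same = java.util.Arrays.equals(decisions(7L, 1000), decisions(7L, 1000));
        System.out.println(same); // prints true
    }
}
```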
As my professor said, understanding these methods would require a PhD of its own. Still, based on the little I know about them, I believe this makes sense. That is where I'd focus my research if I were to engage with this subject.
In the next post, I hope to have more news about the comparison of the algorithm implementations across the several languages.