Thursday, December 31, 2009
It has been a while since my last post. These have been a very busy couple of months :)
In any case, I'd like to wish a great New Year to all of you!
About my last post: iToon 2.0 was submitted, approved, and is now available on the App Store. Still, I can't wait to submit the next release. If you thought that version 2.0 was a major update, you will be surprised by version 2.1.
Before talking about 2.1, let me share an issue with you: Apple has changed the "New Releases" list policy on the App Store and, unfortunately, updates are no longer considered new releases. As a result, I have no efficient way to let everybody know about new iToon versions.
Under the previous policy, iToon would show up on the very first page of the "New Releases" list for each update. Under the new policy, that no longer happens. I trust that you will be happy with the new releases and let your friends know. I really need this word-of-mouth help from you in order to keep the updates going.
At a glance, here are some of the iToon 2.1 features:
- Frames: You will be able to add frames to your picture; take a look at some of them at the end of this post.
- Background Painter: You will be able to overlay pictures and swap their backgrounds.
- Artistic Highlight: You will be able to build dramatic effects with colored themes on top of a black-and-white background.
- In-App Purchases: This will allow you to buy features and content packs directly from the iToon Store.
These are major enhancements that will bring iToon to a new level. 2010 is promising great things.
Best of luck to us all!
Tuesday, November 10, 2009
iToon 2.0 is on its way for approval again. We have changed the items Apple requested, and hopefully they will find no other issues. We expect iToon to be generally available by next week. I will keep you posted.
Wednesday, November 4, 2009
I'm sorry to say that things did not go as planned. Apple just sent me a message saying that iToon 2.0 has been rejected. The reason is that there are icons in my UI that resemble Polaroid features. According to Apple: "it appears to include features that resemble Polaroid photographs".
Along with that, they sent me a set of screenshots:
Remember that iToon 2.0 will be a free upgrade to all previous iToon owners.
Wednesday, October 28, 2009
We will keep you posted. Until then, feel free to take a look at the new iToon 2.0 User Guide available here and check the upcoming features.
Sunday, October 18, 2009
I'm glad to share with you that iToon 2.0 has been sent to Apple for approval. I have added major updates in this release and I really hope you will enjoy it. Here are the major enhancements of the upcoming release:
- iToon Camera - Grid: The camera now features a grid to make picture composition easier. If you are not familiar with the "Rule of Thirds", take a look at this link and you will see how useful the grid is.
- iToon Camera - Digital Zoom: Because I would not like you to spend a lot of money on additional iPhone apps, I have decided to add a digital zoom feature to iToon as well. You will be able to zoom into your pictures up to 5x in real time.
- iToon Sessions: In the new version you will be able to save your unfinished sessions and come back to them at a later time. No need to rush to get the work done in one sitting anymore. You can take the picture, do some editing, close it, and get back to it when you have time.
- iToon User Interface: iToon 2.0 has a completely new user interface. I have minimized the number of items in the work area in order to give more space to what really matters: your pictures.
- This version's user interface is available only in US English.
Because of these changes (especially to the camera), iToon 2.0 will require iPhone OS 3.1 or later.
Here are some early screenshots. Hopefully, the app will be available on the App Store in a couple of weeks.
Sunday, October 11, 2009
10th Assignment: Write a review of the Windowed Fourier Transform. What are the most common windows? What does the Brillouin plane of this transform look like?
The core idea of the Windowed Fourier Transform is the addition of a window function to the traditional Fourier Transform. This time-window function segments the signal, setting to 0 (zero) all values that do not belong to the specified time window. In terms of notation, this is what one gets:
With T as the time window one wants to study (here ω is the angular frequency and w(t) is the window function):

X(ω) = ∫ from -T/2 to +T/2 of x(t) e^(-jωt) dt = ∫ from -∞ to +∞ of w(t) x(t) e^(-jωt) dt

where

w(t) = 1 when |t| <= T/2
w(t) = 0 when |t| > T/2
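Numerically, the windowed transform can be sketched in a few lines. This is just an illustrative approximation (the rectangular-rule integral, the made-up 10 Hz test signal, and the T = 0.5 s boxcar window centered at t = 1 s are my own choices, not part of the assignment):

```python
import numpy as np

def windowed_ft(x, t, w, freqs):
    """Windowed Fourier Transform: X(omega) = integral of w(t) x(t) e^{-j omega t} dt."""
    dt = t[1] - t[0]
    # One rectangular-rule integral per requested angular frequency
    return np.array([np.sum(w * x * np.exp(-1j * om * t)) * dt for om in freqs])

# 10 Hz sinusoid, analyzed inside a T = 0.5 s boxcar window centered at t = 1 s
t = np.arange(0.0, 2.0, 1e-3)
x = np.sin(2 * np.pi * 10 * t)
w = (np.abs(t - 1.0) <= 0.25).astype(float)   # w(t) = 1 inside the window, 0 outside

freqs = 2 * np.pi * np.arange(0, 30)          # angular frequencies for 0..29 Hz
X = windowed_ft(x, t, w, freqs)
print(np.argmax(np.abs(X)))                   # the peak lands at the 10 Hz bin
```

Because the window only spans 0.5 s, the 10 Hz peak is spread over neighboring bins (that is the resolution trade-off discussed above), but the maximum still sits at 10 Hz.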
There are ups and downs to this method. By using time windows instead of the entire signal, a local maximum does not necessarily represent the signal's global maximum. However, assuming that for a specific study knowing only the local frequencies is enough, this can be an interesting and effective way to locate the frequencies of interest inside a time period.
How you choose your time window also matters. Different time windows will have different impacts on your results. Many window functions have been proposed over time, each with its own advantages and disadvantages relative to the others. Here is a list of the most common time-window functions:
Window          Best with (signal types)   Frequency resolution   Amplitude accuracy
Bartlett        Random                     Good                   Fair
Blackman        Random/Mixed               Poor                   Good
Flat top        Sinusoids                  Poor                   Best
Hanning         Random                     Good                   Fair
Hamming         Random                     Good                   Fair
Kaiser-Bessel   Random                     Fair                   Good
None (boxcar)   Transient                  Best                   Poor
Tukey           Random                     Good                   Poor
Welch           Random                     Good                   Fair
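One way to see the resolution/accuracy trade-off from the table numerically is to compute each window's Equivalent Noise Bandwidth (ENBW, in FFT bins): a wider ENBW means poorer frequency resolution. This is a rough sketch using NumPy's built-in windows (the Kaiser beta of 8.6 is just a commonly used, Blackman-like value, not something from the table):

```python
import numpy as np

# Equivalent Noise Bandwidth (in bins): wider ENBW = poorer frequency
# resolution, which is one axis of the trade-off in the table above.
N = 64
windows = {
    "boxcar":   np.ones(N),
    "bartlett": np.bartlett(N),
    "hanning":  np.hanning(N),
    "hamming":  np.hamming(N),
    "blackman": np.blackman(N),
    "kaiser":   np.kaiser(N, 8.6),   # beta = 8.6, roughly Blackman-like
}

enbw = {name: N * np.sum(w**2) / np.sum(w)**2 for name, w in windows.items()}
for name in windows:
    print(f"{name:8s} ENBW = {enbw[name]:.2f} bins")
```

The boxcar comes out at exactly 1.0 bin (best resolution, matching the table), while Hanning sits near 1.5 and Blackman wider still.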
Note: I could not find any information about the Brillouin plane. Any tips would be appreciated.
Monday, September 28, 2009
1st Assignment: Review the text "Wavelets: Seeing the Forest and the Trees", by Dana Mackenzie, 2003.
I have spent a lot of time writing mostly about my iPhone apps. It is time to go back to school and start writing about a few assignments again. I have enrolled in two very interesting courses this term: "Theory of Intelligent Control" and "Introduction to Wavelet Theory".
Stay tuned, because I already have quite a few assignments that I will make available on the blog over the next couple of weeks.
Tuesday, August 11, 2009
Due to i-Toon's great success, I have sped up the implementation of i-Toon 1.1. This is going to be a free upgrade for everyone who already bought i-Toon 1.0.
Here are the main enhancements for i-Toon 1.1:
- A single tap on a balloon rotates it (no longer attached to the center of the picture)
- A double tap on the picture resizes it
- Added a resize slider for handling small balloons
- Resize the picture and balloon with the pinch gesture
- Text is automatically resized to fit the balloon
- Missing balloons are prevented by keeping them inside the picture boundaries
- Saves space on your device (the i-Toon app binary size was reduced by ~30%)
- Prevents saving duplicated cartoons by checking whether there were any modifications before saving
Here are the new User Guides:
i-Toon 1.1 User's Guide
i-Toon 1.1 User's Guide (Portuguese)
This release was submitted to Apple on August 11th. Hopefully it will be available on the App Store in a couple of weeks.
Friday, August 7, 2009
i-Toon is completely localized in English and Portuguese. Check the User Manual here.
I'm glad to share that our new App will be available starting this weekend on the App Store.
As we continue exploring all the different App Store categories, this time we bring you i-Toon. It is a mix of an entertainment and a photography app. We will position it under Photography, but I have to say that I may change its category to Entertainment based on user feedback.
For this tool, I set out to reshape the code of the very first app I designed. Back then, I was going to build a cartoon generator that would automatically create a cartoon from a picture. I ran into several constraints while building such an application, especially because of storage and memory limits.
This time, I have restructured the idea a little bit and made it more interactive. For this first release, here are the main features:
- Acquire an image from your camera or from your picture library;
- Apply a "Border" filter;
- Paint the image with Cartoon Colors;
- Paint the picture with Vivid Colors;
- Add balloon titles;
- Save i-Toon pictures to your Photo Library;
- Send i-Toon pictures via e-mail.
The next release (a free upgrade), with new features, is expected for early September. Feel free to send me a message with your i-Toon wish list. I might very well decide to implement the suggested features if they fit into my planned schedule.
Monday, July 13, 2009
I'm glad to share with you that Apple has finally approved "I-Dig: The Recycling Challenge" and it is available on the App Store.
As our technology matures, we are adding quite a few features to our games. This time the game is available in English and Portuguese. We have also added support for three of the main OS 3.0 features: iPod sound as background music, multiplayer mode, and voice chat.
Right now, I'm brainstorming in order to get ready to start working on the next release. Here is my current wish list:
- Add earthquakes (to mess up the user's tunnels)
- Add at least one big "Boss" to complete each phase
- Shake the iPhone to change the background music
- Improve the visual effects
- Minimize the app's footprint
- Fix any bugs that users report
My window for defining new features should close by the end of this month. Let me know if you have any suggestions.
I hope you enjoy the game!
Thursday, June 18, 2009
I'm glad to share with you that i-Dig is about to be sent to Apple to enter the App Store.
Here is its description and a few screenshots. If you are interested in writing a review or becoming a beta tester of our future apps, let me know.
By this weekend, I intend to post a quick demonstration video of the game on YouTube. Stay tuned, more news to come.
I-Dig: The Recycling Challenge is an ecological adventure. In this first release, I-Dig uses the full potential of iPhone OS 3 and mixes a fun game with an ecological story.
In single-player mode, the objective is to help a little worm collect as much recyclable material as possible. By bringing the cargo to the Recycling Center, the user earns Worm$ that can be used to buy upgrades. If a worm were as tall as a human, each game map would let it dig about 650 feet under the ground. The random map generator can create up to 1,000 different maps, making each new campaign unique. As the user goes deeper, he or she will find garbage and obstacles. Natural obstacles such as gas pockets will slow the worm down while draining its life away. Snakes move around and may fall on you if you dig under them. Be especially careful about spiders: they can follow you around and keep draining your life away, and they will not stop until you terminate them. Use your bombs to terminate spiders and snakes. Do not forget that your ethanol tank may run out of fuel as you dig, so remember to go back and refill it from time to time.
The multiplayer mode is the greatest part! The same rules as the single-player game apply. However, in multiplayer mode you have to deal with a much smarter adversary controlling a second worm. After the multiplayer campaign starts, the users also have the option to start the Voice Chat tool and tease each other while playing. The game keeps track of both users' scores and declares the highest score the winner. However, blowing up your opponent will make you feel better in case you end up with a lower score :-) .
Well, help the worm, learn more about the environment, and have fun!
-BABs 2Go Team
Tuesday, June 2, 2009
A time capsule is used when one generation wants to pass information to another. It is no more than a chest with a bunch of letters, drawings, written messages, pretty much anything that people would like to share with their kids, grandkids, and so on.
The time capsule is fine; however, as usual, I think it is too simplistic. As a good scientist, I could not see something simple working perfectly and just let it be. So I was trying to find other uses for a time capsule. That is what led me to a few strange but accurate thoughts.
OK, the initial idea is sending a time capsule to your relatives in the future. Moving on, I thought, "Why couldn't I send a time capsule to myself?". All right, that concept alone could be interesting but not new; it would be just like "I will buy an Apple stock today and leave it in the closet until I retire". Simple and not fun at all.
When I was about to give up, I thought, "Well, what about reincarnation? Would it be totally unthinkable to leave something for me when I get back to life?".
Well, if you are still reading this post, it is because you are not too upset with me for merging science and religion. Fine, I will keep it this way, because I'm talking only science here.
Even though reincarnation is most of the time linked to a second life, or something mystical, I have been thinking that it is in fact a mathematical possibility. I mean, all your memories would start fresh, but as long as there are human beings around, you have a chance to be born again, even at the same time! From the scientific perspective, at least as far as the current state of science goes, your body is defined by a set of DNA molecules. Those sets of molecules have been around, mixing and matching, ever since the first human being was born. Several mutations have created pretty much every single individual that we see today.
Even in the worst-case scenario, assuming that the DNA molecules arrange themselves in totally random ways, it only means that there is a very small probability that they will ever come together again in the exact same formation that composes you today. However, if we take another well-accepted truth of these days, we know that the universe is infinite in time. Considering that the human species will last for at least another X billion years, that small probability of being reborn becomes a real possibility. Yes, you may have really terrible luck and never be born again, but you could also be a lottery winner and be born twice at the same time (identical twins).
Of course, people can argue that those are two different individuals, and that is perfectly acceptable. However, they share the very same design. I'm not really trying to sell anything here; I certainly respect other people's opinions. My point is that, just for fun, wouldn't it be nice if you could share information with yourself whenever you are reborn?
Imagine the impact if you embrace this possibility. Ecologists could say "Save the planet for yourself" and really mean it! Anything we do here will not only affect our children but could also impact your very own next life. Those Apple stocks would be even more valuable if you thought you could cash them in 100,000 years from now.
Well, getting back to the time-capsule idea: imagine if we could digitize your life. Just create a log that would be saved on the network (I assume the Internet will survive until then) and encrypted with a DNA-based key. X thousand years from now, the young new version of "you" would go to a website, use his or her DNA to unlock the information, and learn about an entire life lived several hundred years before.
Well, I'm not sure this would make a nice PhD project (most of the technology to build it is already available); however, I'd buy a ticket to see a movie about it.
Tuesday, May 12, 2009
1) Computer Feelings – Because I wrote a short paper about this before (just browse http://labtricks.blogspot.com and check it out), I will not go into many details. The basic idea here is mixing several AI techniques in order to enable feelings in a computer. A starting point would be creating a frozen neural network "hard-wired" into emotion sensors. The purpose of such a network would be to interfere with the normal function of the emotion sensors, just as human feelings do with the human "sensors" (tiredness, hunger, anger, etc.), similar to someone who loses track of time because he or she is reading something enjoyable. Adding this "liking" type of feeling to a machine would enable it to find a purpose for itself. A common-sense knowledge base would be used to balance whatever the computer "likes" to do against what is best for its society. The common sense plus the "liking" feeling would let the computer guide itself during its learning activities, always trying to perfect its learning of whatever it likes most.
2) Ryodoraku temporal analysis – Ryodoraku is part of traditional Chinese Medicine. It is a diagnosis/treatment tool that gives the practitioner an energetic picture of his or her patient. The method is based on the evaluation of 24 acupuncture points. The practitioner uses equipment similar to a multimeter to collect measurements from each point, and those measurements are plotted on a Ryodoraku chart. Based on the evaluation of the chart, the practitioner is able to diagnose the patient and know exactly which acupuncture points should be used in the treatment. Some time ago I developed an Expert System that assists practitioners with this technique (available on the Apple App Store as i-Ryodoraku). My idea for a PhD would have two parts. The first part would be a detailed analysis of the temporal behavior of the Ryodoraku points: I'd collect data and run data-mining techniques in order to understand the cross-relationships between all Ryodoraku points over time. This initial analysis would provide the resources to identify the behavior of healthy individuals and also the progression of states that brings a healthy individual to a sick state. In the second phase, assuming there is enough evidence and understanding of the progression of states, I'd build a neural network that could interpolate the several different Ryodoraku states of a single individual in order to predict the upcoming health state. This research would target the prediction of disturbing health symptoms in a currently healthy individual. Deploying such a system on a cell phone, PDA, or smart clothes would allow people to prevent health problems before they happen.
3) A Neural Network and/or Membrane Computing runtime deployed on a cell-phone network – Neural networks and membrane computing share at least one trait: both can be deployed on a highly parallel architecture. Membrane computing maps computer instructions onto genetic cellular functions; I really do not think this is the place to go into it, and I promise I will write a more introductory post about membrane computing at some point in the future. Right now, I know that Nei Soma has been researching this area in his lab at ITA (the Air Force Institute of Technology, in Brazil) – at least he was the one who introduced me to this topic. From the neural-network perspective, the neurons are the smallest computing nodes of the system. I'm not sure most people will agree, but for me, neural networks are a really clever way to build mathematical functions. Specifically, it is possible to build mathematical functions that map anything onto anything. You could map a digital representation of your face to your Social Security number. You could map the digital representation of a flower's smell to a description of the flower. It kind of extends traditional mathematical functions into functions that map whatever you want onto anything else you want. Neural networks operate on the basis of a lot of training: special algorithms receive several (sometimes thousands or millions of) input/output pairs and train the network to do the mapping. This training is a very exhaustive process; however, after the network is trained, the actual execution of the "function" (the neural network) is very fast. The PhD work here would be building a runtime and training environment that could be deployed on cellular phones.
The reason for using mobile devices is basically that there are millions of them widely available in the world, and the operation of a single node in a neural network requires little enough computing power that a mobile phone would be more than enough to execute it. Several problems would have to be solved; here is an interesting one: neural-network training requires lots of communication among the nodes. One way to mitigate this would be to use Wi-Fi- or Bluetooth-enabled devices located close together to train the network (a good scenario would be using a traditional high-school building to train all the nearby devices – imagine that it would be a cell-phone high school as well). After the training is completed, the nodes could be activated from anywhere via SMS messages or the Internet. This would not be a good solution for a problem that requires low processing power; however, for complex problems, the time to get a solution would only depend on how long it takes to send a message to the cell phones and receive the replies from all of them. Tens of millions of nodes could be activated simultaneously. This has the potential to beat any supercomputer available today.
Well, three is my lucky number, and these are the ideas I have for now. I have to pick a final candidate by the end of this year, so do not be surprised if new posts like this come up soon.
Friday, April 24, 2009
Anyway, I got back to this expression because it has everything to do with fuzzy logic. I'd say that fuzzy logic is the "Jeitinho Brasileiro" of logical expressions.
One could say: "I like that girl; she is 1.62 meters tall, 56.5 kilos, and her eyes are 78% black."
This would be the traditional, mathematically rigid way of saying it. It is a precise description that leaves no doubt. Any normal person would hear that and laugh, because only a complete geek would describe a girl like that.
With fuzzy logic, the description would be more like: "I like that girl; she is about one and a half meters tall, a little more than 50 kilos, and her eyes are light gray."
As might be noticed, the fuzzy description is a lot more human-friendly than the first one. Humans are imprecise by nature. Fuzzy logic fits situations where precision is not required or is impossible to attain.
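As a little teaser, "about one and a half meters tall" can be captured with a membership function that returns a degree of truth between 0 and 1 instead of a hard true/false. This is just a toy sketch; the triangular shape and the set boundaries (1.30 m to 1.70 m, peaking at 1.50 m) are made-up values for illustration:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 at a and c, 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Degree to which a 1.62 m girl is "about one and a half meters tall"
height = 1.62
mu = triangular(height, 1.30, 1.50, 1.70)   # membership degree in [0, 1]
print(round(mu, 2))                          # prints 0.4
```

Instead of the crisp answer "no, she is not 1.50 m", the fuzzy set says she is "about one and a half meters tall" to degree 0.4.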
I could go on and explain fuzzy sets, but today is Friday night and I'm not desperate enough to keep updating the blog. I may get back to this subject next week :-)
Monday, April 20, 2009
Great news for Babs2Go again: yesterday, Apple approved our latest application. i-Ryodoraku is our first app in the Medical category. After helping me with i-NVADERS, my wife asked me to bring Ryodoraku to her iPhone. To make a long story short, I translated this application from Java to Objective-C. I had developed it a few years ago just for her own use during her practice. Now that she has an iPhone, she said she really needed it ported. In that case, why not turn it into a real app? That is what I did. Hopefully, more people will have the chance to take advantage of it.
First, let's understand what Ryodoraku is. I'm not an acupuncturist myself, but if you have questions, I'm sure my wife will be able to answer them. To a regular IT guy, Ryodoraku is one of those techniques that sounds like magic. It amazes me every time I see her using it.
The purpose of this technique is to use the chart to diagnose health problems and propose treatments. The practitioner uses special equipment (which looks a lot like an adapted multimeter) to measure the energetic levels at several acupuncture points (24 in total). Those values are plotted on a special scale inside the Ryodoraku chart. After plotting the points, the practitioner has a clear picture of the patient's current energetic state.
Following the Ryodoraku rules, two boundaries are drawn. Those borders delimit the normality area; ideally, every single value should fall inside it. Depending on which points are left outside, Ryodoraku indicates symptoms and treatment. Even more impressive to my computer-oriented brain is that the system not only suggests the correct symptoms most of the time, but also proposes acupuncture treatment to bring those "bad" points back to normality. Even more surprising: if a second chart is built right after the session, it is very likely that the energetic levels will be back to normal or will clearly be moving in that direction.
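To give a flavor of the idea in code: one could draw the band around the average of all 24 measurements and flag the points that fall outside it. This is purely a hypothetical sketch, not the actual i-Ryodoraku algorithm or a clinical rule; the readings and the 20% band width are made-up numbers for illustration:

```python
# Hypothetical sketch of a "normality band" check on 24 made-up readings.
# NOT the real Ryodoraku rules -- just an illustration of the concept.
readings = [132, 128, 95, 140, 150, 88, 120, 135, 110, 125, 142, 130,
            118, 122, 160, 90, 127, 133, 115, 138, 145, 100, 129, 124]

mean = sum(readings) / len(readings)
margin = 0.2 * mean                     # illustrative band width, not a clinical value
outside = [i for i, v in enumerate(readings) if abs(v - mean) > margin]
print(round(mean, 2), outside)          # indices of points left outside the band
```

The points flagged as "outside" are the ones a practitioner would then look at for symptoms and treatment.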
Anyway, in my opinion, Ryodoraku is one of the most helpful techniques in acupuncture. However, several practitioners do not use it because it requires lots of manual work. Hopefully, i-Ryodoraku will give practitioners the tools they need to improve their patients' lives even more.
As always, if you find anything wrong or have any suggestions for improvement, just let me know and I will do my best to include them in the next release.
Thursday, April 9, 2009
Here is the formatted/filtered version that I will bring to the AI class. Send me any comments if there is anything too crazy.
Friday, April 3, 2009
This is the first draft of a paper that I'm writing for my Artificial Intelligence class. It has been a very pleasant class so far. Hopefully you will enjoy reading it as much as I enjoyed writing it.
1 - Introduction
Over the last three weeks, the AI classes went from classical to modern. The skeptics got even more skeptical, and the romantics got even more excited. Historically, AI has been a continuous cycle in which objectives and expectations have been repeatedly frustrated by reality. Neural networks were born of sparks of genius, frustrated by one man's opinion (Minsky), and have risen from the ashes to keep fighting an endless war. Not much seems to be set in stone about AI; however, a few facts emerge. Current AI systems seem to fall into one of two categories: 1) playing a magician's role, where computers try to fool a human being (the Turing Test); or 2) classifying inputs and generating outputs (considered intelligent because it resembles some human behavior). Neither category is exciting enough to bring AI to a brilliant, romantic future. For the skeptics, the main goal is finding new applications for the current technology: train the neural networks and watch them solve specific problems (nothing like C-3PO interacting with Skywalker). For the romantic enthusiasts, this only means that the field is still wide open for building a real thinking machine.
A rough timeline for AI could be drawn starting with Alan Turing. That was possibly the first time a person recognized the possibility of inserting intelligence into a machine as we know it. The basic idea behind the Turing Test is to verify experimentally whether a computer brain could mislead a human in such a way that the human subject could not tell whether he or she has been communicating with another human or with a machine. This idea defined the first reported method for qualifying whether a machine could be considered intelligent. The method is not unanimously accepted, but it is considered the mark that started the AI field.
As time went by, the Classical Age's objective became building an artificial intelligence capable of simulating the entirety of human cognition and reasoning. Needless to say, they hit a dead end: since humans are building the machines, and humans themselves have absolutely no idea how their own intellectual processes work, they have not been able to reproduce that condition in computers yet. Getting a little closer to reality, researchers narrowed their scope in the Romantic Age. This second cycle brought expert systems to life. In this attempt, scientists used computers to mimic human experts solving specialized problems in their fields of expertise. Within this cycle, it is possible to find success stories. However, the complexity of accumulating and organizing the sets of rules that power expert systems ended up causing frustration and brought scientists to the Modern Age.
The Modern Age has been marked by neural networks. When mathematicians brought the concept of an artificial neuron to life, the possibilities around those models seemed endless. Several neural-network definitions can be found in the literature. A simple description of a neuron follows:
Figure 1: Artificial Neuron Model
In Figure 1, one may see that an artificial neuron is composed of three main parts: input signals, a summing junction plus an activation function, and outputs. Since this is not a biology paper, here is what happens from the computational perspective. Building a neural network is, from a 20,000-foot perspective, a really clever way to design mathematical functions. Each of the synapses has one associated input and one associated weight. To aggregate the received information, the core computes a weighted sum of its inputs: an input's "importance" is weighted in the equation by the factor associated with its synapse. This way, no matter how many inputs there are, the core always receives a single value. That single value is fed into an activation function, which could be considered the "cell activity". The activation function is influenced by the inputs on each synapse and also by a threshold. The result of the activation function is the output of the neuron.
The neuron itself is just a model; the clever part is how it can be trained to generate the required results. Assuming there is a set of inputs and outputs (from now on called the "training kit"), training a neural network consists of feeding the input values into the artificial neurons and evaluating their outputs. If a neuron cannot match a particular output to its input, the weights of its synapses are adjusted until the proper combination is found. Once the neuron generates proper outputs for the entire training kit, it is considered trained, and the "intelligence" to solve the problem is stored in the system. One has just created a mathematical function that maps the desired inputs to the desired outputs. A special feature of the neuron model is that, once trained, as a mathematical function, it can extrapolate that knowledge to guess intermediate values for inputs that were not part of the original training kit.
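The training loop described above can be sketched with the simplest possible case: a single neuron (a perceptron) with a step activation, learning the logical AND function from a four-pair "training kit". This is an illustrative toy with my own choices of learning rate and random seed, not a definitive implementation:

```python
import numpy as np

# "Training kit": four input/output pairs defining the logical AND function
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

rng = np.random.default_rng(0)
w = rng.normal(size=2)      # synapse weights (the "balance factors")
b = 0.0                     # threshold term
lr = 0.1                    # learning rate (an arbitrary choice here)

def activate(s):            # step activation function
    return 1 if s > 0 else 0

for epoch in range(100):
    errors = 0
    for xi, target in zip(X, y):
        out = activate(np.dot(w, xi) + b)   # weighted sum -> activation
        err = target - out
        if err != 0:                        # adjust weights until outputs match
            w += lr * err * xi
            b += lr * err
            errors += 1
    if errors == 0:                          # every pair in the kit matched
        break

print([activate(np.dot(w, xi) + b) for xi in X])   # -> [0, 0, 0, 1]
```

After training, the stored weights are the "intelligence": evaluating the neuron is just one weighted sum and one threshold test, which is why execution is so much cheaper than training.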
Assuming that several human behaviors are not much more than actions and responses (I'm hungry -> I have to eat; I feel pain -> I have to protect myself; etc.), neural networks have found a good fit in that area. Of course, the human being, at first sight, seems to be a lot more than just action and response. In this author's opinion, the neural network can be seen as a good step in the right direction.
The Turing Test and the action/response perspective ended up being a good match. This is the closest that man has been able to get to intelligent machines. Through several different training techniques, artificial personalities have been developed, and they come very close to passing the Turing Test. However, even though it is widely accepted, the Turing Test may not be enough to really qualify intelligence. For example, let's bring the test to the following scenario:
a) keep a human painter in a room
b) keep a monkey on the second room
c) the Human judge will be a painter and will communicate to a) and b) only by looking into their paintings.
If the exercise involves a free form of modern art (pretty much just random drops of paint on a canvas), there is a chance that the human judge will be misled into believing that the monkey is actually a human. Would this mean that the monkey is just as intelligent as a human?
After a few exercises with on-line personalities (www.a-i.com), it is possible to dream of a real artificial intelligence being made. However, one can realize right away that the artificial personalities are not real. There is a very clear lack of awareness of the world. As good as their knowledge base is, they cannot provide context and temporal realization yet. If one asks HAL about a TV show, it will promptly tell you that it loves "Seinfeld". It can tell you a lot about each character and about its favorite, George. However, it gets completely lost when one asks about the show from last Thursday... Maybe this is just a matter of expanding the knowledge base, but until then, AI systems like HAL are just playing the magician role...
I consider myself part of the classical thinkers of Artificial Intelligence. If there is a real challenge in the Artificial Intelligence area to be accomplished, it is the creation of an artificial being. This may not be accomplished in my lifetime; however, this is the real challenge. Using AI techniques to solve other problems is nothing more than valuable applied engineering. In the next section, since this is not a pure scientific paper (and I do not have a scientific agenda in this area yet), there is going to be a brainstorm of opinions and possible technologies that could lead to potential work on developing a smart computer in the future.
2 - Adding feelings to Computers
I still have no idea why, but neural networks sound just right to me. As limited as they may be today, I do not think that they are limited for any inherent reason. They seem limited because humans have not been able to use them properly yet.
As great classifiers, their initial role should be composing the computer's sensors. Everything from vision to touch to hearing could take great advantage of the natural capabilities of neural networks.
Just like the neuron model, a single specialized network is nothing without a proper training algorithm. In the case of enabling a computer to think, feelings and purpose would be required.
Evaluating the human being, it is possible to realize that everybody is born without a purpose. When a new baby is born, there is absolutely no clue as to why he or she was born or what his or her future will look like. Based on external inputs, humans are driven to find what they like most and what they believe to be their purposes. There is a whole spectrum between the boy who became a doctor just because his father wanted him to and the boy who took his own life just because he grew old enough without finding his purpose. Both behaviors might indicate that, without purpose, humans get just as lost as computers. The main difference here is that humans' "firmware" is designed to make sure that humans keep looking until a purpose is found.
Moving to theoretical ground, a "clean" brain could be compared to a neural network with specialized areas for all sensors and memory. As described before, the single algorithm to be hard-coded would be one that enables the network to find its purpose. However, there is a key part of this concept that is missing. Sane people do not drive their actions only by external inputs. They use their feelings (just like me saying that neural networks sound good without knowing why) in order to validate their actions. In this case, feelings need to be part of the system.
Reviewing human feelings, they are abstract by definition. In this case, they can be evaluated by their side effects. In general, they allow people to do things that they would not normally do if they were not "taken" by their feelings. For example, a regular boy whose main exercise is playing iPhone games on his couch would run like crazy if he had to run away from a wolf in a forest. A normal guy plays a foolish role by singing at his girlfriend's front window just because he is in love, etc.
By abstracting those effects, one could propose that feelings are internally driven actions that overrule or mislead common sense in order to allow humans to reach their purposes.
Let's assume this set of feelings and their associated functions:
Figure 2 - Pertaining functions for computer feelings
As you can see in Figure 2, each feeling is defined by a fuzzy-logic membership function, and as such, the feelings overlap with each other. The Excited state might even bring all other states back to their balance levels in case it goes very high. In this case, the computer firmware would be designed to train the neural network to make sure that those "feelings" are kept in the balanced state as long as possible.
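The overlapping fuzzy qualifiers could be sketched as triangular membership functions. The specific feeling names, breakpoints and the 0..100 counter range below are assumptions for illustration; Figure 2 defines the actual shapes:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: rises from a to a peak at b,
    then falls back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical overlapping qualifiers over a 0..100 "feeling counter"
def hungry(level):   return triangular(level, -1, 0, 40)
def balanced(level): return triangular(level, 20, 50, 80)
def excited(level):  return triangular(level, 60, 100, 101)

# Because the functions overlap, a counter of 30 belongs
# partially to two states at the same time
level = 30
memberships = {"hungry": hungry(level), "balanced": balanced(level)}
```

The firmware goal described above then becomes a concrete objective: keep the counter in the region where `balanced` has the highest membership.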
Now that the emotion qualifiers have been set, it is time to enable the computer to trigger them. The initial state of the brain would be totally random (or shaped by the species' evolutionary path). The fact is that the weights of each neuron would be different for each individual. A specific area could be called the "talent" area. That area would be hard-wired to the feeling counters, meaning that whatever outputs the neurons generate in the talent area would have an immediate effect on the feeling counters. For example, the Hungry counter could be associated with a sensor monitoring the computer's battery level. However, if the computer finds something that it "likes" or that makes it "excited", the lack of battery would not be realized until it leaves the current state or an emergency alarm is generated. It would be the same as people studying all night long, without sleep, just because of the fear of doing badly on the test taking place the next morning. Crazy, but reasonable.
The diversity among these artificial beings would depend on the topology of the "talent" area. The talent area would rely on unsupervised learning; in fact, the learning process would barely take place there. Computers would be born with their talent just like humans are. From an anonymous source, I heard that a human can become an expert in any area; however, if he or she decides to become an expert in his or her talent area, he or she will certainly become a genius. This statement would also be true for artificial beings.
The secondary purpose of those new beings would be getting as much information as possible in order to be helpful to their society (do what your father tells you to do). The primary purpose would be making sure that the basic feelings are balanced (follow your feelings). Common sense would be balancing the society's needs (do not hurt anyone) and the internal needs (just because it pleases you). The talent area would be responsible for enabling the system to "like" special subjects. By having their "body" functions enhanced by the talent areas, their firmware would drive their actions to do and learn more of whatever is causing those feelings. This way, just like humans, neural networks would be able to learn anything; however, whenever the talent area gets excited, they would be able to learn and do things that they were not designed to.
3- Final comments
As science fiction as it might sound, it makes sense. There is no intention here to state that this is the right way to evolve the AI area; neither is it just a crazy thought that, hopefully, nobody will care to read. As previously stated, this is supposed to be a brainstorm exercise on how a smart brain could be designed.
As not many things seem to be written in stone in AI, hopefully this paper will be able to reach some of the kids who will be born 10 to 15 years from now and enable them, with their technology, to move one step closer to truly intelligent artificial beings.
Sunday, March 29, 2009
Yesterday, March 29th, 2009, our "I-NVADERS" application got released. At this time, I was kindly supported by my wife. I had to realize that, by myself, I was not going to be able to deliver this app. I consider myself an OK developer; however, I really stink at building digital graphics.
My wife is a physiotherapy practitioner, but I think that she got the digital virus from me :-). One of her hobbies is painting, and she has already painted several canvases; bringing her talent to the world of digital content was critical to building I-NVADERS. Without her, any of you who bought the game would be playing with a black/red square as the defense ship and a bunch of '50s-style ships as the invaders on a blue background. All the great backgrounds and ships that you see now are her work :-).
Well, the game is inspired by an old arcade game that I loved. In our version, there are about 8 ships flying across the sky and you are a defensive ship. Your objective is to blow up all the alien ships. The alien ships fly in random directions across the sky, and every once in a while a few of them try to shoot you.
From time to time, cargo ships reveal themselves. The cargo ships carry different gifts; when a cargo ship gets destroyed, its gift is dropped and can be used by the defender ship. The gifts are:
- SHIELD: Recharges your shield
- LIFE: Gives you a new space ship
- TRIPLE SHOT: Lets you fire three shots at the same time
- CANNON: Gives you a special cannon that shoots larger bullets.
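For the curious, the gift mechanic above boils down to something like the sketch below. This is not the game's actual code (I-NVADERS is written in Objective-C); every class name and numeric value here is an illustrative assumption:

```python
import random

GIFTS = ["SHIELD", "LIFE", "TRIPLE_SHOT", "CANNON"]

class Defender:
    """Hypothetical defender-ship state touched by the gifts."""
    def __init__(self):
        self.shield = 50          # percent
        self.lives = 3
        self.triple_shot = False
        self.big_bullets = False

    def apply_gift(self, gift):
        if gift == "SHIELD":
            self.shield = 100     # recharge the shield
        elif gift == "LIFE":
            self.lives += 1       # a new space ship
        elif gift == "TRIPLE_SHOT":
            self.triple_shot = True   # three shots at once
        elif gift == "CANNON":
            self.big_bullets = True   # larger bullets

def cargo_destroyed(defender, rng=random):
    """Each destroyed cargo ship drops one of the gifts at random."""
    gift = rng.choice(GIFTS)
    defender.apply_gift(gift)
    return gift
```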
The alien ships get faster and smarter in each level. This way, you will be continuously challenged in each new level.
As things move forward, I intend to add new levels and bosses and to build a multi-player environment. I'm evaluating the new iPhone OS 3.0; its new features seem to make multiplayer development much easier. I will keep you posted in case I find anything interesting on that end.
By the way, I forgot to include this information in the help windows. In case you want to mute the game, you just need to tap the top-right side of the window, where there is a representation of a speaker (well, everything that is ugly is mine; the nice stuff is my wife's). When you tap there, the game goes mute. A second tap brings the sound back.
I hope you enjoy it. Let me know if you find any problem with the application and I will do my best to fix it.
BTW, next week I will start posting again about some of my PhD activities. This time, I'm taking a very interesting class on Artificial Intelligence. This is a very hot field. Our first subject is Neural Networks, so there are going to be lots of posts about it here in the future.
Thursday, March 5, 2009
I'm glad to share that my first iPhone application has been released. Its name is "iMess". I'm still a newbie on the Apple iTunes Store, so I'm not sure what to expect. The status of my application has just changed from "In Review" to "Ready for Sale".
According to Apple documentation, this means that the app should be available on the App Store by now. However, I cannot find it anywhere yet. Hopefully it will show up soon (after the servers get refreshed).
Anyway, this is a fairly simple and fun application. I chose it because it does not require an advanced AI engine or special graphic effects. It is a puzzle game, and it is very popular in Brazil.
The game is composed of a background image sliced into several squares. The difficulty level is based on the number of squares and on how much you ask the system to mess the pieces up. The system will iMess the board, and the challenge is getting all the pieces back in the right places by moving them around. The only way to move pieces is via the empty slot on the board.
From the developer side, I have included a feature that I wish had existed when I was a kid. Believe it or not, I have never been able to win this board when the pieces were REALLY messed up. Because of this frustration, I have included an AI feature that can tell the player all the right moves to organize the board. This way, even I can finish the game :-).
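The board and the hint feature boil down to a classic sliding puzzle: shuffling by random legal moves keeps the board solvable, and a search back to the goal recovers the list of right moves. Here is a minimal 2x2 sketch with breadth-first search; iMess itself surely uses a larger board, and all names here are my own assumptions rather than its real code:

```python
import random
from collections import deque

def neighbors(state, size):
    """All boards reachable by sliding one tile into the empty slot (0)."""
    i = state.index(0)
    r, c = divmod(i, size)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < size and 0 <= nc < size:
            j = nr * size + nc
            board = list(state)
            board[i], board[j] = board[j], board[i]
            yield tuple(board)

def shuffle(goal, size, moves=30, rng=random):
    """Messing the board only through legal moves guarantees solvability."""
    state = goal
    for _ in range(moves):
        state = rng.choice(list(neighbors(state, size)))
    return state

def solve(start, goal, size):
    """Breadth-first search: returns the sequence of boards back to goal."""
    frontier = deque([start])
    parent = {start: None}
    while frontier:
        state = frontier.popleft()
        if state == goal:
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in neighbors(state, size):
            if nxt not in parent:
                parent[nxt] = state
                frontier.append(nxt)
    return None  # never happens for boards produced by shuffle()

goal = (1, 2, 3, 0)           # 2x2 board; 0 is the empty slot
start = shuffle(goal, 2)
path = solve(start, goal, 2)  # the "all the right moves" hint
```

For real board sizes, a plain BFS explodes quickly; an informed search such as A* with a misplaced-tiles or Manhattan-distance heuristic would be the natural upgrade.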
I will post some screenshots later on so you can see what it looks like. If you have any interesting ideas for iPhone apps, let me know and I may be able to make them work.
Monday, January 12, 2009
I'm back from the holidays. Christmas is over, the New Year has just arrived, and I still have one month of vacation left for 2009, but in January I will only be out for one week.
Anyway, I'm not here to talk about my vacation schedule. Let's talk about the upcoming adventures in the lab. PhD classes have not started yet, so I have looked for alternative activities for my vacation.
I have decided to start working with the iPhone SDK. Yeah, Apple just got a new fan. I bought myself an iPhone and a MacBook. I have to say that Mac OS X is the best OS I have ever used. It is ridiculous how I could spend so much time on Windows, always waiting for the next great Windows release. I wish I had known the Mac before. Of course, the fact that a MacBook in Brazil costs twice as much as a computer of the same configuration, and that it is nearly impossible to find good games for the Mac, also helped me to stay away from it. However, this time it was love at first sight. The system is incredible; pretty much everything that I considered easy to do in Windows is ridiculously easy to do on the Mac.
I have only two complaints that prevent me from using the Mac as my primary computer: 1) no games (I cannot live without them) and 2) an outdated MSN version. Hopefully, at some point, those issues will be addressed and my Windows time will be over. Until then, I keep a small Windows Vista partition on my MacBook.
Getting back to the iPhone SDK: Apple has enhanced the Xcode IDE and the Objective-C language and added the iPhone Simulator to the package in order to allow iPhone development. The main inconvenience is that Xcode only runs on top of Mac OS; that is why I was required to buy my MacBook. The IDE will allow you to design, debug and benchmark your application. However, do not expect to be able to run your software on your iPhone: running applications on the iPhone requires a paid subscription with Apple (of course, I'm talking about the official procedure; I'm not using any of the jailbreak software available on the Internet).
This leads the discussion to the first pain point of the process. iPhone software needs a cipher key in order to be signed and accepted by the iPhone hardware; that is why you need a subscription with Apple. Under this subscription, Apple acts as a Certificate Authority and allows you to have a private and a public key. With those keys, you can sign your application and run it on the iPhone. It should be a simple procedure to get this subscription. However, as usual, everything is more complicated for international people.
I have been waiting for over a month. The Apple website allows you to enroll and pay for your subscription on-line. If you are a lucky citizen of one of the listed countries, I heard that you could get your subscription in a matter of hours. In my case, I have been waiting a total of 3~4 weeks since my first contact with Apple. I heard back from them with instructions on how to pay the enrollment fee last Friday (three weeks after the initial contact). I sent them a fax with my credit card information, and I'm still waiting for an answer on whether they received it. Hopefully I will be able to start posting my applications sometime soon.
Objective-C is not that bad. Considering that I have used mostly Java over the last several years, getting back to C was a little challenging, but not a major problem. The Xcode environment is good, but it clearly feels outdated when compared to NetBeans or Visual Studio. The upside of the IDE is the set of automation and monitoring tools that come in the package: tools to monitor memory and processor usage, and a lot more. I have never seen tools like that shipped as part of the standard product before. The downside is that the UI builder and Xcode are not really the same tool. They are separate tools that are integrated, which means that you need to really understand how they work together in order to write the code properly and see the changes reflected on both ends. After you get used to it, it works fine. However, you will curse Apple several times during your learning period.
The iPhone Simulator is the bright side of the IDE. It is almost a fully functional iPhone running on your MacBook, and it makes it very easy to test and debug applications. Almost all features work; the only one that I really missed was the camera. I could not test the camera application at all because it does not work on the simulator. However, you can still get images from the Photos folder, which works just fine.
Well, I think that this is enough information for now. I will keep you posted as my work on the iPhone evolves. For now, I already have a puzzle application working. As soon as Apple gets back to me, I will submit it to the iTunes Store. Make sure you look for the "iMess" application next time you go to the App Store.