We were discussing this on the Hubski IRC the other day. At some point in the near future, transistors would have to be smaller than the electrons they're switching in order for Moore's Law to hold up. Devac mentioned that there are physicists working on atomic transistors. Even if those do make it out of bleeding edge research labs, I suspect we'll have a lot of issues just because anything that small is incredibly susceptible to glitches due to electromagnetic radiation from cell phones and wifi and whatnot.
Personally, I think the future of computing lies in programming language theory. Effectively, we have three problems: writing fast code is hard, writing code that runs across multiple cores/CPUs/systems is hard, and people want programs that don't fail (or fail less). I think the way to solve these problems is developing languages that carry enough information for the compiler to prove that programs written in them are correct. Once your compiler can prove things about your code, it can take advantage of that to generate fast, parallelizable, correct code.
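A loose illustration of what I mean, in Python terms (Python can't actually prove anything about a function; here purity is just promised by convention, where a stronger type system could make the compiler check it):

```python
# If a tool could *know* that square() has no side effects, it would be free
# to spread the work across cores without changing the result.
from concurrent.futures import ProcessPoolExecutor

def square(n: int) -> int:
    # "Pure": the output depends only on the input, no shared state is touched.
    return n * n

if __name__ == "__main__":
    data = list(range(10))

    sequential = [square(n) for n in data]

    with ProcessPoolExecutor() as pool:
        parallel = list(pool.map(square, data))

    # Because square() has no side effects, parallelising it cannot change the answer.
    assert parallel == sequential
    print(parallel)
```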
But...who knows! The future may just as easily lie in various quantum-ish computers or some computational application of biology.
While we are several billion dollars in R&D away from this being realistic, it would be remiss of me not to point out a recent-ish breakthrough: http://phys.org/news/2015-09-disordered-evolving-breakthrough-nanoparticle-boolean.html
Electrons generally have a much smaller wavelength (which is roughly analogous to the 'size' of a fundamental particle, which in principle is infinitesimal) than a photon of comparable energy does. Unless you want your computer to give you radiation poisoning, electrons are the way to go. Good effort though.
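Some rough numbers, if it helps (just a back-of-the-envelope sketch with arbitrary round energies, using the non-relativistic de Broglie relation for the electron):

```python
# Compare the wavelength of a slow electron to photons of various energies.
import math

h = 6.626e-34     # Planck constant, J*s
c = 2.998e8       # speed of light, m/s
m_e = 9.109e-31   # electron mass, kg
eV = 1.602e-19    # joules per electron volt

def electron_wavelength_nm(energy_eV: float) -> float:
    p = math.sqrt(2 * m_e * energy_eV * eV)   # non-relativistic momentum
    return h / p * 1e9

def photon_wavelength_nm(energy_eV: float) -> float:
    return h * c / (energy_eV * eV) * 1e9

print(electron_wavelength_nm(1.0))    # ~1.2 nm for a 1 eV electron
print(photon_wavelength_nm(2.5))      # ~500 nm for a visible-light photon
print(photon_wavelength_nm(1000.0))   # ~1.2 nm takes a ~1 keV X-ray photon
```

Getting a photon down to electron-sized wavelengths takes keV energies, i.e. X-rays, which is where the radiation poisoning comes in.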
I think the future of programming lies in the same place the future of the motor does: a quick era of massive expansion, followed by the technology hitting a fairly impassable brick wall and our ideas about its possibilities becoming sensible. There are still people out there who believe it is possible to create a technological singularity where a computer becomes infinitely smart by improving itself. This is like people pre-relativity thinking we would travel instantly to anywhere if we kept making vehicles faster.

Ultimately, we've had hyper-efficient, complex, thinking machines around for years, easily produced, and constantly finding ways to improve their capabilities. They just demand food, water, and payment, and aren't great at menial tasks. So we make something that is, and we are finding that with things like neural networks and machine learning, as computers become as smart and capable as humans, they start losing that "ultra efficient" trait and making the same stupid mistakes, generalizations, and creative failures that humans do. We will reach a day where the computer is nothing more than a device we use for things. Today it is a cultural and technological icon of the present power of technology; tomorrow computers will be a fact of life like the telephone, the atom bomb, or the car.
Yep, people way oversell the ability of AI to do stuff. In reality, AI is only good because computers are really fast at failing at a task over and over again. And heck, even if we get very smart computers, there are uncountably many problems but only countably many possible programs, so almost all problems are uncomputable and we'll never be able to rigorously solve them.
"So we make something that is, and we are finding that with things like neural networks and machine learning, as computers become as smart and capable as humans, they start losing that 'ultra efficient' trait and making the same stupid mistakes, generalizations, and creative failures that humans do."

That is fascinating. Can you elaborate on that?
I'm partially talking out of my ass here; this is casual observation of things I've seen online rather than expert knowledge, so take my words with a grain of salt.

Normally people assume computers are big predictive machines that solve a problem 100% or 0%: a computer does its job perfectly, or it doesn't do the job at all. People imagine an intelligent computer to be this super-machine that is far superior to human knowledge, super logical, and all that other stuff. But with things like neural networks and other machine learning systems you can see the machine slowly develop much like a kid might, making really funny decisions that make sense if you are the machine, acting from a subjective set of knowledge, but that don't really make sense in the grand scheme of things. For example, one of these programs being trained to navigate a maze might find out that you can hug the right wall to get to the end, and be satisfied with that solution. No attempt to find the optimal route; the program lazily grabs the first strong solution and follows it every time, because trying a new path is more likely to hurt than help. Sound familiar? This is stuff the AI designer has to actively tweak variables to fight against: making sure the machine has to experiment every once in a while, or making it so that following the same path over and over again starts to result in a negative or non-reward for the AI.

Bored yet? Neural networks often have to be reduced to a certain size when being trained, otherwise the network literally sets itself up to memorize everything in the data set you train it on rather than actually learning to solve the problem. The official term is overfitting: http://www.mathworks.com/help/nnet/ug/improve-neural-network-generalization-and-avoid-overfitting.html?s_tid=gn_loc_drop Memorizing everything is the best solution to the problem, just not the one we want.

One example of an algorithm acting weird is in those "teaching a bot to play NES games" videos. There are examples where it will jump off a big ledge because doing so lets it move right for a while, which the program considers a "good" thing. It then dies, but apparently that "good" action kept getting selected for a long time until the program figured out a new way of doing things.
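To make the "grabs the first strong solution unless you force it to experiment" bit concrete, here's a minimal toy sketch with made-up rewards (action 0 is "hug the right wall", action 1 is a better route the agent has to stumble onto):

```python
# Toy two-action problem: the agent only learns about an action by trying it.
import random

REWARDS = {0: 0.7, 1: 1.0}  # invented numbers: wall-hugging works, the other route is better

def run(epsilon: float, steps: int = 1000, seed: int = 0):
    rng = random.Random(seed)
    q = [0.0, 0.0]    # the agent's running estimate of each action's value
    counts = [0, 0]
    for _ in range(steps):
        if rng.random() < epsilon:
            action = rng.randrange(2)          # forced experimentation
        else:
            action = 0 if q[0] >= q[1] else 1  # greedy: take whatever looks best so far
        counts[action] += 1
        q[action] += (REWARDS[action] - q[action]) / counts[action]  # running average
    return {"value estimates": q, "times chosen": counts}

print("pure greedy (epsilon=0.0):", run(0.0))  # locks onto the wall-hugging trick forever
print("epsilon-greedy (0.1):     ", run(0.1))  # occasional exploring finds the better route
```

The greedy version never even tries the better route, which is exactly the kind of thing the designer has to tune against.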
I can see that now. Our "common sense" output is only due to experience with a number of factors, which an AI may not possess or even be able to simulate in the way humans experience it. The difference between newborn humans and AIs, therefore, is a big portion of information that we process without thinking or even being able to notice it. We subconsciously learn a lot of stuff from trial and error, and we utilize it without applying conscious effort. It's quite difficult to program something when we don't understand how it works; in fact, it may be up to chance for us (or the self-improving AI) to do so right now, because neuroscience is nowhere near that far in explaining our thinking.

Do you think it's possible to create an AI whose whole purpose would be to crunch through data (with the reward being more data connections made)? As in, is it possible to create one that won't loop onto sheer data-crunching or get stuck at one level of data being crunched (if that's even possible)? This sounds like it would boost our progress significantly, if only because of how much data an artificial (and bigger/more responsive) neural network might go through in a unit of time.

"One example of an algorithm acting weird is in those 'teaching a bot to play NES games' videos."

I read about it some years ago. Good to finally meet the maker of this demi-mythical beauty: the game that decides not to play because that's the best it can get to winning.
"Do you think it's possible to create an AI whose whole purpose would be to crunch through data (with the reward being more data connections made)?"

"in fact, it may be up to chance for us (or the self-improving AI) to do so right now, because neuroscience is nowhere near that far in explaining our thinking."

It's essentially what AI are doing right now: massive amounts of trial and error in the hope that the AI finds some solution that makes sense and then goes along with it. The AI is an "optimizer" in that it continuously tries to find a better solution rather than thinking through the problem as a person with a big set of previous experience might. We just kind of hope it stumbles on the right answer with a whole lot of repetition and a bit of smart direction-picking.

The problem is that two sets of data can look very different, with very different inputs, and it's almost impossible to make a single program that can deal with more than one or two "types" of data. We also have to have a massive set of meaningful data in order to do something like this, which is not easy to come by. That's the problem with neural networks: they have a set of inputs and then a set of nodes that represent the connections and ideas drawn from those inputs. Put in a new set of info from a different situation and those nodes suddenly become meaningless. I think the big thing here will have to be an AI that can manage other AI that deal with sets of data: one whose job is to create/identify information and find the best program to solve that problem.

And all this training is expensive and hard to do, as well. Neural network/machine learning rigs are set up with a whole bunch of high-end graphics cards and eat up a whole lot of processing power; the bigger the network, the more costly it is to run. You can kind of see this problem having been solved in the brain as well, where there are billions of neurons but only a few percent are used at a time for any individual input. The brain doesn't even exist in a space where it has to have a processor go through each neuron to simulate it before it can get some sort of output.
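If it helps, here's a toy version of that "optimizer" idea: it knows nothing about the problem except a score, and just keeps any random tweak that doesn't make the score worse. The target string and scoring function are invented purely for illustration.

```python
import random

TARGET = "hello hubski"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def score(guess: str) -> int:
    # The only feedback the optimizer gets: how many characters already match.
    return sum(g == t for g, t in zip(guess, TARGET))

rng = random.Random(0)
guess = "".join(rng.choice(ALPHABET) for _ in TARGET)  # start from pure noise

tweaks = 0
while score(guess) < len(TARGET):
    tweaks += 1
    i = rng.randrange(len(TARGET))
    candidate = guess[:i] + rng.choice(ALPHABET) + guess[i + 1:]
    if score(candidate) >= score(guess):   # a bit of "smart direction-picking"
        guess = candidate

print(f"stumbled onto {guess!r} after {tweaks} random tweaks")
```

No understanding anywhere, just repetition plus keeping whatever happens to work.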
"I think the big thing here will have to be an AI that can manage other AI that deal with sets of data."

So, compartmentalization, like in the brain. Have one subsystem deal with X, another with Y, give others Z, A, B, and C, and so on, and you have a somewhat-similar neural structure. My thinking is that those parts would still need to be capable of type conversion of some sort, given that the X module might output, say, strings while Y gives away arrays. Such a capability defeats the purpose of compartmentalization, though: if any subsystem is capable of dealing with anything, there's no reason to separate them except for parallel computation, and that's a whole other story.
"My thinking is that those parts would still need to be capable of type conversion of some sort, given that the X module might output, say, strings while Y gives away arrays."

Neural networks work by having information represented as a set of numbers, weights, and functions. They don't really take in or output "strings" or "arrays"; it's almost always numbers in, numbers out, and then the numbers are converted into whatever meaning they need to carry. For example, a network that identifies age from an image will take in numbers as the R, G, B values across a massive array and use a big set of arrays and functions to narrow those down to a single number that is slowly "optimized" until it starts matching somewhat accurate examples. A network that deals with strings may just break each string down into characters and have a node for each character; a network that deals with images uses a node for each RGB value.

So if you have a network that was trained on words in one type of sentence, it gets used to one region lighting up meaning one thing. However, if the sentence is of a new type, that region will light up, but it will not mean what it should. That's the sort of issue with different data types, not so much arrays or ints or doubles or anything of that sort.

Say you have the sentences:

"This is a great time."
"This is a horrible time."

And you train a neural network on those. Now you input a new sentence:

"This time was a great one."

The area that normally lights up in response to "great" now stays dim because it sees "was" in that position, and the network will say the sentence is not positive. It was trained in an environment where "great" or "horrible" always appear in the same place, and it assumes that fact. It will mess up when given a different kind of input, and will be useless in those cases.
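To make that concrete, here's a tiny toy sketch. It isn't a real network, just a fixed-position encoding with hand-picked weights standing in for whatever training on those two sentences would end up learning:

```python
# The "trained" knowledge: the only difference the model ever saw was the word
# at position 3 being "great" (positive) or "horrible" (negative).
weights = {(3, "great"): +1.0, (3, "horrible"): -1.0}

def sentiment(sentence: str) -> float:
    total = 0.0
    for position, word in enumerate(sentence.lower().split()):
        total += weights.get((position, word), 0.0)
    return total

print(sentiment("This is a great time"))      # +1.0 -> positive, as trained
print(sentiment("This is a horrible time"))   # -1.0 -> negative, as trained
print(sentiment("This time was a great one")) # 0.0 -> "great" sits in a different slot,
                                              #        so the model sees no signal at all
```

The new sentence isn't a different "datatype" in the programming sense; it just doesn't line up with the positions the model's numbers were tied to.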