There are actual, absolute answers to some of the questions in this thread. The field that studies exactly this topic is game theory. Here are some of the major conclusions (without any proof or detailed explanation; sorry, but learning this much took months of research):
NOTE: This post is HUGE. Skip to the bulleted list near the end for the conclusions (tl;dr). Read through for some insight into how those conclusions were reached.
Firstly, in any game state there is an optimal strategy, one that will produce the greatest overall record if used consistently. An easy example (easy to follow, not to explain, as the length of this post shows) is Rock-Paper-Scissors.
If your opponent picks rock, the best strategy is always paper. If you pick paper, your opponent's best strategy is always to pick scissors. But wait! You don't know what your opponent's move will be. This is called a
simultaneous game. You and your opponent make your decisions, with no knowledge of your opponent's decision before the "ply" (a turn in this case) is carried out (throwing your hand in the chosen shape).
So what's the optimal strategy for RPS? Well, suppose we assign the end of any game of RPS a value: 1 if you win, 0 if you tie, -1 if you lose. You would like, over time, to win as much as possible and lose as little as possible.
Draw or imagine a chart (I can't do that here, sadly) with each axis representing a player and R, P, and S written along it. In the middle, write the values we chose earlier for each combination of moves. If you just chose Scissors over and over again, your opponent could easily figure out that their best move is to choose Rock over and over again, and you'd end up with a net average score of -1. Obviously we can do better. The optimal strategy, as you can work out with a little thought, is to choose randomly between your three options with equal probability (33.3% each). This way, regardless of what move your opponent makes, you have a 1/3 chance to win, a 1/3 chance to tie, and a 1/3 chance to lose. Then we can multiply these probabilities by their respective "payoffs" (the values we chose in the last paragraph), add them together, and we get 0.
That's the best strategy: choose randomly, and over a large number of games (look up the Law of Large Numbers in statistics if you're not sure about this) you'll win precisely as often as you lose. You cannot do better than this: there is no way to guarantee a better average payoff than 0.
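If you want to see that arithmetic spelled out, here's a minimal sketch in Python. The move names and table layout are just my own encoding of the chart described above, nothing official:

```python
import itertools

# Payoff matrix from *your* perspective: 1 = win, 0 = tie, -1 = loss.
MOVES = ["R", "P", "S"]
PAYOFF = {
    ("R", "R"): 0,  ("R", "P"): -1, ("R", "S"): 1,
    ("P", "R"): 1,  ("P", "P"): 0,  ("P", "S"): -1,
    ("S", "R"): -1, ("S", "P"): 1,  ("S", "S"): 0,
}

def expected_payoff(my_mix, opp_mix):
    """Average payoff when both players pick moves with the given probabilities."""
    return sum(my_mix[m] * opp_mix[o] * PAYOFF[(m, o)]
               for m, o in itertools.product(MOVES, MOVES))

uniform = {m: 1 / 3 for m in MOVES}
all_rock = {"R": 1.0, "P": 0.0, "S": 0.0}
all_scissors = {"R": 0.0, "P": 0.0, "S": 1.0}

print(expected_payoff(uniform, all_rock))       # 0.0: the even mix can't be exploited
print(expected_payoff(all_scissors, all_rock))  # -1.0: spamming Scissors loses to Rock spam
```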
But there are actually Rock-Paper-Scissors World Championships (just like Pokemon)! How could a competitive game conceivably be taken seriously if all the players are doing is choosing randomly and hoping that luck is on their side? Well, if you look back, you may notice (or may have already noticed) that we made an assumption: neither player knows anything about what the other's decision will be. Humans are remarkably poor at choosing things at random. If you ask someone to recite numbers at random, you can predict their next choice with surprising accuracy from just their previous couple of choices. This is a flaw, and the best RPS players exploit it to their advantage. This depth of competition is not an inherent property of the game; it is an effect of the imperfections in the players. If everyone were perfect, the best strategy would, of course, be random choice, and that's not much of a game, is it?
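To make that concrete, here's a toy sketch (entirely made up by me, not anything real championship players use) of how you might exploit a predictable opponent: count what usually follows their last two throws and play the counter to that guess.

```python
from collections import Counter, defaultdict

COUNTER_TO = {"R": "P", "P": "S", "S": "R"}  # the move that beats each move

def predict_next(history):
    """Toy predictor: guess the opponent's next throw from what has usually
    followed their last two throws; fall back to their overall favorite."""
    if len(history) >= 3:
        follow = defaultdict(Counter)
        for a, b, c in zip(history, history[1:], history[2:]):
            follow[(a, b)][c] += 1
        last_two = (history[-2], history[-1])
        if follow[last_two]:
            return follow[last_two].most_common(1)[0][0]
    return Counter(history).most_common(1)[0][0] if history else "R"

opponent_history = list("RPRPRPRP")    # an opponent stuck in a pattern
guess = predict_next(opponent_history)  # predicts "R"
print(COUNTER_TO[guess])                # so we throw "P"
```

Against a perfectly random opponent this gains you nothing, which is exactly the point of the previous paragraphs.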
So Pokemon is kinda similar to RPS. The big difference is that Pokemon is a much more complex game (luck and all). You'll die long before you finish analyzing every possible combination of decisions that leads from the beginning to the end of a battle; even the most powerful computers in the world couldn't do it in any realistic amount of time. So now you can't even compute the optimal strategy, even if you do assume both players are perfect. How could anyone even hope to play Pokemon strategically?!
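(To get a sense of why brute force is hopeless, here's a quick back-of-the-envelope count. The numbers are my own assumptions, not from any official source: with 4 moves plus 5 possible switches, each player has roughly 9 options per turn, so a single turn already has about 81 joint decisions before any luck is involved.)

```python
# Rough illustration of the combinatorial explosion (assumed numbers).
options_per_player = 9          # 4 moves + 5 possible switches
turns = 20                      # a fairly ordinary battle length
combinations = (options_per_player ** 2) ** turns
print(f"{combinations:.2e}")    # roughly 1.5e+38 decision sequences, ignoring luck entirely
```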
Obviously, you can, but our problem becomes a lot more difficult. Ever heard of Deep Blue, the first computer to defeat a reigning World Chess Champion in a match? It faced a similar problem: a game with so many possible combinations that plotting out the full game was an impossibility, especially under regulation time limits. Deep Blue had to somehow evaluate the results of decisions without knowing anything about how the game might end. In fact, this is what good chess players really do (whether they realize it or not). The results of every decision get churned through the machinations of one's mind to produce an estimate of whether that decision is any good.
Deep Blue did the same, albeit more precisely. It assigned every game state a value based on certain known factors -- the number of pieces each player had remaining (material), the overall number of moves available to each player (mobility), etc. -- and built a great big chart of each player's decisions going back and forth some predetermined number of turns (since, again, you can't look through the whole game). At the end, all the conceivable game states resulting from that number of back-and-forth moves are assigned a numeric value, and an algorithm (the fundamental version of which is known as "Minimax", if you're interested in researching this) sorts through the chart to determine the theoretically optimal move -- the move that should put Deep Blue in the best position to go on and win.
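Here's a bare-bones sketch of that idea, depth-limited search plus a heuristic evaluation, in Python. Everything here (the callback names, the structure) is my own illustration of the concept, not Deep Blue's actual code, and real engines add refinements like alpha-beta pruning on top of this.

```python
def minimax(state, depth, maximizing, evaluate, legal_moves, apply_move, game_over):
    """Depth-limited minimax sketch. `evaluate`, `legal_moves`, `apply_move`,
    and `game_over` are game-specific functions you'd have to supply."""
    moves = legal_moves(state)
    if depth == 0 or game_over(state) or not moves:
        return evaluate(state)   # heuristic value: material, mobility, and so on
    child_values = [
        minimax(apply_move(state, move), depth - 1, not maximizing,
                evaluate, legal_moves, apply_move, game_over)
        for move in moves
    ]
    # The maximizing player picks the best child; the opponent picks the worst for us.
    return max(child_values) if maximizing else min(child_values)
```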
If you, say, wanted to write an AI which could play Pokemon competitively, you might go about it in a similar fashion to coding a chess computer (BIG grain of salt here, especially since Pokemon is a simultaneous game whereas Chess is not, but a better explanation would cost you the rest of your evening in all likelihood): make a chart of each player's decisions over some number of turns, and evaluate each resulting game state based on how much of an advantage either player has with regard to winning. When you have a random factor, just average the values of the possible results weighted by the probability of each one (this prevents our AI from, say, choosing a line that has a 0.1% chance to win and a 99.9% chance to lose over the option of playing the game out further for a better chance).
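As a very rough illustration (with heavy hand-waving around the simultaneous-move problem), the Pokemon version might look something like this. Every callback here is a placeholder I'm assuming, and the pessimistic max-min at the end is a simplification; a proper treatment would use mixed strategies like the RPS matrix earlier.

```python
def plan_value(state, depth, options, resolve_turn, evaluate):
    """Sketch: enumerate both players' options for a few turns, average over each
    turn's random outcomes, and score the resulting positions with a heuristic."""
    if depth == 0:
        return evaluate(state)   # heuristic: remaining HP, hazards, boosts, and so on
    best = float("-inf")
    for mine in options(state, "me"):
        worst = float("inf")
        for theirs in options(state, "opponent"):
            # resolve_turn yields (probability, next_state) pairs covering misses,
            # crits, damage rolls, secondary effects...
            avg = sum(p * plan_value(next_state, depth - 1, options, resolve_turn, evaluate)
                      for p, next_state in resolve_turn(state, mine, theirs))
            worst = min(worst, avg)   # assume the opponent finds their best reply
        best = max(best, worst)
    return best
```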
Just as Deep Blue works like a more mechanical version of a good Chess player, our Pokemon AI would end up working like a good Pokemon player. Whether they realize it or not, good players go through the same process: consider each option and the corresponding options of your opponent, consider random effects, and weigh the potential payoffs. And so, finally, we can say that those are the most important factors in being the best Pokemon player you can be:
- Consider each option and the corresponding options of your opponent. One of the biggest mistakes new players make is failing to consider all of their options. Oftentimes, even experienced players will overlook something: some move, switch, or some complicated play over several turns. This is also where metagaming (team building) comes into play; the best team is the one that supplies you with the most viable options at any point. Stronger, more versatile Pokemon allow for more and better plays. Additionally, this is why double battles (VGC) are so different from single battles (Smogon OU): a tremendous increase in the number of options, putting pressure on players to train themselves to process the information even more quickly.
- Consider random effects. This is the "luck management" that has been debated a bit in this thread. Yes, it is a demonstrably real concept. It does not rely on talent or intuition (for simpler cases you can always find a provably optimal play), but it's still real. Like the rest of the decision-making process, the difficulty lies in evaluating all the possibilities quickly and precisely, a skill that develops over time. You should never forget to consider secondary effects, critical hits, paralysis, etc. when deciding which move is best. Going back to the weighted-averages concept, this also influences team building: consistent options are generally superior unless the risky option has a sufficiently large reward to make up for getting watered down by probability (see the small sketch after this list).
- Weigh the potential payoffs. This is perhaps the most critical and most overlooked aspect of competitive Pokemon battling, probably because of the work involved in even realizing it's there (see the rest of this post). Once again, most of the time you will not have the capacity to look all the way to the end of the game. Even the best players rarely look more than 2-3 turns ahead. You have to develop the skill of quickly assessing which of your options are viable, working out the results of those options against all of your opponent's expected viable responses, looking at the resulting position, and judging whether it's favorable to you. Everyone has a different way of doing this, but to me the most universal concept is "momentum". It's a term that gets thrown around a lot, but generally "momentum" refers to either player's capacity, from a given position, to eventually take knockouts. Getting a setup sweeper established provides a colossal amount of momentum, but is typically costly to achieve. Every time you take a knockout, you actually lose momentum (!) because your opponent gets a free switch. Next time you choose a move, don't just ask whether that move will immediately get you a knockout. Ask: am I going to be in a position to take knockouts if I do this? What about my opponent? How can I prevent them from gaining momentum?
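Here's the small sketch promised in the "consider random effects" bullet: a weighted-average comparison of a consistent option versus a risky one. The numbers are invented purely for illustration and don't correspond to any real moves or damage calcs.

```python
def expected_value(outcomes):
    """Expected payoff of an option, given (probability, payoff) pairs."""
    return sum(p * v for p, v in outcomes)

# Invented numbers: payoffs on the same -1..1 scale used for RPS above.
consistent_play = [(1.00, 0.30)]                  # always a solid, modest gain
risky_play      = [(0.70, 0.55), (0.30, -0.40)]   # big gain if it works, real cost if it fails

print(expected_value(consistent_play))  # 0.30
print(expected_value(risky_play))       # 0.265 -- the flashier option averages out worse here
```

Bump the risky payoff up to, say, 0.80 and it comes out ahead; that's the "sufficiently large reward" caveat from the bullet above.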
That's all. I hope this clarified something for someone, as this took at least an hour to write. Congratulations if you read the whole thing!