In order to answer that question I'm going to have to delve a little into how normal maps work and what the NVIDIA plugin does when it produces its normal maps. When a normal map is applied to a texture in a game engine, it tells the engine what direction light hitting each point of the surface should bounce off in. The red channel stores the direction the surface is pointing in the x dimension, the green channel the direction in the y dimension, and the blue channel the direction in the z dimension. Each pixel can therefore reflect light in a different direction than each adjoining pixel, based on the normal map values. What this means in practice is that the only perfect way to produce a normal map is to know the 3D shape that your 2D texture image is based on.
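To make that channel layout concrete, here's a rough Python sketch (the function name and example values are just my illustration, not anything a real engine ships) of how one normal-map pixel decodes back into a direction vector:

```python
import numpy as np

def decode_normal(rgb):
    """Map an 8-bit RGB normal-map pixel back to a unit vector.

    Each channel stores one component of the surface normal,
    remapped from the [-1, 1] range into [0, 255].
    """
    n = np.asarray(rgb, dtype=np.float64) / 255.0 * 2.0 - 1.0
    return n / np.linalg.norm(n)

# The typical "flat" normal-map color (128, 128, 255) decodes to a
# normal pointing straight out of the surface along +z:
print(decode_normal((128, 128, 255)))  # ~[0.0, 0.0, 1.0]
```

That's also why untouched areas of a normal map are that familiar light blue: straight-up normals put the blue channel at full strength.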
The NVIDIA plugin does not do this; what it does is some fancy guesswork. The plugin reads the surface of the texture you provide, looks for dark spots and light spots, and then guesses, based on your dark/light distribution, what the height map and x/y maps should look like for the texture. For many textures this isn't bad at all; it largely does a very good job. But for some, like the foliage rockface texture, which has lots of light variation from the different shades of rock and the lights and darks cast by the plants on the face, it ends up thinking there are a lot more cracks than there should be and that the texture is much flatter than it is intended to look.
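As a rough illustration of that guesswork, here's a Python/numpy sketch of the basic idea: treat the image's brightness as if it were a height field, take its gradients, and turn those into normals. The function name and the `strength` knob are mine, and the real plugin uses fancier filters, but this is the core trick, and you can see why a plant shadow gets mistaken for a crack:

```python
import numpy as np

def normal_map_from_luminance(gray, strength=2.0):
    """Rough approximation of a luminance-based generator: treat
    brightness as height, take x/y gradients, and pack the resulting
    normals into RGB. `gray` is a 2D float array in [0, 1];
    `strength` scales how steep the guessed bumps come out."""
    # Finite-difference gradients of the guessed height field.
    dy, dx = np.gradient(gray)
    # A surface z = h(x, y) has a normal proportional to
    # (-dh/dx, -dh/dy, 1).
    nx = -dx * strength
    ny = -dy * strength
    nz = np.ones_like(gray)
    length = np.sqrt(nx**2 + ny**2 + nz**2)
    n = np.stack([nx, ny, nz], axis=-1) / length[..., None]
    # Remap [-1, 1] -> [0, 255] for storage as an RGB image.
    return ((n + 1.0) * 0.5 * 255.0).astype(np.uint8)
```

Notice the code has no idea whether a dark pixel is a crack, a shadow, or just a darker shade of rock; it treats them all as dips in the surface.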
There are various methods of manually creating a normal map. Hand drawing one, while possible, will rarely produce good results. If you are making your textures off a photo reference, using this technique can be incredibly useful. However, in the case we're dealing with here, what I intend to do is make high-poly models that represent the way each texture is ideally supposed to look, and then use a normal map generator that digitally applies the process shown in that tutorial to the model and produces the results automatically. Some normal maps will see a massive improvement from this process; others, like sands and flatter surfaces, will most likely look best when normal mapped through the NVIDIA plugin.
To clarify, the NVIDIA plugin isn't the wrong way to do it; it's just a method that can be improved upon in some cases. For a perfect example of the plugin's limitations, look at ficwill's post with the large images halfway down the page here. The normal map the NVIDIA plugin produced for the McDonald's logo hasn't estimated the correct shape of the logo, and will make it look like a flat metal plate rather than pyramided the way it is on McDonald's signs. The only way to improve that would be to create a model of the logo and make a normal map from that model; in his next post after mine in that thread, ficwill says he doesn't want to do that.
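To show what knowing the geometry buys you, here's a toy Python sketch that builds the "pyramided" normals directly from a square pyramid's height field. It's only a stand-in for real model baking (which you'd do in a modeling tool against the actual logo mesh), but the point carries over: once the shape is known, the normals fall out exactly instead of being guessed from light and dark pixels:

```python
import numpy as np

def pyramid_normal_map(size=256, height=0.5):
    """Toy stand-in for baking from known geometry: build the height
    field of a square pyramid directly, then convert its *true*
    shape to normals. No guessing from brightness involved."""
    # Coordinates in [-1, 1]; the pyramid apex sits at the center.
    xs = np.linspace(-1.0, 1.0, size)
    x, y = np.meshgrid(xs, xs)
    h = height * np.clip(1.0 - np.maximum(np.abs(x), np.abs(y)), 0.0, 1.0)
    # Gradients of the *actual* geometry, not of a luminance guess.
    dy, dx = np.gradient(h, xs, xs)
    nx, ny, nz = -dx, -dy, np.ones_like(h)
    length = np.sqrt(nx**2 + ny**2 + nz**2)
    n = np.stack([nx, ny, nz], axis=-1) / length[..., None]
    return ((n + 1.0) * 0.5 * 255.0).astype(np.uint8)
```

Feed the plugin a flat yellow-on-red logo image and it has nothing like this to work from, which is exactly why it falls back to a flat metal-plate look.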
[edit]
Fun normal mapping tutorial I found