What is a GAN?
For the development of GANs, a feasible strategy is to first capture the image and video market and then expand to other fields. For example, simulated data sets could be used in HPC (high-performance computing) applications.
But how infrastructure and software will develop in step to support more applications is still unknown. Even so, the role and influence of GANs is striking: they are already capable of complex professional work, and they lay the groundwork for the next stage of AI.
People unfamiliar with them may ask: with so many mature machine learning (ML) methods already available, why put so much effort into GANs?
In fact, GANs achieve results beyond simple recognition and classification. They generate output based on references or samples, and the results can be extraordinary.
Functionally, GANs are similar to other convolutional neural networks: the core computation of the discriminator resembles a basic image classifier, while the generator resembles a convolutional network run to produce content.
A GAN is composed of two deep learning networks: a generator network and a discriminator network. Both are existing concepts in ML, but the way they work together is unique to GANs.
In an image task, the generator takes input data and attempts to turn it into an image. The synthesized image is passed to the discriminator, which decides whether the image is "real" or "fake".
The generator learns its weaknesses from the discriminator's feedback, and the two improve through this mutual game. But this setup makes the computation required for training more complicated and raises new difficulties.
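To make this interplay concrete, here is a minimal, self-contained sketch of one GAN training step in PyTorch. The toy network shapes, learning rates, and the train_step helper are illustrative assumptions for this article, not any particular production model.

```python
import torch
import torch.nn as nn

# Illustrative toy networks: a generator that maps noise to a flat "image",
# and a discriminator that scores an image as real (1) or fake (0).
latent_dim, image_dim = 100, 28 * 28
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, image_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images):
    batch = real_images.size(0)
    noise = torch.randn(batch, latent_dim)

    # 1) Discriminator step: learn to tell real images from generated ones.
    fake_images = G(noise).detach()          # detach so only D is updated here
    d_loss = bce(D(real_images), torch.ones(batch, 1)) + \
             bce(D(fake_images), torch.zeros(batch, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # 2) Generator step: use D's feedback to make fakes look "real".
    fake_images = G(noise)
    g_loss = bce(D(fake_images), torch.ones(batch, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
    return d_loss.item(), g_loss.item()

# Example: one step on a batch of random stand-in "real" images.
print(train_step(torch.randn(16, image_dim)))
```

Real image GANs use convolutional generators and discriminators, as described above, but the adversarial feedback loop is the same.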
The difficulties of GANs
GANs perform impressively, but they are not easy to use to their full potential. For example, mode collapse can occur, destabilizing the training and feedback process.
Another common problem is that one network in the adversarial pair overwhelms the other. For example, if the generator produces images the discriminator can no longer distinguish, the generator gets no useful feedback and cannot learn effectively.
Fortunately, such adversarial imbalance can be corrected by adjusting training in time, but the heavy hardware requirements are not so easy to deal with.
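As a rough illustration of such an adjustment (an assumption on our part, not a method prescribed by anyone quoted here), one can soften the discriminator's targets and skip its update when it is already winning. The sketch below reuses the G, D, bce, and optimizer definitions from the earlier snippet.

```python
# Two common balancing tricks, assuming the toy G, D, bce, opt_G, opt_D and
# latent_dim defined earlier: one-sided label smoothing for the discriminator,
# and skipping D's update when it is already too strong.
REAL_LABEL = 0.9          # smoothed "real" target instead of 1.0
D_LOSS_FLOOR = 0.3        # if D's loss falls below this, let G catch up

def balanced_step(real_images):
    batch = real_images.size(0)
    noise = torch.randn(batch, latent_dim)

    fake_images = G(noise).detach()
    d_loss = bce(D(real_images), torch.full((batch, 1), REAL_LABEL)) + \
             bce(D(fake_images), torch.zeros(batch, 1))
    if d_loss.item() > D_LOSS_FLOOR:   # only update D while it still struggles
        opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    fake_images = G(noise)
    g_loss = bce(D(fake_images), torch.ones(batch, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
    return d_loss.item(), g_loss.item()
```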
Training even a simple neural network takes real computing power, and GANs put still more pressure on the system, especially on memory.
This kind of work is hard to do on a CPU-only machine, and once GPUs are involved, the limits of real-world resources quickly become apparent.
Bryan Catanzaro, vice president of applied ML at Nvidia, said, "GANs need more computing power, and the infrastructure is on the way. When you use GANs you move more data, and because these models are very large with many parameters, training requires a lot of computing power and memory."
"When we train, many GANs are limited by memory. Even if we only train one or two batch sizes, it will fill the entire GPU memory, because the models are usually very large."
A good GAN deserves a good saddle
Catanzaro added, "It helps to build a larger system for training, and splitting batches across multiple GPUs is also valuable. But that requires a powerful GPU-to-GPU interconnect, such as the NVLink on DGX-1, which we use for video GANs."
In this regard, their work on interactive game video generation demonstrates what GANs can do: the GAN provides a dynamically generated environment in near real time.
He also mentioned DGX-2: "Once it is ready, it will speed up our work."
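The simplest way to express this batch splitting in PyTorch is data parallelism; the snippet below is a generic sketch of the idea using the toy networks from earlier, not Nvidia's actual multi-GPU setup.

```python
# Data parallelism: replicate the networks on every visible GPU and split
# each batch across them; gradients are averaged automatically.
if torch.cuda.device_count() > 1:
    G_parallel = nn.DataParallel(G).cuda()
    D_parallel = nn.DataParallel(D).cuda()
    noise = torch.randn(64, latent_dim).cuda()
    scores = D_parallel(G_parallel(noise))  # the 64-sample batch is sharded across GPUs
```

Larger setups typically move to torch.nn.parallel.DistributedDataParallel, where a fast interconnect such as NVLink between GPUs matters for the gradient exchange.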
Nvidia is enthusiastic about GAN research in video synthesis, but even for them, running such large models on GPUs is no easy task.
"We care about graphics and are committed to using them to make video games. This is a great way to create content. By training real-world videos, you can easily create virtual worlds."
"But this process is also very complicated, especially for video GANs, because it not only generates the current image, but also generates a series of images associated with it. This requires better memory and computational performance."
For example, in some medical applications of GANs, people have pointed out that these pipelines need additional learning components and discriminator feedback on top of the adversarial networks, which ultimately raises the demands on infrastructure performance.
Drug-discovery startup Insilico Medicine is a leader in this space. They use high-performance GPU clusters to fit the models in their system. They have had some success, but to go further they need more computing power, more memory, and better memory bandwidth.
The future of GANs
"Gan s of any size can be used in areas other than image and video generation, but hardware and software constraints need to be addressed before widespread adoption, which is still too early for the moment," Catanzaro said.
"Some people try to use GANs in other places, such as text and audio applications, but the results are not as good as images and videos."
This also shows how hard it is to know what will work before trying it.
"For now, GANs has achieved great success in the field of vision, which is why it has the upper hand in medical imaging," Catanzaro added.
Not surprisingly, more companies will explore game or content generation, and future applications of GANs will expand into other spaces, but how far off that future is, no one can predict.
GAN research seems to produce new ideas and progress every day, but without applications that run efficiently on available hardware, the effort may go unrewarded.
However, as the history of AI shows, continuous optimization and adjustment can bring seemingly distant technology into view in the short term.
It's time to GAN
Since GPUs are the main training platform of the moment, Nvidia appears to be leading the groundbreaking wave of GANs, yet disappointingly, even with their best DGX systems this remains a challenging task.
It is not hard to predict that Nvidia, with its strengths in graphics and games, may change the rules of the game.
But having watched the GPU grow from consumer gaming hardware into the accelerator powering supercomputers, perhaps the lesson is that we should not look down on research into a technology just because, for now, it only delivers a better gaming experience.
All in all, in the new year we can expect to see GANs in more fields beyond video and image creation.
Of course, using GANs may require a sufficiently capable hardware environment. So, without further ado, go GAN! Good luck~
Reprinted from the public account Hyper-Neural HyperAI. Original address