This article is reproduced from the WeChat public account AI Technology Review. Original address

A few days ago, Josh Gordon published a post on the official TensorFlow blog explaining the symbolic API and the imperative API in detail: the advantages and limitations of the two styles, and the scenarios each is suited to. AI Technology Review's compilation follows.

One of my favorite things about TensorFlow 2.0 is that it provides multiple levels of abstraction, so you can pick the one that best fits your project. In this article, I will explain the trade-offs between two styles of building a neural network:

  • The first is symbolic: you build your model by manipulating a graph of layers;
  • The second is imperative: you build your model by subclassing.

In addition to introducing these two styles, I will share important notes on design and usability, and at the end of the article I will offer some suggestions to help you choose the right one.

 Symbolic API

The symbolic API is also known as the declarative API.

When we think about a neural network, we usually picture it as a "graph of layers", as in the figure below:

When we think about a neural network, we usually picture it as a "graph of layers" (the image shows the architecture of Inception-ResNet)

The graph can be a directed acyclic graph (DAG), as on the left, or a stack of layers, as on the right. When we build a model symbolically, we do so by describing the structure of this graph. Although that sounds technical, if you have ever used Keras you may be surprised to find that you already have experience with it. Here is a quick example of building a model symbolically; in this example, Keras' Sequential API is used.

Building a neural network symbolically with Keras' Sequential API. You can run this example at http://u6.gg/pqzA2.
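The code in the post is shown as an image; below is a minimal sketch along the same lines (the dataset, layer sizes, and hyperparameters here are illustrative, not necessarily those from the original notebook):

```python
import tensorflow as tf

# A simple stack of layers for 28x28 grayscale images (sizes are illustrative).
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# Built-in training loop: compile, then fit.
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
model.fit(x_train / 255.0, y_train, epochs=5)
```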

In the example above, we define a stack of layers and then train it with the built-in training loop (model.fit).

Building a model with Keras feels as easy as "snapping LEGO bricks together." Why? Besides matching our mental model, models built this way are easy to debug, for technical reasons introduced later: the framework can surface detailed error messages.

This figure shows the model created by the code above (drawn with plot_model, which you can reuse in the next example in this article)

TensorFlow 2.0 provides another symbolic API as well: Keras Functional. Sequential is an API for stacks of layers; Functional, as you might guess, is an API for DAGs.

Using the Functional API to build a model with multiple inputs and multiple outputs.
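The original code is shown as an image; here is a minimal sketch of the same idea under assumed inputs and shapes (the names title, features, department, and priority are made up for illustration):

```python
import tensorflow as tf
from tensorflow.keras import layers

# Two inputs: an integer sequence and a vector of tabular features.
title_input = tf.keras.Input(shape=(100,), dtype='int32', name='title')
features_input = tf.keras.Input(shape=(10,), name='features')

# Branch 1: embed and pool the sequence.
x1 = layers.Embedding(input_dim=10000, output_dim=32)(title_input)
x1 = layers.GlobalAveragePooling1D()(x1)

# Branch 2: a dense transform of the tabular features.
x2 = layers.Dense(32, activation='relu')(features_input)

# Merge the branches (this is where the graph stops being a simple stack).
merged = layers.concatenate([x1, x2])

# Two outputs: a classification head and a regression head.
class_output = layers.Dense(3, activation='softmax', name='department')(merged)
reg_output = layers.Dense(1, name='priority')(merged)

model = tf.keras.Model(inputs=[title_input, features_input],
                       outputs=[class_output, reg_output])

model.compile(optimizer='adam',
              loss={'department': 'categorical_crossentropy', 'priority': 'mse'})
```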

The Functional API is a way to build more flexible models: it can handle non-linear topologies, models with shared layers, and models with multiple inputs or outputs. Essentially, the Functional API is a set of tools for building these graphs of layers, and we are preparing several new tutorials to help you use it.

There are other symbolic APIs you may have used, too. For example, TensorFlow v1 (and Theano) offered a much lower-level API: you built a model by constructing a graph of ops, which was then compiled and executed. At times, using this API could feel like interacting directly with a compiler. For many people, the author included, it was hard to work with.

By contrast, with Keras' Functional API the level of abstraction matches the mental model: snapping layers together like LEGO bricks. It feels natural to use, and it is one of the model-building approaches we have standardized on in TensorFlow 2.0. Next, I will introduce another style of API (one you may have used already, or may want to try soon).

Imperative API

The imperative API is also known as the model subclassing API.

With an imperative API, you write your model the way you would write NumPy code. Building a model this way feels like object-oriented Python development. Here is a simple example of a subclassed model:

Building an image-captioning model with the imperative (subclassing) API (note: this example is currently being updated).
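The captioning model itself is not reproduced here; below is a much smaller sketch of the same subclassing pattern (a toy classifier, not the model from the post): layers are created in __init__ and the forward pass is written in call().

```python
import tensorflow as tf

class SimpleClassifier(tf.keras.Model):
    """A minimal subclassed model (not the captioning model from the post)."""

    def __init__(self, num_classes=10):
        super().__init__()
        # Layers are instantiated in the constructor...
        self.dense1 = tf.keras.layers.Dense(128, activation='relu')
        self.dropout = tf.keras.layers.Dropout(0.2)
        self.dense2 = tf.keras.layers.Dense(num_classes, activation='softmax')

    def call(self, inputs, training=False):
        # ...and the forward pass is written imperatively here.
        x = self.dense1(inputs)
        x = self.dropout(x, training=training)
        return self.dense2(x)

model = SimpleClassifier()
```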

From a developer's point of view, the way this works is that you extend a Model class defined by the framework, instantiate your layers, and then write the model's forward pass imperatively; the backward pass is generated automatically.

TensorFlow 2.0 supports the Keras subclassing API for building models out of the box. Like the Sequential API and the Functional API, it is one of the recommended ways to build models in TensorFlow 2.0.

Although this style is relatively new to TensorFlow, you may be surprised to learn that Chainer introduced it back in 2015 (time flies!). Since then, many frameworks have adopted a similar approach, including Gluon, PyTorch, and TensorFlow (with Keras Subclassing). Strikingly, code written in this style looks very similar across frameworks, and researchers may find it hard to tell which framework a given snippet belongs to!

This style gives developers a great deal of flexibility, but it comes with non-trivial usability and maintenance costs. We will discuss these in more detail shortly.

Training Loop

Whether your model is defined with the Sequential API, the Functional API, or by subclassing, there are two ways to train it:

  • One is to use a built-in training loop and loss function (as in the first example, with model.fit and model.compile);
  • The other is to write a custom training loop (for example, when you want your own gradient-clipping code) or a custom loss function, which you can do easily, as shown below:
Example of custom training loop and loss function for Pix2Pix
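The Pix2Pix code is fairly long; the sketch below shows only the general pattern with tf.GradientTape, including the kind of gradient clipping mentioned above (the model, optimizer, and loss here are placeholders, not those of Pix2Pix):

```python
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam()
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

def train_step(model, x_batch, y_batch):
    # Record the forward pass so gradients can be computed.
    with tf.GradientTape() as tape:
        predictions = model(x_batch, training=True)
        loss = loss_fn(y_batch, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    # The kind of customization a built-in loop hides:
    # clip the gradients before applying them.
    gradients = [tf.clip_by_norm(g, 1.0) for g in gradients]
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    return loss

# Example usage with a toy model and one random batch.
model = tf.keras.Sequential(
    [tf.keras.layers.Dense(10, activation='softmax', input_shape=(20,))])
x = tf.random.normal((32, 20))
y = tf.random.uniform((32,), maxval=10, dtype=tf.int32)
print(train_step(model, x, y).numpy())
```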

Making both of these approaches available is important, and they are handy for reducing code complexity and maintenance costs. In general, if the extra complexity buys you something, take it on and use it; if you don't need it, stick with the built-in methods and spend your time on your research or project instead.

Now that we are familiar with both the symbolic and the imperative API, let's look at the strengths and weaknesses of each.

Advantages and limitations of the symbolic API

Advantages

A model built with the symbolic API is a graph-like data structure, which means it can be inspected and summarized.

  • You can plot the model as an image (using keras.utils.plot_model), or simply call model.summary() to see a description of its layers, weights, and shapes, as in the sketch below.
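For instance (a stand-in model is defined inline; plot_model additionally requires pydot and graphviz to be installed):

```python
import tensorflow as tf

# Any symbolic model will do; a tiny Sequential model as a stand-in.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation='relu', input_shape=(8,)),
    tf.keras.layers.Dense(1),
])

# Text description of layers, output shapes, and parameter counts.
model.summary()

# Render the graph of layers to an image file.
tf.keras.utils.plot_model(model, to_file='model.png', show_shapes=True)
```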

Similarly, as layers are plugged together, library developers can run extensive layer-compatibility checks (while the model is being defined, before it is ever executed).

  • This is similar to type checking in a compiler, and it greatly reduces developer errors.
  • Most debugging happens when the model is defined, not during execution. You are guaranteed that any model that compiles will run, which speeds up iteration and makes debugging easier.

Symbolic models provide a consistent API, which makes them easy to reuse and share. For example, in transfer learning you can access the activations of intermediate layers and build new models from them, like this:
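A sketch of the idea (not the exact snippet from the post; VGG19 and the layer name block4_pool are just one possible choice):

```python
import tensorflow as tf

# Load a pretrained symbolic model (VGG19 is used here only as an example).
base = tf.keras.applications.VGG19(weights='imagenet', include_top=False)

# Because the model is a data structure, intermediate layers are accessible
# by name and can become the outputs of a brand-new model.
feature_extractor = tf.keras.Model(
    inputs=base.input,
    outputs=base.get_layer('block4_pool').output)
```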

A symbolic model is defined by a data structure, which makes it natural to copy or clone.

  • For example, the Sequential and Functional APIs give you model.get_config(), model.to_json(), model.save(), and clone_model(model), and the data structure alone is enough to recreate the same model (without access to the original code that defined and trained it), as sketched below.
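A quick sketch of what those calls look like (using a tiny stand-in model; the file name is arbitrary):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation='relu', input_shape=(8,)),
    tf.keras.layers.Dense(1),
])

config = model.get_config()                 # Python dict describing the architecture
json_string = model.to_json()               # the same idea, serialized to JSON
clone = tf.keras.models.clone_model(model)  # fresh copy with newly initialized weights
model.save('my_model.h5')                   # architecture + weights + optimizer state

# The data structure alone is enough to rebuild the model,
# without the original code that defined it.
rebuilt = tf.keras.models.model_from_json(json_string)
```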

While a well-designed API should match our mental model of a neural network, it is just as important that it match the mental model we have as programmers, and for most of us that is an imperative programming style. In a symbolic API, you build your graph by manipulating symbolic tensors (tensors that do not yet hold any values). Keras' Sequential and Functional APIs "feel" imperative: they are designed so that many developers do not even realize they are defining models symbolically.

Limitations

The current generation of symbolic APIs is well suited to building models that are directed acyclic graphs, which covers the majority of practical applications. There are, however, some special cases that do not fit this simple abstraction, for example dynamic networks such as Tree-RNNs and recursive neural networks.

That is why TensorFlow also offers an imperative style of model-building API (the subclassing API mentioned above). You will use all the same layers, initializers, and optimizers whichever style you choose, and the two styles are fully interoperable, so you can mix and match them (for example, by nesting one model inside another). You can use a symbolic model as a layer in a subclassed model, and vice versa, as in the sketch below.
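A minimal sketch of that interoperability (the names and sizes are made up): a Sequential model used as an ordinary layer inside a subclassed model.

```python
import tensorflow as tf

# A symbolic (Sequential) block...
encoder = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(32, activation='relu'),
])

# ...used as an ordinary layer inside a subclassed model.
class Wrapper(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.encoder = encoder
        self.head = tf.keras.layers.Dense(10, activation='softmax')

    def call(self, inputs):
        return self.head(self.encoder(inputs))

model = Wrapper()
```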

Advantages and limitations of imperative APIs

Advantages

Because the forward pass is written imperatively, it is easy to swap out parts implemented by the library (say, a layer, an activation, or a loss function) for your own implementation. Programming this way also feels natural, and it is a good way to get to grips with the fundamentals of deep learning.

  • It also makes it quick to try out new ideas (the deep-learning development workflow becomes just like object-oriented Python), which is especially helpful for researchers.
  • It is also easy to use Python to specify arbitrary control flow in the model's forward pass, as in the sketch below.
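For example, here is a purely illustrative sketch of a subclassed model whose call() uses an ordinary Python loop:

```python
import tensorflow as tf

class RepeatedBlockModel(tf.keras.Model):
    """Illustrative only: plain Python control flow inside call()."""

    def __init__(self, num_blocks=3):
        super().__init__()
        self.num_blocks = num_blocks
        self.input_proj = tf.keras.layers.Dense(16, activation='relu')
        self.block = tf.keras.layers.Dense(16, activation='relu')
        self.out = tf.keras.layers.Dense(1)

    def call(self, inputs, training=False):
        x = self.input_proj(inputs)
        # A Python loop (and, if you like, ifs) directly in the forward pass.
        for _ in range(self.num_blocks):
            x = self.block(x)
        return self.out(x)
```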

The imperative API gives you maximum flexibility, but at a price. I love writing code in this style, but I want to take a moment to highlight its limitations (it is worth being aware of the trade-offs).

Limitations

With an imperative API, the model is defined by a class method. It is no longer a transparent data structure but opaque bytecode, and the flexibility gained with this style comes at the cost of usability and reusability.

Debugging happens at execution time, not when the model is defined.

  • With this style, almost no checks on inputs or inter-layer compatibility are performed up front, so much of the debugging burden shifts from the framework to the developer.

Imperative models are harder to reuse. For example, you cannot access intermediate layers or activations through a consistent API.

  • Instead, the way to extract activations is to write a new class with a new call (or forward) method. That may be fun and simple at first, but without standards it is a recipe for technical debt.

Imperative models are also harder to inspect, copy, and clone.

  • For example, model.save(), model.get_config(), and clone_model do not work for subclassed models, and model.summary() only gives you a list of layers (without any information about how they are connected, since that information is not accessible).

Technical debt in machine learning systems

Keep in mind that model building is only a small part of using machine learning in practice. Here is a description of this point that I particularly like: the model itself (the code that specifies the layers, the training loop, and so on) is the small box in the middle of a machine learning system.

Only a small fraction of a real-world machine learning system is machine learning code, shown as the small box in the middle of the figure above. Source: Hidden Technical Debt in Machine Learning Systems

Symbolically defined models have advantages in usability, debugging, and testing. For example, when teaching, I can debug a student's code right away if they are using the Sequential API; if they are using a subclassed model (in any framework), debugging takes longer (the bugs are harder to spot and come in many more varieties).

Final Thoughts

TensorFlow 2.0 directly supports both symbolic APIs and imperative APIs, so you can choose the level of abstraction (complexity) that best suits your project.

If your goal is ease of use and a low conceptual budget, and you tend to think of your model as a graph of layers, use Keras' Sequential or Functional API (like snapping LEGO bricks together) with the built-in training loop. This approach works for most problems.

If you prefer to think of your model the way an object-oriented Python/NumPy developer would, and you prioritize flexibility and hackability, Keras' Subclassing API will be a good fit for you.