" # Map ASCII chars to int64 indexes using the vocab\n",
" dataset.map(to_ids)\n",
" # Split into individual chars\n",
" .apply(tf.data.experimental.unbatch())\n",
" .unbatch()\n",
" # Form example sequences of SEQ_LENGTH +1\n",
" .batch(SEQ_LENGTH + 1, drop_remainder=True)\n",
" # Shuffle and form minibatches\n",
...
...
%% Cell type:markdown id: tags:
##### Copyright 2019 The TensorFlow Authors.
%% Cell type:code id: tags:
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
%% Cell type:markdown id: tags:
# Federated Learning for Text Generation
%% Cell type:markdown id: tags:
<table class="tfo-notebook-buttons" align="left">
  <td>
    <a target="_blank" href="https://www.tensorflow.org/federated/tutorials/federated_learning_for_text_generation"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
  </td>
  <td>
    <a target="_blank" href="https://colab.research.google.com/github/tensorflow/federated/blob/v0.9.0/docs/tutorials/federated_learning_for_text_generation.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
  </td>
  <td>
    <a target="_blank" href="https://github.com/tensorflow/federated/blob/v0.9.0/docs/tutorials/federated_learning_for_text_generation.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
  </td>
</table>
%% Cell type:markdown id: tags:
**NOTE**: This Colab has been verified to work with the [latest released version](https://github.com/tensorflow/federated#compatibility) of the `tensorflow_federated` pip package, but the TensorFlow Federated project is still in pre-release development and may not work on `master`.
This tutorial builds on the concepts in the [Federated Learning for Image Classification](federated_learning_for_image_classification.md) tutorial, and demonstrates several other useful approaches for federated learning.
In particular, we load a previously trained Keras model, and refine it using federated training on a (simulated) decentralized dataset. This is practically important for several reasons. The ability to use serialized models makes it easy to mix federated learning with other ML approaches. Further, it allows use of an increasing range of pre-trained models: training language models from scratch is rarely necessary, as numerous pre-trained models are now widely available (see, e.g., [TF Hub](https://www.tensorflow.org/hub)). Instead, it makes more sense to start from a pre-trained model and refine it using federated learning, adapting it to the particular characteristics of the decentralized data for a particular application.
For this tutorial, we start with an RNN that generates ASCII characters, and refine it via federated learning. We also show how the final weights can be fed back to the original Keras model, allowing easy evaluation and text generation using standard tools.
%% Cell type:code id: tags:
```
# NOTE: If you are running a Jupyter notebook, and installing a locally built
# pip package, you may need to edit the following to point to the '.whl' file
# on your local filesystem.
!pip install --quiet tensorflow_federated
!pip install --quiet tf-nightly
# NOTE: Jupyter requires a patch to asyncio.
!pip install --upgrade nest_asyncio
import nest_asyncio
nest_asyncio.apply()
```
%% Cell type:code id: tags:
```
from __future__ import absolute_import, division, print_function

# Imports used throughout this tutorial.
import collections

import numpy as np
import tensorflow as tf
import tensorflow_federated as tff

tf.compat.v1.enable_v2_behavior()
```
%% Cell type:markdown id: tags:
We load a model that was pre-trained following the TensorFlow tutorial
[Text generation using a RNN with eager execution](https://www.tensorflow.org/tutorials/sequences/text_generation). However,
rather than training on [The Complete Works of Shakespeare](http://www.gutenberg.org/files/100/100-0.txt), we pre-trained the model on the text of Charles Dickens's
[A Tale of Two Cities](http://www.ibiblio.org/pub/docs/books/gutenberg/9/98/98.txt).
Other than expanding the vocabulary, we didn't modify the original tutorial, so this initial model isn't state-of-the-art, but it produces reasonable predictions and is sufficient for our tutorial purposes. The final model was saved with `tf.keras.models.save_model(include_optimizer=False)`.
In this tutorial, we will use federated learning to fine-tune this model for Shakespeare, using a federated version of the data provided by TFF.
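%% Cell type:markdown id: tags:
Because the model was serialized without its optimizer, loading it back into Keras is a single call. This is a minimal sketch; `'pretrained_model.h5'` is a placeholder path, not the location used to train the original model:
%% Cell type:code id: tags:
```
# Load the uncompiled pre-trained Keras model from disk.
keras_model = tf.keras.models.load_model('pretrained_model.h5')
```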
%% Cell type:markdown id: tags:
## Generate the vocab lookup tables
%% Cell type:code id: tags:
```
# A fixed vocabulary of ASCII chars that occur in the works of Shakespeare and Dickens:
```
%% Cell type:markdown id: tags:
Sampling from the pre-trained model (with the `generate_text` helper used again at the end of this tutorial) already produces plausible Dickens-flavored text:
%% Output
What of TensorFlow Federated, you ask? Stryver, seemed, unaternight,
Fruncied eyebrows at his forgery and the rest of its contempt.
Mr. Cruncher had made his opportunity outsidere Stryver, even had some
fardens--natured impossible wor
%% Cell type:markdown id: tags:
# Load and Preprocess the Federated Shakespeare Data
The `tff.simulation.datasets` package provides a variety of datasets that are split into "clients", where each client corresponds to a dataset on a particular device that might participate in federated learning.
These datasets provide realistic non-IID data distributions that replicate in simulation the challenges of training on real decentralized data. Some of the pre-processing of this data was done using tools from the [Leaf project](https://arxiv.org/abs/1812.01097) ([GitHub](https://github.com/TalwalkarLab/leaf)).
The datasets provided by `shakespeare.load_data()` consist of a sequence of
string `Tensors`, one for each line spoken by a particular character in a
Shakespeare play. The client keys consist of the name of the play joined with
the name of the character, so for example `MUCH_ADO_ABOUT_NOTHING_OTHELLO` corresponds to the lines for the character Othello in the play *Much Ado About Nothing*. Note that in a real federated learning scenario
clients are never identified or tracked by ids, but for simulation it is useful
to work with keyed datasets.
Here, for example, we can look at some data from King Lear:
%% Cell type:code id: tags:
```
train_data, test_data = tff.simulation.datasets.shakespeare.load_data()

# Here the play is "The Tragedy of King Lear" and the character is "King".
raw_example_dataset = train_data.create_tf_dataset_for_client(
    'THE_TRAGEDY_OF_KING_LEAR_KING')
# Each entry is an OrderedDict with a single key 'snippets'
# containing one line of text.
for x in raw_example_dataset.take(2):
  print(x['snippets'])
```
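%% Cell type:markdown id: tags:
Before training, we must convert each client's raw string data into sequences of integer ids. The following sketch shows one way to do this; the constants (`SEQ_LENGTH`, `BATCH_SIZE`, `BUFFER_SIZE`) and the lookup-table construction are illustrative choices, and `vocab` is the fixed character list from the vocabulary section above:
%% Cell type:code id: tags:
```
SEQ_LENGTH = 100
BATCH_SIZE = 8
BUFFER_SIZE = 10000  # For dataset shuffling.

# Build a table mapping each character in `vocab` to its integer index.
table = tf.lookup.StaticHashTable(
    tf.lookup.KeyValueTensorInitializer(
        keys=vocab,
        values=tf.constant(list(range(len(vocab))), dtype=tf.int64)),
    default_value=0)

def to_ids(x):
  s = tf.reshape(x['snippets'], shape=[1])
  chars = tf.strings.bytes_split(s).values
  return table.lookup(chars)

def split_input_target(chunk):
  input_text = tf.map_fn(lambda x: x[:-1], chunk)
  target_text = tf.map_fn(lambda x: x[1:], chunk)
  return (input_text, target_text)

def preprocess(dataset):
  return (
      # Map ASCII chars to int64 indexes using the vocab.
      dataset.map(to_ids)
      # Split into individual chars.
      .unbatch()
      # Form example sequences of SEQ_LENGTH + 1.
      .batch(SEQ_LENGTH + 1, drop_remainder=True)
      # Shuffle and form minibatches.
      .shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)
      # Split each sequence into an (input, target) pair of length SEQ_LENGTH.
      .map(split_input_target))

example_dataset = preprocess(raw_example_dataset)
```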
%% Cell type:markdown id: tags:
## Compile the model and test on the preprocessed data
%% Cell type:markdown id: tags:
We loaded an uncompiled Keras model, but in order to run `keras_model.evaluate`, we need to compile it with a loss and metrics. We will also compile in an optimizer, which will be used as the on-device optimizer in Federated Learning.
%% Cell type:markdown id: tags:
The original tutorial didn't have char-level accuracy (the fraction
of predictions where the highest probability was put on the correct
next char). This is a useful metric, so we add it.
However, we need to define a new metric class for this because
our predictions have rank 3 (a vector of logits for each of the
`BATCH_SIZE * SEQ_LENGTH` predictions), and `SparseCategoricalAccuracy`
expects only rank 2 predictions.
%% Cell type:code id: tags:
```
class FlattenedCategoricalAccuracy(tf.keras.metrics.SparseCategoricalAccuracy):

  def update_state(self, y_true, y_pred, sample_weight=None):
    # Flatten the rank-3 predictions ([BATCH_SIZE, SEQ_LENGTH, len(vocab)])
    # into the rank-2 shape that SparseCategoricalAccuracy expects.
    y_true = tf.reshape(y_true, [-1, 1])
    y_pred = tf.reshape(y_pred, [-1, len(vocab)])
    return super(FlattenedCategoricalAccuracy, self).update_state(
        y_true, y_pred, sample_weight)
```
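%% Cell type:markdown id: tags:
With this metric defined, we can compile the model. This is a minimal sketch: the choice of SGD with `lr=0.5` is an assumption for illustration, not necessarily the optimizer used by the original tutorial.
%% Cell type:code id: tags:
```
keras_model.compile(
    optimizer=tf.keras.optimizers.SGD(lr=0.5),  # Assumed on-device optimizer.
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=[FlattenedCategoricalAccuracy()])
```
%% Cell type:markdown id: tags: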
TFF serializes all TensorFlow computations so they can potentially be run in a
non-Python environment (even though at the moment, only a simulation runtime implemented in Python is available). Even though we are running in eager mode (TF 2.0), TFF currently serializes TensorFlow computations by constructing the
necessary ops inside the context of a `with tf.Graph().as_default()` statement.
Thus, we need to provide a function that TFF can use to introduce our model into
a graph it controls. We do this as follows:
%% Cell type:code id: tags:
```
# Clone the keras_model inside `create_tff_model()`, which TFF will
# call to produce a new copy of the model inside the graph that it will serialize.
def create_tff_model():
  # TFF uses a `dummy_batch` so it knows the types and shapes
  # that your model expects.
  x = tf.constant(
      np.random.randint(1, len(vocab), size=[BATCH_SIZE, SEQ_LENGTH]))
  dummy_batch = collections.OrderedDict([('x', x), ('y', x)])
  # Clone the model and compile the clone the same way as above (the
  # optimizer and loss here mirror the assumed compile step earlier).
  keras_model_clone = tf.keras.models.clone_model(keras_model)
  keras_model_clone.compile(
      optimizer=tf.keras.optimizers.SGD(lr=0.5),
      loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
      metrics=[FlattenedCategoricalAccuracy()])
  return tff.learning.from_compiled_keras_model(keras_model_clone, dummy_batch)
```
%% Cell type:markdown id: tags:
Now we are ready to construct a Federated Averaging iterative process, which we will use to improve the model (for details on the Federated Averaging algorithm, see the paper [Communication-Efficient Learning of Deep Networks from Decentralized Data](https://arxiv.org/abs/1602.05629)).
We use a compiled Keras model to perform standard (non-federated) evaluation after each round of federated training. This is useful for research purposes when doing simulated federated learning and there is a standard test dataset.
In a realistic production setting this same technique might be used to take models trained with federated learning and evaluate them on a centralized benchmark dataset for testing or quality assurance purposes.
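%% Cell type:markdown id: tags:
A minimal sketch of such an evaluation helper, assuming the preprocessed `example_dataset` from earlier serves as the held-out benchmark:
%% Cell type:code id: tags:
```
def keras_evaluate(state, round_num):
  # Push the current server model weights back into the compiled Keras
  # model, then run standard (non-federated) Keras evaluation.
  tff.learning.assign_weights_to_keras_model(keras_model, state.model)
  print('Evaluating before training round', round_num)
  keras_model.evaluate(example_dataset, steps=2)
```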
%% Cell type:code id: tags:
```
# This command builds all the TensorFlow graphs and serializes them:
fed_avg = tff.learning.build_federated_averaging_process(model_fn=create_tff_model)
```
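%% Cell type:markdown id: tags:
A minimal training loop might then look like the following; the number of rounds and the single-client `train_datasets` list are illustrative choices:
%% Cell type:code id: tags:
```
NUM_ROUNDS = 3
# For simplicity, train on one client's data in every round.
train_datasets = [example_dataset]

state = fed_avg.initialize()
for round_num in range(NUM_ROUNDS):
  keras_evaluate(state, round_num)
  state, metrics = fed_avg.next(state, train_datasets)
  print('Round {} training metrics: {}'.format(round_num, metrics))
```
%% Cell type:markdown id: tags: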
With these default settings, we haven't done enough training to make a big difference, but if you train longer on more Shakespeare data, you should see a difference in the style of the text generated with the updated model:
%% Cell type:code id: tags:
```
keras_model_batch1.set_weights([v.numpy() for v in keras_model.weights])
# Text generation requires batch_size=1
print(generate_text(keras_model_batch1, 'What of TensorFlow Federated, you ask? '))
```
%% Output
What of TensorFlow Federated, you ask? Says it with the knitting.
g of a spy, kept from such hopes to a passenger, whom, one. Piploing And my will be a moment!" said Monsieur Defarge.
There were more than twice, went them, and clieding
%% Cell type:markdown id: tags:
# Suggested extensions
This tutorial is just the first step! Here are some ideas for how you might try extending this notebook:
* Write a more realistic training loop where you sample clients to train on randomly (see the sketch after this list).
* Use "`.repeat(NUM_EPOCHS)`" on the client datasets to try multiple epochs of local training (e.g., as in [McMahan et. al.](https://arxiv.org/abs/1602.05629)). See also [Federated Learning for Image Classification](federated_learning_for_image_classification.md) which does this.
* Change the `compile()` command to experiment with using different optimization algorithms on the client.
* Use the `server_optimizer` argument of `build_federated_averaging_process` to try different algorithms for applying the model updates on the server.
* Use the `client_weight_fn` argument of `build_federated_averaging_process` to try different weightings of the clients. The default weighs client updates by the number of examples on the client, but you can use, e.g., `client_weight_fn=lambda _: tf.constant(1.0)`.
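%% Cell type:markdown id: tags:
As a starting point for the first suggestion, here is a minimal sketch of random client sampling; `NUM_CLIENTS_PER_ROUND` and the reuse of `preprocess` and `train_data.client_ids` are illustrative assumptions:
%% Cell type:code id: tags:
```
import random

NUM_CLIENTS_PER_ROUND = 10

def sample_client_datasets():
  # Pick a random subset of clients and preprocess their data.
  sampled_ids = random.sample(train_data.client_ids, NUM_CLIENTS_PER_ROUND)
  return [
      preprocess(train_data.create_tf_dataset_for_client(client_id))
      for client_id in sampled_ids
  ]

state, metrics = fed_avg.next(state, sample_client_datasets())
```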