"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "d5DZ2c-xfa9m"
},
"source": [
"# TFF for Federated Learning Research: Model and Update Compression\n",
"\n",
"**NOTE**: This colab has been verified to work with the [latest released version](https://github.com/tensorflow/federated#compatibility) of the `tensorflow_federated` pip package, but the Tensorflow Federated project is still in pre-release development and may not work on `master`.\n",
"\n",
"In this tutorial, we use the [EMNIST](https://www.tensorflow.org/federated/api_docs/python/tff/simulation/datasets/emnist) dataset to demonstrate how to enable lossy compression algorithms to reduce communication cost in the Federated Averaging algorithm using the `tff.learning.build_federated_averaging_process` API and the [tensor_encoding](http://jakubkonecny.com/files/tensor_encoding.pdf) API. For more details on the Federated Averaging algorithm, see the paper [Communication-Efficient Learning of Deep Networks from Decentralized Data](https://arxiv.org/abs/1602.05629)."
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "qrPTFv7ngz-P"
},
"source": [
"## Before we start\n",
"\n",
"Before we start, please run the following to make sure that your environment is\n",
"correctly setup. If you don't see a greeting, please refer to the\n",
"[Installation](../install.md) guide for instructions."
"In this section we load and preprocess the EMNIST dataset included in TFF. Please check out [Federated Learning for Image Classification](https://www.tensorflow.org/federated/tutorials/federated_learning_for_image_classification#preparing_the_input_data) tutorial for more details about EMNIST dataset.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "oTP2Dndbl2Oe"
},
"outputs": [],
"source": [
"# This value only applies to EMNIST dataset, consider choosing appropriate\n",
"Here we define a keras model based on the orginial FedAvg CNN, and then wrap the keras model in an instance of [tff.learning.Model](https://www.tensorflow.org/federated/api_docs/python/tff/learning/Model) so that it can be consumed by TFF.\n",
"\n",
"Note that we'll need a **function** which produces a model instead of simply a model directly. In addition, the function **cannot** just capture a pre-constructed model, it must create the model in the context that it is called. The reason is that TFF is designed to go to devices, and needs control over when resources are constructed so that they can be captured and packaged up."
"## Training the model and outputting training metrics\n",
"\n",
"Now we are ready to construct a Federated Averaging algorithm and train the defined model on EMNIST dataset.\n",
"\n",
"First we need to build a Federated Averaging algorithm using the [tff.learning.build_federated_averaging_process](https://www.tensorflow.org/federated/api_docs/python/tff/learning/build_federated_averaging_process) API."
"Now let's run the Federated Averaging algorithm. The execution of a Federated Learning algorithm from the perspective of TFF looks like this:\n",
"\n",
"1. Initialize the algorithm and get the inital server state. The server state contains necessary information to perform the algorithm. Recall, since TFF is functional, that this state includes both any optimizer state the algorithm uses (e.g. momentum terms) as well as the model parameters themselves--these will be passed as arguments and returned as results from TFF computations.\n",
"2. Execute the algorithm round by round. In each round, a new server state will be returned as the result of each client training the model on its data. Typically in one round:\n",
" 1. Server broadcast the model to all the participating clients.\n",
" 2. Each client perform work based on the model and its own data.\n",
" 3. Server aggregates all the model to produce a sever state which contains a new model.\n",
"\n",
"For more details, please see [Custom Federated Algorithms, Part 2: Implementing Federated Averaging](https://www.tensorflow.org/federated/tutorials/custom_federated_algorithms_2) tutorial.\n",
"\n",
"Training metrics are written to the Tensorboard directory for displaying after the training."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"colab": {},
"colab_type": "code",
"id": "t5n9fXsGOO6-"
},
"outputs": [],
"source": [
"#@title Load utility functions\n",
"\n",
"def format_size(size):\n",
" \"\"\"A helper function for creating a human-readable size.\"\"\"\n",
"Start TensorBoard with the root log directory specified above to display the training metrics. It can take a few seconds for the data to load. Except for Loss and Accuracy, we also output the amount of broadcasted and aggregated data. Broadcasted data refers to tensors the server pushes to each client while aggregated data refers to tensors each client returns to the server."
"## Build a custom broadcast and aggregate function\n",
"\n",
"Now let's implement function to use lossy compression algorithms on broadcasted data and aggregated data using the [tensor_encoding](http://jakubkonecny.com/files/tensor_encoding.pdf) API.\n",
"\n",
"First, we define two functions:\n",
"* `broadcast_encoder_fn` which creates an instance of [te.core.SimpleEncoder](https://github.com/tensorflow/model-optimization/blob/ee53c9a9ae2e18ac1e443842b0b96229f0afb6d6/tensorflow_model_optimization/python/core/internal/tensor_encoding/core/simple_encoder.py#L30) to encode tensors or variables in server to client communication (Broadcast data).\n",
"* `mean_encoder_fn` which creates an instance of [te.core.GatherEncoder](https://github.com/tensorflow/model-optimization/blob/ee53c9a9ae2e18ac1e443842b0b96229f0afb6d6/tensorflow_model_optimization/python/core/internal/tensor_encoding/core/gather_encoder.py#L30) to encode tensors or variables in client to server communicaiton (Aggregation data).\n",
"\n",
"It is important to note that we do not apply a compression method to the entire model at once. Instead, we decide how (and whether) to compress each variable of the model independently. The reason is that generally, small variables such as biases are more sensitive to inaccuracy, and being relatively small, the potential communication savings are also relatively small. Hence we do not compress small variables by default. In this example, we apply uniform quantization to 8 bits (256 buckets) to every variable with more than 10000 elements, and only apply identity to other variables."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "lkRHkZTTnKn2"
},
"outputs": [],
"source": [
"def broadcast_encoder_fn(value):\n",
" \"\"\"Function for building encoded broadcast.\"\"\"\n",
"TFF provides APIs to convert the encoder function into a format that `tff.learning.build_federated_averaging_process` API can consume. By using the `tff.learning.framework.build_encoded_broadcast_from_model` and `tff.learning.framework.build_encoded_mean_from_model`, we can create two functions that can be passed into `broadcast_process` and `aggregation_process` agruments of `tff.learning.build_federated_averaging_process` to create a Federated Averaging algorithms with a lossy compression algorithm."
"Start TensorBoard again to compare the training metrics between two runs.\n",
"\n",
"As you can see in Tensorboard, there is a significant reduction between the `orginial` and `compression` curves in the `broadcasted_bits` and `aggregated_bits` plots while in the `loss` and `sparse_categorical_accuracy` plot the two curves are pretty similiar.\n",
"\n",
"In conclusion, we implemented a compression algorithm that can achieve similar performance as the orignial Federated Averaging algorithm while the comminucation cost is significently reduced."
"Potentially valuable open research questions include: non-uniform quantization, lossless compression such as huffman coding, and mechanisms for adapting compression based on the information from previous training rounds.\n",
"\n",
"Recommended reading materials:\n",
"* [Expanding the Reach of Federated Learning by Reducing Client Resource Requirements](https://research.google/pubs/pub47774/)\n",
"* [Federated Learning: Strategies for Improving Communication Efficiency](https://research.google/pubs/pub45648/)\n",
"* _Section 3.5 Communication and Compression_ in [Advanced and Open Problems in Federated Learning](https://arxiv.org/abs/1912.04977)"
]
}
],
"metadata": {
"colab": {
"collapsed_sections": [],
"last_runtime": {
"build_target": "",
"kind": "local"
},
"name": "TFF for Federated Learning Research: Model and Update Compression",
"provenance": [],
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
%% Cell type:markdown id: tags:
##### Copyright 2020 The TensorFlow Authors.
%% Cell type:code id: tags:
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
%% Cell type:markdown id: tags:
# TFF for Federated Learning Research: Model and Update Compression
**NOTE**: This colab has been verified to work with the [latest released version](https://github.com/tensorflow/federated#compatibility) of the `tensorflow_federated` pip package, but the TensorFlow Federated project is still in pre-release development and may not work on `master`.
In this tutorial, we use the [EMNIST](https://www.tensorflow.org/federated/api_docs/python/tff/simulation/datasets/emnist) dataset to demonstrate how to enable lossy compression algorithms to reduce communication cost in the Federated Averaging algorithm using the `tff.learning.build_federated_averaging_process` API and the [tensor_encoding](http://jakubkonecny.com/files/tensor_encoding.pdf) API. For more details on the Federated Averaging algorithm, see the paper [Communication-Efficient Learning of Deep Networks from Decentralized Data](https://arxiv.org/abs/1602.05629).
%% Cell type:markdown id: tags:
## Before we start
Before we start, please run the following to make sure that your environment is
correctly set up. If you don't see a greeting, please refer to the
[Installation](../install.md) guide for instructions.
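If the packages are not yet installed in your environment, an installation step along the following lines is typically needed first; the package names are as published on PyPI, and you may want to pin versions to match the [compatibility table](https://github.com/tensorflow/federated#compatibility).
```
!pip install --quiet --upgrade tensorflow-federated
!pip install --quiet --upgrade tensorflow-model-optimization
```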
%% Cell type:code id: tags:
```
import numpy as np
import tensorflow as tf
import tensorflow_federated as tff

from tensorflow_model_optimization.python.core.internal import tensor_encoding as te
```
%% Cell type:markdown id: tags:
Verify that TFF is working.
%% Cell type:code id: tags:
```
@tff.federated_computation
def hello_world():
  return 'Hello, World!'

hello_world()
```
%% Output
b'Hello, World!'
%% Cell type:markdown id: tags:
## Preparing the input data
In this section we load and preprocess the EMNIST dataset included in TFF. Please check out the [Federated Learning for Image Classification](https://www.tensorflow.org/federated/tutorials/federated_learning_for_image_classification#preparing_the_input_data) tutorial for more details about the EMNIST dataset.
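As a rough sketch of this step (not the tutorial's exact preprocessing), the federated EMNIST data can be loaded with `tff.simulation.datasets.emnist.load_data` and each per-client dataset turned into batched `(pixels, label)` pairs; the shuffle buffer and batch size below are illustrative choices, and the `tf`/`tff` imports from the setup cell above are assumed.
```
# Load the federated EMNIST data bundled with TFF (one tf.data.Dataset per client).
emnist_train, emnist_test = tff.simulation.datasets.emnist.load_data()

def preprocess(dataset):
  """Batches a client dataset and maps it into (pixels, label) pairs."""

  def map_fn(element):
    # Each element has a 28x28 'pixels' image and an integer 'label'.
    return (tf.expand_dims(element['pixels'], axis=-1), element['label'])

  # Shuffle buffer and batch size here are illustrative, not prescribed values.
  return dataset.shuffle(buffer_size=418).batch(20).map(map_fn)

# Preprocess one client's data to inspect the element structure.
example_dataset = preprocess(
    emnist_train.create_tf_dataset_for_client(emnist_train.client_ids[0]))
```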
%% Cell type:code id: tags:
```
# This value only applies to the EMNIST dataset; consider choosing an appropriate value for other datasets.
```
%% Cell type:markdown id: tags:
Here we define a Keras model based on the original FedAvg CNN, and then wrap the Keras model in an instance of [tff.learning.Model](https://www.tensorflow.org/federated/api_docs/python/tff/learning/Model) so that it can be consumed by TFF.
Note that we'll need a **function** which produces a model, rather than a model directly. In addition, the function **cannot** simply capture a pre-constructed model; it must create the model in the context in which it is called. The reason is that TFF is designed to go to devices, and needs control over when resources are constructed so that they can be captured and packaged up.
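A model function along these lines would satisfy the requirements above. `create_original_fedavg_cnn_model` is a hypothetical placeholder for whatever Keras model constructor the notebook defines, and the input spec is taken from the preprocessed `example_dataset` in the sketch above; treat this as a sketch rather than the tutorial's exact code.
```
def tff_model_fn():
  """Constructs a fresh Keras model and wraps it as a tff.learning.Model."""
  # `create_original_fedavg_cnn_model` is a hypothetical helper standing in for
  # the CNN definition used in this tutorial.
  keras_model = create_original_fedavg_cnn_model()
  return tff.learning.from_keras_model(
      keras_model,
      input_spec=example_dataset.element_spec,
      loss=tf.keras.losses.SparseCategoricalCrossentropy(),
      metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
```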
## Training the model and outputting training metrics
Now we are ready to construct a Federated Averaging algorithm and train the defined model on the EMNIST dataset.
First we need to build a Federated Averaging algorithm using the [tff.learning.build_federated_averaging_process](https://www.tensorflow.org/federated/api_docs/python/tff/learning/build_federated_averaging_process) API.
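Assuming the `tff_model_fn` sketched above, the call might look roughly like this; the optimizers and learning rates are illustrative choices, not prescribed values.
```
federated_averaging = tff.learning.build_federated_averaging_process(
    model_fn=tff_model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02),
    server_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=1.0))
```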
Now let's run the Federated Averaging algorithm. The execution of a Federated Learning algorithm from the perspective of TFF looks like this:
1. Initialize the algorithm and get the initial server state. The server state contains the information necessary to perform the algorithm. Recall that, since TFF is functional, this state includes both any optimizer state the algorithm uses (e.g. momentum terms) and the model parameters themselves; these will be passed as arguments and returned as results from TFF computations.
2. Execute the algorithm round by round. In each round, a new server state is returned as the result of the clients training the model on their data. Typically, in one round:
    1. The server broadcasts the model to all participating clients.
    2. Each client performs local work based on the model and its own data.
    3. The server aggregates all the model updates to produce a new server state that contains a new model.
For more details, please see the [Custom Federated Algorithms, Part 2: Implementing Federated Averaging](https://www.tensorflow.org/federated/tutorials/custom_federated_algorithms_2) tutorial.
Training metrics are written to the TensorBoard directory for display after training.
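The loop below sketches this pattern, assuming the `federated_averaging` process, `emnist_train`, and `preprocess` from the sketches above; the number of rounds, the clients sampled per round, the log directory, and the exact structure of the returned `metrics` are assumptions that depend on your setup and TFF version.
```
NUM_ROUNDS = 10          # illustrative
CLIENTS_PER_ROUND = 10   # illustrative

summary_writer = tf.summary.create_file_writer('/tmp/logs/scalars/original')
state = federated_averaging.initialize()  # 1. Get the initial server state.

with summary_writer.as_default():
  for round_num in range(NUM_ROUNDS):    # 2. Run the algorithm round by round.
    # Sample a subset of clients and build their preprocessed datasets.
    sampled_clients = np.random.choice(
        emnist_train.client_ids, size=CLIENTS_PER_ROUND, replace=False)
    sampled_data = [
        preprocess(emnist_train.create_tf_dataset_for_client(client))
        for client in sampled_clients
    ]
    # One round of Federated Averaging; the server state is threaded through.
    state, metrics = federated_averaging.next(state, sampled_data)
    # Write the round's training metrics for TensorBoard.
    for name, value in metrics['train'].items():
      tf.summary.scalar(name, value, step=round_num)
```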
%% Cell type:code id: tags:
```
#@title Load utility functions
def format_size(size):
"""A helper function for creating a human-readable size."""
Start TensorBoard with the root log directory specified above to display the training metrics. It can take a few seconds for the data to load. Except for Loss and Accuracy, we also output the amount of broadcasted and aggregated data. Broadcasted data refers to tensors the server pushes to each client while aggregated data refers to tensors each client returns to the server.
%% Cell type:code id: tags:
```
%tensorboard --logdir /tmp/logs/scalars/ --port=0
```
%% Cell type:markdown id: tags:
## Build a custom broadcast and aggregate function
Now let's implement functions that apply lossy compression algorithms to broadcasted data and aggregated data using the [tensor_encoding](http://jakubkonecny.com/files/tensor_encoding.pdf) API.
First, we define two functions:
* `broadcast_encoder_fn` which creates an instance of [te.core.SimpleEncoder](https://github.com/tensorflow/model-optimization/blob/ee53c9a9ae2e18ac1e443842b0b96229f0afb6d6/tensorflow_model_optimization/python/core/internal/tensor_encoding/core/simple_encoder.py#L30) to encode tensors or variables in server to client communication (Broadcast data).
* `mean_encoder_fn` which creates an instance of [te.core.GatherEncoder](https://github.com/tensorflow/model-optimization/blob/ee53c9a9ae2e18ac1e443842b0b96229f0afb6d6/tensorflow_model_optimization/python/core/internal/tensor_encoding/core/gather_encoder.py#L30) to encode tensors or variables in client to server communication (Aggregation data).
It is important to note that we do not apply a compression method to the entire model at once. Instead, we decide how (and whether) to compress each variable of the model independently. The reason is that, generally, small variables such as biases are more sensitive to inaccuracy, and, being relatively small, the potential communication savings are also relatively small. Hence we do not compress small variables by default. In this example, we apply uniform quantization with 8 bits (256 buckets) to every variable with more than 10,000 elements, and apply only the identity to the other variables.
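A sketch of these two functions using the `te.encoders` helpers, following the policy described above (8-bit uniform quantization for variables with more than 10,000 elements, identity otherwise); it assumes the `tf` and `te` imports from the setup cell.
```
def broadcast_encoder_fn(value):
  """Function for building encoded broadcast (server to clients)."""
  spec = tf.TensorSpec(value.shape, value.dtype)
  if value.shape.num_elements() > 10000:
    return te.encoders.as_simple_encoder(
        te.encoders.uniform_quantization(bits=8), spec)
  else:
    return te.encoders.as_simple_encoder(te.encoders.identity(), spec)


def mean_encoder_fn(value):
  """Function for building encoded mean (clients to server)."""
  spec = tf.TensorSpec(value.shape, value.dtype)
  if value.shape.num_elements() > 10000:
    return te.encoders.as_gather_encoder(
        te.encoders.uniform_quantization(bits=8), spec)
  else:
    return te.encoders.as_gather_encoder(te.encoders.identity(), spec)
```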
TFF provides APIs to convert the encoder functions into a format that the `tff.learning.build_federated_averaging_process` API can consume. By using `tff.learning.framework.build_encoded_broadcast_from_model` and `tff.learning.framework.build_encoded_mean_from_model`, we can create two functions that can be passed into the `broadcast_process` and `aggregation_process` arguments of `tff.learning.build_federated_averaging_process` to create a Federated Averaging algorithm with a lossy compression algorithm.
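Put together, this might look roughly as follows; the exact helper names under `tff.learning.framework` and the argument names of `tff.learning.build_federated_averaging_process` have changed across TFF releases, so treat this as a sketch to adapt to your installed version rather than drop-in code.
```
# Convert the per-variable encoder functions into broadcast and aggregation
# processes that Federated Averaging can consume.
encoded_broadcast_process = tff.learning.framework.build_encoded_broadcast_from_model(
    tff_model_fn, broadcast_encoder_fn)
encoded_mean_process = tff.learning.framework.build_encoded_mean_from_model(
    tff_model_fn, mean_encoder_fn)

federated_averaging_with_compression = tff.learning.build_federated_averaging_process(
    model_fn=tff_model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02),
    server_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=1.0),
    broadcast_process=encoded_broadcast_process,
    aggregation_process=encoded_mean_process)
```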
Start TensorBoard again to compare the training metrics between the two runs.
As you can see in TensorBoard, there is a significant reduction between the `original` and `compression` curves in the `broadcasted_bits` and `aggregated_bits` plots, while the two curves are quite similar in the `loss` and `sparse_categorical_accuracy` plots.
In conclusion, we implemented a compression algorithm that can achieve similar performance to the original Federated Averaging algorithm while significantly reducing the communication cost.
%% Cell type:code id: tags:
```
%tensorboard --logdir /tmp/logs/scalars/ --port=0
```
%% Cell type:markdown id: tags:
## Exercises
To implement a custom compression algorithm and apply it to the training loop,
you can:
1. Implement a new compression algorithm as a subclass of
Potentially valuable open research questions include: non-uniform quantization, lossless compression such as Huffman coding, and mechanisms for adapting compression based on the information from previous training rounds.
Recommended reading materials:
* [Expanding the Reach of Federated Learning by Reducing Client Resource Requirements](https://research.google/pubs/pub47774/)
* [Federated Learning: Strategies for Improving Communication Efficiency](https://research.google/pubs/pub45648/)
* _Section 3.5 Communication and Compression_ in [Advanced and Open Problems in Federated Learning](https://arxiv.org/abs/1912.04977)