Commit 53eb3a5d authored by Keith Rush, committed by A. Unique TensorFlower

Increment version and add release notes for TFF 0.6.0.

PiperOrigin-RevId: 255260630
parent f72d215c
...@@ -88,6 +88,7 @@ versions.
TensorFlow Federated | TensorFlow
------------------------------------------------------------ | ----------
[master](https://github.com/tensorflow/federated) | [tf-nightly 1.x](https://pypi.org/project/tf-nightly/)
[0.6.0](https://github.com/tensorflow/federated/tree/v0.6.0) | [tf-nightly 1.15.0.dev20190626](https://pypi.org/project/tf-nightly/1.15.0.dev20190626/)
[0.5.0](https://github.com/tensorflow/federated/tree/v0.5.0) | [tf-nightly 1.14.1.dev20190528](https://pypi.org/project/tf-nightly/1.14.1.dev20190528/)
[0.4.0](https://github.com/tensorflow/federated/tree/v0.4.0) | [tensorflow 1.13.1](https://pypi.org/project/tensorflow/1.13.1)
[0.3.0](https://github.com/tensorflow/federated/tree/v0.3.0) | [tensorflow 1.13.1](https://pypi.org/project/tensorflow/1.13.1)
......
# Release 0.6.0
## Major Features and Improvements
* Support for multiple outputs and loss functions in `keras` models.
* Support for stateful broadcast and aggregation functions in federated
averaging and federated SGD APIs.
* `tff.utils.update_state` extended to handle more general `state` arguments.
* Addition of `tff.utils.federated_min` and `tff.utils.federated_max`.
* Shuffle `client_ids` in `create_tf_dataset_from_all_clients` by default to
aid optimization.
## Breaking Changes
* Dependencies added to `requirements.txt`; in particular, `grpcio` and
`portpicker`.
## Bug Fixes
* Removes dependency on `tf.data.experimental.NestedStructure`.
## Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
Dheeraj R Reddy, @Squadrick.
# Release 0.5.0
## Major Features and Improvements
* Removed source level TF dependencies and switched from `tensorflow` to
  `tf-nightly` dependency.
* Add support for `attr` module in TFF type system.
* Introduced new `tff.framework` interface layer.
* New AST transformations and optimizations.
* Preserve Python container usage in `tff.tf_computation`.
## Bug Fixes
* Updated TFF model to reflect Keras `tf.keras.model.weights` order.
* Keras model with multiple inputs. #416
# Release 0.4.0
## Major Features and Improvements
* New `tff.simulation.TransformingClientData` API and associated infinite
  EMNIST dataset (see http://tensorflow.org/federated/api_docs/python/tff for
  details)
## Breaking Changes
* Normalized `func` to `fn` across the repository (rename some parameters and
  functions)
## Bug Fixes
* Wrapped Keras models can now be used with
  `tff.learning.build_federated_evaluation`
* Keras models with non-trainable variables in intermediate layers (e.g.
  BatchNormalization) can be assigned back to Keras models with
  `tff.learning.ModelWeights.assign_weights_to`
# Release 0.3.0
## Breaking Changes
* Rename `tff.learning.federated_average` to `tff.learning.federated_mean`.
* Rename 'func' arguments to 'fn' throughout the API.
## Bug Fixes
* Assorted fixes to typos in documentation and setup scripts.
# Release 0.2.0
## Major Features and Improvements
* Updated to use TensorFlow version 1.13.1.
* Implemented Federated SGD in `tff.learning.build_federated_sgd_process()`.
## Breaking Changes
* `next()` function of `tff.utils.IteratedProcess`s returned by
  `build_federated_*_process()` no longer unwraps single value tuples (always
  returns a tuple).
## Bug Fixes
* Modify setup.py to require TensorFlow 1.x and not upgrade to 2.0 alpha.
* Stop unpacking single value tuples in `next()` function of objects returned
  by `build_federated_*_process()`.
* Clear cached Keras sessions when wrapping Keras models to avoid referencing
  stale graphs.
# Release 0.1.0
......
%% Cell type:markdown id: tags:
##### Copyright 2019 The TensorFlow Authors.
%% Cell type:code id: tags:
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
%% Cell type:markdown id: tags:
# Custom Federated Algorithms, Part 1: Introduction to the Federated Core
%% Cell type:markdown id: tags:
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/federated/tutorials/custom_federated_algorithms_1"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/federated/blob/v0.6.0/docs/tutorials/custom_federated_algorithms_1.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/federated/blob/v0.6.0/docs/tutorials/custom_federated_algorithms_1.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
%% Cell type:markdown id: tags:
This tutorial is the first part of a two-part series that demonstrates how to
implement custom types of federated algorithms in TensorFlow Federated (TFF)
using the [Federated Core (FC)](../federated_core.md) - a set of lower-level
interfaces that serve as a foundation upon which we have implemented the
[Federated Learning (FL)](../federated_learning.md) layer.
This first part is more conceptual; we introduce some of the key concepts and
programming abstractions used in TFF, and we demonstrate their use on a very
simple example with a distributed array of temperature sensors. In
[the second part of this series](custom_federated_algorithms_2.ipynb), we use
the mechanisms we introduce here to implement a simple version of federated
training and evaluation algorithms. As a follow-up, we encourage you to study
[the implementation](https://github.com/tensorflow/federated/blob/master/tensorflow_federated/python/learning/federated_averaging.py)
of federated averaging in `tff.learning`.
By the end of this series, you should be able to recognize that the applications
of Federated Core are not necessarily limited to learning. The programming
abstractions we offer are quite generic, and could be used, e.g., to implement
analytics and other custom types of computations over distributed data.
Although this tutorial is designed to be self-contained, we encourage you to
first read the tutorials on
[image classification](federated_learning_for_image_classification.ipynb) and
[text generation](federated_learning_for_text_generation.ipynb) for a
higher-level and gentler introduction to the TensorFlow Federated framework
and the [Federated Learning](../federated_learning.md) APIs (`tff.learning`), as
they will help you put the concepts we describe here in context.
%% Cell type:markdown id: tags:
## Intended Uses
In a nutshell, Federated Core (FC) is a development environment that makes it
possible to compactly express program logic that combines TensorFlow code with
distributed communication operators, such as those that are used in
[Federated Averaging](https://arxiv.org/abs/1602.05629) - computing
distributed sums, averages, and other types of distributed aggregations over a
set of client devices in the system, broadcasting models and parameters to those
devices, etc.
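To make the kind of logic FC expresses concrete, here is a plain-Python sketch (deliberately not the TFF API; the model parameters and example counts below are illustrative) of the aggregation step in Federated Averaging: a mean of per-client model parameters, weighted by how much data each client holds:

```python
# Plain-Python sketch of the Federated Averaging aggregation step.
# The client models and example counts are illustrative, not real data.
client_weights = [[1.0, 2.0], [3.0, 4.0]]  # one parameter vector per client
client_num_examples = [10, 30]             # examples seen by each client

total_examples = sum(client_num_examples)
# Weighted mean of each parameter across clients.
global_model = [
    sum(w[i] * n for w, n in zip(client_weights, client_num_examples))
    / total_examples
    for i in range(len(client_weights[0]))
]
print(global_model)  # [2.5, 3.5]
```

In a real federated system, the per-client vectors never leave the devices; FC's contribution is letting you express this aggregation as a single distributed operator rather than as explicit message exchanges.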
You may be aware of
[`tf.contrib.distribute`](https://www.tensorflow.org/api_docs/python/tf/contrib/distribute),
and a natural question to ask at this point may be: in what ways does this
framework differ? Both frameworks attempt to make TensorFlow computations
distributed, after all.
One way to think about it is that, whereas the stated goal of
`tf.contrib.distribute` is *to allow users to use existing models and training
code with minimal changes to enable distributed training*, and much focus is on
how to take advantage of distributed infrastructure to make existing training
code more efficient, the goal of TFF's Federated Core is to give researchers and
practitioners explicit control over the specific patterns of distributed
communication they will use in their systems. The focus in FC is on providing a
flexible and extensible language for expressing distributed data flow
algorithms, rather than a concrete set of implemented distributed training
capabilities.
One of the primary target audiences for TFF's FC API is researchers and
practitioners who might want to experiment with new federated learning
algorithms and evaluate the consequences of subtle design choices that affect
the manner in which the flow of data in the distributed system is orchestrated,
yet without getting bogged down by system implementation details. The level of
abstraction that the FC API aims for roughly corresponds to the pseudocode one
could use to describe the mechanics of a federated learning algorithm in a
research publication - what data exists in the system and how it is transformed,
but without dropping to the level of individual point-to-point network message
exchanges.
TFF as a whole targets scenarios in which data is distributed, and must
remain such, e.g., for privacy reasons, and where collecting all data at a
centralized location may not be a viable option. This has implications for the
implementation of machine learning algorithms that require an increased degree
of explicit control, as compared to scenarios in which all data can be
accumulated in a centralized location at a data center.
%% Cell type:markdown id: tags:
## Before we start
Before we dive into the code, please try to run the following "Hello World"
example to make sure your environment is correctly set up. If it doesn't work,
please refer to the [Installation](../install.md) guide for instructions.
%% Cell type:code id: tags:
```
#@test {"skip": true}
# NOTE: If you are running a Jupyter notebook, and installing a locally built
# pip package, you may need to edit the following to point to the '.whl' file
# on your local filesystem.
!pip install --quiet tensorflow_federated
!pip install --quiet tf-nightly
```
%% Cell type:code id: tags:
```
from __future__ import absolute_import, division, print_function

import collections

import numpy as np
from six.moves import range
import tensorflow as tf
import tensorflow_federated as tff

tf.enable_resource_variables()
```
%% Cell type:code id: tags:
```
@tff.federated_computation
def hello_world():
  return 'Hello, World!'

hello_world()
```
%%%% Output: execute_result
'Hello, World!'
%% Cell type:markdown id: tags:
## Federated data
One of the distinguishing features of TFF is that it allows you to compactly
express TensorFlow-based computations on *federated data*. We will be using the
term *federated data* in this tutorial to refer to a collection of data items
hosted across a group of devices in a distributed system. For example,
applications running on mobile devices may collect data and store it locally,
without uploading to a centralized location. Or, an array of distributed sensors
may collect and store temperature readings at their locations.
Federated data like those in the above examples are treated in TFF as
[first-class citizens](https://en.wikipedia.org/wiki/First-class_citizen), i.e.,
they may appear as parameters and results of functions, and they have types. To
reinforce this notion, we will refer to federated data sets as *federated
values*, or as *values of federated types*.
The important point to understand is that we are modeling the entire collection
of data items across all devices (e.g., the entire collection of temperature
readings from all sensors in a distributed array) as a single federated value.
For example, here's how one would define in TFF the type of a *federated float*
hosted by a group of client devices. A collection of temperature readings that
materialize across an array of distributed sensors could be modeled as a value
of this federated type.
%% Cell type:code id: tags:
```
federated_float_on_clients = tff.FederatedType(tf.float32, tff.CLIENTS)
```
%% Cell type:markdown id: tags:
More generally, a federated type in TFF is defined by specifying the type `T` of
its *member constituents* - the items of data that reside on individual devices -
and the group `G` of devices on which federated values of this type are hosted
(plus a third, optional bit of information we'll mention shortly). We refer to
the group `G` of devices hosting a federated value as the value's *placement*.
Thus, `tff.CLIENTS` is an example of a placement.
%% Cell type:code id: tags:
```
str(federated_float_on_clients.member)
```
%%%% Output: execute_result
'float32'
%% Cell type:code id: tags:
```
str(federated_float_on_clients.placement)
```
%%%% Output: execute_result
'CLIENTS'
%% Cell type:markdown id: tags:
A federated type with member constituents `T` and placement `G` can be
represented compactly as `{T}@G`, as shown below.
%% Cell type:code id: tags:
```
str(federated_float_on_clients)
```
%%%% Output: execute_result
'{float32}@CLIENTS'
%% Cell type:markdown id: tags:
The curly braces `{}` in this concise notation serve as a reminder that the
member constituents (items of data on different devices) may differ, as you
would expect, e.g., of temperature sensor readings, so the clients as a group are
jointly hosting a [multi-set](https://en.wikipedia.org/wiki/Multiset) of
`T`-typed items that together constitute the federated value.
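As a plain-Python analogy (hypothetical, not the TFF API; the readings are made up), a value of type `{float32}@CLIENTS` can be pictured as a multi-set of per-client readings, which a federated aggregation later collapses into a single server-placed number:

```python
# Hypothetical plain-Python picture of a {float32}@CLIENTS value: a multi-set
# of per-client member constituents. Duplicates are allowed - two sensors may
# well report the same temperature.
client_readings = [68.5, 70.3, 69.8, 70.3]

# An aggregation such as tff.federated_mean conceptually collapses this
# multi-set into a single value placed at the server.
mean_reading = sum(client_readings) / len(client_readings)
```

The list here is only a mental model; in TFF the member constituents stay on their devices, and a program manipulates the federated value as one opaque unit of the type `{float32}@CLIENTS`.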