Commit 7c0121a5 authored by Jason Roselander, committed by tensorflow-copybara

Creating TFF release 0.13.0

PiperOrigin-RevId: 300591979
parent a8ea2945
......@@ -91,6 +91,7 @@ versa.
TensorFlow Federated | TensorFlow
--------------------------------------------------------------------- | ----------
[0.13.0](https://github.com/tensorflow/federated/tree/v0.13.0) | [tensorflow 2.1.0](https://pypi.org/project/tensorflow/2.1.0/)
[0.12.0](https://github.com/tensorflow/federated/tree/v0.12.0) | [tensorflow 2.1.0](https://pypi.org/project/tensorflow/2.1.0/)
[0.11.0](https://github.com/tensorflow/federated/tree/v0.11.0) | [tensorflow 2.0.0](https://pypi.org/project/tensorflow/2.0.0/)
[0.10.1](https://github.com/tensorflow/federated/tree/v0.10.1) | [tensorflow 2.0.0](https://pypi.org/project/tensorflow/2.0.0/)
......
# Release 0.13.0
## Major Features and Improvements
* Updated `absl-py` package dependency to `0.9.0`.
* Updated `h5py` package dependency to `2.8.0`.
* Updated `numpy` package dependency to `1.17.5`.
* Updated `tensorflow-privacy` package dependency to `0.2.2`.
## Breaking Changes
* Deprecated the `dummy_batch` parameter of the `tff.learning.from_keras_model`
function.
## Bug Fixes
* Fixed issues with executor service using old executor API.
* Fixed issues with remote executor test using old executor API.
* Fixed issues in tutorial notebooks.
# Release 0.12.0
## Major Features and Improvements
......
%% Cell type:markdown id: tags:
##### Copyright 2019 The TensorFlow Authors.
%% Cell type:code id: tags:
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
%% Cell type:markdown id: tags:
# Custom Federated Algorithms, Part 1: Introduction to the Federated Core
%% Cell type:markdown id: tags:
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/federated/tutorials/custom_federated_algorithms_1"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/federated/blob/v0.12.0/docs/tutorials/custom_federated_algorithms_1.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/federated/blob/v0.13.0/docs/tutorials/custom_federated_algorithms_1.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/federated/blob/v0.12.0/docs/tutorials/custom_federated_algorithms_1.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
<a target="_blank" href="https://github.com/tensorflow/federated/blob/v0.13.0/docs/tutorials/custom_federated_algorithms_1.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
%% Cell type:markdown id: tags:
This tutorial is the first part of a two-part series that demonstrates how to
implement custom types of federated algorithms in TensorFlow Federated (TFF)
using the [Federated Core (FC)](../federated_core.md) - a set of lower-level
interfaces that serve as a foundation upon which we have implemented the
[Federated Learning (FL)](../federated_learning.md) layer.
This first part is more conceptual; we introduce some of the key concepts and
programming abstractions used in TFF, and we demonstrate their use on a very
simple example with a distributed array of temperature sensors. In
[the second part of this series](custom_federated_algorithms_2.ipynb), we use
the mechanisms we introduce here to implement a simple version of federated
training and evaluation algorithms. As a follow-up, we encourage you to study
[the implementation](https://github.com/tensorflow/federated/blob/master/tensorflow_federated/python/learning/federated_averaging.py)
of federated averaging in `tff.learning`.
By the end of this series, you should be able to recognize that the applications
of Federated Core are not necessarily limited to learning. The programming
abstractions we offer are quite generic, and could be used, e.g., to implement
analytics and other custom types of computations over distributed data.
Although this tutorial is designed to be self-contained, we encourage you to
first read tutorials on
[image classification](federated_learning_for_image_classification.ipynb) and
[text generation](federated_learning_for_text_generation.ipynb) for a
higher-level and more gentle introduction to the TensorFlow Federated framework
and the [Federated Learning](../federated_learning.md) APIs (`tff.learning`), as
it will help you put the concepts we describe here in context.
%% Cell type:markdown id: tags:
## Intended Uses
In a nutshell, Federated Core (FC) is a development environment that makes it
possible to compactly express program logic that combines TensorFlow code with
distributed communication operators, such as those that are used in
[Federated Averaging](https://arxiv.org/abs/1602.05629) - computing
distributed sums, averages, and other types of distributed aggregations over a
set of client devices in the system, broadcasting models and parameters to those
devices, etc.
You may be aware of
[`tf.contrib.distribute`](https://www.tensorflow.org/api_docs/python/tf/contrib/distribute),
and a natural question to ask at this point may be: in what ways does this
framework differ? Both frameworks attempt to make TensorFlow computations
distributed, after all.
One way to think about it is that, whereas the stated goal of
`tf.contrib.distribute` is *to allow users to use existing models and training
code with minimal changes to enable distributed training*, and much focus is on
how to take advantage of distributed infrastructure to make existing training
code more efficient, the goal of TFF's Federated Core is to give researchers and
practitioners explicit control over the specific patterns of distributed
communication they will use in their systems. The focus in FC is on providing a
flexible and extensible language for expressing distributed data flow
algorithms, rather than a concrete set of implemented distributed training
capabilities.
One of the primary target audiences for TFF's FC API is researchers and
practitioners who might want to experiment with new federated learning
algorithms and evaluate the consequences of subtle design choices that affect
the manner in which the flow of data in the distributed system is orchestrated,
yet without getting bogged down by system implementation details. The level of
abstraction that the FC API is aiming for roughly corresponds to pseudocode one
could use to describe the mechanics of a federated learning algorithm in a
research publication - what data exists in the system and how it is transformed,
but without dropping to the level of individual point-to-point network message
exchanges.
TFF as a whole is targeting scenarios in which data is distributed, and must
remain so, e.g., for privacy reasons, and where collecting all data at a
centralized location may not be a viable option. This has implications for the
implementation of machine learning algorithms that require an increased degree
of explicit control, as compared to scenarios in which all data can be
accumulated in a centralized location at a data center.
%% Cell type:markdown id: tags:
## Before we start
Before we dive into the code, please try to run the following "Hello World"
example to make sure your environment is correctly set up. If it doesn't work,
please refer to the [Installation](../install.md) guide for instructions.
%% Cell type:code id: tags:
```
#@test {"skip": true}
!pip install --quiet --upgrade tensorflow_federated
# Note: Jupyter requires a patch to asyncio.
!pip install --quiet --upgrade nest_asyncio
import nest_asyncio
nest_asyncio.apply()
```
%% Cell type:code id: tags:
```
import collections
import numpy as np
import tensorflow as tf
import tensorflow_federated as tff
tf.compat.v1.enable_v2_behavior()
```
%% Cell type:code id: tags:
```
@tff.federated_computation
def hello_world():
  return 'Hello, World!'

hello_world()
```
%%%% Output: execute_result
b'Hello, World!'
%% Cell type:markdown id: tags:
## Federated data
One of the distinguishing features of TFF is that it allows you to compactly
express TensorFlow-based computations on *federated data*. We will be using the
term *federated data* in this tutorial to refer to a collection of data items
hosted across a group of devices in a distributed system. For example,
applications running on mobile devices may collect data and store it locally,
without uploading to a centralized location. Or, an array of distributed sensors
may collect and store temperature readings at their locations.
Federated data like those in the above examples are treated in TFF as
[first-class citizens](https://en.wikipedia.org/wiki/First-class_citizen), i.e.,
they may appear as parameters and results of functions, and they have types. To
reinforce this notion, we will refer to federated data sets as *federated
values*, or as *values of federated types*.
The important point to understand is that we are modeling the entire collection
of data items across all devices (e.g., the entire collection of temperature
readings from all sensors in a distributed array) as a single federated value.
For example, here's how one would define in TFF the type of a *federated float*
hosted by a group of client devices. A collection of temperature readings that
materialize across an array of distributed sensors could be modeled as a value
of this federated type.
%% Cell type:code id: tags:
```
federated_float_on_clients = tff.FederatedType(tf.float32, tff.CLIENTS)
```
%% Cell type:markdown id: tags:
More generally, a federated type in TFF is defined by specifying the type `T` of
its *member constituents* - the items of data that reside on individual devices,
and the group `G` of devices on which federated values of this type are hosted
(plus a third, optional bit of information we'll mention shortly). We refer to
the group `G` of devices hosting a federated value as the value's *placement*.
Thus, `tff.CLIENTS` is an example of a placement.
%% Cell type:code id: tags:
```
str(federated_float_on_clients.member)
```
%%%% Output: execute_result
'float32'
%% Cell type:code id: tags:
```
str(federated_float_on_clients.placement)
```
%%%% Output: execute_result
'CLIENTS'
%% Cell type:markdown id: tags:
A federated type with member constituents `T` and placement `G` can be
represented compactly as `{T}@G`, as shown below.
%% Cell type:code id: tags:
```
str(federated_float_on_clients)
```
%%%% Output: execute_result
'{float32}@CLIENTS'
%% Cell type:markdown id: tags:
The curly braces `{}` in this concise notation serve as a reminder that the
member constituents (items of data on different devices) may differ, as you
would expect, e.g., of temperature sensor readings, so the clients as a group are
jointly hosting a [multi-set](https://en.wikipedia.org/wiki/Multiset) of
`T`-typed items that together constitute the federated value.
It is important to note that the member constituents of a federated value are
generally opaque to the programmer, i.e., a federated value should not be
thought of as a simple `dict` keyed by an identifier of a device in the system -
these values are intended to be collectively transformed only by *federated
operators* that abstractly represent various kinds of distributed communication
protocols (such as aggregation). If this sounds too abstract, don't worry - we
will return to this shortly, and we will illustrate it with concrete examples.
Federated types in TFF come in two flavors: those where the member constituents
of a federated value may differ (as just seen above), and those where they are
known to be all equal. This is controlled by the third, optional `all_equal`
parameter in the `tff.FederatedType` constructor (defaulting to `False`).
%% Cell type:code id: tags:
```
federated_float_on_clients.all_equal
```
%%%% Output: execute_result
False
%% Cell type:markdown id: tags:
A federated type with a placement `G` in which all of the `T`-typed member
constituents are known to be equal can be compactly represented as `T@G` (as
opposed to `{T}@G`, that is, with the curly braces dropped to reflect the fact
that the multi-set of member constituents consists of a single item).
%% Cell type:code id: tags:
```
str(tff.FederatedType(tf.float32, tff.CLIENTS, all_equal=True))
```
%%%% Output: execute_result
'float32@CLIENTS'
%% Cell type:markdown id: tags:
One example of a federated value of such type that might arise in practical
scenarios is a hyperparameter (such as a learning rate, a clipping norm, etc.)
that has been broadcasted by a server to a group of devices that participate in
federated training.
Another example is a set of parameters for a machine learning model pre-trained
at the server, that were then broadcasted to a group of client devices, where
they can be personalized for each user.
For example, suppose we have a pair of `float32` parameters `a` and `b` for a
simple one-dimensional linear regression model. We can construct the
(non-federated) type of such models for use in TFF as follows. The angle braces
`<>` in the printed type string are a compact TFF notation for named or unnamed
tuples.
%% Cell type:code id: tags:
```
simple_regression_model_type = (
    tff.NamedTupleType([('a', tf.float32), ('b', tf.float32)]))

str(simple_regression_model_type)
```
%%%% Output: execute_result
'<a=float32,b=float32>'
%% Cell type:markdown id: tags:
Note that we are only specifying `dtype`s above. Non-scalar types are also
supported. In the above code, `tf.float32` is a shortcut notation for the more
general `tff.TensorType(dtype=tf.float32, shape=[])`.
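For instance, here is a minimal sketch of a non-scalar tensor type - a length-3
vector of floats - written with an explicit shape.
%% Cell type:code id: tags:
```
# Illustrative sketch: a rank-1 tensor type holding three float32 values;
# printing it shows the compact string form of the type.
str(tff.TensorType(dtype=tf.float32, shape=[3]))
```
%% Cell type:markdown id: tags: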
When this model is broadcasted to clients, the type of the resulting federated
value can be represented as shown below.
%% Cell type:code id: tags:
```
str(tff.FederatedType(
    simple_regression_model_type, tff.CLIENTS, all_equal=True))
```
%%%% Output: execute_result
'<a=float32,b=float32>@CLIENTS'
%% Cell type:markdown id: tags:
By symmetry with the *federated float* above, we will refer to such a type as a
*federated tuple*. More generally, we'll often use the term *federated XYZ* to
refer to a federated value in which member constituents are *XYZ*-like. Thus, we
will talk about things like *federated tuples*, *federated sequences*,
*federated models*, and so on.
Now, coming back to `float32@CLIENTS` - while it appears replicated across
multiple devices, it is actually a single `float32`, since all members are the
same. In general, you may think of any *all-equal* federated type, i.e., one of
the form `T@G`, as isomorphic to a non-federated type `T`, since in both cases,
there's actually only a single (albeit potentially replicated) item of type `T`.
Given the isomorphism between `T` and `T@G`, you may wonder what purpose, if
any, the latter types might serve. Read on.
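As a further sketch, a *federated sequence* could be expressed in the same way,
assuming the `tff.SequenceType` constructor for sequence-typed member
constituents.
%% Cell type:code id: tags:
```
# Illustrative sketch: a federated value whose member constituents are
# sequences of float32 readings hosted on client devices.
str(tff.FederatedType(tff.SequenceType(tf.float32), tff.CLIENTS))
```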
%% Cell type:markdown id: tags:
## Placements
### Design Overview
In the preceding section, we've introduced the concept of *placements* - groups
of system participants that might be jointly hosting a federated value, and
we've demonstrated the use of `tff.CLIENTS` as an example specification of a
placement.
To explain why the notion of a *placement* is so fundamental that we needed to
incorporate it into the TFF type system, recall what we mentioned at the
beginning of this tutorial about some of the intended uses of TFF.
Although in this tutorial, you will only see TFF code being executed locally in
a simulated environment, our goal is for TFF to enable writing code that you
could deploy for execution on groups of physical devices in a distributed
system, potentially including mobile or embedded devices running Android. Each
of those devices would receive a separate set of instructions to execute
locally, depending on the role it plays in the system (an end-user device, a
centralized coordinator, an intermediate layer in a multi-tier architecture,
etc.). It is important to be able to reason about which subsets of devices
execute what code, and where different portions of the data might physically
materialize.
This is especially important when dealing with, e.g., application data on mobile
devices. Since the data is private and can be sensitive, we need the ability to
statically verify that this data will never leave the device (and prove facts
about how the data is being processed). The placement specifications are one of
the mechanisms designed to support this.
TFF has been designed as a data-centric programming environment, and as such,
unlike some of the existing frameworks that focus on *operations* and where
those operations might *run*, TFF focuses on *data*, where that data
*materializes*, and how it's being *transformed*. Consequently, placement is
modeled as a property of data in TFF, rather than as a property of operations on
data. Indeed, as you're about to see in the next section, some of the TFF
operations span across locations, and run "in the network", so to speak, rather
than being executed by a single machine or a group of machines.
Representing the type of a certain value as `T@G` or `{T}@G` (as opposed to just
`T`) makes data placement decisions explicit, and together with a static
analysis of programs written in TFF, it can serve as a foundation for providing
formal privacy guarantees for sensitive on-device data.
An important thing to note at this point, however, is that while we encourage
TFF users to be explicit about *groups* of participating devices that host the
data (the placements), the programmer will never deal with the raw data or
identities of the *individual* participants.
(Note: While it goes far outside the scope of this tutorial, we should mention
that there is one notable exception to the above, a `tff.federated_collect`
operator that is intended as a low-level primitive, only for specialized
situations. Its explicit use in situations where it can be avoided is not
recommended, as it may limit the possible future applications. For example, if
during the course of static analysis, we determine that a computation uses such
low-level mechanisms, we may disallow its access to certain types of data.)
Within the body of TFF code, by design, there's no way to enumerate the devices
that constitute the group represented by `tff.CLIENTS`, or to probe for the
existence of a specific device in the group. There's no concept of a device or
client identity anywhere in the Federated Core API, the underlying set of
architectural abstractions, or the core runtime infrastructure we provide to
support simulations. All the computation logic you write will be expressed as
operations on the entire client group.
Recall here what we mentioned earlier about values of federated types being
unlike Python `dict`, in that one cannot simply enumerate their member
constituents. Think of values that your TFF program logic manipulates as being
associated with placements (groups), rather than with individual participants.
Placements *are* designed to be first-class citizens in TFF as well, and can
appear as parameters and results, with a dedicated `placement` type (to be
represented by `tff.PlacementType` in the API). In the future, we plan to
provide a variety of
operators to transform or combine placements, but this is outside the scope of
this tutorial. For now, it suffices to think of `placement` as an opaque
primitive built-in type in TFF, similar to how `int` and `bool` are opaque
built-in types in Python, with `tff.CLIENTS` being a constant literal of this
type, not unlike `1` being a constant literal of type `int`.
### Specifying Placements
TFF provides two basic placement literals, `tff.CLIENTS` and `tff.SERVER`, to
make it easy to express the rich variety of practical scenarios that are
naturally modeled as client-server architectures, with multiple *client* devices
(mobile phones, embedded devices, distributed databases, sensors, etc.)
orchestrated by a single centralized *server* coordinator. TFF is designed to
also support custom placements, multiple client groups, multi-tiered and other,
more general distributed architectures, but discussing them is outside the scope
of this tutorial.
TFF doesn't prescribe what either `tff.CLIENTS` or `tff.SERVER` actually
represents.
In particular, `tff.SERVER` may be a single physical device (a member of a
singleton group), but it might just as well be a group of replicas in a
fault-tolerant cluster running state machine replication - we do not make any
special architectural assumptions. Rather, we use the `all_equal` bit mentioned
in the preceding section to express the fact that we're generally dealing with
only a single item of data at the server.
Likewise, `tff.CLIENTS` in some applications might represent all clients in the
system - what in the context of federated learning we sometimes refer to as the
*population*, but e.g., in
[production implementations of Federated Averaging](https://arxiv.org/abs/1602.05629),
it may represent a *cohort* - a subset of the clients selected for participation
in a particular round of training. The abstractly defined placements are given
concrete meaning when a computation in which they appear is deployed for
execution (or simply invoked like a Python function in a simulated environment,
as is demonstrated in this tutorial). In our local simulations, the group of
clients is determined by the federated data supplied as input.
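As a minimal sketch, the `tff.SERVER` placement literal can be used with
`tff.FederatedType` in the same way as `tff.CLIENTS`.
%% Cell type:code id: tags:
```
# Illustrative sketch: a federated float placed at the server. Since the
# server logically hosts a single item, we mark the type all-equal, which
# corresponds to the compact T@G form introduced earlier.
str(tff.FederatedType(tf.float32, tff.SERVER, all_equal=True))
```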
%% Cell type:markdown id: tags:
## Federated computations
### Declaring federated computations
TFF is designed as a strongly-typed functional programming environment that
supports modular development.
The basic unit of composition in TFF is a *federated computation* - a section of
logic that may accept federated values as input and return federated values as
output. Here's how you can define a computation that calculates the average of
the temperatures reported by the sensor array from our previous example.
%% Cell type:code id: tags:
```
@tff.federated_computation(tff.FederatedType(tf.float32, tff.CLIENTS))
def get_average_temperature(sensor_readings):
  return tff.federated_mean(sensor_readings)
```
%% Cell type:markdown id: tags:
Looking at the above code, at this point you might be asking - aren't there
already decorator constructs to define composable units such as
[`tf.function`](https://www.tensorflow.org/api_docs/python/tf/function)
in TensorFlow, and if so, why introduce yet another one, and how is it
different?
The short answer is that the code generated by the `tff.federated_computation`
wrapper is *neither* TensorFlow, *nor is it* Python - it's a specification of a
distributed system in an internal platform-independent *glue* language. At this
point, this will undoubtedly sound cryptic, but please keep in mind this
intuitive interpretation of a federated computation as an abstract specification
of a distributed system. We'll explain it in a minute.
First, let's play with the definition a bit. TFF computations are generally
modeled as functions - with or without parameters, but with well-defined type
signatures. You can print the type signature of a computation by querying its
`type_signature` property, as shown below.
%% Cell type:code id: tags:
```
str(get_average_temperature.type_signature)
```
%%%% Output: execute_result
'({float32}@CLIENTS -> float32@SERVER)'
%% Cell type:markdown id: tags:
The type signature tells us that the computation accepts a collection of
different sensor readings on client devices, and returns a single average on the
server.
Before we go any further, let's reflect on this for a minute - the input and
output of this computation are *in different places* (on `CLIENTS` vs. at the
`SERVER`). Recall what we said in the preceding section on placements about how
*TFF operations may span across locations, and run in the network*, and what we
just said about federated computations as representing abstract specifications
of distributed systems. We have just defined one such computation - a simple
distributed system in which data is consumed at client devices, and the
aggregate results emerge at the server.
In many practical scenarios, the computations that represent top-level tasks
will tend to accept their inputs and report their outputs at the server - this
reflects the idea that computations might be triggered by *queries* that
originate and terminate on the server.
However, the FC API does not impose this assumption, and many of the building blocks
we use internally (including numerous `tff.federated_...` operators you may find
in the API) have inputs and outputs with distinct placements, so in general, you
should not think about a federated computation as something that *runs on the
server* or is *executed by a server*. The server is just one type of participant
in a federated computation. In thinking about the mechanics of such
computations, it's best to always default to the global network-wide
perspective, rather than the perspective of a single centralized coordinator.
In general, functional type signatures are compactly represented as `(T -> U)`
for types `T` and `U` of inputs and outputs, respectively. The type of the
formal parameter (such as `sensor_readings` in this case) is specified as the
argument to the decorator. You don't need to specify the type of the result -
it's determined automatically.
Although TFF does offer limited forms of polymorphism, programmers are strongly
encouraged to be explicit about the types of data they work with, as that makes
understanding, debugging, and formally verifying properties of your code easier.
In some cases, explicitly specifying types is a requirement (e.g., polymorphic
computations are currently not directly executable).
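As a quick sketch, you can also inspect the signature of the parameterless
`hello_world` computation defined earlier - a computation without parameters
still has a well-defined functional type.
%% Cell type:code id: tags:
```
# Illustrative sketch: printing the type signature of a computation that
# takes no parameters and returns a string constant.
str(hello_world.type_signature)
```
%% Cell type:markdown id: tags: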
### Executing federated computations
In order to support development and debugging, TFF allows you to directly invoke
computations defined this way as Python functions, as shown below. Where the
computation expects a value of a federated type with the `all_equal` bit set to
`False`, you can feed it as a plain `list` in Python, and for federated types
with the `all_equal` bit set to `True`, you can just directly feed the (single)
member constituent. This is also how the results are reported back to you.
%% Cell type:code id: tags:
```
get_average_temperature([68.5, 70.3, 69.8])
```
%%%% Output: execute_result
69.53334
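%% Cell type:markdown id: tags:
To illustrate the `all_equal` case as well, here is a minimal sketch (the
`identity_on_server` name is just illustrative): when a computation's parameter
is a server-placed, all-equal float, you can feed it a single Python value
directly.
%% Cell type:code id: tags:
```
# Illustrative sketch: an identity computation over a server-placed float.
# Because the parameter type is all-equal, we pass in (and get back) a single
# plain Python value rather than a list.
@tff.federated_computation(
    tff.FederatedType(tf.float32, tff.SERVER, all_equal=True))
def identity_on_server(reading):
  return reading

identity_on_server(70.0)
```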
%% Cell type:markdown id: tags:
When running computations like this in simulation mode, you act as an external
observer with a system-wide view, who has the ability to supply inputs and
consume outputs at any location in the network, as indeed is the case here -
you supplied client values as input, and consumed the server result.
Now, let's return to a note we made earlier about the
`tff.federated_computation` decorator emitting code in a *glue* language.
Although the logic of TFF computations can be expressed as ordinary functions in
Python (you just need to decorate them with `tff.federated_computation` as we've
done above), and you can directly invoke them with Python arguments just