This project is mirrored from https://github.com/tensorflow/federated.
- Jul 22, 2020
Zachary Charles authored
PiperOrigin-RevId: 322377616
- Jul 21, 2020
Zachary Garrett authored
Also, while we're here, update to some more modern TFF coding practices: - Use `tff.learning.framework.ModelWeights` instead of a custom type. - Delete the `from_tff_result` conversion helper; it is no longer needed. - Remove unused test helper methods. - Replace the numpy dependency with TF ops. PiperOrigin-RevId: 322359152
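A minimal sketch of the `tff.learning.framework.ModelWeights` pattern mentioned above, assuming a simple Keras model; the model-building helper below is illustrative and not part of the commit:

```python
import tensorflow as tf
import tensorflow_federated as tff

def build_keras_model():
  # Any Keras model with trainable variables works for this illustration.
  return tf.keras.Sequential(
      [tf.keras.layers.Dense(10, input_shape=(784,), activation='softmax')])

keras_model = build_keras_model()
# Collect the weights in the framework's own container instead of a custom type.
model_weights = tff.learning.framework.ModelWeights(
    trainable=keras_model.trainable_variables,
    non_trainable=keras_model.non_trainable_variables)

# The fields are plain structures of variables, easy to inspect or serialize.
print([v.shape for v in model_weights.trainable])
```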
Zachary Charles authored
PiperOrigin-RevId: 322264433
Galen Andrew authored
PiperOrigin-RevId: 322257637
Zachary Charles authored
PiperOrigin-RevId: 322256044
Zachary Charles authored
PiperOrigin-RevId: 322249517
Keith Rush authored
This is a pattern we have used frequently (e.g. in ReferenceResolvingExecutor's evaluate method) to persist more information in traces; currently it is a little difficult to narrow down hotspots in the serialization code. Additionally, this adds PyType annotations and cleans up the contracts a little bit, which is easier with this new factorization. PiperOrigin-RevId: 322246977
Zachary Garrett authored
This will preserve the returned container value, making it usable with utilities such as `tf.nest`. PiperOrigin-RevId: 322236441
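As a standalone illustration (not code from the change itself) of why preserving the container matters: a result that keeps its `OrderedDict` structure can be traversed directly with `tf.nest` utilities.

```python
import collections
import tensorflow as tf

weights = collections.OrderedDict(
    kernel=tf.constant([[1.0, 2.0]]),
    bias=tf.constant([0.5]))

# Because the container type is preserved, tf.nest can map over the structure
# and the result keeps the same OrderedDict keys.
doubled = tf.nest.map_structure(lambda t: t * 2.0, weights)
print(doubled['bias'])  # tf.Tensor([1.], shape=(1,), dtype=float32)
```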
Taylor Cramer authored
- Remove use of `tff.NamedTupleType` and use Python containers and avoid exposing AnonymousTuple. - Add handling of `tff.SequenceType` to type conversion code. - Change ReferenceExecutor to convert values unconditionally. PiperOrigin-RevId: 322232543
Zachary Garrett authored
PiperOrigin-RevId: 322178848
- Jul 18, 2020
Taylor Cramer authored
Previously, NamedTupleTypeWithPyContainerType was often lost and turned into a NamedTupleType without the container. This resulted in users being given AnonymousTuples when a more specific container should have been returned. This change fixes a large number of sites where container types were lost, and adjusts usage sites as appropriate, including the removal of `from_tff_result` functions. This change also makes the `__eq__` function for `NamedTupleTypeWithPyContainerType` require equivalent container types, rather than just equivalent field structure. Call sites that wished to compare only field structure are adjusted to use `Type.{is, check}_equivalent_to`. PiperOrigin-RevId: 321883012
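A hedged sketch of the new equality semantics, assuming the public `tff.to_type` helper and the `is_equivalent_to` method named in the message; exact construction details and behavior may differ slightly from the repo:

```python
import collections
import tensorflow as tf
import tensorflow_federated as tff

# Two named tuple types with identical field structure but different Python
# containers (an OrderedDict vs. a list of name/type pairs).
dict_type = tff.to_type(collections.OrderedDict(a=tf.int32))
list_type = tff.to_type([('a', tf.int32)])

print(dict_type == list_type)                 # False: containers now matter for __eq__.
print(dict_type.is_equivalent_to(list_type))  # True: the field structure is equivalent.
```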
Weikang Song authored
PiperOrigin-RevId: 321852405
Zachary Garrett authored
stackoverflow so it can be passed to the iterative process. Make shuffling conditional on the buffer size. PiperOrigin-RevId: 321846124
Zachary Garrett authored
Extend the iterative process builder and the LR scheduling iterative process to accept an optional dataset preprocessing computation. This allows for pushing the dataset preprocessing methods down to the client executors, which is required for multi-machine simulations since stateful datasets (e.g. datasets which use shuffling) cannot be serialized. PiperOrigin-RevId: 321835968
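A sketch of the kind of dataset preprocessing computation being described; the parameter values are hypothetical and the builder name in the comment is a stand-in, not the repo's actual API:

```python
import tensorflow as tf

# Hypothetical client dataset preprocessing function. Stateful steps such as
# shuffle() run on the client executors instead of being baked into a dataset
# object that would need to be serialized.
def preprocess_fn(dataset: tf.data.Dataset) -> tf.data.Dataset:
  return dataset.shuffle(buffer_size=100).batch(20).take(10)

# `build_iterative_process` is an illustrative name for the builder being extended:
# iterative_process = build_iterative_process(
#     model_fn, dataset_preprocess_comp=preprocess_fn)
```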
Galen Andrew authored
The zeroing threshold is adapted to a multiple of a specified quantile of the value norm distribution. PiperOrigin-RevId: 321831230
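For intuition only, here is a heavily simplified, non-private sketch of zeroing values whose norm exceeds a multiple of a norm quantile; the real aggregator adapts the quantile estimate across rounds, which this toy version does not do:

```python
import numpy as np

def zero_large_updates(updates, quantile=0.5, multiplier=2.0):
  """Zeroes out any update whose norm exceeds multiplier * estimated quantile."""
  norms = np.array([np.linalg.norm(u) for u in updates])
  threshold = multiplier * np.quantile(norms, quantile)
  return [u if np.linalg.norm(u) <= threshold else np.zeros_like(u)
          for u in updates]

updates = [np.ones(4), 100.0 * np.ones(4), np.ones(4)]
print(zero_large_updates(updates))  # The outlier update is replaced with zeros.
```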
A. Unique TensorFlower authored
PiperOrigin-RevId: 321812938
Keith Rush authored
PiperOrigin-RevId: 321790124
- Jul 17, 2020
Karan Singhal authored
PiperOrigin-RevId: 321785542
Michael Reneer authored
PiperOrigin-RevId: 321657180
Weikang Song authored
PiperOrigin-RevId: 321639468
Michael Reneer authored
PiperOrigin-RevId: 321617729
Weikang Song authored
PiperOrigin-RevId: 321611005
Shanshan Wu authored
A problem shows up when using subclasses of `tff.learning.Model`: after wrapping the model as an EnhancedModel, one cannot access the methods that are specifically defined by the model subclass. This CL removes the EnhancedModel wrapper used when computing baseline metrics and training personalized models. This makes sure that users can access the full functionality of the model returned by `model_fn`. PiperOrigin-RevId: 321578480
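A toy, TFF-free illustration of the wrapper problem being fixed; the class names below are made up for the example:

```python
class BaseModel:
  def forward_pass(self, batch):
    return batch

class MyPersonalizationModel(BaseModel):
  def personalize(self, batch):  # Subclass-specific method.
    return batch + 1

class Wrapper:
  """Forwards only the base interface, hiding subclass-specific methods."""
  def __init__(self, model):
    self._model = model

  def forward_pass(self, batch):
    return self._model.forward_pass(batch)

model = MyPersonalizationModel()
print(model.personalize(1))        # 2: available on the unwrapped model.
# Wrapper(model).personalize(1)    # AttributeError: hidden by the wrapper.
```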
- Jul 16, 2020
Michael Reneer authored
PiperOrigin-RevId: 321431928
Keith Rush authored
PiperOrigin-RevId: 321423023
Michael Reneer authored
* Removed the `tff.framework.set_default_executor` API. * Updated the callers to use the higher-level convenient API `tff.backends.*` instead. PiperOrigin-RevId: 321421612
Michael Reneer authored
PiperOrigin-RevId: 321411362
Michael Reneer authored
PiperOrigin-RevId: 321393738
Zachary Garrett authored
PiperOrigin-RevId: 321378169
- Jul 15, 2020
Michael Reneer authored
PiperOrigin-RevId: 321268189
Weikang Song authored
PiperOrigin-RevId: 321266409
Keith Rush authored
PiperOrigin-RevId: 321265603
Michael Reneer authored
This change adds a compiler function to the `ExecutionContext` object. Conceptually a `Context` can be thought of as an "environment" which owns compilation and owns execution for a given computation. Additionally, this change replaces `set_default_executor` with higher level functions in order to simplify how contexts are constructed.
* Added compiler function to the `ExecutionContext`.
* Deprecated `set_default_executor`.
* Removed all usage of `set_default_executor` internally.
* Added convenience high level functions that set an execution context:
  * tff.backends.native.set_local_execution_context
* Updated `set_default_executor` call-sites to either use the convenience high level functions or to manually construct a context and use `set_default_context`.
Note that we should consider creating the following convenience high level functions:
* tff.backends.native.set_remote_execution_context
* tff.backends.native.set_sizing_execution_context
* tff.backends.iree.set_iree_execution_context
PiperOrigin-RevId: 321263709
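A minimal usage sketch of the convenience function listed above; the small computation is illustrative only:

```python
import tensorflow as tf
import tensorflow_federated as tff

# Install the local (native) execution context as the default, replacing the
# older `tff.framework.set_default_executor(...)` pattern.
tff.backends.native.set_local_execution_context()

@tff.tf_computation(tf.int32)
def add_one(x):
  return x + 1

print(add_one(2))  # Runs in the local execution context and returns 3.
```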
Zachary Garrett authored
Semantic changes: - Use the LR schedule with round 0 during the server initialization. Previously this relied on the optimizer builder specifying a default for the learning rate, which was overridden in later rounds. Explicitly call with a "round 0" learning rate to ensure the documented type is adhered to. This can be seen in the type annotations for the `server_optimizer_fn` argument to `build_server_init_fn` and `build_fed_avg_process`. PiperOrigin-RevId: 321237803
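A sketch of the pattern being described, with hypothetical schedule and builder names (only the overall shape matches the commit):

```python
import tensorflow as tf

def server_lr_schedule(round_num):
  # Hypothetical schedule: higher learning rate early, lower later.
  return 1.0 if round_num < 10 else 0.1

def server_optimizer_fn(learning_rate):
  return tf.keras.optimizers.SGD(learning_rate=learning_rate)

# Server initialization now explicitly evaluates the schedule at "round 0"
# rather than relying on the optimizer builder's default learning rate.
initial_server_optimizer = server_optimizer_fn(server_lr_schedule(0))
```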
Keith Rush authored
Taking the string rep of a flattened ndarray can cause non-equal ndarrays to hash to identical values. E.g., the test added here failed previously: arrays that have identical values at the ends of their flattened reps, but differ in the middle, will result in the same key. PiperOrigin-RevId: 321234171
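A small standalone demonstration of the collision: large arrays are summarized with "..." when converted to strings, so two different arrays that agree at the ends of their flattened reps produce the same key.

```python
import numpy as np

a = np.zeros(10000)
b = np.zeros(10000)
b[5000] = 1.0                 # The arrays differ only in the middle.

key_a = str(a.flatten())      # '[0. 0. 0. ... 0. 0. 0.]' (summarized)
key_b = str(b.flatten())

print(key_a == key_b)         # True: non-equal arrays map to the same key.
print(np.array_equal(a, b))   # False.
```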
Keith Rush authored
PiperOrigin-RevId: 321234129
Michael Reneer authored
There are a few users to consider: 1. A developer on TFF wants source at HEAD. 2. A researcher using TFF wants the pip package at release in 90% of scenarios and sometimes (ideally rarely) wants nightly, but never wants to build a pip package. 3. A developer making a framework using TFF wants the pip release and nightly, but never wants to build a pip package. 4. A researcher upstreaming changes to TFF has basically moved from #2 to #3. Fixes: #878 PiperOrigin-RevId: 321226038
Tomer Kaftan authored
Explicitly raise a (clearer) error message when models end up in invalid states due to interleaving graph and eager. In rare cases code may have run w/o crashing when in these invalid states, but it's safer to error with an explanation rather than risk silent failures/fragile behavior. PiperOrigin-RevId: 321192744
Zachary Garrett authored
TrainableModel was removed from TFF a while ago. PiperOrigin-RevId: 321184501
- Jul 11, 2020
Galen Andrew authored
PiperOrigin-RevId: 320693256