Add pytype annotations to shared research training utilities.
Semantic changes:
- Use the LR schedule with round 0 during server initialization. Previously this relied on the optimizer builder specifying a default for the learning rate, which was overridden in later rounds. Explicitly calling with the "round 0" learning rate ensures the documented type is adhered to. This can be seen in the type annotations for the `server_optimizer_fn` argument to `build_server_init_fn` and `build_fed_avg_process`.

PiperOrigin-RevId: 321237803
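A minimal sketch of the pattern described above, using hypothetical stand-in names (the schedule, optimizer class, and builder below are illustrative assumptions, not the actual TFF code): the server optimizer builder takes an explicit learning rate, and server initialization calls the LR schedule with round 0 instead of relying on a builder default.

```python
from typing import Callable


def lr_schedule(round_num: int) -> float:
    """Toy learning-rate schedule (assumption, for illustration only)."""
    return 0.1 * (0.99 ** round_num)


class SGDOptimizer:
    """Minimal stand-in for a framework optimizer (assumption)."""

    def __init__(self, learning_rate: float):
        self.learning_rate = learning_rate


# Matches the documented type: a callable from learning rate to optimizer.
ServerOptimizerFn = Callable[[float], SGDOptimizer]


def build_server_init_fn(server_optimizer_fn: ServerOptimizerFn):
    def server_init() -> SGDOptimizer:
        # Explicitly pass the round-0 learning rate, rather than letting
        # the optimizer builder fall back to a default that later rounds
        # would override.
        return server_optimizer_fn(lr_schedule(0))

    return server_init


init_fn = build_server_init_fn(lambda lr: SGDOptimizer(learning_rate=lr))
assert init_fn().learning_rate == lr_schedule(0)
```

Making `server_optimizer_fn` accept the learning rate explicitly is what lets the type annotation document the contract: the builder can no longer silently depend on an implicit default.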
- tensorflow_federated/python/research/optimization/shared/fed_avg_schedule.py (56 additions, 37 deletions)
- tensorflow_federated/python/research/optimization/shared/iterative_process_builder.py (18 additions, 8 deletions)