!3883 Fix docs and fix writer pool not exit when max_file_size too small

Merge pull request !3883 from LiHongzhang/fix_writer_pool_m
mindspore-ci-bot 2020-08-04 15:38:18 +08:00 committed by Gitee
commit 5b7875ba82
3 changed files with 13 additions and 5 deletions

@@ -109,10 +109,18 @@ class SummaryCollector(Callback):
         custom_lineage_data (Union[dict, None]): Allows you to customize the data and present it on the MindInsight
             lineage page. In the custom data, the key type support str, and the value type support str/int/float.
             Default: None, it means there is no custom data.
-        collect_tensor_freq (Optional[int]): Same as the `collect_freq`, but controls TensorSummary specifically.
-            Default: None, which means the frequency is auto-calculated just to collect at most 20 steps TensorSummary.
+        collect_tensor_freq (Optional[int]): Same semantics as `collect_freq`, but controls TensorSummary only.
+            Because TensorSummary data is much larger than other summary data, this parameter is used to reduce
+            its collection. By default, TensorSummary will be collected at most 21 steps, but not more than
+            the number of steps at which other summary data is collected.
+            Default: None, which means to follow the behavior described above. For example, given `collect_freq=10`,
+            when the total step is 600, TensorSummary will be collected at 21 steps while other summary data at 61 steps,
+            but when the total step is 20, both TensorSummary and other summary data will be collected at 3 steps.
+            Also note that in parallel mode the total step will be split evenly, which affects the number of
+            steps at which TensorSummary is collected.
         max_file_size (Optional[int]): The maximum size in bytes each file can be written to the disk.
-            Default: None, which means no limit.
+            Default: None, which means no limit. For example, to write data not larger than 4GB,
+            specify `max_file_size=4 * 1024**3`.
 
     Raises:
         ValueError: If the parameter value is not expected.
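The documented examples (21 TensorSummary collections for 600 steps, 3 for 20 steps) can be reproduced by a small model of the auto-calculation. This is an assumption for illustration, not the actual MindSpore implementation; the function names `auto_tensor_freq` and `collected_steps` are hypothetical.

```python
# Hypothetical model of the auto-calculated TensorSummary frequency described
# in the docstring above (not the actual MindSpore code).

def auto_tensor_freq(collect_freq: int, total_steps: int) -> int:
    """Collect TensorSummary at most ~21 times, never more often than other data."""
    return max(collect_freq, total_steps // 20)

def collected_steps(freq: int, total_steps: int) -> int:
    """Number of collections at steps 0, freq, 2*freq, ..., total_steps."""
    return total_steps // freq + 1
```

With `collect_freq=10` and 600 total steps this yields 21 TensorSummary collections versus 61 for other summary data; with 20 total steps, both are collected 3 times, matching the examples in the docstring.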

@@ -66,7 +66,7 @@ class WriterPool(Process):
                     for plugin, data in deq.popleft().get():
                         self._write(plugin, data)
 
-                if not self._queue.empty() and self._writers:
+                if not self._queue.empty():
                     action, data = self._queue.get()
                     if action == 'WRITE':
                         deq.append(pool.apply_async(_pack_data, (data, time.time())))
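The removed `and self._writers` clause is what kept the process alive: when `max_file_size` is too small, every writer is closed early, the writer list becomes empty, and the loop stops reading the queue, so a shutdown message is never seen. A simplified model of that consumer loop (an assumption for illustration, not the actual `WriterPool` code; `consume` and the `'END'` message name are hypothetical) shows why the queue must always be drained:

```python
import queue

# Simplified model of a writer-pool consumer loop. If reading from the
# queue were gated on a non-empty writer list, an empty list would leave
# the 'END' message unread forever and the process could never exit.

def consume(q: "queue.Queue", writers: list) -> list:
    """Drain the queue until 'END' arrives, even when no writers remain."""
    written = []
    while True:
        action, data = q.get()       # always read, regardless of writers
        if action == 'END':
            return written           # clean exit, even with writers == []
        if action == 'WRITE' and writers:
            written.append(data)     # only write while writers exist
```

With the old condition, an empty writer list would have left `'END'` sitting in the queue indefinitely; with the fix, the loop still consumes messages and can terminate.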

@@ -81,7 +81,7 @@ class SummaryRecord:
         file_suffix (str): The suffix of file. Default: "_MS".
         network (Cell): Obtain a pipeline through network for saving graph summary. Default: None.
         max_file_size (Optional[int]): The maximum size in bytes each file can be written to the disk. \
-            Unlimited by default.
+            Unlimited by default. For example, to write data not larger than 4GB, specify `max_file_size=4 * 1024**3`.
 
     Raises:
         TypeError: If `max_file_size`, `queue_max_size` or `flush_time` is not int, \