tensorpack.utils package

tensorpack.utils.argtools module

tensorpack.utils.argtools.map_arg(**maps)[source]

Apply a mapping to certain arguments before calling the original function.

Parameters:maps (dict) – {argument_name: map_func}
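
To illustrate the idea, here is a simplified sketch of such a decorator (not tensorpack’s actual implementation; the helper `make_square` is a hypothetical example):

```python
import functools
import inspect

def map_arg(**maps):
    """Apply map_func to the named arguments before calling the function.
    A simplified sketch of the documented behavior."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Bind positional/keyword args to parameter names, then remap.
            bound = inspect.signature(func).bind(*args, **kwargs)
            for name, map_func in maps.items():
                if name in bound.arguments:
                    bound.arguments[name] = map_func(bound.arguments[name])
            return func(*bound.args, **bound.kwargs)
        return wrapper
    return decorator

@map_arg(size=int)
def make_square(size):
    return [[0] * size for _ in range(size)]

grid = make_square("3")   # "3" is mapped to 3 before the call
```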
tensorpack.utils.argtools.memoized(user_function)

Alias to functools.lru_cache(). WARNING: memoization will keep keys and values alive!

tensorpack.utils.argtools.memoized_method(func)[source]

A decorator that performs memoization on methods. It stores the cache on the object instance itself.
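
A minimal sketch of how a per-instance cache can work (the cache attribute name and the `Model` class are illustrative, not tensorpack’s internals):

```python
import functools

def memoized_method(func):
    """Memoize a method, storing the cache on the instance itself.
    A simplified sketch of the documented behavior."""
    @functools.wraps(func)
    def wrapper(self, *args):
        cache_name = '_memoized_' + func.__name__
        cache = getattr(self, cache_name, None)
        if cache is None:
            cache = {}
            setattr(self, cache_name, cache)
        if args not in cache:
            cache[args] = func(self, *args)
        return cache[args]
    return wrapper

class Model:
    def __init__(self):
        self.calls = 0

    @memoized_method
    def expensive(self, x):
        self.calls += 1
        return x * x

m = Model()
m.expensive(4)
result = m.expensive(4)   # served from the per-instance cache
```

Because the cache lives on the instance, it is garbage-collected together with the object, unlike a module-level memoization cache.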

tensorpack.utils.argtools.graph_memoized(func)[source]

Like memoized, but keeps one cache per default graph.

tensorpack.utils.argtools.shape2d(a)[source]

Ensure a 2D shape.

Parameters:a – an int or a tuple/list of length 2
Returns:list – of length 2. If a is an int, returns [a, a].
tensorpack.utils.argtools.shape4d(a, data_format='channels_last')[source]

Ensure a 4D shape, to use with 4D symbolic functions.

Parameters:a – an int or a tuple/list of length 2
Returns:list
of length 4. If a is an int, returns [1, a, a, 1]
or [1, 1, a, a] depending on data_format.
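
A plain-Python sketch of the two helpers, assuming the documented behavior:

```python
def shape2d(a):
    """Ensure a 2D shape: an int a becomes [a, a] (sketch)."""
    if isinstance(a, int):
        return [a, a]
    assert len(a) == 2, a
    return list(a)

def shape4d(a, data_format='channels_last'):
    """Ensure a 4D shape by adding batch and channel dimensions."""
    s2d = shape2d(a)
    if data_format == 'channels_last':
        return [1] + s2d + [1]        # NHWC
    return [1, 1] + s2d               # NCHW
```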
tensorpack.utils.argtools.memoized_ignoreargs(func)[source]

A decorator. It performs memoization ignoring the arguments used to call the function.

tensorpack.utils.argtools.log_once[source]

Log a message only once. Calling this function more than once with the same message is a no-op.

Parameters:
  • message (str) – message to log

  • func (str) – the name of the logger method. e.g. “info”, “warn”, “error”.
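
Since log_once is documented as memoized, one way to sketch it is an lru_cache wrapped around a logger call (a simplified illustration, not the exact implementation):

```python
import functools
import logging

@functools.lru_cache(maxsize=None)
def log_once(message, func='info'):
    """Log a message only once; repeated calls with the same arguments
    hit the cache and become no-ops (sketch of the documented behavior)."""
    getattr(logging.getLogger(__name__), func)(message)

log_once("dataset is small", "warning")
log_once("dataset is small", "warning")   # no-op: already logged
```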

tensorpack.utils.argtools.call_only_once(func)[source]

Decorate a method or property of a class so that it can only be called once per instance. Calling it more than once raises an exception.
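
A minimal sketch of such a decorator (the bookkeeping attribute and the `Trainer` class are hypothetical):

```python
import functools

def call_only_once(func):
    """Allow a method to be called only once per instance (simplified sketch)."""
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        called = getattr(self, '_called_once', set())
        if func.__name__ in called:
            raise RuntimeError(
                "Method {} can only be called once per instance!".format(func.__name__))
        called.add(func.__name__)
        self._called_once = called
        return func(self, *args, **kwargs)
    return wrapper

class Trainer:
    @call_only_once
    def setup(self):
        return "initialized"

t = Trainer()
first = t.setup()
try:
    t.setup()
    second_call_failed = False
except RuntimeError:
    second_call_failed = True
```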

tensorpack.utils.concurrency module

class tensorpack.utils.concurrency.StoppableThread(evt=None)[source]

Bases: threading.Thread

A thread that has a ‘stop’ event.

__init__(evt=None)[source]
Parameters:evt (threading.Event) – if None, will create one.
queue_get_stoppable(q)[source]

Take obj from the queue, but give up when the thread is stopped.

queue_put_stoppable(q, obj)[source]

Put obj into the queue, but give up when the thread is stopped.

stop()[source]

Stop the thread

stopped()[source]
Returns:bool – whether the thread is stopped or not
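
The pattern behind queue_get_stoppable can be sketched as polling with a timeout so the thread can notice the stop event (a simplified re-implementation, not tensorpack’s code):

```python
import queue
import threading

class StoppableThread(threading.Thread):
    """A thread with a 'stop' event (sketch of the documented interface)."""
    def __init__(self, evt=None):
        super().__init__()
        self._stop_evt = evt if evt is not None else threading.Event()

    def stop(self):
        self._stop_evt.set()

    def stopped(self):
        return self._stop_evt.is_set()

    def queue_get_stoppable(self, q):
        # Poll with a short timeout so a stop() call is noticed promptly.
        while not self.stopped():
            try:
                return q.get(timeout=0.5)
            except queue.Empty:
                pass

q = queue.Queue()
q.put('item')
th = StoppableThread()
item = th.queue_get_stoppable(q)   # returns immediately: queue not empty
th.stop()
```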
class tensorpack.utils.concurrency.LoopThread(func, pausable=True)[source]

Bases: tensorpack.utils.concurrency.StoppableThread

A pausable thread that simply runs a loop

__init__(func, pausable=True)[source]
Parameters:func – the function to run
pause()[source]

Pause the loop

resume()[source]

Resume the loop

run()[source]

Method representing the thread’s activity.

You may override this method in a subclass. The standard run() method invokes the callable object passed to the object’s constructor as the target argument, if any, with sequential and keyword arguments taken from the args and kwargs arguments, respectively.

class tensorpack.utils.concurrency.ShareSessionThread(th=None)[source]

Bases: threading.Thread

A wrapper around a thread so that the thread uses the default session at start() time.

__init__(th=None)[source]
Parameters:th (threading.Thread or None) –
default_sess()[source]
run()[source]

Method representing the thread’s activity.

You may override this method in a subclass. The standard run() method invokes the callable object passed to the object’s constructor as the target argument, if any, with sequential and keyword arguments taken from the args and kwargs arguments, respectively.

start()[source]

Start the thread’s activity.

It must be called at most once per thread object. It arranges for the object’s run() method to be invoked in a separate thread of control.

This method will raise a RuntimeError if called more than once on the same thread object.

tensorpack.utils.concurrency.ensure_proc_terminate(proc)[source]

Make sure processes terminate when the main process exits.

Parameters:proc (multiprocessing.Process or list) –
class tensorpack.utils.concurrency.OrderedResultGatherProc(data_queue, nr_producer, start=0)[source]

Bases: multiprocessing.context.Process

Gather indexed data from a data queue, and produce results with the original index-based order.

__init__(data_queue, nr_producer, start=0)[source]
Parameters:
  • data_queue (multiprocessing.Queue) – a queue which contains datapoints.

  • nr_producer (int) – number of producer processes. This process will terminate after receiving this many DIE sentinels.

  • start (int) – the rank of the first object

get()[source]
run()[source]

Method to be run in sub-process; can be overridden in sub-class

class tensorpack.utils.concurrency.OrderedContainer(start=0)[source]

Bases: object

Like a queue, but always waits to receive the item with rank (x+1) and produces (x+1) before producing (x+2).

Warning

It is not thread-safe.

__init__(start=0)[source]
Parameters:start (int) – the starting rank.
get()[source]
has_next()[source]
put(rank, val)[source]
Parameters:
  • rank (int) – rank of the element. All elements must have different ranks.

  • val – an object
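
A plain-Python sketch of the buffering behavior (assuming get() yields (rank, val) pairs; a simplified illustration):

```python
class OrderedContainer:
    """Buffer out-of-order (rank, value) pairs; release them in rank order.
    A sketch of the documented behavior. Not thread-safe."""
    def __init__(self, start=0):
        self.ranks = {}
        self.wait_for = start

    def put(self, rank, val):
        self.ranks[rank] = val

    def has_next(self):
        return self.wait_for in self.ranks

    def get(self):
        assert self.has_next()
        rank = self.wait_for
        val = self.ranks.pop(rank)
        self.wait_for += 1
        return rank, val

c = OrderedContainer()
c.put(1, 'b')          # rank 1 arrives first, but is held back
c.put(0, 'a')
results = []
while c.has_next():
    results.append(c.get())
```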

class tensorpack.utils.concurrency.DIE[source]

Bases: object

A placeholder class indicating end of queue

tensorpack.utils.concurrency.mask_sigint()[source]
Returns:If called in the main thread, returns a context where SIGINT is ignored, and yields True. Otherwise yields False.
tensorpack.utils.concurrency.start_proc_mask_signal(proc)[source]

Start process(es) with SIGINT ignored.

Parameters:proc – (multiprocessing.Process or list)

Note

The signal mask is only applied when called from main thread.

tensorpack.utils.fs module

tensorpack.utils.fs.mkdir_p(dirname)[source]

Like “mkdir -p”, make a directory recursively, but do nothing if the directory already exists.

Parameters:dirname (str) –
tensorpack.utils.fs.download(url, dir, filename=None, expect_size=None)[source]

Download URL to a directory. Will figure out the filename automatically from URL, if not given.

tensorpack.utils.fs.recursive_walk(rootdir)[source]
Yields:str – All files in rootdir, recursively.
tensorpack.utils.fs.get_dataset_path(*args)[source]

Get the path to some dataset under $TENSORPACK_DATASET.

Parameters:args – strings to be joined to form path.
Returns:str – path to the dataset.

tensorpack.utils.loadcaffe module

tensorpack.utils.loadcaffe.load_caffe(model_desc, model_file)[source]

Load a caffe model. You must be able to import caffe to use this function.

Parameters:
  • model_desc (str) – path to caffe model description file (.prototxt).

  • model_file (str) – path to caffe model parameter file (.caffemodel).

Returns:dict – the parameters.
tensorpack.utils.loadcaffe.get_caffe_pb()[source]

Get caffe protobuf.

Returns:The imported caffe protobuf module.

tensorpack.utils.logger module

tensorpack.utils.logger.set_logger_dir(dirname, action=None)[source]

Set the directory for global logging.

Parameters:
  • dirname (str) – log directory

  • action (str) –

    an action of [“k”,”d”,”q”] to be performed when the directory exists. Will ask user by default.

    ”d”: delete the directory. Note that the deletion may fail when the directory is used by tensorboard.

    ”k”: keep the directory. This is useful when you resume from a previous training and want the directory to look as if the training was not interrupted. Note that this option does not load old models or any other old states for you. It simply does nothing.

tensorpack.utils.logger.auto_set_dir(action=None, name=None)[source]

Use logger.set_logger_dir() to set the log directory to “./train_log/{scriptname}:{name}”. “scriptname” is the name of the main python file currently running.

tensorpack.utils.logger.get_logger_dir()[source]
Returns:The logger directory, or None if not set. The directory is used for general logging, tensorboard events, checkpoints, etc.

tensorpack.utils.serialize module

tensorpack.utils.serialize.loads(*args, **kwargs)
tensorpack.utils.serialize.dumps(*args, **kwargs)

tensorpack.utils.compatible_serialize module

tensorpack.utils.compatible_serialize.loads(*args, **kwargs)
tensorpack.utils.compatible_serialize.dumps(*args, **kwargs)

tensorpack.utils.stats module

class tensorpack.utils.stats.StatCounter[source]

Bases: object

A simple counter

average
count
feed(v)[source]
Parameters:v (float or np.ndarray) – has to be the same shape between calls.
max
min
reset()[source]
sum
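
A plain-Python sketch of the interface above, assuming feed() simply accumulates values:

```python
class StatCounter:
    """A simple counter (sketch of the documented interface)."""
    def __init__(self):
        self.reset()

    def reset(self):
        self._values = []

    def feed(self, v):
        self._values.append(v)

    @property
    def count(self):
        return len(self._values)

    @property
    def average(self):
        assert self._values
        return sum(self._values) / len(self._values)

    @property
    def sum(self):
        return sum(self._values)

    @property
    def max(self):
        return max(self._values)

    @property
    def min(self):
        return min(self._values)

s = StatCounter()
for v in [1.0, 2.0, 3.0]:
    s.feed(v)
```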
class tensorpack.utils.stats.BinaryStatistics[source]

Bases: object

Statistics for binary decision, including precision, recall, false positive, false negative

false_negative
false_positive
feed(pred, label)[source]
Parameters:
  • pred (np.ndarray) – binary array.

  • label (np.ndarray) – binary array of the same size.

precision
recall
reset()[source]
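
The statistics above follow the standard definitions of precision (TP / predicted positives) and recall (TP / ground-truth positives). A plain-Python sketch, using lists instead of np.ndarray:

```python
class BinaryStatistics:
    """Accumulate binary predictions and labels; expose precision/recall.
    A simplified sketch of the documented interface."""
    def __init__(self):
        self.reset()

    def reset(self):
        self.nr_pos = 0          # ground-truth positives
        self.nr_pred_pos = 0     # predicted positives
        self.corr_pos = 0        # true positives
        self.false_positive = 0
        self.false_negative = 0

    def feed(self, pred, label):
        assert len(pred) == len(label)
        for p, l in zip(pred, label):
            self.nr_pos += (l == 1)
            self.nr_pred_pos += (p == 1)
            self.corr_pos += (p == 1 and l == 1)
            self.false_positive += (p == 1 and l == 0)
            self.false_negative += (p == 0 and l == 1)

    @property
    def precision(self):
        return self.corr_pos / self.nr_pred_pos

    @property
    def recall(self):
        return self.corr_pos / self.nr_pos

stat = BinaryStatistics()
stat.feed([1, 1, 0, 0], [1, 0, 0, 1])
```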
class tensorpack.utils.stats.RatioCounter[source]

Bases: object

A counter to count ratio of something.

count

Returns – int: the count

feed(count, total=1)[source]
Parameters:
  • count (int) – the count of some event of interest.

  • total (int) – the total number of events.

ratio
reset()[source]
total

Returns – int: the total
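
A minimal sketch of the counter, assuming feed() accumulates both numerator and denominator:

```python
class RatioCounter:
    """Count the ratio of events of interest among all events (sketch)."""
    def __init__(self):
        self.reset()

    def reset(self):
        self._cnt = 0
        self._tot = 0

    def feed(self, count, total=1):
        self._cnt += count
        self._tot += total

    @property
    def ratio(self):
        return self._cnt / self._tot

    @property
    def count(self):
        return self._cnt

    @property
    def total(self):
        return self._tot

# e.g. accumulate per-batch error counts
r = RatioCounter()
r.feed(2, total=10)
r.feed(1, total=10)
```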

class tensorpack.utils.stats.Accuracy[source]

Bases: tensorpack.utils.stats.RatioCounter

A RatioCounter with a fancy name

accuracy
class tensorpack.utils.stats.OnlineMoments[source]

Bases: object

Compute 1st and 2nd moments online (to avoid storing all elements).

See algorithm at: https://www.wikiwand.com/en/Algorithms_for_calculating_variance#/Online_algorithm

feed(x)[source]
Parameters:x (float or np.ndarray) – has to be the same shape between calls.
mean
std
variance
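
The linked online algorithm is Welford’s method; a scalar sketch of it (tensorpack’s version also handles np.ndarray inputs):

```python
class OnlineMoments:
    """Welford's online algorithm for mean and variance (scalar sketch)."""
    def __init__(self):
        self._mean = 0.0
        self._M2 = 0.0   # running sum of squared deviations from the mean
        self._n = 0

    def feed(self, x):
        self._n += 1
        delta = x - self._mean
        self._mean += delta / self._n
        self._M2 += delta * (x - self._mean)

    @property
    def mean(self):
        return self._mean

    @property
    def variance(self):
        return self._M2 / self._n

    @property
    def std(self):
        return self.variance ** 0.5

om = OnlineMoments()
for x in [2.0, 4.0, 6.0]:
    om.feed(x)
```

The update is numerically stabler than accumulating sum and sum-of-squares separately, which is the point of doing it online.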

tensorpack.utils.timer module

tensorpack.utils.timer.total_timer(msg)[source]

A context which adds the time spent inside to TotalTimer.

tensorpack.utils.timer.timed_operation(msg, log_start=False)[source]

Surround a context with a timer.

Parameters:
  • msg (str) – the log to print.

  • log_start (bool) – whether to print also at the beginning.

Example

with timed_operation('Good Stuff'):
    time.sleep(1)

Will print:

Good Stuff finished, time:1sec.
tensorpack.utils.timer.print_total_timer()[source]

Print the content of the TotalTimer, if it’s not empty. This function will automatically get called when program exits.

class tensorpack.utils.timer.IterSpeedCounter(print_every, name=None)[source]

Bases: object

Test how often some code gets reached.

Example

Print the speed of the iteration every 100 times.

speed = IterSpeedCounter(100)
for k in range(1000):
    # do something
    speed()
__init__(print_every, name=None)[source]
Parameters:
  • print_every (int) – interval to print.

  • name (str) – name to use when printing.

reset()[source]

tensorpack.utils.viz module

tensorpack.utils.viz.interactive_imshow(img, lclick_cb=None, rclick_cb=None, **kwargs)[source]
Parameters:
  • img (np.ndarray) – an image (expected BGR) to show.

  • lclick_cb, rclick_cb – a callback func(img, x, y) for left/right click events.

  • kwargs – can be {key_cb_a: callback_img, key_cb_b: callback_img}, to specify a callback func(img) for keypress.

Some built-in keypress event handlers:

  • q: destroy the current window

  • x: execute sys.exit()

  • s: save image to “out.png”

tensorpack.utils.viz.stack_patches(patch_list, nr_row, nr_col, border=None, pad=False, bgcolor=255, viz=False, lclick_cb=None)[source]

Stack patches into a grid, to produce visualizations like the following:

https://github.com/tensorpack/tensorpack/raw/master/examples/GAN/demo/BEGAN-CelebA-samples.jpg
Parameters:
  • patch_list (list[ndarray] or ndarray) – NHW or NHWC images in [0,255].

  • nr_row (int), nr_col(int) – rows and cols of the grid. nr_col * nr_row must be no less than len(patch_list).

  • border (int) – border length between images. Defaults to 0.05 * min(patch_width, patch_height).

  • pad (boolean) – when patch_list is a list, pad all patches to the maximum height and width. This option allows stacking patches of different shapes together.

  • bgcolor (int or 3-tuple) – background color in [0, 255]. Either an int or a BGR tuple.

  • viz (bool) – whether to use interactive_imshow() to visualize the results.

  • lclick_cb – A callback function f(patch, patch index in patch_list) to be called when a patch is clicked in imshow.

Returns:

np.ndarray – the stacked image.

tensorpack.utils.viz.gen_stack_patches(patch_list, nr_row=None, nr_col=None, border=None, max_width=1000, max_height=1000, bgcolor=255, viz=False, lclick_cb=None)[source]

Similar to stack_patches() but with a generator interface. It takes a much longer list and yields stacked results one by one. For example, if patch_list contains 1000 images and nr_row==nr_col==10, this generator yields 10 stacked images.

Parameters:
  • nr_row (int), nr_col(int) – rows and cols of each result.

  • max_width (int), max_height(int) – Maximum allowed size of the stacked image. If nr_row/nr_col are None, this number will be used to infer the rows and cols. Otherwise the option is ignored.

  • patch_list, border, viz, lclick_cb – same as in stack_patches().

Yields:

np.ndarray – the stacked image.

tensorpack.utils.viz.dump_dataflow_images(df, index=0, batched=True, number=1000, output_dir=None, scale=1, resize=None, viz=None, flipRGB=False)[source]

Dump or visualize images of a DataFlow.

Parameters:
  • df (DataFlow) – the DataFlow.

  • index (int) – the index of the image component.

  • batched (bool) – whether the component contains batched images (NHW or NHWC) or not (HW or HWC).

  • number (int) – how many datapoints to take from the DataFlow.

  • output_dir (str) – output directory to save images, default to not save.

  • scale (float) – scale the value, usually either 1 or 255.

  • resize (tuple or None) – tuple of (h, w) to resize the images to.

  • viz (tuple or None) – tuple of (h, w) determining the grid size to use with gen_stack_patches() for visualization. No visualization will happen by default.

  • flipRGB (bool) – apply an RGB<->BGR conversion or not.

tensorpack.utils.viz.intensity_to_rgb(intensity, cmap='cubehelix', normalize=False)[source]

Convert a 1-channel matrix of intensities to an RGB image using a colormap. This function requires matplotlib. See the matplotlib colormaps for a list of available colormaps.

Parameters:
  • intensity (np.ndarray) – array of intensities such as saliency.

  • cmap (str) – name of the colormap to use.

  • normalize (bool) – if True, will normalize the intensity so that it has minimum 0 and maximum 1.

Returns:

np.ndarray – an RGB float32 image in range [0, 255], a colored heatmap.

tensorpack.utils.viz.draw_boxes(im, boxes, labels=None, color=None)[source]
Parameters:
  • im (np.ndarray) – a BGR image in range [0,255]. It will not be modified.

  • boxes (np.ndarray) – a numpy array of shape Nx4 where each row is [x1, y1, x2, y2].

  • labels – (list[str] or None)

  • color – a 3-tuple (in range [0, 255]). By default will choose automatically.

Returns:

np.ndarray – a new image.

tensorpack.utils.gpu module

tensorpack.utils.gpu.change_gpu(val)[source]
Returns:a context where CUDA_VISIBLE_DEVICES=val.
tensorpack.utils.gpu.get_num_gpu()[source]
Returns:int – #available GPUs in CUDA_VISIBLE_DEVICES, or in the system.