dlprog

Deep Learning Progress

A Python library for progress bars that aggregate each iteration's value.
It helps you track the loss of each epoch in deep learning or machine learning training.

(demo animation)

Installation

pip install dlprog

General Usage

Setup

from dlprog import Progress
prog = Progress()

Example

import random
import time
n_epochs = 3
n_iter = 10

prog.start(n_epochs=n_epochs, n_iter=n_iter, label='value') # Initialize start time and epoch.
for _ in range(n_epochs):
    for _ in range(n_iter):
        time.sleep(0.1)
        value = random.random()
        prog.update(value) # Update progress bar and aggregate value.
1/3: ######################################## 100% [00:00:01.06] value: 0.64755 
2/3: ######################################## 100% [00:00:01.05] value: 0.41097 
3/3: ######################################## 100% [00:00:01.06] value: 0.26648 

Get each epoch's value

>>> prog.values
[0.6475490908029968, 0.4109736504929395, 0.26648041702649705]

Call the get_all_values() method to get every iteration's value, and the get_all_times() method to get every iteration's elapsed time.
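
For example, a minimal sketch (the nested shape noted in the comments is an assumption; see the API Reference for the exact return format):

all_values = prog.get_all_values()  # assumed: one list of per-iteration values per epoch
all_times = prog.get_all_times()    # assumed: per-iteration elapsed times in the same layout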

In machine learning training

Setup.
The train_progress function is a shortcut for the Progress class. It returns a progress bar suited for machine learning training.

from dlprog import train_progress
prog = train_progress()

Example: training a deep learning model with PyTorch (dataloader, model, criterion, and optimizer are assumed to be defined).

n_epochs = 3
n_iter = len(dataloader)

prog.start(n_epochs=n_epochs, n_iter=n_iter)
for _ in range(n_epochs):
    for x, label in dataloader:
        optimizer.zero_grad()
        y = model(x)
        loss = criterion(y, label)
        loss.backward()
        optimizer.step()
        prog.update(loss.item())

Output

1/3: ######################################## 100% [00:00:03.08] loss: 0.34099 
2/3: ######################################## 100% [00:00:03.12] loss: 0.15259 
3/3: ######################################## 100% [00:00:03.14] loss: 0.10684 

If you want exact aggregated values weighted by batch size:

prog.update(loss.item(), weight=len(x))
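
With weight, the aggregated epoch value becomes a weighted average instead of a plain mean. A minimal sketch of the computation (illustrative numbers; assumed to match how dlprog aggregates weighted updates):

values = [0.5, 0.3]   # per-batch losses (illustrative)
weights = [32, 16]    # batch sizes
epoch_value = sum(v * w for v, w in zip(values, weights)) / sum(weights)  # 0.43333...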

Advanced usage

Advanced arguments, functions, and other features.
See the API Reference for more details.

leave_freq

Argument that controls how often the finished progress bar is left on screen; with leave_freq=4, only every 4th epoch's bar remains.

n_epochs = 12
n_iter = 10
prog.start(n_epochs=n_epochs, n_iter=n_iter, leave_freq=4)
for _ in range(n_epochs):
    for _ in range(n_iter):
        time.sleep(0.1)
        value = random.random()
        prog.update(value)

Output

 4/12: ######################################## 100% [00:00:01.06] loss: 0.34203 
 8/12: ######################################## 100% [00:00:01.05] loss: 0.47886 
12/12: ######################################## 100% [00:00:01.05] loss: 0.40241 

unit

Argument that treats multiple epochs as a single unit; with unit=4, values are aggregated over 4 epochs at a time.

n_epochs = 12
n_iter = 10
prog.start(n_epochs=n_epochs, n_iter=n_iter, unit=4)
for _ in range(n_epochs):
    for _ in range(n_iter):
        time.sleep(0.1)
        value = random.random()
        prog.update(value)

Output

  1-4/12: ######################################## 100% [00:00:04.21] value: 0.49179 
  5-8/12: ######################################## 100% [00:00:04.20] value: 0.51518 
 9-12/12: ######################################## 100% [00:00:04.18] value: 0.54546 

Add note

You can add a note to the progress bar.

n_iter = 10
prog.start(n_iter=n_iter, note='This is a note')
for _ in range(n_iter):
    time.sleep(0.1)
    value = random.random()
    prog.update(value)

Output

1: ######################################## 100% [00:00:01.05] 0.58703, This is a note 

You can also add a note when calling update() via its note argument, as sketched below. In addition, if defer=True, you can add a note at the end of an epoch using memo(); the following example demonstrates this.
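
A minimal sketch of a per-update note (assuming update()'s note argument takes a string like start()'s; the note text is only illustrative):

prog.update(value, note='after warmup')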

n_epochs = 3
prog.start(
    n_epochs=n_epochs,
    n_iter=len(trainloader),
    label='train_loss',
    defer=True,
    width=20,
)
for _ in range(n_epochs):
    for x, label in trainloader:
        optimizer.zero_grad()
        y = model(x)
        loss = criterion(y, label)
        loss.backward()
        optimizer.step()
        prog.update(loss.item())
    test_loss = eval_model(model)
    prog.memo(f'test_loss: {test_loss:.5f}')

Output

1/3: #################### 100% [00:00:02.83] train_loss: 0.34094, test_loss: 0.18194 
2/3: #################### 100% [00:00:02.70] train_loss: 0.15433, test_loss: 0.12987 
3/3: #################### 100% [00:00:02.79] train_loss: 0.10651, test_loss: 0.09783 

Multiple values

If you want to aggregate multiple values, set n_values and pass the values as a list.

n_epochs = 3
n_iter = 10
prog.start(n_epochs=n_epochs, n_iter=n_iter, n_values=2)
for _ in range(n_epochs):
    for _ in range(n_iter):
        time.sleep(0.1)
        value1 = random.random()
        value2 = random.random() * 10
        prog.update([value1, value2])

Output

1/3: ######################################## 100% [00:00:01.05] 0.47956, 4.96049 
2/3: ######################################## 100% [00:00:01.05] 0.30275, 4.86003 
3/3: ######################################## 100% [00:00:01.05] 0.43296, 3.31025 

You can pass multiple labels as a list instead of setting n_values.

prog.start(n_iter=n_iter, label=['value1', 'value2'])
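
For completeness, a minimal sketch of the full labeled loop (the output format is assumed to mirror the single-label case, i.e. value1: ..., value2: ...):

prog.start(n_iter=n_iter, label=['value1', 'value2'])
for _ in range(n_iter):
    time.sleep(0.1)
    prog.update([random.random(), random.random() * 10])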

Default attributes

The Progress object keeps its constructor arguments as default attributes.
These attributes are used when they are not specified in start().

Attributes specified in start() take precedence while that run is in progress (until the next start() or reset()).

If the required attribute (n_iter) has already been specified, start() can be skipped, as sketched below.
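
A minimal sketch (assuming the constructor accepts the same arguments as start() and the bar starts on the first update()):

prog = Progress(n_iter=10, label='value')
for _ in range(10):
    prog.update(random.random())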

Version History

1.0.0 (2023-07-13)

  • Add Progress class.
  • Add train_progress function.

1.1.0 (2023-07-13)

  • Add values attribute.
  • Add leave_freq argument.
  • Add unit argument.

1.2.0 (2023-09-24)

  • Add note argument, memo() method, and defer argument.
  • Support multiple values.
  • Add round argument.
  • Support changing separator strings.
  • Support skipping start().
  • Write API Reference.
  • Other minor adjustments.

1.2.1 (2023-09-25)

  • Support note=None in memo().
  • Change timing of note reset from epoch_reset to bar_reset.

1.2.2 (2023-09-25)

  • Fix bug where note was not set to None by default in memo().

1.2.3 (2023-11-28)

  • Fix bug where the label argument was not available when with_test=True in train_progress().

1.2.4 (2023-11-29)

  • Fix bug where the width argument was not available when with_test=True in train_progress().

1.2.5 (2024-01-17)

  • Add get_all_values() method.
  • Add get_all_times() method.

1.2.6 (2024-01-18)

  • Fix bug where the time (minutes) was not displayed correctly.

1.2.7 (2024-05-10, Latest)

  • Add store_all_values and store_all_times arguments.
