
tqdm

tqdm is a Python progress-bar library that adds a progress indicator to long-running loops. You simply wrap any iterable; tqdm is a fast and highly extensible progress-bar tool.

Installation: pip install tqdm

Usage

1. Passing an iterable such as range

import time
from tqdm import tqdm

for i in tqdm(range(1000)):
    time.sleep(.01)   # the bar advances every 0.01 s, so the total time is 1000*0.01 = 10 s

# Output:
100%|██████████| 1000/1000 [00:10<00:00, 93.21it/s]  

Using trange: trange(i) is a shorthand for tqdm(range(i)).

import time
from tqdm import trange

for i in trange(1000):
    time.sleep(.01)

# Output:
100%|██████████| 1000/1000 [00:10<00:00, 93.21it/s]  
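
Because tqdm only needs an iterable, it also works on generators. A generator has no length, so tqdm cannot show a percentage unless you pass total yourself. A minimal sketch (the produce generator here is made up for illustration):

import time
from tqdm import tqdm

def produce(n):
    # a generator: it has no __len__, so tqdm cannot infer the total
    for i in range(n):
        yield i

for item in tqdm(produce(100), total=100):  # pass total explicitly
    time.sleep(0.01)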

2. Setting a description with set_description

Initialize tqdm outside the for loop so that you can print additional information:

import time
from tqdm import tqdm

pbar = tqdm(["a","b","c","d"])

for char in pbar:
    pbar.set_description("Processing %s" % char) # set the bar's description
    time.sleep(1)  # allocate 1 s per task
    
# Output:
  0%|          | 0/4 [00:00<?, ?it/s]

Processing a:   0%|          | 0/4 [00:00<?, ?it/s]
Processing a:  25%|██▌       | 1/4 [00:01<00:03,  1.01s/it]
Processing b:  25%|██▌       | 1/4 [00:01<00:03,  1.01s/it]
Processing b:  50%|█████     | 2/4 [00:02<00:02,  1.01s/it]
Processing c:  50%|█████     | 2/4 [00:02<00:02,  1.01s/it]
Processing c:  75%|███████▌  | 3/4 [00:03<00:01,  1.01s/it]
Processing d:  75%|███████▌  | 3/4 [00:03<00:01,  1.01s/it]
Processing d: 100%|██████████| 4/4 [00:04<00:00,  1.01s/it]

3. Controlling the progress manually

import time
from tqdm import tqdm

with tqdm(total=200) as pbar:
    for i in range(20):
        pbar.update(10)
        time.sleep(.1)

# Output: update was called 20 times in total
  0%|          | 0/200 [00:00<?, ?it/s]
 10%|█         | 20/200 [00:00<00:00, 199.48it/s]
 15%|█▌        | 30/200 [00:00<00:01, 150.95it/s]
 20%|██        | 40/200 [00:00<00:01, 128.76it/s]
 25%|██▌       | 50/200 [00:00<00:01, 115.72it/s]

4. The tqdm.write method
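
tqdm.write prints a message while a bar is running without corrupting it; a plain print would leave broken fragments of the bar on the screen. A minimal example:

import time
from tqdm import tqdm

for i in tqdm(range(10)):
    time.sleep(0.1)
    if i % 3 == 0:
        # printed on its own line above the bar; the bar itself stays intact
        tqdm.write("finished step %d" % i)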

5. Manually setting the amount of progress

The update method controls how far the bar advances on each call:

from tqdm import tqdm
import time

# the total parameter sets the overall length of the bar
with tqdm(total=100) as pbar:
    for i in range(100):
        time.sleep(0.1)
        pbar.update(1)  # advance the bar by 1 on each iteration
# Output
  0%|          | 0/100 [00:00<?, ?it/s]
  1%|          | 1/100 [00:00<00:09,  9.98it/s]
  2%|▏         | 2/100 [00:00<00:09,  9.83it/s]
  3%|▎         | 3/100 [00:00<00:10,  9.65it/s]
  4%|▍         | 4/100 [00:00<00:10,  9.53it/s]
  5%|▌         | 5/100 [00:00<00:09,  9.55it/s]
  ...
  100%|██████████| 100/100 [00:10<00:00,  9.45it/s]

Besides the with statement, the same effect can be achieved in another way:

from tqdm import tqdm
import time

# the total parameter sets the overall length of the bar
pbar = tqdm(total=100)
for i in range(100):
    time.sleep(0.05)
    # advance the bar by 1 on each iteration
    pbar.update(1)
# don't forget to close the bar and release its resources
pbar.close()

6. Customizing the information displayed on the bar

Use set_description and set_postfix to set what the bar displays:

from tqdm import trange
from random import random, randint
import time

with trange(10) as t:
    for i in t:
        # information shown on the left of the bar
        t.set_description("GEN %i" % i)
        # information shown on the right of the bar
        t.set_postfix(loss=random(), gen=randint(1, 999), str="h", lst=[1, 2])
        time.sleep(0.1)

Keeping the progress bar on a single line

from tqdm import tqdm
import time
for i in tqdm(range(1000), ncols=10): 
    time.sleep(0.001)

When the bar is wider than the terminal window, it wraps onto a new line at every refresh, so the bar appears to move down the screen. Setting the width to a value that fits the window keeps it on a single line; the width is controlled by the ncols parameter (ncols=10 above).
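
A related option, not covered above, is dynamic_ncols=True, which re-reads the terminal width on every refresh so the bar adapts if the window is resized:

from tqdm import tqdm
import time

# dynamic_ncols=True makes the bar track the current terminal width
for i in tqdm(range(1000), dynamic_ncols=True):
    time.sleep(0.001)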

How to use tqdm in deep learning

Below is a handwritten-digit (MNIST) recognition example:

import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.utils.data import DataLoader
import torchvision.datasets as datasets
import torchvision.transforms as transforms
from tqdm import tqdm

class CNN(nn.Module):
    def __init__(self, in_channels=1, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels=in_channels, out_channels=8, kernel_size=(3,3), stride=(1,1), padding=(1,1))
        self.pool = nn.MaxPool2d(kernel_size=(2,2), stride=(2,2))
        self.conv2 = nn.Conv2d(in_channels=8, out_channels=16, kernel_size=(3,3), stride=(1,1), padding=(1,1))
        self.fc1 = nn.Linear(16*7*7, num_classes)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = self.pool(x)
        x = F.relu(self.conv2(x))
        x = self.pool(x)
        x = x.reshape(x.shape[0], -1)
        x = self.fc1(x)
        return x

# Set device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
# Hyperparameters
in_channels = 1
num_classes = 10
learning_rate = 0.001
batch_size = 64
num_epochs = 5

# Load Data
train_dataset = datasets.MNIST(root="dataset/", train=True, transform=transforms.ToTensor(), download=True)
train_loader = DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)

test_dataset = datasets.MNIST(root="dataset/", train=False, transform=transforms.ToTensor(), download=True)
test_loader = DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=False)

# Initialize network
model = CNN().to(device)

# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(),lr=learning_rate)

# Train Network

for epoch in range(num_epochs):
    # for data, targets in tqdm(train_loader, leave=False):  # keep the bar on one line
    for data, targets in tqdm(train_loader):
        # Get data to cuda if possible
        data = data.to(device=device)
        targets = targets.to(device=device)

        # forward
        scores = model(data)
        loss = criterion(scores, targets)

        # backward
        optimizer.zero_grad()
        loss.backward()

        # gradient descent or adam step
        optimizer.step()

If train_loader is not wrapped in tqdm, nothing is displayed; after adding it, the output looks like this:

100%|██████████| 938/938 [00:06<00:00, 152.23it/s]
100%|██████████| 938/938 [00:06<00:00, 153.74it/s]
100%|██████████| 938/938 [00:06<00:00, 155.11it/s]
100%|██████████| 938/938 [00:06<00:00, 153.08it/s]
100%|██████████| 938/938 [00:06<00:00, 153.57it/s]

These are the five bars for the five epochs. If you want them displayed on a single line, add the leave=False parameter to tqdm:

for data, targets in tqdm(train_loader, leave=False):  # keep the bar on one line

Note: wrapping train_loader in tqdm gives no access to the batch index. To get the index, combine tqdm with enumerate:

for index, (data, targets) in tqdm(enumerate(train_loader), total=len(train_loader), leave=True):
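
As a hypothetical use of the index, you could log every 100th batch with tqdm.write (see section 4) without disturbing the bar; this sketch shows only the index handling, and the training steps from the full example below still go inside the loop:

for index, (data, targets) in tqdm(enumerate(train_loader), total=len(train_loader), leave=True):
    if index % 100 == 0:
        tqdm.write("reached batch %d" % index)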

The bar is still missing information we care about, such as the accuracy and the loss. Here is how to add them:

for epoch in range(num_epochs):
    losses = []
    accuracy = []

    loop = tqdm(train_loader, total=len(train_loader))
    for data, targets in loop:
        # Get data to cuda if possible
        data = data.to(device=device)
        targets = targets.to(device=device)

        # forward
        scores = model(data)
        loss = criterion(scores, targets)
        losses.append(loss.item())  # store the scalar value, not the tensor, so the graph is freed

        # backward
        optimizer.zero_grad()
        loss.backward()

        # track the running training accuracy
        _, predictions = scores.max(1)
        num_correct = (predictions == targets).sum()
        running_train_acc = float(num_correct) / float(data.shape[0])
        accuracy.append(running_train_acc)

        # gradient descent or adam step
        optimizer.step()

        loop.set_description(f'Epoch [{epoch}/{num_epochs}]')
        loop.set_postfix(loss=loss.item(), acc=running_train_acc)
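
The losses and accuracy lists collected above are otherwise unused; one optional variant, not from the original, is to display their running averages on the bar instead of the latest per-batch values:

        # inside the batch loop, in place of the last set_postfix call:
        loop.set_postfix(loss=sum(losses) / len(losses),
                         acc=sum(accuracy) / len(accuracy))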

You can see the acc, epoch, and loss all printed in the console. That covers the basics of using tqdm.

This article is adapted from the post below; in case of infringement, please contact for removal.

https://blog.csdn.net/wxd1233/article/details/118371404