YOLO11 Improvement | Attention Mechanism | SGE, a Lightweight Spatial Group-wise Enhance Module [Exclusive]



💡💡💡 All code in this column has been tested and runs successfully. 💡💡💡


鏈枃浠嬬粛浜嗕竴涓┖闂寸粍澧炲己锛圫GE锛夋ā鍧楋紝瀹冨彲浠ヤ负姣忎釜璇箟缁勪腑姣忎釜绌洪棿浣嶇疆鐢熸垚涓€涓敞鎰忓姏鍥犲瓙锛屼粠鑰岃皟鏁存瘡涓瓙鐗瑰緛鐨勯噸瑕佹€э紝浠ヤ究姣忎釜缁勫彲浠ヨ嚜涓诲湴澧炲己鍏跺涔犵殑琛ㄨ揪骞舵姂鍒跺彲鑳界殑鍣0銆傛枃绔犲湪浠嬬粛涓昏鐨勫師鐞嗗悗锛屽皢鎵嬫妸鎵嬫暀瀛﹀浣曡繘琛屾ā鍧楃殑浠g爜娣诲姞鍜屼慨鏀?/strong>锛屽苟灏嗕慨鏀瑰悗鐨?span >瀹屾暣浠g爜鏀惧湪鏂囩珷鐨勬渶鍚?/strong>锛屾柟渚垮ぇ瀹朵竴閿繍琛岋紝灏忕櫧涔熷彲杞绘澗涓婃墜瀹炶返銆備互甯姪鎮ㄦ洿濂藉湴瀛︿範娣卞害瀛︿範鐩爣妫€娴媃OLO绯诲垪鐨勬寫鎴樸€?/p>

Column link: YOLO11 Getting Started + Improvements for Higher Accuracy. Subscriptions welcome.

鐩綍

1. 璁烘枃

2. 灏哠GE娣诲姞鍒癥OLO11涓?/p>

2.1 浠g爜瀹炵幇

2.2 鏇存敼init.py鏂囦欢

2.3 鏂板yaml鏂囦欢

2.4 娉ㄥ唽妯″潡

2.5 鎵ц绋嬪簭

3.淇敼鍚庣殑缃戠粶缁撴瀯鍥?/p>

4. 瀹屾暣浠g爜鍒嗕韩

5. GFLOPs

6. 杩涢樁

7.鎬荤粨


1. Paper

Paper: Spatial Group-wise Enhance: Improving Semantic Feature Learning in Convolutional Networks
Official code: the official code repository
2. Adding SGE to YOLO11

2.1 Code Implementation

鍏抽敭姝ラ涓€锛?/strong>灏嗕笅闈唬鐮佺矘璐村埌鍦?ultralytics/ultralytics/nn/modules/block.py涓?/p>

import torch
import torch.nn as nn
from torch.nn import init

class SpatialGroupEnhance(nn.Module):
    def __init__(self, groups=8):
        super().__init__()
        self.groups = groups
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.weight = nn.Parameter(torch.zeros(1, groups, 1, 1))
        self.bias = nn.Parameter(torch.zeros(1, groups, 1, 1))
        self.sig = nn.Sigmoid()
        self.init_weights()

    def init_weights(self):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                init.kaiming_normal_(m.weight, mode='fan_out')
                if m.bias is not None:
                    init.constant_(m.bias, 0)
            elif isinstance(m, nn.BatchNorm2d):
                init.constant_(m.weight, 1)
                init.constant_(m.bias, 0)
            elif isinstance(m, nn.Linear):
                init.normal_(m.weight, std=0.001)
                if m.bias is not None:
                    init.constant_(m.bias, 0)

    def forward(self, x):
        b, c, h, w = x.shape
        x = x.view(b * self.groups, -1, h, w)  # bs*g,dim//g,h,w
        xn = x * self.avg_pool(x)  # bs*g,dim//g,h,w
        xn = xn.sum(dim=1, keepdim=True)  # bs*g,1,h,w
        t = xn.view(b * self.groups, -1)  # bs*g,h*w

        t = t - t.mean(dim=1, keepdim=True)  # bs*g,h*w
        std = t.std(dim=1, keepdim=True) + 1e-5
        t = t / std  # bs*g,h*w
        t = t.view(b, self.groups, h, w)  # bs,g,h,w

        t = t * self.weight + self.bias  # bs,g,h,w
        t = t.view(b * self.groups, 1, h, w)  # bs*g,1,h,w
        x = x * self.sig(t)
        x = x.view(b, c, h, w)
        return x
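The following standalone snippet is a quick sanity check that the module preserves tensor shape and stays lightweight. It inlines a condensed copy of the class above (init_weights is omitted: it is a no-op here, because the module contains no Conv2d, BatchNorm2d, or Linear layers); the input shape is chosen arbitrarily.

```python
import torch
import torch.nn as nn

class SGE(nn.Module):
    """Condensed copy of SpatialGroupEnhance above, for a standalone check."""
    def __init__(self, groups=8):
        super().__init__()
        self.groups = groups
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.weight = nn.Parameter(torch.zeros(1, groups, 1, 1))
        self.bias = nn.Parameter(torch.zeros(1, groups, 1, 1))
        self.sig = nn.Sigmoid()

    def forward(self, x):
        b, c, h, w = x.shape
        x = x.view(b * self.groups, -1, h, w)
        xn = (x * self.avg_pool(x)).sum(dim=1, keepdim=True)
        t = xn.view(b * self.groups, -1)
        t = (t - t.mean(dim=1, keepdim=True)) / (t.std(dim=1, keepdim=True) + 1e-5)
        t = t.view(b, self.groups, h, w) * self.weight + self.bias
        x = x * self.sig(t.view(b * self.groups, 1, h, w))
        return x.view(b, c, h, w)

sge = SGE(groups=8)
x = torch.randn(2, 64, 16, 16)   # channel count must be divisible by groups
y = sge(x)
print(y.shape)                   # torch.Size([2, 64, 16, 16]) -- shape preserved
print(sum(p.numel() for p in sge.parameters()))  # 16, i.e. 2 * groups
# With zero-initialized weight and bias, sigmoid(0) = 0.5, so y == 0.5 * x.
```

Because the only learnable parameters are the per-group weight and bias, the module adds just 2 × groups parameters, which is why it is described as lightweight.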

2.2 Update the __init__.py File

Key step 2: edit the __init__.py file in the modules folder. First, import the class.

Then declare the class in __all__ below.
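Concretely, the two edits look like this (a hypothetical sketch; the existing entries in your __init__.py will differ by ultralytics version):

```python
# ultralytics/nn/modules/__init__.py
from .block import SpatialGroupEnhance  # 1) import the new class

__all__ = (
    # ... keep all existing names as-is ...
    "SpatialGroupEnhance",  # 2) export it so parse_model can find it
)
```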
2.3 Add a New YAML File

Key step 3: create a new file yolo11_SGE.yaml under \ultralytics\ultralytics\cfg\models\11 and copy the code below into it.

  • 鐩爣妫€娴?/li>
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2, [256, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2, [512, False, 0.25]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA, [1024]] # 10

# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2, [512, False]] # 13

  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2, [256, False]] # 16 (P3/8-small)

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2, [512, False]] # 19 (P4/16-medium)

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2, [1024, True]] # 22 (P5/32-large)
  - [-1, 1, SpatialGroupEnhance, [1024]]

  - [[16, 19, 23], 1, Detect, [nc]] # Detect(P3, P4, P5)

  • 璇箟鍒嗗壊
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2, [256, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2, [512, False, 0.25]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA, [1024]] # 10

# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2, [512, False]] # 13

  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2, [256, False]] # 16 (P3/8-small)

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2, [512, False]] # 19 (P4/16-medium)

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2, [1024, True]] # 22 (P5/32-large)
  - [-1, 1, SpatialGroupEnhance, [1024]]

  - [[16, 19, 23], 1, Segment, [nc, 32, 256]] # Segment(P3, P4, P5)
  • Oriented object detection (OBB)
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2, [256, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2, [512, False, 0.25]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA, [1024]] # 10

# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2, [512, False]] # 13

  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2, [256, False]] # 16 (P3/8-small)

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2, [512, False]] # 19 (P4/16-medium)

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2, [1024, True]] # 22 (P5/32-large)
  - [-1, 1, SpatialGroupEnhance, [1024]]

  - [[16, 19, 23], 1, OBB, [nc, 1]] # OBB(P3, P4, P5)

Note: this article only adds the module on top of the base yolo11 config. To apply it to yolo11n/s/m/l/x, simply specify the corresponding depth_multiple and width_multiple:


# YOLO11n
depth_multiple: 0.50  # model depth multiple
width_multiple: 0.25  # layer channel multiple
max_channels: 1024

# YOLO11s
depth_multiple: 0.50  # model depth multiple
width_multiple: 0.50  # layer channel multiple
max_channels: 1024

# YOLO11m
depth_multiple: 0.50  # model depth multiple
width_multiple: 1.00  # layer channel multiple
max_channels: 512

# YOLO11l
depth_multiple: 1.00  # model depth multiple
width_multiple: 1.00  # layer channel multiple
max_channels: 512

# YOLO11x
depth_multiple: 1.00  # model depth multiple
width_multiple: 1.50  # layer channel multiple
max_channels: 512
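As a sanity check on how these constants interact with the channel numbers in the yaml: parse_model caps each nominal channel count at max_channels and multiplies it by width_multiple (the real code additionally rounds to a multiple of 8 via make_divisible, omitted in this simplified sketch):

```python
def scaled_channels(c, width_multiple, max_channels):
    # Simplified version of ultralytics' channel scaling; the real
    # parse_model additionally rounds via make_divisible(..., 8).
    return int(min(c, max_channels) * width_multiple)

# The nominal 1024 passed to SPPF/C2PSA/SpatialGroupEnhance becomes, per scale:
for name, (w, mc) in {"n": (0.25, 1024), "s": (0.50, 1024), "m": (1.00, 512),
                      "l": (1.00, 512), "x": (1.50, 512)}.items():
    print(name, scaled_channels(1024, w, mc))
# n 256, s 512, m 512, l 512, x 768
```

The 256 for the n scale matches the 256-channel SGE layer in the model summary shown in section 2.5.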

2.4 Register the Module

Key step 4: register the module in the parse_model function by adding the content below.

First, import the class in tasks.py.

Then locate the parse_model function in the tasks.py file and add SpatialGroupEnhance to it.
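As a rough sketch (hypothetical; the placement and style of this branch differ across ultralytics versions), the addition inside parse_model can look like the fragment below. Judging from the model summary in section 2.5, where the SGE layer has 512 parameters at 256 channels (i.e. 2 × groups with groups = 256), the scaled channel count is passed as the groups argument:

```python
# Inside parse_model in ultralytics/nn/tasks.py -- hypothetical fragment,
# placed alongside the other per-module branches:
        elif m is SpatialGroupEnhance:
            c2 = ch[f]    # SGE leaves the channel count unchanged
            args = [c2]   # channel count passed as the groups argument
```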
2.5 Run the Program

In train.py, set the model path argument to the path of yolo11_SGE.yaml.

寤鸿澶у鍐欑粷瀵硅矾寰勶紝纭繚涓€瀹氳兘鎵惧埌

from ultralytics import YOLO
import warnings
warnings.filterwarnings('ignore')
from pathlib import Path

if __name__ == '__main__':
    # Load the model
    model = YOLO("ultralytics/cfg/models/11/yolo11_SGE.yaml")  # path to the model yaml file you chose
    # Train the model
    results = model.train(data=r"path/to/your/dataset.yaml",  # your dataset's yaml file path
                          epochs=100, batch=16, imgsz=640, workers=4, name=Path(model.cfg).stem)

🚀 Run the script; if the output below appears, the module was added successfully 🎉

                   from  n    params  module                                       arguments
  0                  -1  1       464  ultralytics.nn.modules.conv.Conv             [3, 16, 3, 2]
  1                  -1  1      4672  ultralytics.nn.modules.conv.Conv             [16, 32, 3, 2]
  2                  -1  1      6640  ultralytics.nn.modules.block.C3k2            [32, 64, 1, False, 0.25]      
  3                  -1  1     36992  ultralytics.nn.modules.conv.Conv             [64, 64, 3, 2]
  4                  -1  1     26080  ultralytics.nn.modules.block.C3k2            [64, 128, 1, False, 0.25]     
  5                  -1  1    147712  ultralytics.nn.modules.conv.Conv             [128, 128, 3, 2]
  6                  -1  1     87040  ultralytics.nn.modules.block.C3k2            [128, 128, 1, True]
  7                  -1  1    295424  ultralytics.nn.modules.conv.Conv             [128, 256, 3, 2]
  8                  -1  1    346112  ultralytics.nn.modules.block.C3k2            [256, 256, 1, True]
  9                  -1  1    164608  ultralytics.nn.modules.block.SPPF            [256, 256, 5]
 10                  -1  1    249728  ultralytics.nn.modules.block.C2PSA           [256, 256, 1]
 11                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 12             [-1, 6]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 13                  -1  1    111296  ultralytics.nn.modules.block.C3k2            [384, 128, 1, False]
 14                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 15             [-1, 4]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 16                  -1  1     32096  ultralytics.nn.modules.block.C3k2            [256, 64, 1, False]
 17                  -1  1     36992  ultralytics.nn.modules.conv.Conv             [64, 64, 3, 2]
 18            [-1, 13]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 19                  -1  1     86720  ultralytics.nn.modules.block.C3k2            [192, 128, 1, False]
 20                  -1  1    147712  ultralytics.nn.modules.conv.Conv             [128, 128, 3, 2]
 21            [-1, 10]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 22                  -1  1    378880  ultralytics.nn.modules.block.C3k2            [384, 256, 1, True]
 23                  -1  1       512  ultralytics.nn.modules.block.SpatialGroupEnhance[256]
 24        [16, 19, 22]  1    464912  ultralytics.nn.modules.head.Detect           [80, [64, 128, 256]]
YOLO11_SpatialGroupEnhance summary: 322 layers, 2,624,592 parameters, 2,624,576 gradients, 6.6 GFLOPs

3.淇敼鍚庣殑缃戠粶缁撴瀯鍥?/h2>

[Figure: modified YOLO11 network structure with the SGE module]
4. Complete Code

This will be filled in later; for now, just follow the steps above.

5. GFLOPs

For details on how GFLOPs are computed, see the companion article: 百面算法工程师 | Convolution fundamentals (Convolution).
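In short, one common convention (counting a multiply-add as two operations) gives, for a standard convolution, FLOPs = 2 · C_in · K² · C_out · H_out · W_out. A minimal sketch, using layer 0 of the model summary above (Conv [3, 16, 3, 2], which at imgsz=640 produces a 320 × 320 output map):

```python
def conv_flops(c_in, c_out, k, h_out, w_out):
    # FLOPs of a standard convolution, counting a multiply-add as 2 ops
    # (bias ignored; stride and padding are folded into h_out/w_out).
    return 2 * c_in * k * k * c_out * h_out * w_out

# Layer 0 of the summary above: Conv [3, 16, 3, 2] on a 640x640 input
flops = conv_flops(3, 16, 3, 320, 320)
print(flops, "=", flops / 1e9, "GFLOPs")
```

Summing this over every layer (convolutions dominate) yields the model-level GFLOPs figure printed in the training summary.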

鏈敼杩涚殑YOLO11n GFLOPs

YOLO11鏀硅繘 | 娉ㄦ剰鍔涙満鍒?| 杞婚噺绾х殑绌洪棿缁勫寮烘ā鍧桽GE銆愬叏缃戠嫭瀹躲€?></p> 
<p>鏀硅繘鍚庣殑GFLOPs</p> 
<p ><img class=6. 杩涢樁

鍙互涓庡叾浠栫殑娉ㄦ剰鍔涙満鍒舵垨鑰呮崯澶卞嚱鏁扮瓑缁撳悎锛岃繘涓€姝ユ彁鍗囨娴嬫晥鏋?/p>

7. Summary

With the improvement above, we successfully boosted the model's performance. This is only a beginning; there is still plenty of room for further optimization and deeper technical exploration. Here, I would like to warmly recommend my column, 《YOLO11改进有效涨点》 (Effective YOLO11 Improvements). It focuses on cutting-edge deep learning techniques, especially the latest advances in object detection, covering not only in-depth analyses of YOLO11 and improvement strategies, but also regularly updated paper reproductions and hands-on write-ups from major conferences such as CVPR and NeurIPS.

Why subscribe to my column? — 《YOLO11改进有效涨点》

  1. Cutting-edge technical explanations: the column is not limited to YOLO-series improvements; it also covers the latest research on mainstream and emerging networks, helping you keep pace with technology trends.

  2. Detailed hands-on sharing: everything is highly practical. Every update comes with code and concrete improvement steps, so every reader can get up and running quickly.

  3. Q&A and interaction: after subscribing, you can ask me questions at any time and receive timely answers.

  4. Timely updates tracking the field: irregular releases of the latest research directions and reproduction reports from top conferences worldwide, keeping you at the technical frontier.

Who this column is for:

  • Students with a strong interest in object detection and YOLO-series networks

  • Students who want to write papers using YOLO algorithms

  • Anyone interested in YOLO algorithms

