| Upload Date | January 08 2024 09:09 PM |
|---|---|
| Views | 2 |
| System Information | |
|---|---|
| Operating System | Microsoft Windows 11 Pro (64-bit) |
| Model | Gigabyte Technology Co., Ltd. B650 AORUS ELITE AX |
| Motherboard | Gigabyte Technology Co., Ltd. B650 AORUS ELITE AX |
| Power Plan | Balanced |
| CPU Information | |
|---|---|
| Name | AMD Ryzen 9 7950X3D |
| Topology | 1 Processor, 16 Cores, 32 Threads |
| Identifier | AuthenticAMD Family 25 Model 97 Stepping 2 |
| Base Frequency | 4.20 GHz |
| Cluster 1 | 16 Cores |
| L1 Instruction Cache | 32.0 KB x 16 |
| L1 Data Cache | 32.0 KB x 16 |
| L2 Cache | 1.00 MB x 16 |
| L3 Cache | 32.0 MB x 2 |
| Memory Information | |
|---|---|
| Size | 64.00 GB |
| Inference Information | |
|---|---|
| Framework | ONNX |
| Backend | DirectML |
| Device | AMD Radeon(TM) Graphics |
| Workload | Accuracy | Score | Throughput |
|---|---|---|---|
| Image Classification (F32) | 100% | 591 | 110.7 IPS |
| Image Classification (F16) | 100% | 594 | 111.1 IPS |
| Image Classification (I8) | 100% | 559 | 104.5 IPS |
| Image Segmentation (F32) | 100% | 1191 | 19.9 IPS |
| Image Segmentation (F16) | 100% | 1190 | 19.9 IPS |
| Image Segmentation (I8) | 98% | 1025 | 17.1 IPS |
| Pose Estimation (F32) | 100% | 6571 | 7.96 IPS |
| Pose Estimation (F16) | 100% | 6567 | 7.95 IPS |
| Pose Estimation (I8) | 100% | 5920 | 7.17 IPS |
| Object Detection (F32) | 100% | 691 | 51.6 IPS |
| Object Detection (F16) | 100% | 687 | 51.3 IPS |
| Object Detection (I8) | 62% | 653 | 48.7 IPS |
| Face Detection (F32) | 100% | 1909 | 22.7 IPS |
| Face Detection (F16) | 100% | 1909 | 22.7 IPS |
| Face Detection (I8) | 89% | 1645 | 19.6 IPS |
| Depth Estimation (F32) | 100% | 3674 | 28.5 IPS |
| Depth Estimation (F16) | 100% | 3673 | 28.5 IPS |
| Depth Estimation (I8) | 94% | 3348 | 26.0 IPS |
| Style Transfer (F32) | 100% | 11100 | 14.6 IPS |
| Style Transfer (F16) | 100% | 11110 | 14.6 IPS |
| Style Transfer (I8) | 98% | 9917 | 13.0 IPS |
| Image Super-Resolution (F32) | 100% | 2845 | 101.6 IPS |
| Image Super-Resolution (F16) | 100% | 2840 | 101.4 IPS |
| Image Super-Resolution (I8) | 99% | 2207 | 78.8 IPS |
| Text Classification (F32) | 100% | 686 | 985.6 IPS |
| Text Classification (F16) | 100% | 685 | 985.0 IPS |
| Text Classification (I8) | 98% | 493 | 708.5 IPS |
| Machine Translation (F32) | 100% | 1241 | 22.8 IPS |
| Machine Translation (F16) | 100% | 1239 | 22.8 IPS |
| Machine Translation (I8) | 70% | 678 | 12.5 IPS |