Result Information | |
---|---|
Upload Date | July 27, 2024, 12:07 AM
Views | 2
System Information | |
---|---|
Operating System | Android 14 |
Model | HONOR ELP-NX9 |
Model ID | HONOR ELP-NX9 |
Motherboard | ELP |
Governor | walt |
CPU Information | |
---|---|
Name | ARM ARMv8 |
Topology | 1 Processor, 8 Cores |
Identifier | ARM implementer 65 architecture 8 variant 0 part 3458 revision 1 |
Base Frequency | 2.02 GHz |
Cluster 1 | 3 Cores @ 2.02 GHz |
Cluster 2 | 4 Cores @ 2.80 GHz |
Cluster 3 | 1 Core @ 3.01 GHz |
Memory Information | |
---|---|
Size | 11.04 GB |
Inference Information | |
---|---|
Framework | TensorFlow Lite |
Backend | CPU |
Device | Default |
Workload | Accuracy | Score | Throughput
---|---|---|---|
Image Classification (F32) | 100% | 431 | 80.7 IPS
Image Classification (F16) | 100% | 430 | 80.5 IPS
Image Classification (I8) | 97% | 717 | 134.1 IPS
Image Segmentation (F32) | 100% | 532 | 8.88 IPS
Image Segmentation (F16) | 100% | 529 | 8.83 IPS
Image Segmentation (I8) | 98% | 567 | 9.47 IPS
Pose Estimation (F32) | 100% | 980 | 1.19 IPS
Pose Estimation (F16) | 100% | 912 | 1.10 IPS
Pose Estimation (I8) | 100% | 4232 | 5.12 IPS
Object Detection (F32) | 100% | 463 | 34.6 IPS
Object Detection (F16) | 100% | 462 | 34.5 IPS
Object Detection (I8) | 61% | 868 | 64.8 IPS
Face Detection (F32) | 100% | 1059 | 12.6 IPS
Face Detection (F16) | 100% | 1039 | 12.4 IPS
Face Detection (I8) | 86% | 1889 | 22.5 IPS
Depth Estimation (F32) | 100% | 1009 | 7.82 IPS
Depth Estimation (F16) | 100% | 986 | 7.65 IPS
Depth Estimation (I8) | 95% | 1915 | 14.9 IPS
Style Transfer (F32) | 100% | 1751 | 2.30 IPS
Style Transfer (F16) | 100% | 1698 | 2.23 IPS
Style Transfer (I8) | 98% | 3793 | 4.99 IPS
Image Super-Resolution (F32) | 100% | 614 | 21.9 IPS
Image Super-Resolution (F16) | 100% | 627 | 22.4 IPS
Image Super-Resolution (I8) | 98% | 1662 | 59.4 IPS
Text Classification (F32) | 100% | 484 | 694.9 IPS
Text Classification (F16) | 100% | 489 | 703.3 IPS
Text Classification (I8) | 92% | 358 | 515.1 IPS
Machine Translation (F32) | 100% | 1000 | 18.4 IPS
Machine Translation (F16) | 100% | 968 | 17.8 IPS
Machine Translation (I8) | 64% | 447 | 8.23 IPS