Solution: Allwinner H616 enters sleep during a low-temperature reboot aging test

Background

While bringing up DDR material adaptation on the H618, the device abnormally entered sleep during a reboot aging test, and the following log was captured:

[2023-07-11,16:56:44][ 40.325238][ T1] init: Untracked pid 1888 exited with status 0
[2023-07-11,16:56:44][40.325295][T5] binder: undelivered death notification, 00000000eae863b8
[2023-07-11,16:56:44][40.332300][T1] init: Service 'vendor.bluetooth-1-0' (pid 1861) received signal 11
[2023-07-11,16:56:44][ 40.348931][ T1] init: Sending signal 9 to service 'vendor.bluetooth-1-0' (pid 1861) process group...
[2023-07-11,16:56:44][40.359943][T1] libprocessgroup: Successfully killed process cgroup uid 1002 pid 1861 in 0ms
[2023-07-11,16:56:44][ 40.370774][ T1] init: Untracked pid 1890 exited with status 0
[2023-07-11,16:56:45][ 40.835661][ T193] type=1400 audit(1685954038.968:313): avc: denied { search } for comm="pool-1-thread-1" name="com.clock.pt1.keeptesting" dev="dm-5" ino=2713 scontext=u:r:platform_app:s0:c512,c768 tcontext=u:object_r:app_data_file:s0:c512,c768 tclass=dir permissive=1 app=com.clock.pt1.keeptesting
[2023-07-11,16:56:45][ 40.865824][ T193] type=1400 audit(1685954038.968:314): avc: denied { write } for comm="pool-1-thread-1" name="shared_prefs" dev="dm-5" ino=2737 scontext=u:r:platform_app:s0:c512,c768 tcontext=u:object_r:app_data_file:s0:c512,c768 tclass=dir permissive=1 app=com.clock.pt1.keeptesting
[2023-07-11,16:56:45][ 40.895506][ T193] type=1400 audit(1685954038.968:315): avc: denied { remove_name } for comm="pool-1-thread-1" name="umeng_general_config.xml" dev="dm-5" ino=982382 scontext=u:r:platform_app:s0:c512,c768 tcontext=u:object_r:app_data_file:s0:c512,c768 tclass=dir permissive=1 app=com.clock.pt1.keeptesting
[2023-07-11,16:56:45][40.927110][T193] type=1400 audit(1685954038.968:316): avc: denied { add_name } for comm="pool-1-thread-1" name="umeng_general_config.xml.bak" scontext=u:r:platform_app:s0:c512,c768 tcontext=u:object_r:app_data_file:s0:c512,c768 tclass=dir permissive=1 app=com.clock.pt1.keeptesting
[2023-07-11,16:56:45][ 41.090281][ T214] audit: audit_lost=197 audit_rate_limit=5 audit_backlog_limit=64
[2023-07-11,16:56:45][ 41.091551][ T193] type=1400 audit(1685954039.224:317): avc: denied { read } for comm="Binder:207_2" name="event_count" dev="sysfs" ino=21828 scontext=u:r:system_suspend:s0 tcontext=u:object_r:sysfs:s0 tclass=file permissive=1
[2023-07-11,16:56:45][ 41.099158][ T214] audit: rate limit exceeded
[2023-07-11,16:56:45][41.176369][T164] init: Received sys.powerctl='reboot,' from pid: 490 (system_server)
[2023-07-11,16:56:45][41.185428][T164] init: sys.powerctl: do_shutdown: 0 IsShuttingDown: 0
[2023-07-11,16:56:46][41.489695][T142] disp_runtime_idle
[2023-07-11,16:56:46][41.494260][T142] disp_runtime_suspend
[2023-07-11,16:56:46][ 41.509803][ T490] binder: 490:490 transaction failed 29189/-22, size 116-0 line 2714
[2023-07-11,16:56:46][41.549310][T662] binder_alloc: 288: binder_alloc_buf, no vma
[2023-07-11,16:56:46][ 41.556327][ T662] binder: 662:662 transaction failed 29189/-3, size 88-0 line 2904
[2023-07-11,16:56:46][41.577077][T43] binder: release 288:316 transaction 28975 in, still active
[2023-07-11,16:56:46][ 41.585214][ T43] binder: send failed reply for transaction 28975 to 662:901
[2023-07-11,16:56:46][41.662344][T142] disp_runtime_suspend finish
[2023-07-11,16:56:46]Gatekeeper_TA_DestroyEntryPoint
[2023-07-11,16:56:46][41.870414][T1] libprocessgroup: Successfully killed process cgroup uid 0 pid 246 in 164ms
[2023-07-11,16:56:46][ 42.095552][ T1] init: Sending signal 9 to service 'netd' (pid 245) process group...
[2023-07-11,16:56:46][42.110584][T1] libprocessgroup: Successfully killed process cgroup uid 0 pid 245 in 5ms
[2023-07-11,16:56:46][ 42.120586][ T1] init: Sending signal 9 to service 'statsd' (pid 244) process group...
[2023-07-11,16:56:46][42.135427][T1] libprocessgroup: Successfully killed process cgroup uid 1066 pid 244 in 5ms
[2023-07-11,16:56:46][ 42.145660][ T1] init: Sending signal 9 to service 'optee' (pid 211) process group...
[2023-07-11,16:56:46][42.160205][T1] libprocessgroup: Successfully killed process cgroup uid 0 pid 211 in 5ms
[2023-07-11,16:56:46][ 42.170237][ T1] init: Sending signal 9 to service 'vendor.keymint-default' (pid 210) process group...
[2023-07-11,16:56:46][42.186469][T1] libprocessgroup: Successfully killed process cgroup uid 9999 pid 210 in 5ms
[2023-07-11,16:56:46][ 42.196962][ T1] init: Sending signal 9 to service 'vendor.boot-hal-1-2' (pid 209) process group...
[2023-07-11,16:56:46][ 42.201237][ T469] binder: undelivered death notification, 00000000f39a1bc8
[2023-07-11,16:56:46][ 42.201864][ T1935] PM: suspend entry (deep)
[2023-07-11,16:56:46][42.233610][T1935] Filesystems sync: 0.031 seconds
[2023-07-11,16:56:47][ 42.438211][ T1935] Freezing user space processes ... (elapsed 0.001 seconds) done.
[2023-07-11,16:56:47][ 42.448235][ T1935] OOM killer disabled.
[2023-07-11,16:56:47][ 42.452739][ T1935] Freezing remaining freezable tasks ... (elapsed 0.001 seconds) done.

Symptom

During the reboot aging test, the reboot process was interrupted and the device entered standby (suspend) instead.

Problem analysis

Conclusion up front: auto-suspend (sleep) should be disabled while running reboot aging tests.
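One way to keep suspend from completing during the test is to hold a kernel wakelock for the test's duration. This is a minimal sketch assuming an Android kernel that exposes the wakelock sysfs interface (`/sys/power/wake_lock` / `/sys/power/wake_unlock`); the lock name `aging_test` is arbitrary and chosen for illustration.

```shell
#!/bin/sh
# Hold a named wakelock so userspace-initiated suspend cannot complete
# while the reboot aging test runs.

hold_wakelock() {
    # Writable only on a device kernel with wakelock support; report
    # "skipped" when run off-device so the sketch is safe to dry-run.
    if [ -w /sys/power/wake_lock ]; then
        echo "$1" > /sys/power/wake_lock
        echo "held"
    else
        echo "skipped"
    fi
}

RESULT=$(hold_wakelock aging_test)
echo "wakelock 'aging_test': $RESULT"
```

Release it afterwards with `echo aging_test > /sys/power/wake_unlock`. On Android, keeping the screen on (for example via `svc power stayon true` from a shell) can achieve a similar effect by preventing the screen-off path from triggering autosuspend.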

Judging from the kernel log, the Android layer initiated the reboot first, and the kernel received the reboot request:

[2023-07-11,16:56:45][ 41.176369][ T164] init: Received sys.powerctl='reboot,' from pid: 490 (system_server)
[2023-07-11,16:56:45][41.185428][T164] init: sys.powerctl: do_shutdown: 0 IsShuttingDown: 0

At the same time, the Android log shows that the Android layer also performed the screen-off action.

So during the reboot, Android also initiated a suspend, which interrupted the reboot and completed successfully (the `PM: suspend entry (deep)` line above). It is generally recommended to disable suspend when running reboot aging tests. A similar issue has been observed before.
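The failure signature in the log can be checked mechanically: a `PM: suspend entry` line appearing after the `Received sys.powerctl` reboot request means suspend raced with the reboot. This sketch reproduces the excerpt above as a sample file; in practice, point `LOG` at a real kernel log capture instead.

```shell
#!/bin/sh
# Detect the "suspend entered after reboot request" signature in a
# kernel log by comparing the line numbers of the two markers.

LOG=sample_kernel.log
cat > "$LOG" <<'EOF'
[41.176369][T164] init: Received sys.powerctl='reboot,' from pid: 490 (system_server)
[41.185428][T164] init: sys.powerctl: do_shutdown: 0 IsShuttingDown: 0
[42.201864][T1935] PM: suspend entry (deep)
EOF

# First occurrence of each marker (empty if absent).
reboot_ln=$(grep -n "Received sys.powerctl" "$LOG" | head -n1 | cut -d: -f1)
suspend_ln=$(grep -n "PM: suspend entry" "$LOG" | head -n1 | cut -d: -f1)

if [ -n "$reboot_ln" ] && [ -n "$suspend_ln" ] && [ "$suspend_ln" -gt "$reboot_ln" ]; then
    VERDICT="suspend-during-reboot"
else
    VERDICT="clean"
fi
echo "verdict: $VERDICT"
```

On the sample excerpt this prints `verdict: suspend-during-reboot`, matching the behavior analyzed above.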

Original post link: https://bbs.aw-ol.com/topic/4320/
To obtain resources and discuss issues, visit the Allwinner online developer community: https://www.aw-ol.com
For the latest news about Allwinner and its developers, follow the Allwinner Online WeChat official account.