Will the lima driver be used with kernel 5.1?
Early community images for H3, H6 and A64
-
jernej -
September 13, 2018 at 8:00 PM -
Closed -
Thread is Unresolved
-
-
- Official Post
What do you mean by "used"? Lima definitely won't be included in Linux 5.1. I'm also not sure I want to fiddle with patches, since it will probably be in 5.2, which is just ~4 months away, and I'd rather spend time on HW decoding.
-
Which image exactly? I rebuilt all addons to be compatible with the latest update (a few days back).
Besides, I didn't test any addon and I don't use Docker at all (on LibreELEC or PC), so I can't really help you with that.
I have your latest update.
Error when I try to run Docker manually:
Code
/storage/.kodi/addons/service.system.docker/bin/dockerd
WARN[2019-03-07T16:57:59.527796432Z] could not change group /var/run/docker.sock to docker: group docker not found
INFO[2019-03-07T16:57:59.534824051Z] libcontainerd: started new containerd process pid=4071
INFO[2019-03-07T16:57:59.535682598Z] parsed scheme: "unix" module=grpc
INFO[2019-03-07T16:57:59.536138371Z] scheme "unix" not registered, fallback to default scheme module=grpc
INFO[2019-03-07T16:57:59.537474189Z] ccResolverWrapper: sending new addresses to cc: [{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] module=grpc
INFO[2019-03-07T16:57:59.538004083Z] ClientConn switching balancer to "pick_first" module=grpc
INFO[2019-03-07T16:57:59.538688054Z] pickfirstBalancer: HandleSubConnStateChange: 0x2c9e170, CONNECTING module=grpc
INFO[2019-03-07T16:57:59.643492328Z] starting containerd revision=1.2.2 version=v1.2.2
INFO[2019-03-07T16:57:59.646860144Z] loading plugin "io.containerd.content.v1.content"... type=io.containerd.content.v1
INFO[2019-03-07T16:57:59.647483617Z] loading plugin "io.containerd.snapshotter.v1.aufs"... type=io.containerd.snapshotter.v1
WARN[2019-03-07T16:57:59.655669812Z] failed to load plugin io.containerd.snapshotter.v1.aufs error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/5.0.0-rc8\n": exit status 1"
INFO[2019-03-07T16:57:59.656325326Z] loading plugin "io.containerd.snapshotter.v1.native"... type=io.containerd.snapshotter.v1
INFO[2019-03-07T16:57:59.656867553Z] loading plugin "io.containerd.snapshotter.v1.overlayfs"... type=io.containerd.snapshotter.v1
INFO[2019-03-07T16:57:59.657837387Z] loading plugin "io.containerd.snapshotter.v1.zfs"... type=io.containerd.snapshotter.v1
WARN[2019-03-07T16:57:59.659022629Z] failed to load plugin io.containerd.snapshotter.v1.zfs error="path /storage/.kodi/userdata/addon_data/service.system.docker/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
INFO[2019-03-07T16:57:59.659428487Z] loading plugin "io.containerd.metadata.v1.bolt"... type=io.containerd.metadata.v1
WARN[2019-03-07T16:57:59.659756556Z] could not use snapshotter zfs in metadata plugin error="path /storage/.kodi/userdata/addon_data/service.system.docker/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
WARN[2019-03-07T16:57:59.660011629Z] could not use snapshotter aufs in metadata plugin error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/5.0.0-rc8\n": exit status 1"
INFO[2019-03-07T16:57:59.662177704Z] loading plugin "io.containerd.differ.v1.walking"... type=io.containerd.differ.v1
INFO[2019-03-07T16:57:59.662590061Z] loading plugin "io.containerd.gc.v1.scheduler"... type=io.containerd.gc.v1
INFO[2019-03-07T16:57:59.663023751Z] loading plugin "io.containerd.service.v1.containers-service"... type=io.containerd.service.v1
INFO[2019-03-07T16:57:59.663412443Z] loading plugin "io.containerd.service.v1.content-service"... type=io.containerd.service.v1
INFO[2019-03-07T16:57:59.663751887Z] loading plugin "io.containerd.service.v1.diff-service"... type=io.containerd.service.v1
INFO[2019-03-07T16:57:59.664027459Z] loading plugin "io.containerd.service.v1.images-service"... type=io.containerd.service.v1
INFO[2019-03-07T16:57:59.664285323Z] loading plugin "io.containerd.service.v1.leases-service"... type=io.containerd.service.v1
INFO[2019-03-07T16:57:59.664539395Z] loading plugin "io.containerd.service.v1.namespaces-service"... type=io.containerd.service.v1
INFO[2019-03-07T16:57:59.664853091Z] loading plugin "io.containerd.service.v1.snapshots-service"... type=io.containerd.service.v1
INFO[2019-03-07T16:57:59.665133620Z] loading plugin "io.containerd.runtime.v1.linux"... type=io.containerd.runtime.v1
INFO[2019-03-07T16:57:59.666492063Z] loading plugin "io.containerd.runtime.v2.task"... type=io.containerd.runtime.v2
INFO[2019-03-07T16:57:59.667345193Z] loading plugin "io.containerd.monitor.v1.cgroups"... type=io.containerd.monitor.v1
INFO[2019-03-07T16:57:59.671485268Z] loading plugin "io.containerd.service.v1.tasks-service"... type=io.containerd.service.v1
INFO[2019-03-07T16:57:59.671841503Z] loading plugin "io.containerd.internal.v1.restart"... type=io.containerd.internal.v1
INFO[2019-03-07T16:57:59.672242570Z] loading plugin "io.containerd.grpc.v1.containers"... type=io.containerd.grpc.v1
INFO[2019-03-07T16:57:59.672426937Z] loading plugin "io.containerd.grpc.v1.content"... type=io.containerd.grpc.v1
INFO[2019-03-07T16:57:59.672584013Z] loading plugin "io.containerd.grpc.v1.diff"... type=io.containerd.grpc.v1
INFO[2019-03-07T16:57:59.672728091Z] loading plugin "io.containerd.grpc.v1.events"... type=io.containerd.grpc.v1
INFO[2019-03-07T16:57:59.672869668Z] loading plugin "io.containerd.grpc.v1.healthcheck"... type=io.containerd.grpc.v1
INFO[2019-03-07T16:57:59.673015578Z] loading plugin "io.containerd.grpc.v1.images"... type=io.containerd.grpc.v1
INFO[2019-03-07T16:57:59.673151864Z] loading plugin "io.containerd.grpc.v1.leases"... type=io.containerd.grpc.v1
INFO[2019-03-07T16:57:59.673285650Z] loading plugin "io.containerd.grpc.v1.namespaces"... type=io.containerd.grpc.v1
INFO[2019-03-07T16:57:59.673420186Z] loading plugin "io.containerd.internal.v1.opt"... type=io.containerd.internal.v1
WARN[2019-03-07T16:57:59.674189195Z] failed to load plugin io.containerd.internal.v1.opt error="mkdir /opt: read-only file system"
INFO[2019-03-07T16:57:59.674329023Z] loading plugin "io.containerd.grpc.v1.snapshots"... type=io.containerd.grpc.v1
INFO[2019-03-07T16:57:59.674496099Z] loading plugin "io.containerd.grpc.v1.tasks"... type=io.containerd.grpc.v1
INFO[2019-03-07T16:57:59.674638801Z] loading plugin "io.containerd.grpc.v1.version"... type=io.containerd.grpc.v1
INFO[2019-03-07T16:57:59.674799211Z] loading plugin "io.containerd.grpc.v1.introspection"... type=io.containerd.grpc.v1
INFO[2019-03-07T16:57:59.675851250Z] serving... address="/var/run/docker/containerd/containerd-debug.sock"
INFO[2019-03-07T16:57:59.676245650Z] serving... address="/var/run/docker/containerd/containerd.sock"
INFO[2019-03-07T16:57:59.676469682Z] containerd successfully booted in 0.036411s
INFO[2019-03-07T16:57:59.677092114Z] pickfirstBalancer: HandleSubConnStateChange: 0x2c9e170, READY module=grpc
INFO[2019-03-07T16:57:59.737720715Z] parsed scheme: "unix" module=grpc
INFO[2019-03-07T16:57:59.737903124Z] scheme "unix" not registered, fallback to default scheme module=grpc
INFO[2019-03-07T16:57:59.738260568Z] parsed scheme: "unix" module=grpc
INFO[2019-03-07T16:57:59.738377646Z] scheme "unix" not registered, fallback to default scheme module=grpc
INFO[2019-03-07T16:57:59.739084491Z] ccResolverWrapper: sending new addresses to cc: [{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] module=grpc
INFO[2019-03-07T16:57:59.739327147Z] ClientConn switching balancer to "pick_first" module=grpc
INFO[2019-03-07T16:57:59.739678258Z] blockingPicker: the picked transport is not ready, loop back to repick module=grpc
INFO[2019-03-07T16:57:59.739678216Z] pickfirstBalancer: HandleSubConnStateChange: 0x30d18f0, CONNECTING module=grpc
INFO[2019-03-07T16:57:59.741968161Z] ccResolverWrapper: sending new addresses to cc: [{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] module=grpc
INFO[2019-03-07T16:57:59.742329645Z] ClientConn switching balancer to "pick_first" module=grpc
INFO[2019-03-07T16:57:59.742791001Z] pickfirstBalancer: HandleSubConnStateChange: 0x30d18f0, READY module=grpc
INFO[2019-03-07T16:57:59.742792459Z] pickfirstBalancer: HandleSubConnStateChange: 0x2d2e3a0, CONNECTING module=grpc
INFO[2019-03-07T16:57:59.746172733Z] pickfirstBalancer: HandleSubConnStateChange: 0x2d2e3a0, READY module=grpc
INFO[2019-03-07T16:57:59.751938114Z] [graphdriver] using prior storage driver: overlay2
INFO[2019-03-07T16:58:00.625658849Z] Graph migration to content-addressability took 0.00 seconds
WARN[2019-03-07T16:58:00.626578810Z] Your kernel does not support cgroup memory limit
WARN[2019-03-07T16:58:00.626693181Z] Unable to find cpu cgroup in mounts
WARN[2019-03-07T16:58:00.626848591Z] Unable to find blkio cgroup in mounts
WARN[2019-03-07T16:58:00.626921296Z] Unable to find cpuset cgroup in mounts
WARN[2019-03-07T16:58:00.627221033Z] mountpoint for pids not found
INFO[2019-03-07T16:58:00.629451647Z] stopping healthcheck following graceful shutdown module=libcontainerd
INFO[2019-03-07T16:58:00.631820880Z] pickfirstBalancer: HandleSubConnStateChange: 0x30d18f0, TRANSIENT_FAILURE module=grpc
INFO[2019-03-07T16:58:00.632099785Z] pickfirstBalancer: HandleSubConnStateChange: 0x30d18f0, CONNECTING module=grpc
INFO[2019-03-07T16:58:00.632445604Z] stopping event stream following graceful shutdown error="context canceled" module=libcontainerd namespace=plugins.moby
Error starting daemon: Devices cgroup isn't mounted
-
Thank you for the answer, but that approach didn't work until I updated the stock firmware. =)
After the update, everything works OK!
Thank you!
-
What do you mean by "used"? Lima definitely won't be included in Linux 5.1. I'm also not sure I want to fiddle with patches, since it will probably be in 5.2, which is just ~4 months away, and I'd rather spend time on HW decoding.
Just a question; I read on Phoronix that Lima was on the merge list, but it seems it's slated for 5.2.
-
Hi Jernej! My English isn't very good.
Will there be a version for the H5 (Orange Pi PC2)?
-
Has anybody tried the latest versions on an Orange Pi One?
It seems to be very unstable, probably related to the low memory (512 MB)?
I read some comments earlier about memory problems with 512 MB.
I can confirm that it works almost stably with a lightweight skin like Eminence, and even better with the old default skin "Confluence".
Hardware rendering is working pretty well now... it just gets stuck randomly at some point (after 5 or 10 minutes, say); after a few moments the sound comes back, but the video stays frozen.
I am using an Orange Pi One.
-
- Official Post
Will there be a version for the H5 (Orange Pi PC2)?
Check the topic about the H5 issue. In short: not until someone solves it.
-
jernej how can I have dynamic partitions? I see in the config that
CONFIG_CMDLINE="root=/dev/ram0 rdinit=/init"
but then it fails with
Code
[ 5.494752] VFS: Cannot open root device "ram0" or unknown-block(1,0): error -2
[ 5.508541] Please append a correct "root=" boot option; here are the available partitions:
[ 5.523302] 0100 4096 ram0
[ 5.523307] (driver?)
[ 5.541918] 0101 4096 ram1
[ 5.541922] (driver?)
[ 5.560341] 0102 4096 ram2
[ 5.560344] (driver?)
[ 5.578479] 0103 4096 ram3
[ 5.578481] (driver?)
[ 5.596412] 0104 4096 ram4
[ 5.596414] (driver?)
[ 5.614186] 0105 4096 ram5
[ 5.614189] (driver?)
[ 5.631956] 0106 4096 ram6
[ 5.631960] (driver?)
[ 5.649566] 0107 4096 ram7
[ 5.649569] (driver?)
[ 5.666967] 0108 4096 ram8
[ 5.666970] (driver?)
[ 5.684096] 0109 4096 ram9
[ 5.684099] (driver?)
[ 5.701093] 010a 4096 ram10
[ 5.701095] (driver?)
[ 5.718087] 010b 4096 ram11
[ 5.718090] (driver?)
[ 5.734929] 010c 4096 ram12
[ 5.734931] (driver?)
[ 5.751460] 010d 4096 ram13
[ 5.751462] (driver?)
[ 5.767700] 010e 4096 ram14
[ 5.767703] (driver?)
[ 5.783731] 010f 4096 ram15
[ 5.783735] (driver?)
[ 5.799664] b300 3872256 mmcblk0
[ 5.799667] driver: mmcblk
[ 5.816122] b301 524288 mmcblk0p1 f183f041-01
[ 5.816125]
[ 5.832472] b302 32768 mmcblk0p2 f183f041-02
[ 5.832474]
[ 5.848631] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(1,0)
[ 5.861656] CPU1: stopping
[ 5.869091] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.0.0 #1
[ 5.879740] Hardware name: Allwinner sun7i (A20) Family
but the u-boot image is booting with
Code
Kernel command line: boot=UUID=0803-5609 disk=UUID=7faabc68-44a0-446a-a69b-dcdcbbf4aaf3 console=ttyS0,115200 console=tty1 root=/dev/ram0 rdinit=/init
Can this be set outside of the generic kernel config, or per device type, so I won't have to change it in the generic one?
Regards
-
- Official Post
lucize You can change kernel arguments in the extlinux/extlinux.conf file on the FAT32 partition. I hope that helps.
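For reference, a minimal extlinux.conf looks roughly like this; the label, kernel/DTB paths and APPEND contents below are only an example (the APPEND is taken from the command line you posted), so check the actual file on your boot partition:
Code
LABEL LibreELEC
  LINUX /KERNEL
  FDT /<your-board>.dtb
  APPEND boot=UUID=0803-5609 disk=UUID=7faabc68-44a0-446a-a69b-dcdcbbf4aaf3 console=ttyS0,115200 console=tty1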
-
I think CONFIG_CMDLINE_FROM_BOOTLOADER is needed for that.
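For context, on 32-bit ARM this is a Kconfig choice between three modes; a sketch of the relevant config fragment, assuming a mainline arm kernel (verify the symbols against your tree):
Code
CONFIG_CMDLINE="root=/dev/ram0 rdinit=/init"
# Use the bootloader-provided arguments; fall back to CONFIG_CMDLINE only if none are passed:
CONFIG_CMDLINE_FROM_BOOTLOADER=y
# The other two choice options (mutually exclusive with the above):
# CONFIG_CMDLINE_EXTEND is not set
# CONFIG_CMDLINE_FORCE is not set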
-
- Official Post
No, just change the "APPEND" line to anything you want. That's where I change kernel boot parameters when I need to.
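For example, to boot straight from the second MMC partition instead of the initramfs, the APPEND line could be changed to something like this (untested; the partition and extra options are assumptions for illustration):
Code
APPEND root=/dev/mmcblk0p2 rootwait console=ttyS0,115200 console=tty1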
-
Yes, I tried that, but neither worked (the APPEND change or the added kernel config option).
Maybe if root= is not set in the kernel config it would take the u-boot root setting; root= from the kernel config seems to be appended to root= from u-boot when both are present.
But even if I change the config to /dev/mmcblk0p1 or /dev/mmcblk0p2 it still fails; maybe the image is not created correctly?
I have the FAT32 partition 1, and partition 2 is a 32 MB one.
-
I also have a Beelink GS1.
It seems really close to the Orange Pi in specs.
Unless you REALLY want to support it, I would NOT recommend buying it. It's already been abandoned by Beelink.
Their own forums only get answered about once every 3 months.
But if you guys DO pull off a miracle, I would LOVE to get off the anemic version of android it comes with.
Specs are in the link.
Gearbest Beelink GS1 6K -
Hi there, I'm going to try it on the Orange Pi PC; let's see if we can get good Kodi 18 performance on the Orange Pi!
-
New update files are uploaded for A64 and H3, along with an updated addon repository (needed due to the Kodi version bump).
What's new:
- Kodi 18.1
- Linux 5.0-rc8
- 4K H264 decoding supported
- fixed 4K downscaling (watching 4K videos on 1080p screen)
- minor decoding speed improvements
- H264 memory optimizations (*might* help for OPi Lite & One, but I'm not sure)
- HDMI CEC fix
Known issues:
- some H264 and HEVC videos are still not correctly decoded, investigation is ongoing.
- Kodi screen calibration is not stored (fix in Kodi 18.2)
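To update an existing install (Orange Pi PC included), the usual LibreELEC manual update should work; a sketch, assuming the standard /storage/.update mechanism, with placeholders for the actual tar name and the box's IP:
Code
# from your PC: copy the update tar into the box's update folder, then reboot
scp LibreELEC-H3.arm-9.1-devel-<build>.tar root@<box-ip>:/storage/.update/
ssh root@<box-ip> reboot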
-
How can I update an Orange Pi PC?
-
How can I update an Orange Pi PC?
-
I tried to build an updated image myself, but the build fails:
Code
A local copy of the docbook.xsl wasn't found on your system consider installing package like docbook-xsl
A local copy of the docbook.xsl wasn't found on your system consider installing package like docbook-xsl
Traceback (most recent call last):
  File "./buildtools/bin/waf", line 76, in
    Scripting.prepare(t, cwd, VERSION, wafdir)
  File "/home/dima/src/LibreELEC.tv/build.LibreELEC-H3.arm-9.1-devel/samba-4.9.4/third_party/waf/wafadmin/Scripting.py", line 145, in prepare
    prepare_impl(t, cwd, ver, wafdir)
  File "/home/dima/src/LibreELEC.tv/build.LibreELEC-H3.arm-9.1-devel/samba-4.9.4/third_party/waf/wafadmin/Scripting.py", line 135, in prepare_impl
    main()
  File "/home/dima/src/LibreELEC.tv/build.LibreELEC-H3.arm-9.1-devel/samba-4.9.4/wscript", line 450, in main
    wildcard_main(wildcard_cmd)
  File "./buildtools/wafsamba/samba_wildcard.py", line 84, in wildcard_main
    rewrite_compile_targets()
  File "./buildtools/wafsamba/samba_wildcard.py", line 56, in rewrite_compile_targets
    bld = fake_build_environment(info=False)
  File "./buildtools/wafsamba/samba_wildcard.py", line 141, in fake_build_environment
    bld.load_dirs(proj[SRCDIR], proj[BLDDIR])
  File "/home/dima/src/LibreELEC.tv/build.LibreELEC-H3.arm-9.1-devel/samba-4.9.4/third_party/waf/wafadmin/Build.py", line 427, in load_dirs
    self.load()
  File "/home/dima/src/LibreELEC.tv/build.LibreELEC-H3.arm-9.1-devel/samba-4.9.4/third_party/waf/wafadmin/Build.py", line 176, in load
    if f: data = cPickle.load(f)
EOFError
FAILURE: scripts/install samba has failed!
[140/237] [FAIL] install samba
The following logs for this failure are available:
  stdout: /home/dima/src/LibreELEC.tv/build.LibreELEC-H3.arm-9.1-devel/.threads/logs/134/stdout
  stderr: /home/dima/src/LibreELEC.tv/build.LibreELEC-H3.arm-9.1-devel/.threads/logs/134/stderr
parallel: This job failed:
package_worker 2 134 237 'install samba'
FAILURE: samba:target.build.failed exists, a previous dependency process has failed (seq: 134)
The following logs for this failure are available:
  stdout: /home/dima/src/LibreELEC.tv/build.LibreELEC-H3.arm-9.1-devel/.threads/logs/134/stdout
  stderr: /home/dima/src/LibreELEC.tv/build.LibreELEC-H3.arm-9.1-devel/.threads/logs/134/stderr
FAILURE: scripts/install network has failed!
[141/237] [FAIL] install network
The following logs for this failure are available:
  stdout: /home/dima/src/LibreELEC.tv/build.LibreELEC-H3.arm-9.1-devel/.threads/logs/141/stdout
  stderr: /home/dima/src/LibreELEC.tv/build.LibreELEC-H3.arm-9.1-devel/.threads/logs/141/stderr
parallel: This job failed:
package_worker 1 141 237 'install network'
Parallel build failure - see log for details. Time of failure: Sun Mar 10 19:22:26 +04 2019
Makefile:12: recipe for target 'image' failed
make: *** [image] Error 1
???
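(If anyone hits the same waf EOFError: a likely recovery, assuming a Debian/Ubuntu build host and a stock LibreELEC.tv checkout, is to install the docbook tools the log warns about, wipe the interrupted samba build state, and rebuild; the exact package names are an assumption.)
Code
sudo apt-get install docbook-xsl docbook-xml
# clear the half-built samba package, then rebuild the image
PROJECT=Allwinner DEVICE=H3 ARCH=arm ./scripts/clean samba
PROJECT=Allwinner DEVICE=H3 ARCH=arm make image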
-