Compare commits


116 Commits

Author SHA1 Message Date
George Talusan
440926ef73 onvif: clear binaryTimeout on destroy (#1974) 2026-02-03 14:14:57 -08:00
Roman Sokolov
3d1d3727dc hikvision-doorbell: fixes (#1970)
* Let's try to fix the plugin freezing

* hikvision-doorbell version up after merging from main
2026-01-24 08:18:59 -08:00
Koushik Dutta
079878b663 core: allow PATH in terminal service 2026-01-21 16:21:15 -08:00
Koushik Dutta
0d02ea8f08 core: support cwd in terminalservice 2026-01-21 15:15:50 -08:00
Koushik Dutta
f23ad06eef snapshot: verify acls
2026-01-19 22:16:44 -08:00
Koushik Dutta
3c8b513c31 sdk: update 2026-01-19 21:34:46 -08:00
Koushik Dutta
35df17334c sdk: AccessControls 2026-01-19 21:21:12 -08:00
Koushik Dutta
2fff8b0044 predict: add segmentation models to onnx/coreml and refactor openvino 2026-01-18 13:58:28 -08:00
Koushik Dutta
f415e4f2e1 rebroadcast: publish beta with native rtmp support 2026-01-18 12:59:31 -08:00
Koushik Dutta
9607bcddcf core: publish 2026-01-17 12:24:48 -08:00
Koushik Dutta
1c7f16ed9f openvino: fix single segmentation shape crash 2026-01-17 12:24:44 -08:00
Koushik Dutta
961cb36a97 openvino: wip segmentation 2026-01-17 12:16:55 -08:00
Raman Gupta
a4d28791ed server: python rpc should use create_task instead of run_coroutine_threadsafe (#1953)
run_coroutine_threadsafe is designed for scheduling coroutines from a
different thread onto the event loop. Since readLoop is already running
as an async function on the event loop, using create_task is the correct
and more efficient approach.

This removes unnecessary thread-safe queue overhead for every RPC message.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-15 09:34:44 -08:00
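A minimal sketch of the distinction described in the commit above (hypothetical names, not the server's actual RPC code): create_task schedules a coroutine from code already running on the event loop, while run_coroutine_threadsafe is only for callers on a different thread and pays for a thread-safe handoff on every message.

```python
import asyncio

async def handle_message(msg: str) -> None:
    print('handled', msg)

async def read_loop(queue: asyncio.Queue) -> None:
    # read_loop already runs on the event loop, so create_task is the
    # correct way to schedule handlers: no cross-thread handoff needed.
    while (msg := await queue.get()) is not None:
        asyncio.create_task(handle_message(msg))

def submit_from_other_thread(loop: asyncio.AbstractEventLoop, msg: str) -> None:
    # run_coroutine_threadsafe is only needed when the caller is NOT on
    # the event loop thread; it incurs thread-safe queue overhead.
    asyncio.run_coroutine_threadsafe(handle_message(msg), loop)

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    reader = asyncio.create_task(read_loop(queue))
    await queue.put('rpc-message')
    await queue.put(None)  # sentinel: stop the read loop
    await reader

asyncio.run(main())
```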
Koushik Dutta
c1895df062 videoanalysis: fixup detection set 2026-01-12 15:10:36 -08:00
Koushik Dutta
bb902467eb videoanalysis: improve logging 2026-01-12 10:06:35 -08:00
Koushik Dutta
7202e99ab0 detect: publish betas 2026-01-10 15:30:07 -08:00
Koushik Dutta
38bac58fc6 openvino: new model, use huggingface as model source 2026-01-10 15:02:20 -08:00
Koushik Dutta
af8abb6072 rebroadcast: publish rtmp support beta 2026-01-09 12:45:48 -08:00
Koushik Dutta
7ef868e42d rebroadcast: rtmp window acks 2026-01-09 12:35:11 -08:00
Koushik Dutta
0185680791 rebroadcast: remove some bit shifting in favor of read/write uintbe 2026-01-08 21:35:04 -08:00
Koushik Dutta
1349bb7433 rebroadcast: slop rtmp implementation 2026-01-08 21:29:05 -08:00
Koushik Dutta
85074aaa7a detect: publish betas 2026-01-08 09:37:00 -08:00
Koushik Dutta
beb7ec60ba detect: publish betas 2026-01-08 09:13:42 -08:00
Koushik Dutta
126c96904b amcrest: publish 2026-01-08 08:31:57 -08:00
Koushik Dutta
70b7b4fa98 coreml: publish beta 2026-01-08 08:30:48 -08:00
Koushik Dutta
2cd73b5a6a openvino: new test model 2026-01-07 10:16:05 -08:00
Koushik Dutta
d6f13c7128 openvino: migrate to hugging face, remove old models. 2026-01-06 16:58:22 -08:00
Koushik Dutta
df1b389ef2 diagnostics: relax person detect for new models 2026-01-06 15:12:19 -08:00
Koushik Dutta
976204c439 coreml: change default model 2026-01-06 15:11:46 -08:00
Joey Stout
1adee0beb8 tuya: bump the tuya plugin and fix for devices (#1963)
* replace tool to use `ffmpeg` and bump v0.0.8

* format code

* wip

* wip: update components

* wip: remove websocket for cameras since they are not supported

* wip: allow changing between different login methods

If there is no previous `userId`, it will prefer logging in with the `Tuya (Smart Life) App`; otherwise it falls back to the `Tuya Developer Account` (see the sketch after this entry).

* wip: fetch rtsp from Tuya Sharing SDK

* wip

* feat: add support for light accessory in camera

* fix: resolve indicator not updating

* wip: prevent setting motion if device has no motion detection

* improve mqtt reconnect, also update status

* bump version

* update commit

* bump to beta 3

* quick fix

* changelog

* fix changelog

* bump version

* fix: resolve mqtt connection issues

* chore: bump version

* fix: use correct property for checking connection state

* chore: update changelog

* chore: bump version

* fix: ensure timeout is actually correct and bound correctly

* chore: update changelog

* bump version

* fix: fix setTimeout undefined function

* chore: update changelog

* fix: fix issue with camera not found

---------

Co-authored-by: ErrorErrorError <16653389+ErrorErrorError@users.noreply.github.com>
Co-authored-by: Erik Bautista Santibanez <erikbautista15@gmail.com>
2026-01-05 11:27:17 -08:00
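A minimal sketch of the login preference noted in the entry above (hypothetical function, not the plugin's actual code):

```python
from typing import Optional

def pick_login_method(previous_user_id: Optional[str]) -> str:
    # Prefer the consumer app login when no prior userId is stored;
    # otherwise fall back to the developer account session.
    if not previous_user_id:
        return 'Tuya (Smart Life) App'
    return 'Tuya Developer Account'

assert pick_login_method(None) == 'Tuya (Smart Life) App'
assert pick_login_method('abc123') == 'Tuya Developer Account'
```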
radinsky
f5a10dd1cc wyze: add preset support (get/goto) and relevant webhook control (#1951)
* wyze: add preset support (get/goto) and relevant webhook control

* Update PTZ presets publishing

* Remove unnecessary ptzCapabilities emit

* removed unnecessary/debug leftovers

* Remove HttpRequestHandler and unused proprietary webhook gotopreset
2026-01-02 13:20:38 -08:00
Jackson Tomlinson
293a940771 amcrest: handle HTTP/1.0 responses in event listener (#1957)
Some Dahua/Amcrest NVRs (e.g., AMDV7208M) respond with HTTP/1.0 instead of
HTTP/1.1. The event listener was only checking for 'HTTP/1.1 200 OK',
causing it to throw 'expected boundary' errors and crash when receiving
HTTP/1.0 responses.

This fix adds support for both HTTP versions.

Fixes motion detection not working on older Dahua OEM NVRs.
2026-01-01 20:51:54 -08:00
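A minimal Python sketch of the version-tolerant check described above (hypothetical helper; the actual fix lands in the plugin's TypeScript event listener, visible in the diff later in this compare):

```python
def is_ok_status(line: bytes) -> bool:
    # Some Dahua/Amcrest NVRs answer with HTTP/1.0 instead of HTTP/1.1,
    # so accept both versions rather than matching 'HTTP/1.1 200 OK' exactly.
    return line.strip() in (b'HTTP/1.0 200 OK', b'HTTP/1.1 200 OK')

assert is_ok_status(b'HTTP/1.0 200 OK\r\n')
assert is_ok_status(b'HTTP/1.1 200 OK\r\n')
```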
Koushik Dutta
67728883cc core: publish oauth login fix 2025-12-31 12:15:35 -08:00
Koushik Dutta
5d02217a3e snapshot: make web hosted images bypass hotlink protection 2025-12-27 19:27:30 -08:00
Koushik Dutta
63a88e727a Merge branch 'main' of github.com:koush/scrypted 2025-12-27 19:07:57 -08:00
Koushik Dutta
1145caeb58 snapshot: make web hosted images bypass hotlink protection 2025-12-27 19:07:50 -08:00
apocaliss92
2cc7ab08fd reolink: add nvr support (#1947)
* work nvr

* Fix interfaces persisting

* Work

* Fix adopt imploding scrypted

* Skip undefined battery level values

* Preserve auth sessions on restart

* Move nvr creation in proper function

* Restore original createDevice with isNvr addition

* Typo

---------

Co-authored-by: Gianluca Ruocco <gianluca.ruocco@xarvio.com>
2025-12-27 09:21:54 -08:00
The Beholder
bfb8c233f4 openvino: avoid CLIP startup timeout by loading HF cache first (#1949)
Scrypted could restart the OpenVINO plugin on startup in offline/firewalled setups because CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32") triggers HuggingFace Hub network checks/retries that exceed the plugin startup watchdog.
Update predict/clip.py to:
- Load the CLIP processor from the local HF cache first (local_files_only=True) so startup is fast/offline-safe.
- Refresh the processor cache online asynchronously in a background thread (asyncio.to_thread) so update checks don’t block startup.
- Add simple log prints to indicate cache load vs refresh success/failure.
2025-12-26 18:38:13 -08:00
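A sketch of the cache-first pattern described above, assuming the Hugging Face transformers package (simplified; not the plugin's actual predict/clip.py):

```python
import asyncio
from transformers import CLIPProcessor

MODEL = 'openai/clip-vit-base-patch32'

async def load_processor() -> CLIPProcessor:
    try:
        # Load from the local HF cache only: fast and offline-safe, so
        # startup never waits on Hub network checks/retries and the
        # plugin startup watchdog is not tripped.
        processor = CLIPProcessor.from_pretrained(MODEL, local_files_only=True)
        print('clip: processor loaded from local cache')
    except OSError:
        # No usable cache yet; a one-time blocking download is unavoidable.
        processor = CLIPProcessor.from_pretrained(MODEL)

    async def refresh() -> None:
        try:
            # Refresh the cache online in a background thread so update
            # checks do not block startup.
            await asyncio.to_thread(CLIPProcessor.from_pretrained, MODEL)
            print('clip: processor cache refreshed')
        except Exception as e:
            print('clip: processor cache refresh failed', e)

    asyncio.create_task(refresh())
    return processor
```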
Koushik Dutta
ebe6bcc58f client: add worker/fork support to web client 2025-12-16 12:22:03 -08:00
Koushik Dutta
3b0042c922 diagnostics: fix ai slop tests 2025-12-15 16:33:35 -08:00
Koushik Dutta
2f4cd9807b diagnostics: clip/det tests 2025-12-14 15:07:11 -08:00
Koushik Dutta
1711d2a6f7 diagnostics: clip/det tests 2025-12-14 15:04:01 -08:00
Koushik Dutta
2818120b68 diagnostics: url tests 2025-12-13 10:05:51 -08:00
Koushik Dutta
61cf589800 sdk: update tool calls to include id 2025-12-09 16:03:26 -08:00
Koushik Dutta
2c267f6b26 coreml: disable auto restart to work around coreml caching bug filling macos disk until reboot 2025-12-09 12:25:31 -08:00
Koushik Dutta
aa85e7ec19 rebroadcast: avoid mjpeg codecs and warn 2025-12-06 12:38:22 -08:00
Koushik Dutta
e585a48084 sdk: update 2025-12-04 19:35:36 -08:00
Koushik Dutta
465b4a80bb Merge branch 'main' of github.com:koush/scrypted 2025-12-02 08:54:22 -08:00
Koushik Dutta
1b7e24fda7 openvino: publish beta, add notes on 2025.4.0 2025-12-02 08:53:29 -08:00
René
8ec6c61784 docker: Update Watchtower image name in docker-compose.yml (#1937)
I believe the image name is wrong; at least, the image mentioned here doesn't exist.
2025-12-01 08:41:36 -08:00
Koushik Dutta
e1f9397ef9 docker: switch to nicholas-fedor/watchtower 2025-11-30 17:39:50 -08:00
Koushik Dutta
3e54db1658 reolink: publish 2025-11-27 18:24:35 -08:00
apocaliss92
a7cc8d0e11 reolink: Check deviceInfo exists (#1935)
Co-authored-by: Gianluca Ruocco <gianluca.ruocco@xarvio.com>
2025-11-27 17:38:21 -08:00
Koushik Dutta
be4b772436 install: remove gstreamer 2025-11-27 12:54:25 -08:00
Koushik Dutta
5e0afa627c reolink: publish 2025-11-27 08:49:25 -08:00
apocaliss92
70c46f9894 - reolink: check and fix netData (#1934)
- reolink: restrict homehub streams to RTSP

Co-authored-by: Gianluca Ruocco <gianluca.ruocco@xarvio.com>
2025-11-27 08:42:35 -08:00
Koushik Dutta
fe94472282 Revert "Reolink: add check for net data, enable/disable RTMP/RTSP/ONVIF/HTTPS when necessary (#1931)"
This reverts commit 370a82dc56.
2025-11-27 08:19:19 -08:00
apocaliss92
c559212b2b reolink/hikvision: Add detection sources (#1932)
* add pluginId to detection objects

* add detection sourceId to hik

---------

Co-authored-by: Gianluca Ruocco <gianluca.ruocco@xarvio.com>
2025-11-26 12:00:57 -08:00
Koushik Dutta
10b097480f reolink: publish 2025-11-26 08:42:51 -08:00
Koushik Dutta
14050d4e3a unifi-protect: fixup ws timeouts 2025-11-26 08:41:44 -08:00
apocaliss92
370a82dc56 Reolink: add check for net data, enable/disable RTMP/RTSP/ONVIF/HTTPS when necessary (#1931)
- unify methods to get specific abilities
- allow only RTSP streams for homehub devices

Co-authored-by: Gianluca Ruocco <gianluca.ruocco@xarvio.com>
2025-11-26 07:29:32 -08:00
Koushik Dutta
77dd8cf2a8 reolink: fix dep 2025-11-25 16:13:56 -08:00
Koushik Dutta
2b2a5c3dd8 diagnostics: fix metadata retrieval failure 2025-11-25 09:16:36 -08:00
Koushik Dutta
6a952bf104 diagnostics: fix metadata retrieval failure 2025-11-25 08:45:56 -08:00
Koushik Dutta
72c7736b2a snapshot: aspect ratio description 2025-11-17 15:19:58 -08:00
Koushik Dutta
c6771ce8ae snapshot: publish with aspect ratio override fixes 2025-11-17 15:16:57 -08:00
Koushik Dutta
e691c71224 snapshot: publish beta with aspect ratio override 2025-11-17 14:57:37 -08:00
Koushik Dutta
d22183faa7 Merge branch 'main' of github.com:koush/scrypted 2025-11-17 11:06:18 -08:00
Koushik Dutta
12ce2dc6ce core: publish new lxc updater 2025-11-17 11:06:13 -08:00
Koushik Dutta
b4b17d420e windows: Upgrade node.js to version 22.21.0 2025-11-17 08:51:02 -08:00
Koushik Dutta
b69dd024e5 core: use new builtin docker image updater 2025-11-16 20:48:34 -08:00
Koushik Dutta
b43fdf83e2 openvino: legacy gpu crash fix for text recognition 2025-11-16 18:13:12 -08:00
Koushik Dutta
c4a12fe493 unifi-protect: publish beta 2025-11-16 11:42:27 -08:00
Koushik Dutta
3c8a3132e5 Merge branch 'main' of github.com:koush/scrypted 2025-11-16 11:33:45 -08:00
Koushik Dutta
ef65a413e7 server: fix EventEmitter import 2025-11-16 11:33:40 -08:00
Koushik Dutta
7219c8bee3 hikvision: ensure device probe has data 2025-11-16 09:06:41 -08:00
Koushik Dutta
86160a74ac openvino: note vgg failure on latest openvino 2025-11-15 18:24:43 -08:00
Koushik Dutta
0dc7aec5c9 docker: update openvino legacy packages 2025-11-15 18:10:00 -08:00
Koushik Dutta
ec6ccb5826 cloud: code cleanup and alert clearing 2025-11-13 15:23:26 -08:00
Koushik Dutta
ef55c3f366 cloud: add periodic cloudflare health check 2025-11-13 11:16:05 -08:00
Koushik Dutta
923dff378c snapshot/rebroadcast: publish new privacy mode 2025-11-13 09:43:18 -08:00
Koushik Dutta
6356702ba3 rebroadcast: privacy mode 2025-11-13 09:40:28 -08:00
Koushik Dutta
a2576d5741 webrtc: fix https://github.com/koush/scrypted/issues/1909 2025-11-09 11:02:06 -08:00
Koushik Dutta
6e5782d734 common: fix microphone nre in BrowserSignalingSession 2025-11-09 10:52:19 -08:00
Koushik Dutta
7583d072cc sdk: add Buttons type 2025-11-09 10:05:40 -08:00
Koushik Dutta
34f0529691 videoanalysis: prefer libav for stability 2025-11-09 08:33:18 -08:00
Koushik Dutta
4ad594074a server: remove python cluster mode port logging 2025-11-09 08:23:06 -08:00
Koushik Dutta
8dba09e047 beta 2025-11-07 09:15:58 -08:00
Koushik Dutta
56b4a04e56 postbeta 2025-11-07 09:15:58 -08:00
Koushik Dutta
90f546c422 docker: fixup intel dockerfile 2025-11-07 08:40:47 -08:00
Koushik Dutta
ace1c74ec2 server: prevent invalid media converter from crashing all conversions 2025-11-07 08:08:16 -08:00
Koushik Dutta
99c0c53405 core/sdk: fix missing interface acl crash 2025-11-06 10:30:03 -08:00
Koushik Dutta
55fb215cab core/sdk: fix missing interface acl crash 2025-11-06 10:29:14 -08:00
Koushik Dutta
d8e17e9216 core: remove watchtower from proxmox totally 2025-11-05 10:32:25 -08:00
Koushik Dutta
618a33028b proxmox: install v0.143.0 2025-11-05 10:29:33 -08:00
Koushik Dutta
536d8f03ae proxmox: add install override 2025-11-05 09:33:20 -08:00
Koushik Dutta
6e5c73b48c proxmox: lxc setup fixes 2025-11-05 09:17:49 -08:00
Koushik Dutta
94c4b663f6 proxmox: lxc setup fixes 2025-11-05 09:15:32 -08:00
Koushik Dutta
c95cca0f81 proxmox: remove watchtower 2025-11-05 09:04:23 -08:00
Koushik Dutta
d515cc47d0 core: temporarily disable lxc-docker abort-on-container-exit 2025-11-05 08:52:26 -08:00
Koushik Dutta
12e60efd35 core: prevent apt updates 2025-11-05 08:02:19 -08:00
Koushik Dutta
9107558bab diagnostics: cloud ipv4 and ipv6 check 2025-11-03 14:38:08 -08:00
Koushik Dutta
a8bb431efb install: use last working release for nvidia-legacy 2025-10-31 10:15:37 -07:00
Koushik Dutta
22ffac1170 docker: fix nvidia legacy to use specific cudnn 2025-10-31 09:05:47 -07:00
Koushik Dutta
2f45e72bd3 client: add dev hook 2025-10-30 09:43:00 -07:00
Koushik Dutta
5749a522db docker: move amd opencl into amd image only 2025-10-30 08:31:11 -07:00
Koushik Dutta
38037d31b3 install: add nvidia legacy 2025-10-29 21:08:42 -07:00
Koushik Dutta
dd6e5cf854 postbeta 2025-10-29 20:47:42 -07:00
Koushik Dutta
f9b8715cc0 install: add nvidia legacy 2025-10-29 20:47:24 -07:00
Koushik Dutta
3186480f44 werift: update 2025-10-29 11:36:04 -07:00
Koushik Dutta
25521699e8 webrtc: update werift and publish beta 2025-10-29 11:03:12 -07:00
Koushik Dutta
b87906911c sdk: rollup terser support 2025-10-29 10:53:27 -07:00
Koushik Dutta
55e67c9eda sdk: update deps 2025-10-29 09:33:27 -07:00
Koushik Dutta
54c56ac4ce core: add platform images 2025-10-28 11:45:15 -07:00
Koushik Dutta
547db5bbbd install: update ha 2025-10-28 11:11:26 -07:00
Koushik Dutta
5b789b35ec postrelease 2025-10-28 10:13:05 -07:00
153 changed files with 13998 additions and 4346 deletions

View File

@@ -84,7 +84,7 @@ jobs:
strategy:
matrix:
BASE: ["noble"]
VENDOR: ["nvidia", "intel", "amd"]
VENDOR: ["nvidia", "nvidia-legacy", "intel", "amd"]
steps:
- name: Check out the repo
uses: actions/checkout@v3

View File

@@ -21,6 +21,7 @@ jobs:
matrix:
BASE: [
["noble-nvidia", ".s6", "noble-nvidia", "nvidia"],
["noble-nvidia-legacy", ".s6", "noble-nvidia-legacy", "nvidia-legacy"],
["noble-intel", ".s6", "noble-intel", "intel"],
["noble-amd", ".s6", "noble-amd", "amd"],
["noble-full", ".s6", "noble-full", "full"],

View File

@@ -110,7 +110,9 @@ export class BrowserSignalingSession implements RTCSignalingSession {
await this.microphone.replaceTrack(mic.getTracks()[0]);
}
this.microphone.track.enabled = enabled;
if (this.microphone?.track) {
this.microphone.track.enabled = enabled;
}
}
close() {

View File

@@ -1,6 +1,6 @@
# Home Assistant Addon Configuration
name: Scrypted
version: "v0.141.0-noble-full"
version: "v0.143.0-noble-full"
slug: scrypted
description: Scrypted is a high performance home video integration and automation platform
url: "https://github.com/koush/scrypted"

View File

@@ -1,4 +1,4 @@
ARG BASE="16-jammy"
ARG BASE="noble-full"
FROM ghcr.io/koush/scrypted-common:${BASE}
WORKDIR /
@@ -8,4 +8,4 @@ WORKDIR /scrypted/server
RUN npm install
RUN npm run build
CMD npm run serve-no-build
CMD ["npm", "run", "serve-no-build"]

View File

@@ -1,4 +1,4 @@
ARG BASE="ghcr.io/koush/scrypted-common:20-jammy-full"
ARG BASE="ghcr.io/koush/scrypted-common:noble-amd"
FROM $BASE
ENV SCRYPTED_DOCKER_FLAVOR="amd"

View File

@@ -35,19 +35,6 @@ RUN apt-get -y install \
python3-setuptools \
python3-wheel
# gstreamer native https://gstreamer.freedesktop.org/documentation/installing/on-linux.html?gi-language=c#install-gstreamer-on-ubuntu-or-debian
RUN echo "Installing gstreamer."
# python-codecs pygobject dependencies
RUN apt-get -y install libcairo2-dev libgirepository1.0-dev
RUN apt-get -y install \
gstreamer1.0-tools gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-libav \
gstreamer1.0-vaapi
# python3 gstreamer bindings
RUN echo "Installing gstreamer bindings."
RUN apt-get -y install \
python3-gst-1.0
# allow pip to install to system
RUN rm -f /usr/lib/python**/EXTERNALLY-MANAGED
@@ -69,12 +56,9 @@ RUN apt -y install libvulkan1
# intel opencl for openvino
RUN curl https://raw.githubusercontent.com/koush/scrypted/main/install/docker/install-intel-graphics.sh | bash
# NPU driver will SIGILL on openvino prior to 2024.5.0
# intel NPU
RUN curl https://raw.githubusercontent.com/koush/scrypted/main/install/docker/install-intel-npu.sh | bash
# amd opencl
RUN curl https://raw.githubusercontent.com/koush/scrypted/main/install/docker/install-amd-graphics.sh | bash
# python 3.9 from ppa.
# 3.9 is the version with prebuilt support for tensorflow lite
RUN add-apt-repository -y ppa:deadsnakes/ppa && \

View File

@@ -1,9 +1,16 @@
ARG BASE="ghcr.io/koush/scrypted-common:20-jammy-full"
ARG BASE="ghcr.io/koush/scrypted-common:noble-intel"
FROM $BASE
ENV SCRYPTED_DOCKER_FLAVOR="intel"
RUN curl https://raw.githubusercontent.com/koush/scrypted/main/install/docker/install-intel-oneapi.sh | bash
# these paths must be updated if oneapi is updated via the install-intel-oneapi.sh script
# note that the 2022.2 seems to be a typo in the intel script...?
ENV LD_LIBRARY_PATH=/opt/intel/oneapi/tcm/1.4/lib:/opt/intel/oneapi/umf/0.11/lib:/opt/intel/oneapi/tbb/2022.2/env/../lib/intel64/gcc4.8:/opt/intel/oneapi/mkl/2025.2/lib:/opt/intel/oneapi/compiler/2025.2/opt/compiler/lib:/opt/intel/oneapi/compiler/2025.2/lib
ENV LD_LIBRARY_PATH=/opt/intel/oneapi/tcm/latest/lib
ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/opt/intel/oneapi/umf/latest/lib
# gcc4.8 does not have a latest link however, it does seem to point to a relative lib path
ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/opt/intel/oneapi/tbb/latest/env/../lib/intel64/gcc4.8
ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/opt/intel/oneapi/tbb/latest/lib
ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/opt/intel/oneapi/mkl/latest/lib
ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/opt/intel/oneapi/compiler/latest/opt/compiler/lib
ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/opt/intel/oneapi/compiler/latest/lib

View File

@@ -1,4 +1,4 @@
ARG BASE="jammy"
ARG BASE="noble-lite"
FROM ubuntu:${BASE} AS header
ENV SCRYPTED_DOCKER_FLAVOR="lite"

View File

@@ -1,4 +1,4 @@
ARG BASE="ghcr.io/koush/scrypted-common:20-jammy-full"
ARG BASE="ghcr.io/koush/scrypted-common:noble-nvidia"
FROM $BASE
ENV SCRYPTED_DOCKER_FLAVOR="nvidia"

View File

@@ -0,0 +1,11 @@
ARG BASE="ghcr.io/koush/scrypted-common:noble-nvidia-legacy"
FROM $BASE
ENV SCRYPTED_DOCKER_FLAVOR="nvidia"
ENV NVIDIA_DRIVER_CAPABILITIES=all
ENV NVIDIA_VISIBLE_DEVICES=all
# nvidia cudnn/libcublas etc.
# for some reason this is not provided by the nvidia container toolkit
RUN curl https://raw.githubusercontent.com/koush/scrypted/main/install/docker/install-nvidia-graphics-legacy.sh | bash

View File

@@ -1,4 +1,4 @@
ARG BASE="20-jammy-full"
ARG BASE="noble-full"
FROM ghcr.io/koush/scrypted-common:${BASE}
# avahi advertiser support

View File

@@ -1,3 +1,3 @@
./docker-build.sh
docker build -t ghcr.io/koush/scrypted:20-jammy-full.nvidia -f Dockerfile.nvidia .
docker build -t ghcr.io/koush/scrypted:nvidia -f Dockerfile.nvidia .

View File

@@ -4,7 +4,7 @@ set -x
NODE_VERSION=22
SCRYPTED_INSTALL_VERSION=beta
IMAGE_BASE=jammy
IMAGE_BASE=noble
FLAVOR=full
BASE=$NODE_VERSION-$IMAGE_BASE-$FLAVOR
echo $BASE

View File

@@ -145,7 +145,7 @@ services:
- WATCHTOWER_HTTP_API_UPDATE=true
- WATCHTOWER_SCOPE=scrypted
- WATCHTOWER_HTTP_API_PERIODIC_POLLS=${WATCHTOWER_HTTP_API_PERIODIC_POLLS:-true}
image: containrrr/watchtower
image: nickfedor/watchtower
container_name: scrypted-watchtower
restart: unless-stopped
volumes:
@@ -164,3 +164,5 @@ services:
dns:
- ${SCRYPTED_DNS_SERVER_0:-1.1.1.1}
- ${SCRYPTED_DNS_SERVER_1:-8.8.8.8}
# LXC usage only
# lxc profiles: ["disabled"]

View File

@@ -69,18 +69,29 @@ apt-get install -y ocl-icd-libopencl1
# install 24.35.30872.22 for legacy support. Then install latest.
# https://github.com/intel/compute-runtime/issues/770#issuecomment-2515166915
# https://github.com/intel/compute-runtime/releases/tag/24.35.30872.22
curl -O -L https://github.com/intel/intel-graphics-compiler/releases/download/igc-1.0.17537.20/intel-igc-core_1.0.17537.20_amd64.deb
curl -O -L https://github.com/intel/intel-graphics-compiler/releases/download/igc-1.0.17537.20/intel-igc-opencl_1.0.17537.20_amd64.deb
# curl -O -L https://github.com/intel/compute-runtime/releases/download/24.35.30872.22/intel-level-zero-gpu-dbgsym_1.3.30872.22_amd64.ddeb
# curl -O -L https://github.com/intel/compute-runtime/releases/download/24.35.30872.22/intel-level-zero-gpu-legacy1-dbgsym_1.3.30872.22_amd64.ddeb
curl -O -L https://github.com/intel/compute-runtime/releases/download/24.35.30872.22/intel-level-zero-gpu-legacy1_1.3.30872.22_amd64.deb
curl -O -L https://github.com/intel/compute-runtime/releases/download/24.35.30872.22/intel-level-zero-gpu_1.3.30872.22_amd64.deb
# curl -O -L https://github.com/intel/compute-runtime/releases/download/24.35.30872.22/intel-opencl-icd-dbgsym_24.35.30872.22_amd64.ddeb
# curl -O -L https://github.com/intel/compute-runtime/releases/download/24.35.30872.22/intel-opencl-icd-legacy1-dbgsym_24.35.30872.22_amd64.ddeb
curl -O -L https://github.com/intel/compute-runtime/releases/download/24.35.30872.22/intel-opencl-icd-legacy1_24.35.30872.22_amd64.deb
curl -O -L https://github.com/intel/compute-runtime/releases/download/24.35.30872.22/intel-opencl-icd_24.35.30872.22_amd64.deb
curl -O -L https://github.com/intel/compute-runtime/releases/download/24.35.30872.22/libigdgmm12_22.5.0_amd64.deb
# original legacy packages
# # https://github.com/intel/compute-runtime/releases/tag/24.35.30872.22
# curl -O -L https://github.com/intel/intel-graphics-compiler/releases/download/igc-1.0.17537.20/intel-igc-core_1.0.17537.20_amd64.deb
# curl -O -L https://github.com/intel/intel-graphics-compiler/releases/download/igc-1.0.17537.20/intel-igc-opencl_1.0.17537.20_amd64.deb
# # curl -O -L https://github.com/intel/compute-runtime/releases/download/24.35.30872.22/intel-level-zero-gpu-dbgsym_1.3.30872.22_amd64.ddeb
# # curl -O -L https://github.com/intel/compute-runtime/releases/download/24.35.30872.22/intel-level-zero-gpu-legacy1-dbgsym_1.3.30872.22_amd64.ddeb
# curl -O -L https://github.com/intel/compute-runtime/releases/download/24.35.30872.22/intel-level-zero-gpu-legacy1_1.3.30872.22_amd64.deb
# curl -O -L https://github.com/intel/compute-runtime/releases/download/24.35.30872.22/intel-level-zero-gpu_1.3.30872.22_amd64.deb
# # curl -O -L https://github.com/intel/compute-runtime/releases/download/24.35.30872.22/intel-opencl-icd-dbgsym_24.35.30872.22_amd64.ddeb
# # curl -O -L https://github.com/intel/compute-runtime/releases/download/24.35.30872.22/intel-opencl-icd-legacy1-dbgsym_24.35.30872.22_amd64.ddeb
# curl -O -L https://github.com/intel/compute-runtime/releases/download/24.35.30872.22/intel-opencl-icd-legacy1_24.35.30872.22_amd64.deb
# curl -O -L https://github.com/intel/compute-runtime/releases/download/24.35.30872.22/intel-opencl-icd_24.35.30872.22_amd64.deb
# curl -O -L https://github.com/intel/compute-runtime/releases/download/24.35.30872.22/libigdgmm12_22.5.0_amd64.deb
# new legacy packages
# https://github.com/intel/compute-runtime/releases/tag/24.35.30872.36
curl -O -L https://github.com/intel/intel-graphics-compiler/releases/download/igc-1.0.17537.24/intel-igc-core_1.0.17537.24_amd64.deb
curl -O -L https://github.com/intel/intel-graphics-compiler/releases/download/igc-1.0.17537.24/intel-igc-opencl_1.0.17537.24_amd64.deb
curl -O -L https://github.com/intel/compute-runtime/releases/download/24.35.30872.36/intel-level-zero-gpu-legacy1-dbgsym_1.5.30872.36_amd64.ddeb
curl -O -L https://github.com/intel/compute-runtime/releases/download/24.35.30872.36/intel-level-zero-gpu-legacy1_1.5.30872.36_amd64.deb
curl -O -L https://github.com/intel/compute-runtime/releases/download/24.35.30872.36/intel-opencl-icd-legacy1-dbgsym_24.35.30872.36_amd64.ddeb
curl -O -L https://github.com/intel/compute-runtime/releases/download/24.35.30872.36/intel-opencl-icd-legacy1_24.35.30872.36_amd64.deb
curl -O -L https://github.com/intel/compute-runtime/releases/download/24.35.30872.36/libigdgmm12_22.5.0_amd64.deb
dpkg -i *.deb
rm -f *.deb
@@ -101,7 +112,7 @@ set +e
dpkg -i *.deb
set -e
# the legacy + latest process says this may be necessary but it does not seem to be in a clean environment.
apt-get install --fix-broken
apt-get -y install --fix-broken
cd /tmp && rm -rf /tmp/gpu

View File

@@ -0,0 +1,54 @@
if [ "$(uname -m)" = "x86_64" ]
then
UBUNTU_22_04=$(lsb_release -r | grep "22.04")
UBUNTU_24_04=$(lsb_release -r | grep "24.04")
# needs either ubuntu 22.0.4 or 24.04
if [ -z "$UBUNTU_22_04" ] && [ -z "$UBUNTU_24_04" ]
then
echo "NVIDIA graphics package can not be installed. Ubuntu version could not be detected when checking lsb-release and /etc/os-release."
exit 1
fi
if [ -n "$UBUNTU_22_04" ]
then
distro="ubuntu2204"
else
distro="ubuntu2404"
fi
echo "Installing NVIDIA graphics packages."
apt update -q \
&& apt install -y wget \
&& wget -qO /cuda-keyring.deb https://developer.download.nvidia.com/compute/cuda/repos/$distro/$(uname -m)/cuda-keyring_1.1-1_all.deb \
&& dpkg -i /cuda-keyring.deb \
&& apt update -q \
&& apt install -y cuda-nvcc-12-6 libcublas-12-6 libcudnn9-cuda-12=9.10.2.21-1 cuda-libraries-12-6;
if [ "$?" != "0" ]
then
echo "Error: NVIDIA graphics packages failed to install."
exit 1
fi
# Update: the libnvidia-opencl.so.1 file is not present in the container image, it is
# mounted via the nvidia container runtime. This is why the following check is commented out.
# this file is present but for some reason the icd file is not created by nvidia runtime.
# if [ ! -f "/usr/lib/x86_64-linux-gnu/libnvidia-opencl.so.1" ]
# then
# echo "Error: NVIDIA OpenCL library not found."
# exit 1
# fi
# the container runtime doesn't mount this file for some reason. seems to be a bug.
# https://github.com/NVIDIA/nvidia-container-toolkit/issues/682
# but the contents are simply the .so file, which is a symlink the nvidia runtime
# will mount in.
mkdir -p /etc/OpenCL/vendors/
echo "libnvidia-opencl.so.1" > /etc/OpenCL/vendors/nvidia.icd
else
echo "NVIDIA graphics will not be installed on this architecture."
fi
exit 0

View File

@@ -9,12 +9,9 @@ RUN apt -y install libvulkan1
# intel opencl for openvino
RUN curl https://raw.githubusercontent.com/koush/scrypted/main/install/docker/install-intel-graphics.sh | bash
# NPU driver will SIGILL on openvino prior to 2024.5.0
# intel NPU
RUN curl https://raw.githubusercontent.com/koush/scrypted/main/install/docker/install-intel-npu.sh | bash
# amd opencl
RUN curl https://raw.githubusercontent.com/koush/scrypted/main/install/docker/install-amd-graphics.sh | bash
# python 3.9 from ppa.
# 3.9 is the version with prebuilt support for tensorflow lite
RUN add-apt-repository -y ppa:deadsnakes/ppa && \

View File

@@ -32,19 +32,6 @@ RUN apt-get -y install \
python3-setuptools \
python3-wheel
# gstreamer native https://gstreamer.freedesktop.org/documentation/installing/on-linux.html?gi-language=c#install-gstreamer-on-ubuntu-or-debian
RUN echo "Installing gstreamer."
# python-codecs pygobject dependencies
RUN apt-get -y install libcairo2-dev libgirepository1.0-dev
RUN apt-get -y install \
gstreamer1.0-tools gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-libav \
gstreamer1.0-vaapi
# python3 gstreamer bindings
RUN echo "Installing gstreamer bindings."
RUN apt-get -y install \
python3-gst-1.0
# allow pip to install to system
RUN rm -f /usr/lib/python**/EXTERNALLY-MANAGED

View File

@@ -47,9 +47,6 @@ RUN_IGNORE sudo installer -pkg /tmp/node.pkg -target /
NODE_PATH=/usr/local # used to pass var test
NODE_BIN_PATH=/usr/local/bin
# gstreamer plugins
RUN_IGNORE brew install gstreamer
ARCH=$(arch)
if [ "$ARCH" = "arm64" ]
then

View File

@@ -19,7 +19,7 @@ sc.exe stop scrypted.exe
iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
# Install node.js
choco upgrade -y nodejs-lts --version=22.15.0
choco upgrade -y nodejs-lts --version=22.21.0
# Install VC Redist, which is necessary for portable python
choco install -y vcredist140

View File

@@ -4,8 +4,11 @@ cd /root/.scrypted
# always immediately upgrade everything in case there's a broken update.
# this will also be preferable for troubleshooting via lxc reboot.
export DEBIAN_FRONTEND=noninteractive
yes | dpkg --configure -a
apt -y --fix-broken install && apt -y update && apt -y dist-upgrade
# auto updates may break the system?
# watchtower stopped working after a docker update, so disabling for now.
# yes | dpkg --configure -a
# apt -y --fix-broken install && apt -y update && apt -y dist-upgrade
function cleanup() {
IS_UP=$(docker compose ps scrypted -a | grep Up)
@@ -30,4 +33,8 @@ docker compose pull
# force a recreate as .env may have changed.
# furthermore force recreate gets the container back into a known state
# which is preferable in case the user has made manual changes and then restarts.
WATCHTOWER_HTTP_API_TOKEN=$(echo $RANDOM | md5sum | head -c 32) docker compose up --force-recreate --abort-on-container-exit
WATCHTOWER_HTTP_API_TOKEN=$(echo $RANDOM | md5sum | head -c 32) docker compose up --force-recreate
# abort on container exit is problematic if watchtower is the one that aborts.
# this is also redundant now that watchtower is disabled.
# WATCHTOWER_HTTP_API_TOKEN=$(echo $RANDOM | md5sum | head -c 32) docker compose up --force-recreate --abort-on-container-exit

View File

@@ -18,7 +18,10 @@ function readyn() {
}
cd /tmp
SCRYPTED_VERSION=v0.139.0
if [ -z "$SCRYPTED_VERSION" ]
then
SCRYPTED_VERSION=v0.143.0
fi
SCRYPTED_TAR_ZST=scrypted-$SCRYPTED_VERSION.tar.zst
if [ -z "$VMID" ]
then

View File

@@ -21,7 +21,7 @@
"typescript": "^5.8.3"
},
"peerDependencies": {
"@scrypted/types": "^0.5.45"
"@scrypted/types": "^0.5.52"
}
},
"node_modules/@cspotcode/source-map-support": {
@@ -104,9 +104,9 @@
}
},
"node_modules/@scrypted/types": {
"version": "0.5.45",
"resolved": "https://registry.npmjs.org/@scrypted/types/-/types-0.5.45.tgz",
"integrity": "sha512-ysySpWkGUrUpNj0BoTZpyn2HeVCyN0kfsQ2qyUoegdj7O8Z4VWROQa1mSrrPAAftM8zhTHrgYw8RcvMsfh0BTQ==",
"version": "0.5.52",
"resolved": "https://registry.npmjs.org/@scrypted/types/-/types-0.5.52.tgz",
"integrity": "sha512-c1ra1ENnoC8MqVHf7QQcXIU+5BvQnhU4x5oqx4b20LtoB0/TTXthYFFvEDBvLenBivUr8Bb6dWrji7TZXVax1g==",
"license": "ISC",
"peer": true,
"dependencies": {

View File

@@ -19,7 +19,7 @@
"typescript": "^5.8.3"
},
"peerDependencies": {
"@scrypted/types": "^0.5.45"
"@scrypted/types": "^0.5.52"
},
"dependencies": {
"engine.io-client": "^6.6.3",

View File

@@ -1,19 +1,17 @@
import { ConnectRPCObjectOptions, MediaObjectCreateOptions, ScryptedStatic } from "@scrypted/types";
import { ConnectRPCObjectOptions, ForkOptions, ForkWorker, MediaObjectCreateOptions, PluginFork, ScryptedInterface, ScryptedInterfaceProperty, ScryptedStatic } from "@scrypted/types";
import * as eio from 'engine.io-client';
import { SocketOptions } from 'engine.io-client';
import { timeoutPromise } from "../../../common/src/promise-utils";
import type { ClusterObject, ConnectRPCObject } from '../../../server/src/cluster/connect-rpc-object';
import { domFetch } from "../../../server/src/fetch";
import { httpFetch } from '../../../server/src/fetch/http-fetch';
import type { IOSocket } from '../../../server/src/io';
import { MediaObject } from '../../../server/src/plugin/mediaobject';
import { PluginAPIProxy, PluginRemote } from "../../../server/src/plugin/plugin-api";
import { attachPluginRemote } from '../../../server/src/plugin/plugin-remote';
import { RpcPeer } from '../../../server/src/rpc';
import { createRpcDuplexSerializer, createRpcSerializer } from '../../../server/src/rpc-serializer';
import packageJson from '../package.json';
import { isIPAddress } from "./ip";
import { domFetch } from "../../../server/src/fetch";
import { httpFetch } from '../../../server/src/fetch/http-fetch';
export * as rpc from '../../../server/src/rpc';
export * as rpc_serializer from '../../../server/src/rpc-serializer';
@@ -33,6 +31,15 @@ const sourcePeerId = RpcPeer.generateId();
type IOClientSocket = eio.Socket & IOSocket;
interface InternalFork extends Pick<ScryptedClientStatic, 'loginResult' | 'username' | 'address' | 'connectionType'> {
extraHeaders: {
[header: string]: string,
};
transports?: string[] | undefined;
clientName?: string;
admin: boolean;
};
function once(socket: IOClientSocket, event: 'open' | 'message') {
return new Promise<any[]>((resolve, reject) => {
const err = (e: any) => {
@@ -70,6 +77,7 @@ export interface ScryptedClientStatic extends ScryptedStatic {
connectionType: ScryptedClientConnectionType;
rpcPeer: RpcPeer;
loginResult: ScryptedClientLoginResult;
fork<T>(options: ForkOptions & { worker: Worker }): PluginFork<T>;
}
export interface ScryptedConnectionOptions {
@@ -151,6 +159,14 @@ export function getCurrentBaseUrlRaw() {
const url = getBaseUrl(window.location.href)
|| getBaseUrl(document.baseURI)
|| getBaseUrl(importMetaUrlWithoutAssetsPath());
if (!url) {
try {
return getBaseUrl(process.env.SCRYPTED_ENDPOINT_PATH);
}
catch (e) {
}
}
return url;
}
@@ -420,11 +436,10 @@ export async function connectScryptedClient(options: ScryptedClientOptions): Pro
const eioPath = `endpoint/${pluginId}/engine.io/api`;
const eioEndpoint = baseUrl ? new URL(eioPath, baseUrl).pathname : '/' + eioPath;
// https://github.com/socketio/engine.io/issues/690
const cacheBust = Math.random().toString(36).substring(3, 10);
const eioOptions: Partial<SocketOptions> = {
const eioOptions: eio.SocketOptions = {
path: eioEndpoint,
query: {
cacheBust,
cacheBust: Math.random().toString(36).substring(3, 10),
},
withCredentials: true,
extraHeaders,
@@ -462,7 +477,7 @@ export async function connectScryptedClient(options: ScryptedClientOptions): Pro
tryLocalAddressess: tryAddresses,
});
const localEioOptions: Partial<SocketOptions> = {
const localEioOptions: eio.SocketOptions = {
...eioOptions,
extraHeaders: {
...eioOptions.extraHeaders,
@@ -578,6 +593,8 @@ export async function connectScryptedClient(options: ScryptedClientOptions): Pro
endpointManager,
mediaManager,
clusterManager,
pluginHostAPI,
pluginRemoteAPI,
} = scrypted;
console.log('api attached', Date.now() - start);
@@ -604,201 +621,131 @@ export async function connectScryptedClient(options: ScryptedClientOptions): Pro
.map(id => systemManager.getDeviceById(id))
.find(device => device.pluginId === '@scrypted/core' && device.nativeId === `user:${username}`);
const clusterPeers = new Map<number, Promise<RpcPeer>>();
const finalizationRegistry = new FinalizationRegistry((clusterPeer: RpcPeer) => {
clusterPeer.kill('object finalized');
});
const ensureClusterPeer = (clusterObject: ClusterObject, connectRPCObjectOptions?: ConnectRPCObjectOptions) => {
// If dedicatedTransport is true, don't reuse existing cluster peers
if (!connectRPCObjectOptions?.dedicatedTransport) {
let clusterPeerPromise = clusterPeers.get(clusterObject.port);
if (clusterPeerPromise)
return clusterPeerPromise;
}
const connectRPCObject = clusterSetup(address, connectionType, queryToken, extraHeaders, options?.transports, sourcePeerId, clientName);
const clusterPeerPromise = (async () => {
const eioPath = 'engine.io/connectRPCObject';
const eioEndpoint = new URL(eioPath, address).pathname;
const eioQueryToken = connectionType === 'http' ? undefined : queryToken;
const clusterPeerOptions = {
path: eioEndpoint,
query: {
cacheBust,
clusterObject: JSON.stringify(clusterObject),
...eioQueryToken,
},
withCredentials: true,
extraHeaders,
rejectUnauthorized: false,
transports: options?.transports,
};
const loginResult: ScryptedClientLoginResult = {
username,
token,
directAddress,
localAddresses,
externalAddresses,
scryptedCloud,
queryToken,
authorization,
cloudAddress,
hostname,
serverId,
};
const clusterPeerSocket = new eio.Socket(address, clusterPeerOptions);
let peerReady = false;
type ForkType = ScryptedClientStatic['fork'];
const fork: ForkType = (forkOptions) => {
const { worker } = forkOptions;
// Timeout handling for dedicated transports
let receiveTimeout: NodeJS.Timeout | undefined;
let sendTimeout: NodeJS.Timeout | undefined;
let clusterPeer: RpcPeer | undefined;
const serializer = createRpcSerializer({
sendMessageBuffer: buffer => worker.postMessage(buffer),
sendMessageFinish: message => worker.postMessage(JSON.stringify(message)),
});
const clearTimers = () => {
if (receiveTimeout) {
clearTimeout(receiveTimeout);
receiveTimeout = undefined;
}
if (sendTimeout) {
clearTimeout(sendTimeout);
sendTimeout = undefined;
}
};
const threadPeer = new RpcPeer("main-client", 'thread', (message, reject, serializationContext) => {
try {
serializer.sendMessage(message, reject, serializationContext);
}
catch (e) {
reject?.(e as Error);
}
});
const resetReceiveTimeout = connectRPCObjectOptions?.dedicatedTransport?.receiveTimeout ? () => {
if (receiveTimeout) {
clearTimeout(receiveTimeout);
}
receiveTimeout = setTimeout(() => {
if (clusterPeer) {
clusterPeer.kill('receive timeout');
rpcPeer.killed.finally(() => threadPeer.kill('main rpc peer killed'));
worker.addEventListener('message', async event => {
if (event.data instanceof Uint8Array) {
serializer.onMessageBuffer(Buffer.from(event.data));
}
else {
serializer.onMessageFinish(JSON.parse(event.data));
}
});
serializer.setupRpcPeer(threadPeer);
// there is no worker close event?
const forkApi = new PluginAPIProxy(pluginHostAPI, mediaManager);
threadPeer.killed.finally(() => {
forkApi.removeListeners();
worker.terminate();
});
const internalFork: InternalFork = {
loginResult,
username,
address,
connectionType,
extraHeaders,
transports: options?.transports,
clientName,
admin,
};
threadPeer.params['client'] = internalFork;
const result = (async () => {
const getRemote = await threadPeer.getParam('getRemote');
const remote = await getRemote(forkApi, pluginId, {
serverVersion
}) as PluginRemote;
await remote.setSystemState(systemManager.getSystemState());
forkApi.listen((id, eventDetails, eventData) => {
// ScryptedDevice events will be handled specially and repropagated by the remote.
if (eventDetails.eventInterface === ScryptedInterface.ScryptedDevice) {
if (eventDetails.property === ScryptedInterfaceProperty.id) {
// a change on the id property means device was deleted
remote.updateDeviceState(eventData, undefined);
}
}, connectRPCObjectOptions.dedicatedTransport.receiveTimeout);
} : undefined;
const resetSendTimeout = connectRPCObjectOptions?.dedicatedTransport?.sendTimeout ? () => {
if (sendTimeout) {
clearTimeout(sendTimeout);
}
sendTimeout = setTimeout(() => {
if (clusterPeer) {
clusterPeer.kill('send timeout');
else {
// a change on anything else is a descriptor update
remote.updateDeviceState(id, systemManager.getSystemState()[id]);
}
}, connectRPCObjectOptions.dedicatedTransport.sendTimeout);
} : undefined;
clusterPeerSocket.on('close', () => {
clusterPeer?.kill('socket closed');
// Only remove from clusterPeers if it's not a dedicated transport
if (!connectRPCObjectOptions?.dedicatedTransport) {
clusterPeers.delete(clusterObject.port);
return;
}
if (!peerReady) {
throw new Error("peer disconnected before setup completed");
if (eventDetails.property && !eventDetails.mixinId) {
remote.notify(id, eventDetails, systemManager.getSystemState()[id]?.[eventDetails.property]).catch(() => { });
}
else {
remote.notify(id, eventDetails, eventData).catch(() => { });
}
});
try {
await once(clusterPeerSocket, 'open');
const serializer = createRpcDuplexSerializer({
write: data => {
resetSendTimeout?.();
clusterPeerSocket.send(data);
},
});
clusterPeerSocket.on('message', data => {
resetReceiveTimeout?.();
serializer.onData(Buffer.from(data));
});
clusterPeer = new RpcPeer(clientName || 'engine.io-client', "cluster-proxy", (message, reject, serializationContext) => {
try {
resetSendTimeout?.();
serializer.sendMessage(message, reject, serializationContext);
}
catch (e) {
reject?.(e as Error);
}
});
clusterPeer.killedSafe.finally(() => {
clearTimers();
clusterPeerSocket.close();
});
serializer.setupRpcPeer(clusterPeer);
clusterPeer.tags.localPort = sourcePeerId;
peerReady = true;
// Initialize timeouts if configured
resetReceiveTimeout?.();
resetSendTimeout?.();
return clusterPeer;
}
catch (e) {
clearTimers();
console.error('failure ipc connect', e);
clusterPeerSocket.close();
throw e;
}
const fork = await threadPeer.getParam('fork');
return fork;
})();
// Only store in clusterPeers if it's not a dedicated transport
if (!connectRPCObjectOptions?.dedicatedTransport) {
clusterPeers.set(clusterObject.port, clusterPeerPromise);
}
result.catch(() => {
threadPeer.kill('fork setup failed');
worker.terminate();
});
return clusterPeerPromise;
return {
[Symbol.dispose]() {
worker.terminate();
threadPeer.kill('disposed');
},
result,
worker: {
terminate() {
worker.terminate();
},
nativeWorker: worker,
} as any as ForkWorker,
};
};
const resolveObject = async (proxyId: string, sourcePeerPort: number) => {
const sourcePeer = await clusterPeers.get(sourcePeerPort);
if (sourcePeer?.remoteWeakProxies) {
return Object.values(sourcePeer.remoteWeakProxies).find(
v => v.deref()?.__cluster?.proxyId == proxyId
)?.deref();
}
return null;
}
const connectRPCObject = async (value: any, options?: ConnectRPCObjectOptions) => {
const clusterObject: ClusterObject = value?.__cluster;
if (!clusterObject) {
return value;
}
const { port, proxyId } = clusterObject;
// check if object is already connected
const resolved = await resolveObject(proxyId, port);
if (resolved) {
return resolved;
}
try {
const clusterPeerPromise = ensureClusterPeer(clusterObject, options);
const clusterPeer = await clusterPeerPromise;
const connectRPCObject: ConnectRPCObject = await clusterPeer.getParam('connectRPCObject');
try {
const newValue = await connectRPCObject(clusterObject);
if (!newValue)
throw new Error('ipc object not found?');
// If dedicatedTransport is true, register the object for cleanup
if (options?.dedicatedTransport) {
finalizationRegistry.register(newValue, clusterPeer);
}
return newValue;
}
catch (e) {
// If we have a clusterPeer and this is a dedicated transport, kill the connection
// to prevent resource leaks when connectRPCObject fails
if (options?.dedicatedTransport) {
clusterPeer.kill('connectRPCObject failed');
}
throw e;
}
}
catch (e) {
console.error('failure ipc', e);
return value;
}
}
const ret: ScryptedClientStatic = {
userId: userDevice?.id,
serverVersion,
username,
pluginRemoteAPI: undefined,
pluginRemoteAPI,
address,
connectionType,
admin,
@@ -810,23 +757,11 @@ export async function connectScryptedClient(options: ScryptedClientOptions): Pro
disconnect() {
rpcPeer.kill('disconnect requested');
},
pluginHostAPI: undefined,
pluginHostAPI,
rpcPeer,
loginResult: {
username,
token,
directAddress,
localAddresses,
externalAddresses,
scryptedCloud,
queryToken,
authorization,
cloudAddress,
hostname,
serverId,
},
loginResult,
connectRPCObject,
fork: undefined,
fork,
connect: undefined,
}
@@ -846,3 +781,308 @@ export async function connectScryptedClient(options: ScryptedClientOptions): Pro
throw e;
}
}
function clusterSetup(address: string, connectionType: ScryptedClientConnectionType, queryToken: any, extraHeaders: { [header: string]: string }, transports: string[] | undefined, sourcePeerId: string, clientName?: string) {
const clusterPeers = new Map<number, Promise<RpcPeer>>();
const finalizationRegistry = new FinalizationRegistry((clusterPeer: RpcPeer) => {
clusterPeer.kill('object finalized');
});
const ensureClusterPeer = (clusterObject: ClusterObject, connectRPCObjectOptions?: ConnectRPCObjectOptions) => {
// If dedicatedTransport is true, don't reuse existing cluster peers
if (!connectRPCObjectOptions?.dedicatedTransport) {
let clusterPeerPromise = clusterPeers.get(clusterObject.port);
if (clusterPeerPromise)
return clusterPeerPromise;
}
const clusterPeerPromise = (async () => {
const eioPath = 'engine.io/connectRPCObject';
const eioEndpoint = new URL(eioPath, address).pathname;
const eioQueryToken = connectionType === 'http' ? undefined : queryToken;
const clusterPeerOptions: eio.SocketOptions = {
path: eioEndpoint,
query: {
cacheBust: Math.random().toString(36).substring(3, 10),
clusterObject: JSON.stringify(clusterObject),
...eioQueryToken,
},
withCredentials: true,
extraHeaders,
rejectUnauthorized: false,
transports,
};
const clusterPeerSocket = new eio.Socket(address, clusterPeerOptions);
let peerReady = false;
// Timeout handling for dedicated transports
let receiveTimeout: NodeJS.Timeout | undefined;
let sendTimeout: NodeJS.Timeout | undefined;
let clusterPeer: RpcPeer | undefined;
const clearTimers = () => {
if (receiveTimeout) {
clearTimeout(receiveTimeout);
receiveTimeout = undefined;
}
if (sendTimeout) {
clearTimeout(sendTimeout);
sendTimeout = undefined;
}
};
const resetReceiveTimeout = connectRPCObjectOptions?.dedicatedTransport?.receiveTimeout ? () => {
if (receiveTimeout) {
clearTimeout(receiveTimeout);
}
receiveTimeout = setTimeout(() => {
if (clusterPeer) {
clusterPeer.kill('receive timeout');
}
}, connectRPCObjectOptions.dedicatedTransport.receiveTimeout);
} : undefined;
const resetSendTimeout = connectRPCObjectOptions?.dedicatedTransport?.sendTimeout ? () => {
if (sendTimeout) {
clearTimeout(sendTimeout);
}
sendTimeout = setTimeout(() => {
if (clusterPeer) {
clusterPeer.kill('send timeout');
}
}, connectRPCObjectOptions.dedicatedTransport.sendTimeout);
} : undefined;
clusterPeerSocket.on('close', () => {
clusterPeer?.kill('socket closed');
// Only remove from clusterPeers if it's not a dedicated transport
if (!connectRPCObjectOptions?.dedicatedTransport) {
clusterPeers.delete(clusterObject.port);
}
if (!peerReady) {
throw new Error("peer disconnected before setup completed");
}
});
try {
await once(clusterPeerSocket, 'open');
const serializer = createRpcDuplexSerializer({
write: data => {
resetSendTimeout?.();
clusterPeerSocket.send(data);
},
});
clusterPeerSocket.on('message', data => {
resetReceiveTimeout?.();
serializer.onData(Buffer.from(data));
});
clusterPeer = new RpcPeer(clientName || 'engine.io-client', "cluster-proxy", (message, reject, serializationContext) => {
try {
resetSendTimeout?.();
serializer.sendMessage(message, reject, serializationContext);
}
catch (e) {
reject?.(e as Error);
}
});
clusterPeer.killedSafe.finally(() => {
clearTimers();
clusterPeerSocket.close();
});
serializer.setupRpcPeer(clusterPeer);
clusterPeer.tags.localPort = sourcePeerId;
peerReady = true;
// Initialize timeouts if configured
resetReceiveTimeout?.();
resetSendTimeout?.();
return clusterPeer;
}
catch (e) {
clearTimers();
console.error('failure ipc connect', e);
clusterPeerSocket.close();
throw e;
}
})();
// Only store in clusterPeers if it's not a dedicated transport
if (!connectRPCObjectOptions?.dedicatedTransport) {
clusterPeers.set(clusterObject.port, clusterPeerPromise);
}
return clusterPeerPromise;
};
const resolveObject = async (proxyId: string, sourcePeerPort: number) => {
const sourcePeer = await clusterPeers.get(sourcePeerPort);
if (sourcePeer?.remoteWeakProxies) {
return Object.values(sourcePeer.remoteWeakProxies).find(
v => v.deref()?.__cluster?.proxyId == proxyId
)?.deref();
}
return null;
}
const connectRPCObject = async (value: any, options?: ConnectRPCObjectOptions) => {
const clusterObject: ClusterObject = value?.__cluster;
if (!clusterObject) {
return value;
}
const { port, proxyId } = clusterObject;
// check if object is already connected
const resolved = await resolveObject(proxyId, port);
if (resolved) {
return resolved;
}
try {
const clusterPeerPromise = ensureClusterPeer(clusterObject, options);
const clusterPeer = await clusterPeerPromise;
const connectRPCObject: ConnectRPCObject = await clusterPeer.getParam('connectRPCObject');
try {
const newValue = await connectRPCObject(clusterObject);
if (!newValue)
throw new Error('ipc object not found?');
// If dedicatedTransport is true, register the object for cleanup
if (options?.dedicatedTransport) {
finalizationRegistry.register(newValue, clusterPeer);
}
return newValue;
}
catch (e) {
// If we have a clusterPeer and this is a dedicated transport, kill the connection
// to prevent resource leaks when connectRPCObject fails
if (options?.dedicatedTransport) {
clusterPeer.kill('connectRPCObject failed');
}
throw e;
}
}
catch (e) {
console.error('failure ipc', e);
return value;
}
}
return connectRPCObject;
}
export async function connectScryptedClientFork(forkMain: (client: ScryptedClientStatic) => Promise<any>) {
const start = Date.now();
try {
const serializer = createRpcSerializer({
sendMessageBuffer: buffer => self.postMessage(buffer),
sendMessageFinish: message => self.postMessage(JSON.stringify(message)),
});
const rpcPeer = new RpcPeer('thread', "main-client", (message, reject, serializationContext) => {
try {
serializer.sendMessage(message, reject, serializationContext);
}
catch (e) {
reject?.(e as Error);
}
});
self.addEventListener('message', event => {
if (event.data instanceof Uint8Array) {
serializer.onMessageBuffer(Buffer.from(event.data));
}
else {
serializer.onMessageFinish(JSON.parse(event.data));
}
});
serializer.setupRpcPeer(rpcPeer);
const scrypted = await attachPluginRemote(rpcPeer, undefined);
const {
serverVersion,
systemManager,
deviceManager,
endpointManager,
mediaManager,
clusterManager,
pluginHostAPI,
pluginRemoteAPI,
} = scrypted;
console.log('api attached', Date.now() - start);
mediaManager.createMediaObject = async<T extends MediaObjectCreateOptions>(data: any, mimeType: string, options: T) => {
return new MediaObject(mimeType, data, options) as any;
}
console.log('api initialized', Date.now() - start);
const {
loginResult,
username,
address,
connectionType,
extraHeaders,
transports,
clientName,
admin,
} = await rpcPeer.getParam('client') as InternalFork;
const { queryToken } = loginResult;
const userDevice = Object.keys(systemManager.getSystemState())
.map(id => systemManager.getDeviceById(id))
.find(device => device.pluginId === '@scrypted/core' && device.nativeId === `user:${username}`);
const connectRPCObject = clusterSetup(address, connectionType, queryToken, extraHeaders, transports, sourcePeerId, clientName);
type ForkType = ScryptedClientStatic['fork'];
const fork: ForkType = (forkOptions) => {
throw new Error('not implemented');
};
const ret: ScryptedClientStatic = {
userId: userDevice?.id,
serverVersion,
username,
pluginRemoteAPI,
address,
connectionType,
admin,
systemManager,
clusterManager,
deviceManager,
endpointManager,
mediaManager,
disconnect() {
rpcPeer.kill('disconnect requested');
},
pluginHostAPI,
rpcPeer,
loginResult,
connectRPCObject,
fork,
connect: undefined,
}
rpcPeer.killed.finally(() => {
self.close();
ret.onClose?.();
});
const forked = await forkMain(ret);
rpcPeer.params['fork'] = forked;
}
catch (e) {
self.close();
throw e;
}
}

View File

@@ -1,12 +1,12 @@
{
"name": "@scrypted/amcrest",
"version": "0.0.166",
"version": "0.0.168",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "@scrypted/amcrest",
"version": "0.0.166",
"version": "0.0.168",
"license": "Apache",
"dependencies": {
"@scrypted/common": "file:../../common",
@@ -16,7 +16,7 @@
},
"devDependencies": {
"@types/content-type": "^1.1.8",
"@types/node": "^20.11.30",
"@types/node": "^22.19.3",
"@types/xml2js": "^0.4.14"
}
},
@@ -26,39 +26,42 @@
"license": "ISC",
"dependencies": {
"@scrypted/sdk": "file:../sdk",
"@scrypted/types": "^0.5.27",
"http-auth-utils": "^5.0.1",
"typescript": "^5.5.3"
},
"devDependencies": {
"@types/node": "^20.11.0",
"@types/node": "^20.19.11",
"monaco-editor": "^0.50.0",
"ts-node": "^10.9.2"
}
},
"../../sdk": {
"name": "@scrypted/sdk",
"version": "0.3.114",
"version": "0.5.55",
"license": "ISC",
"dependencies": {
"@babel/preset-typescript": "^7.26.0",
"@rollup/plugin-commonjs": "^28.0.1",
"@babel/preset-typescript": "^7.27.1",
"@rollup/plugin-commonjs": "^28.0.9",
"@rollup/plugin-json": "^6.1.0",
"@rollup/plugin-node-resolve": "^15.3.0",
"@rollup/plugin-typescript": "^12.1.1",
"@rollup/plugin-node-resolve": "^16.0.1",
"@rollup/plugin-terser": "^0.4.4",
"@rollup/plugin-typescript": "^12.3.0",
"@rollup/plugin-virtual": "^3.0.2",
"adm-zip": "^0.5.16",
"axios": "^1.7.8",
"babel-loader": "^9.2.1",
"axios": "^1.10.0",
"babel-loader": "^10.0.0",
"babel-plugin-const-enum": "^1.2.0",
"ncp": "^2.0.0",
"openai": "^6.1.0",
"raw-loader": "^4.0.2",
"rimraf": "^6.0.1",
"rollup": "^4.27.4",
"rollup": "^4.52.5",
"tmp": "^0.2.3",
"ts-loader": "^9.5.1",
"ts-loader": "^9.5.4",
"tslib": "^2.8.1",
"typescript": "^5.6.3",
"webpack": "^5.96.1",
"typescript": "^5.9.3",
"webpack": "^5.99.9",
"webpack-bundle-analyzer": "^4.10.2"
},
"bin": {
@@ -71,9 +74,9 @@
"scrypted-webpack": "bin/scrypted-webpack.js"
},
"devDependencies": {
"@types/node": "^22.10.1",
"@types/node": "^24.9.2",
"ts-node": "^10.9.2",
"typedoc": "^0.26.11"
"typedoc": "^0.28.14"
}
},
"node_modules/@scrypted/common": {
@@ -91,12 +94,13 @@
"dev": true
},
"node_modules/@types/node": {
"version": "20.11.30",
"resolved": "https://registry.npmjs.org/@types/node/-/node-20.11.30.tgz",
"integrity": "sha512-dHM6ZxwlmuZaRmUPfv1p+KrdD1Dci04FbdEm/9wEMouFqxYoFl5aMkt0VMAUtYRQDyYvD41WJLukhq/ha3YuTw==",
"version": "22.19.3",
"resolved": "https://registry.npmjs.org/@types/node/-/node-22.19.3.tgz",
"integrity": "sha512-1N9SBnWYOJTrNZCdh/yJE+t910Y128BoyY+zBLWhL3r0TYzlTmFdXrPwHL9DyFZmlEXNQQolTZh3KHV31QDhyA==",
"dev": true,
"license": "MIT",
"dependencies": {
"undici-types": "~5.26.4"
"undici-types": "~6.21.0"
}
},
"node_modules/@types/xml2js": {
@@ -124,10 +128,11 @@
"license": "ISC"
},
"node_modules/undici-types": {
"version": "5.26.5",
"resolved": "https://registry.npmjs.org/undici-types/-/undici-types-5.26.5.tgz",
"integrity": "sha512-JlCMO+ehdEIKqlFxk6IfVoAUVmgz7cU7zD/h9XZ0qzeosSHmUJVOzSQvvYSYWXkFXC+IfLKSIffhv0sVZup6pA==",
"dev": true
"version": "6.21.0",
"resolved": "https://registry.npmjs.org/undici-types/-/undici-types-6.21.0.tgz",
"integrity": "sha512-iwDZqg0QAGrg9Rav5H4n0M64c3mkR59cJ6wQp+7C4nI0gsmExaedaYLNO44eT4AtBBwjbTiGPMlt2Md0T9H9JQ==",
"dev": true,
"license": "MIT"
},
"node_modules/xml2js": {
"version": "0.6.2",

View File

@@ -1,6 +1,6 @@
{
"name": "@scrypted/amcrest",
"version": "0.0.166",
"version": "0.0.168",
"description": "Amcrest Plugin for Scrypted",
"author": "Scrypted",
"license": "Apache",
@@ -44,7 +44,7 @@
},
"devDependencies": {
"@types/content-type": "^1.1.8",
"@types/node": "^20.11.30",
"@types/node": "^22.19.3",
"@types/xml2js": "^0.4.14"
}
}

View File

@@ -268,8 +268,8 @@ export class AmcrestCameraClient {
continue;
if (ignore === boundaryEnd)
continue;
// dahua bugs out and sends this.
if (ignore === 'HTTP/1.1 200 OK') {
// dahua bugs out and sends this (handle both HTTP/1.0 and HTTP/1.1).
if (ignore === 'HTTP/1.1 200 OK' || ignore === 'HTTP/1.0 200 OK') {
const message = await readAmcrestMessage(stream);
this.console.log('ignoring dahua http message', message);
message.unshift('');

View File

@@ -1,12 +1,12 @@
{
"name": "@scrypted/cloud",
"version": "0.2.49",
"version": "0.2.51",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "@scrypted/cloud",
"version": "0.2.49",
"version": "0.2.51",
"dependencies": {
"@eneris/push-receiver": "^4.3.0",
"@scrypted/common": "file:../../common",

View File

@@ -52,5 +52,5 @@
"@types/node": "^22.10.1",
"ts-node": "^10.9.2"
},
"version": "0.2.49"
"version": "0.2.51"
}

View File

@@ -240,8 +240,10 @@ class ScryptedCloud extends ScryptedDeviceBase implements OauthClient, Settings,
upnpClient = upnp.createClient();
upnpStatus = 'Starting';
randomBytes = crypto.randomBytes(16).toString('base64');
healthCheckToken = crypto.randomBytes(16).toString('hex');
reverseConnections = new Set<Duplex>();
cloudflaredLoginController?: AbortController;
healthCheckInterval?: NodeJS.Timeout;
get portForwardingDisabled() {
return this.storageSettings.values.forwardingMode === 'Disabled' || this.storageSettings.values.forwardingMode === 'Default';
@@ -852,6 +854,12 @@ class ScryptedCloud extends ScryptedDeviceBase implements OauthClient, Settings,
}
res.end();
}
else if (url.pathname === '/_punch/cloudflared_callback') {
res.writeHead(200);
res.write(this.healthCheckToken);
res.end();
return;
}
else if (url.pathname === '/web/') {
const validDomain = this.getSSLHostname();
if (validDomain) {
@@ -1122,6 +1130,9 @@ class ScryptedCloud extends ScryptedDeviceBase implements OauthClient, Settings,
maxDelay: 300000,
});
// Start health check after cloudflared is successfully started
this.startHealthCheck();
await once(this.cloudflared.child, 'exit').catch(() => { });
// the successfully started cloudflared process may exit at some point; loop and allow it to restart.
this.console.error('cloudflared exited');
@@ -1131,6 +1142,8 @@ class ScryptedCloud extends ScryptedDeviceBase implements OauthClient, Settings,
this.console.error('cloudflared error', e);
}
finally {
clearInterval(this.healthCheckInterval);
this.healthCheckInterval = undefined;
this.cloudflared = undefined;
this.cloudflareTunnel = undefined;
this.updateExternalAddresses();
@@ -1138,6 +1151,59 @@ class ScryptedCloud extends ScryptedDeviceBase implements OauthClient, Settings,
}
}
async startHealthCheck() {
// Clear any existing health check interval
if (this.healthCheckInterval) {
clearInterval(this.healthCheckInterval);
}
// Local failure counter - only accessible within this method
let failureCount = 0;
const maxFailuresBeforeRestart = 3;
const alertTitle = 'Cloudflared health check failed 3 times consecutively. Restarting cloudflared process.';
const check = async () => {
// Only perform health check if cloudflare is enabled and we have a tunnel URL
if (!this.storageSettings.values.cloudflareEnabled || !this.cloudflareTunnel) {
return;
}
try {
const healthCheckUrl = `${this.cloudflareTunnel}/_punch/cloudflared_callback`;
this.console.log(`Performing health check: ${healthCheckUrl}`);
const response = await httpFetch({
url: healthCheckUrl,
responseType: 'text',
timeout: 30000, // 30 second timeout
});
this.log.clearAlert(alertTitle);
if (response.body !== this.healthCheckToken) {
throw new Error(`Health check failed: Expected token ${this.healthCheckToken}, got ${response.body}`);
}
failureCount = 0;
this.console.log('Cloudflared health check passed');
} catch (error) {
failureCount++;
this.console.error(`Cloudflared health check failed (${failureCount}/${maxFailuresBeforeRestart}):`, error);
if (failureCount >= maxFailuresBeforeRestart) {
this.console.warn('3 consecutive health check failures detected. Restarting cloudflared process.');
this.log.a(alertTitle);
this.cloudflared?.child?.kill();
failureCount = 0;
}
}
};
// Start a new health check interval (every 2 minutes)
this.healthCheckInterval = setInterval(check, 2 * 60 * 1000); // Run every 2 minutes
}
get serverIdentifier() {
const serverIdentifier = `${this.storageSettings.values.registrationSecret}@${this.storageSettings.values.serverId}`;
return serverIdentifier;

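The health check added above is a token round trip: the plugin serves a random per-process token at /_punch/cloudflared_callback and periodically fetches it back through the tunnel, so a stale or half-dead tunnel fails the comparison even when the HTTP request itself succeeds. A minimal standalone sketch of the same pattern (names and port are hypothetical; assumes Node 18+ global fetch):

import crypto from 'crypto';
import http from 'http';

const token = crypto.randomBytes(16).toString('hex');

// Serve the token: reaching this handler through the tunnel proves
// end-to-end connectivity to this specific process, not just to some listener.
http.createServer((req, res) => res.end(token)).listen(8080);

async function tunnelHealthy(tunnelUrl: string): Promise<boolean> {
    const res = await fetch(`${tunnelUrl}/_punch/cloudflared_callback`, {
        signal: AbortSignal.timeout(30_000),
    });
    // A 200 alone is not enough; the body must echo this process's token.
    return (await res.text()) === token;
}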
View File

@@ -1,12 +1,12 @@
{
"name": "@scrypted/core",
"version": "0.3.135",
"version": "0.3.146",
"lockfileVersion": 2,
"requires": true,
"packages": {
"": {
"name": "@scrypted/core",
"version": "0.3.135",
"version": "0.3.146",
"license": "Apache-2.0",
"dependencies": {
"@scrypted/common": "file:../../common",

View File

@@ -1,6 +1,6 @@
{
"name": "@scrypted/core",
"version": "0.3.135",
"version": "0.3.146",
"description": "Scrypted Core plugin. Provides the UI, websocket, and engine.io APIs.",
"author": "Scrypted",
"license": "Apache-2.0",

View File

@@ -12,7 +12,7 @@ import { AutomationCore, AutomationCoreNativeId } from './automations-core';
import { ClusterCore, ClusterCoreNativeId } from './cluster';
import { LauncherMixin } from './launcher-mixin';
import { MediaCore } from './media-core';
import { checkLegacyLxc, checkLxc } from './platform/lxc';
import { checkLegacyLxc, checkLxc, checkLxcVersionUpdateNeeded } from './platform/lxc';
import { ConsoleServiceNativeId, PluginSocketService, ReplServiceNativeId } from './plugin-socket-service';
import { ScriptCore, ScriptCoreNativeId, newScript } from './script-core';
import { TerminalService, TerminalServiceNativeId, newTerminalService } from './terminal-service';
@@ -64,7 +64,10 @@ class ScryptedCore extends ScryptedDeviceBase implements HttpRequestHandler, Dev
'Default',
'latest',
'beta',
`v${sdk.serverVersion}-jammy-full`,
'intel',
'amd',
'nvidia',
`v${sdk.serverVersion}-noble-full`,
],
combobox: true,
onPut: (ov, nv) => {
@@ -212,9 +215,14 @@ class ScryptedCore extends ScryptedDeviceBase implements HttpRequestHandler, Dev
);
})();
// check on workers once an hour.
// check on workers immediately and once an hour.
this.updateWorkers();
setInterval(() => this.updateWorkers(), 1000 * 60 * 60);
setInterval(() => this.updateWorkers(), 60 * 1000 * 60);
// check on worker images once an hour.
// checking immediately is problematic as a failed update may cause a restart loop on startup.
// images are also pruned 1 minute after startup, so avoid that.
setInterval(() => this.updateWorkerImages(), 60 * 1000 * 60);
}
async updateWorkers() {
@@ -239,6 +247,38 @@ class ScryptedCore extends ScryptedDeviceBase implements HttpRequestHandler, Dev
}
}
async updateWorkerImages() {
const workers = await sdk.clusterManager?.getClusterWorkers();
if (!workers)
return;
for (const [id, worker] of Object.entries(workers)) {
const forked = sdk.fork<ReturnType<typeof fork>>({
clusterWorkerId: id,
runtime: 'node',
});
(async () => {
try {
const result = await forked.result;
if (!await result.checkLxcVersionUpdateNeeded()) {
return;
}
// restart the worker to pick up the new image.
const clusterFork = await sdk.systemManager.getComponent('cluster-fork');
const serviceControl = await clusterFork.getServiceControl(worker.id);
await serviceControl.restart().catch(() => { });
}
catch (e) {
}
finally {
await sleep(1000);
forked.worker.terminate();
}
})();
}
}
async getSettings(): Promise<Setting[]> {
try {
const service = await sdk.systemManager.getComponent('addresses');
@@ -342,7 +382,6 @@ class ScryptedCore extends ScryptedDeviceBase implements HttpRequestHandler, Dev
const dockerCompose = yaml.parseDocument(readFileAsString('/root/.scrypted/docker-compose.yml'));
// @ts-ignore
dockerCompose.contents.get('services').get('scrypted').set('image', `ghcr.io/koush/scrypted${releaseChannel}`);
yaml.stringify(dockerCompose);
writeFileSync('/root/.scrypted/docker-compose.yml', yaml.stringify(dockerCompose));
this.setPullImage();
@@ -359,6 +398,7 @@ export async function fork() {
tsCompile,
newScript,
newTerminalService,
checkLxcVersionUpdateNeeded,
checkLxc: async () => {
try {
// console.warn('Checking for LXC installation...');

View File

@@ -1,5 +1,9 @@
import fs from 'fs';
import { Deferred } from '@scrypted/common/src/deferred';
import { readFileAsString } from '@scrypted/common/src/eval/scrypted-eval';
import sdk from '@scrypted/sdk';
import fs, { writeFileSync } from 'fs';
import http from 'http';
import yaml from 'yaml';
export const SCRYPTED_INSTALL_ENVIRONMENT_LXC = 'lxc';
export const SCRYPTED_INSTALL_ENVIRONMENT_LXC_DOCKER = 'lxc-docker';
@@ -18,6 +22,119 @@ export async function checkLxc() {
if (process.env.SCRYPTED_INSTALL_ENVIRONMENT !== SCRYPTED_INSTALL_ENVIRONMENT_LXC_DOCKER)
return;
await checkLxcCompose();
await checkLxcScript();
}
async function dockerRequest(options: http.RequestOptions, body?: string) {
const deferred = new Deferred<string>();
const req = http.request({
socketPath: '/var/run/docker.sock',
method: options.method,
path: options.path,
headers: {
'Host': 'localhost',
...options.headers
}
});
req.on('response', (res) => {
let data = '';
res.on('data', (chunk) => {
data += chunk;
});
res.on('end', () => {
deferred.resolve(data);
});
});
req.on('error', (err) => {
deferred.reject(err);
});
if (body) {
req.write(body);
}
req.end();
return deferred.promise;
}
async function dockerPullScryptedTag(tag: string) {
return dockerRequest({
method: 'POST',
path: `/v1.41/images/create?fromImage=ghcr.io%2Fkoush%2Fscrypted&tag=${tag}`,
});
}
async function dockerImageLsScryptedTag(tag: string) {
// List all images and find the specific one
const data = await dockerRequest({
method: 'GET',
path: '/v1.41/images/json'
});
const images = JSON.parse(data);
// Filter for the specific scrypted image tag
const targetImage = images.find(image => {
return image.RepoTags && image.RepoTags.some(t =>
t === `ghcr.io/koush/scrypted:${tag}`
);
});
if (!targetImage) {
throw new Error('Image not found');
}
return targetImage.Id;
}
async function dockerGetScryptedContainerImageId() {
// List running containers filtered by name
const data = await dockerRequest({
method: 'GET',
path: '/v1.41/containers/json?filters={"name":["scrypted"],"status":["running"]}'
});
const containers = JSON.parse(data);
if (!containers.length)
throw new Error('No running container named "scrypted" found');
const container = containers[0];
return container.ImageID;
}
export async function checkLxcVersionUpdateNeeded() {
if (process.env.SCRYPTED_INSTALL_ENVIRONMENT !== SCRYPTED_INSTALL_ENVIRONMENT_LXC_DOCKER)
return;
const dockerCompose = yaml.parseDocument(readFileAsString('/root/.scrypted/docker-compose.yml'));
// @ts-ignore
const image: string = dockerCompose.contents.get('services').get('scrypted').get('image');
const label = image.split(':')[1] || 'latest';
await dockerPullScryptedTag(label);
const imageId = await dockerImageLsScryptedTag(label);
const containerImageId = await dockerGetScryptedContainerImageId();
console.warn('LXC Scrypted latest image ID:', imageId);
console.warn('LXC Scrypted running image ID:', containerImageId);
return containerImageId !== imageId;
}
async function checkLxcCompose() {
// the lxc-docker used watchtower for automatic updates but watchtower started crashing in the lxc environment
// after a docker update.
// watchtower was removed from the lxc as a result.
// however existing installations may still have watchtower in their docker-compose.yml and need it removed.
const dockerCompose = yaml.parseDocument(readFileAsString('/root/.scrypted/docker-compose.yml'));
// @ts-ignore
const watchtower = dockerCompose.contents.get('services').get('watchtower');
if (!watchtower || watchtower.get('profiles'))
return;
watchtower.set('profiles', ['disabled']);
writeFileSync('/root/.scrypted/docker-compose.yml', yaml.stringify(dockerCompose));
}
async function checkLxcScript() {
const foundDockerComposeSh = await fs.promises.readFile(DOCKER_COMPOSE_SH_PATH, 'utf8');
const dockerComposeSh = await fs.promises.readFile(LXC_DOCKER_COMPOSE_SH_PATH, 'utf8');
@@ -34,4 +151,4 @@ export async function checkLxc() {
// console.warn(foundDockerComposeSh);
await fs.promises.copyFile(LXC_DOCKER_COMPOSE_SH_PATH, DOCKER_COMPOSE_SH_PATH);
await fs.promises.chmod(DOCKER_COMPOSE_SH_PATH, 0o755);
}
}

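dockerRequest above talks to the Docker Engine API directly over the unix socket, which is why no docker CLI is needed inside the container. A hedged usage sketch (run inside an async context; assumes the daemon socket is mounted and Engine API v1.41, as the helpers above do):

// Ping the daemon, then list images; both are plain Engine API endpoints.
const pong = await dockerRequest({ method: 'GET', path: '/_ping' });
console.log('daemon reachable:', pong === 'OK');

const images = JSON.parse(await dockerRequest({ method: 'GET', path: '/v1.41/images/json' }));
console.log('images present:', images.length);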
View File

@@ -1,7 +1,7 @@
import sdk, { ClusterForkInterface, ClusterForkInterfaceOptions, ScryptedDeviceBase, ScryptedInterface, ScryptedNativeId, StreamService, TTYSettings } from "@scrypted/sdk";
import type { IPty, spawn as ptySpawn } from 'node-pty';
import { createAsyncQueue } from '@scrypted/common/src/async-queue'
import { createAsyncQueue } from '@scrypted/common/src/async-queue';
import sdk, { ClusterForkInterface, ClusterForkInterfaceOptions, ScryptedDeviceBase, ScryptedInterface, ScryptedNativeId, StreamService, TTY, TTYSettings } from "@scrypted/sdk";
import { ChildProcess, spawn as childSpawn } from "child_process";
import type { IPty, spawn as ptySpawn } from 'node-pty';
import path from 'path';
export const TerminalServiceNativeId = 'terminalservice';
@@ -19,12 +19,24 @@ function toSpawnPathEnv(paths: string[]): string {
class InteractiveTerminal {
cp: IPty
constructor(cmd: string[], paths: string[], spawn: typeof ptySpawn) {
constructor(cmd: string[], paths: string[], spawn: typeof ptySpawn, cwd?: string) {
const spawnPath = toSpawnPathEnv(paths);
if (cmd?.length) {
this.cp = spawn(cmd[0], cmd.slice(1), { env: { ...process.env, PATH: spawnPath } });
this.cp = spawn(cmd[0], cmd.slice(1), {
env: {
...process.env,
PATH: spawnPath,
},
cwd,
});
} else {
this.cp = spawn(process.env.SHELL as string, [], { env: { ...process.env, PATH: spawnPath } });
this.cp = spawn(process.env.SHELL as string, [], {
env: {
...process.env,
PATH: spawnPath,
},
cwd,
});
}
}
@@ -111,7 +123,7 @@ class NoninteractiveTerminal {
}
export class TerminalService extends ScryptedDeviceBase implements StreamService<Buffer | string, Buffer>, ClusterForkInterface {
export class TerminalService extends ScryptedDeviceBase implements StreamService<Buffer | string, Buffer>, ClusterForkInterface, TTY {
private forks: { [clusterWorkerId: string]: TerminalService } = {};
private forkClients = 0;
@@ -186,7 +198,7 @@ export class TerminalService extends ScryptedDeviceBase implements StreamService
async connectStream(input: AsyncGenerator<Buffer | string, void>, options?: any): Promise<AsyncGenerator<Buffer, void>> {
let cp: InteractiveTerminal | NoninteractiveTerminal = null;
const queue = createAsyncQueue<Buffer>();
const extraPaths = await this.getExtraPaths();
const extraPaths = [...options?.env?.PATH?.split(path.delimiter) || [], ...await this.getExtraPaths()];
if (this.isFork) {
this.forkClients++;
@@ -259,7 +271,7 @@ export class TerminalService extends ScryptedDeviceBase implements StreamService
let spawn: typeof ptySpawn;
try {
spawn = require('@scrypted/node-pty').spawn as typeof ptySpawn;
cp = new InteractiveTerminal(cmd, extraPaths, spawn);
cp = new InteractiveTerminal(cmd, extraPaths, spawn, options?.cwd);
}
catch (e) {
this.console.error('Error starting pty', e);

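The connectStream change above merges any PATH supplied by the caller with the service's extra paths. Splitting with path.delimiter keeps that portable, since PATH entries are ':'-separated on POSIX and ';'-separated on Windows. A small sketch (the extra directory is hypothetical):

import path from 'path';

// ':' on POSIX, ';' on Windows; path.delimiter keeps split and re-join portable.
const incoming = process.env.PATH?.split(path.delimiter) ?? [];
const merged = [...incoming, '/opt/scrypted/bin']; // hypothetical extra directory
const PATH = merged.join(path.delimiter);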
View File

@@ -1,12 +1,12 @@
{
"name": "@scrypted/coreml",
"version": "0.1.83",
"version": "0.1.89",
"lockfileVersion": 2,
"requires": true,
"packages": {
"": {
"name": "@scrypted/coreml",
"version": "0.1.83",
"version": "0.1.89",
"devDependencies": {
"@scrypted/sdk": "file:../../sdk"
}

View File

@@ -50,5 +50,5 @@
"devDependencies": {
"@scrypted/sdk": "file:../../sdk"
},
"version": "0.1.83"
"version": "0.1.89"
}

View File

@@ -16,6 +16,7 @@ from common import yolo
from coreml.face_recognition import CoreMLFaceRecognition
from coreml.custom_detection import CoreMLCustomDetection
from coreml.clip_embedding import CoreMLClipEmbedding
from coreml.segment import CoreMLSegmentation
try:
from coreml.text_recognition import CoreMLTextRecognition
@@ -28,18 +29,11 @@ predictExecutor = concurrent.futures.ThreadPoolExecutor(1, "CoreML-Predict")
availableModels = [
"Default",
"scrypted_yolov10m_320",
"scrypted_yolov10n_320",
"scrypted_yolo_nas_s_320",
"scrypted_yolov9e_320",
"scrypted_yolov9c_320",
"scrypted_yolov9s_320",
"scrypted_yolov9t_320",
"scrypted_yolov6n_320",
"scrypted_yolov6s_320",
"scrypted_yolov8n_320",
"ssdlite_mobilenet_v2",
"yolov4-tiny",
"scrypted_yolov9t_relu_test",
"scrypted_yolov9c_relu",
"scrypted_yolov9m_relu",
"scrypted_yolov9s_relu",
"scrypted_yolov9t_relu",
]
@@ -79,60 +73,24 @@ class CoreMLPlugin(
def __init__(self, nativeId: str | None = None, forked: bool = False):
super().__init__(nativeId=nativeId, forked=forked)
# this used to work but a bug in macos is causing recompilation of the coreml models every time it restarts
# and the cache is not reused and also not cleared until the whole system reboots.
self.periodic_restart = False
self.custom_models = {}
model = self.storage.getItem("model") or "Default"
if model == "Default" or model not in availableModels:
if model != "Default":
self.storage.setItem("model", "Default")
model = "scrypted_yolov9c_320"
self.yolo = "yolo" in model
self.scrypted_yolov10n = "scrypted_yolov10" in model
self.scrypted_yolo_nas = "scrypted_yolo_nas" in model
self.scrypted_yolo = "scrypted_yolo" in model
self.scrypted_model = "scrypted" in model
model_version = "v8"
mlmodel = "model" if self.scrypted_yolo else model
model = "scrypted_yolov9c_relu"
self.modelName = model
print(f"model: {model}")
if not self.yolo:
# todo convert these to mlpackage
modelFile = self.downloadFile(
f"https://github.com/koush/coreml-models/raw/main/{model}/{mlmodel}.mlmodel",
f"{model}.mlmodel",
)
else:
if self.scrypted_yolo:
files = [
f"{model}/{model}.mlpackage/Data/com.apple.CoreML/weights/weight.bin",
f"{model}/{model}.mlpackage/Data/com.apple.CoreML/{mlmodel}.mlmodel",
f"{model}/{model}.mlpackage/Manifest.json",
]
for f in files:
p = self.downloadFile(
f"https://github.com/koush/coreml-models/raw/main/{f}",
f"{model_version}/{f}",
)
modelFile = os.path.dirname(p)
else:
files = [
f"{model}/{model}.mlpackage/Data/com.apple.CoreML/FeatureDescriptions.json",
f"{model}/{model}.mlpackage/Data/com.apple.CoreML/Metadata.json",
f"{model}/{model}.mlpackage/Data/com.apple.CoreML/weights/weight.bin",
f"{model}/{model}.mlpackage/Data/com.apple.CoreML/{mlmodel}.mlmodel",
f"{model}/{model}.mlpackage/Manifest.json",
]
for f in files:
p = self.downloadFile(
f"https://github.com/koush/coreml-models/raw/main/{f}",
f"{model_version}/{f}",
)
modelFile = os.path.dirname(p)
model_path = self.downloadHuggingFaceModelLocalFallback(model)
modelFile = os.path.join(model_path, f"{model}.mlpackage")
print(model_path, modelFile)
self.model = ct.models.MLModel(modelFile)
self.modelspec = self.model.get_spec()
@@ -148,6 +106,7 @@ class CoreMLPlugin(
self.faceDevice = None
self.textDevice = None
self.clipDevice = None
self.segmentDevice = None
if not self.forked:
asyncio.ensure_future(self.prepareRecognitionModels(), loop=self.loop)
@@ -192,6 +151,18 @@ class CoreMLPlugin(
"name": "CoreML CLIP Embedding",
}
)
await scrypted_sdk.deviceManager.onDeviceDiscovered(
{
"nativeId": "segment",
"type": scrypted_sdk.ScryptedDeviceType.Builtin.value,
"interfaces": [
scrypted_sdk.ScryptedInterface.ClusterForkInterface.value,
scrypted_sdk.ScryptedInterface.ObjectDetection.value,
],
"name": "CoreML Segmentation",
}
)
except:
pass
@@ -205,6 +176,9 @@ class CoreMLPlugin(
elif nativeId == "clipembedding":
self.clipDevice = self.clipDevice or CoreMLClipEmbedding(self, nativeId)
return self.clipDevice
elif nativeId == "segment":
self.segmentDevice = self.segmentDevice or CoreMLSegmentation(self, nativeId)
return self.segmentDevice
custom_model = self.custom_models.get(nativeId, None)
if custom_model:
return custom_model
@@ -244,94 +218,8 @@ class CoreMLPlugin(
return out_dicts
async def detect_once(self, input: Image.Image, settings: Any, src_size, cvss):
objs = []
# run in executor if this is the plugin loop
if self.yolo:
out_dict = await self.queue_batch({self.input_name: input})
if self.scrypted_yolov10n:
results = list(out_dict.values())[0][0]
objs = yolo.parse_yolov10(results)
ret = self.create_detection_result(objs, src_size, cvss)
return ret
if self.scrypted_yolo_nas:
predictions = list(out_dict.values())
objs = yolo.parse_yolo_nas(predictions)
ret = self.create_detection_result(objs, src_size, cvss)
return ret
if self.scrypted_yolo:
results = list(out_dict.values())[0][0]
objs = yolo.parse_yolov9(results)
ret = self.create_detection_result(objs, src_size, cvss)
return ret
out_blob = out_dict["Identity"]
objects = yolo.parse_yolo_region(
out_blob,
(input.width, input.height),
(81, 82, 135, 169, 344, 319),
# (23,27, 37,58, 81,82),
False,
)
for r in objects:
obj = Prediction(
r["classId"],
r["confidence"],
Rectangle(
r["xmin"],
r["ymin"],
r["xmax"],
r["ymax"],
),
)
objs.append(obj)
# what about output[1]?
# 26 26
# objects = yolo.parse_yolo_region(out_blob, (input.width, input.height), (23,27, 37,58, 81,82))
ret = self.create_detection_result(objs, src_size, cvss)
return ret
out_dict = await asyncio.get_event_loop().run_in_executor(
predictExecutor,
lambda: self.model.predict(
{"image": input, "confidenceThreshold": self.minThreshold}
),
)
coordinatesList = out_dict["coordinates"]
for index, confidenceList in enumerate(out_dict["confidence"]):
values = confidenceList
maxConfidenceIndex = max(range(len(values)), key=values.__getitem__)
maxConfidence = confidenceList[maxConfidenceIndex]
if maxConfidence < self.minThreshold:
continue
coordinates = coordinatesList[index]
def torelative(value: float):
return value * self.inputheight
x = torelative(coordinates[0])
y = torelative(coordinates[1])
w = torelative(coordinates[2])
h = torelative(coordinates[3])
w2 = w / 2
h2 = h / 2
l = x - w2
t = y - h2
obj = Prediction(
maxConfidenceIndex, maxConfidence, Rectangle(l, t, l + w, t + h)
)
objs.append(obj)
out_dict = await self.queue_batch({self.input_name: input})
results = list(out_dict.values())[0][0]
objs = yolo.parse_yolov9(results)
ret = self.create_detection_result(objs, src_size, cvss)
return ret

View File

@@ -6,6 +6,7 @@ import os
import asyncio
import coremltools as ct
import numpy as np
# import Quartz
# from Foundation import NSData, NSMakeSize
@@ -25,6 +26,7 @@ def cosine_similarity(vector_a, vector_b):
similarity = dot_product / (norm_a * norm_b)
return similarity
class CoreMLFaceRecognition(FaceRecognizeDetection):
def __init__(self, plugin, nativeId: str):
super().__init__(plugin, nativeId)
@@ -32,26 +34,12 @@ class CoreMLFaceRecognition(FaceRecognizeDetection):
self.recogExecutor = concurrent.futures.ThreadPoolExecutor(1, "recog-face")
def downloadModel(self, model: str):
model_version = "v7"
mlmodel = "model"
files = [
f"{model}/{model}.mlpackage/Data/com.apple.CoreML/weights/weight.bin",
f"{model}/{model}.mlpackage/Data/com.apple.CoreML/{mlmodel}.mlmodel",
f"{model}/{model}.mlpackage/Manifest.json",
]
for f in files:
p = self.downloadFile(
f"https://github.com/koush/coreml-models/raw/main/{f}",
f"{model_version}/{f}",
)
modelFile = os.path.dirname(p)
model_path = self.downloadHuggingFaceModelLocalFallback(model)
modelFile = os.path.join(model_path, f"{model}.mlpackage")
model = ct.models.MLModel(modelFile)
inputName = model.get_spec().description.input[0].name
return model, inputName
async def predictDetectModel(self, input: Image.Image):
def predict():
model, inputName = self.detectModel
@@ -70,11 +58,12 @@ class CoreMLFaceRecognition(FaceRecognizeDetection):
out_dict = model.predict({inputName: input})
results = list(out_dict.values())[0][0]
return results
results = await asyncio.get_event_loop().run_in_executor(
self.recogExecutor, lambda: predict()
)
return results
# def predictVision(self, input: Image.Image) -> asyncio.Future[list[Prediction]]:
# buffer = input.tobytes()
# myData = NSData.alloc().initWithBytes_length_(buffer, len(buffer))

View File

@@ -0,0 +1,48 @@
from __future__ import annotations
import asyncio
import os
import traceback
import numpy as np
import coremltools as ct
from common import async_infer
from common import yolov9_seg
from predict.segment import Segmentation
prepareExecutor, predictExecutor = async_infer.create_executors("Segment")
class CoreMLSegmentation(Segmentation):
def __init__(self, plugin, nativeId: str):
super().__init__(plugin=plugin, nativeId=nativeId)
def loadModel(self, name):
model_path = self.plugin.downloadHuggingFaceModelLocalFallback(name)
modelFile = os.path.join(model_path, f"{name}.mlpackage")
model = ct.models.MLModel(modelFile)
return model
async def detect_once(self, input, settings, src_size, cvss):
def predict():
input_name = self.model.get_spec().description.input[0].name
out_dict = self.model.predict({input_name: input})
outputs = list(out_dict.values())
pred = outputs[0]
proto = outputs[1]
pred = yolov9_seg.non_max_suppression(pred, nm=32)
return self.process_segmentation_output(pred, proto)
try:
objs = await asyncio.get_event_loop().run_in_executor(
predictExecutor, lambda: predict()
)
except:
traceback.print_exc()
raise
ret = self.create_detection_result(objs, src_size, cvss)
return ret

View File

@@ -20,22 +20,8 @@ class CoreMLTextRecognition(TextRecognition):
self.recogExecutor = concurrent.futures.ThreadPoolExecutor(1, "recog-text")
def downloadModel(self, model: str):
model_version = "v8"
mlmodel = "model"
files = [
f"{model}/{model}.mlpackage/Data/com.apple.CoreML/weights/weight.bin",
f"{model}/{model}.mlpackage/Data/com.apple.CoreML/{mlmodel}.mlmodel",
f"{model}/{model}.mlpackage/Manifest.json",
]
for f in files:
p = self.downloadFile(
f"https://github.com/koush/coreml-models/raw/main/{f}",
f"{model_version}/{f}",
)
modelFile = os.path.dirname(p)
model_path = self.downloadHuggingFaceModelLocalFallback(model)
modelFile = os.path.join(model_path, f"{model}.mlpackage")
model = ct.models.MLModel(modelFile)
inputName = model.get_spec().description.input[0].name
return model, inputName

View File

@@ -1,12 +1,12 @@
{
"name": "@scrypted/diagnostics",
"version": "0.0.21",
"version": "0.0.29",
"lockfileVersion": 2,
"requires": true,
"packages": {
"": {
"name": "@scrypted/diagnostics",
"version": "0.0.21",
"version": "0.0.29",
"dependencies": {
"@scrypted/common": "file:../../common",
"@scrypted/sdk": "file:../../sdk",

View File

@@ -1,6 +1,6 @@
{
"name": "@scrypted/diagnostics",
"version": "0.0.21",
"version": "0.0.29",
"scripts": {
"scrypted-setup-project": "scrypted-setup-project",
"prescrypted-setup-project": "scrypted-package-json",

View File

@@ -1,16 +1,37 @@
import { Deferred } from '@scrypted/common/src/deferred';
import { safeKillFFmpeg } from '@scrypted/common/src/media-helpers';
import sdk, { Camera, FFmpegInput, Image, MediaObject, MediaStreamDestination, MotionSensor, Notifier, ObjectDetection, ScryptedDevice, ScryptedDeviceBase, ScryptedDeviceType, ScryptedInterface, ScryptedMimeTypes, Setting, Settings, VideoCamera } from '@scrypted/sdk';
import sdk, { Camera, FFmpegInput, Image, MediaObject, MediaStreamDestination, MotionSensor, Notifier, ObjectDetection, ScryptedDevice, ScryptedDeviceBase, ScryptedDeviceType, ScryptedInterface, ScryptedMimeTypes, Setting, Settings, VideoCamera, TextEmbedding, ImageEmbedding } from '@scrypted/sdk';
import { StorageSettings } from '@scrypted/sdk/storage-settings';
import child_process from 'child_process';
import dns from 'dns';
import { once } from 'events';
import fs from 'fs';
import net from 'net';
import os from 'os';
import sharp from 'sharp';
import { httpFetch } from '../../../server/src/fetch/http-fetch';
function cosineSimilarityPrenormalized(e1: Buffer, e2: Buffer) {
const embedding1 = new Float32Array(e1.buffer, e1.byteOffset, e1.length / Float32Array.BYTES_PER_ELEMENT);
const embedding2 = new Float32Array(e2.buffer, e2.byteOffset, e2.length / Float32Array.BYTES_PER_ELEMENT);
let dotProduct = 0;
for (let i = 0; i < embedding1.length; i++) {
dotProduct += embedding1[i] * embedding2[i];
}
return dotProduct;
}
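// Illustrative check (hypothetical vectors, not part of this change):
// [0.6, 0.8] has unit norm, so for prenormalized embeddings the dot
// product alone is the cosine similarity, and v compared with itself is 1:
//   const v = Buffer.from(new Float32Array([0.6, 0.8]).buffer);
//   cosineSimilarityPrenormalized(v, v); // 1.0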
class DiagnosticsPlugin extends ScryptedDeviceBase implements Settings {
storageSettings = new StorageSettings(this, {
validateSystem: {
console: true,
group: 'System',
title: 'Validate System',
description: 'Validate the system configuration.',
type: 'button',
onPut: () => this.validateSystem(),
},
testDevice: {
group: 'Device',
title: 'Validation Device',
@@ -29,14 +50,6 @@ class DiagnosticsPlugin extends ScryptedDeviceBase implements Settings {
this.validateDevice();
},
},
validateSystem: {
console: true,
group: 'System',
title: 'Validate System',
description: 'Validate the system configuration.',
type: 'button',
onPut: () => this.validateSystem(),
},
});
loggedMotion = new Map<string, number>();
@@ -87,6 +100,80 @@ class DiagnosticsPlugin extends ScryptedDeviceBase implements Settings {
}
}
async validateNVR() {
const console = this.console;
const nvrPlugin = sdk.systemManager.getDeviceById('@scrypted/nvr');
if (!nvrPlugin) {
await this.validate(console, 'NVR Plugin Check', async () => {
throw new Error('NVR plugin not installed.');
});
return;
}
// Consolidated loop for detection plugins
const detectionPlugins = [
'@scrypted/onnx',
'@scrypted/openvino',
'@scrypted/coreml',
'@scrypted/ncnn',
'@scrypted/tensorflow-lite'
];
for (const pluginId of detectionPlugins) {
const plugin = sdk.systemManager.getDeviceById<Settings & ObjectDetection>(pluginId);
if (!plugin) {
continue;
}
// Detect objects test
await this.validate(console, `${pluginId}`, async () => {
const settings = await plugin.getSettings();
const executionDevice = settings.find(s => s.key === 'execution_device');
if (executionDevice?.value?.toString().includes('CPU')) {
this.warnStep(console, 'Using CPU execution. GPU recommended for better performance.');
}
const zidane = await sdk.mediaManager.createMediaObjectFromUrl('https://docs.scrypted.app/img/scrypted-nvr/troubleshooting/zidane.jpg');
const detected = await plugin.detectObjects(zidane);
const personFound = detected.detections!.find(d => d.className === 'person' && d.score > .8);
if (!personFound) {
throw new Error('Person not detected in test image.');
}
});
const clip = sdk.systemManager.getDeviceById<TextEmbedding & ImageEmbedding>(pluginId, 'clipembedding');
// tflite and ncnn don't have it
if (!clip) {
continue;
}
// CLIP implementation test
await this.validate(console, `${pluginId} CLIP`, async () => {
// Test CLIP functionality
const testText = 'test';
const textEmbedding = await clip.getTextEmbedding(testText);
if (!textEmbedding || textEmbedding.length === 0) {
throw new Error('Failed to get text embedding.');
}
const testImage = await sdk.mediaManager.createMediaObjectFromUrl('https://docs.scrypted.app/img/scrypted-nvr/troubleshooting/zidane.jpg');
const imageEmbedding = await clip.getImageEmbedding(testImage);
if (!imageEmbedding || imageEmbedding.length === 0) {
throw new Error('Failed to get image embedding.');
}
// Test similarity calculation
const similarity = cosineSimilarityPrenormalized(imageEmbedding, textEmbedding);
if (typeof similarity !== 'number') {
throw new Error('Failed to calculate similarity.');
}
});
}
}
async validateDevice() {
const device = this.storageSettings.values.testDevice as ScryptedDevice & any;
const console = sdk.deviceManager.getMixinConsole(device.id);
@@ -227,19 +314,28 @@ class DiagnosticsPlugin extends ScryptedDeviceBase implements Settings {
const validated = new Set<string | undefined>();
const validateMediaStream = async (stepName: string, destination: MediaStreamDestination) => {
const vsos = await device.getVideoStreamOptions();
const streamId = vsos.find(vso => vso.destinations?.includes(destination))?.id;
let streamId: string | undefined;
await this.validate(console, stepName + ' (Metadata)', async () => {
const vsos = await device.getVideoStreamOptions();
streamId = vsos.find(vso => vso.destinations?.includes(destination))?.id;
});
if (!streamId) {
await this.validate(console, stepName, async () => "Skipped (Not Configured)");
return;
}
if (validated.has(streamId)) {
await this.validate(console, stepName, async () => "Skipped (Duplicate)");
return;
}
validated.add(streamId);
const ffmpegInput = await sdk.mediaManager.convertMediaObjectToJSON<FFmpegInput>(await getVideoStream(destination), ScryptedMimeTypes.FFmpegInput);
if (ffmpegInput.mediaStreamOptions?.video?.codec !== 'h264')
this.warnStep(console, `Stream ${stepName} is using codec ${ffmpegInput.mediaStreamOptions?.video?.codec}. h264 is recommended.`);
await this.validate(console, stepName + ' (Codec)', async () => {
const ffmpegInput = await sdk.mediaManager.convertMediaObjectToJSON<FFmpegInput>(await getVideoStream(destination), ScryptedMimeTypes.FFmpegInput);
if (ffmpegInput.mediaStreamOptions?.video?.codec !== 'h264')
this.warnStep(console, `Stream ${stepName} is using codec ${ffmpegInput.mediaStreamOptions?.video?.codec}. h264 is recommended.`);
});
await validateMedia(stepName, getVideoStream(destination));
const start = Date.now();
@@ -332,20 +428,20 @@ class DiagnosticsPlugin extends ScryptedDeviceBase implements Settings {
}).then(r => r.body.trim()));
await this.validate(this.console, 'System Time Accuracy', async () => {
const response = await httpFetch({
const response = await httpFetch({
url: 'https://cloudflare.com',
responseType: 'text',
timeout: 10000,
});
const dateHeader = response.headers.get('date');
const dateHeader = response.headers.get('date');
if (!dateHeader) {
throw new Error('No date header in response');
}
const serverTime = new Date(dateHeader).getTime(); const localTime = Date.now();
const serverTime = new Date(dateHeader).getTime(); const localTime = Date.now();
const difference = Math.abs(serverTime - localTime);
const differenceSeconds = Math.floor(difference / 1000);
if (differenceSeconds > 5) {
throw new Error(`Time drift detected: ${differenceSeconds} seconds difference from accurate time source.`);
}
@@ -363,14 +459,14 @@ const response = await httpFetch({
'https://home.scrypted.app',
'https://billing.scrypted.app'
];
for (const endpoint of endpoints) {
try {
const response = await httpFetch({
url: endpoint,
timeout: 5000,
});
if (response.statusCode >= 400) {
throw new Error(`${endpoint} returned status ${response.statusCode}`);
}
@@ -378,7 +474,7 @@ const response = await httpFetch({
throw new Error(`${endpoint} is not accessible: ${(error as Error).message}`);
}
}
return 'Both endpoints accessible';
});
@@ -456,18 +552,47 @@ const response = await httpFetch({
throw new Error('Invalid response received from short lived URL.');
});
if (cloudPlugin) {
await this.validate(this.console, 'Cloud IPv4 Address', async () => {
const externalAddress = await sdk.endpointManager.getCloudEndpoint();
if (!externalAddress)
throw new Error('Scrypted Cloud endpoint not found.');
const url = new URL(externalAddress);
const { hostname } = url;
if (net.isIP(hostname))
return;
const addresses = await dns.promises.lookup(hostname, { all: true });
const hasIPv4 = addresses.find(address => address.family === 4);
if (!hasIPv4)
this.warnStep(this.console, 'No IPv4 address found for Scrypted Cloud endpoint.');
else
return hasIPv4.address;
});
await this.validate(this.console, 'Cloud IPv6 Address', async () => {
const externalAddress = await sdk.endpointManager.getCloudEndpoint();
if (!externalAddress)
throw new Error('Scrypted Cloud endpoint not found.');
const url = new URL(externalAddress);
const { hostname } = url;
if (net.isIP(hostname))
return;
const addresses = await dns.promises.lookup(hostname, { all: true });
const hasIPv6 = addresses.find(address => address.family === 6);
if (!hasIPv6)
this.warnStep(this.console, 'No IPv6 address found for Scrypted Cloud endpoint.');
else
return hasIPv6.address;
});
}
if ((hasCUDA || process.platform === 'win32') && onnxPlugin) {
await this.validate(this.console, 'ONNX Plugin', async () => {
const settings = await onnxPlugin.getSettings();
const executionDevice = settings.find(s => s.key === 'execution_device');
if (executionDevice?.value?.toString().includes('CPU'))
this.warnStep(this.console, 'GPU device unavailable or not passed through to container.');
const zidane = await sdk.mediaManager.createMediaObjectFromUrl('https://docs.scrypted.app/img/scrypted-nvr/troubleshooting/zidane.jpg');
const detected = await onnxPlugin.detectObjects(zidane);
const personFound = detected.detections!.find(d => d.className === 'person' && d.score > .9);
if (!personFound)
throw new Error('Person not detected in test image.');
});
}
@@ -477,15 +602,34 @@ const response = await httpFetch({
const availableDevices = settings.find(s => s.key === 'available_devices');
if (!availableDevices?.value?.toString().includes('GPU'))
this.warnStep(this.console, 'GPU device unavailable or not passed through to container.');
const zidane = await sdk.mediaManager.createMediaObjectFromUrl('https://docs.scrypted.app/img/scrypted-nvr/troubleshooting/zidane.jpg');
const detected = await openvinoPlugin.detectObjects(zidane);
const personFound = detected.detections!.find(d => d.className === 'person' && d.score > .9);
if (!personFound)
throw new Error('Person not detected in test image.');
});
}
await this.validateNVR();
await this.validate(this.console, 'External Resource Access', async () => {
const urls = [
'https://huggingface.co/koushd/clip/resolve/main/requirements.txt',
'https://raw.githubusercontent.com/koush/openvino-models/refs/heads/main/scrypted_labels.txt',
'https://registry.npmjs.org/@scrypted/server'
];
for (const url of urls) {
try {
const response = await httpFetch({
url,
timeout: 5000,
});
if (response.statusCode >= 400) {
throw new Error(`${url} returned status ${response.statusCode}`);
}
} catch (error) {
throw new Error(`${url} is not accessible: ${(error as Error).message}`);
}
}
});
if (nvrPlugin) {
await this.validate(this.console, "GPU Decode", async () => {
const ffmpegPath = await sdk.mediaManager.getFFmpegPath();
@@ -570,7 +714,7 @@ const response = await httpFetch({
if (image.width !== 320)
throw new Error('Unexpected image width from GPU transform.');
const detected = await openvinoPlugin.detectObjects(zidane);
const personFound = detected.detections!.find(d => d.className === 'person' && d.score > .9);
const personFound = detected.detections!.find(d => d.className === 'person' && d.score > .8);
if (!personFound)
throw new Error('Person not detected in test image.');
}

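The Cloud IPv4/IPv6 checks added above boil down to a dual-stack DNS lookup. A minimal sketch of that probe (assumes Node's dns.promises; the hostname is illustrative):

import dns from 'dns';

async function dualStackProbe(hostname: string) {
    const addresses = await dns.promises.lookup(hostname, { all: true });
    return {
        ipv4: addresses.find(a => a.family === 4)?.address,
        ipv6: addresses.find(a => a.family === 6)?.address,
    };
}

// e.g. await dualStackProbe('home.scrypted.app')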
View File

@@ -1,12 +1,12 @@
{
"name": "@vityevato/hikvision-doorbell",
"version": "1.0.1",
"version": "2.0.0d",
"lockfileVersion": 2,
"requires": true,
"packages": {
"": {
"name": "@vityevato/hikvision-doorbell",
"version": "1.0.1",
"version": "2.0.0d",
"license": "Apache",
"dependencies": {
"@scrypted/common": "file:../../common",
@@ -30,39 +30,41 @@
"license": "ISC",
"dependencies": {
"@scrypted/sdk": "file:../sdk",
"@scrypted/types": "^0.5.27",
"http-auth-utils": "^5.0.1",
"typescript": "^5.5.3"
},
"devDependencies": {
"@types/node": "^20.11.0",
"@types/node": "^20.19.11",
"monaco-editor": "^0.50.0",
"ts-node": "^10.9.2"
}
},
"../../sdk": {
"name": "@scrypted/sdk",
"version": "0.3.118",
"version": "0.5.48",
"license": "ISC",
"dependencies": {
"@babel/preset-typescript": "^7.26.0",
"@rollup/plugin-commonjs": "^28.0.1",
"@babel/preset-typescript": "^7.27.1",
"@rollup/plugin-commonjs": "^28.0.5",
"@rollup/plugin-json": "^6.1.0",
"@rollup/plugin-node-resolve": "^15.3.0",
"@rollup/plugin-typescript": "^12.1.1",
"@rollup/plugin-node-resolve": "^16.0.1",
"@rollup/plugin-typescript": "^12.1.2",
"@rollup/plugin-virtual": "^3.0.2",
"adm-zip": "^0.5.16",
"axios": "^1.7.8",
"babel-loader": "^9.2.1",
"axios": "^1.10.0",
"babel-loader": "^10.0.0",
"babel-plugin-const-enum": "^1.2.0",
"ncp": "^2.0.0",
"openai": "^6.1.0",
"raw-loader": "^4.0.2",
"rimraf": "^6.0.1",
"rollup": "^4.27.4",
"rollup": "^4.43.0",
"tmp": "^0.2.3",
"ts-loader": "^9.5.1",
"ts-loader": "^9.5.2",
"tslib": "^2.8.1",
"typescript": "^5.6.3",
"webpack": "^5.96.1",
"typescript": "^5.8.3",
"webpack": "^5.99.9",
"webpack-bundle-analyzer": "^4.10.2"
},
"bin": {
@@ -75,60 +77,62 @@
"scrypted-webpack": "bin/scrypted-webpack.js"
},
"devDependencies": {
"@types/node": "^22.10.1",
"@types/node": "^24.0.1",
"ts-node": "^10.9.2",
"typedoc": "^0.26.11"
"typedoc": "^0.28.5"
}
},
"../../server": {
"name": "@scrypted/server",
"version": "0.138.1",
"version": "0.142.9",
"hasInstallScript": true,
"license": "ISC",
"dependencies": {
"@scrypted/ffmpeg-static": "^6.1.0-build3",
"@scrypted/node-pty": "^1.0.22",
"@scrypted/types": "^0.3.108",
"@scrypted/node-pty": "^1.0.25",
"@scrypted/types": "^0.5.43",
"adm-zip": "^0.5.16",
"body-parser": "^1.20.3",
"body-parser": "^2.2.0",
"cookie-parser": "^1.4.7",
"dotenv": "^16.4.5",
"engine.io": "^6.6.2",
"express": "^4.21.1",
"dotenv": "^16.5.0",
"engine.io": "^6.6.4",
"express": "^5.1.0",
"follow-redirects": "^1.15.9",
"http-auth": "^4.2.0",
"level": "^8.0.1",
"http-auth": "^4.2.1",
"level": "^10.0.0",
"lodash": "^4.17.21",
"mime-types": "^3.0.1",
"node-dijkstra": "^2.5.0",
"node-forge": "^1.3.1",
"node-gyp": "^10.2.0",
"py": "npm:@bjia56/portable-python@^0.1.112",
"semver": "^7.6.3",
"sharp": "^0.33.5",
"node-gyp": "^11.2.0",
"py": "npm:@bjia56/portable-python@^0.1.141",
"semver": "^7.7.2",
"sharp": "^0.34.2",
"source-map-support": "^0.5.21",
"tar": "^7.4.3",
"tslib": "^2.8.1",
"typescript": "^5.5.4",
"typescript": "^5.8.3",
"whatwg-mimetype": "^4.0.0",
"ws": "^8.18.0"
"ws": "^8.18.2"
},
"bin": {
"scrypted-serve": "bin/scrypted-serve"
},
"devDependencies": {
"@types/adm-zip": "^0.5.7",
"@types/cookie-parser": "^1.4.8",
"@types/express": "^5.0.0",
"@types/cookie-parser": "^1.4.9",
"@types/express": "^5.0.3",
"@types/follow-redirects": "^1.14.4",
"@types/http-auth": "^4.1.4",
"@types/lodash": "^4.17.13",
"@types/node": "^22.10.1",
"@types/lodash": "^4.17.17",
"@types/mime-types": "^3.0.1",
"@types/node": "^24.0.3",
"@types/node-dijkstra": "^2.5.6",
"@types/node-forge": "^1.3.11",
"@types/semver": "^7.5.8",
"@types/semver": "^7.7.0",
"@types/source-map-support": "^0.5.10",
"@types/whatwg-mimetype": "^3.0.2",
"@types/ws": "^8.5.13",
"@types/ws": "^8.18.1",
"rimraf": "^6.0.1"
}
},
@@ -249,7 +253,8 @@
"version": "file:../../common",
"requires": {
"@scrypted/sdk": "file:../sdk",
"@types/node": "^20.11.0",
"@scrypted/types": "^0.5.27",
"@types/node": "^20.19.11",
"http-auth-utils": "^5.0.1",
"monaco-editor": "^0.50.0",
"ts-node": "^10.9.2",
@@ -259,28 +264,29 @@
"@scrypted/sdk": {
"version": "file:../../sdk",
"requires": {
"@babel/preset-typescript": "^7.26.0",
"@rollup/plugin-commonjs": "^28.0.1",
"@babel/preset-typescript": "^7.27.1",
"@rollup/plugin-commonjs": "^28.0.5",
"@rollup/plugin-json": "^6.1.0",
"@rollup/plugin-node-resolve": "^15.3.0",
"@rollup/plugin-typescript": "^12.1.1",
"@rollup/plugin-node-resolve": "^16.0.1",
"@rollup/plugin-typescript": "^12.1.2",
"@rollup/plugin-virtual": "^3.0.2",
"@types/node": "^22.10.1",
"@types/node": "^24.0.1",
"adm-zip": "^0.5.16",
"axios": "^1.7.8",
"babel-loader": "^9.2.1",
"axios": "^1.10.0",
"babel-loader": "^10.0.0",
"babel-plugin-const-enum": "^1.2.0",
"ncp": "^2.0.0",
"openai": "^6.1.0",
"raw-loader": "^4.0.2",
"rimraf": "^6.0.1",
"rollup": "^4.27.4",
"rollup": "^4.43.0",
"tmp": "^0.2.3",
"ts-loader": "^9.5.1",
"ts-loader": "^9.5.2",
"ts-node": "^10.9.2",
"tslib": "^2.8.1",
"typedoc": "^0.26.11",
"typescript": "^5.6.3",
"webpack": "^5.96.1",
"typedoc": "^0.28.5",
"typescript": "^5.8.3",
"webpack": "^5.99.9",
"webpack-bundle-analyzer": "^4.10.2"
}
},
@@ -288,44 +294,46 @@
"version": "file:../../server",
"requires": {
"@scrypted/ffmpeg-static": "^6.1.0-build3",
"@scrypted/node-pty": "^1.0.22",
"@scrypted/types": "^0.3.108",
"@scrypted/node-pty": "^1.0.25",
"@scrypted/types": "^0.5.43",
"@types/adm-zip": "^0.5.7",
"@types/cookie-parser": "^1.4.8",
"@types/express": "^5.0.0",
"@types/cookie-parser": "^1.4.9",
"@types/express": "^5.0.3",
"@types/follow-redirects": "^1.14.4",
"@types/http-auth": "^4.1.4",
"@types/lodash": "^4.17.13",
"@types/node": "^22.10.1",
"@types/lodash": "^4.17.17",
"@types/mime-types": "^3.0.1",
"@types/node": "^24.0.3",
"@types/node-dijkstra": "^2.5.6",
"@types/node-forge": "^1.3.11",
"@types/semver": "^7.5.8",
"@types/semver": "^7.7.0",
"@types/source-map-support": "^0.5.10",
"@types/whatwg-mimetype": "^3.0.2",
"@types/ws": "^8.5.13",
"@types/ws": "^8.18.1",
"adm-zip": "^0.5.16",
"body-parser": "^1.20.3",
"body-parser": "^2.2.0",
"cookie-parser": "^1.4.7",
"dotenv": "^16.4.5",
"engine.io": "^6.6.2",
"express": "^4.21.1",
"dotenv": "^16.5.0",
"engine.io": "^6.6.4",
"express": "^5.1.0",
"follow-redirects": "^1.15.9",
"http-auth": "^4.2.0",
"level": "^8.0.1",
"http-auth": "^4.2.1",
"level": "^10.0.0",
"lodash": "^4.17.21",
"mime-types": "^3.0.1",
"node-dijkstra": "^2.5.0",
"node-forge": "^1.3.1",
"node-gyp": "^10.2.0",
"py": "npm:@bjia56/portable-python@^0.1.112",
"node-gyp": "^11.2.0",
"py": "npm:@bjia56/portable-python@^0.1.141",
"rimraf": "^6.0.1",
"semver": "^7.6.3",
"sharp": "^0.33.5",
"semver": "^7.7.2",
"sharp": "^0.34.2",
"source-map-support": "^0.5.21",
"tar": "^7.4.3",
"tslib": "^2.8.1",
"typescript": "^5.5.4",
"typescript": "^5.8.3",
"whatwg-mimetype": "^4.0.0",
"ws": "^8.18.0"
"ws": "^8.18.2"
}
},
"@types/ip": {

View File

@@ -1,6 +1,6 @@
{
"name": "@vityevato/hikvision-doorbell",
"version": "2.0.2",
"version": "2.0.8",
"description": "Hikvision Doorbell Plugin for Scrypted",
"author": "Roman Sokolov",
"license": "Apache",

View File

@@ -42,34 +42,44 @@ export class AuthRequst {
const req = Http.request(url, opt)
// Apply timeout if specified (Node.js http.request doesn't use timeout from options)
if (opt.timeout) {
req.setTimeout (opt.timeout, () => {
req.destroy (new Error (`Request timeout after ${opt.timeout}ms`));
});
}
req.once('response', async (resp) => {
try {
if (resp.statusCode == 401) {
if (resp.statusCode == 401) {
// Hikvision quirk: even if we already had a sessionAuth, a fresh
// WWW-Authenticate challenge may require rebuilding credentials.
// Limit the number of digest rebuilds to avoid infinite loops.
const attempt = (opt.digestRetry ?? 0);
if (attempt >= 2) {
// Give up after a couple of rebuild attempts and surface the 401 response
resolve(await this.parseResponse (opt.responseType, resp));
return;
}
// Hikvision quirk: even if we already had a sessionAuth, a fresh
// WWW-Authenticate challenge may require rebuilding credentials.
// Limit the number of digest rebuilds to avoid infinite loops.
const attempt = (opt.digestRetry ?? 0);
if (attempt >= 2) {
// Give up after a couple of rebuild attempts and surface the 401 response
resolve(await this.parseResponse (opt.responseType, resp));
return;
const newAuth = this.createAuth(resp.headers['www-authenticate'], !!this.auth);
// Clear cached auth to avoid stale nonce reuse
this.auth = undefined;
opt.sessionAuth = newAuth;
opt.digestRetry = attempt + 1;
const result = await this.request(url, opt, body);
resolve(result);
}
const newAuth = this.createAuth(resp.headers['www-authenticate'], !!this.auth);
// Clear cached auth to avoid stale nonce reuse
this.auth = undefined;
opt.sessionAuth = newAuth;
opt.digestRetry = attempt + 1;
const result = await this.request(url, opt, body);
resolve(result);
}
else {
// Cache the negotiated session auth only if it was provided for this request.
if (opt.sessionAuth) {
this.auth = opt.sessionAuth;
else {
// Cache the negotiated session auth only if it was provided for this request.
if (opt.sessionAuth) {
this.auth = opt.sessionAuth;
}
resolve(await this.parseResponse(opt.responseType, resp));
}
resolve(await this.parseResponse(opt.responseType, resp));
} catch (error) {
reject(error);
}
});
@@ -169,6 +179,10 @@ export class AuthRequst {
readable.once('end', () => {
resolve(result);
});
readable.once('error', (error) => {
reject(error);
});
});
}
@@ -184,6 +198,10 @@ export class AuthRequst {
readable.once('end', () => {
resolve(result);
});
readable.once('error', (error) => {
reject(error);
});
});
}

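The restructured 401 handling above caps digest rebuilds at two attempts so a device that keeps issuing WWW-Authenticate challenges cannot loop forever. The shape of that pattern, reduced to a sketch (buildAuthHeader is a hypothetical stand-in for AuthRequst.createAuth):

// Hypothetical digest helper; stands in for AuthRequst.createAuth above.
declare function buildAuthHeader(challenge: string | null): string;

async function requestWithDigest(url: string, auth?: string, attempt = 0): Promise<Response> {
    const res = await fetch(url, auth ? { headers: { Authorization: auth } } : undefined);
    // Cap rebuilds at two attempts and surface the 401 instead of looping.
    if (res.status !== 401 || attempt >= 2)
        return res;
    const next = buildAuthHeader(res.headers.get('www-authenticate'));
    return requestWithDigest(url, next, attempt + 1);
}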
View File

@@ -49,6 +49,7 @@ interface AcsEventResponse {
const maxEventAgeSeconds = 30; // Ignore events older than this many seconds
const callPollingIntervalSec = 1; // Call status polling interval in seconds
const alertTickTimeoutSec = 60; // Alert stream tick timeout in seconds
const acsPollingTimeoutSec = 5; // ACS polling request timeout in seconds
const EventCodeMap = new Map<string, HikvisionDoorbellEvent>([
['5,25', HikvisionDoorbellEvent.DoorOpened],
@@ -113,16 +114,19 @@ export class HikvisionDoorbellAPI extends HikvisionCameraAPI
password: string,
callStatusPolling: boolean,
public console: Console,
public storage: Storage
public storage: Storage,
skipCapabilitiesInit: boolean = false
)
{
let endpoint = libip.isV4Format(address) ? `${address}:${port}` : `[${address}]:${port}`;
let endpoint = libip.isV4Format (address) ? `${address}:${port}` : `[${address}]:${port}`;
super (endpoint, username, password, console);
this.endpoint = endpoint;
this.auth = new AuthRequst (username, password, console);
// Initialize door capabilities
this.initializeDoorCapabilities();
// Initialize door capabilities (skip for event-only API instances)
if (!skipCapabilitiesInit) {
this.initializeDoorCapabilities();
}
this.useCallStatusPolling = callStatusPolling;
}
@@ -136,26 +140,36 @@ export class HikvisionDoorbellAPI extends HikvisionCameraAPI
{
// Create a promise for this specific request to prevent queue blocking
const requestPromise = this.requestQueue.then(async () => {
let url: string = urlOrOptions as string;
let url: string | undefined;
let opt: AuthRequestOptions | undefined;
if (typeof urlOrOptions !== 'string') {
url = urlOrOptions.url as string;
if (typeof urlOrOptions.url !== 'string') {
url = (urlOrOptions.url as URL).toString();
if (typeof urlOrOptions === 'string') {
url = urlOrOptions;
} else {
if (urlOrOptions.url) {
url = typeof urlOrOptions.url === 'string'
? urlOrOptions.url
: urlOrOptions.url.toString();
}
opt = {
method: urlOrOptions.method,
responseType: urlOrOptions.responseType || 'buffer',
headers: urlOrOptions.headers as OutgoingHttpHeaders,
timeout: urlOrOptions.timeout,
};
}
// Validate URL before making request
if (!url || url.includes ('undefined')) {
throw new Error (`Invalid request URL: ${url}`);
}
// Safety fallback when no request options were provided
if (!opt) {
opt = { responseType: 'buffer' } as AuthRequestOptions;
}
return await this.auth.request(url, opt, body);
return await this.auth.request (url, opt, body);
});
// Update the queue to continue after this request (success or failure)
@@ -416,7 +430,8 @@ export class HikvisionDoorbellAPI extends HikvisionCameraAPI
// If already loading, wait for the existing promise
if (this.loadCapabilitiesPromise) {
return this.loadCapabilitiesPromise;
await this.loadCapabilitiesPromise;
return;
}
// Start loading and store the promise
@@ -654,23 +669,35 @@ export class HikvisionDoorbellAPI extends HikvisionCameraAPI
this.console.error ('Failed to set phone number record:', e);
}
// Set call button configuration
// Small delay to allow device to process previous request
await new Promise (resolve => setTimeout (resolve, 500));
// Set call button configuration with retry logic
const keyCfgData = `<?xml version="1.0" encoding="UTF-8"?><KeyCfg xmlns="http://www.isapi.org/ver20/XMLSchema" version="2.0"><id>${buttonNumber}</id><callNumber>${roomNumber}</callNumber><moduleId>1</moduleId><templateNo>0</templateNo></KeyCfg>`;
try {
const response = await this.request ({
url: `http://${this.endpoint}/ISAPI/VideoIntercom/keyCfg/${buttonNumber}`,
method: 'PUT',
headers: {
'Content-Type': 'application/x-www-form-urlencoded'
},
responseType: 'text',
}, keyCfgData);
this.console.debug (`Call button ${buttonNumber} configured for room ${roomNumber}: ${response.body}`);
}
catch (e) {
this.console.error (`Failed to configure call button ${buttonNumber}:`, e);
for (let attempt = 0; attempt < 2; attempt++) {
try {
const response = await this.request ({
url: `http://${this.endpoint}/ISAPI/VideoIntercom/keyCfg/${buttonNumber}`,
method: 'PUT',
headers: {
'Content-Type': 'application/xml'
},
responseType: 'text',
}, keyCfgData);
this.console.debug (`Call button ${buttonNumber} configured for room ${roomNumber}: ${response.body}`);
break;
}
catch (e) {
if (attempt === 0 && (e.code === 'EPIPE' || e.code === 'ECONNRESET')) {
this.console.warn (`Call button ${buttonNumber} configuration failed (${e.code}), retrying...`);
await new Promise (resolve => setTimeout (resolve, 1000));
continue;
}
this.console.error (`Failed to configure call button ${buttonNumber}:`, e);
break;
}
}
@@ -719,8 +746,8 @@ export class HikvisionDoorbellAPI extends HikvisionCameraAPI
private isCallPollingActive: boolean = false;
// ACS event polling properties
private acsEventPollingInterval?: NodeJS.Timeout;
private lastAcsEventTime: Date = new Date();
private isAcsPollingInProgress: boolean = false;
// Timezone properties
private deviceTimezone?: string; // GMT offset in format like '+03:00'
@@ -901,6 +928,7 @@ export class HikvisionDoorbellAPI extends HikvisionCameraAPI
const response = await this.request ({
url: `http://${this.endpoint}/ISAPI/AccessControl/AcsEvent?format=json`,
method: 'POST',
timeout: acsPollingTimeoutSec * 1000,
headers: {
'Content-Type': 'application/json',
'Accept': 'application/json',
@@ -961,8 +989,15 @@ export class HikvisionDoorbellAPI extends HikvisionCameraAPI
* This method can be called periodically to check for new events
* @param lastEventTime - Optional timestamp to filter events newer than this time
*/
private async pollAndProcessAcsEvents (lastEventTime?: Date): Promise<void>
private async pollAndProcessAcsEvents (lastEventTime?: Date, isRetry: boolean = false): Promise<void>
{
// Prevent multiple concurrent polling requests
if (this.isAcsPollingInProgress) {
this.console.debug ('ACS polling already in progress, skipping');
return;
}
this.isAcsPollingInProgress = true;
try {
const eventResponse = await this.getAcsEvents();
let latestEventTime: Date | undefined;
@@ -993,7 +1028,16 @@ export class HikvisionDoorbellAPI extends HikvisionCameraAPI
} catch (error) {
this.console.error (`Failed to poll and process ACS events: ${error}`);
throw error;
// Retry once after a short delay if this was the first attempt
if (!isRetry) {
this.console.debug (`Retrying ACS polling after ${acsPollingTimeoutSec} seconds...`);
this.isAcsPollingInProgress = false;
await new Promise (resolve => setTimeout (resolve, acsPollingTimeoutSec * 1000));
return this.pollAndProcessAcsEvents (lastEventTime, true);
}
} finally {
this.isAcsPollingInProgress = false;
}
}
@@ -1061,7 +1105,8 @@ export class HikvisionDoorbellAPI extends HikvisionCameraAPI
url: `http://${this.endpoint}/ISAPI/Event/notification/alertStream`,
responseType: 'readable',
headers: {
'Accept': '*/*'
'Accept': '*/*',
'Connection': 'keep-alive'
}
});
@@ -1162,6 +1207,7 @@ export class HikvisionDoorbellAPI extends HikvisionCameraAPI
this.console.debug (`AlertStream JSON: ${JSON.stringify (eventData, null, 2)}`);
// Poll ACS events (errors are handled internally)
this.pollAndProcessAcsEvents (this.lastAcsEventTime);
}
}

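pollAndProcessAcsEvents above combines a reentrancy flag with a single delayed retry, so overlapping timer ticks are dropped while a transient failure still gets one more chance. The pattern in isolation (poll() is a hypothetical stand-in for the ACS event fetch):

const poll = async () => { /* hypothetical: fetch and process ACS events */ };

let inFlight = false;
async function pollOnce(isRetry = false): Promise<void> {
    if (inFlight)
        return; // drop overlapping timer ticks instead of queueing them
    inFlight = true;
    try {
        await poll();
    } catch (e) {
        if (!isRetry) {
            inFlight = false; // release before the delayed retry re-enters
            await new Promise(resolve => setTimeout(resolve, 5000));
            return pollOnce(true);
        }
    } finally {
        inFlight = false;
    }
}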
View File

@@ -93,6 +93,14 @@ export class HikvisionCameraDoorbell extends HikvisionCamera implements Camera,
const debugEnabled = this.storage.getItem ('debug');
this.debugController.setDebugEnabled (debugEnabled === 'true');
// Add global unhandledRejection handler to prevent silent failures
process.on ('unhandledRejection', (reason: any, promise: Promise<any>) => {
this.console.error (`Unhandled Promise Rejection: ${reason}`);
if (reason?.stack) {
this.console.error (`Stack trace: ${reason.stack}`);
}
});
this.updateSip();
}
@@ -210,6 +218,8 @@ export class HikvisionCameraDoorbell extends HikvisionCamera implements Camera,
this.stopIntercom();
});
}
}).catch(e => {
this.console.error('Failed to stop call during reconnection:', e);
});
return;
}
@@ -1152,6 +1162,9 @@ export class HikvisionCameraDoorbell extends HikvisionCamera implements Camera,
this.httpStreamSwitcher.destroy();
this.httpStreamSwitcher = undefined;
}
} catch (error) {
this.console.error (`Failed to stop intercom: ${error}`);
// Don't throw - we want to ensure cleanup happens
} finally {
// Always reset state
this.intercomBusy = false;
@@ -1168,6 +1181,7 @@ export class HikvisionCameraDoorbell extends HikvisionCamera implements Camera,
private createEventApi(): HikvisionDoorbellAPI
{
// Event API only listens for events, skip door capabilities initialization
return new HikvisionDoorbellAPI (
this.getIPAddress(),
this.getHttpPort(),
@@ -1175,7 +1189,8 @@ export class HikvisionCameraDoorbell extends HikvisionCamera implements Camera,
this.getPassword(),
this.isCallPolling(),
this.console,
this.storage
this.storage,
true // skipCapabilitiesInit
);
}
@@ -1218,7 +1233,11 @@ export class HikvisionCameraDoorbell extends HikvisionCamera implements Camera,
} catch (e) {
this.console.error (`Error installing fake SIP settings: ${e}`);
// repeat if unreached
this.installSipSettingsOnDeviceTimeout = setTimeout (() => this.installSipSettingsOnDevice(), UNREACHED_RETRY_SEC * 1000);
this.installSipSettingsOnDeviceTimeout = setTimeout (() => {
this.installSipSettingsOnDevice().catch(err => {
this.console.error('Failed to retry installing SIP settings:', err);
});
}, UNREACHED_RETRY_SEC * 1000);
}
}
}

View File

@@ -2,22 +2,37 @@ import sdk from '@scrypted/sdk';
import { isLoopback, isV4Format, isV6Format } from 'ip';
import dgram from 'node:dgram';
const MAX_RETRIES = 10;
const RETRY_DELAY_SEC = 10;
export async function localServiceIpAddress (doorbellIp: string): Promise<string>
{
let host = "localhost";
try {
const typeCheck = isV4Format (doorbellIp) ? isV4Format : isV6Format;
for (const address of await sdk.endpointManager.getLocalAddresses()) {
if (!isLoopback(address) && typeCheck(address)) {
host = address;
break;
const typeCheck = isV4Format (doorbellIp) ? isV4Format : isV6Format;
for (let attempt = 0; attempt < MAX_RETRIES; attempt++)
{
try
{
const addresses = await sdk.endpointManager.getLocalAddresses();
for (const address of addresses || [])
{
if (!isLoopback (address) && typeCheck (address))
{
return address;
}
}
}
}
catch (e) {
catch (e) {
}
// Wait before retry if addresses not available yet
if (attempt < MAX_RETRIES - 1) {
await new Promise (resolve => setTimeout (resolve, RETRY_DELAY_SEC * 1000));
}
}
return host;
throw new Error('Could not find local service IP address');
}
export function udpSocketType (ip: string): dgram.SocketType {

View File

@@ -17,7 +17,7 @@
},
"devDependencies": {
"@types/content-type": "^1.1.8",
"@types/node": "^20.11.30"
"@types/node": "^22.19.1"
}
},
"../../common": {
@@ -26,34 +26,43 @@
"license": "ISC",
"dependencies": {
"@scrypted/sdk": "file:../sdk",
"@scrypted/server": "file:../server",
"@scrypted/types": "^0.5.27",
"http-auth-utils": "^5.0.1",
"typescript": "^5.3.3"
"typescript": "^5.5.3"
},
"devDependencies": {
"@types/node": "^20.11.0",
"@types/node": "^20.19.11",
"monaco-editor": "^0.50.0",
"ts-node": "^10.9.2"
}
},
"../../sdk": {
"name": "@scrypted/sdk",
"version": "0.3.29",
"version": "0.5.52",
"license": "ISC",
"dependencies": {
"@babel/preset-typescript": "^7.18.6",
"adm-zip": "^0.4.13",
"axios": "^1.6.5",
"babel-loader": "^9.1.0",
"babel-plugin-const-enum": "^1.1.0",
"esbuild": "^0.15.9",
"@babel/preset-typescript": "^7.27.1",
"@rollup/plugin-commonjs": "^28.0.9",
"@rollup/plugin-json": "^6.1.0",
"@rollup/plugin-node-resolve": "^16.0.1",
"@rollup/plugin-terser": "^0.4.4",
"@rollup/plugin-typescript": "^12.3.0",
"@rollup/plugin-virtual": "^3.0.2",
"adm-zip": "^0.5.16",
"axios": "^1.10.0",
"babel-loader": "^10.0.0",
"babel-plugin-const-enum": "^1.2.0",
"ncp": "^2.0.0",
"openai": "^6.1.0",
"raw-loader": "^4.0.2",
"rimraf": "^3.0.2",
"tmp": "^0.2.1",
"ts-loader": "^9.4.2",
"typescript": "^4.9.4",
"webpack": "^5.75.0",
"webpack-bundle-analyzer": "^4.5.0"
"rimraf": "^6.0.1",
"rollup": "^4.52.5",
"tmp": "^0.2.3",
"ts-loader": "^9.5.4",
"tslib": "^2.8.1",
"typescript": "^5.9.3",
"webpack": "^5.99.9",
"webpack-bundle-analyzer": "^4.10.2"
},
"bin": {
"scrypted-changelog": "bin/scrypted-changelog.js",
@@ -65,11 +74,9 @@
"scrypted-webpack": "bin/scrypted-webpack.js"
},
"devDependencies": {
"@types/node": "^18.11.18",
"@types/stringify-object": "^4.0.0",
"stringify-object": "^3.3.0",
"ts-node": "^10.4.0",
"typedoc": "^0.23.21"
"@types/node": "^24.9.2",
"ts-node": "^10.9.2",
"typedoc": "^0.28.14"
}
},
"../sdk": {
@@ -90,11 +97,12 @@
"dev": true
},
"node_modules/@types/node": {
"version": "20.11.30",
"resolved": "https://registry.npmjs.org/@types/node/-/node-20.11.30.tgz",
"integrity": "sha512-dHM6ZxwlmuZaRmUPfv1p+KrdD1Dci04FbdEm/9wEMouFqxYoFl5aMkt0VMAUtYRQDyYvD41WJLukhq/ha3YuTw==",
"version": "22.19.1",
"resolved": "https://registry.npmjs.org/@types/node/-/node-22.19.1.tgz",
"integrity": "sha512-LCCV0HdSZZZb34qifBsyWlUmok6W7ouER+oQIGBScS8EsZsQbrtFTUrDX4hOl+CS6p7cnNC4td+qrSVGSCTUfQ==",
"license": "MIT",
"dependencies": {
"undici-types": "~5.26.4"
"undici-types": "~6.21.0"
}
},
"node_modules/@types/xml2js": {
@@ -119,9 +127,10 @@
"integrity": "sha512-NqVDv9TpANUjFm0N8uM5GxL36UgKi9/atZw+x7YFnQ8ckwFGKrl4xX4yWtrey3UJm5nP1kUbnYgLopqWNSRhWw=="
},
"node_modules/undici-types": {
"version": "5.26.5",
"resolved": "https://registry.npmjs.org/undici-types/-/undici-types-5.26.5.tgz",
"integrity": "sha512-JlCMO+ehdEIKqlFxk6IfVoAUVmgz7cU7zD/h9XZ0qzeosSHmUJVOzSQvvYSYWXkFXC+IfLKSIffhv0sVZup6pA=="
"version": "6.21.0",
"resolved": "https://registry.npmjs.org/undici-types/-/undici-types-6.21.0.tgz",
"integrity": "sha512-iwDZqg0QAGrg9Rav5H4n0M64c3mkR59cJ6wQp+7C4nI0gsmExaedaYLNO44eT4AtBBwjbTiGPMlt2Md0T9H9JQ==",
"license": "MIT"
},
"node_modules/xml2js": {
"version": "0.6.2",
@@ -149,35 +158,42 @@
"version": "file:../../common",
"requires": {
"@scrypted/sdk": "file:../sdk",
"@scrypted/server": "file:../server",
"@types/node": "^20.11.0",
"@scrypted/types": "^0.5.27",
"@types/node": "^20.19.11",
"http-auth-utils": "^5.0.1",
"monaco-editor": "^0.50.0",
"ts-node": "^10.9.2",
"typescript": "^5.3.3"
"typescript": "^5.5.3"
}
},
"@scrypted/sdk": {
"version": "file:../../sdk",
"requires": {
"@babel/preset-typescript": "^7.18.6",
"@types/node": "^18.11.18",
"@types/stringify-object": "^4.0.0",
"adm-zip": "^0.4.13",
"axios": "^1.6.5",
"babel-loader": "^9.1.0",
"babel-plugin-const-enum": "^1.1.0",
"esbuild": "^0.15.9",
"@babel/preset-typescript": "^7.27.1",
"@rollup/plugin-commonjs": "^28.0.9",
"@rollup/plugin-json": "^6.1.0",
"@rollup/plugin-node-resolve": "^16.0.1",
"@rollup/plugin-terser": "^0.4.4",
"@rollup/plugin-typescript": "^12.3.0",
"@rollup/plugin-virtual": "^3.0.2",
"@types/node": "^24.9.2",
"adm-zip": "^0.5.16",
"axios": "^1.10.0",
"babel-loader": "^10.0.0",
"babel-plugin-const-enum": "^1.2.0",
"ncp": "^2.0.0",
"openai": "^6.1.0",
"raw-loader": "^4.0.2",
"rimraf": "^3.0.2",
"stringify-object": "^3.3.0",
"tmp": "^0.2.1",
"ts-loader": "^9.4.2",
"ts-node": "^10.4.0",
"typedoc": "^0.23.21",
"typescript": "^4.9.4",
"webpack": "^5.75.0",
"webpack-bundle-analyzer": "^4.5.0"
"rimraf": "^6.0.1",
"rollup": "^4.52.5",
"tmp": "^0.2.3",
"ts-loader": "^9.5.4",
"ts-node": "^10.9.2",
"tslib": "^2.8.1",
"typedoc": "^0.28.14",
"typescript": "^5.9.3",
"webpack": "^5.99.9",
"webpack-bundle-analyzer": "^4.10.2"
}
},
"@types/content-type": {
@@ -187,11 +203,11 @@
"dev": true
},
"@types/node": {
"version": "20.11.30",
"resolved": "https://registry.npmjs.org/@types/node/-/node-20.11.30.tgz",
"integrity": "sha512-dHM6ZxwlmuZaRmUPfv1p+KrdD1Dci04FbdEm/9wEMouFqxYoFl5aMkt0VMAUtYRQDyYvD41WJLukhq/ha3YuTw==",
"version": "22.19.1",
"resolved": "https://registry.npmjs.org/@types/node/-/node-22.19.1.tgz",
"integrity": "sha512-LCCV0HdSZZZb34qifBsyWlUmok6W7ouER+oQIGBScS8EsZsQbrtFTUrDX4hOl+CS6p7cnNC4td+qrSVGSCTUfQ==",
"requires": {
"undici-types": "~5.26.4"
"undici-types": "~6.21.0"
}
},
"@types/xml2js": {
@@ -213,9 +229,9 @@
"integrity": "sha512-NqVDv9TpANUjFm0N8uM5GxL36UgKi9/atZw+x7YFnQ8ckwFGKrl4xX4yWtrey3UJm5nP1kUbnYgLopqWNSRhWw=="
},
"undici-types": {
"version": "5.26.5",
"resolved": "https://registry.npmjs.org/undici-types/-/undici-types-5.26.5.tgz",
"integrity": "sha512-JlCMO+ehdEIKqlFxk6IfVoAUVmgz7cU7zD/h9XZ0qzeosSHmUJVOzSQvvYSYWXkFXC+IfLKSIffhv0sVZup6pA=="
"version": "6.21.0",
"resolved": "https://registry.npmjs.org/undici-types/-/undici-types-6.21.0.tgz",
"integrity": "sha512-iwDZqg0QAGrg9Rav5H4n0M64c3mkR59cJ6wQp+7C4nI0gsmExaedaYLNO44eT4AtBBwjbTiGPMlt2Md0T9H9JQ=="
},
"xml2js": {
"version": "0.6.2",

View File

@@ -45,6 +45,6 @@
},
"devDependencies": {
"@types/content-type": "^1.1.8",
"@types/node": "^20.11.30"
"@types/node": "^22.19.1"
}
}

View File

@@ -255,6 +255,7 @@ export class HikvisionCamera extends RtspSmartCamera implements Camera, Intercom
detectionId,
timestamp: now,
detections,
sourceId: this.pluginId
};
this.onDeviceEvent(ScryptedInterface.ObjectDetector, detected);

View File

@@ -20,6 +20,11 @@ export async function getDeviceInfo(credential: AuthFetchCredentialState, addres
const serialNumber = response.body.match(/>(.*?)<\/serialNumber>/)?.[1];
const macAddress = response.body.match(/>(.*?)<\/macAddress>/)?.[1];
const firmwareVersion = response.body.match(/>(.*?)<\/firmwareVersion>/)?.[1];
if (!deviceModel && !deviceName && !serialNumber && !macAddress && !firmwareVersion) {
throw new Error('Failed to parse device info from camera.');
}
return {
deviceModel,
deviceName,

View File

@@ -1 +1 @@
../../../openvino/src/ov/async_infer.py
../../../openvino/src/common/async_infer.py

View File

@@ -1,19 +1,19 @@
{
"name": "@scrypted/objectdetector",
"version": "0.1.73",
"version": "0.1.77",
"lockfileVersion": 2,
"requires": true,
"packages": {
"": {
"name": "@scrypted/objectdetector",
"version": "0.1.73",
"version": "0.1.77",
"license": "Apache-2.0",
"dependencies": {
"@scrypted/common": "file:../../common",
"@scrypted/sdk": "file:../../sdk"
},
"devDependencies": {
"@types/node": "^20.11.0"
"@types/node": "^22.19.0"
}
},
"../../common": {
@@ -22,39 +22,42 @@
"license": "ISC",
"dependencies": {
"@scrypted/sdk": "file:../sdk",
"@scrypted/types": "^0.5.27",
"http-auth-utils": "^5.0.1",
"typescript": "^5.5.3"
},
"devDependencies": {
"@types/node": "^20.11.0",
"@types/node": "^20.19.11",
"monaco-editor": "^0.50.0",
"ts-node": "^10.9.2"
}
},
"../../sdk": {
"name": "@scrypted/sdk",
"version": "0.3.106",
"version": "0.5.51",
"license": "ISC",
"dependencies": {
"@babel/preset-typescript": "^7.26.0",
"@rollup/plugin-commonjs": "^28.0.1",
"@babel/preset-typescript": "^7.27.1",
"@rollup/plugin-commonjs": "^28.0.9",
"@rollup/plugin-json": "^6.1.0",
"@rollup/plugin-node-resolve": "^15.3.0",
"@rollup/plugin-typescript": "^12.1.1",
"@rollup/plugin-node-resolve": "^16.0.1",
"@rollup/plugin-terser": "^0.4.4",
"@rollup/plugin-typescript": "^12.3.0",
"@rollup/plugin-virtual": "^3.0.2",
"adm-zip": "^0.5.16",
"axios": "^1.7.8",
"babel-loader": "^9.2.1",
"axios": "^1.10.0",
"babel-loader": "^10.0.0",
"babel-plugin-const-enum": "^1.2.0",
"ncp": "^2.0.0",
"openai": "^6.1.0",
"raw-loader": "^4.0.2",
"rimraf": "^6.0.1",
"rollup": "^4.27.4",
"rollup": "^4.52.5",
"tmp": "^0.2.3",
"ts-loader": "^9.5.1",
"ts-loader": "^9.5.4",
"tslib": "^2.8.1",
"typescript": "^5.6.3",
"webpack": "^5.96.1",
"typescript": "^5.9.3",
"webpack": "^5.99.9",
"webpack-bundle-analyzer": "^4.10.2"
},
"bin": {
@@ -67,9 +70,9 @@
"scrypted-webpack": "bin/scrypted-webpack.js"
},
"devDependencies": {
"@types/node": "^22.10.1",
"@types/node": "^24.9.2",
"ts-node": "^10.9.2",
"typedoc": "^0.26.11"
"typedoc": "^0.28.14"
}
},
"node_modules/@scrypted/common": {
@@ -81,19 +84,21 @@
"link": true
},
"node_modules/@types/node": {
"version": "20.11.0",
"resolved": "https://registry.npmjs.org/@types/node/-/node-20.11.0.tgz",
"integrity": "sha512-o9bjXmDNcF7GbM4CNQpmi+TutCgap/K3w1JyKgxAjqx41zp9qlIAVFi0IhCNsJcXolEqLWhbFbEeL0PvYm4pcQ==",
"version": "22.19.0",
"resolved": "https://registry.npmjs.org/@types/node/-/node-22.19.0.tgz",
"integrity": "sha512-xpr/lmLPQEj+TUnHmR+Ab91/glhJvsqcjB+yY0Ix9GO70H6Lb4FHH5GeqdOE5btAx7eIMwuHkp4H2MSkLcqWbA==",
"dev": true,
"license": "MIT",
"dependencies": {
"undici-types": "~5.26.4"
"undici-types": "~6.21.0"
}
},
"node_modules/undici-types": {
"version": "5.26.5",
"resolved": "https://registry.npmjs.org/undici-types/-/undici-types-5.26.5.tgz",
"integrity": "sha512-JlCMO+ehdEIKqlFxk6IfVoAUVmgz7cU7zD/h9XZ0qzeosSHmUJVOzSQvvYSYWXkFXC+IfLKSIffhv0sVZup6pA==",
"dev": true
"version": "6.21.0",
"resolved": "https://registry.npmjs.org/undici-types/-/undici-types-6.21.0.tgz",
"integrity": "sha512-iwDZqg0QAGrg9Rav5H4n0M64c3mkR59cJ6wQp+7C4nI0gsmExaedaYLNO44eT4AtBBwjbTiGPMlt2Md0T9H9JQ==",
"dev": true,
"license": "MIT"
},
"node-moving-things-tracker": {
"version": "0.9.1",
@@ -118,7 +123,8 @@
"version": "file:../../common",
"requires": {
"@scrypted/sdk": "file:../sdk",
"@types/node": "^20.11.0",
"@scrypted/types": "^0.5.27",
"@types/node": "^20.19.11",
"http-auth-utils": "^5.0.1",
"monaco-editor": "^0.50.0",
"ts-node": "^10.9.2",
@@ -128,44 +134,46 @@
"@scrypted/sdk": {
"version": "file:../../sdk",
"requires": {
"@babel/preset-typescript": "^7.26.0",
"@rollup/plugin-commonjs": "^28.0.1",
"@babel/preset-typescript": "^7.27.1",
"@rollup/plugin-commonjs": "^28.0.9",
"@rollup/plugin-json": "^6.1.0",
"@rollup/plugin-node-resolve": "^15.3.0",
"@rollup/plugin-typescript": "^12.1.1",
"@rollup/plugin-node-resolve": "^16.0.1",
"@rollup/plugin-terser": "^0.4.4",
"@rollup/plugin-typescript": "^12.3.0",
"@rollup/plugin-virtual": "^3.0.2",
"@types/node": "^22.10.1",
"@types/node": "^24.9.2",
"adm-zip": "^0.5.16",
"axios": "^1.7.8",
"babel-loader": "^9.2.1",
"axios": "^1.10.0",
"babel-loader": "^10.0.0",
"babel-plugin-const-enum": "^1.2.0",
"ncp": "^2.0.0",
"openai": "^6.1.0",
"raw-loader": "^4.0.2",
"rimraf": "^6.0.1",
"rollup": "^4.27.4",
"rollup": "^4.52.5",
"tmp": "^0.2.3",
"ts-loader": "^9.5.1",
"ts-loader": "^9.5.4",
"ts-node": "^10.9.2",
"tslib": "^2.8.1",
"typedoc": "^0.26.11",
"typescript": "^5.6.3",
"webpack": "^5.96.1",
"typedoc": "^0.28.14",
"typescript": "^5.9.3",
"webpack": "^5.99.9",
"webpack-bundle-analyzer": "^4.10.2"
}
},
"@types/node": {
"version": "20.11.0",
"resolved": "https://registry.npmjs.org/@types/node/-/node-20.11.0.tgz",
"integrity": "sha512-o9bjXmDNcF7GbM4CNQpmi+TutCgap/K3w1JyKgxAjqx41zp9qlIAVFi0IhCNsJcXolEqLWhbFbEeL0PvYm4pcQ==",
"version": "22.19.0",
"resolved": "https://registry.npmjs.org/@types/node/-/node-22.19.0.tgz",
"integrity": "sha512-xpr/lmLPQEj+TUnHmR+Ab91/glhJvsqcjB+yY0Ix9GO70H6Lb4FHH5GeqdOE5btAx7eIMwuHkp4H2MSkLcqWbA==",
"dev": true,
"requires": {
"undici-types": "~5.26.4"
"undici-types": "~6.21.0"
}
},
"undici-types": {
"version": "5.26.5",
"resolved": "https://registry.npmjs.org/undici-types/-/undici-types-5.26.5.tgz",
"integrity": "sha512-JlCMO+ehdEIKqlFxk6IfVoAUVmgz7cU7zD/h9XZ0qzeosSHmUJVOzSQvvYSYWXkFXC+IfLKSIffhv0sVZup6pA==",
"version": "6.21.0",
"resolved": "https://registry.npmjs.org/undici-types/-/undici-types-6.21.0.tgz",
"integrity": "sha512-iwDZqg0QAGrg9Rav5H4n0M64c3mkR59cJ6wQp+7C4nI0gsmExaedaYLNO44eT4AtBBwjbTiGPMlt2Md0T9H9JQ==",
"dev": true
}
}

View File

@@ -1,6 +1,6 @@
{
"name": "@scrypted/objectdetector",
"version": "0.1.73",
"version": "0.1.77",
"description": "Scrypted Video Analysis Plugin. Installed alongside a detection service like OpenCV or TensorFlow.",
"author": "Scrypted",
"license": "Apache-2.0",
@@ -49,6 +49,6 @@
"@scrypted/sdk": "file:../../sdk"
},
"devDependencies": {
"@types/node": "^20.11.0"
"@types/node": "^22.19.0"
}
}

View File

@@ -405,7 +405,9 @@ class ObjectDetectionMixin extends SettingsMixinDeviceBase<VideoCamera & Camera
}, 30000);
signal.promise.finally(() => clearInterval(interval));
const currentDetections = new Map<string, number>();
const stationaryDetections = new Map<string, number>();
const filteredDetections = new Map<string, number>();
const movingDetections = new Map<string, number>();
let lastReport = 0;
updatePipelineStatus('waiting result');
@@ -477,21 +479,32 @@ class ObjectDetectionMixin extends SettingsMixinDeviceBase<VideoCamera & Camera
if (!this.hasMotionType) {
this.plugin.trackDetection();
const numZonedDetections = zonedDetections.filter(d => d.className !== 'motion').length;
const numOriginalDetections = originalDetections.filter(d => d.className !== 'motion').length;
if (numZonedDetections !== numOriginalDetections)
currentDetections.set('filtered', (currentDetections.get('filtered') || 0) + 1);
for (const d of originalDetections) {
if (!zonedDetections.includes(d)) {
filteredDetections.set(d.className, Math.max(filteredDetections.get(d.className) || 0, d.score));
}
}
for (const d of detected.detected.detections) {
currentDetections.set(d.className, Math.max(currentDetections.get(d.className) || 0, d.score));
const set = d.movement?.moving ? movingDetections : stationaryDetections;
set.set(d.className, Math.max(set.get(d.className) || 0, d.score));
}
if (now > lastReport + 10000) {
const found = [...currentDetections.entries()].map(([className, score]) => `${className} (${score})`);
if (!found.length)
found.push('[no detections]');
this.console.log(`[${Math.round((now - start) / 100) / 10}s] Detected:`, ...found);
currentDetections.clear();
const classScores = (set: Map<string, number>) => {
const found = [...set.entries()].map(([className, score]) => `${className} (${score})`);
if (!found.length)
found.push('[no detections]');
return found;
};
this.console.log(`[${Math.round((now - start) / 100) / 10}s] Detected (stationary):`, ...classScores(stationaryDetections));
this.console.log(`[${Math.round((now - start) / 100) / 10}s] Detected (moving) :`, ...classScores(movingDetections));
this.console.log(`[${Math.round((now - start) / 100) / 10}s] Detected (filtered) :`, ...classScores(filteredDetections));
this.console.log(`[${Math.round((now - start) / 100) / 10}s] Zones : ${zones.length}`);
stationaryDetections.clear();
movingDetections.clear();
filteredDetections.clear();
lastReport = now;
}
}
@@ -697,7 +710,7 @@ class ObjectDetectionMixin extends SettingsMixinDeviceBase<VideoCamera & Camera
const gstreamer = sdk.systemManager.getDeviceById('@scrypted/python-codecs', 'gstreamer') || undefined;
const libav = sdk.systemManager.getDeviceById('@scrypted/python-codecs', 'libav') || undefined;
const ffmpeg = sdk.systemManager.getDeviceById('@scrypted/objectdetector', 'ffmpeg') || undefined;
const use = pipelines.find(p => p.name === frameGenerator) || webassembly || gstreamer || libav || ffmpeg;
const use = pipelines.find(p => p.name === frameGenerator) || webassembly || libav || gstreamer || ffmpeg;
return use.id;
}

View File

@@ -1,4 +1,4 @@
{
"scrypted.debugHost": "koushik-ubuntu24",
"scrypted.debugHost": "scrypted-nvr",
}

View File

@@ -1,12 +1,12 @@
{
"name": "@scrypted/onnx",
"version": "0.1.127",
"version": "0.1.130",
"lockfileVersion": 2,
"requires": true,
"packages": {
"": {
"name": "@scrypted/onnx",
"version": "0.1.127",
"version": "0.1.130",
"devDependencies": {
"@scrypted/sdk": "file:../../sdk"
}

View File

@@ -50,5 +50,5 @@
"devDependencies": {
"@scrypted/sdk": "file:../../sdk"
},
"version": "0.1.127"
"version": "0.1.130"
}

View File

@@ -4,6 +4,7 @@ import ast
import asyncio
import concurrent.futures
import json
import os
import platform
import sys
import threading
@@ -23,6 +24,7 @@ from predict import PredictPlugin
from .face_recognition import ONNXFaceRecognition
from .clip_embedding import ONNXClipEmbedding
from .segment import ONNXSegmentation
try:
from .text_recognition import ONNXTextRecognition
@@ -31,15 +33,11 @@ except:
availableModels = [
"Default",
"scrypted_yolov10m_320",
"scrypted_yolov10n_320",
"scrypted_yolo_nas_s_320",
"scrypted_yolov6n_320",
"scrypted_yolov6s_320",
"scrypted_yolov9c_320",
"scrypted_yolov9s_320",
"scrypted_yolov9t_320",
"scrypted_yolov8n_320",
"scrypted_yolov9t_relu_test",
"scrypted_yolov9c_relu",
"scrypted_yolov9m_relu",
"scrypted_yolov9s_relu",
"scrypted_yolov9t_relu",
]
@@ -66,7 +64,7 @@ class ONNXPlugin(
if model == "Default" or model not in availableModels:
if model != "Default":
self.storage.setItem("model", "Default")
model = "scrypted_yolov9c_320"
model = "scrypted_yolov9c_relu"
self.yolo = "yolo" in model
self.scrypted_yolov10 = "scrypted_yolov10" in model
self.scrypted_yolo_nas = "scrypted_yolo_nas" in model
@@ -76,17 +74,8 @@ class ONNXPlugin(
print(f"model {model}")
onnxmodel = (
model
if self.scrypted_yolo_nas
else "best" if self.scrypted_model else model
)
model_version = "v3"
onnxfile = self.downloadFile(
f"https://github.com/koush/onnx-models/raw/main/{model}/{onnxmodel}.onnx",
f"{model_version}/{model}/{onnxmodel}.onnx",
)
model_path = self.downloadHuggingFaceModelLocalFallback(model)
onnxfile = os.path.join(model_path, f"{model}.onnx")
print(onnxfile)
@@ -167,6 +156,7 @@ class ONNXPlugin(
self.faceDevice = None
self.textDevice = None
self.clipDevice = None
self.segmentDevice = None
if not self.forked:
asyncio.ensure_future(self.prepareRecognitionModels(), loop=self.loop)
@@ -211,6 +201,18 @@ class ONNXPlugin(
"name": "ONNX CLIP Embedding",
}
)
await scrypted_sdk.deviceManager.onDeviceDiscovered(
{
"nativeId": "segment",
"type": scrypted_sdk.ScryptedDeviceType.Builtin.value,
"interfaces": [
scrypted_sdk.ScryptedInterface.ClusterForkInterface.value,
scrypted_sdk.ScryptedInterface.ObjectDetection.value,
],
"name": "ONNX Segmentation",
}
)
except:
pass
@@ -224,6 +226,9 @@ class ONNXPlugin(
elif nativeId == "clipembedding":
self.clipDevice = self.clipDevice or ONNXClipEmbedding(self, nativeId)
return self.clipDevice
elif nativeId == "segment":
self.segmentDevice = self.segmentDevice or ONNXSegmentation(self, nativeId)
return self.segmentDevice
custom_model = self.custom_models.get(nativeId, None)
if custom_model:
return custom_model

View File

@@ -2,6 +2,7 @@ from __future__ import annotations
import asyncio
import concurrent.futures
import os
import platform
import sys
import threading
@@ -15,12 +16,8 @@ from predict.face_recognize import FaceRecognizeDetection
class ONNXFaceRecognition(FaceRecognizeDetection):
def downloadModel(self, model: str):
onnxmodel = "best" if "scrypted" in model else model
model_version = "v1"
onnxfile = self.downloadFile(
f"https://github.com/koush/onnx-models/raw/main/{model}/{onnxmodel}.onnx",
f"{model_version}/{model}/{onnxmodel}.onnx",
)
model_path = self.downloadHuggingFaceModelLocalFallback(model)
onnxfile = os.path.join(model_path, f"{model}.onnx")
print(onnxfile)
compiled_models_array = []

View File

@@ -0,0 +1,55 @@
from __future__ import annotations
import asyncio
import os
import traceback
import numpy as np
import onnxruntime
from predict.segment import Segmentation
from common import yolov9_seg
from common import async_infer
prepareExecutor, predictExecutor = async_infer.create_executors("Segment")
class ONNXSegmentation(Segmentation):
def __init__(self, plugin, nativeId: str):
super().__init__(plugin=plugin, nativeId=nativeId)
def loadModel(self, name):
model_path = self.plugin.downloadHuggingFaceModelLocalFallback(name)
onnxfile = os.path.join(model_path, f"{name}.onnx")
model = onnxruntime.InferenceSession(onnxfile)
return model
async def detect_once(self, input, settings, src_size, cvss):
def prepare():
im = np.expand_dims(input, axis=0)
im = im.transpose((0, 3, 1, 2)) # BHWC to BCHW, (n, 3, h, w)
im = im.astype(np.float32) / 255.0
im = np.ascontiguousarray(im) # contiguous
return im
def predict():
input_tensor = prepare()
output_tensors = self.model.run(None, {self.input_name: input_tensor})
pred = output_tensors[0]
proto = output_tensors[1]
pred = yolov9_seg.non_max_suppression(pred, nm=32)
return self.process_segmentation_output(pred, proto)
try:
objs = await asyncio.get_event_loop().run_in_executor(
predictExecutor, lambda: predict()
)
except:
traceback.print_exc()
raise
ret = self.create_detection_result(objs, src_size, cvss)
return ret

View File

@@ -2,6 +2,7 @@ from __future__ import annotations
import asyncio
import concurrent.futures
import os
import platform
import sys
import threading
@@ -15,12 +16,8 @@ from predict.text_recognize import TextRecognition
class ONNXTextRecognition(TextRecognition):
def downloadModel(self, model: str):
onnxmodel = model
model_version = "v4"
onnxfile = self.downloadFile(
f"https://github.com/koush/onnx-models/raw/main/{model}/{onnxmodel}.onnx",
f"{model_version}/{model}/{onnxmodel}.onnx",
)
model_path = self.downloadHuggingFaceModelLocalFallback(model)
onnxfile = os.path.join(model_path, f"{model}.onnx")
print(onnxfile)
compiled_models_array = []

View File

@@ -73,6 +73,7 @@ export async function listenEvents(thisDevice: ScryptedDeviceBase, client: Onvif
const ret = {
destroy() {
clearTimeout(binaryTimeout);
clearTimeout(motionTimeout);
try {
client.unsubscribe();

View File

@@ -1,41 +1,42 @@
{
"name": "@scrypted/openvino",
"version": "0.1.188",
"version": "0.1.194",
"lockfileVersion": 2,
"requires": true,
"packages": {
"": {
"name": "@scrypted/openvino",
"version": "0.1.188",
"version": "0.1.194",
"devDependencies": {
"@scrypted/sdk": "file:../../sdk"
}
},
"../../sdk": {
"name": "@scrypted/sdk",
"version": "0.5.20",
"version": "0.5.55",
"dev": true,
"license": "ISC",
"dependencies": {
"@babel/preset-typescript": "^7.27.1",
"@rollup/plugin-commonjs": "^28.0.5",
"@rollup/plugin-commonjs": "^28.0.9",
"@rollup/plugin-json": "^6.1.0",
"@rollup/plugin-node-resolve": "^16.0.1",
"@rollup/plugin-typescript": "^12.1.2",
"@rollup/plugin-terser": "^0.4.4",
"@rollup/plugin-typescript": "^12.3.0",
"@rollup/plugin-virtual": "^3.0.2",
"adm-zip": "^0.5.16",
"axios": "^1.10.0",
"babel-loader": "^10.0.0",
"babel-plugin-const-enum": "^1.2.0",
"ncp": "^2.0.0",
"openai": "^5.3.0",
"openai": "^6.1.0",
"raw-loader": "^4.0.2",
"rimraf": "^6.0.1",
"rollup": "^4.43.0",
"rollup": "^4.52.5",
"tmp": "^0.2.3",
"ts-loader": "^9.5.2",
"ts-loader": "^9.5.4",
"tslib": "^2.8.1",
"typescript": "^5.8.3",
"typescript": "^5.9.3",
"webpack": "^5.99.9",
"webpack-bundle-analyzer": "^4.10.2"
},
@@ -49,9 +50,9 @@
"scrypted-webpack": "bin/scrypted-webpack.js"
},
"devDependencies": {
"@types/node": "^24.0.1",
"@types/node": "^24.9.2",
"ts-node": "^10.9.2",
"typedoc": "^0.28.5"
"typedoc": "^0.28.14"
}
},
"../sdk": {
@@ -67,27 +68,28 @@
"version": "file:../../sdk",
"requires": {
"@babel/preset-typescript": "^7.27.1",
"@rollup/plugin-commonjs": "^28.0.5",
"@rollup/plugin-commonjs": "^28.0.9",
"@rollup/plugin-json": "^6.1.0",
"@rollup/plugin-node-resolve": "^16.0.1",
"@rollup/plugin-typescript": "^12.1.2",
"@rollup/plugin-terser": "^0.4.4",
"@rollup/plugin-typescript": "^12.3.0",
"@rollup/plugin-virtual": "^3.0.2",
"@types/node": "^24.0.1",
"@types/node": "^24.9.2",
"adm-zip": "^0.5.16",
"axios": "^1.10.0",
"babel-loader": "^10.0.0",
"babel-plugin-const-enum": "^1.2.0",
"ncp": "^2.0.0",
"openai": "^5.3.0",
"openai": "^6.1.0",
"raw-loader": "^4.0.2",
"rimraf": "^6.0.1",
"rollup": "^4.43.0",
"rollup": "^4.52.5",
"tmp": "^0.2.3",
"ts-loader": "^9.5.2",
"ts-loader": "^9.5.4",
"ts-node": "^10.9.2",
"tslib": "^2.8.1",
"typedoc": "^0.28.5",
"typescript": "^5.8.3",
"typedoc": "^0.28.14",
"typescript": "^5.9.3",
"webpack": "^5.99.9",
"webpack-bundle-analyzer": "^4.10.2"
}

View File

@@ -50,5 +50,5 @@
"devDependencies": {
"@scrypted/sdk": "file:../../sdk"
},
"version": "0.1.188"
"version": "0.1.194"
}

View File

@@ -1,6 +1,5 @@
import concurrent.futures
def create_executors(name: str):
prepare = concurrent.futures.ThreadPoolExecutor(1, f"{name}Prepare")
predict = concurrent.futures.ThreadPoolExecutor(1, f"{name}Predict")

View File

@@ -0,0 +1,82 @@
COCO_LABELS = {
0: "person",
1: "bicycle",
2: "car",
3: "motorcycle",
4: "airplane",
5: "bus",
6: "train",
7: "truck",
8: "boat",
9: "traffic light",
10: "fire hydrant",
11: "stop sign",
12: "parking meter",
13: "bench",
14: "bird",
15: "cat",
16: "dog",
17: "horse",
18: "sheep",
19: "cow",
20: "elephant",
21: "bear",
22: "zebra",
23: "giraffe",
24: "backpack",
25: "umbrella",
26: "handbag",
27: "tie",
28: "suitcase",
29: "frisbee",
30: "skis",
31: "snowboard",
32: "sports ball",
33: "kite",
34: "baseball bat",
35: "baseball glove",
36: "skateboard",
37: "surfboard",
38: "tennis racket",
39: "bottle",
40: "wine glass",
41: "cup",
42: "fork",
43: "knife",
44: "spoon",
45: "bowl",
46: "banana",
47: "apple",
48: "sandwich",
49: "orange",
50: "broccoli",
51: "carrot",
52: "hot dog",
53: "pizza",
54: "donut",
55: "cake",
56: "chair",
57: "couch",
58: "potted plant",
59: "bed",
60: "dining table",
61: "toilet",
62: "tv",
63: "laptop",
64: "mouse",
65: "remote",
66: "keyboard",
67: "cell phone",
68: "microwave",
69: "oven",
70: "toaster",
71: "sink",
72: "refrigerator",
73: "book",
74: "clock",
75: "vase",
76: "scissors",
77: "teddy bear",
78: "hair drier",
79: "toothbrush",
}

View File

@@ -0,0 +1,355 @@
"""
YOLOv9 Segmentation Parser - Numpy Implementation
This module provides pure numpy implementations of mask processing functions
that are equivalent to their torch counterparts in utils/segment/general.py.
"""
import numpy as np
import cv2
import time
def crop_mask_numpy(masks, boxes):
"""
Crop predicted masks by zeroing out everything not in the predicted bbox.
Numpy version of crop_mask.
Args:
masks: numpy array [n, h, w] - predicted masks
boxes: numpy array [n, 4] - bbox coords [x1, y1, x2, y2]
Returns:
numpy array [n, h, w] - cropped masks
"""
n, h, w = masks.shape
# Safely clamp and normalize bounding boxes
boxes_clamped = np.clip(boxes, 0, None)
boxes_clamped[:, 0] = np.minimum(boxes_clamped[:, 0], w) # x1 <= w
boxes_clamped[:, 2] = np.minimum(boxes_clamped[:, 2], w) # x2 <= w
boxes_clamped[:, 1] = np.minimum(boxes_clamped[:, 1], h) # y1 <= h
boxes_clamped[:, 3] = np.minimum(boxes_clamped[:, 3], h) # y2 <= h
# Ensure x1 <= x2 and y1 <= y2
boxes_clamped[:, 0] = np.minimum(boxes_clamped[:, 0], boxes_clamped[:, 2]) # x1 <= x2
boxes_clamped[:, 1] = np.minimum(boxes_clamped[:, 1], boxes_clamped[:, 3]) # y1 <= y2
x1 = boxes_clamped[:, 0][:, None, None] # (n, 1, 1)
y1 = boxes_clamped[:, 1][:, None, None] # (n, 1, 1)
x2 = boxes_clamped[:, 2][:, None, None] # (n, 1, 1)
y2 = boxes_clamped[:, 3][:, None, None] # (n, 1, 1)
r = np.arange(w).reshape(1, 1, -1) # (1, 1, w)
c = np.arange(h).reshape(1, -1, 1) # (1, h, 1)
crop_region = (r >= x1) & (r < x2) & (c >= y1) & (c < y2)
return masks * crop_region
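# Worked example of crop_mask_numpy (illustrative sketch, values are not from
# the plugin): an all-ones 4x4 mask cropped to box [1, 1, 3, 3] keeps only the
# 2x2 center.
example_masks = np.ones((1, 4, 4), dtype=np.float32)
example_boxes = np.array([[1.0, 1.0, 3.0, 3.0]], dtype=np.float32)
example_cropped = crop_mask_numpy(example_masks, example_boxes)
# example_cropped[0] ==
# [[0, 0, 0, 0],
#  [0, 1, 1, 0],
#  [0, 1, 1, 0],
#  [0, 0, 0, 0]]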
def _upsample_bilinear(masks, target_shape):
"""
Upsample masks bilinearly to target shape.
Matches PyTorch's F.interpolate(mode='bilinear', align_corners=False).
Args:
masks: numpy array [n, h, w]
target_shape: tuple (target_h, target_w)
Returns:
numpy array [n, target_h, target_w]
"""
# Defensive check: ensure masks has valid shape
if len(masks.shape) != 3 or masks.shape[0] == 0:
print(f"Warning: unexpected mask shape for upsampling: {masks.shape}")
return masks
n, h, w = masks.shape
masks_transposed = masks.transpose(1, 2, 0) # (h, w, n)
try:
upsampled = cv2.resize(
masks_transposed.astype(np.float32),
(target_shape[1], target_shape[0]), # cv2 uses (width, height)
interpolation=cv2.INTER_LINEAR
)
# cv2.resize may return 2D for single-channel input, need to restore 3D shape
if len(upsampled.shape) == 2:
# Input was single mask, cv2 returned (H, W) instead of (H, W, 1)
upsampled = upsampled[:, :, None] # (H, W, 1)
result = upsampled.transpose(2, 0, 1) # (n, h, w)
# Validate output shape
if result.shape != (n, target_shape[0], target_shape[1]):
print(f"Warning: upsampled mask shape mismatch. Expected {(n, target_shape[0], target_shape[1])}, got {result.shape}")
return masks
return result
except Exception as e:
print(f"Warning: error upscaling masks: {e}, falling back to original masks")
return masks
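# Quick check of _upsample_bilinear (illustrative): a single 4x4 mask upsampled
# to 8x8 keeps its leading dimension thanks to the 2D -> 3D restore above.
example_small = np.arange(16, dtype=np.float32).reshape(1, 4, 4) / 15.0
example_big = _upsample_bilinear(example_small, (8, 8))
# example_big.shape == (1, 8, 8)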
def process_mask_numpy(protos, masks_in, bboxes, shape, upsample=False):
"""
Process masks using numpy.
Numpy version of process_mask from utils/segment/general.py.
Args:
protos: numpy array or torch tensor [c, mh, mw] - prototype masks
masks_in: numpy array or torch tensor [n, c] - mask coefficients
bboxes: numpy array or torch tensor [n, 4] - bbox coords [x1, y1, x2, y2]
shape: tuple (ih, iw) - input image size (height, width)
upsample: bool - whether to upsample masks to image size
Returns:
numpy array [n, ih, iw] (or [n, mh, mw] if upsample=False) - binary masks
"""
c, mh, mw = protos.shape # prototype: CHW
ih, iw = shape # input image: height, width
# Validate inputs
if masks_in.shape[0] == 0:
print(f"Warning: empty masks_in shape: {masks_in.shape}")
return np.zeros((0, ih if upsample else mh, iw if upsample else mw), dtype=bool)
if masks_in.shape[1] != c:
print(f"Warning: masks_in shape mismatch: expected [:, {c}], got {masks_in.shape}")
return np.zeros((0, ih if upsample else mh, iw if upsample else mw), dtype=bool)
# Flatten protos for matrix multiplication: [c, mh, mw] -> [c, mh*mw]
protos_flat = protos.reshape(c, -1)
# Matrix multiplication: [n, c] @ [c, mh*mw] = [n, mh*mw]
masks_flat = masks_in @ protos_flat
# Apply sigmoid and reshape: [n, mh*mw] -> [n, mh, mw]
masks = (1 / (1 + np.exp(-masks_flat))).reshape(-1, mh, mw)
# Scale bboxes from image coordinates to mask coordinates
downsampled_bboxes = bboxes.copy()
downsampled_bboxes[:, 0] *= mw / iw # x1
downsampled_bboxes[:, 2] *= mw / iw # x2
downsampled_bboxes[:, 3] *= mh / ih # y2
downsampled_bboxes[:, 1] *= mh / ih # y1
# Crop masks to bounding boxes
masks = crop_mask_numpy(masks, downsampled_bboxes)
# Upsample to image size if requested
if upsample:
masks = _upsample_bilinear(masks, shape)
# Binarize masks with threshold 0.5
return (masks > 0.5)
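# Sketch of the decode step above with assumed toy shapes (32 prototype
# channels at 80x80, two detections): each detection's coefficient vector
# weights the prototypes, and the sigmoid maps the sum to per-pixel
# probabilities before cropping and thresholding.
demo_rng = np.random.default_rng(0)
demo_protos = demo_rng.standard_normal((32, 80, 80)).astype(np.float32)
demo_coeffs = demo_rng.standard_normal((2, 32)).astype(np.float32)
demo_flat = demo_coeffs @ demo_protos.reshape(32, -1)               # [2, 6400]
demo_probs = (1.0 / (1.0 + np.exp(-demo_flat))).reshape(-1, 80, 80)  # [2, 80, 80]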
def masks2segments_numpy(masks):
"""
Convert binary masks to segment contours (list of points).
Returns all contours for each mask (multiple polygons possible).
Args:
masks: numpy array [n, h, w] - binary masks (True/False or 0/1)
Returns:
List of lists of numpy arrays. Each inner list contains contours for one mask,
where each contour has shape [num_points, 2] containing contour points [x, y]
"""
segments = []
for mask in masks:
# Convert to uint8 for cv2
mask_uint8 = (mask * 255).astype(np.uint8)
# Find contours
contours, _ = cv2.findContours(
mask_uint8,
mode=cv2.RETR_EXTERNAL, # only outer contours
method=cv2.CHAIN_APPROX_SIMPLE # simplified contours
)
mask_contours = []
for contour in contours:
# Squeeze to remove extra dimension and convert to [x, y] format
contour = contour.squeeze().astype(np.float32)
# cv2 returns [x, y], ensure shape is [n, 2]
if len(contour.shape) == 1:
contour = contour.reshape(1, -1)
mask_contours.append(contour)
# If no contours found, add empty list
segments.append(mask_contours if mask_contours else [np.array([], dtype=np.float32).reshape(0, 2)])
return segments
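# Example (illustrative): a filled 4x4 square inside an 8x8 mask yields one
# external contour whose simplified points are the square's corners.
demo_mask = np.zeros((1, 8, 8), dtype=bool)
demo_mask[0, 2:6, 2:6] = True
demo_contours = masks2segments_numpy(demo_mask)
# demo_contours[0][0] has shape [4, 2]: corners (2,2), (2,5), (5,5), (5,2)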
def masks2polygons_numpy(masks):
"""
Convert binary masks to polygon points for plotting.
Args:
masks: numpy array [n, h, w] - binary masks (True/False or 0/1)
Returns:
List of lists, each containing [x, y] coordinates as a flat list suitable for drawing
Format: [[[x1, y1], [x2, y2], ...], ...] or [[x1, y1, x2, y2, ...], ...]
"""
    segments = masks2segments_numpy(masks)
    # each mask has a list of contour arrays; convert each contour to nested [x, y] lists
    return [[contour.tolist() for contour in mask_contours] for mask_contours in segments]
def xywh2xyxy(x):
"""Convert [x_center, y_center, width, height] to [x1, y1, x2, y2]"""
y = np.copy(x)
y[:, 0] = x[:, 0] - x[:, 2] / 2 # x1
y[:, 1] = x[:, 1] - x[:, 3] / 2 # y1
y[:, 2] = x[:, 0] + x[:, 2] / 2 # x2
y[:, 3] = x[:, 1] + x[:, 3] / 2 # y2
return y
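# Worked numbers: a 20x10 box centered at (50, 40) becomes corner form
# [40, 35, 60, 45].
demo_xyxy = xywh2xyxy(np.array([[50.0, 40.0, 20.0, 10.0]], dtype=np.float32))
# demo_xyxy == [[40., 35., 60., 45.]]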
def box_iou(box1, box2):
"""Calculate IoU between two sets of boxes"""
area1 = (box1[:, 2] - box1[:, 0]) * (box1[:, 3] - box1[:, 1])
area2 = (box2[:, 2] - box2[:, 0]) * (box2[:, 3] - box2[:, 1])
iou = np.zeros((len(box1), len(box2)), dtype=np.float32)
for i in range(len(box1)):
for j in range(len(box2)):
inter_x1 = np.maximum(box1[i, 0], box2[j, 0])
inter_y1 = np.maximum(box1[i, 1], box2[j, 1])
inter_x2 = np.minimum(box1[i, 2], box2[j, 2])
inter_y2 = np.minimum(box1[i, 3], box2[j, 3])
inter_w = np.maximum(0, inter_x2 - inter_x1)
inter_h = np.maximum(0, inter_y2 - inter_y1)
inter_area = inter_w * inter_h
union = area1[i] + area2[j] - inter_area
iou[i, j] = inter_area / union if union > 0 else 0
return iou
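# Example (illustrative): two 10x10 boxes overlapping on a 5x5 patch give
# IoU = 25 / (100 + 100 - 25) ~= 0.143. The double loop is O(n*m), which is
# fine for the handful of boxes NMS sees per image.
demo_iou = box_iou(
    np.array([[0.0, 0.0, 10.0, 10.0]], dtype=np.float32),
    np.array([[5.0, 5.0, 15.0, 15.0]], dtype=np.float32),
)
# demo_iou == [[0.142857...]]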
def nms(boxes, scores, iou_thres):
"""Non-Maximum Suppression implementation in NumPy"""
if len(boxes) == 0:
return np.array([], dtype=np.int32)
indices = np.argsort(-scores)
keep = []
while len(indices) > 0:
i = indices[0]
keep.append(i)
if len(indices) == 1:
break
iou_scores = box_iou(boxes[indices[0:1]], boxes[indices[1:]])[0]
indices = indices[1:][iou_scores < iou_thres]
return np.array(keep, dtype=np.int32)
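# Example (illustrative): boxes 0 and 1 overlap heavily (IoU ~= 0.68), so the
# lower-scoring box 1 is suppressed at iou_thres=0.45; box 2 is disjoint.
demo_keep = nms(
    np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=np.float32),
    np.array([0.9, 0.8, 0.7], dtype=np.float32),
    0.45,
)
# demo_keep == [0, 2]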
def non_max_suppression(
prediction,
conf_thres=0.25,
iou_thres=0.45,
classes=None,
agnostic=False,
multi_label=False,
labels=(),
max_det=300,
nm=0,
):
"""Non-Maximum Suppression (NMS) on inference results to reject overlapping detections
Returns:
list of detections, on (n,6) tensor per image [xyxy, conf, cls]
"""
if isinstance(prediction, (list, tuple)):
prediction = prediction[0]
bs = prediction.shape[0]
nc = prediction.shape[1] - nm - 4
mi = 4 + nc
    assert 0 <= conf_thres <= 1, f'Invalid Confidence threshold {conf_thres}, valid values are between 0.0 and 1.0'
    assert 0 <= iou_thres <= 1, f'Invalid IoU {iou_thres}, valid values are between 0.0 and 1.0'
    xc = np.max(prediction[:, 4:mi], axis=1) > conf_thres  # candidates above the confidence threshold
max_wh = 7680
max_nms = 30000
time_limit = 2.5 + 0.05 * bs
redundant = True
multi_label &= nc > 1
merge = False
t = time.time()
output = [np.zeros((0, 6 + nm), dtype=np.float32)] * bs
for xi, pred_x in enumerate(prediction):
x = pred_x.T[xc[xi]]
if labels and len(labels[xi]):
lb = labels[xi]
v = np.zeros((len(lb), nc + nm + 5), dtype=x.dtype)
v[:, :4] = lb[:, 1:5]
v[np.arange(len(lb)), lb[:, 0].astype(int) + 4] = 1.0
x = np.concatenate((x, v), 0)
if x.shape[0] == 0:
continue
box = x[:, :4]
cls = x[:, 4:4 + nc]
mask = x[:, 4 + nc:] if nm > 0 else np.zeros((x.shape[0], nm), dtype=x.dtype)
box = xywh2xyxy(box)
if multi_label:
i, j = np.where(cls > conf_thres)
x = np.concatenate((box[i], x[i, 4 + j][:, None], j[:, None].astype(np.float32), mask[i]), 1)
else:
j = np.argmax(cls, axis=1, keepdims=True)
conf = cls[np.arange(len(cls)), j.flatten()][:, None]
x = np.concatenate((box, conf, j.astype(np.float32), mask), 1)[conf.flatten() > conf_thres]
if classes is not None:
class_tensor = np.array(classes, dtype=np.float32)
            class_mask = np.any(x[:, 5:6] == class_tensor, axis=1)
            x = x[class_mask]
n = x.shape[0]
if n == 0:
continue
elif n > max_nms:
x = x[x[:, 4].argsort()[::-1][:max_nms]]
else:
x = x[x[:, 4].argsort()[::-1]]
c = x[:, 5:6] * (0 if agnostic else max_wh)
boxes, scores = x[:, :4] + c, x[:, 4]
i = nms(boxes, scores, iou_thres)
if i.shape[0] > max_det:
i = i[:max_det]
if merge and (1 < n < 3E3):
iou = box_iou(boxes[i], boxes) > iou_thres
weights = iou * scores[None]
x[i, :4] = np.dot(weights, x[:, :4]).astype(np.float32) / weights.sum(1, keepdims=True)
if redundant:
i = i[iou.sum(1) > 1]
output[xi] = x[i]
if (time.time() - t) > time_limit:
import warnings
warnings.warn(f'WARNING ⚠️ NMS time limit {time_limit:.3f}s exceeded')
break
return output

View File

@@ -1,5 +1,6 @@
from __future__ import annotations
import os
import asyncio
import concurrent.futures
import json
@@ -8,15 +9,15 @@ import traceback
from typing import Any, Tuple
import numpy as np
import openvino as ov
from ov.segment import OpenVINOSegmentation
import scrypted_sdk
from PIL import Image
from scrypted_sdk.other import SettingValue
from scrypted_sdk.types import Setting
import common.yolo as yolo
from predict import Prediction, PredictPlugin
from predict.rectangle import Rectangle
import openvino as ov
from predict import PredictPlugin
from .custom_detection import OpenVINOCustomDetection
from .face_recognition import OpenVINOFaceRecognition
@@ -37,25 +38,11 @@ prepareExecutor = concurrent.futures.ThreadPoolExecutor(
availableModels = [
"Default",
"scrypted_yolov9c_relu_int8_320",
"scrypted_yolov9m_relu_int8_320",
"scrypted_yolov9s_relu_int8_320",
"scrypted_yolov9t_relu_int8_320",
"scrypted_yolov9c_int8_320",
"scrypted_yolov9m_int8_320",
"scrypted_yolov9s_int8_320",
"scrypted_yolov9t_int8_320",
"scrypted_yolov10m_320",
"scrypted_yolov10s_320",
"scrypted_yolov10n_320",
"scrypted_yolo_nas_s_320",
"scrypted_yolov6n_320",
"scrypted_yolov6s_320",
"scrypted_yolov9c_320",
"scrypted_yolov9m_320",
"scrypted_yolov9s_320",
"scrypted_yolov9t_320",
"scrypted_yolov8n_320",
"scrypted_yolov9t_relu_test_int8",
"scrypted_yolov9c_relu_int8",
"scrypted_yolov9m_relu_int8",
"scrypted_yolov9s_relu_int8",
"scrypted_yolov9t_relu_int8",
]
@@ -164,8 +151,6 @@ class OpenVINOPlugin(
self.mode = mode
# todo remove this, don't need to export two models anymore.
precision = "FP16"
self.precision = precision
model = self.storage.getItem("model") or "Default"
if model == "Default" or model not in availableModels:
@@ -176,62 +161,21 @@ class OpenVINOPlugin(
if model != "Default":
self.storage.setItem("model", "Default")
if arc or nvidia or npu:
model = "scrypted_yolov9c_relu_int8_320"
model = "scrypted_yolov9c_relu_int8"
elif iris_xe:
model = "scrypted_yolov9s_relu_int8_320"
model = "scrypted_yolov9s_relu_int8"
else:
model = "scrypted_yolov9t_relu_int8_320"
self.yolo = "yolo" in model
self.scrypted_yolov9 = "scrypted_yolov9" in model
self.scrypted_yolov10 = "scrypted_yolov10" in model
self.scrypted_yolo_nas = "scrypted_yolo_nas" in model
self.scrypted_yolo = "scrypted_yolo" in model
self.scrypted_model = "scrypted" in model
self.scrypted_yuv = "yuv" in model
self.sigmoid = model == "yolo-v4-tiny-tf"
model = "scrypted_yolov9t_relu_int8"
self.modelName = model
ovmodel = (
"best-converted"
if self.scrypted_yolov9
else "best" if self.scrypted_model else model
)
ovmodel = "best-converted"
model_version = "v7"
xmlFile = self.downloadFile(
f"https://github.com/koush/openvino-models/raw/main/{model}/{precision}/{ovmodel}.xml",
f"{model_version}/{model}/{precision}/{ovmodel}.xml",
)
self.downloadFile(
f"https://github.com/koush/openvino-models/raw/main/{model}/{precision}/{ovmodel}.bin",
f"{model_version}/{model}/{precision}/{ovmodel}.bin",
)
if self.scrypted_yolo_nas:
labelsFile = self.downloadFile(
"https://github.com/koush/openvino-models/raw/main/scrypted_nas_labels.txt",
"scrypted_nas_labels.txt",
)
elif self.scrypted_model:
labelsFile = self.downloadFile(
"https://github.com/koush/openvino-models/raw/main/scrypted_labels.txt",
"scrypted_labels.txt",
)
elif self.yolo:
labelsFile = self.downloadFile(
"https://github.com/koush/openvino-models/raw/main/coco_80cl.txt",
"coco_80cl.txt",
)
else:
labelsFile = self.downloadFile(
"https://github.com/koush/openvino-models/raw/main/coco_labels.txt",
"coco_labels.txt",
)
model_path = self.downloadHuggingFaceModelLocalFallback(model)
xmlFile = os.path.join(model_path, f"{ovmodel}.xml")
try:
self.compiled_model = self.core.compile_model(xmlFile, mode)
except:
import traceback
traceback.print_exc()
if "GPU" in mode:
@@ -244,36 +188,11 @@ class OpenVINOPlugin(
print("Reverting all settings.")
self.storage.removeItem("mode")
self.storage.removeItem("model")
self.storage.removeItem("precision")
self.requestRestart()
self.infer_queue = ov.AsyncInferQueue(self.compiled_model)
def predict(output):
if not self.yolo:
objs = []
for values in output[0][0]:
valid, index, confidence, l, t, r, b = values
if valid == -1:
break
def torelative(value: float):
return value * self.model_dim
l = torelative(l)
t = torelative(t)
r = torelative(r)
b = torelative(b)
obj = Prediction(index - 1, confidence, Rectangle(l, t, r, b))
objs.append(obj)
return objs
if self.scrypted_yolov10:
return yolo.parse_yolov10(output[0])
if self.scrypted_yolo_nas:
return yolo.parse_yolo_nas([output[1], output[0]])
return yolo.parse_yolov9(output[0])
def callback(infer_request, future: asyncio.Future):
@@ -292,18 +211,18 @@ class OpenVINOPlugin(
)
print(f"model/mode: {model}/{mode}")
# mobilenet 1,300,300,3
# yolov3/4 1,416,416,3
# yolov9 1,3,320,320
# index 2 of the shape is the input size in all of these layouts.
self.model_dim = self.compiled_model.inputs[0].shape[2]
labels_contents = open(labelsFile, "r").read()
self.labels = parse_label_contents(labels_contents)
self.labels = {
0: 'person',
1: 'vehicle',
2: 'animal',
}
self.faceDevice = None
self.textDevice = None
self.clipDevice = None
self.segmentDevice = None
if not self.forked:
asyncio.ensure_future(self.prepareRecognitionModels(), loop=self.loop)
@@ -311,7 +230,6 @@ class OpenVINOPlugin(
async def getSettings(self) -> list[Setting]:
mode = self.storage.getItem("mode") or "Default"
model = self.storage.getItem("model") or "Default"
precision = self.storage.getItem("precision") or "Default"
return [
{
"title": "Available Devices",
@@ -355,35 +273,14 @@ class OpenVINOPlugin(
return [self.model_dim, self.model_dim]
def get_input_format(self):
if self.scrypted_yuv:
return "yuvj444p"
return super().get_input_format()
async def detect_once(self, input: Image.Image, settings: Any, src_size, cvss):
def prepare():
# the input_tensor can be created with the shared_memory=True parameter,
# but that seems to cause issues on some platforms.
if self.scrypted_yolo:
if not self.scrypted_yuv:
im = np.expand_dims(input, axis=0)
im = im.transpose((0, 3, 1, 2)) # BHWC to BCHW, (n, 3, h, w)
else:
# when a yuv image is requested, it may be either planar or interleaved
# as a hack, the input will come as RGB if already planar.
if input.mode != "RGB":
im = np.array(input)
im = im.reshape((1, self.model_dim, self.model_dim, 3))
im = im.transpose((0, 3, 1, 2)) # BHWC to BCHW, (n, 3, h, w)
else:
im = np.array(input)
im = im.reshape((1, 3, self.model_dim, self.model_dim))
im = im.astype(np.float32) / 255.0
im = np.ascontiguousarray(im) # contiguous
elif self.yolo:
im = np.expand_dims(np.array(input), axis=0).astype(np.float32)
else:
im = np.expand_dims(np.array(input), axis=0)
im = np.expand_dims(input, axis=0)
im = im.transpose((0, 3, 1, 2)) # BHWC to BCHW, (n, 3, h, w)
im = im.astype(np.float32) / 255.0
im = np.ascontiguousarray(im) # contiguous
return im
try:
@@ -440,6 +337,18 @@ class OpenVINOPlugin(
"name": "OpenVINO CLIP Embedding",
}
)
await scrypted_sdk.deviceManager.onDeviceDiscovered(
{
"nativeId": "segment",
"type": scrypted_sdk.ScryptedDeviceType.Builtin.value,
"interfaces": [
scrypted_sdk.ScryptedInterface.ClusterForkInterface.value,
scrypted_sdk.ScryptedInterface.ObjectDetection.value,
],
"name": "OpenVINO Segmentation",
}
)
except:
pass
@@ -453,6 +362,9 @@ class OpenVINOPlugin(
elif nativeId == "clipembedding":
self.clipDevice = self.clipDevice or OpenVINOClipEmbedding(self, nativeId)
return self.clipDevice
elif nativeId == "segment":
self.segmentDevice = self.segmentDevice or OpenVINOSegmentation(self, nativeId)
return self.segmentDevice
custom_model = self.custom_models.get(nativeId, None)
if custom_model:
return custom_model

View File

@@ -7,7 +7,7 @@ import numpy as np
import openvino as ov
from PIL import Image
from ov import async_infer
from common import async_infer
from predict.clip import ClipEmbedding
from scrypted_sdk import ObjectsDetected

View File

@@ -6,7 +6,7 @@ import numpy as np
import openvino as ov
from PIL import Image
from ov import async_infer
from common import async_infer
from predict.custom_detect import CustomDetection
from scrypted_sdk import ObjectsDetected
@@ -16,7 +16,6 @@ customDetectPrepare, customDetectPredict = async_infer.create_executors("CustomD
class OpenVINOCustomDetection(CustomDetection):
def __init__(self, plugin, nativeId: str):
super().__init__(plugin=plugin, nativeId=nativeId)
self.prefer_relu = True
def loadModel(self, files: list[str]):
# find the xml file in the files list

View File

@@ -1,12 +1,13 @@
from __future__ import annotations
import asyncio
import os
import numpy as np
import openvino as ov
from PIL import Image
from ov import async_infer
import openvino as ov
from common import async_infer
from predict.face_recognize import FaceRecognizeDetection
faceDetectPrepare, faceDetectPredict = async_infer.create_executors("FaceDetect")
@@ -18,22 +19,14 @@ faceRecognizePrepare, faceRecognizePredict = async_infer.create_executors(
class OpenVINOFaceRecognition(FaceRecognizeDetection):
def __init__(self, plugin, nativeId: str):
super().__init__(plugin=plugin, nativeId=nativeId)
self.prefer_relu = True
def downloadModel(self, model: str):
scrypted_yolov9 = "scrypted_yolov9" in model
inception = "inception" in model
ovmodel = "best-converted" if scrypted_yolov9 else "best"
precision = self.plugin.precision
model_version = "v8"
xmlFile = self.downloadFile(
f"https://github.com/koush/openvino-models/raw/main/{model}/{precision}/{ovmodel}.xml",
f"{model_version}/{model}/{precision}/{ovmodel}.xml",
)
self.downloadFile(
f"https://github.com/koush/openvino-models/raw/main/{model}/{precision}/{ovmodel}.bin",
f"{model_version}/{model}/{precision}/{ovmodel}.bin",
)
ovmodel = "best-converted" if not inception else "best"
if not inception:
model = model + "_int8"
model_path = self.downloadHuggingFaceModelLocalFallback(model)
xmlFile = os.path.join(model_path, f"{ovmodel}.xml")
if inception:
model = self.plugin.core.read_model(xmlFile)
model.reshape([1, 3, 160, 160])

View File

@@ -0,0 +1,58 @@
from __future__ import annotations
import asyncio
import os
import traceback
import numpy as np
import openvino as ov
from predict.segment import Segmentation
from common import yolov9_seg
from common import async_infer
prepareExecutor, predictExecutor = async_infer.create_executors("Segment")
class OpenVINOSegmentation(Segmentation):
def __init__(self, plugin, nativeId: str):
super().__init__(plugin=plugin, nativeId=nativeId)
def loadModel(self, name):
name = name + "_int8"
model_path = self.downloadHuggingFaceModelLocalFallback(name)
ovmodel = "best-converted"
xmlFile = os.path.join(model_path, f"{ovmodel}.xml")
model = self.plugin.core.compile_model(xmlFile, self.plugin.mode)
return model
async def detect_once(self, input, settings, src_size, cvss):
def predict():
im = np.expand_dims(input, axis=0)
im = im.transpose((0, 3, 1, 2)) # BHWC to BCHW, (n, 3, h, w)
im = im.astype(np.float32) / 255.0
im = np.ascontiguousarray(im) # contiguous
infer_request = self.model.create_infer_request()
tensor = ov.Tensor(array=im)
infer_request.set_input_tensor(tensor)
output_tensors = infer_request.infer()
pred = output_tensors[0]
proto = output_tensors[1]
pred = yolov9_seg.non_max_suppression(pred, nm=32)
return self.process_segmentation_output(pred, proto)
try:
objs = await asyncio.get_event_loop().run_in_executor(
predictExecutor, lambda: predict()
)
except:
traceback.print_exc()
raise
ret = self.create_detection_result(objs, src_size, cvss)
return ret

View File

@@ -1,11 +1,12 @@
from __future__ import annotations
import asyncio
import os
import numpy as np
import openvino as ov
from ov import async_infer
import openvino as ov
from common import async_infer
from predict.text_recognize import TextRecognition
textDetectPrepare, textDetectPredict = async_infer.create_executors("TextDetect")
@@ -17,19 +18,14 @@ textRecognizePrepare, textRecognizePredict = async_infer.create_executors(
class OpenVINOTextRecognition(TextRecognition):
def downloadModel(self, model: str):
ovmodel = "best"
precision = self.plugin.precision
model_version = "v6"
xmlFile = self.downloadFile(
f"https://github.com/koush/openvino-models/raw/main/{model}/{precision}/{ovmodel}.xml",
f"{model_version}/{model}/{precision}/{ovmodel}.xml",
)
self.downloadFile(
f"https://github.com/koush/openvino-models/raw/main/{model}/{precision}/{ovmodel}.bin",
f"{model_version}/{model}/{precision}/{ovmodel}.bin",
)
model_path = self.downloadHuggingFaceModelLocalFallback(model)
xmlFile = os.path.join(model_path, f"{ovmodel}.xml")
if "vgg" in model:
model = self.plugin.core.read_model(xmlFile)
model.reshape([1, 1, 64, 384])
            # this reshape causes a crash on GPU, but skipping it causes a crash on NPU...
            # on older systems skipping the reshape does not crash, but does throw an exception, which is recoverable.
if "NPU" in self.plugin.mode:
model.reshape([1, 1, 64, 384])
return self.plugin.core.compile_model(model, self.plugin.mode)
else:
model = self.plugin.core.read_model(xmlFile)

View File

@@ -20,6 +20,10 @@ import common.colors
from detect import DetectPlugin
from predict.rectangle import Rectangle
cache_dir = os.path.join(os.environ["SCRYPTED_PLUGIN_VOLUME"], "files", "hf")
# os.makedirs(cache_dir, exist_ok=True)
# os.environ['HF_HUB_CACHE'] = cache_dir
original_getaddrinfo = socket.getaddrinfo
# Sort the results to put IPv4 addresses first
@@ -34,7 +38,7 @@ def custom_getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
socket.getaddrinfo = custom_getaddrinfo
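# A minimal sketch of the IPv4-first monkeypatch described above (assumes only
# that getaddrinfo returns (family, ...) tuples): a stable sort keyed on
# "not IPv4" keeps IPv4 results first while preserving their relative order.
#
#   def custom_getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
#       results = original_getaddrinfo(host, port, family, type, proto, flags)
#       return sorted(results, key=lambda r: r[0] != socket.AF_INET)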
class Prediction:
def __init__(self, id: int, score: float, bbox: Rectangle, embedding: str = None):
def __init__(self, id: int, score: float, bbox: Rectangle, embedding: str = None, clipPaths: List[List[Tuple[float, float]]] = None):
# these may be numpy values. sanitize them.
self.id = int(id)
self.score = float(score)
@@ -46,7 +50,7 @@ class Prediction:
float(bbox.ymax),
)
self.embedding = embedding
self.clipPaths = clipPaths
class PredictPlugin(DetectPlugin, scrypted_sdk.ClusterForkInterface, scrypted_sdk.ScryptedSystemDevice, scrypted_sdk.DeviceCreator, scrypted_sdk.DeviceProvider):
labels: dict
@@ -59,6 +63,8 @@ class PredictPlugin(DetectPlugin, scrypted_sdk.ClusterForkInterface, scrypted_sd
):
super().__init__(nativeId=nativeId)
self.periodic_restart = True
self.systemDevice = {
"deviceCreator": "Model",
}
@@ -82,6 +88,34 @@ class PredictPlugin(DetectPlugin, scrypted_sdk.ClusterForkInterface, scrypted_sd
if not self.plugin and not self.forked:
asyncio.ensure_future(self.startCluster(), loop=self.loop)
def downloadHuggingFaceModel(self, model: str, local_files_only: bool = False) -> str:
from huggingface_hub import snapshot_download
plugin_suffix = self.pluginId.split('/')[1]
local_dir = os.path.join(cache_dir, plugin_suffix, model)
local_path = snapshot_download(
repo_id="scrypted/plugin-models",
allow_patterns=f"{plugin_suffix}/{model}/*",
local_files_only=local_files_only,
local_dir=local_dir,
)
local_path = os.path.join(local_path, plugin_suffix, model)
return local_path
def downloadHuggingFaceModelLocalFallback(self, model: str) -> str:
try:
local_path = self.downloadHuggingFaceModel(model)
print("Downloaded/refreshed model:", model)
return local_path
except Exception:
traceback.print_exc()
print("Unable to download model:", model)
print('This may be due to network or firewall issues.')
print("Trying model from Hugging Face Hub (offline):", model)
local_path = self.downloadHuggingFaceModel(model, local_files_only=True)
return local_path
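    # Standalone sketch of the online-first/offline-fallback pattern above;
    # the repo id and pattern names here are illustrative, not asserted.
    #
    #   from huggingface_hub import snapshot_download
    #
    #   def resolve_model_dir(repo_id: str, pattern: str, local_dir: str) -> str:
    #       try:
    #           # online: download or refresh the matching files
    #           return snapshot_download(repo_id=repo_id, allow_patterns=pattern, local_dir=local_dir)
    #       except Exception:
    #           # offline: fall back to whatever is already cached locally
    #           return snapshot_download(repo_id=repo_id, allow_patterns=pattern, local_dir=local_dir, local_files_only=True)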
def downloadFile(self, url: str, filename: str):
try:
filesPath = os.path.join(os.environ["SCRYPTED_PLUGIN_VOLUME"], "files")
@@ -119,7 +153,8 @@ class PredictPlugin(DetectPlugin, scrypted_sdk.ClusterForkInterface, scrypted_sd
return ["motion"]
def requestRestart(self):
asyncio.ensure_future(scrypted_sdk.deviceManager.requestRestart())
if self.periodic_restart:
asyncio.ensure_future(scrypted_sdk.deviceManager.requestRestart())
# width, height, channels
def get_input_details(self) -> Tuple[int, int, int]:
@@ -156,6 +191,8 @@ class PredictPlugin(DetectPlugin, scrypted_sdk.ClusterForkInterface, scrypted_sd
detection["score"] = obj.score
if hasattr(obj, "embedding") and obj.embedding is not None:
detection["embedding"] = obj.embedding
if hasattr(obj, "clipPaths") and obj.clipPaths is not None and len(obj.clipPaths) > 0:
detection["clipPaths"] = obj.clipPaths
detections.append(detection)
if convert_to_src_size:
@@ -169,6 +206,15 @@ class PredictPlugin(DetectPlugin, scrypted_sdk.ClusterForkInterface, scrypted_sd
if any(map(lambda x: not math.isfinite(x), detection["boundingBox"])):
print("unexpected nan detected", obj.bbox)
continue
# Transform clipPaths coordinates if present
if "clipPaths" in detection and detection["clipPaths"] is not None:
clip_paths = detection["clipPaths"]
# Convert each polygon (list of [x, y] tuples) to source size
                    transformed = [
                        # call convert_to_src_size once per point and keep its (x, y)
                        [tuple(convert_to_src_size((pt[0], pt[1]))[:2]) for pt in polygon]
                        for polygon in clip_paths
                    ]
detection["clipPaths"] = transformed
detection_result["detections"].append(detection)
# print(detection_result)
@@ -238,31 +284,6 @@ class PredictPlugin(DetectPlugin, scrypted_sdk.ClusterForkInterface, scrypted_sd
self.requestRestart()
raise
# async def detectObjects(
# self, mediaObject: scrypted_sdk.MediaObject, session: ObjectDetectionSession = None
# ) -> ObjectsDetected:
# # main plugin can dispatch
# plugin: PredictPlugin = None
# if scrypted_sdk.clusterManager and scrypted_sdk.clusterManager.getClusterMode() and not self.forked:
# if session:
# del session['batch']
# if len(self.forks):
# totalWorkers = len(self.forks)
# if not self.forked:
# totalWorkers += 1
# self.clusterIndex += 1
# self.clusterIndex %= totalWorkers
# if len(self.forks) != self.clusterIndex:
# fork = list(self.forks.values())[self.clusterIndex]
# result = await fork.result
# plugin = await result.getPlugin()
# if not plugin:
# return await super().detectObjects(mediaObject, session)
# return await plugin.detectObjects(mediaObject, session)
async def run_detection_image(
self, image: scrypted_sdk.Image, detection_session: ObjectDetectionSession
) -> ObjectsDetected:
@@ -303,21 +324,59 @@ class PredictPlugin(DetectPlugin, scrypted_sdk.ClusterForkInterface, scrypted_sd
if image.ffmpegFormats != True:
format = image.format or "rgb"
b = await image.toBuffer(
{
"resize": resize,
"format": format,
}
)
if settings and settings.get("pad", False):
if iw / w > ih / h:
scale = w / iw
else:
scale = h / ih
nw = int(iw * scale)
nh = int(ih * scale)
resize = {
"width": nw,
"height": nh,
}
b = await image.toBuffer(
{
"resize": resize,
"format": format,
}
)
if self.get_input_format() == "rgb":
data = await common.colors.ensureRGBData(b, (nw, nh), format)
elif self.get_input_format() == "rgba":
data = await common.colors.ensureRGBAData(b, (nw, nh), format)
elif self.get_input_format() == "yuvj444p":
data = await common.colors.ensureYCbCrAData(b, (nw, nh), format)
else:
raise Exception("unsupported format")
# data is a PIL image and we need to pad it to w, h
new_image = Image.new(data.mode, (w, h))
paste_x = (w - nw) // 2
paste_y = (h - nh) // 2
new_image.paste(data, (paste_x, paste_y))
data.close()
data = new_image
if self.get_input_format() == "rgb":
data = await common.colors.ensureRGBData(b, (w, h), format)
elif self.get_input_format() == "rgba":
data = await common.colors.ensureRGBAData(b, (w, h), format)
elif self.get_input_format() == "yuvj444p":
data = await common.colors.ensureYCbCrAData(b, (w, h), format)
else:
raise Exception("unsupported format")
b = await image.toBuffer(
{
"resize": resize,
"format": format,
}
)
if self.get_input_format() == "rgb":
data = await common.colors.ensureRGBData(b, (w, h), format)
elif self.get_input_format() == "rgba":
data = await common.colors.ensureRGBAData(b, (w, h), format)
elif self.get_input_format() == "yuvj444p":
data = await common.colors.ensureYCbCrAData(b, (w, h), format)
else:
raise Exception("unsupported format")
try:
ret = await self.safe_detect_once(data, settings, (iw, ih), cvss)
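        # Standalone letterbox sketch matching the "pad" branch above: scale to
        # fit, then center-paste onto a w x h canvas (PIL-based, illustrative).
        #
        #   from PIL import Image
        #
        #   def letterbox(img: Image.Image, w: int, h: int) -> Image.Image:
        #       iw, ih = img.size
        #       scale = w / iw if iw / w > ih / h else h / ih
        #       nw, nh = int(iw * scale), int(ih * scale)
        #       canvas = Image.new(img.mode, (w, h))
        #       canvas.paste(img.resize((nw, nh)), ((w - nw) // 2, (h - nh) // 2))
        #       return canvas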
@@ -365,6 +424,8 @@ class PredictPlugin(DetectPlugin, scrypted_sdk.ClusterForkInterface, scrypted_sd
ret = await result.getFaceRecognition()
elif self.nativeId == "clipembedding":
ret = await result.getClipEmbedding()
elif self.nativeId == "segmentation":
ret = await result.getSegmentation()
else:
ret = await result.getCustomDetection(self.nativeId)
return ret
@@ -392,6 +453,10 @@ class PredictPlugin(DetectPlugin, scrypted_sdk.ClusterForkInterface, scrypted_sd
self.forks[cwid] = pf
continue
if self.pluginId not in workers[cwid]['labels']:
print(f"not using cluster worker {workers[cwid]['name']} without label {self.pluginId}")
continue
async def startClusterWorker(clusterWorkerId=cwid):
print("starting cluster worker", clusterWorkerId)
try:
@@ -496,6 +561,9 @@ class Fork:
async def getClipEmbedding(self):
return await self.plugin.getDevice("clipembedding")
async def getSegmentation(self):
return await self.plugin.getDevice("segmentation")
async def getCustomDetection(self, nativeId: str):
return await self.plugin.getDevice(nativeId)

View File

@@ -2,7 +2,6 @@ from __future__ import annotations
import asyncio
import base64
import os
from typing import Tuple
import scrypted_sdk
@@ -15,6 +14,8 @@ class ClipEmbedding(PredictPlugin, scrypted_sdk.TextEmbedding, scrypted_sdk.Imag
def __init__(self, plugin: PredictPlugin, nativeId: str):
super().__init__(nativeId=nativeId, plugin=plugin)
hf_id = "openai/clip-vit-base-patch32"
self.inputwidth = 224
self.inputheight = 224
@@ -23,10 +24,31 @@ class ClipEmbedding(PredictPlugin, scrypted_sdk.TextEmbedding, scrypted_sdk.Imag
self.minThreshold = 0.5
self.model = self.initModel()
self.processor = CLIPProcessor.from_pretrained(
"openai/clip-vit-base-patch32",
cache_dir=os.path.join(os.environ["SCRYPTED_PLUGIN_VOLUME"], "files", "hf"),
)
self.processor = None
print("Loading CLIP processor from local cache.")
try:
self.processor = CLIPProcessor.from_pretrained(
hf_id,
local_files_only=True,
)
print("Loaded CLIP processor from local cache.")
except Exception:
print("CLIP processor not available in local cache yet.")
asyncio.ensure_future(self.refreshClipProcessor(hf_id), loop=self.loop)
async def refreshClipProcessor(self, hf_id: str):
try:
print("Refreshing CLIP processor cache (online).")
processor = await asyncio.to_thread(
CLIPProcessor.from_pretrained,
hf_id,
)
self.processor = processor
print("Refreshed CLIP processor cache.")
except Exception:
print("CLIP processor cache refresh failed.")
def getFiles(self):
pass
@@ -43,7 +65,11 @@ class ClipEmbedding(PredictPlugin, scrypted_sdk.TextEmbedding, scrypted_sdk.Imag
pass
async def getImageEmbedding(self, input):
detections = await super().detectObjects(input, None)
detections = await super().detectObjects(input, {
"settings": {
"pad": True,
}
})
return detections["detections"][0]["embedding"]
async def detectObjects(self, mediaObject, session = None):

View File

@@ -26,9 +26,6 @@ class CustomDetection(PredictPlugin, scrypted_sdk.Settings):
def __init__(self, plugin: PredictPlugin, nativeId: str):
super().__init__(nativeId=nativeId, plugin=plugin)
if not hasattr(self, "prefer_relu"):
self.prefer_relu = False
self.inputheight = 320
self.inputwidth = 320
@@ -38,9 +35,6 @@ class CustomDetection(PredictPlugin, scrypted_sdk.Settings):
self.init_model()
# self.detectModel = self.downloadModel("scrypted_yolov9t_relu_face_320" if self.prefer_relu else "scrypted_yolov9t_face_320")
# self.faceModel = self.downloadModel("inception_resnet_v1")
def init_model(self):
config_url = self.storage.getItem('config_url')
if not config_url:

View File

@@ -26,9 +26,6 @@ class FaceRecognizeDetection(PredictPlugin):
def __init__(self, plugin: PredictPlugin, nativeId: str):
super().__init__(nativeId=nativeId, plugin=plugin)
if not hasattr(self, "prefer_relu"):
self.prefer_relu = False
self.inputheight = 320
self.inputwidth = 320
@@ -38,7 +35,7 @@ class FaceRecognizeDetection(PredictPlugin):
self.loop = asyncio.get_event_loop()
self.minThreshold = 0.5
self.detectModel = self.downloadModel("scrypted_yolov9t_relu_face_320" if self.prefer_relu else "scrypted_yolov9t_face_320")
self.detectModel = self.downloadModel("scrypted_yolov9t_relu_face")
self.faceModel = self.downloadModel("inception_resnet_v1")
def downloadModel(self, model: str):

View File

@@ -0,0 +1,89 @@
from __future__ import annotations
from typing import Tuple
import numpy as np
from common import async_infer
from common import yolov9_seg
from predict import PredictPlugin
from predict import Prediction
from predict.rectangle import Rectangle
import asyncio
from common import coco
import traceback
customDetectPrepare, customDetectPredict = async_infer.create_executors("Segment")
class Segmentation(PredictPlugin):
def __init__(self, plugin, nativeId: str):
super().__init__(plugin=plugin, nativeId=nativeId)
self.inputwidth = 320
self.inputheight = 320
self.loop = asyncio.get_event_loop()
self.labels = coco.COCO_LABELS
try:
self.model = self.loadModel('scrypted_yolov9t_seg_relu')
except:
traceback.print_exc()
raise
def loadModel(self, name: str):
pass
# width, height, channels
def get_input_details(self) -> Tuple[int, int, int]:
return (self.inputwidth, self.inputheight, 3)
def get_input_size(self) -> Tuple[float, float]:
return (self.inputwidth, self.inputheight)
def get_input_format(self) -> str:
return "rgb"
def process_segmentation_output(self, pred, proto):
"""
Process segmentation model outputs into a list of Prediction objects.
Args:
pred: Predictions output from NMS (list of detections)
proto: Prototype masks for segmentation
Returns:
List of Prediction objects with segmentation masks (clipPaths)
"""
objs = []
for det in pred:
if not len(det):
continue
# Upsample masks to input image space (320x320)
masks = yolov9_seg.process_mask_numpy(proto.squeeze(0), det[:, 6:], det[:, :4], (320, 320), upsample=True)
# Convert masks to contour points
segments = yolov9_seg.masks2segments_numpy(masks)
# Create Prediction instances
for i in range(len(det)):
# Convert all contours for this detection to list of [x, y] tuples
mask_contours = segments[i]
clip_paths = []
for contour in mask_contours:
if len(contour) > 0 and contour.shape[1] == 2:
single_path = [(float(contour[j, 0]), float(contour[j, 1])) for j in range(len(contour))]
clip_paths.append(single_path)
prediction = Prediction(
id=int(det[i, 5]), # class_id
score=float(det[i, 4]), # confidence
bbox=Rectangle(
xmin=float(det[i, 0]), # x1
ymin=float(det[i, 1]), # y1
xmax=float(det[i, 2]), # x2
ymax=float(det[i, 3]),  # y2
),
embedding=None, # no embedding for segmentation
clipPaths=clip_paths # list of polygon outlines [[[x, y], ...], ...] at 320x320
)
objs.append(prediction)
return objs

View File

@@ -1,12 +1,19 @@
# openvino 2025.3.0 is failing to load on 9700, this may be because models need to be reexported.
# openvino 2025.3.0 is failing to load on 9700 (VGG), this may be because models need to be reexported.
# openvino 2025.0.0 does not detect CPU on 13500H
# openvino 2024.5.0 crashes NPU. Update: NPU can not be used with AUTO in this version
# openvino 2024.4.0 crashes legacy systems.
# openvino 2024.3.0 crashes on older CPU (J4105 and older) if level-zero is installed via apt.
# openvino 2024.2.0 and older crashes on arc dGPU.
# openvino 2024.2.0 and newer crashes on 700H and 900H GPUs
# this works on wyse 5070 and core ultra 125h but requires a recent scrypted image for the compute runtime.
# openvino==2025.4.0
openvino==2024.5.0
Pillow==10.3.0
opencv-python-headless==4.10.0.84
# clip processor
transformers==4.52.4
# model downloads
huggingface-hub

Binary file not shown.

After (image): 94 KiB

View File

@@ -0,0 +1,7 @@
#!/bin/bash
# Script to create a 4-second MP4 video from camera-slash.jpg
# Using H.264 Main profile, no audio, 10fps, one keyframe per second (GOP of 10)
cd "$(dirname "$0")"
ffmpeg -y -loop 1 -i ../snapshot/fs/camera-slash.jpg -c:v libx264 -profile:v main -t 4 -r 10 -pix_fmt yuv420p -g 10 fs/camera-slash.mp4

Binary file not shown.

View File

@@ -1,12 +1,12 @@
{
"name": "@scrypted/prebuffer-mixin",
"version": "0.10.61",
"version": "0.10.65",
"lockfileVersion": 2,
"requires": true,
"packages": {
"": {
"name": "@scrypted/prebuffer-mixin",
"version": "0.10.61",
"version": "0.10.65",
"license": "Apache-2.0",
"dependencies": {
"@scrypted/common": "file:../../common",

View File

@@ -1,6 +1,6 @@
{
"name": "@scrypted/prebuffer-mixin",
"version": "0.10.61",
"version": "0.10.65",
"description": "Video Stream Rebroadcast, Prebuffer, and Management Plugin for Scrypted.",
"author": "Scrypted",
"license": "Apache-2.0",
@@ -37,6 +37,7 @@
"realfs": true
},
"dependencies": {
"@koush/werift-src": "file:../../external/werift",
"@scrypted/common": "file:../../common",
"@scrypted/sdk": "file:../../sdk",
"h264-sps-parser": "^0.2.1",

View File

@@ -0,0 +1,116 @@
/**
* Creates an AU header for AAC frames in MPEG-4 Generic format (RTP)
*
* @param frameSize - Size of the AAC frame in bytes
* @param auIndex - AU index (default 0 for continuous streams)
* @param sizeLength - Number of bits for frame size field (default 13)
* @param indexLength - Number of bits for AU index field (default 3)
* @returns The AU header as a Buffer
*/
export function createAUHeader(
frameSize: number,
auIndex: number = 0,
sizeLength: number = 13,
indexLength: number = 3
): Buffer {
// Calculate total header bits and bytes
const totalBits = sizeLength + indexLength;
const totalBytes = Math.ceil(totalBits / 8);
// Validate inputs
if (frameSize < 0 || frameSize > ((1 << sizeLength) - 1)) {
throw new Error(`Frame size ${frameSize} is out of range for sizeLength ${sizeLength} (max ${(1 << sizeLength) - 1})`);
}
if (auIndex < 0 || auIndex > ((1 << indexLength) - 1)) {
throw new Error(`AU index ${auIndex} is out of range for indexLength ${indexLength} (max ${(1 << indexLength) - 1})`);
}
// Combine size and index into a single value
const combinedValue = (frameSize << indexLength) | auIndex;
const header = Buffer.alloc(totalBytes);
header.writeUintBE(combinedValue, 0, totalBytes);
return header;
}
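// Worked example of the packing above, assuming the default 13/3 bit split:
// a 512-byte frame packs as combined = (512 << 3) | 0 = 0x1000 into
// ceil(16 / 8) = 2 bytes, so createAUHeader(512) returns <Buffer 10 00>.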
/**
* Creates the AU-header-length field (precedes the AU headers in RTP payload)
*
* @param totalAUHeadersBytes - Total bytes of all AU headers combined
* @returns AU-header-length as a 2-byte Buffer (big-endian)
*/
export function createAUHeaderLength(totalAUHeadersBytes: number): Buffer {
const headerLengthBits = totalAUHeadersBytes * 8;
if (headerLengthBits > 65535) {
throw new Error('Total AU header bits exceeds 16-bit limit');
}
// AU-header-length is a 16-bit integer in network byte order (big-endian)
const lengthHeader = Buffer.alloc(2);
lengthHeader[0] = (headerLengthBits >> 8) & 0xFF;
lengthHeader[1] = headerLengthBits & 0xFF;
return lengthHeader;
}
/**
* Given raw AAC frames, creates the complete RTP payload with AU headers
*
* @param frames - Array of raw AAC frames (no ADTS headers)
* @param sizeLength - Number of bits for frame size field (default 13)
* @param indexLength - Number of bits for AU index field (default 3)
* @returns Complete RTP payload (AU-header-length + AU headers + raw frames)
*/
export function createAACRTPPayload(
frames: Buffer[],
sizeLength: number = 13,
indexLength: number = 3
): Buffer {
if (frames.length === 0) {
throw new Error('No frames provided');
}
// Create AU headers for all frames
const auHeaders: Buffer[] = [];
let totalAUHeaderBytes = 0;
for (let i = 0; i < frames.length; i++) {
const auHeader = createAUHeader(frames[i].length, 0, sizeLength, indexLength);
auHeaders.push(auHeader);
totalAUHeaderBytes += auHeader.length;
}
// Create AU-header-length field
const headerLengthField = createAUHeaderLength(totalAUHeaderBytes);
// Calculate total payload size
let totalSize = headerLengthField.length + totalAUHeaderBytes;
for (const frame of frames) {
totalSize += frame.length;
}
// Assemble the payload
const payload = Buffer.alloc(totalSize);
let offset = 0;
// Copy AU-header-length
payload.set(headerLengthField, offset);
offset += headerLengthField.length;
// Copy AU headers
for (const header of auHeaders) {
payload.set(header, offset);
offset += header.length;
}
// Copy raw AAC frames
for (const frame of frames) {
payload.set(frame, offset);
offset += frame.length;
}
return payload;
}
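// Worked example of the layout above: for two raw frames, the AU headers total
// 4 bytes, so the payload begins with the AU-header-length field <Buffer 00 20>
// (32 bits, big-endian), then the two 2-byte AU headers, then the frames back to back.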

View File

@@ -0,0 +1,405 @@
/**
* FLV Audio/Video tag payload parser
* RTMP messages for audio (type 8) and video (type 9) contain FLV tag payloads
*/
// ============================================================================
// Video Tag Types (in FLV header, byte 0, low nibble)
// ============================================================================
export enum VideoCodecId {
JPEG = 1,
SORENSON_H263 = 2,
SCREEN_VIDEO = 3,
ON2_VP6 = 4,
ON2_VP6_WITH_ALPHA = 5,
SCREEN_VIDEO_V2 = 6,
H264 = 7,
}
// ============================================================================
// Video Frame Types (in FLV header, byte 0, high nibble)
// ============================================================================
export enum VideoFrameType {
KEY = 1, // Keyframe (I-frame)
INTER = 2, // Inter frame (P-frame)
DISPOSABLE_INTER = 3, // Disposable inter frame
GENERATED_KEYFRAME = 4,
VIDEO_INFO = 5, // Video info/command frame
}
// ============================================================================
// AVC Packet Types (byte 1 for H.264 codec)
// ============================================================================
export enum AVC_PACKET_TYPE {
SEQUENCE_HEADER = 0, // AVC sequence header (decoder configuration)
NALU = 1, // AVC NALU unit
END_OF_SEQUENCE = 2, // AVC end of sequence
}
// ============================================================================
// Audio Sound Formats (in FLV header, byte 0, top 4 bits)
// ============================================================================
export enum AudioSoundFormat {
PCM_BE = 0,
ADPCM = 1,
MP3 = 2,
PCM_LE = 3,
NELLYMOSER_16K = 4,
NELLYMOSER_8K = 5,
NELLYMOSER = 6,
G711_A = 7,
G711_U = 8,
AAC = 10,
SPEEX = 11,
MP3_8K = 14,
}
// ============================================================================
// Audio Sound Rates (in FLV header, byte 0, bits 2-3)
// ============================================================================
export enum AudioSoundRate {
_5_5KHZ = 0,
_11KHZ = 1,
_22KHZ = 2,
_44KHZ = 3,
}
// ============================================================================
// Audio Sound Size (in FLV header, byte 0, bit 1)
// ============================================================================
export enum AudioSoundSize {
SAMPLE_8BIT = 0,
SAMPLE_16BIT = 1,
}
// ============================================================================
// Audio Sound Type (in FLV header, byte 0, bit 0)
// ============================================================================
export enum AudioSoundType {
MONO = 0,
STEREO = 1,
}
// ============================================================================
// AAC Packet Types (byte 1 for AAC codec)
// ============================================================================
export enum AAC_PACKET_TYPE {
SEQUENCE_HEADER = 0, // AAC sequence header (AudioSpecificConfig)
RAW = 1, // AAC raw data
}
// ============================================================================
// Parsed Video Tag Structure
// ============================================================================
export interface FlvVideoTag {
frameType: VideoFrameType;
codecId: VideoCodecId;
// H.264 specific
avcPacketType?: AVC_PACKET_TYPE;
compositionTime?: number;
// H.264 sequence header
avcDecoderConfigurationRecord?: {
configurationVersion: number;
avcProfileIndication: number;
profileCompatibility: number;
avcLevelIndication: number;
lengthSizeMinusOne: number; // NALU length = (value & 0x03) + 1
sps: Buffer[]; // Sequence parameter sets
pps: Buffer[]; // Picture parameter sets
};
// H.264 NALU data
nalus?: Buffer[];
// Raw payload (for non-H.264 codecs)
rawPayload?: Buffer;
}
// ============================================================================
// Parsed Audio Tag Structure
// ============================================================================
export interface FlvAudioTag {
soundFormat: AudioSoundFormat;
soundRate: AudioSoundRate;
soundSize: AudioSoundSize;
soundType: AudioSoundType;
// AAC specific
aacPacketType?: AAC_PACKET_TYPE;
// AAC sequence header (AudioSpecificConfig)
audioSpecificConfig?: {
audioObjectType: number;
samplingFrequencyIndex: number;
channelConfiguration: number;
};
// Raw audio data
data: Buffer;
}
// ============================================================================
// Parser Result
// ============================================================================
export type FlvTag = FlvVideoTag | FlvAudioTag;
// ============================================================================
// Parse AVCDecoderConfigurationRecord (H.264 decoder configuration)
// ============================================================================
function parseAVCDecoderConfigurationRecord(buffer: Buffer, offset: number, length: number): {
config: FlvVideoTag['avcDecoderConfigurationRecord'],
bytesConsumed: number
} {
if (length < 6) {
throw new Error('AVCDecoderConfigurationRecord too short');
}
const config: FlvVideoTag['avcDecoderConfigurationRecord'] = {
configurationVersion: buffer[offset],
avcProfileIndication: buffer[offset + 1],
profileCompatibility: buffer[offset + 2],
avcLevelIndication: buffer[offset + 3],
lengthSizeMinusOne: buffer[offset + 4] & 0x03,
sps: [],
pps: [],
};
const numSPS = buffer[offset + 5] & 0x1F;
let pos = offset + 6;
// Parse SPS
for (let i = 0; i < numSPS; i++) {
if (pos + 2 > buffer.length) {
throw new Error('AVCDecoderConfigurationRecord truncated reading SPS length');
}
const spsLength = buffer.readUInt16BE(pos);
pos += 2;
if (pos + spsLength > buffer.length) {
throw new Error(`AVCDecoderConfigurationRecord: SPS data exceeds buffer length`);
}
config.sps.push(buffer.subarray(pos, pos + spsLength));
pos += spsLength;
}
// Parse PPS
if (pos >= buffer.length) {
return { config, bytesConsumed: pos - offset };
}
const numPPS = buffer[pos];
pos++;
for (let i = 0; i < numPPS; i++) {
if (pos + 2 > buffer.length) {
throw new Error('AVCDecoderConfigurationRecord truncated reading PPS length');
}
const ppsLength = buffer.readUInt16BE(pos);
pos += 2;
if (pos + ppsLength > buffer.length) {
throw new Error(`AVCDecoderConfigurationRecord: PPS data exceeds buffer length`);
}
config.pps.push(buffer.subarray(pos, pos + ppsLength));
pos += ppsLength;
}
return { config, bytesConsumed: pos - offset };
}
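// Shape of a hypothetical minimal avcC, for reference: version 1, profile 0x42
// (Baseline), level 0x1e (3.0), lengthSizeMinusOne 3 (4-byte NALU lengths),
// then numSPS = 1 with one SPS, then numPPS = 1 with one PPS:
// [0x01, 0x42, 0x00, 0x1e, 0xff, 0xe1, <sps len:2>, <sps bytes>, 0x01, <pps len:2>, <pps bytes>]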
// ============================================================================
// Parse H.264 NALU units from AVCPacketType=1 payload
// The NALUs are preceded by length fields (size = lengthSizeMinusOne + 1)
// ============================================================================
function parseNALUUnits(buffer: Buffer, offset: number, length: number, naluLengthSize: number): Buffer[] {
const nalus: Buffer[] = [];
let pos = offset;
if (naluLengthSize < 1 || naluLengthSize > 4) {
throw new Error(`Invalid NALU length size: ${naluLengthSize}`);
}
while (pos + naluLengthSize <= offset + length) {
const naluLength = buffer.readUintBE(pos, naluLengthSize);
pos += naluLengthSize;
if (naluLength === 0) {
continue; // Skip zero-length NALUs
}
if (pos + naluLength > offset + length) {
throw new Error(`NALU data exceeds buffer length at position ${pos}`);
}
nalus.push(buffer.subarray(pos, pos + naluLength));
pos += naluLength;
}
return nalus;
}
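// Worked example with a hypothetical payload of two 4-byte length-prefixed NALUs:
// [0x00,0x00,0x00,0x02, 0x09,0xf0, 0x00,0x00,0x00,0x01, 0x65] parses to two NALUs,
// and parseNALUHeader on the second returns 0x65 & 0x1F = 5 (an IDR slice).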
// ============================================================================
// Parse AudioSpecificConfig (AAC decoder configuration)
// ============================================================================
function parseAudioSpecificConfig(buffer: Buffer, offset: number, length: number): {
aacConfig: FlvAudioTag['audioSpecificConfig'],
bytesConsumed: number
} {
if (length < 2) {
throw new Error('AudioSpecificConfig too short');
}
// AudioSpecificConfig is 2+ bytes, bit-packed
const byte0 = buffer[offset];
const byte1 = buffer[offset + 1];
const aacConfig: FlvAudioTag['audioSpecificConfig'] = {
audioObjectType: (byte0 >> 3) & 0x1F,
samplingFrequencyIndex: ((byte0 & 0x07) << 1) | ((byte1 >> 7) & 0x01),
channelConfiguration: (byte1 >> 3) & 0x0F,
};
return { aacConfig, bytesConsumed: 2 };
}
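// Worked example: the bytes 0x14 0x08 (the "config=1408" fmtp value used by the
// rtmp-session SDP elsewhere in this changeset) decode to audioObjectType 2 (AAC-LC),
// samplingFrequencyIndex 8 (16000 Hz), and channelConfiguration 1 (mono).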
// ============================================================================
// Parse FLV Video Tag Payload
// ============================================================================
export function parseFlvVideoTag(buffer: Buffer): FlvVideoTag {
if (buffer.length < 1) {
throw new Error('Video tag too short');
}
const byte0 = buffer[0];
const frameType = (byte0 >> 4) as VideoFrameType;
const codecId = (byte0 & 0x0F) as VideoCodecId;
const result: FlvVideoTag = {
frameType,
codecId,
};
if (codecId === VideoCodecId.H264) {
// H.264/AVC codec
if (buffer.length < 5) {
throw new Error('H.264 video tag too short');
}
result.avcPacketType = buffer[1] as AVC_PACKET_TYPE;
result.compositionTime = buffer.readIntBE(2, 3);
switch (result.avcPacketType) {
case AVC_PACKET_TYPE.SEQUENCE_HEADER: {
const data = buffer.subarray(5);
const parsed = parseAVCDecoderConfigurationRecord(data, 0, data.length);
result.avcDecoderConfigurationRecord = parsed.config;
break;
}
case AVC_PACKET_TYPE.NALU: {
// Need to know NALU length size from the sequence header
// We'll assume 4 bytes (most common) if not provided
const naluLengthSize = 4;
const data = buffer.subarray(5);
result.nalus = parseNALUUnits(data, 0, data.length, naluLengthSize);
break;
}
case AVC_PACKET_TYPE.END_OF_SEQUENCE:
// No payload
break;
}
} else {
// Other video codecs - just return raw payload
result.rawPayload = buffer.subarray(1);
}
return result;
}
// ============================================================================
// Parse FLV Audio Tag Payload
// ============================================================================
export function parseFlvAudioTag(buffer: Buffer): FlvAudioTag {
if (buffer.length < 1) {
throw new Error('Audio tag too short');
}
const byte0 = buffer[0];
const soundFormat: AudioSoundFormat = (byte0 >> 4) & 0x0F;
const soundRate: AudioSoundRate = (byte0 >> 2) & 0x03;
const soundSize: AudioSoundSize = (byte0 >> 1) & 0x01;
const soundType: AudioSoundType = byte0 & 0x01;
const result: FlvAudioTag = {
soundFormat,
soundRate,
soundSize,
soundType,
data: Buffer.alloc(0),
};
if (soundFormat === AudioSoundFormat.AAC) {
if (buffer.length < 2) {
throw new Error('AAC audio tag too short');
}
result.aacPacketType = buffer[1] as AAC_PACKET_TYPE;
if (result.aacPacketType === AAC_PACKET_TYPE.SEQUENCE_HEADER) {
const data = buffer.subarray(2);
const parsed = parseAudioSpecificConfig(data, 0, data.length);
result.audioSpecificConfig = parsed.aacConfig;
} else {
result.data = buffer.subarray(2);
}
} else {
// Raw audio data for other formats
result.data = buffer.subarray(1);
}
return result;
}
// ============================================================================
// Parse FLV Tag (auto-detect video or audio based on codec/format)
// This function requires you to know the RTMP message type (8=audio, 9=video)
// ============================================================================
export function parseFlvTag(buffer: Buffer, messageType: number): FlvTag {
if (messageType === 9) {
return parseFlvVideoTag(buffer);
} else if (messageType === 8) {
return parseFlvAudioTag(buffer);
} else {
throw new Error(`Unsupported message type for FLV parsing: ${messageType}`);
}
}
// ============================================================================
// Parse H.264 NALU unit type (5-bit value in first byte's low bits)
// ============================================================================
export function parseNALUHeader(buffer: Buffer): number {
if (buffer.length < 1) {
throw new Error('NALU too short');
}
return buffer[0] & 0x1F;
}
// ============================================================================
// Helper: Format H.264 NALU unit type name
// ============================================================================
export function getNALUTypeName(naluType: number): string {
const types: Record<number, string> = {
1: 'slice_layer_without_partitioning_non_idr',
5: 'slice_layer_without_partitioning_idr',
6: 'sei',
7: 'seq_parameter_set',
8: 'pic_parameter_set',
9: 'access_unit_delimiter',
};
return types[naluType] || `unknown (${naluType})`;
}
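// Quick sanity check of the video tag parsing above, with a hypothetical buffer:
// parseFlvVideoTag(Buffer.from([0x17, 0x01, 0x00, 0x00, 0x00])) yields frameType
// KEY (1), codecId H264 (7), avcPacketType NALU (1), compositionTime 0, and an
// empty nalus array, since no NALU data follows the 5-byte header.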

View File

@@ -19,6 +19,7 @@ import { FileRtspServer } from './file-rtsp-server';
import { getUrlLocalAdresses } from './local-addresses';
import { REBROADCAST_MIXIN_INTERFACE_TOKEN } from './rebroadcast-mixin-token';
import { connectRFC4571Parser, startRFC4571Parser } from './rfc4571';
import { startRtmpSession } from './rtmp-session';
import { RtspSessionParserSpecific, startRtspSession } from './rtsp-session';
import { getSpsResolution } from './sps-resolution';
import { createStreamSettings } from './stream-settings';
@@ -187,13 +188,17 @@ class PrebufferSession {
return mediaStreamOptions?.container?.startsWith('rtsp');
}
canUseRtmpParser(mediaStreamOptions: MediaStreamOptions) {
return mediaStreamOptions?.container?.startsWith('rtmp');
}
getParser(mediaStreamOptions: MediaStreamOptions) {
let parser: string;
let rtspParser = this.storage.getItem(this.rtspParserKey);
let isDefault = !rtspParser || rtspParser === 'Default';
if (!this.canUseRtspParser(mediaStreamOptions)) {
if (!this.canUseRtspParser(mediaStreamOptions) && !this.canUseRtmpParser(mediaStreamOptions)) {
parser = STRING_DEFAULT;
isDefault = true;
rtspParser = undefined;
@@ -340,12 +345,26 @@ class PrebufferSession {
let usingFFmpeg = true;
if (this.canUseRtspParser(this.advertisedMediaStreamOptions)) {
if (this.canUseRtspParser(this.advertisedMediaStreamOptions) || this.canUseRtmpParser(this.advertisedMediaStreamOptions)) {
const parser = this.getParser(this.advertisedMediaStreamOptions);
const defaultValue = parser.parser;
const currentParser = parser.isDefault ? STRING_DEFAULT : parser.parser;
const choices = this.canUseRtmpParser(this.advertisedMediaStreamOptions)
? [
STRING_DEFAULT,
SCRYPTED_PARSER_TCP,
FFMPEG_PARSER_TCP,
]
: [
STRING_DEFAULT,
SCRYPTED_PARSER_TCP,
SCRYPTED_PARSER_UDP,
FFMPEG_PARSER_TCP,
FFMPEG_PARSER_UDP,
]
settings.push(
{
key: this.rtspParserKey,
@@ -354,13 +373,7 @@ class PrebufferSession {
title: 'RTSP Parser',
description: `The RTSP Parser used to read the stream. The default is "${defaultValue}" for this stream.`,
value: currentParser,
choices: [
STRING_DEFAULT,
SCRYPTED_PARSER_TCP,
SCRYPTED_PARSER_UDP,
FFMPEG_PARSER_TCP,
FFMPEG_PARSER_UDP,
],
choices,
}
);
@@ -456,6 +469,10 @@ class PrebufferSession {
catch (e) {
}
if (this.mixin.streamSettings.storageSettings.values.privacyMode) {
mso.audio = null;
}
// camera may explicitly request that its audio stream be muted via a null.
// respect that setting.
const audioSoftMuted = mso?.audio === null;
@@ -474,7 +491,22 @@ class PrebufferSession {
this.parsers = rbo.parsers;
let mo: MediaObject;
if (this.mixin.streamSettings.storageSettings.values.synthenticStreams.includes(this.streamId)) {
if (this.mixin.streamSettings.storageSettings.values.privacyMode) {
const ffmpegInput: FFmpegInput = {
container: 'mp4',
inputArguments: [
'-re',
'-stream_loop', '-1',
'-i', 'camera-slash.mp4',
],
mediaStreamOptions: {
id: this.streamId,
container: 'mp4',
}
};
mo = await mediaManager.createMediaObject(ffmpegInput, ScryptedMimeTypes.FFmpegInput);
}
else if (this.mixin.streamSettings.storageSettings.values.synthenticStreams.includes(this.streamId)) {
const syntheticInputId = this.storage.getItem(this.syntheticInputIdKey);
if (!syntheticInputId)
throw new Error('synthetic stream has not been configured with an input');
@@ -520,14 +552,26 @@ class PrebufferSession {
this.usingScryptedUdpParser = parser === SCRYPTED_PARSER_UDP;
if (this.usingScryptedParser) {
const rtspParser = createRtspParser();
rbo.parsers.rtsp = rtspParser;
if (this.canUseRtmpParser(sessionMso)) {
// rtmp is repackaged as rtsp
const rtspParser = createRtspParser();
rbo.parsers.rtsp = rtspParser;
session = await startRtspSession(this.console, ffmpegInput.url, ffmpegInput.mediaStreamOptions, {
useUdp: parser === SCRYPTED_PARSER_UDP,
audioSoftMuted,
rtspRequestTimeout: 10000,
});
session = await startRtmpSession(this.console, ffmpegInput.url, ffmpegInput.mediaStreamOptions, {
audioSoftMuted,
rtspRequestTimeout: 10000,
});
}
else {
const rtspParser = createRtspParser();
rbo.parsers.rtsp = rtspParser;
session = await startRtspSession(this.console, ffmpegInput.url, ffmpegInput.mediaStreamOptions, {
useUdp: parser === SCRYPTED_PARSER_UDP,
audioSoftMuted,
rtspRequestTimeout: 10000,
});
}
}
else {
let acodec: string[];
@@ -549,10 +593,12 @@ class PrebufferSession {
acodec = audioSoftMuted ? acodec : ['-acodec', 'copy'];
if (parser === FFMPEG_PARSER_UDP)
ffmpegInput.inputArguments = ['-rtsp_transport', 'udp', '-i', ffmpegInput.url];
else if (parser === FFMPEG_PARSER_TCP)
ffmpegInput.inputArguments = ['-rtsp_transport', 'tcp', '-i', ffmpegInput.url];
if (!this.canUseRtmpParser(mso)) {
if (parser === FFMPEG_PARSER_UDP)
ffmpegInput.inputArguments = ['-rtsp_transport', 'udp', '-i', ffmpegInput.url];
else if (parser === FFMPEG_PARSER_TCP)
ffmpegInput.inputArguments = ['-rtsp_transport', 'tcp', '-i', ffmpegInput.url];
}
// create missing pts from dts so mpegts and mp4 muxing does not fail
const userInputArguments = this.storage.getItem(this.ffmpegInputArgumentsKey);
const extraInputArguments = userInputArguments || DEFAULT_FFMPEG_INPUT_ARGUMENTS;

View File

@@ -0,0 +1,627 @@
import { readLength } from '@scrypted/common/src/read-stream';
import { Socket } from 'net';
function writeUInt24BE(buffer: Buffer, value: number, offset: number): void {
buffer[offset] = (value >> 16) & 0xFF;
buffer[offset + 1] = (value >> 8) & 0xFF;
buffer[offset + 2] = value & 0xFF;
}
// Constants
const HANDSHAKE_SIZE = 1536;
const RTMP_VERSION = 3;
// Chunk format types
enum ChunkFormat {
TYPE_0 = 0,
TYPE_1 = 1,
TYPE_2 = 2,
TYPE_3 = 3
}
// RTMP message types
enum RtmpMessageType {
CHUNK_SIZE = 1,
ABORT = 2,
ACKNOWLEDGEMENT = 3,
USER_CONTROL = 4,
WINDOW_ACKNOWLEDGEMENT_SIZE = 5,
SET_PEER_BANDWIDTH = 6,
AUDIO = 8,
VIDEO = 9,
DATA_AMF0 = 18,
COMMAND_AMF0 = 20
}
// Control messages
export class SetChunkSize {
constructor(public chunkSize: number) { }
}
export class UserControlSetBufferLength {
constructor(public streamId: number, public bufferLength: number) { }
}
export interface CreateStreamResult {
streamId: number;
}
export interface OnStatusResult {
level: string;
code: string;
description: string;
}
interface ChunkStream {
chunkStreamId: number;
messageStreamId: number;
messageLength: number;
messageTypeId: number;
timestamp: number;
sequenceNumber: number;
messageData: Buffer[];
totalReceived: number;
hasExtendedTimestamp: boolean;
}
export class RtmpClient {
socket: Socket | null = null;
private chunkSize: number = 128;
private outgoingChunkSize: number = 128;
private windowAckSize: number = 5000000;
private streamId: number = 0;
private lastAcknowledgementBytes: number = 0;
private totalBytesReceived: number = 0;
private transactionId: number = 1;
private chunkStreams: Map<number, ChunkStream> = new Map();
constructor(public url: string, public console?: Console) {
this.socket = new Socket();
}
async setup() {
this.console?.log('Starting stream()...');
await this.connect();
// Send connect command
this.console?.log('Sending connect command...');
await this.sendConnect();
this.console?.log('Connect command sent');
while (true) {
const msg = await this.readMessage();
const { messageTypeId } = msg.chunkStream;
if (messageTypeId === RtmpMessageType.WINDOW_ACKNOWLEDGEMENT_SIZE) {
continue;
}
if (messageTypeId === RtmpMessageType.SET_PEER_BANDWIDTH) {
continue;
}
if (messageTypeId === RtmpMessageType.CHUNK_SIZE) {
const newChunkSize = msg.message.readUInt32BE(0);
this.console?.log(`Server set chunk size to ${newChunkSize}`);
this.chunkSize = newChunkSize;
continue;
}
if (messageTypeId === RtmpMessageType.COMMAND_AMF0) {
// Parse AMF0 command
// For simplicity, we only handle _result for connect here
const commandName = msg.message.subarray(3, 10).toString('utf8');
if (commandName === '_result') {
this.console?.log('Received _result for connect');
break;
}
throw new Error(`Unexpected command: ${commandName}`);
}
throw new Error(`Unexpected message type: ${messageTypeId}`);
}
// Send window acknowledgement size
this.sendWindowAckSize(5000000);
// Send createStream
this.console?.log('Sending createStream...');
this.streamId = await this.sendCreateStream();
// Wait for _result for createStream
const createStreamResult = await this.readMessage();
// check it
const { messageTypeId } = createStreamResult.chunkStream;
if (messageTypeId !== RtmpMessageType.COMMAND_AMF0) {
throw new Error(`Unexpected message type waiting for createStream result: ${messageTypeId}, expected COMMAND_AMF0`);
}
this.console?.log('Got createStream _result');
// Send getStreamLength then play (matching ffmpeg's order)
const parsedUrl = new URL(this.url);
// Extract stream name (after /app/)
const parts = parsedUrl.pathname.split('/');
const streamName = parts.length > 2 ? parts.slice(2).join('/') : '';
const playPath = streamName + parsedUrl.search;
this.console?.log('Sending getStreamLength with path:', playPath);
const getStreamLengthData = this.encodeAMF0Command('getStreamLength', this.transactionId++, null, playPath);
this.sendMessage(5, 0, RtmpMessageType.COMMAND_AMF0, 0, getStreamLengthData);
this.console?.log('Sending play command with path:', playPath);
this.sendPlay(this.streamId, playPath);
this.console?.log('Sending setBufferLength...');
this.setBufferLength(this.streamId, 3000);
}
/**
* Connect to the RTMP server and start streaming
*/
async *readLoop(): AsyncGenerator<{
packet: Buffer,
codec: string,
timestamp: number,
}> {
this.console?.log('Starting to yield video/audio packets...');
// Just yield video/audio packets as they arrive
while (true) {
const msg = await this.readMessage();
if (msg.chunkStream.messageTypeId === RtmpMessageType.VIDEO) {
yield { packet: msg.message, codec: 'video', timestamp: msg.chunkStream.timestamp };
} else if (msg.chunkStream.messageTypeId === RtmpMessageType.AUDIO) {
yield { packet: msg.message, codec: 'audio', timestamp: msg.chunkStream.timestamp };
}
else {
this.console?.warn(`Ignoring message type ${msg.chunkStream.messageTypeId}`);
}
}
}
/**
* Connect to RTMP server
*/
private async connect(): Promise<void> {
const parsedUrl = new URL(this.url);
const host = parsedUrl.hostname;
const port = parseInt(parsedUrl.port) || 1935;
this.console?.log(`Connecting to RTMP server at ${host}:${port}`);
// Add socket event listeners
this.socket.on('close', (hadError) => {
this.console?.log(`Socket closed, hadError=${hadError}`);
});
this.socket.on('end', () => {
this.console?.log('Socket received FIN');
});
this.socket.on('error', (err) => {
this.console?.error('Socket error:', err);
});
await new Promise<void>((resolve, reject) => {
this.socket!.connect(port, host, () => {
this.console?.log('Socket connected');
resolve();
});
this.socket!.once('error', reject);
});
this.console?.log('Performing handshake...');
await this.performHandshake();
this.console?.log('Handshake complete');
}
/**
* Perform RTMP handshake
* Client sends: C0 + C1
* Server responds: S0 + S1 + S2
* Client responds: C2
*/
private async performHandshake(): Promise<void> {
if (!this.socket) throw new Error('Socket not connected');
// Send C0 (1 byte: version)
const c0 = Buffer.from([RTMP_VERSION]);
// Send C1 (1536 bytes: time[4] + zero[4] + random[1528])
const c1 = Buffer.alloc(HANDSHAKE_SIZE);
const timestamp = Math.floor(Date.now() / 1000);
c1.writeUInt32BE(timestamp, 0);
c1.writeUInt32BE(0, 4); // zero
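// the remaining 1528 "random" bytes are left zeroed; plain RTMP servers accept this in practice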
// Send C0 + C1
this.socket.write(Buffer.concat([c0, c1]));
// Read S0 (1 byte)
const s0 = await this.readExactly(1);
const serverVersion = s0[0];
if (serverVersion !== RTMP_VERSION) {
throw new Error(`Unsupported RTMP version: ${serverVersion}`);
}
// Read S1 (1536 bytes)
const s1 = await this.readExactly(HANDSHAKE_SIZE);
const s1Time = s1.readUInt32BE(0);
// Read S2 (1536 bytes)
const s2 = await this.readExactly(HANDSHAKE_SIZE);
// Send C2 (echo of S1)
const c2 = s1;
this.socket.write(c2);
}
/**
* Parse RTMP chunks after handshake
*/
private async readMessage(): Promise<{
message: Buffer,
chunkStream: ChunkStream,
}> {
const stream = this.socket!;
while (true) {
// Read chunk basic header (1-3 bytes)
const basicHeader = await readLength(stream, 1);
const fmt = (basicHeader[0] >> 6) & 0x03;
let csId = basicHeader[0] & 0x3F;
// Handle 2-byte and 3-byte forms
if (csId === 0) {
const secondByte = await readLength(stream, 1);
csId = secondByte[0] + 64;
} else if (csId === 1) {
const bytes = await readLength(stream, 2);
csId = ((bytes[1] << 8) | bytes[0]) + 64; // 3-byte form: csId = third byte * 256 + second byte + 64
}
// Chunk stream ID 2 is reserved for protocol control messages, but we should still parse it
// Get or create chunk stream state
let chunkStream = this.chunkStreams.get(csId);
if (!chunkStream) {
chunkStream = {
chunkStreamId: csId,
messageStreamId: 0,
messageLength: 0,
messageTypeId: 0,
timestamp: 0,
sequenceNumber: 0,
messageData: [],
totalReceived: 0,
hasExtendedTimestamp: false
};
this.chunkStreams.set(csId, chunkStream);
}
// Parse message header based on format
let timestamp: number;
let messageLength: number;
let messageTypeId: number;
let messageStreamId: number;
let hasExtendedTimestamp = false;
let headerSize: number;
if (fmt === ChunkFormat.TYPE_0) {
// Type 0: 11 bytes
headerSize = 11;
const header = await readLength(stream, 11);
timestamp = header.readUIntBE(0, 3);
messageLength = header.readUIntBE(3, 3);
messageTypeId = header[6];
messageStreamId = header.readUInt32LE(7);
// Update chunk stream state
chunkStream.messageStreamId = messageStreamId;
chunkStream.messageLength = messageLength;
chunkStream.messageTypeId = messageTypeId;
chunkStream.timestamp = timestamp;
chunkStream.totalReceived = 0;
chunkStream.messageData = [];
if (timestamp >= 0xFFFFFF) {
hasExtendedTimestamp = true;
chunkStream.hasExtendedTimestamp = true;
}
} else if (fmt === ChunkFormat.TYPE_1) {
// Type 1: 7 bytes
headerSize = 7;
const header = await readLength(stream, 7);
const timestampDelta = header.readUIntBE(0, 3);
messageLength = header.readUIntBE(3, 3);
messageTypeId = header[6];
// Update chunk stream state
chunkStream.messageLength = messageLength;
chunkStream.messageTypeId = messageTypeId;
chunkStream.timestamp += timestampDelta;
chunkStream.totalReceived = 0;
chunkStream.messageData = [];
if (timestampDelta >= 0xFFFFFF) {
hasExtendedTimestamp = true;
chunkStream.hasExtendedTimestamp = true;
}
} else if (fmt === ChunkFormat.TYPE_2) {
// Type 2: 3 bytes
headerSize = 3;
const header = await readLength(stream, 3);
const timestampDelta = header.readUIntBE(0, 3);
// Update chunk stream state
chunkStream.timestamp += timestampDelta;
chunkStream.totalReceived = 0;
chunkStream.messageData = [];
if (timestampDelta >= 0xFFFFFF) {
hasExtendedTimestamp = true;
chunkStream.hasExtendedTimestamp = true;
}
} else {
headerSize = 0;
// Type 3: 0 bytes - use previous values
if (!chunkStream.messageLength) {
throw new Error('Type 3 chunk received before any header chunk in stream');
}
}
// Read extended timestamp if present
if (hasExtendedTimestamp || chunkStream.hasExtendedTimestamp) {
const extTs = await readLength(stream, 4);
const extendedTimestamp = extTs.readUInt32BE(0);
if (fmt === ChunkFormat.TYPE_0) {
chunkStream.timestamp = extendedTimestamp;
} else if (fmt === ChunkFormat.TYPE_1 || fmt === ChunkFormat.TYPE_2) {
// For type 1 and 2, the extended timestamp replaces the 0xFFFFFF sentinel
// delta that was already added to the running timestamp above.
chunkStream.timestamp = chunkStream.timestamp - 0xFFFFFF + extendedTimestamp;
}
}
// Calculate chunk data size
const remainingInMessage = chunkStream.messageLength - chunkStream.totalReceived;
const chunkDataSize = Math.min(this.chunkSize, remainingInMessage);
const MAX_CHUNK_SIZE = 1024 * 1024;
if (chunkDataSize > MAX_CHUNK_SIZE) {
throw new Error(`Chunk size ${chunkDataSize} exceeds maximum allowed size of ${MAX_CHUNK_SIZE} bytes`);
}
// Read chunk data
const chunkData = await readLength(stream, chunkDataSize);
chunkStream.messageData.push(chunkData);
chunkStream.totalReceived += chunkDataSize;
// Track bytes received for window acknowledgements
// Count: basic header (1 byte; 2- and 3-byte basic headers are undercounted) + message header (0-11 bytes) + extended timestamp (0-4 bytes) + payload
const extTimestampSize = (hasExtendedTimestamp || chunkStream.hasExtendedTimestamp) ? 4 : 0;
const bytesInChunk = 1 + headerSize + extTimestampSize + chunkDataSize;
this.totalBytesReceived += bytesInChunk;
// Send window acknowledgement if threshold exceeded
this.sendAcknowledgementIfNeeded();
// Check if message is complete
if (chunkStream.totalReceived >= chunkStream.messageLength) {
const message = Buffer.concat(chunkStream.messageData);
chunkStream.messageData = [];
chunkStream.totalReceived = 0;
chunkStream.hasExtendedTimestamp = false;
return {
chunkStream,
message,
};
}
}
}
/**
* Send acknowledgement if window threshold exceeded
*/
private sendAcknowledgementIfNeeded(): void {
const bytesToAck = this.totalBytesReceived - this.lastAcknowledgementBytes;
if (bytesToAck >= this.windowAckSize) {
this.lastAcknowledgementBytes = this.totalBytesReceived;
this.console?.log(`Sending acknowledgement: ${this.lastAcknowledgementBytes} bytes received (${bytesToAck} since last ACK)`);
const data = Buffer.alloc(4);
data.writeUInt32BE(this.lastAcknowledgementBytes & 0xFFFFFFFF, 0);
this.sendMessage(2, 0, RtmpMessageType.ACKNOWLEDGEMENT, 0, data);
}
}
/**
* Read exactly n bytes from socket
*/
private async readExactly(n: number): Promise<Buffer> {
return readLength(this.socket!, n);
}
/**
* Encode value to AMF0
*/
private encodeAMF0(value: any): Buffer {
if (typeof value === 'number') {
const buf = Buffer.alloc(9);
buf[0] = 0x00; // Number marker
buf.writeDoubleBE(value, 1);
return buf;
} else if (typeof value === 'string') {
const buf = Buffer.alloc(3 + value.length);
buf[0] = 0x02; // String marker
buf.writeUInt16BE(value.length, 1);
buf.write(value, 3, 'utf8');
return buf;
} else if (typeof value === 'boolean') {
const buf = Buffer.alloc(2);
buf[0] = 0x01; // Boolean marker
buf[1] = value ? 1 : 0;
return buf;
} else if (value === null || value === undefined) {
return Buffer.from([0x05]); // Null marker
} else if (typeof value === 'object') {
// Object
const parts: Buffer[] = [Buffer.from([0x03])]; // Object marker
for (const [key, val] of Object.entries(value)) {
// Key
const keyBuf = Buffer.alloc(2 + key.length);
keyBuf.writeUInt16BE(key.length, 0);
keyBuf.write(key, 2, 'utf8');
parts.push(keyBuf);
// Value
parts.push(this.encodeAMF0(val));
}
// End of object marker
parts.push(Buffer.from([0x00, 0x00, 0x09]));
return Buffer.concat(parts);
}
throw new Error(`Unsupported AMF0 type: ${typeof value}`);
}
/**
* Encode command to AMF0
*/
private encodeAMF0Command(commandName: string, transactionId: number, commandObject: any, ...args: any[]): Buffer {
const parts: Buffer[] = [];
// Command name (string)
parts.push(this.encodeAMF0(commandName));
// Transaction ID (number)
parts.push(this.encodeAMF0(transactionId));
// Command object
parts.push(this.encodeAMF0(commandObject));
// Additional arguments
for (const arg of args) {
parts.push(this.encodeAMF0(arg));
}
return Buffer.concat(parts);
}
/**
* Send a message as RTMP chunks
*/
private sendMessage(
chunkStreamId: number,
messageStreamId: number,
messageTypeId: number,
timestamp: number,
data: Buffer
): void {
if (!this.socket) throw new Error('Socket not connected');
const chunks: Buffer[] = [];
let offset = 0;
while (offset < data.length) {
const chunkDataSize = Math.min(this.outgoingChunkSize, data.length - offset);
const isType0 = offset === 0;
// Type 0 header is 12 bytes (1 + 3 + 3 + 1 + 4)
const headerSize = isType0 ? 12 : 1;
const header = Buffer.alloc(headerSize);
// Basic header (chunk stream ID)
if (chunkStreamId < 64) {
header[0] = (isType0 ? ChunkFormat.TYPE_0 : ChunkFormat.TYPE_3) << 6 | chunkStreamId;
} else {
// Handle extended chunk stream IDs (simplified for now)
header[0] = (isType0 ? ChunkFormat.TYPE_0 : ChunkFormat.TYPE_3) << 6 | 1;
}
if (isType0) {
// Type 0 header
writeUInt24BE(header, timestamp, 1);
writeUInt24BE(header, data.length, 4);
header[7] = messageTypeId;
header.writeUInt32LE(messageStreamId, 8);
}
chunks.push(header);
chunks.push(data.subarray(offset, offset + chunkDataSize));
offset += chunkDataSize;
}
for (const chunk of chunks) {
this.socket.write(chunk);
}
}
/**
* Send connect command
*/
private async sendConnect(): Promise<void> {
const parsedUrl = new URL(this.url);
const tcUrl = `${parsedUrl.protocol}//${parsedUrl.host}/${parsedUrl.pathname.split('/')[1]}`;
const connectObject = {
app: parsedUrl.pathname.split('/')[1],
flashVer: 'LNX 9,0,124,2',
tcUrl: tcUrl,
fpad: false,
capabilities: 15,
audioCodecs: 4071,
videoCodecs: 252,
videoFunction: 1
};
const data = this.encodeAMF0Command('connect', this.transactionId++, connectObject);
this.sendMessage(3, 0, RtmpMessageType.COMMAND_AMF0, 0, data);
}
/**
* Send createStream command
*/
private async sendCreateStream(): Promise<number> {
const data = this.encodeAMF0Command('createStream', this.transactionId++, null);
this.sendMessage(3, 0, RtmpMessageType.COMMAND_AMF0, 0, data);
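// note: the stream id is not parsed from the createStream _result; it is assumed
// to be 1, which is what servers conventionally assign for the first stream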
return 1;
}
/**
* Send play command
*/
private sendPlay(streamId: number, playPath: string): void {
const data = this.encodeAMF0Command('play', this.transactionId++, null, playPath, -2000);
this.sendMessage(4, streamId, RtmpMessageType.COMMAND_AMF0, 0, data);
}
/**
* Send setBufferLength user control
*/
private setBufferLength(streamId: number, bufferLength: number): void {
const data = Buffer.alloc(10);
data.writeUInt16BE(3, 0);
data.writeUInt32BE(streamId, 2);
data.writeUInt32BE(bufferLength, 6);
this.sendMessage(2, 0, RtmpMessageType.USER_CONTROL, 1, data);
}
/**
* Send window acknowledgement size
*/
private sendWindowAckSize(windowSize: number): void {
const data = Buffer.alloc(4);
data.writeUInt32BE(windowSize, 0);
this.sendMessage(2, 0, RtmpMessageType.WINDOW_ACKNOWLEDGEMENT_SIZE, 0, data);
}
/**
* Destroy the connection
*/
destroy() {
if (this.socket) {
this.socket.destroy();
this.socket = null;
}
}
}
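// Minimal usage sketch, assuming an RTMP URL that serves H.264 video and AAC audio:
// const client = new RtmpClient(url, console);
// await client.setup();
// for await (const { codec, timestamp, packet } of client.readLoop())
//     console.log(codec, timestamp, packet.length);
// client.destroy();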

View File

@@ -0,0 +1,224 @@
import { RTSP_FRAME_MAGIC } from "@scrypted/common/src/rtsp-server";
import { StreamChunk } from "@scrypted/common/src/stream-parser";
import { ResponseMediaStreamOptions } from "@scrypted/sdk";
import { EventEmitter } from "stream";
import { RtpHeader, RtpPacket } from '../../../external/werift/packages/rtp/src/rtp/rtp';
import { H264Repacketizer } from "../../homekit/src/types/camera/h264-packetizer";
import { addRtpTimestamp, nextSequenceNumber } from "../../homekit/src/types/camera/jitter-buffer";
import { createAACRTPPayload } from "./au";
import { ParserSession, setupActivityTimer } from "./ffmpeg-rebroadcast";
import { parseFlvAudioTag, parseFlvVideoTag, VideoCodecId } from "./flv";
import { negotiateMediaStream } from "./rfc4571";
import { RtmpClient } from "./rtmp-client";
export type RtspChannelCodecMapping = { [key: number]: string };
export interface RtspSessionParserSpecific {
interleaved: Map<string, number>;
}
export async function startRtmpSession(console: Console, url: string, mediaStreamOptions: ResponseMediaStreamOptions, options: {
audioSoftMuted: boolean,
rtspRequestTimeout: number,
}): Promise<ParserSession<"rtsp">> {
let isActive = true;
const events = new EventEmitter();
// need this to prevent kill from throwing due to uncaught Error during cleanup
events.on('error', () => { });
const rtmpClient = new RtmpClient(url, console);
const cleanupSockets = () => {
rtmpClient.destroy();
}
let sessionKilled: any;
const killed = new Promise<void>(resolve => {
sessionKilled = resolve;
});
const kill = (error?: Error) => {
if (isActive) {
events.emit('killed');
events.emit('error', error || new Error('killed'));
}
isActive = false;
sessionKilled();
cleanupSockets();
};
rtmpClient.socket.on('close', () => {
kill(new Error('rtmp socket closed'));
});
rtmpClient.socket.on('error', e => {
kill(e);
});
const { resetActivityTimer } = setupActivityTimer('rtsp', kill, events, options?.rtspRequestTimeout);
try {
await rtmpClient.setup();
let sdp = `v=0
o=- 0 0 IN IP4 0.0.0.0
s=-
t=0 0
m=video 0 RTP/AVP 96
a=control:streamid=0
a=rtpmap:96 H264/90000`;
if (!options?.audioSoftMuted) {
sdp += `
m=audio 0 RTP/AVP 97
a=control:streamid=2
a=rtpmap:97 MPEG4-GENERIC/16000/1
a=fmtp:97 profile-level-id=1;mode=AAC-hbr;sizelength=13;indexlength=3;indexdeltalength=3; config=1408`;
}
sdp = sdp.split('\n').join('\r\n');
const start = async () => {
try {
let audioSequenceNumber = 0;
let videoSequenceNumber = 0;
const h264Repacketizer = new H264Repacketizer(console, 32000);
for await (const rtmpPacket of rtmpClient.readLoop()) {
if (!isActive)
break;
resetActivityTimer?.();
if (rtmpPacket.codec === 'audio') {
if (options?.audioSoftMuted)
continue;
const flv = parseFlvAudioTag(rtmpPacket.packet);
if (!flv.data?.length)
continue;
const header = new RtpHeader({
sequenceNumber: audioSequenceNumber,
timestamp: addRtpTimestamp(0, Math.floor(rtmpPacket.timestamp / 1000 * 16000)),
payloadType: 97,
marker: false,
});
audioSequenceNumber = nextSequenceNumber(audioSequenceNumber);
const audioPayload = createAACRTPPayload([flv.data]);
const rtp = new RtpPacket(header, audioPayload).serialize();
const prefix = Buffer.alloc(2);
prefix[0] = RTSP_FRAME_MAGIC;
prefix[1] = 2;
const length = Buffer.alloc(2);
length.writeUInt16BE(rtp.length, 0);
events.emit('rtsp', {
chunks: [Buffer.concat([prefix, length]), rtp],
type: 'aac',
});
continue;
}
if (rtmpPacket.codec !== 'video')
throw new Error('unknown rtmp codec ' + rtmpPacket.codec);
const flv = parseFlvVideoTag(rtmpPacket.packet);
if (flv.codecId !== VideoCodecId.H264)
throw new Error('unsupported rtmp video codec ' + flv.codecId);
const prefix = Buffer.alloc(2);
prefix[0] = RTSP_FRAME_MAGIC;
prefix[1] = 0;
const nalus: Buffer[] = [];
if (flv.nalus) {
nalus.push(...flv.nalus);
}
else if (flv.avcDecoderConfigurationRecord?.sps && flv.avcDecoderConfigurationRecord.pps) {
// sps/pps records with multiple entries of either are not supported
if (flv.avcDecoderConfigurationRecord.sps.length > 1 || flv.avcDecoderConfigurationRecord.pps.length > 1)
throw new Error('rtmp sps/pps contains multiple nalus, which is unsupported');
nalus.push(flv.avcDecoderConfigurationRecord.sps[0]);
nalus.push(flv.avcDecoderConfigurationRecord.pps[0]);
}
else {
throw new Error('rtmp h264 nalus missing');
}
for (const nalu of nalus) {
const header = new RtpHeader({
sequenceNumber: videoSequenceNumber,
timestamp: addRtpTimestamp(0, Math.floor(rtmpPacket.timestamp / 1000 * 90000)),
payloadType: 96,
marker: true,
});
videoSequenceNumber = nextSequenceNumber(videoSequenceNumber);
const rtp = new RtpPacket(header, nalu);
const packets = h264Repacketizer.repacketize(rtp);
for (const packet of packets) {
const length = Buffer.alloc(2);
const rtp = packet.serialize();
length.writeUInt16BE(rtp.length, 0);
events.emit('rtsp', {
chunks: [Buffer.concat([prefix, length]), rtp],
type: 'h264',
});
}
}
}
}
catch (e) {
kill(e);
}
finally {
kill(new Error('rtsp read loop exited'));
}
};
// this return block is intentional, to ensure that the remaining code happens sync.
return (() => {
return {
start,
sdp: Promise.resolve(sdp),
get isActive() { return isActive },
kill(error?: Error) {
kill(error);
},
killed,
resetActivityTimer,
negotiateMediaStream: (requestMediaStream, inputVideoCodec, inputAudioCodec) => {
return negotiateMediaStream(sdp, mediaStreamOptions, inputVideoCodec, inputAudioCodec, requestMediaStream);
},
emit(container: 'rtsp', chunk: StreamChunk) {
events.emit(container, chunk);
return this;
},
on(event: string, cb: any) {
events.on(event, cb);
return this;
},
once(event: any, cb: any) {
events.once(event, cb);
return this;
},
removeListener(event, cb) {
events.removeListener(event, cb);
return this;
}
}
})();
}
catch (e) {
cleanupSockets();
throw e;
}
}

View File

@@ -1,6 +1,5 @@
import { getH264DecoderArgs } from "@scrypted/common/src/ffmpeg-hardware-acceleration";
import { MixinDeviceBase, ResponseMediaStreamOptions, VideoCamera } from "@scrypted/sdk";
import { StorageSetting, StorageSettings } from "@scrypted/sdk/storage-settings";
import { StorageSetting, StorageSettings, StorageSettingsDict } from "@scrypted/sdk/storage-settings";
export type StreamStorageSetting = StorageSetting & {
prefersPrebuffer: boolean,
@@ -13,6 +12,11 @@ function getStreamTypes<T extends string>(storageSettings: StreamStorageSettings
return storageSettings;
}
function msoHasJpegCodec(mso: ResponseMediaStreamOptions) {
const lower = mso?.video?.codec?.toLowerCase();
return lower?.includes('jpeg') || lower?.includes('jpg');
}
function pickBestStream(msos: ResponseMediaStreamOptions[], resolution: number) {
if (!msos)
return;
@@ -20,6 +24,10 @@ function pickBestStream(msos: ResponseMediaStreamOptions[], resolution: number)
let best: ResponseMediaStreamOptions;
let bestScore: number;
for (const mso of msos) {
if (msoHasJpegCodec(mso)) {
continue;
}
const score = Math.abs(mso.video?.width * mso.video?.height - resolution);
if (!best || score < bestScore) {
best = mso;
@@ -80,12 +88,25 @@ export function createStreamSettings(device: MixinDeviceBase<VideoCamera>) {
});
const storageSettings = new StorageSettings(device, {
hasMjpeg: {
subgroup,
title: 'Invalid Codecs',
type: 'html',
defaultValue: '<p style="color: red;">MJPEG streams detected. These streams are incompatible with Scrypted and should be reconfigured to H264 using the camera\'s web admin if possible.</p>',
hide: true,
},
noAudio: {
subgroup,
title: 'No Audio',
description: 'Enable this setting if the camera does not have audio or to mute audio.',
type: 'boolean',
},
privacyMode: {
group: 'Privacy',
title: 'Disable Stream',
description: 'Disable this camera\'s stream to all services provided by Scrypted.',
type: 'boolean',
},
enabledStreams: {
subgroup,
title: 'Prebuffered Streams',
@@ -191,6 +212,15 @@ export function createStreamSettings(device: MixinDeviceBase<VideoCamera>) {
try {
const msos = await device.mixinDevice.getVideoStreamOptions();
const hasMjpeg: StorageSettingsDict<'hasMjpeg'> = msos?.find(mso => msoHasJpegCodec(mso))
? {
hasMjpeg: {
hide: false,
},
}
: undefined;
enabledStreams = {
defaultValue: getDefaultPrebufferedStreams(msos)?.map(mso => mso.name || mso.id),
choices: msos.map((mso, index) => mso.name || mso.id),
@@ -205,11 +235,13 @@ export function createStreamSettings(device: MixinDeviceBase<VideoCamera>) {
lowResolutionStream: createStreamOptions(streamTypes.lowResolutionStream, msos),
recordingStream: createStreamOptions(streamTypes.recordingStream, msos),
remoteRecordingStream: createStreamOptions(streamTypes.remoteRecordingStream, msos),
...hasMjpeg,
}
}
else {
return {
enabledStreams,
...hasMjpeg,
}
}
}

View File

@@ -1,12 +1,12 @@
{
"name": "@scrypted/reolink",
"version": "0.0.108",
"version": "0.0.111",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "@scrypted/reolink",
"version": "0.0.108",
"version": "0.0.111",
"license": "Apache",
"dependencies": {
"@scrypted/common": "file:../../common",
@@ -15,7 +15,7 @@
},
"devDependencies": {
"@scrypted/sdk": "file:../../sdk",
"@types/node": "^22.0.2"
"@types/node": "^22.19.1"
}
},
"../../common": {
@@ -24,40 +24,43 @@
"license": "ISC",
"dependencies": {
"@scrypted/sdk": "file:../sdk",
"@scrypted/types": "^0.5.27",
"http-auth-utils": "^5.0.1",
"typescript": "^5.5.3"
},
"devDependencies": {
"@types/node": "^20.11.0",
"@types/node": "^20.19.11",
"monaco-editor": "^0.50.0",
"ts-node": "^10.9.2"
}
},
"../../sdk": {
"name": "@scrypted/sdk",
"version": "0.3.108",
"version": "0.5.52",
"dev": true,
"license": "ISC",
"dependencies": {
"@babel/preset-typescript": "^7.26.0",
"@rollup/plugin-commonjs": "^28.0.1",
"@babel/preset-typescript": "^7.27.1",
"@rollup/plugin-commonjs": "^28.0.9",
"@rollup/plugin-json": "^6.1.0",
"@rollup/plugin-node-resolve": "^15.3.0",
"@rollup/plugin-typescript": "^12.1.1",
"@rollup/plugin-node-resolve": "^16.0.1",
"@rollup/plugin-terser": "^0.4.4",
"@rollup/plugin-typescript": "^12.3.0",
"@rollup/plugin-virtual": "^3.0.2",
"adm-zip": "^0.5.16",
"axios": "^1.7.8",
"babel-loader": "^9.2.1",
"axios": "^1.10.0",
"babel-loader": "^10.0.0",
"babel-plugin-const-enum": "^1.2.0",
"ncp": "^2.0.0",
"openai": "^6.1.0",
"raw-loader": "^4.0.2",
"rimraf": "^6.0.1",
"rollup": "^4.27.4",
"rollup": "^4.52.5",
"tmp": "^0.2.3",
"ts-loader": "^9.5.1",
"ts-loader": "^9.5.4",
"tslib": "^2.8.1",
"typescript": "^5.6.3",
"webpack": "^5.96.1",
"typescript": "^5.9.3",
"webpack": "^5.99.9",
"webpack-bundle-analyzer": "^4.10.2"
},
"bin": {
@@ -70,9 +73,9 @@
"scrypted-webpack": "bin/scrypted-webpack.js"
},
"devDependencies": {
"@types/node": "^22.10.1",
"@types/node": "^24.9.2",
"ts-node": "^10.9.2",
"typedoc": "^0.26.11"
"typedoc": "^0.28.14"
}
},
"../onvif/onvif": {
@@ -107,12 +110,13 @@
"link": true
},
"node_modules/@types/node": {
"version": "22.0.2",
"resolved": "https://registry.npmjs.org/@types/node/-/node-22.0.2.tgz",
"integrity": "sha512-yPL6DyFwY5PiMVEwymNeqUTKsDczQBJ/5T7W/46RwLU/VH+AA8aT5TZkvBviLKLbbm0hlfftEkGrNzfRk/fofQ==",
"version": "22.19.1",
"resolved": "https://registry.npmjs.org/@types/node/-/node-22.19.1.tgz",
"integrity": "sha512-LCCV0HdSZZZb34qifBsyWlUmok6W7ouER+oQIGBScS8EsZsQbrtFTUrDX4hOl+CS6p7cnNC4td+qrSVGSCTUfQ==",
"dev": true,
"license": "MIT",
"dependencies": {
"undici-types": "~6.11.1"
"undici-types": "~6.21.0"
}
},
"node_modules/onvif": {
@@ -120,10 +124,11 @@
"link": true
},
"node_modules/undici-types": {
"version": "6.11.1",
"resolved": "https://registry.npmjs.org/undici-types/-/undici-types-6.11.1.tgz",
"integrity": "sha512-mIDEX2ek50x0OlRgxryxsenE5XaQD4on5U2inY7RApK3SOJpofyw7uW2AyfMKkhAxXIceo2DeWGVGwyvng1GNQ==",
"dev": true
"version": "6.21.0",
"resolved": "https://registry.npmjs.org/undici-types/-/undici-types-6.21.0.tgz",
"integrity": "sha512-iwDZqg0QAGrg9Rav5H4n0M64c3mkR59cJ6wQp+7C4nI0gsmExaedaYLNO44eT4AtBBwjbTiGPMlt2Md0T9H9JQ==",
"dev": true,
"license": "MIT"
}
}
}


@@ -1,6 +1,6 @@
{
"name": "@scrypted/reolink",
"version": "0.0.108",
"version": "0.0.111",
"description": "Reolink Plugin for Scrypted",
"author": "Scrypted",
"license": "Apache",
@@ -42,7 +42,7 @@
"onvif": "file:../onvif/onvif"
},
"devDependencies": {
"@types/node": "^22.0.2",
"@scrypted/sdk": "file:../../sdk"
"@scrypted/sdk": "file:../../sdk",
"@types/node": "^22.19.1"
}
}


@@ -7,7 +7,9 @@ import { OnvifCameraAPI, OnvifEvent, connectCameraAPI } from './onvif-api';
import { listenEvents } from './onvif-events';
import { OnvifIntercom } from './onvif-intercom';
import { DevInfo } from './probe';
import { AIState, Enc, isDeviceNvr, ReolinkCameraClient } from './reolink-api';
import { AIState, Enc, isDeviceHomeHub, isDeviceNvr, ReolinkCameraClient } from './reolink-api';
import { ReolinkNvrDevice } from './nvr/nvr';
import { ReolinkNvrClient } from './nvr/api';
class ReolinkCameraSiren extends ScryptedDeviceBase implements OnOff {
sirenTimeout: NodeJS.Timeout;
@@ -237,6 +239,7 @@ class ReolinkCamera extends RtspSmartCamera implements Camera, DeviceProvider, R
await this.updateAbilities();
await this.updateDevice();
await this.reportDevices();
await this.checkNetData();
this.startDevicesStatesPolling();
})()
.catch(e => {
@@ -464,6 +467,26 @@ class ReolinkCamera extends RtspSmartCamera implements Camera, DeviceProvider, R
return batteryConfigVer > 0;
}
hasRtsp() {
const rtspAbility = this.storageSettings.values.abilities?.value?.Ability?.supportRtspEnable;
return !!rtspAbility && rtspAbility.ver !== 0;
}
hasRtmp() {
const rtmpAbility = this.storageSettings.values.abilities?.value?.Ability?.supportRtmpEnable;
return !!rtmpAbility && rtmpAbility.ver !== 0;
}
hasOnvif() {
const onvifAbility = this.storageSettings.values.abilities?.value?.Ability?.supportOnvifEnable;
return !!onvifAbility && onvifAbility.ver !== 0;
}
hasHttps() {
const httpsAbility = this.storageSettings.values.abilities?.value?.Ability?.supportHttpsEnable;
return !!httpsAbility && httpsAbility.ver !== 0;
}
async updateDevice() {
const interfaces = this.provider.getInterfaces();
let type = ScryptedDeviceType.Camera;
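
The four `hasXxx()` helpers added above all follow the same shape: look up a `supportXxxEnable` entry in the camera's advertised abilities and treat `ver === 0` as "listed but unsupported". A hedged sketch of the shared pattern, with the `Abilities` type inferred from the fields the plugin reads rather than taken from a published Reolink schema:

```ts
// Sketch of the repeated ability check; the Abilities shape is inferred
// from the lookups above, not from Reolink documentation.
interface AbilityEntry {
    ver?: number;
}
interface Abilities {
    Ability?: Record<string, AbilityEntry | undefined>;
}

function hasAbility(abilities: Abilities | undefined, key: string): boolean {
    const entry = abilities?.Ability?.[key];
    // Reolink appears to report ver === 0 for "present but unsupported".
    return !!entry && entry.ver !== 0;
}

// hasRtsp() above is then equivalent to:
//   hasAbility(this.storageSettings.values.abilities?.value, 'supportRtspEnable')
```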
@@ -604,6 +627,7 @@ class ReolinkCamera extends RtspSmartCamera implements Camera, DeviceProvider, R
const od: ObjectsDetected = {
timestamp: Date.now(),
detections: [],
sourceId: this.pluginId
};
for (const c of classes) {
const { alarm_state } = ai.value[c];
@@ -661,6 +685,7 @@ class ReolinkCamera extends RtspSmartCamera implements Camera, DeviceProvider, R
score: 1,
}
],
sourceId: this.pluginId
};
sdk.deviceManager.onDeviceEvent(this.nativeId, ScryptedInterface.ObjectDetector, od);
}
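
Both detection paths above now stamp `sourceId: this.pluginId` onto the `ObjectsDetected` payload, letting consumers distinguish camera-native AI events from detections produced by other plugins on the same camera. A minimal sketch of emitting such an event through the SDK, with the nativeId and detection values illustrative:

```ts
import sdk, { ObjectsDetected, ScryptedInterface } from '@scrypted/sdk';

// Illustrative values; in the plugin these come from the camera instance.
const nativeId = 'camera-1';
const pluginId = '@scrypted/reolink';

const od: ObjectsDetected = {
    timestamp: Date.now(),
    detections: [{ className: 'person', score: 1 }],
    // sourceId attributes the event to this plugin rather than to a
    // separate object detection plugin.
    sourceId: pluginId,
};
sdk.deviceManager.onDeviceEvent(nativeId, ScryptedInterface.ObjectDetector, od);
```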
@@ -842,7 +867,9 @@ class ReolinkCamera extends RtspSmartCamera implements Camera, DeviceProvider, R
// anecdotally, encoders of type h265 do not have a working RTMP main stream.
const mainEncType = this.storageSettings.values.abilities?.value?.Ability?.abilityChn?.[rtspChannel]?.mainEncType?.ver;
if (live === 2) {
if (isDeviceHomeHub(deviceInfo)) {
streams.push(rtspMain, rtspSub);
} else if (live === 2) {
if (mainEncType === 1) {
streams.push(rtmpSub, rtspMain, rtspSub);
}
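
The reworked branch above short-circuits stream selection for Home Hub devices before any RTMP candidates are considered. A condensed sketch of the resulting decision order; the non-h265 ordering is an assumption, since the diff only shows the `mainEncType === 1` case:

```ts
// Condensed sketch of the stream-selection order above. Stream names are
// placeholders for the URL descriptors the plugin actually builds.
type Stream = 'rtmpMain' | 'rtmpSub' | 'rtspMain' | 'rtspSub';

function pickStreams(isHomeHub: boolean, live: number, mainEncType?: number): Stream[] {
    if (isHomeHub) {
        // Home Hubs get RTSP only; RTMP is disabled on them entirely
        // (see checkNetData below).
        return ['rtspMain', 'rtspSub'];
    }
    if (live === 2) {
        // h265 main encoders (mainEncType === 1) reportedly lack a working
        // RTMP main stream, so rtmpMain is dropped from the candidates.
        if (mainEncType === 1)
            return ['rtmpSub', 'rtspMain', 'rtspSub'];
        // Assumed ordering for other encoders; not shown in this hunk.
        return ['rtmpMain', 'rtmpSub', 'rtspMain', 'rtspSub'];
    }
    return ['rtspMain', 'rtspSub'];
}
```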
@@ -1037,6 +1064,53 @@ class ReolinkCamera extends RtspSmartCamera implements Camera, DeviceProvider, R
});
}
async checkNetData() {
try {
const api = this.getClientWithToken();
const { netData } = await api.getNetData();
this.console.log('netData', JSON.stringify(netData));
const deviceInfo = this.storageSettings.values.deviceInfo;
const isHomeHub = isDeviceHomeHub(deviceInfo);
const shouldDisableHttps = this.hasHttps() ? netData.httpsEnable === 1 : false;
const shouldEnableRtmp = this.hasRtmp() ? (!isHomeHub && netData.rtmpEnable === 0) : false;
const shouldDisableRtmp = this.hasRtmp() ? (isHomeHub && netData.rtmpEnable === 1) : false;
const shouldEnableRtsp = this.hasRtsp() ? netData.rtspEnable === 0 : false;
const shouldEnableOnvif = this.hasOnvif() ? netData.onvifEnable === 0 : false;
this.console.log(`NetData checks: shouldDisableHttps: ${shouldDisableHttps}, shouldEnableRtmp: ${shouldEnableRtmp}, shouldEnableRtsp: ${shouldEnableRtsp}, shouldEnableOnvif: ${shouldEnableOnvif}, shouldDisableRtmp: ${shouldDisableRtmp}`);
if (shouldDisableHttps || shouldEnableRtmp || shouldEnableRtsp || shouldEnableOnvif || shouldDisableRtmp) {
const newNetData = {
...netData
};
if (shouldDisableHttps) {
newNetData.httpsEnable = 0;
}
if (shouldEnableRtmp) {
newNetData.rtmpEnable = 1;
}
if (shouldDisableRtmp) {
newNetData.rtmpEnable = 0;
}
if (shouldEnableRtsp) {
newNetData.rtspEnable = 1;
}
if (shouldEnableOnvif) {
newNetData.onvifEnable = 1;
}
this.console.log(`Fixing netdata settings: ${JSON.stringify(newNetData)}`);
await api.setNetData(newNetData);
}
} catch (e) {
this.console.error('Error in checkNetData', e);
}
}
async getDevice(nativeId: string): Promise<any> {
if (nativeId.endsWith('-siren')) {
this.siren ||= new ReolinkCameraSiren(this, nativeId);
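
`checkNetData()` above is a reconcile loop in miniature: read the camera's current protocol flags, compute the flags the plugin wants (RTSP and ONVIF on, HTTPS off, RTMP on except for Home Hubs), and write back only when something differs. The same logic can be stated table-style; this sketch omits the `hasRtmp()`/`hasRtsp()` ability gating for brevity and infers the `NetData` shape from the fields read above:

```ts
// Table-driven restatement of the reconcile step in checkNetData.
// NetData's shape is inferred from the fields the plugin touches.
interface NetData {
    httpsEnable: number;
    rtmpEnable: number;
    rtspEnable: number;
    onvifEnable: number;
}

function reconcileNetData(netData: NetData, isHomeHub: boolean): NetData | undefined {
    const desired: NetData = {
        ...netData,
        httpsEnable: 0,                // the plugin turns HTTPS off when it is enabled
        rtmpEnable: isHomeHub ? 0 : 1, // Home Hubs must not use RTMP
        rtspEnable: 1,
        onvifEnable: 1,
    };
    const changed = (Object.keys(desired) as (keyof NetData)[])
        .some(k => desired[k] !== netData[k]);
    // Only return a payload when a setNetData write is actually needed.
    return changed ? desired : undefined;
}
```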
@@ -1062,8 +1136,10 @@ class ReolinkCamera extends RtspSmartCamera implements Camera, DeviceProvider, R
}
class ReolinkProvider extends RtspProvider {
nvrDevices = new Map<string, ReolinkNvrDevice>();
getScryptedDeviceCreator(): string {
return 'Reolink Camera';
return 'Reolink Camera/NVR';
}
getAdditionalInterfaces() {
@@ -1077,10 +1153,31 @@ class ReolinkProvider extends RtspProvider {
];
}
getDevice(nativeId: string) {
if (nativeId.endsWith('-reolink-nvr')) {
let ret = this.nvrDevices.get(nativeId);
if (!ret) {
ret = new ReolinkNvrDevice(nativeId, this);
this.nvrDevices.set(nativeId, ret);
}
return ret;
} else {
return super.getDevice(nativeId);
}
}
async createDevice(settings: DeviceCreatorSettings, nativeId?: string): Promise<string> {
const isNvr = settings.isNvr?.toString() === 'true';
if (isNvr) {
return this.createNvrDeviceFromSettings(settings);
}
const httpAddress = `${settings.ip}:${settings.httpPort || 80}`;
let info: DeviceInformation = {};
const skipValidate = settings.skipValidate?.toString() === 'true';
const username = settings.username?.toString();
const password = settings.password?.toString();
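
NVR devices are keyed by a `-reolink-nvr` suffix on the nativeId, which is how `getDevice` above decides between the cached NVR path and the base-class camera path. A minimal get-or-create sketch of that routing, with `DeviceShim` standing in for `ReolinkNvrDevice`:

```ts
// Suffix-routed, cached device lookup, as in ReolinkProvider.getDevice.
class DeviceShim {
    constructor(public nativeId: string) { }
}

const nvrDevices = new Map<string, DeviceShim>();

function getDevice(nativeId: string): DeviceShim {
    if (nativeId.endsWith('-reolink-nvr')) {
        // Construct once, then reuse the cached instance on later lookups.
        let ret = nvrDevices.get(nativeId);
        if (!ret) {
            ret = new DeviceShim(nativeId);
            nvrDevices.set(nativeId, ret);
        }
        return ret;
    }
    // Cameras fall through to the base provider in the real plugin.
    return new DeviceShim(nativeId);
}
```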
@@ -1152,6 +1249,12 @@ class ReolinkProvider extends RtspProvider {
title: 'IP Address',
placeholder: '192.168.2.222',
},
{
key: 'isNvr',
title: 'Is NVR',
description: 'Enable when adding a Reolink NVR. Cameras connected to the NVR can then be added as child devices.',
type: 'boolean',
},
{
subgroup: 'Advanced',
key: 'rtspChannel',
@@ -1180,6 +1283,49 @@ class ReolinkProvider extends RtspProvider {
createCamera(nativeId: string) {
return new ReolinkCamera(nativeId, this);
}
async createNvrDeviceFromSettings(settings: DeviceCreatorSettings) {
const username = settings.username?.toString();
const password = settings.password?.toString();
const ip = settings.ip?.toString();
const httpPort = settings.httpPort;
const rtspPort = settings.rtspPort;
const httpAddress = `${ip}:${httpPort || 80}`;
const client = new ReolinkNvrClient(httpAddress, username, password, this.console);
const { devInfo } = await client.getHubInfo();
if (!devInfo) {
throw new Error('Unable to connect to Reolink NVR. Please verify the IP address, port, username, and password are correct.');
}
const { detail, name } = devInfo;
const nativeId = `${detail}-reolink-nvr`;
await sdk.deviceManager.onDeviceDiscovered({
nativeId,
name,
interfaces: [
ScryptedInterface.Settings,
ScryptedInterface.DeviceDiscovery,
ScryptedInterface.DeviceProvider,
ScryptedInterface.Reboot,
],
type: ScryptedDeviceType.API,
});
const nvrDevice = this.getDevice(nativeId);
nvrDevice.storageSettings.values.ipAddress = ip;
nvrDevice.storageSettings.values.username = username;
nvrDevice.storageSettings.values.password = password;
nvrDevice.storageSettings.values.httpPort = httpPort;
nvrDevice.storageSettings.values.rtspPort = rtspPort;
nvrDevice.updateDeviceInfo(devInfo);
return nativeId;
}
}
export default ReolinkProvider;
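
For reference, a hedged sketch of the settings a user would submit through the 'Reolink Camera/NVR' creator to reach `createNvrDeviceFromSettings`; all values are placeholders:

```ts
// Placeholder creator settings that route createDevice to the NVR path.
const settings = {
    ip: '192.168.2.222',
    httpPort: 80,
    rtspPort: 554,
    username: 'admin',
    password: 'secret', // placeholder credential
    isNvr: true,        // triggers createNvrDeviceFromSettings
};
// createDevice(settings) then resolves to a nativeId of the form
// `${devInfo.detail}-reolink-nvr`, under which the NVR's cameras are exposed.
```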

Some files were not shown because too many files have changed in this diff.