Compare commits

...

70 Commits

Author SHA1 Message Date
dependabot[bot]
dc96ed35b2 build(deps): Bump react-dom from 19.2.3 to 19.2.4
Bumps [react-dom](https://github.com/facebook/react/tree/HEAD/packages/react-dom) from 19.2.3 to 19.2.4.
- [Release notes](https://github.com/facebook/react/releases)
- [Changelog](https://github.com/facebook/react/blob/main/CHANGELOG.md)
- [Commits](https://github.com/facebook/react/commits/v19.2.4/packages/react-dom)

---
updated-dependencies:
- dependency-name: react-dom
  dependency-version: 19.2.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-01-30 19:55:59 +00:00
github-actions[bot]
351ba09f4e chore: bump version to 0.5.6 (VERSION + package.json) (#482)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-01-29 15:08:41 +00:00
Michel Roegl-Brunner
580986abfa Merge pull request #481 from community-scripts/fix/466
fix: resolve server from DB for SSH when client sends no ssh_key_path (fixes #466)
2026-01-29 16:03:08 +01:00
Michel Rögl-Brunner
e1d270d52c fix: resolve server from DB for SSH when client sends no ssh_key_path (fixes #466)
- Add resolveServerForSSH() to load full server (including ssh_key_path) from DB
  when WebSocket server has id but key auth without valid ssh_key_path
- Call resolver in handleMessage for all start flows (clone, backup, update,
  shell, script) so Shell and Update over SSH work with key auth
- Extend ServerInfo typedef with auth_type, ssh_key_path for TypeScript
2026-01-29 15:59:58 +01:00
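A minimal TypeScript sketch of the resolver flow this commit describes; the ServerInfo shape and the db helper are illustrative assumptions, not the repo's actual code:

interface ServerInfo {
  id?: number;
  ip: string;
  user: string;
  auth_type?: "password" | "key";
  ssh_key_path?: string | null;
}

// If the client sent key auth without a usable ssh_key_path, reload the
// full server record (including ssh_key_path) from the database.
function resolveServerForSSH(
  server: ServerInfo,
  db: { getServerById(id: number): ServerInfo | undefined },
): ServerInfo {
  const needsLookup =
    server.id !== undefined && server.auth_type === "key" && !server.ssh_key_path;
  return needsLookup ? (db.getServerById(server.id!) ?? server) : server;
}

Calling this once in handleMessage before any start flow (clone, backup, update, shell, script) is what lets key-based SSH work even when the client omits the key path.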
Michel Roegl-Brunner
20dbcae42a Merge pull request #480 from community-scripts/fix/405
fix: delete local JSON files when removed from remote repo (fixes #405)
2026-01-29 15:47:31 +01:00
Michel Rögl-Brunner
8e8c724392 fix: delete local JSON files when removed from remote repo (fixes #405)
- Add deleteLocalFilesRemovedFromRepo() to remove local script JSON files
  that belong to the synced repo but are no longer in the remote list
- Call it in syncJsonFilesForRepo() before find/sync so stale scripts
  no longer appear and download attempts don't 404
- Extend sync return types with deletedFiles; aggregate in syncJsonFiles()
  and include removed count in success message
2026-01-29 15:44:45 +01:00
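A rough TypeScript sketch of the deletion pass (the directory layout and the remote name list are assumptions for illustration; the real function also scopes the check to files belonging to the synced repo):

import fs from "node:fs";
import path from "node:path";

// Delete local script JSON files that no longer exist in the remote listing;
// return the deleted names so the sync summary can report a removed count.
function deleteLocalFilesRemovedFromRepo(
  localDir: string,
  remoteFileNames: Set<string>,
): string[] {
  const deleted: string[] = [];
  for (const file of fs.readdirSync(localDir)) {
    if (file.endsWith(".json") && !remoteFileNames.has(file)) {
      fs.unlinkSync(path.join(localDir, file));
      deleted.push(file);
    }
  }
  return deleted;
}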
Michel Roegl-Brunner
201b33ec84 Merge pull request #479 from community-scripts/fix/464
fix: use node-specific Proxmox config paths for VM vs LXC (fixes #464)
2026-01-29 15:31:57 +01:00
Michel Rögl-Brunner
6d2df9929c fix: use node-specific Proxmox config paths for VM vs LXC detection
- isVM(): check /etc/pve/nodes/<server.name>/qemu-server and lxc first, fallback to /etc/pve/qemu-server and lxc for single-node
- checkConfigAndExtractInfo, config-existence checks, getContainerHostname, addClonedContainerToDatabase: use node-specific paths
- syncLXCConfig/updateLXCConfig: use node-specific LXC config path
- server.js clone flow: use node-specific config path

Fixes #464
2026-01-29 15:29:35 +01:00
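The path logic boils down to preferring the per-node tree and keeping the single-node shorthand as a fallback; a hedged TypeScript sketch (helper name assumed):

// /etc/pve/nodes/<node>/{qemu-server,lxc}/<vmid>.conf on clusters;
// /etc/pve/{qemu-server,lxc}/<vmid>.conf remains the single-node fallback.
function configPaths(nodeName: string, vmid: number, type: "qemu-server" | "lxc"): string[] {
  return [
    `/etc/pve/nodes/${nodeName}/${type}/${vmid}.conf`,
    `/etc/pve/${type}/${vmid}.conf`,
  ];
}

Checking the candidates in that order is what lets isVM() and the config-existence checks give the right answer on multi-node clusters.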
Michel Roegl-Brunner
f33504baf5 Merge pull request #478 from community-scripts/fix/312
fix: handle special characters in SSH password/passphrase (Fixes #312)
2026-01-29 15:20:44 +01:00
Michel Rögl-Brunner
4bc5f4d6ad fix: handle special characters in SSH password/passphrase (Fixes #312)
- Use sshpass -f with temp file in transferScriptsFolder so password/passphrase
  never go through shell; safe for {, $, ", etc.
- Pass password via SSH_PASSWORD env in testWithExpect instead of embedding
  in script
- Add ServerForm hint: SSH key recommended; special chars supported
2026-01-29 15:18:41 +01:00
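The core trick, sketched in TypeScript (function name and call site are illustrative): write the secret to a mode-0600 temp file and hand it to sshpass -f, so it never appears in argv or inside a shell string and characters like {, $ and " need no escaping:

import { spawnSync } from "node:child_process";
import { mkdtempSync, writeFileSync, rmSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

function scpWithPassword(password: string, src: string, dest: string): number {
  const dir = mkdtempSync(join(tmpdir(), "ssh-"));
  const pwFile = join(dir, "pw");
  writeFileSync(pwFile, password, { mode: 0o600 }); // readable only by us
  try {
    // sshpass reads the password from the file; nothing secret in the command line
    const res = spawnSync("sshpass", ["-f", pwFile, "scp", "-r", src, dest], {
      stdio: "inherit",
    });
    return res.status ?? 1;
  } finally {
    rmSync(dir, { recursive: true, force: true }); // always remove the temp file
  }
}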
Michel Rögl-Brunner
a52a897346 chore: update publish_release workflow to bump package.json version too 2026-01-29 14:44:53 +01:00
Michel Rögl-Brunner
1d585d4d3f Unf**k deps 2026-01-29 14:43:56 +01:00
Michel Rögl-Brunner
d4b8ceb581 Merge fix/362: chore deps and overrides (next >=16.1.5, hono >=4.11.7, lodash >=4.17.23) 2026-01-29 14:29:46 +01:00
Michel Rögl-Brunner
7079c236ab chore: bump deps and overrides (next >=16.1.5, hono >=4.11.7, lodash >=4.17.23) 2026-01-29 14:27:56 +01:00
Michel Roegl-Brunner
0678aba911 Merge pull request #463 from community-scripts/dependabot/npm_and_yarn/npm_and_yarn-eb4f97c0ca
build(deps): Bump hono from 4.10.6 to 4.11.4 in the npm_and_yarn group across 1 directory
2026-01-29 14:16:56 +01:00
Michel Roegl-Brunner
ffdd742aa0 Merge pull request #476 from community-scripts/fix/362
Fix #362: auto-detect race, VM shell path, UI hints
2026-01-29 14:15:21 +01:00
Michel Roegl-Brunner
f4de214a83 Merge pull request #461 from community-scripts/dependabot/npm_and_yarn/testing-library/react-16.3.2
build(deps-dev): Bump @testing-library/react from 16.3.1 to 16.3.2
2026-01-29 14:14:26 +01:00
Michel Roegl-Brunner
3b0da19cd1 Merge pull request #460 from community-scripts/dependabot/npm_and_yarn/tanstack/react-query-5.90.19
build(deps): Bump @tanstack/react-query from 5.90.18 to 5.90.19
2026-01-29 14:14:18 +01:00
Michel Roegl-Brunner
08bc4ab37b Merge pull request #459 from community-scripts/dependabot/npm_and_yarn/typescript-eslint-8.53.1
build(deps-dev): Bump typescript-eslint from 8.53.0 to 8.53.1
2026-01-29 14:14:08 +01:00
Michel Roegl-Brunner
d2e7477898 Merge pull request #458 from community-scripts/dependabot/npm_and_yarn/better-sqlite3-12.6.2
build(deps): Bump better-sqlite3 from 12.6.0 to 12.6.2
2026-01-29 14:13:59 +01:00
Michel Rögl-Brunner
b5c6beafff Fix #362: auto-detect race, VM shell path, UI hints
- Defer resolve in autoDetectLXCContainers (pct/qm list) so stdout is complete
- Pass containerType when opening shell; use qm terminal for VMs, pct enter for LXC
- Add UI hint for VM shell (serial console, Ctrl+O, serial port requirement)
- Rename auto-detect to Containers & VMs and update help text

Fixes #362
2026-01-29 14:12:49 +01:00
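The shell-path selection reduces to one branch; a small sketch (illustrative helper, the real wiring lives in the WebSocket shell handler):

// VMs get a serial console via `qm terminal` (requires a configured serial
// port, hence the UI hint); LXC containers use `pct enter`.
function shellCommand(containerType: "vm" | "lxc", vmid: number): string[] {
  return containerType === "vm"
    ? ["qm", "terminal", String(vmid)]
    : ["pct", "enter", String(vmid)];
}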
Michel Roegl-Brunner
a34566651a Merge pull request #475 from community-scripts/fix/438
Fix PBS certificate validation (Fixes #438)
2026-01-29 13:57:51 +01:00
Michel Rögl-Brunner
4628e67e5c Fix PBS certificate validation: pass PBS_FINGERPRINT, optional fingerprint for trusted CA
- Pass stored pbs_fingerprint as PBS_FINGERPRINT in login, snapshot list, and restore
- Allow empty fingerprint so trusted-CA PBS works without entering one
- Make fingerprint field optional in PBSCredentialsModal with updated helper text

Fixes #438
2026-01-29 13:55:53 +01:00
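A hedged sketch of the environment handling (helper name assumed): proxmox-backup-client reads PBS_PASSWORD and PBS_FINGERPRINT from the environment, and omitting an empty fingerprint lets a trusted-CA certificate validate through the normal TLS chain:

function pbsEnv(password: string, fingerprint?: string | null): NodeJS.ProcessEnv {
  const env: NodeJS.ProcessEnv = { ...process.env, PBS_PASSWORD: password };
  const fp = fingerprint?.trim();
  if (fp) env.PBS_FINGERPRINT = fp; // only pin when the user stored one
  return env;
}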
Michel Roegl-Brunner
578fa28461 Merge pull request #474 from community-scripts/fix/404
fix: allow domain names for APT Cacher in container creation UI
2026-01-29 13:42:31 +01:00
Michel Rögl-Brunner
9e6154b0de fix: allow domain names for APT Cacher in container creation UI
- Add validateHostname and validateAptCacherAddress (IPv4 or hostname)
- Use new validator for var_apt_cacher_ip; error message: Invalid IPv4 or hostname
- Label: APT Cacher host or IP; placeholder shows IP or hostname example

Fixes #404
2026-01-29 13:40:19 +01:00
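A minimal sketch of the combined validator (TypeScript; regexes are a standard IPv4/RFC-1123 pair, not copied from the repo):

const IPV4 = /^(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}$/;
// RFC 1123 hostname: dot-separated labels of letters, digits, and inner hyphens.
const HOSTNAME = /^(?=.{1,253}$)[a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?(\.[a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)*$/;

function validateAptCacherAddress(value: string): boolean {
  return IPV4.test(value) || HOSTNAME.test(value);
}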
Michel Roegl-Brunner
d29f71a92f Merge pull request #473 from community-scripts/fix/365
fix: detect app slug from LXC /usr/bin/update for port lookup
2026-01-29 13:28:36 +01:00
Michel Rögl-Brunner
aea14cda7e fix: detect app slug from LXC /usr/bin/update for port lookup
Resolve interface_port from community-scripts update file when hostname
differs from JSON slug (e.g. lxcpeanut vs peanut). Primary: slug parsed
from pct exec ... cat /usr/bin/update; fallback: hostname/suffix match.

Fixes #365
2026-01-29 13:26:29 +01:00
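A sketch of the primary detection step (TypeScript; function name assumed). The /usr/bin/update file that community-scripts writes contains a line like `bash -c "$(curl -fsSL .../ct/peanut.sh)"`, so the slug can be recovered from the ct/<slug>.sh URL even when the hostname was customized:

function parseSlugFromUpdateFile(contents: string): string | undefined {
  // Match the ct/<slug>.sh path inside the curl URL.
  return /\/ct\/([a-z0-9-]+)\.sh/.exec(contents)?.[1];
}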
Michel Roegl-Brunner
4893ccda6e Merge pull request #472 from community-scripts/feat/406
feat: private/custom git repos - GitHub, GitLab, Bitbucket, custom
2026-01-29 13:11:54 +01:00
Michel Rögl-Brunner
a56c625b4f feat: private/custom git repos - GitHub, GitLab, Bitbucket, custom
- Add repository URL validation for GitHub, GitLab, Bitbucket, and custom hosts
- Add git provider layer (listDirectory, downloadRawFile) for all providers
- Wire githubJsonService and scriptDownloader to use provider; sync/download from any supported source
- Update GeneralSettingsModal placeholder and help text; .env.example and env schema for GITLAB_TOKEN, BITBUCKET_APP_PASSWORD

Closes #406
2026-01-29 13:08:28 +01:00
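The provider layer can be pictured as a small interface (a sketch; method signatures are assumed from the commit message, not taken from the repo):

// Each provider implements the same two operations the sync needs, so
// githubJsonService and scriptDownloader never care whether the repo lives
// on GitHub, GitLab, Bitbucket, or a custom Git server.
interface GitProvider {
  listDirectory(repoUrl: string, dirPath: string): Promise<string[]>;
  downloadRawFile(repoUrl: string, filePath: string): Promise<string>;
}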
Michel Roegl-Brunner
54b2187f98 Merge pull request #471 from community-scripts/feat/419
feat: add TUN/TAP (VPN) option to container features in web GUI
2026-01-29 11:37:51 +01:00
Michel Rögl-Brunner
2f4e8606ed feat: add TUN/TAP (VPN) option to container features in web GUI
- Add var_tun to advanced defaults (default: no)
- Add TUN/TAP (VPN) dropdown in Container Features section for /dev/net/tun
- Enables Tailscale, WireGuard, OpenVPN in LXC containers via GUI
2026-01-29 11:32:23 +01:00
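Behind the dropdown, enabling TUN/TAP amounts to emitting the usual LXC device lines for /dev/net/tun; a hedged sketch (helper name assumed, config lines are the standard Proxmox LXC TUN enablement):

function tunConfigLines(enabled: boolean): string[] {
  if (!enabled) return [];
  return [
    "lxc.cgroup2.devices.allow: c 10:200 rwm", // char device 10:200 = /dev/net/tun
    "lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file",
  ];
}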
Michel Roegl-Brunner
ff5478dd72 Merge pull request #470 from community-scripts/feat/447
feat: Add Update all downloaded scripts button
2026-01-29 11:25:33 +01:00
Michel Rögl-Brunner
944a527972 fix: normalize failed item error type for TypeScript build 2026-01-29 11:22:43 +01:00
Michel Roegl-Brunner
c4479c1932 Merge pull request #469 from community-scripts/update_january_core
(core): Major update to upstream core functions with validation, IPv6 support, and Debian 13 fixes
2026-01-29 11:20:28 +01:00
CanbiZ (MickLesk)
9998e48621 fix(build.func): Fix typo - SD should use var_searchdomain not var_storage 2026-01-29 11:18:52 +01:00
Michel Rögl-Brunner
34eade3971 feat: add Update all downloaded scripts button
- Add bulk update button on Downloaded Scripts tab
- Use existing loadMultipleScripts API for all downloaded script slugs
- Confirmation modal before running (may take several minutes)
- Inline result: success/fail counts, hover for failed slugs
- Invalidate getAllDownloadedScripts and getScriptCardsWithCategories on success
2026-01-29 11:17:36 +01:00
CanbiZ (MickLesk)
82be47b959 refactor(core): Major update to core functions with validation, IPv6 support, and Debian 13 fixes
- alpine-tools.func: Complete rewrite with simplified structure and better error handling
- build.func: Add comprehensive validation functions (Container-ID, hostname, MAC, VLAN, MTU, IPv6, bridge, gateway, timezone, tags), storage space validation, improved password handling
- core.func: Add ensure_profile_loaded() and get_lxc_ip() functions, improved cleanup_lxc() with fallback error handling
- install.func: Fix Debian 13 LXC template bug (root owned by nobody), integrate get_lxc_ip()
- tools.func: Add IPv6 fallback support, improved NVIDIA GPU detection (including Open Kernel Module), Debian 13 Trixie support, new setup_meilisearch() function, completely reworked MariaDB setup with distribution package fallback
2026-01-29 11:01:12 +01:00
Michel Roegl-Brunner
9b77fc7ddb Merge pull request #468 from community-scripts/fix/465
fix: advanced modal SSH key discovery and tags delimiter
2026-01-29 10:27:56 +01:00
Michel Rögl-Brunner
db12ac4219 fix: advanced modal SSH key discovery and tags delimiter
- Allow ; as alternative to , for tags field (normalize on submit)
- Add GET /api/servers/[id]/discover-ssh-keys to find host SSH keys like native advanced mode
- Advanced modal: fetch discovered keys, dropdown to select + manual paste input
- Label/placeholder: Tags (comma or semicolon separated), e.g. tag1; tag2
2026-01-29 10:23:17 +01:00
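The tags normalization is a one-liner in spirit; a TypeScript sketch (function name assumed):

// Split on ',' or ';', trim each tag, drop empties, re-join comma-separated.
function normalizeTags(input: string): string {
  return input
    .split(/[;,]/)
    .map((tag) => tag.trim())
    .filter(Boolean)
    .join(",");
}

For example, normalizeTags("tag1; tag2") yields "tag1,tag2".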
dependabot[bot]
c06b8e6731 build(deps-dev): Bump typescript-eslint from 8.53.0 to 8.53.1
Bumps [typescript-eslint](https://github.com/typescript-eslint/typescript-eslint/tree/HEAD/packages/typescript-eslint) from 8.53.0 to 8.53.1.
- [Release notes](https://github.com/typescript-eslint/typescript-eslint/releases)
- [Changelog](https://github.com/typescript-eslint/typescript-eslint/blob/main/packages/typescript-eslint/CHANGELOG.md)
- [Commits](https://github.com/typescript-eslint/typescript-eslint/commits/v8.53.1/packages/typescript-eslint)

---
updated-dependencies:
- dependency-name: typescript-eslint
  dependency-version: 8.53.1
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-01-29 09:12:03 +00:00
dependabot[bot]
14e01513e3 build(deps): Bump @tanstack/react-query from 5.90.18 to 5.90.19
Bumps [@tanstack/react-query](https://github.com/TanStack/query/tree/HEAD/packages/react-query) from 5.90.18 to 5.90.19.
- [Release notes](https://github.com/TanStack/query/releases)
- [Changelog](https://github.com/TanStack/query/blob/main/packages/react-query/CHANGELOG.md)
- [Commits](https://github.com/TanStack/query/commits/@tanstack/react-query@5.90.19/packages/react-query)

---
updated-dependencies:
- dependency-name: "@tanstack/react-query"
  dependency-version: 5.90.19
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-01-29 09:11:59 +00:00
Michel Roegl-Brunner
f66d1db861 Merge pull request #467 from community-scripts/fix/398
feat(ConfigurationModal): add Container ID (CTID) and DNS Search Domain to advanced install
2026-01-29 10:10:45 +01:00
Michel Rögl-Brunner
886c3e37ff feat(ConfigurationModal): add Container ID (CTID) and DNS Search Domain to advanced install
- Add optional Container ID (CTID) field at top of advanced form (var_ctid)
- Add DNS Search Domain field in Network section (var_searchdomain)
- Validate CTID when set: integer >= 100; empty = use next available ID
- Both fields optional; empty values omitted from env so script uses defaults
2026-01-29 10:05:14 +01:00
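A minimal sketch of the CTID rule (TypeScript; names assumed): empty means "use the next available ID", anything else must be an integer of at least 100, since Proxmox reserves VMIDs below 100:

function validateCtid(value: string): string | null {
  const v = value.trim();
  if (v === "") return null; // valid: the script picks the next free ID
  if (!/^\d+$/.test(v) || Number(v) < 100) {
    return "CTID must be an integer >= 100";
  }
  return null;
}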
root
38deb09aa9 Add ctid option 2026-01-29 10:01:30 +01:00
dependabot[bot]
2e4634ca25 build(deps): Bump hono in the npm_and_yarn group across 1 directory
Bumps the npm_and_yarn group with 1 update in the / directory: [hono](https://github.com/honojs/hono).


Updates `hono` from 4.10.6 to 4.11.4
- [Release notes](https://github.com/honojs/hono/releases)
- [Commits](https://github.com/honojs/hono/compare/v4.10.6...v4.11.4)

---
updated-dependencies:
- dependency-name: hono
  dependency-version: 4.11.4
  dependency-type: indirect
  dependency-group: npm_and_yarn
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-01-23 22:50:38 +00:00
dependabot[bot]
a82bc02b15 build(deps-dev): Bump @testing-library/react from 16.3.1 to 16.3.2
Bumps [@testing-library/react](https://github.com/testing-library/react-testing-library) from 16.3.1 to 16.3.2.
- [Release notes](https://github.com/testing-library/react-testing-library/releases)
- [Changelog](https://github.com/testing-library/react-testing-library/blob/main/CHANGELOG.md)
- [Commits](https://github.com/testing-library/react-testing-library/compare/v16.3.1...v16.3.2)

---
updated-dependencies:
- dependency-name: "@testing-library/react"
  dependency-version: 16.3.2
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-01-19 21:19:12 +00:00
dependabot[bot]
2ea44e6b24 build(deps): Bump better-sqlite3 from 12.6.0 to 12.6.2
Bumps [better-sqlite3](https://github.com/WiseLibs/better-sqlite3) from 12.6.0 to 12.6.2.
- [Release notes](https://github.com/WiseLibs/better-sqlite3/releases)
- [Commits](https://github.com/WiseLibs/better-sqlite3/compare/v12.6.0...v12.6.2)

---
updated-dependencies:
- dependency-name: better-sqlite3
  dependency-version: 12.6.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-01-19 21:18:19 +00:00
dependabot[bot]
6d326dce1f build(deps): Bump next from 16.1.2 to 16.1.3 (#453)
Bumps [next](https://github.com/vercel/next.js) from 16.1.2 to 16.1.3.
- [Release notes](https://github.com/vercel/next.js/releases)
- [Changelog](https://github.com/vercel/next.js/blob/canary/release.js)
- [Commits](https://github.com/vercel/next.js/compare/v16.1.2...v16.1.3)

---
updated-dependencies:
- dependency-name: next
  dependency-version: 16.1.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-16 21:19:21 +01:00
dependabot[bot]
6c8e177d3e build(deps-dev): Bump baseline-browser-mapping from 2.9.14 to 2.9.15 (#454)
Bumps [baseline-browser-mapping](https://github.com/web-platform-dx/baseline-browser-mapping) from 2.9.14 to 2.9.15.
- [Release notes](https://github.com/web-platform-dx/baseline-browser-mapping/releases)
- [Commits](https://github.com/web-platform-dx/baseline-browser-mapping/compare/v2.9.14...v2.9.15)

---
updated-dependencies:
- dependency-name: baseline-browser-mapping
  dependency-version: 2.9.15
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-16 21:19:11 +01:00
dependabot[bot]
879a548345 build(deps-dev): Bump eslint-config-next from 16.1.2 to 16.1.3 (#455)
Bumps [eslint-config-next](https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next) from 16.1.2 to 16.1.3.
- [Release notes](https://github.com/vercel/next.js/releases)
- [Changelog](https://github.com/vercel/next.js/blob/canary/release.js)
- [Commits](https://github.com/vercel/next.js/commits/v16.1.3/packages/eslint-config-next)

---
updated-dependencies:
- dependency-name: eslint-config-next
  dependency-version: 16.1.3
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-16 21:19:01 +01:00
dependabot[bot]
64cd81d5ba build(deps): Bump @tanstack/react-query from 5.90.17 to 5.90.18 (#456)
Bumps [@tanstack/react-query](https://github.com/TanStack/query/tree/HEAD/packages/react-query) from 5.90.17 to 5.90.18.
- [Release notes](https://github.com/TanStack/query/releases)
- [Changelog](https://github.com/TanStack/query/blob/main/packages/react-query/CHANGELOG.md)
- [Commits](https://github.com/TanStack/query/commits/@tanstack/react-query@5.90.18/packages/react-query)

---
updated-dependencies:
- dependency-name: "@tanstack/react-query"
  dependency-version: 5.90.18
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-16 21:18:49 +01:00
Michel Roegl-Brunner
61e75949c8 Merge pull request #452 from community-scripts/dependabot/npm_and_yarn/prettier-3.8.0 2026-01-15 21:02:14 +01:00
Michel Roegl-Brunner
a5d24bfad7 Merge pull request #451 from community-scripts/dependabot/npm_and_yarn/eslint-config-next-16.1.2 2026-01-15 21:02:01 +01:00
Michel Roegl-Brunner
04595c0093 Merge pull request #450 from community-scripts/dependabot/npm_and_yarn/next-16.1.2 2026-01-15 21:01:51 +01:00
Michel Roegl-Brunner
06fdb4889d Merge pull request #449 from community-scripts/dependabot/npm_and_yarn/types/node-24.10.9 2026-01-15 21:01:33 +01:00
dependabot[bot]
38d4f9f918 build(deps-dev): Bump prettier from 3.7.4 to 3.8.0
Bumps [prettier](https://github.com/prettier/prettier) from 3.7.4 to 3.8.0.
- [Release notes](https://github.com/prettier/prettier/releases)
- [Changelog](https://github.com/prettier/prettier/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prettier/prettier/compare/3.7.4...3.8.0)

---
updated-dependencies:
- dependency-name: prettier
  dependency-version: 3.8.0
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-01-15 19:57:36 +00:00
dependabot[bot]
63dc7c6983 build(deps-dev): Bump eslint-config-next from 16.1.1 to 16.1.2
Bumps [eslint-config-next](https://github.com/vercel/next.js/tree/HEAD/packages/eslint-config-next) from 16.1.1 to 16.1.2.
- [Release notes](https://github.com/vercel/next.js/releases)
- [Changelog](https://github.com/vercel/next.js/blob/canary/release.js)
- [Commits](https://github.com/vercel/next.js/commits/v16.1.2/packages/eslint-config-next)

---
updated-dependencies:
- dependency-name: eslint-config-next
  dependency-version: 16.1.2
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-01-15 19:57:08 +00:00
dependabot[bot]
d57c6059fc build(deps): Bump next from 16.1.1 to 16.1.2
Bumps [next](https://github.com/vercel/next.js) from 16.1.1 to 16.1.2.
- [Release notes](https://github.com/vercel/next.js/releases)
- [Changelog](https://github.com/vercel/next.js/blob/canary/release.js)
- [Commits](https://github.com/vercel/next.js/compare/v16.1.1...v16.1.2)

---
updated-dependencies:
- dependency-name: next
  dependency-version: 16.1.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-01-15 19:56:46 +00:00
dependabot[bot]
eb152f9fae build(deps-dev): Bump @types/node from 24.10.8 to 24.10.9
Bumps [@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node) from 24.10.8 to 24.10.9.
- [Release notes](https://github.com/DefinitelyTyped/DefinitelyTyped/releases)
- [Commits](https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node)

---
updated-dependencies:
- dependency-name: "@types/node"
  dependency-version: 24.10.9
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-01-15 19:56:22 +00:00
Michel Roegl-Brunner
1a8e98fec0 Merge pull request #448 from community-scripts/dependabot/npm_and_yarn/tanstack/react-query-5.90.17 2026-01-15 14:24:10 +01:00
Michel Roegl-Brunner
83a1c7ea31 Merge pull request #446 from community-scripts/dependabot/npm_and_yarn/types/node-24.10.8 2026-01-15 14:23:57 +01:00
dependabot[bot]
79c63a7d3d build(deps): Bump @tanstack/react-query from 5.90.16 to 5.90.17
Bumps [@tanstack/react-query](https://github.com/TanStack/query/tree/HEAD/packages/react-query) from 5.90.16 to 5.90.17.
- [Release notes](https://github.com/TanStack/query/releases)
- [Changelog](https://github.com/TanStack/query/blob/main/packages/react-query/CHANGELOG.md)
- [Commits](https://github.com/TanStack/query/commits/@tanstack/react-query@5.90.17/packages/react-query)

---
updated-dependencies:
- dependency-name: "@tanstack/react-query"
  dependency-version: 5.90.17
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-01-14 19:57:01 +00:00
dependabot[bot]
753721eee0 build(deps-dev): Bump @types/node from 24.10.4 to 24.10.8
Bumps [@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node) from 24.10.4 to 24.10.8.
- [Release notes](https://github.com/DefinitelyTyped/DefinitelyTyped/releases)
- [Commits](https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node)

---
updated-dependencies:
- dependency-name: "@types/node"
  dependency-version: 24.10.8
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-01-13 20:01:03 +00:00
github-actions[bot]
09607296af chore: add VERSION v0.5.5 (#445)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-01-13 17:03:29 +00:00
CanbiZ (MickLesk)
c88040084a Improve server startup logging and update script fetching (#443)
Adds success and error logging to the Next.js app preparation process in server.js, including guidance for missing production builds. versionRouter now always fetches the latest update.sh from GitHub before running updates, logging the outcome and falling back to the local script if fetching fails.
2026-01-13 18:03:01 +01:00
CanbiZ (MickLesk)
2573eb7314 github: improve PR template (#444) 2026-01-13 18:02:41 +01:00
github-actions[bot]
414c356446 chore: add VERSION v0.5.4 (#441)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2026-01-13 16:05:32 +00:00
CanbiZ
c38ded7a39 Update dependencies in package-lock.json
Upgraded multiple dependencies and devDependencies to their latest versions, including @prisma, @tanstack/react-query, next, eslint, typescript-eslint, and others. This ensures compatibility, security, and access to new features and bug fixes.
2026-01-13 17:03:21 +01:00
CanbiZ
0cfed84cd0 update package.json 2026-01-13 16:56:58 +01:00
CanbiZ (MickLesk)
9611bc9bcf Improve Node.js upgrade and service recovery in update.sh (#440)
Enhances the Node.js upgrade process by handling both .list and .sources files, updating the apt cache, and adding error handling for download and install failures. Introduces a function to re-enable and start the systemd service on failure to prevent user lockout, and ensures this is called during rollback and upgrade errors. Also refines Node.js version checks and build environment setup.
2026-01-13 16:53:37 +01:00
44 changed files with 3635 additions and 1770 deletions

View File

@@ -18,7 +18,12 @@ ALLOWED_SCRIPT_PATHS="scripts/"
 WEBSOCKET_PORT="3001"
 # User settings
+# Optional tokens for private repos: GITHUB_TOKEN (GitHub), GITLAB_TOKEN (GitLab),
+# BITBUCKET_APP_PASSWORD or BITBUCKET_TOKEN (Bitbucket). REPO_URL and added repos
+# can be GitHub, GitLab, Bitbucket, or custom Git servers.
 GITHUB_TOKEN=
+GITLAB_TOKEN=
+BITBUCKET_APP_PASSWORD=
 SAVE_FILTER=false
 FILTERS=
 AUTH_USERNAME=

View File

@@ -4,7 +4,7 @@
 ## 🔗 Related PR / Issue
-Link: #
+Fixes: #
 ## ✅ Prerequisites (**X** in brackets)

View File

@@ -31,20 +31,24 @@ jobs:
           echo "Found draft version: ${{ steps.draft.outputs.tag_name }}"
-      - name: Create branch and commit VERSION
+      - name: Create branch and commit VERSION and package.json
         run: |
           branch="update-version-${{ steps.draft.outputs.tag_name }}"
           # Delete remote branch if exists
           git push origin --delete "$branch" || echo "No remote branch to delete"
           git fetch origin main
           git checkout -b "$branch" origin/main
-          # Write VERSION file and timestamp to ensure a diff
+          # Version without 'v' prefix (e.g. v1.2.3 -> 1.2.3)
           version="${{ steps.draft.outputs.tag_name }}"
-          echo "$version" | sed 's/^v//' > VERSION
-          git add VERSION
+          version_plain=$(echo "$version" | sed 's/^v//')
+          # Write VERSION file
+          echo "$version_plain" > VERSION
+          # Update package.json version
+          jq --arg v "$version_plain" '.version = $v' package.json > package.json.tmp && mv package.json.tmp package.json
+          git add VERSION package.json
          git config user.name "github-actions[bot]"
           git config user.email "github-actions[bot]@users.noreply.github.com"
-          git commit -m "chore: add VERSION $version" --allow-empty
+          git commit -m "chore: bump version to $version_plain (VERSION + package.json)" --allow-empty
       - name: Push changes
         run: |
@@ -57,8 +61,8 @@ jobs:
           pr_url=$(gh pr create \
             --base main \
             --head update-version-${{ steps.draft.outputs.tag_name }} \
-            --title "chore: add VERSION ${{ steps.draft.outputs.tag_name }}" \
-            --body "Adds VERSION file for release ${{ steps.draft.outputs.tag_name }}" \
+            --title "chore: bump version to ${{ steps.draft.outputs.tag_name }} (VERSION + package.json)" \
+            --body "Updates VERSION file and package.json version for release ${{ steps.draft.outputs.tag_name }}" \
             --label automated)
           pr_number=$(echo "$pr_url" | awk -F/ '{print $NF}')

View File

@@ -1 +1 @@
-0.5.3
+0.5.6

package-lock.json (generated)

File diff suppressed because it is too large.

View File

@@ -1,6 +1,6 @@
 {
   "name": "pve-scripts-local",
-  "version": "0.1.0",
+  "version": "0.5.6",
   "private": true,
   "type": "module",
   "scripts": {
@@ -25,35 +25,35 @@
     "typecheck": "tsc --noEmit"
   },
   "dependencies": {
-    "@prisma/adapter-better-sqlite3": "^7.1.0",
-    "@prisma/client": "^7.1.0",
+    "@prisma/adapter-better-sqlite3": "^7.3.0",
+    "@prisma/client": "^7.3.0",
     "@radix-ui/react-dropdown-menu": "^2.1.16",
     "@radix-ui/react-slot": "^1.2.4",
     "@t3-oss/env-nextjs": "^0.13.10",
     "@tailwindcss/typography": "^0.5.19",
-    "@tanstack/react-query": "^5.90.12",
-    "@trpc/client": "^11.8.0",
+    "@tanstack/react-query": "^5.90.20",
+    "@trpc/client": "^11.8.1",
     "@trpc/react-query": "^11.8.1",
-    "@trpc/server": "^11.8.0",
+    "@trpc/server": "^11.8.1",
     "@types/react-syntax-highlighter": "^15.5.13",
     "@types/ws": "^8.18.1",
-    "@xterm/addon-fit": "^0.10.0",
+    "@xterm/addon-fit": "^0.11.0",
     "@xterm/addon-web-links": "^0.12.0",
     "@xterm/xterm": "^6.0.0",
     "axios": "^1.13.2",
     "bcryptjs": "^3.0.3",
-    "better-sqlite3": "^12.5.0",
+    "better-sqlite3": "^12.6.2",
     "class-variance-authority": "^0.7.1",
     "clsx": "^2.1.1",
     "cron-validator": "^1.4.0",
     "dotenv": "^17.2.3",
     "jsonwebtoken": "^9.0.3",
     "lucide-react": "^0.562.0",
-    "next": "^16.0.10",
+    "next": ">=16.1.5",
     "node-cron": "^4.2.1",
-    "node-pty": "^1.0.0",
+    "node-pty": "^1.1.0",
     "react": "^19.2.3",
-    "react-dom": "^19.2.3",
+    "react-dom": "^19.2.4",
     "react-markdown": "^10.1.0",
     "react-syntax-highlighter": "^16.1.0",
     "refractor": "^5.0.0",
@@ -62,37 +62,38 @@
     "strip-ansi": "^7.1.2",
     "superjson": "^2.2.6",
     "tailwind-merge": "^3.4.0",
-    "ws": "^8.18.3",
-    "zod": "^4.1.13"
+    "ws": "^8.19.0",
+    "zod": "^4.3.5"
   },
   "devDependencies": {
+    "next": ">=16.1.5",
     "@tailwindcss/postcss": "^4.1.18",
     "@testing-library/jest-dom": "^6.9.1",
-    "@testing-library/react": "^16.3.1",
+    "@testing-library/react": "^16.3.2",
     "@testing-library/user-event": "^14.6.1",
     "@types/bcryptjs": "^3.0.0",
     "@types/better-sqlite3": "^7.6.13",
     "@types/jsonwebtoken": "^9.0.10",
-    "@types/node": "^24.10.4",
+    "@types/node": "^24.10.9",
     "@types/node-cron": "^3.0.11",
-    "@types/react": "^19.2.7",
+    "@types/react": "^19.2.8",
     "@types/react-dom": "^19.2.3",
     "@vitejs/plugin-react": "^5.1.2",
-    "@vitest/coverage-v8": "^4.0.16",
-    "@vitest/ui": "^4.0.14",
-    "baseline-browser-mapping": "^2.9.3",
-    "eslint": "^9.39.1",
-    "eslint-config-next": "^16.1.0",
-    "jsdom": "^27.3.0",
+    "@vitest/coverage-v8": "^4.0.17",
+    "@vitest/ui": "^4.0.17",
+    "baseline-browser-mapping": "^2.9.15",
+    "eslint": "^9.39.2",
+    "eslint-config-next": "^16.1.3",
+    "jsdom": "^27.4.0",
     "postcss": "^8.5.6",
-    "prettier": "^3.7.4",
+    "prettier": "^3.8.0",
     "prettier-plugin-tailwindcss": "^0.7.2",
-    "prisma": "^7.1.0",
+    "prisma": "^7.3.0",
     "tailwindcss": "^4.1.18",
     "tsx": "^4.21.0",
     "typescript": "^5.9.3",
-    "typescript-eslint": "^8.48.1",
-    "vitest": "^4.0.14"
+    "typescript-eslint": "^8.54.0",
+    "vitest": "^4.0.17"
   },
   "ct3aMetadata": {
     "initVersion": "7.39.3"
@@ -102,6 +103,7 @@
     "node": ">=24.0.0"
   },
   "overrides": {
-    "prismjs": "^1.30.0"
+    "prismjs": "^1.30.0",
+    "hono": ">=4.11.7"
   }
 }

View File

@@ -11,6 +11,9 @@ source "$(dirname "${BASH_SOURCE[0]}")/error-handler.func"
 load_functions
 catch_errors
+
+# Get LXC IP address (must be called INSIDE container, after network is up)
+get_lxc_ip
 # This function enables IPv6 if it's not disabled and sets verbose mode
 verb_ip6() {
   set_std_mode # Set STD mode based on VERBOSE
@@ -125,22 +128,13 @@ update_os() {
 # This function modifies the message of the day (motd) and SSH settings
 motd_ssh() {
   echo "export TERM='xterm-256color'" >>/root/.bashrc
-  IP=$(ip -4 addr show eth0 | awk '/inet / {print $2}' | cut -d/ -f1 | head -n 1)
-  if [ -f "/etc/os-release" ]; then
-    OS_NAME=$(grep ^NAME /etc/os-release | cut -d= -f2 | tr -d '"')
-    OS_VERSION=$(grep ^VERSION_ID /etc/os-release | cut -d= -f2 | tr -d '"')
-  else
-    OS_NAME="Alpine Linux"
-    OS_VERSION="Unknown"
-  fi
   PROFILE_FILE="/etc/profile.d/00_lxc-details.sh"
   echo "echo -e \"\"" >"$PROFILE_FILE"
   echo -e "echo -e \"${BOLD}${APPLICATION} LXC Container${CL}"\" >>"$PROFILE_FILE"
   echo -e "echo -e \"${TAB}${GATEWAY}${YW} Provided by: ${GN}community-scripts ORG ${YW}| GitHub: ${GN}https://github.com/community-scripts/ProxmoxVE${CL}\"" >>"$PROFILE_FILE"
   echo "echo \"\"" >>"$PROFILE_FILE"
-  echo -e "echo -e \"${TAB}${OS}${YW} OS: ${GN}${OS_NAME} - Version: ${OS_VERSION}${CL}\"" >>"$PROFILE_FILE"
+  echo -e "echo -e \"${TAB}${OS}${YW} OS: ${GN}\$(grep ^NAME /etc/os-release | cut -d= -f2 | tr -d '\"') - Version: \$(grep ^VERSION_ID /etc/os-release | cut -d= -f2 | tr -d '\"')${CL}\"" >>"$PROFILE_FILE"
   echo -e "echo -e \"${TAB}${HOSTNAME}${YW} Hostname: ${GN}\$(hostname)${CL}\"" >>"$PROFILE_FILE"
   echo -e "echo -e \"${TAB}${INFO}${YW} IP Address: ${GN}\$(ip -4 addr show eth0 | awk '/inet / {print \$2}' | cut -d/ -f1 | head -n 1)${CL}\"" >>"$PROFILE_FILE"

View File

@@ -1,507 +1,188 @@
 #!/bin/ash
 # shellcheck shell=ash
-# Expects existing msg_* functions and optional $STD from the framework.
-
-# ------------------------------
-# helpers
-# ------------------------------
-lower() { printf '%s' "$1" | tr '[:upper:]' '[:lower:]'; }
-has() { command -v "$1" >/dev/null 2>&1; }
-
-need_tool() {
-  # usage: need_tool curl jq unzip ...
-  # setup missing tools via apk
-  local missing=0 t
-  for t in "$@"; do
-    if ! has "$t"; then missing=1; fi
-  done
-  if [ "$missing" -eq 1 ]; then
-    msg_info "Installing tools: $*"
-    apk add --no-cache "$@" >/dev/null 2>&1 || {
-      msg_error "apk add failed for: $*"
-      return 1
-    }
-    msg_ok "Tools ready: $*"
-  fi
-}
-
-net_resolves() {
-  # better handling for missing getent on Alpine
-  # usage: net_resolves api.github.com
-  local host="$1"
-  ping -c1 -W1 "$host" >/dev/null 2>&1 || nslookup "$host" >/dev/null 2>&1
-}
-
-ensure_usr_local_bin_persist() {
-  local PROFILE_FILE="/etc/profile.d/10-localbin.sh"
-  if [ ! -f "$PROFILE_FILE" ]; then
-    echo 'case ":$PATH:" in *:/usr/local/bin:*) ;; *) export PATH="/usr/local/bin:$PATH";; esac' >"$PROFILE_FILE"
-    chmod +x "$PROFILE_FILE"
-  fi
-}
-
-download_with_progress() {
-  # $1 url, $2 dest
-  local url="$1" out="$2" cl
-  need_tool curl pv || return 1
-  cl=$(curl -fsSLI "$url" 2>/dev/null | awk 'tolower($0) ~ /^content-length:/ {print $2}' | tr -d '\r')
-  if [ -n "$cl" ]; then
-    curl -fsSL "$url" | pv -s "$cl" >"$out" || {
-      msg_error "Download failed: $url"
-      return 1
-    }
-  else
-    curl -fL# -o "$out" "$url" || {
-      msg_error "Download failed: $url"
-      return 1
-    }
-  fi
-}
-
-# ------------------------------
-# GitHub: check Release
-# ------------------------------
-check_for_gh_release() {
-  # app, repo, [pinned]
-  local app="$1" source="$2" pinned="${3:-}"
-  local app_lc
-  app_lc="$(lower "$app" | tr -d ' ')"
-  local current_file="$HOME/.${app_lc}"
-  local current="" release tag
-
-  msg_info "Check for update: $app"
-  net_resolves api.github.com || {
-    msg_error "DNS/network error: api.github.com"
-    return 1
-  }
-  need_tool curl jq || return 1
-
-  tag=$(curl -fsSL "https://api.github.com/repos/${source}/releases/latest" | jq -r '.tag_name // empty')
-  [ -z "$tag" ] && {
-    msg_error "Unable to fetch latest tag for $app"
-    return 1
-  }
-  release="${tag#v}"
-  [ -f "$current_file" ] && current="$(cat "$current_file")"
-
-  if [ -n "$pinned" ]; then
-    if [ "$pinned" = "$release" ]; then
-      msg_ok "$app pinned to v$pinned (no update)"
-      return 1
-    fi
-    if [ "$current" = "$pinned" ]; then
-      msg_ok "$app pinned v$pinned installed (upstream v$release)"
-      return 1
-    fi
-    msg_info "$app pinned v$pinned (upstream v$release) → update/downgrade"
-    CHECK_UPDATE_RELEASE="$pinned"
-    return 0
-  fi
-
-  if [ "$release" != "$current" ] || [ ! -f "$current_file" ]; then
-    CHECK_UPDATE_RELEASE="$release"
-    msg_info "New release available: v$release (current: v${current:-none})"
-    return 0
-  fi
-  msg_ok "$app is up to date (v$release)"
-  return 1
-}
-
-# ------------------------------
-# GitHub: get Release & deploy (Alpine)
-# modes: tarball | prebuild | singlefile
-# ------------------------------
-fetch_and_deploy_gh() {
-  # $1 app, $2 repo, [$3 mode], [$4 version], [$5 target], [$6 asset_pattern
-  local app="$1" repo="$2" mode="${3:-tarball}" version="${4:-latest}" target="${5:-/opt/$1}" pattern="${6:-}"
-  local app_lc
-  app_lc="$(lower "$app" | tr -d ' ')"
-  local vfile="$HOME/.${app_lc}"
-  local json url filename tmpd unpack
-
-  net_resolves api.github.com || {
-    msg_error "DNS/network error"
-    return 1
-  }
-  need_tool curl jq tar || return 1
-  [ "$mode" = "prebuild" ] || [ "$mode" = "singlefile" ] && need_tool unzip >/dev/null 2>&1 || true
-
-  tmpd="$(mktemp -d)" || return 1
-  mkdir -p "$target"
-
-  # Release JSON
-  if [ "$version" = "latest" ]; then
-    json="$(curl -fsSL "https://api.github.com/repos/$repo/releases/latest")" || {
-      msg_error "GitHub API failed"
-      rm -rf "$tmpd"
-      return 1
-    }
-  else
-    json="$(curl -fsSL "https://api.github.com/repos/$repo/releases/tags/$version")" || {
-      msg_error "GitHub API failed"
-      rm -rf "$tmpd"
-      return 1
-    }
-  fi
-
-  # correct Version
-  version="$(printf '%s' "$json" | jq -r '.tag_name // empty')"
-  version="${version#v}"
-  [ -z "$version" ] && {
-    msg_error "No tag in release json"
-    rm -rf "$tmpd"
-    return 1
-  }
-
-  case "$mode" in
-  tarball | source)
-    url="$(printf '%s' "$json" | jq -r '.tarball_url // empty')"
-    [ -z "$url" ] && url="https://github.com/$repo/archive/refs/tags/v$version.tar.gz"
-    filename="${app_lc}-${version}.tar.gz"
-    download_with_progress "$url" "$tmpd/$filename" || {
-      rm -rf "$tmpd"
-      return 1
-    }
-    tar -xzf "$tmpd/$filename" -C "$tmpd" || {
-      msg_error "tar extract failed"
-      rm -rf "$tmpd"
-      return 1
-    }
-    unpack="$(find "$tmpd" -mindepth 1 -maxdepth 1 -type d | head -n1)"
-    # copy content of unpack to target
-    (cd "$unpack" && tar -cf - .) | (cd "$target" && tar -xf -) || {
-      msg_error "copy failed"
-      rm -rf "$tmpd"
-      return 1
-    }
-    ;;
-  prebuild)
-    [ -n "$pattern" ] || {
-      msg_error "prebuild requires asset pattern"
-      rm -rf "$tmpd"
-      return 1
-    }
-    url="$(printf '%s' "$json" | jq -r '.assets[].browser_download_url' | awk -v p="$pattern" '
-      BEGIN{IGNORECASE=1}
-      $0 ~ p {print; exit}
-    ')"
-    [ -z "$url" ] && {
-      msg_error "asset not found for pattern: $pattern"
-      rm -rf "$tmpd"
-      return 1
-    }
-    filename="${url##*/}"
-    download_with_progress "$url" "$tmpd/$filename" || {
-      rm -rf "$tmpd"
-      return 1
-    }
-    # unpack archive (Zip or tarball)
-    case "$filename" in
-    *.zip)
-      need_tool unzip || {
-        rm -rf "$tmpd"
-        return 1
-      }
-      mkdir -p "$tmpd/unp"
-      unzip -q "$tmpd/$filename" -d "$tmpd/unp"
-      ;;
-    *.tar.gz | *.tgz | *.tar.xz | *.tar.zst | *.tar.bz2)
-      mkdir -p "$tmpd/unp"
-      tar -xf "$tmpd/$filename" -C "$tmpd/unp"
-      ;;
-    *)
-      msg_error "unsupported archive: $filename"
-      rm -rf "$tmpd"
-      return 1
-      ;;
-    esac
-    # top-level folder strippen
-    if [ "$(find "$tmpd/unp" -mindepth 1 -maxdepth 1 -type d | wc -l)" -eq 1 ] && [ -z "$(find "$tmpd/unp" -mindepth 1 -maxdepth 1 -type f | head -n1)" ]; then
-      unpack="$(find "$tmpd/unp" -mindepth 1 -maxdepth 1 -type d)"
-      (cd "$unpack" && tar -cf - .) | (cd "$target" && tar -xf -) || {
-        msg_error "copy failed"
-        rm -rf "$tmpd"
-        return 1
-      }
-    else
-      (cd "$tmpd/unp" && tar -cf - .) | (cd "$target" && tar -xf -) || {
-        msg_error "copy failed"
-        rm -rf "$tmpd"
-        return 1
-      }
-    fi
-    ;;
-  singlefile)
-    [ -n "$pattern" ] || {
-      msg_error "singlefile requires asset pattern"
-      rm -rf "$tmpd"
-      return 1
-    }
-    url="$(printf '%s' "$json" | jq -r '.assets[].browser_download_url' | awk -v p="$pattern" '
-      BEGIN{IGNORECASE=1}
-      $0 ~ p {print; exit}
-    ')"
-    [ -z "$url" ] && {
-      msg_error "asset not found for pattern: $pattern"
-      rm -rf "$tmpd"
-      return 1
-    }
-    filename="${url##*/}"
-    download_with_progress "$url" "$target/$app" || {
-      rm -rf "$tmpd"
-      return 1
-    }
-    chmod +x "$target/$app"
-    ;;
-  *)
-    msg_error "Unknown mode: $mode"
-    rm -rf "$tmpd"
-    return 1
-    ;;
-  esac
-
-  echo "$version" >"$vfile"
-  ensure_usr_local_bin_persist
-  rm -rf "$tmpd"
-  msg_ok "Deployed $app ($version) → $target"
-}
-
-# ------------------------------
-# yq (mikefarah) Alpine
-# ------------------------------
-setup_yq() {
-  # prefer apk, unless FORCE_GH=1
-  if [ "${FORCE_GH:-0}" != "1" ] && apk info -e yq >/dev/null 2>&1; then
-    msg_info "Updating yq via apk"
-    apk add --no-cache --upgrade yq >/dev/null 2>&1 || true
-    msg_ok "yq ready ($(yq --version 2>/dev/null))"
-    return 0
-  fi
-  need_tool curl || return 1
-  local arch bin url tmp
-  case "$(uname -m)" in
-  x86_64) arch="amd64" ;;
-  aarch64) arch="arm64" ;;
-  *)
-    msg_error "Unsupported arch for yq: $(uname -m)"
-    return 1
-    ;;
-  esac
-  url="https://github.com/mikefarah/yq/releases/latest/download/yq_linux_${arch}"
-  tmp="$(mktemp)"
-  download_with_progress "$url" "$tmp" || return 1
-  install -m 0755 "$tmp" /usr/local/bin/yq
-  rm -f "$tmp"
-  msg_ok "Setup yq ($(yq --version 2>/dev/null))"
-}
-
-# ------------------------------
-# Adminer Alpine
-# ------------------------------
-setup_adminer() {
-  need_tool curl || return 1
-  msg_info "Setup Adminer (Alpine)"
-  mkdir -p /var/www/localhost/htdocs/adminer
-  curl -fsSL https://github.com/vrana/adminer/releases/latest/download/adminer.php \
-    -o /var/www/localhost/htdocs/adminer/index.php || {
-    msg_error "Adminer download failed"
-    return 1
-  }
-  msg_ok "Adminer at /adminer (served by your webserver)"
-}
-
-# ------------------------------
-# uv Alpine (musl tarball)
-# optional: PYTHON_VERSION="3.12"
-# ------------------------------
-setup_uv() {
-  need_tool curl tar || return 1
-  local UV_BIN="/usr/local/bin/uv"
-  local arch tarball url tmpd ver installed
-
-  case "$(uname -m)" in
-  x86_64) arch="x86_64-unknown-linux-musl" ;;
-  aarch64) arch="aarch64-unknown-linux-musl" ;;
-  *)
-    msg_error "Unsupported arch for uv: $(uname -m)"
-    return 1
-    ;;
-  esac
-
-  ver="$(curl -fsSL https://api.github.com/repos/astral-sh/uv/releases/latest | jq -r '.tag_name' 2>/dev/null)"
-  ver="${ver#v}"
-  [ -z "$ver" ] && {
-    msg_error "uv: cannot determine latest version"
-    return 1
-  }
-
-  if has "$UV_BIN"; then
-    installed="$($UV_BIN -V 2>/dev/null | awk '{print $2}')"
-    [ "$installed" = "$ver" ] && {
-      msg_ok "uv $ver already installed"
-      return 0
-    }
-    msg_info "Updating uv $installed → $ver"
-  else
-    msg_info "Setup uv $ver"
-  fi
-
-  tmpd="$(mktemp -d)" || return 1
-  tarball="uv-${arch}.tar.gz"
-  url="https://github.com/astral-sh/uv/releases/download/v${ver}/${tarball}"
-  download_with_progress "$url" "$tmpd/uv.tar.gz" || {
-    rm -rf "$tmpd"
-    return 1
-  }
-  tar -xzf "$tmpd/uv.tar.gz" -C "$tmpd" || {
-    msg_error "uv: extract failed"
-    rm -rf "$tmpd"
-    return 1
-  }
-  # tar contains ./uv
-  if [ -x "$tmpd/uv" ]; then
-    install -m 0755 "$tmpd/uv" "$UV_BIN"
-  else
-    # fallback: in subfolder
-    install -m 0755 "$tmpd"/*/uv "$UV_BIN" 2>/dev/null || {
-      msg_error "uv binary not found in tar"
-      rm -rf "$tmpd"
-      return 1
-    }
-  fi
-  rm -rf "$tmpd"
-  ensure_usr_local_bin_persist
-  msg_ok "Setup uv $ver"
-
-  if [ -n "${PYTHON_VERSION:-}" ]; then
-    local match
-    match="$(uv python list --only-downloads 2>/dev/null | awk -v maj="$PYTHON_VERSION" '
-      $0 ~ "^cpython-"maj"\\." { print $0 }' | awk -F- '{print $2}' | sort -V | tail -n1)"
-    [ -z "$match" ] && {
-      msg_error "No matching Python for $PYTHON_VERSION"
-      return 1
-    }
-    if ! uv python list | grep -q "cpython-${match}-linux"; then
-      msg_info "Installing Python $match via uv"
-      uv python install "$match" || {
-        msg_error "uv python install failed"
-        return 1
-      }
-      msg_ok "Python $match installed (uv)"
-    fi
-  fi
-}
-
-# ------------------------------
-# Java Alpine (OpenJDK)
-# JAVA_VERSION: 17|21 (Default 21)
-# ------------------------------
-setup_java() {
-  local JAVA_VERSION="${JAVA_VERSION:-21}" pkg
-  case "$JAVA_VERSION" in
-  17) pkg="openjdk17-jdk" ;;
-  21 | *) pkg="openjdk21-jdk" ;;
-  esac
-  msg_info "Setup Java (OpenJDK $JAVA_VERSION)"
-  apk add --no-cache "$pkg" >/dev/null 2>&1 || {
-    msg_error "apk add $pkg failed"
-    return 1
-  }
-  # set JAVA_HOME
-  local prof="/etc/profile.d/20-java.sh"
-  if [ ! -f "$prof" ]; then
-    echo 'export JAVA_HOME=$(dirname $(dirname $(readlink -f $(command -v java))))' >"$prof"
-    echo 'case ":$PATH:" in *:$JAVA_HOME/bin:*) ;; *) export PATH="$JAVA_HOME/bin:$PATH";; esac' >>"$prof"
-    chmod +x "$prof"
-  fi
-  msg_ok "Java ready: $(java -version 2>&1 | head -n1)"
-}
-
-# ------------------------------
-# Go Alpine (apk prefers, else tarball)
-# ------------------------------
-setup_go() {
-  if [ -z "${GO_VERSION:-}" ]; then
-    msg_info "Setup Go (apk)"
-    apk add --no-cache go >/dev/null 2>&1 || {
-      msg_error "apk add go failed"
-      return 1
-    }
-    msg_ok "Go ready: $(go version 2>/dev/null)"
-    return 0
-  fi
-  need_tool curl tar || return 1
-  local ARCH TARBALL URL TMP
-  case "$(uname -m)" in
-  x86_64) ARCH="amd64" ;;
-  aarch64) ARCH="arm64" ;;
-  *)
-    msg_error "Unsupported arch for Go: $(uname -m)"
-    return 1
-    ;;
-  esac
-  TARBALL="go${GO_VERSION}.linux-${ARCH}.tar.gz"
-  URL="https://go.dev/dl/${TARBALL}"
-  msg_info "Setup Go $GO_VERSION (tarball)"
-  TMP="$(mktemp)"
-  download_with_progress "$URL" "$TMP" || return 1
-  rm -rf /usr/local/go
-  tar -C /usr/local -xzf "$TMP" || {
-    msg_error "extract go failed"
-    rm -f "$TMP"
-    return 1
-  }
-  rm -f "$TMP"
-  ln -sf /usr/local/go/bin/go /usr/local/bin/go
-  ln -sf /usr/local/go/bin/gofmt /usr/local/bin/gofmt
-  ensure_usr_local_bin_persist
-  msg_ok "Go ready: $(go version 2>/dev/null)"
-}
-
-# ------------------------------
-# Composer Alpine
-# uses php83-cli + openssl + phar
-# ------------------------------
-setup_composer() {
-  local COMPOSER_BIN="/usr/local/bin/composer"
-  if ! has php; then
-    # prefers php83
-    msg_info "Installing PHP CLI for Composer"
-    apk add --no-cache php83-cli php83-openssl php83-phar php83-iconv >/dev/null 2>&1 || {
-      # Fallback to generic php if 83 not available
-      apk add --no-cache php-cli php-openssl php-phar php-iconv >/dev/null 2>&1 || {
-        msg_error "Failed to install php-cli for composer"
-        return 1
-      }
-    }
-    msg_ok "PHP CLI ready: $(php -v | head -n1)"
-  fi
-  if [ -x "$COMPOSER_BIN" ]; then
-    msg_info "Updating Composer"
-  else
-    msg_info "Setup Composer"
-  fi
-  need_tool curl || return 1
-  curl -fsSL https://getcomposer.org/installer -o /tmp/composer-setup.php || {
-    msg_error "composer installer download failed"
-    return 1
-  }
-  php /tmp/composer-setup.php --install-dir=/usr/local/bin --filename=composer >/dev/null 2>&1 || {
-    msg_error "composer install failed"
-    return 1
-  }
-  rm -f /tmp/composer-setup.php
-  ensure_usr_local_bin_persist
-  msg_ok "Composer ready: $(composer --version 2>/dev/null)"
-}
+# Copyright (c) 2021-2026 community-scripts ORG
+# Author: MickLesk
+# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
+
+if ! command -v curl >/dev/null 2>&1; then
+  apk update && apk add curl >/dev/null 2>&1
+fi
+source "$(dirname "${BASH_SOURCE[0]}")/core.func"
+source "$(dirname "${BASH_SOURCE[0]}")/error-handler.func"
+load_functions
+catch_errors
+
+# Get LXC IP address (must be called INSIDE container, after network is up)
+get_lxc_ip
+
+# This function enables IPv6 if it's not disabled and sets verbose mode
+verb_ip6() {
+  set_std_mode # Set STD mode based on VERBOSE
+  if [ "${IPV6_METHOD:-}" = "disable" ]; then
+    msg_info "Disabling IPv6 (this may affect some services)"
+    $STD sysctl -w net.ipv6.conf.all.disable_ipv6=1
+    $STD sysctl -w net.ipv6.conf.default.disable_ipv6=1
+    $STD sysctl -w net.ipv6.conf.lo.disable_ipv6=1
+    mkdir -p /etc/sysctl.d
+    $STD tee /etc/sysctl.d/99-disable-ipv6.conf >/dev/null <<EOF
+net.ipv6.conf.all.disable_ipv6 = 1
+net.ipv6.conf.default.disable_ipv6 = 1
+net.ipv6.conf.lo.disable_ipv6 = 1
+EOF
+    $STD rc-update add sysctl default
+    msg_ok "Disabled IPv6"
+  fi
+}
+
+set -Eeuo pipefail
+trap 'error_handler $? $LINENO "$BASH_COMMAND"' ERR
+trap on_exit EXIT
+trap on_interrupt INT
+trap on_terminate TERM
+
+error_handler() {
+  local exit_code="$1"
+  local line_number="$2"
+  local command="$3"
+  if [[ "$exit_code" -eq 0 ]]; then
+    return 0
+  fi
+  printf "\e[?25h"
+  echo -e "\n${RD}[ERROR]${CL} in line ${RD}$line_number${CL}: exit code ${RD}$exit_code${CL}: while executing command ${YW}$command${CL}\n"
+  exit "$exit_code"
+}
+
+on_exit() {
+  local exit_code="$?"
+  [[ -n "${lockfile:-}" && -e "$lockfile" ]] && rm -f "$lockfile"
+  exit "$exit_code"
+}
+
+on_interrupt() {
+  echo -e "\n${RD}Interrupted by user (SIGINT)${CL}"
+  exit 130
+}
+
+on_terminate() {
+  echo -e "\n${RD}Terminated by signal (SIGTERM)${CL}"
+  exit 143
+}
+
+# This function sets up the Container OS by generating the locale, setting the timezone, and checking the network connection
+setting_up_container() {
+  msg_info "Setting up Container OS"
+  while [ $i -gt 0 ]; do
+    if [ "$(ip addr show | grep 'inet ' | grep -v '127.0.0.1' | awk '{print $2}' | cut -d'/' -f1)" != "" ]; then
+      break
+    fi
+    echo 1>&2 -en "${CROSS}${RD} No Network! "
+    sleep $RETRY_EVERY
+    i=$((i - 1))
+  done
+  if [ "$(ip addr show | grep 'inet ' | grep -v '127.0.0.1' | awk '{print $2}' | cut -d'/' -f1)" = "" ]; then
+    echo 1>&2 -e "\n${CROSS}${RD} No Network After $RETRY_NUM Tries${CL}"
+    echo -e "${NETWORK}Check Network Settings"
+    exit 1
+  fi
+  msg_ok "Set up Container OS"
+  msg_ok "Network Connected: ${BL}$(ip addr show | grep 'inet ' | awk '{print $2}' | cut -d'/' -f1 | tail -n1)${CL}"
+}
+
+# This function checks the network connection by pinging a known IP address and prompts the user to continue if the internet is not connected
+network_check() {
+  set +e
+  trap - ERR
+  if ping -c 1 -W 1 1.1.1.1 &>/dev/null || ping -c 1 -W 1 8.8.8.8 &>/dev/null || ping -c 1 -W 1 9.9.9.9 &>/dev/null; then
+    ipv4_status="${GN}✔${CL} IPv4"
+  else
+    ipv4_status="${RD}✖${CL} IPv4"
+    read -r -p "Internet NOT connected. Continue anyway? <y/N> " prompt
+    if [[ "${prompt,,}" =~ ^(y|yes)$ ]]; then
+      echo -e "${INFO}${RD}Expect Issues Without Internet${CL}"
+    else
+      echo -e "${NETWORK}Check Network Settings"
+      exit 1
+    fi
+  fi
+  RESOLVEDIP=$(getent hosts github.com | awk '{ print $1 }')
+  if [[ -z "$RESOLVEDIP" ]]; then
+    msg_error "Internet: ${ipv4_status} DNS Failed"
+  else
+    msg_ok "Internet: ${ipv4_status} DNS: ${BL}${RESOLVEDIP}${CL}"
+  fi
+  set -e
+  trap 'error_handler $LINENO "$BASH_COMMAND"' ERR
+}
+
+# This function updates the Container OS by running apt-get update and upgrade
+update_os() {
+  msg_info "Updating Container OS"
+  $STD apk -U upgrade
+  source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/tools.func)
+  msg_ok "Updated Container OS"
+}
+
+# This function modifies the message of the day (motd) and SSH settings
+motd_ssh() {
+  echo "export TERM='xterm-256color'" >>/root/.bashrc
+  PROFILE_FILE="/etc/profile.d/00_lxc-details.sh"
+  echo "echo -e \"\"" >"$PROFILE_FILE"
+  echo -e "echo -e \"${BOLD}${APPLICATION} LXC Container${CL}"\" >>"$PROFILE_FILE"
+  echo -e "echo -e \"${TAB}${GATEWAY}${YW} Provided by: ${GN}community-scripts ORG ${YW}| GitHub: ${GN}https://github.com/community-scripts/ProxmoxVE${CL}\"" >>"$PROFILE_FILE"
+  echo "echo \"\"" >>"$PROFILE_FILE"
+  echo -e "echo -e \"${TAB}${OS}${YW} OS: ${GN}\$(grep ^NAME /etc/os-release | cut -d= -f2 | tr -d '\"') - Version: \$(grep ^VERSION_ID /etc/os-release | cut -d= -f2 | tr -d '\"')${CL}\"" >>"$PROFILE_FILE"
+  echo -e "echo -e \"${TAB}${HOSTNAME}${YW} Hostname: ${GN}\$(hostname)${CL}\"" >>"$PROFILE_FILE"
+  echo -e "echo -e \"${TAB}${INFO}${YW} IP Address: ${GN}\$(ip -4 addr show eth0 | awk '/inet / {print \$2}' | cut -d/ -f1 | head -n 1)${CL}\"" >>"$PROFILE_FILE"
+
+  # Configure SSH if enabled
+  if [[ "${SSH_ROOT}" == "yes" ]]; then
+    # Enable sshd service
+    $STD rc-update add sshd
+    # Allow root login via SSH
+    sed -i "s/#PermitRootLogin prohibit-password/PermitRootLogin yes/g" /etc/ssh/sshd_config
+    # Start the sshd service
+    $STD /etc/init.d/sshd start
+  fi
+}
+
+# Validate Timezone for some LXC's
+validate_tz() {
+  [[ -f "/usr/share/zoneinfo/$1" ]]
+}
+
+# This function customizes the container and enables passwordless login for the root user
+customize() {
+  if [[ "$PASSWORD" == "" ]]; then
+    msg_info "Customizing Container"
+    passwd -d root >/dev/null 2>&1
+    # Ensure agetty is available
+    apk add --no-cache --force-broken-world util-linux >/dev/null 2>&1
+    # Create persistent autologin boot script
+    mkdir -p /etc/local.d
+    cat <<'EOF' >/etc/local.d/autologin.start
+#!/bin/sh
+sed -i 's|^tty1::respawn:.*|tty1::respawn:/sbin/agetty --autologin root --noclear tty1 38400 linux|' /etc/inittab
+kill -HUP 1
+EOF
+    touch /root/.hushlogin
+    chmod +x /etc/local.d/autologin.start
+    rc-update add local >/dev/null 2>&1
+    # Apply autologin immediately for current session
+    /etc/local.d/autologin.start
+    msg_ok "Customized Container"
+  fi
+  echo "bash -c \"\$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/ct/${app}.sh)\"" >/usr/bin/update
+  chmod +x /usr/bin/update
+}

File diff suppressed because it is too large.

View File

@@ -127,6 +127,34 @@ icons() {
HOURGLASS="${TAB}⏳${TAB}" HOURGLASS="${TAB}⏳${TAB}"
} }
# ------------------------------------------------------------------------------
# ensure_profile_loaded()
#
# - Sources /etc/profile.d/*.sh scripts if not already loaded
# - Fixes PATH issues when running via pct enter/exec (non-login shells)
# - Safe to call multiple times (uses guard variable)
# - Should be called in update_script() or any script running inside LXC
# ------------------------------------------------------------------------------
ensure_profile_loaded() {
# Skip if already loaded or running on Proxmox host
[[ -n "${_PROFILE_LOADED:-}" ]] && return
command -v pveversion &>/dev/null && return
# Source all profile.d scripts to ensure PATH is complete
if [[ -d /etc/profile.d ]]; then
for script in /etc/profile.d/*.sh; do
[[ -r "$script" ]] && source "$script"
done
fi
# Also ensure /usr/local/bin is in PATH (common install location)
if [[ ":$PATH:" != *":/usr/local/bin:"* ]]; then
export PATH="/usr/local/bin:$PATH"
fi
export _PROFILE_LOADED=1
}
# ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------
# default_vars() # default_vars()
# #
@@ -787,11 +815,9 @@ is_verbose_mode() {
# ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------
# cleanup_lxc() # cleanup_lxc()
# #
# - Comprehensive cleanup of package managers, caches, and logs # - Cleans package manager and language caches (safe for installs AND updates)
# - Supports Alpine (apk), Debian/Ubuntu (apt), and language package managers # - Supports Alpine (apk), Debian/Ubuntu (apt), Python, Node.js, Go, Rust, Ruby, PHP
# - Cleans: Python (pip/uv), Node.js (npm/yarn/pnpm), Go, Rust, Ruby, PHP # - Uses fallback error handling to prevent cleanup failures from breaking installs
# - Truncates log files and vacuums systemd journal
# - Run at end of container creation to minimize disk usage
# ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------
cleanup_lxc() { cleanup_lxc() {
msg_info "Cleaning up" msg_info "Cleaning up"
@@ -800,32 +826,52 @@ cleanup_lxc() {
$STD apk cache clean || true
rm -rf /var/cache/apk/*
else
$STD apt -y autoremove 2>/dev/null || msg_warn "apt autoremove failed (non-critical)"
$STD apt -y autoclean 2>/dev/null || msg_warn "apt autoclean failed (non-critical)"
$STD apt -y clean 2>/dev/null || msg_warn "apt clean failed (non-critical)"
fi
# Clear temp artifacts (keep sockets/FIFOs; ignore errors)
find /tmp /var/tmp -type f -name 'tmp*' -delete 2>/dev/null || true
find /tmp /var/tmp -type f -name 'tempfile*' -delete 2>/dev/null || true
# Python
if command -v pip &>/dev/null; then
rm -rf /root/.cache/pip 2>/dev/null || true
fi
if command -v uv &>/dev/null; then
rm -rf /root/.cache/uv 2>/dev/null || true
fi
# Node.js
if command -v npm &>/dev/null; then
rm -rf /root/.npm/_cacache /root/.npm/_logs 2>/dev/null || true
fi
if command -v yarn &>/dev/null; then
rm -rf /root/.cache/yarn /root/.yarn/cache 2>/dev/null || true
fi
if command -v pnpm &>/dev/null; then
pnpm store prune &>/dev/null || true
fi
# Go (only build cache, not modules)
if command -v go &>/dev/null; then
$STD go clean -cache 2>/dev/null || true
fi
# Rust (only registry cache, not build artifacts)
if command -v cargo &>/dev/null; then
rm -rf /root/.cargo/registry/cache /root/.cargo/.package-cache 2>/dev/null || true
fi
# Ruby
if command -v gem &>/dev/null; then
rm -rf /root/.gem/cache 2>/dev/null || true
fi
# PHP
if command -v composer &>/dev/null; then
rm -rf /root/.composer/cache 2>/dev/null || true
fi
msg_ok "Cleaned"
}
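The per-tool guard above extends naturally to other caches; a sketch for one more tool (the deno cache path is an assumption, not part of the original):
# Same pattern: probe the tool, remove only its cache, never fail the install
if command -v deno &>/dev/null; then
  rm -rf /root/.cache/deno 2>/dev/null || true
fi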
@@ -878,8 +924,95 @@ check_or_create_swap() {
fi
}
# ------------------------------------------------------------------------------
# Loads LOCAL_IP from persistent store or detects if missing.
#
# Description:
# - Loads from /run/local-ip.env or performs runtime lookup
# ------------------------------------------------------------------------------
function get_lxc_ip() {
local IP_FILE="/run/local-ip.env"
if [[ -f "$IP_FILE" ]]; then
# shellcheck disable=SC1090
source "$IP_FILE"
fi
if [[ -z "${LOCAL_IP:-}" ]]; then
get_current_ip() {
local ip
# Try direct interface lookup for eth0 FIRST (most reliable for LXC) - IPv4
ip=$(ip -4 addr show eth0 2>/dev/null | awk '/inet / {print $2}' | cut -d/ -f1 | head -n1)
if [[ -n "$ip" && "$ip" =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
echo "$ip"
return 0
fi
# Fallback: Try hostname -I (returns IPv4 first if available)
if command -v hostname >/dev/null 2>&1; then
ip=$(hostname -I 2>/dev/null | awk '{print $1}')
if [[ -n "$ip" && "$ip" =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
echo "$ip"
return 0
fi
fi
# Try routing table with IPv4 targets
local ipv4_targets=("8.8.8.8" "1.1.1.1" "default")
for target in "${ipv4_targets[@]}"; do
if [[ "$target" == "default" ]]; then
ip=$(ip route get 1 2>/dev/null | awk '{for(i=1;i<=NF;i++) if ($i=="src") print $(i+1)}')
else
ip=$(ip route get "$target" 2>/dev/null | awk '{for(i=1;i<=NF;i++) if ($i=="src") print $(i+1)}')
fi
if [[ -n "$ip" ]]; then
echo "$ip"
return 0
fi
done
# IPv6 fallback: Try direct interface lookup for eth0
ip=$(ip -6 addr show eth0 scope global 2>/dev/null | awk '/inet6 / {print $2}' | cut -d/ -f1 | head -n1)
if [[ -n "$ip" && "$ip" =~ : ]]; then
echo "$ip"
return 0
fi
# IPv6 fallback: Try hostname -I for IPv6
if command -v hostname >/dev/null 2>&1; then
ip=$(hostname -I 2>/dev/null | tr ' ' '\n' | grep -E ':' | head -n1)
if [[ -n "$ip" && "$ip" =~ : ]]; then
echo "$ip"
return 0
fi
fi
# IPv6 fallback: Use routing table with IPv6 targets
local ipv6_targets=("2001:4860:4860::8888" "2606:4700:4700::1111")
for target in "${ipv6_targets[@]}"; do
ip=$(ip -6 route get "$target" 2>/dev/null | awk '{for(i=1;i<=NF;i++) if ($i=="src") print $(i+1)}')
if [[ -n "$ip" && "$ip" =~ : ]]; then
echo "$ip"
return 0
fi
done
return 1
}
LOCAL_IP="$(get_current_ip || true)"
if [[ -z "$LOCAL_IP" ]]; then
msg_error "Could not determine LOCAL_IP"
return 1
fi
fi
export LOCAL_IP
}
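A short usage sketch: persisting the result to /run/local-ip.env lets later scripts skip detection entirely.
get_lxc_ip || exit 1
echo "LOCAL_IP=${LOCAL_IP}" >/run/local-ip.env   # next call sources this and returns early
echo "Detected container IP: ${LOCAL_IP}"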
# ==============================================================================
# SIGNAL TRAPS
# ==============================================================================
trap 'stop_spinner' EXIT INT TERM

View File

@@ -37,6 +37,9 @@ source "$(dirname "${BASH_SOURCE[0]}")/error-handler.func"
load_functions
catch_errors
# Get LXC IP address (must be called INSIDE container, after network is up)
get_lxc_ip
# ==============================================================================
# SECTION 2: NETWORK & CONNECTIVITY
# ==============================================================================
@@ -76,6 +79,13 @@ EOF
# ------------------------------------------------------------------------------
setting_up_container() {
msg_info "Setting up Container OS"
# Fix Debian 13 LXC template bug where / is owned by nobody
# Only attempt in privileged containers (unprivileged cannot chown /)
if [[ "$(stat -c '%U' /)" != "root" ]]; then
(chown root:root / 2>/dev/null) || true
fi
for ((i = RETRY_NUM; i > 0; i--)); do
if [ "$(hostname -I)" != "" ]; then
break

View File

@@ -184,7 +184,10 @@ install_packages_with_retry() {
local retry=0
while [[ $retry -le $max_retries ]]; do
if DEBIAN_FRONTEND=noninteractive $STD apt install -y \
-o Dpkg::Options::="--force-confdef" \
-o Dpkg::Options::="--force-confold" \
"${packages[@]}" 2>/dev/null; then
return 0
fi
@@ -211,7 +214,10 @@ upgrade_packages_with_retry() {
local retry=0
while [[ $retry -le $max_retries ]]; do
if DEBIAN_FRONTEND=noninteractive $STD apt install --only-upgrade -y \
-o Dpkg::Options::="--force-confdef" \
-o Dpkg::Options::="--force-confold" \
"${packages[@]}" 2>/dev/null; then
return 0
fi
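For reference, --force-confdef prefers the package maintainer's default for changed conffiles and --force-confold keeps the local file otherwise, which is what keeps these retries prompt-free. A standalone sketch of the same invocation (package names are placeholders):
DEBIAN_FRONTEND=noninteractive apt install -y \
  -o Dpkg::Options::="--force-confdef" \
  -o Dpkg::Options::="--force-confold" \
  curl ca-certificates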
@@ -568,7 +574,8 @@ EOF
msg_error "Failed to download PHP keyring" msg_error "Failed to download PHP keyring"
return 1 return 1
} }
dpkg -i /tmp/debsuryorg-archive-keyring.deb >/dev/null 2>&1 || { # Don't use /dev/null redirection for dpkg as it may use background processes
dpkg -i /tmp/debsuryorg-archive-keyring.deb >>"$(get_active_logfile)" 2>&1 || {
msg_error "Failed to install PHP keyring" msg_error "Failed to install PHP keyring"
rm -f /tmp/debsuryorg-archive-keyring.deb rm -f /tmp/debsuryorg-archive-keyring.deb
return 1 return 1
@@ -1838,8 +1845,9 @@ function fetch_and_deploy_gh_release() {
}
chmod 644 "$tmpdir/$filename"
# SYSTEMD_OFFLINE=1 prevents systemd-tmpfiles failures in unprivileged LXC (Debian 13+/systemd 257+)
SYSTEMD_OFFLINE=1 $STD apt install -y "$tmpdir/$filename" || {
SYSTEMD_OFFLINE=1 $STD dpkg -i "$tmpdir/$filename" || {
msg_error "Both apt and dpkg installation failed"
rm -rf "$tmpdir"
return 1
@@ -1894,7 +1902,7 @@ function fetch_and_deploy_gh_release() {
rm -rf "$tmpdir" "$unpack_tmp" rm -rf "$tmpdir" "$unpack_tmp"
return 1 return 1
} }
elif [[ "$filename" == *.tar.* || "$filename" == *.tgz ]]; then elif [[ "$filename" == *.tar.* || "$filename" == *.tgz || "$filename" == *.txz ]]; then
tar --no-same-owner -xf "$tmpdir/$filename" -C "$unpack_tmp" || { tar --no-same-owner -xf "$tmpdir/$filename" -C "$unpack_tmp" || {
msg_error "Failed to extract TAR archive" msg_error "Failed to extract TAR archive"
rm -rf "$tmpdir" "$unpack_tmp" rm -rf "$tmpdir" "$unpack_tmp"
@@ -1998,50 +2006,6 @@ function fetch_and_deploy_gh_release() {
rm -rf "$tmpdir"
}
# ------------------------------------------------------------------------------
# Loads LOCAL_IP from persistent store or detects if missing.
#
# Description:
# - Loads from /run/local-ip.env or performs runtime lookup
# ------------------------------------------------------------------------------
function import_local_ip() {
local IP_FILE="/run/local-ip.env"
if [[ -f "$IP_FILE" ]]; then
# shellcheck disable=SC1090
source "$IP_FILE"
fi
if [[ -z "${LOCAL_IP:-}" ]]; then
get_current_ip() {
local targets=("8.8.8.8" "1.1.1.1" "192.168.1.1" "10.0.0.1" "172.16.0.1" "default")
local ip
for target in "${targets[@]}"; do
if [[ "$target" == "default" ]]; then
ip=$(ip route get 1 2>/dev/null | awk '{for(i=1;i<=NF;i++) if ($i=="src") print $(i+1)}')
else
ip=$(ip route get "$target" 2>/dev/null | awk '{for(i=1;i<=NF;i++) if ($i=="src") print $(i+1)}')
fi
if [[ -n "$ip" ]]; then
echo "$ip"
return 0
fi
done
return 1
}
LOCAL_IP="$(get_current_ip || true)"
if [[ -z "$LOCAL_IP" ]]; then
msg_error "Could not determine LOCAL_IP"
return 1
fi
fi
export LOCAL_IP
}
# ------------------------------------------------------------------------------
# Installs Adminer (Debian/Ubuntu via APT, Alpine via direct download).
#
@@ -2669,6 +2633,7 @@ function setup_hwaccel() {
# GPU Selection - Let user choose which GPU(s) to configure
# ═══════════════════════════════════════════════════════════════════════════
local -a SELECTED_INDICES=()
local install_nvidia_drivers="yes"
if [[ $gpu_count -eq 1 ]]; then
# Single GPU - auto-select
@@ -2677,7 +2642,7 @@ function setup_hwaccel() {
else
# Multiple GPUs - show selection menu
echo ""
msg_custom "⚠" "${YW}" "Multiple GPUs detected:"
echo ""
for i in "${!GPU_LIST[@]}"; do
local type_display="${GPU_TYPES[$i]}"
@@ -2730,6 +2695,30 @@ function setup_hwaccel() {
fi
fi
# Ask whether to install NVIDIA drivers in the container
local nvidia_selected="no"
for idx in "${SELECTED_INDICES[@]}"; do
if [[ "${GPU_TYPES[$idx]}" == "NVIDIA" ]]; then
nvidia_selected="yes"
break
fi
done
if [[ "$nvidia_selected" == "yes" ]]; then
if [[ -n "${INSTALL_NVIDIA_DRIVERS:-}" ]]; then
install_nvidia_drivers="${INSTALL_NVIDIA_DRIVERS}"
else
echo ""
msg_custom "🎮" "${GN}" "NVIDIA GPU passthrough detected"
local nvidia_reply=""
read -r -t 60 -p "${TAB3}⚙️ Install NVIDIA driver libraries in the container? [Y/n] (auto-yes in 60s): " nvidia_reply || nvidia_reply=""
case "${nvidia_reply,,}" in
n | no) install_nvidia_drivers="no" ;;
*) install_nvidia_drivers="yes" ;;
esac
fi
fi
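Because the prompt auto-answers yes after 60 seconds, unattended runs can preseed the decision; a sketch (the script name is a placeholder):
INSTALL_NVIDIA_DRIVERS=no bash my-install.sh   # skip driver libs, configure passthrough only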
# ═══════════════════════════════════════════════════════════════════════════
# OS Detection
# ═══════════════════════════════════════════════════════════════════════════
@@ -2790,7 +2779,11 @@ function setup_hwaccel() {
# NVIDIA GPUs
# ─────────────────────────────────────────────────────────────────────────
NVIDIA)
if [[ "$install_nvidia_drivers" == "yes" ]]; then
_setup_nvidia_gpu "$os_id" "$os_codename" "$os_version"
else
msg_warn "Skipping NVIDIA driver installation (user opted to install manually)"
fi
;;
esac
done
@@ -2920,8 +2913,15 @@ _setup_intel_legacy() {
vainfo \
intel-gpu-tools 2>/dev/null || msg_warn "Some Intel legacy packages failed"
# beignet provides OpenCL for older Intel GPUs (Sandy Bridge to Broadwell)
# Note: beignet-opencl-icd was removed in Debian 12+ and Ubuntu 22.04+
# Check if package is available before attempting installation
if apt-cache show beignet-opencl-icd &>/dev/null; then
$STD apt -y install beignet-opencl-icd 2>/dev/null || msg_warn "beignet-opencl-icd installation failed (optional)"
else
msg_warn "beignet-opencl-icd not available - OpenCL support for legacy Intel GPU limited"
msg_warn "Note: Hardware video encoding/decoding (VA-API) still works without OpenCL"
fi
msg_ok "Intel Legacy GPU configured"
}
@@ -2989,16 +2989,24 @@ _setup_nvidia_gpu() {
msg_info "Installing NVIDIA GPU drivers" msg_info "Installing NVIDIA GPU drivers"
# Prevent interactive dialogs (e.g., "Mismatching nvidia kernel module" whiptail)
export DEBIAN_FRONTEND=noninteractive
export NEEDRESTART_MODE=a
# Detect host driver version (passed through via /proc) # Detect host driver version (passed through via /proc)
# Format varies by driver type:
# Proprietary: "NVRM version: NVIDIA UNIX x86_64 Kernel Module 550.54.14 Thu..."
# Open: "NVRM version: NVIDIA UNIX Open Kernel Module for x86_64 590.48.01 Release..."
# Use regex to extract version number (###.##.## pattern)
local nvidia_host_version="" local nvidia_host_version=""
if [[ -f /proc/driver/nvidia/version ]]; then if [[ -f /proc/driver/nvidia/version ]]; then
nvidia_host_version=$(grep "NVRM version:" /proc/driver/nvidia/version 2>/dev/null | awk '{print $8}') nvidia_host_version=$(grep -oP '\d{3,}\.\d+\.\d+' /proc/driver/nvidia/version 2>/dev/null | head -1)
fi fi
if [[ -z "$nvidia_host_version" ]]; then if [[ -z "$nvidia_host_version" ]]; then
msg_warn "NVIDIA host driver version not found in /proc/driver/nvidia/version" msg_warn "NVIDIA host driver version not found in /proc/driver/nvidia/version"
msg_warn "Ensure NVIDIA drivers are installed on host and GPU passthrough is enabled" msg_warn "Ensure NVIDIA drivers are installed on host and GPU passthrough is enabled"
$STD apt -y install va-driver-all vainfo 2>/dev/null || true $STD apt-get -y install va-driver-all vainfo 2>/dev/null || true
return 0 return 0
fi fi
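The \d{3,}\.\d+\.\d+ pattern matches both NVRM formats; a quick sketch with a sample open-kernel line:
echo "NVRM version: NVIDIA UNIX Open Kernel Module for x86_64 590.48.01 Release" |
  grep -oP '\d{3,}\.\d+\.\d+' | head -1   # prints 590.48.01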
@@ -3011,53 +3019,115 @@ _setup_nvidia_gpu() {
sed -i -E 's/Components: (.*)$/Components: \1 contrib non-free non-free-firmware/g' /etc/apt/sources.list.d/debian.sources 2>/dev/null || true
fi
fi
$STD apt-get -y update 2>/dev/null || msg_warn "apt update failed - continuing anyway"
# For Debian 13 Trixie/Sid: Use Debian's own nvidia packages first (better compatibility)
# NVIDIA's CUDA repo targets Debian 12 and may not have amd64 packages for Trixie
if [[ "$os_codename" == "trixie" || "$os_codename" == "sid" ]]; then
msg_info "Debian ${os_codename}: Using Debian's NVIDIA packages"
# Extract major version for flexible matching (580.126.09 -> 580)
local nvidia_major_version="${nvidia_host_version%%.*}"
# Check what versions are actually available
local available_version=""
available_version=$(apt-cache madison libcuda1 2>/dev/null | awk '{print $3}' | grep -E "^${nvidia_major_version}\." | head -1 || true)
if [[ -n "$available_version" ]]; then
msg_info "Found available NVIDIA version: ${available_version}"
local nvidia_pkgs="libcuda1=${available_version} libnvcuvid1=${available_version} libnvidia-encode1=${available_version} libnvidia-ml1=${available_version}"
if $STD apt-get -y -o Dpkg::Options::="--force-confold" install --no-install-recommends $nvidia_pkgs 2>/dev/null; then
msg_ok "Installed NVIDIA libraries (${available_version})"
else
msg_warn "Failed to install NVIDIA ${available_version} - trying unversioned"
$STD apt-get -y -o Dpkg::Options::="--force-confold" install --no-install-recommends libcuda1 libnvcuvid1 libnvidia-encode1 libnvidia-ml1 2>/dev/null || true
fi
else
# No matching major version - try latest available or unversioned
msg_warn "No NVIDIA packages for version ${nvidia_major_version}.x found in repos"
available_version=$(apt-cache madison libcuda1 2>/dev/null | awk '{print $3}' | head -1 || true)
if [[ -n "$available_version" ]]; then
msg_info "Trying latest available: ${available_version} (may cause version mismatch)"
$STD apt-get -y -o Dpkg::Options::="--force-confold" install --no-install-recommends \
libcuda1="${available_version}" libnvcuvid1="${available_version}" \
libnvidia-encode1="${available_version}" libnvidia-ml1="${available_version}" 2>/dev/null ||
$STD apt-get -y -o Dpkg::Options::="--force-confold" install --no-install-recommends \
libcuda1 libnvcuvid1 libnvidia-encode1 libnvidia-ml1 2>/dev/null ||
msg_warn "NVIDIA library installation failed - GPU compute may not work"
else
msg_warn "No NVIDIA packages available in Debian repos - GPU support disabled"
fi
fi
$STD apt-get -y -o Dpkg::Options::="--force-confold" install --no-install-recommends nvidia-smi 2>/dev/null || true
else
# Debian 11/12: Use NVIDIA CUDA repository for version matching
local cuda_repo="debian12"
case "$os_codename" in
bullseye) cuda_repo="debian11" ;;
bookworm) cuda_repo="debian12" ;;
esac
# Add NVIDIA CUDA repository
if [[ ! -f /usr/share/keyrings/cuda-archive-keyring.gpg ]]; then
msg_info "Adding NVIDIA CUDA repository (${cuda_repo})"
local cuda_keyring
cuda_keyring="$(mktemp)"
if curl -fsSL -o "$cuda_keyring" "https://developer.download.nvidia.com/compute/cuda/repos/${cuda_repo}/x86_64/cuda-keyring_1.1-1_all.deb" 2>/dev/null; then
$STD dpkg -i "$cuda_keyring" 2>/dev/null || true
else
msg_warn "Failed to download NVIDIA CUDA keyring"
fi
rm -f "$cuda_keyring"
fi
# Pin NVIDIA repo for version matching
cat <<'NVIDIA_PIN' >/etc/apt/preferences.d/nvidia-cuda-pin
Package: *
Pin: origin developer.download.nvidia.com
Pin-Priority: 1001
NVIDIA_PIN
$STD apt-get -y update 2>/dev/null || msg_warn "apt update failed - continuing anyway"
# Extract major version for flexible matching (580.126.09 -> 580)
local nvidia_major_version="${nvidia_host_version%%.*}"
# Check what versions are actually available in CUDA repo
local available_version=""
available_version=$(apt-cache madison libcuda1 2>/dev/null | awk '{print $3}' | grep -E "^${nvidia_major_version}\." | head -1 || true)
if [[ -n "$available_version" ]]; then
msg_info "Installing NVIDIA libraries (version ${available_version})"
local nvidia_pkgs="libcuda1=${available_version} libnvcuvid1=${available_version} libnvidia-encode1=${available_version} libnvidia-ml1=${available_version}"
if $STD apt-get -y -o Dpkg::Options::="--force-confold" install --no-install-recommends $nvidia_pkgs 2>/dev/null; then
msg_ok "Installed version-matched NVIDIA libraries"
else
msg_warn "Version-pinned install failed - trying unpinned"
$STD apt-get -y -o Dpkg::Options::="--force-confold" install --no-install-recommends libcuda1 libnvcuvid1 libnvidia-encode1 libnvidia-ml1 2>/dev/null ||
msg_warn "NVIDIA library installation failed"
fi
else
msg_warn "No NVIDIA packages for version ${nvidia_major_version}.x in CUDA repo (host: ${nvidia_host_version})"
# Try latest available version
available_version=$(apt-cache madison libcuda1 2>/dev/null | awk '{print $3}' | head -1 || true)
if [[ -n "$available_version" ]]; then
msg_info "Trying latest available: ${available_version} (version mismatch warning)"
if $STD apt-get -y -o Dpkg::Options::="--force-confold" install --no-install-recommends \
libcuda1="${available_version}" libnvcuvid1="${available_version}" \
libnvidia-encode1="${available_version}" libnvidia-ml1="${available_version}" 2>/dev/null; then
msg_ok "Installed NVIDIA libraries (${available_version}) - version differs from host"
else
$STD apt-get -y -o Dpkg::Options::="--force-confold" install --no-install-recommends libcuda1 libnvcuvid1 libnvidia-encode1 libnvidia-ml1 2>/dev/null ||
msg_warn "NVIDIA library installation failed"
fi
else
msg_warn "No NVIDIA packages available in CUDA repo - GPU support disabled"
fi
fi
$STD apt-get -y -o Dpkg::Options::="--force-confold" install --no-install-recommends nvidia-smi 2>/dev/null || true
fi
elif [[ "$os_id" == "ubuntu" ]]; then
# Ubuntu versioning
@@ -3081,20 +3151,45 @@ NVIDIA_PIN
rm -f "$cuda_keyring"
fi
$STD apt-get -y update 2>/dev/null || msg_warn "apt update failed - continuing anyway"
# Extract major version for flexible matching
local nvidia_major_version="${nvidia_host_version%%.*}"
# Check what versions are available
local available_version=""
available_version=$(apt-cache madison libcuda1 2>/dev/null | awk '{print $3}' | grep -E "^${nvidia_major_version}\." | head -1 || true)
if [[ -n "$available_version" ]]; then
msg_info "Installing NVIDIA libraries (version ${available_version})"
local nvidia_pkgs="libcuda1=${available_version} libnvcuvid1=${available_version} libnvidia-encode1=${available_version} libnvidia-ml1=${available_version}"
if $STD apt-get -y -o Dpkg::Options::="--force-confold" install --no-install-recommends $nvidia_pkgs 2>/dev/null; then
msg_ok "Installed version-matched NVIDIA libraries"
else
# Fallback to Ubuntu repo packages with versioned nvidia-utils
msg_warn "CUDA repo install failed - trying Ubuntu native packages (nvidia-utils-${nvidia_major_version})"
if $STD apt-get -y -o Dpkg::Options::="--force-confold" install --no-install-recommends \
libnvidia-decode-${nvidia_major_version} libnvidia-encode-${nvidia_major_version} nvidia-utils-${nvidia_major_version} 2>/dev/null; then
msg_ok "Installed Ubuntu NVIDIA packages (${nvidia_major_version})"
else
msg_warn "NVIDIA driver installation failed - please install manually: apt install nvidia-utils-${nvidia_major_version}"
fi
fi
else
msg_warn "No NVIDIA packages for version ${nvidia_major_version}.x in CUDA repo"
# Fallback to Ubuntu repo packages with versioned nvidia-utils
msg_info "Trying Ubuntu native packages (nvidia-utils-${nvidia_major_version})"
if $STD apt-get -y -o Dpkg::Options::="--force-confold" install --no-install-recommends \
libnvidia-decode-${nvidia_major_version} libnvidia-encode-${nvidia_major_version} nvidia-utils-${nvidia_major_version} 2>/dev/null; then
msg_ok "Installed Ubuntu NVIDIA packages (${nvidia_major_version})"
else
msg_warn "NVIDIA driver installation failed - please install manually: apt install nvidia-utils-${nvidia_major_version}"
fi
fi
fi
# VA-API for hybrid setups (Intel + NVIDIA)
$STD apt-get -y install va-driver-all vainfo 2>/dev/null || true
msg_ok "NVIDIA GPU configured"
}
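To preview which version the major-version match would select on a given container, the same madison query can be run by hand; a sketch (580 is an example major version):
nvidia_major=580
apt-cache madison libcuda1 | awk '{print $3}' | grep -E "^${nvidia_major}\." | head -1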
@@ -3496,10 +3591,11 @@ IP_FILE="/run/local-ip.env"
mkdir -p "$(dirname "$IP_FILE")"
get_current_ip() {
local ip
# Try IPv4 targets first
local ipv4_targets=("8.8.8.8" "1.1.1.1" "192.168.1.1" "10.0.0.1" "172.16.0.1" "default")
for target in "${ipv4_targets[@]}"; do
if [[ "$target" == "default" ]]; then
ip=$(ip route get 1 2>/dev/null | awk '{for(i=1;i<=NF;i++) if ($i=="src") print $(i+1)}')
else
@@ -3511,6 +3607,23 @@ get_current_ip() {
fi
done
# IPv6 fallback: Try direct interface lookup for eth0
ip=$(ip -6 addr show eth0 scope global 2>/dev/null | awk '/inet6 / {print $2}' | cut -d/ -f1 | head -n1)
if [[ -n "$ip" && "$ip" =~ : ]]; then
echo "$ip"
return 0
fi
# IPv6 fallback: Use routing table with IPv6 targets (Google DNS, Cloudflare DNS)
local ipv6_targets=("2001:4860:4860::8888" "2606:4700:4700::1111")
for target in "${ipv6_targets[@]}"; do
ip=$(ip -6 route get "$target" 2>/dev/null | awk '{for(i=1;i<=NF;i++) if ($i=="src") print $(i+1)}')
if [[ -n "$ip" && "$ip" =~ : ]]; then
echo "$ip"
return 0
fi
done
return 1
}
@@ -3549,58 +3662,145 @@ EOF
}
# ------------------------------------------------------------------------------
# Installs or updates MariaDB.
#
# Description:
# - Uses Debian/Ubuntu distribution packages by default (most reliable)
# - Only uses official MariaDB repository when a specific version is requested
# - Detects current MariaDB version and replaces it if necessary
# - Preserves existing database data
#
# Variables:
# MARIADB_VERSION - MariaDB version to install (optional)
#   - Not set or "latest": Uses distribution packages (recommended)
#   - Specific version (e.g. "11.4", "12.2"): Uses MariaDB official repo
# ------------------------------------------------------------------------------
setup_mariadb() {
local MARIADB_VERSION="${MARIADB_VERSION:-latest}"
local USE_DISTRO_PACKAGES=false
# Ensure non-interactive mode for all apt operations
export DEBIAN_FRONTEND=noninteractive
export NEEDRESTART_MODE=a
export NEEDRESTART_SUSPEND=1
# Determine installation method:
# - "latest" or empty: Use distribution packages (avoids mirror issues)
# - Specific version: Use MariaDB official repository
if [[ "$MARIADB_VERSION" == "latest" || -z "$MARIADB_VERSION" ]]; then
USE_DISTRO_PACKAGES=true
msg_info "Setup MariaDB (distribution packages)"
else
msg_info "Setup MariaDB $MARIADB_VERSION (official repository)"
fi
# Get currently installed version
local CURRENT_VERSION=""
CURRENT_VERSION=$(is_tool_installed "mariadb" 2>/dev/null) || true
# Pre-configure debconf to prevent any interactive prompts during install/upgrade
debconf-set-selections <<EOF
mariadb-server mariadb-server/feedback boolean false
mariadb-server mariadb-server/root_password password
mariadb-server mariadb-server/root_password_again password
EOF
# If specific version requested, also configure version-specific debconf
if [[ "$USE_DISTRO_PACKAGES" == "false" ]]; then
local MARIADB_MAJOR_MINOR
MARIADB_MAJOR_MINOR=$(echo "$MARIADB_VERSION" | awk -F. '{print $1"."$2}')
if [[ -n "$MARIADB_MAJOR_MINOR" ]]; then
debconf-set-selections <<EOF
mariadb-server-$MARIADB_MAJOR_MINOR mariadb-server/feedback boolean false
mariadb-server-$MARIADB_MAJOR_MINOR mariadb-server/root_password password
mariadb-server-$MARIADB_MAJOR_MINOR mariadb-server/root_password_again password
EOF
fi
fi
# ============================================================================
# DISTRIBUTION PACKAGES PATH (default, most reliable)
# ============================================================================
if [[ "$USE_DISTRO_PACKAGES" == "true" ]]; then
# Check if MariaDB was previously installed from official repo
local HAD_MARIADB_REPO=false
if [[ -f /etc/apt/sources.list.d/mariadb.sources ]] || [[ -f /etc/apt/sources.list.d/mariadb.list ]]; then
HAD_MARIADB_REPO=true
msg_info "Removing MariaDB official repository (switching to distribution packages)"
fi
# Clean up any existing MariaDB repository files to avoid conflicts
cleanup_old_repo_files "mariadb"
# If we had a repo, we need to refresh APT cache
if [[ "$HAD_MARIADB_REPO" == "true" ]]; then
$STD apt update || msg_warn "APT update had issues, continuing..."
fi
# Ensure APT is working
ensure_apt_working || return 1
# Check if installed version is from official repo and higher than distro version
# In this case, we keep the existing installation to avoid data issues
if [[ -n "$CURRENT_VERSION" ]]; then
# Get available distro version
local DISTRO_VERSION=""
DISTRO_VERSION=$(apt-cache policy mariadb-server 2>/dev/null | grep -E "Candidate:" | awk '{print $2}' | grep -oP '^\d+:\K\d+\.\d+\.\d+' || echo "")
if [[ -n "$DISTRO_VERSION" ]]; then
# Compare versions - if current is higher, keep it
local CURRENT_MAJOR DISTRO_MAJOR
CURRENT_MAJOR=$(echo "$CURRENT_VERSION" | awk -F. '{print $1}')
DISTRO_MAJOR=$(echo "$DISTRO_VERSION" | awk -F. '{print $1}')
if [[ "$CURRENT_MAJOR" -gt "$DISTRO_MAJOR" ]]; then
msg_warn "MariaDB $CURRENT_VERSION is already installed (higher than distro $DISTRO_VERSION)"
msg_warn "Keeping existing installation to preserve data integrity"
msg_warn "To use distribution packages, manually remove MariaDB first"
_setup_mariadb_runtime_dir
cache_installed_version "mariadb" "$CURRENT_VERSION"
msg_ok "Setup MariaDB $CURRENT_VERSION (existing installation kept)"
return 0
fi
fi
fi
# Install or upgrade MariaDB from distribution packages
if ! install_packages_with_retry "mariadb-server" "mariadb-client"; then
msg_error "Failed to install MariaDB packages from distribution"
return 1
fi
# Get installed version for caching
local INSTALLED_VERSION=""
INSTALLED_VERSION=$(mariadb --version 2>/dev/null | grep -oP '\d+\.\d+\.\d+' | head -n1 || echo "distro")
# Configure runtime directory and finish
_setup_mariadb_runtime_dir
cache_installed_version "mariadb" "$INSTALLED_VERSION"
msg_ok "Setup MariaDB $INSTALLED_VERSION (distribution packages)"
return 0
fi
# ============================================================================
# OFFICIAL REPOSITORY PATH (only when specific version requested)
# ============================================================================
# First, check if there's an old/broken repository that needs cleanup
if [[ -f /etc/apt/sources.list.d/mariadb.sources ]] || [[ -f /etc/apt/sources.list.d/mariadb.list ]]; then
local OLD_REPO_VERSION=""
OLD_REPO_VERSION=$(grep -oP 'repo/\K[0-9]+\.[0-9]+(\.[0-9]+)?' /etc/apt/sources.list.d/mariadb.sources 2>/dev/null || \
grep -oP 'repo/\K[0-9]+\.[0-9]+(\.[0-9]+)?' /etc/apt/sources.list.d/mariadb.list 2>/dev/null || echo "")
# Check if old repo points to a different version
if [[ -n "$OLD_REPO_VERSION" ]] && [[ "${OLD_REPO_VERSION%.*}" != "${MARIADB_VERSION%.*}" ]]; then
msg_info "Cleaning up old MariaDB repository (was: $OLD_REPO_VERSION, requested: $MARIADB_VERSION)"
cleanup_old_repo_files "mariadb"
$STD apt update || msg_warn "APT update had issues, continuing..."
fi
fi
# Scenario 1: Already installed at target version - just update packages
if [[ -n "$CURRENT_VERSION" && "$CURRENT_VERSION" == "$MARIADB_VERSION" ]]; then
msg_info "Update MariaDB $MARIADB_VERSION"
@@ -3639,9 +3839,7 @@ setup_mariadb() {
remove_old_tool_version "mariadb"
fi
# Scenario 3: Fresh install or version change with specific version
# Prepare repository (cleanup + validation)
prepare_repository_setup "mariadb" || {
msg_error "Failed to prepare MariaDB repository"
@@ -3667,31 +3865,39 @@ setup_mariadb() {
return 1
}
# Install packages with retry logic
if ! install_packages_with_retry "mariadb-server" "mariadb-client"; then
# Fallback: try distribution packages
msg_warn "Failed to install MariaDB $MARIADB_VERSION from official repo, falling back to distribution packages..."
cleanup_old_repo_files "mariadb"
$STD apt update || {
msg_warn "APT update also failed, continuing with cache"
}
if install_packages_with_retry "mariadb-server" "mariadb-client"; then
local FALLBACK_VERSION=""
FALLBACK_VERSION=$(mariadb --version 2>/dev/null | grep -oP '\d+\.\d+\.\d+' | head -n1 || echo "distro")
msg_warn "Installed MariaDB $FALLBACK_VERSION from distribution instead of requested $MARIADB_VERSION"
_setup_mariadb_runtime_dir
cache_installed_version "mariadb" "$FALLBACK_VERSION"
msg_ok "Setup MariaDB $FALLBACK_VERSION (fallback to distribution packages)"
return 0
else
msg_error "Failed to install MariaDB packages (both official repo and distribution)"
return 1
fi
fi
_setup_mariadb_runtime_dir
cache_installed_version "mariadb" "$MARIADB_VERSION"
msg_ok "Setup MariaDB $MARIADB_VERSION"
}
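Usage sketch covering both paths:
setup_mariadb                            # distribution packages (default)
MARIADB_VERSION="11.4" setup_mariadb     # pin a specific series via the official repo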
# ------------------------------------------------------------------------------
# Helper function: Configure MariaDB runtime directory persistence
# ------------------------------------------------------------------------------
_setup_mariadb_runtime_dir() {
# Configure tmpfiles.d to ensure /run/mysqld directory is created on boot
# This fixes the issue where MariaDB fails to start after container reboot
# Create tmpfiles.d configuration with error handling
if ! printf '# Ensure /run/mysqld directory exists with correct permissions for MariaDB\nd /run/mysqld 0755 mysql mysql -\n' >/etc/tmpfiles.d/mariadb.conf; then
@@ -3711,11 +3917,6 @@ setup_mariadb() {
msg_warn "mysql user not found - directory created with correct permissions but ownership not set" msg_warn "mysql user not found - directory created with correct permissions but ownership not set"
fi fi
fi fi
msg_ok "Configured MariaDB runtime directory persistence"
cache_installed_version "mariadb" "$MARIADB_VERSION"
msg_ok "Setup MariaDB $MARIADB_VERSION"
} }
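The tmpfiles.d rule can be applied and checked without rebooting; a sketch:
systemd-tmpfiles --create /etc/tmpfiles.d/mariadb.conf
ls -ld /run/mysqld   # expect: drwxr-xr-x ... mysql mysql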
# ------------------------------------------------------------------------------
@@ -3815,6 +4016,11 @@ function setup_mongodb() {
DISTRO_ID=$(get_os_info id)
DISTRO_CODENAME=$(get_os_info codename)
# Ensure non-interactive mode for all apt operations
export DEBIAN_FRONTEND=noninteractive
export NEEDRESTART_MODE=a
export NEEDRESTART_SUSPEND=1
# Check AVX support
if ! grep -qm1 'avx[^ ]*' /proc/cpuinfo; then
local major="${MONGO_VERSION%%.*}"
@@ -3933,6 +4139,11 @@ function setup_mysql() {
DISTRO_ID=$(awk -F= '/^ID=/{print $2}' /etc/os-release | tr -d '"')
DISTRO_CODENAME=$(awk -F= '/^VERSION_CODENAME=/{print $2}' /etc/os-release)
# Ensure non-interactive mode for all apt operations
export DEBIAN_FRONTEND=noninteractive
export NEEDRESTART_MODE=a
export NEEDRESTART_SUSPEND=1
# Get currently installed version
local CURRENT_VERSION=""
CURRENT_VERSION=$(is_tool_installed "mysql" 2>/dev/null) || true
@@ -4027,7 +4238,6 @@ EOF
ensure_apt_working || return 1
# Try multiple package names with retry logic
local mysql_install_success=false
if apt-cache search "^mysql-server$" 2>/dev/null | grep -q . &&
@@ -4315,11 +4525,20 @@ EOF
return 1
}
# Use different repository based on OS
if [[ "$DISTRO_ID" == "ubuntu" ]]; then
# Ubuntu: Use ondrej/php PPA
msg_info "Adding ondrej/php PPA for Ubuntu"
$STD apt install -y software-properties-common
# Don't use $STD for add-apt-repository as it uses background processes
add-apt-repository -y ppa:ondrej/php >>"$(get_active_logfile)" 2>&1
else
# Debian: Use Sury repository
manage_tool_repository "php" "$PHP_VERSION" "" "https://packages.sury.org/debsuryorg-archive-keyring.deb" || {
msg_error "Failed to setup PHP repository"
return 1
}
fi
ensure_apt_working || return 1
$STD apt update
@@ -4342,6 +4561,14 @@ EOF
if [[ "$PHP_FPM" == "YES" ]]; then if [[ "$PHP_FPM" == "YES" ]]; then
MODULE_LIST+=" php${PHP_VERSION}-fpm" MODULE_LIST+=" php${PHP_VERSION}-fpm"
# Create systemd override for PHP-FPM to fix runtime directory issues in LXC containers
mkdir -p /etc/systemd/system/php${PHP_VERSION}-fpm.service.d/
cat <<EOF >/etc/systemd/system/php${PHP_VERSION}-fpm.service.d/override.conf
[Service]
RuntimeDirectory=php
RuntimeDirectoryMode=0755
EOF
$STD systemctl daemon-reload
fi
# install apache2 with PHP support if requested
@@ -4466,6 +4693,11 @@ function setup_postgresql() {
DISTRO_ID=$(awk -F= '/^ID=/{print $2}' /etc/os-release | tr -d '"')
DISTRO_CODENAME=$(awk -F= '/^VERSION_CODENAME=/{print $2}' /etc/os-release)
# Ensure non-interactive mode for all apt operations
export DEBIAN_FRONTEND=noninteractive
export NEEDRESTART_MODE=a
export NEEDRESTART_SUSPEND=1
# Get currently installed version
local CURRENT_PG_VERSION=""
if command -v psql >/dev/null; then
@@ -4904,6 +5136,146 @@ function setup_ruby() {
msg_ok "Setup Ruby $RUBY_VERSION" msg_ok "Setup Ruby $RUBY_VERSION"
} }
# ------------------------------------------------------------------------------
# Installs or updates MeiliSearch search engine.
#
# Description:
# - Fresh install: Downloads binary, creates config/service, starts
# - Update: Checks for new release, updates binary if available
# - Waits for service to be ready before returning
# - Exports API keys for use by caller
#
# Variables:
# MEILISEARCH_BIND - Bind address (default: 127.0.0.1:7700)
# MEILISEARCH_ENV - Environment: production/development (default: production)
# MEILISEARCH_DB_PATH - Database path (default: /var/lib/meilisearch/data)
#
# Exports:
# MEILISEARCH_MASTER_KEY - The master key for admin access
# MEILISEARCH_API_KEY - The default search API key
# MEILISEARCH_API_KEY_UID - The UID of the default API key
#
# Example (install script):
# setup_meilisearch
#
# Example (CT update_script):
# setup_meilisearch
# ------------------------------------------------------------------------------
function setup_meilisearch() {
local MEILISEARCH_BIND="${MEILISEARCH_BIND:-127.0.0.1:7700}"
local MEILISEARCH_ENV="${MEILISEARCH_ENV:-production}"
local MEILISEARCH_DB_PATH="${MEILISEARCH_DB_PATH:-/var/lib/meilisearch/data}"
local MEILISEARCH_DUMP_DIR="${MEILISEARCH_DUMP_DIR:-/var/lib/meilisearch/dumps}"
local MEILISEARCH_SNAPSHOT_DIR="${MEILISEARCH_SNAPSHOT_DIR:-/var/lib/meilisearch/snapshots}"
# Get bind address for health checks
local MEILISEARCH_HOST="${MEILISEARCH_BIND%%:*}"
local MEILISEARCH_PORT="${MEILISEARCH_BIND##*:}"
[[ "$MEILISEARCH_HOST" == "0.0.0.0" ]] && MEILISEARCH_HOST="127.0.0.1"
# Update mode: MeiliSearch already installed
if [[ -f /usr/bin/meilisearch ]]; then
if check_for_gh_release "meilisearch" "meilisearch/meilisearch"; then
msg_info "Updating MeiliSearch"
systemctl stop meilisearch
fetch_and_deploy_gh_release "meilisearch" "meilisearch/meilisearch" "binary"
systemctl start meilisearch
msg_ok "Updated MeiliSearch"
fi
return 0
fi
# Fresh install
msg_info "Setup MeiliSearch"
# Install binary
fetch_and_deploy_gh_release "meilisearch" "meilisearch/meilisearch" "binary" || {
msg_error "Failed to install MeiliSearch binary"
return 1
}
# Download default config
curl -fsSL https://raw.githubusercontent.com/meilisearch/meilisearch/latest/config.toml -o /etc/meilisearch.toml || {
msg_error "Failed to download MeiliSearch config"
return 1
}
# Generate master key
MEILISEARCH_MASTER_KEY=$(openssl rand -base64 12)
export MEILISEARCH_MASTER_KEY
# Configure
sed -i \
-e "s|^env =.*|env = \"${MEILISEARCH_ENV}\"|" \
-e "s|^# master_key =.*|master_key = \"${MEILISEARCH_MASTER_KEY}\"|" \
-e "s|^db_path =.*|db_path = \"${MEILISEARCH_DB_PATH}\"|" \
-e "s|^dump_dir =.*|dump_dir = \"${MEILISEARCH_DUMP_DIR}\"|" \
-e "s|^snapshot_dir =.*|snapshot_dir = \"${MEILISEARCH_SNAPSHOT_DIR}\"|" \
-e 's|^# no_analytics = true|no_analytics = true|' \
-e "s|^http_addr =.*|http_addr = \"${MEILISEARCH_BIND}\"|" \
/etc/meilisearch.toml
# Create data directories
mkdir -p "${MEILISEARCH_DB_PATH}" "${MEILISEARCH_DUMP_DIR}" "${MEILISEARCH_SNAPSHOT_DIR}"
# Create systemd service
cat <<EOF >/etc/systemd/system/meilisearch.service
[Unit]
Description=Meilisearch
After=network.target
[Service]
ExecStart=/usr/bin/meilisearch --config-file-path /etc/meilisearch.toml
Restart=always
[Install]
WantedBy=multi-user.target
EOF
# Enable and start service
systemctl daemon-reload
systemctl enable -q --now meilisearch
# Wait for MeiliSearch to be ready (up to 30 seconds)
for i in {1..30}; do
if curl -s -o /dev/null -w "%{http_code}" "http://${MEILISEARCH_HOST}:${MEILISEARCH_PORT}/health" 2>/dev/null | grep -q "200"; then
break
fi
sleep 1
done
# Verify service is running
if ! systemctl is-active --quiet meilisearch; then
msg_error "MeiliSearch service failed to start"
return 1
fi
# Get API keys with retry logic
MEILISEARCH_API_KEY=""
for i in {1..10}; do
MEILISEARCH_API_KEY=$(curl -s -X GET "http://${MEILISEARCH_HOST}:${MEILISEARCH_PORT}/keys" \
-H "Authorization: Bearer ${MEILISEARCH_MASTER_KEY}" 2>/dev/null | \
grep -o '"key":"[^"]*"' | head -n 1 | sed 's/"key":"//;s/"//') || true
[[ -n "$MEILISEARCH_API_KEY" ]] && break
sleep 2
done
MEILISEARCH_API_KEY_UID=$(curl -s -X GET "http://${MEILISEARCH_HOST}:${MEILISEARCH_PORT}/keys" \
-H "Authorization: Bearer ${MEILISEARCH_MASTER_KEY}" 2>/dev/null | \
grep -o '"uid":"[^"]*"' | head -n 1 | sed 's/"uid":"//;s/"//') || true
export MEILISEARCH_API_KEY
export MEILISEARCH_API_KEY_UID
# Cache version
local MEILISEARCH_VERSION
MEILISEARCH_VERSION=$(/usr/bin/meilisearch --version 2>/dev/null | grep -oE '[0-9]+\.[0-9]+\.[0-9]+' | head -1) || true
cache_installed_version "meilisearch" "${MEILISEARCH_VERSION:-unknown}"
msg_ok "Setup MeiliSearch ${MEILISEARCH_VERSION:-}"
}
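Usage sketch (the bind address is an example); the exported keys can then be written into an application's config:
MEILISEARCH_BIND="0.0.0.0:7700" setup_meilisearch || exit 1
echo "master key: ${MEILISEARCH_MASTER_KEY}"
echo "search key: ${MEILISEARCH_API_KEY} (uid: ${MEILISEARCH_API_KEY_UID})"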
# ------------------------------------------------------------------------------
# Installs or upgrades ClickHouse database server.
#
@@ -4923,6 +5295,11 @@ function setup_clickhouse() {
DISTRO_ID=$(awk -F= '/^ID=/{print $2}' /etc/os-release | tr -d '"')
DISTRO_CODENAME=$(awk -F= '/^VERSION_CODENAME=/{print $2}' /etc/os-release)
# Ensure non-interactive mode for all apt operations
export DEBIAN_FRONTEND=noninteractive
export NEEDRESTART_MODE=a
export NEEDRESTART_SUSPEND=1
# Resolve "latest" version
if [[ "$CLICKHOUSE_VERSION" == "latest" ]]; then
CLICKHOUSE_VERSION=$(curl -fsSL --max-time 15 https://packages.clickhouse.com/tgz/stable/ 2>/dev/null |
@@ -4985,7 +5362,6 @@ function setup_clickhouse() {
"main" "main"
# Install packages with retry logic # Install packages with retry logic
export DEBIAN_FRONTEND=noninteractive
$STD apt update || { $STD apt update || {
msg_error "APT update failed for ClickHouse repository" msg_error "APT update failed for ClickHouse repository"
return 1 return 1
@@ -5628,4 +6004,4 @@ EOF
fi
msg_ok "Docker setup completed"
}

View File

@@ -1,7 +1,7 @@
#!/usr/bin/env bash
SCRIPT_DIR="$(dirname "$0")"
source "$SCRIPT_DIR/../core/build.func"
# Copyright (c) 2021-2026 tteck
# Author: tteck (tteckster)
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://www.debian.org/
@@ -40,5 +40,5 @@ start
build_container
description
msg_ok "Completed successfully!\n"
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"

View File

@@ -1,6 +1,6 @@
#!/usr/bin/env bash
# Copyright (c) 2021-2026 tteck
# Author: tteck (tteckster)
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://www.debian.org/

View File

@@ -3,6 +3,7 @@ import { parse } from 'url';
import next from 'next';
import { WebSocketServer } from 'ws';
import { spawn } from 'child_process';
import { existsSync } from 'fs';
import { join, resolve } from 'path';
import stripAnsi from 'strip-ansi';
import { spawn as ptySpawn } from 'node-pty';
@@ -56,6 +57,8 @@ const handle = app.getRequestHandler();
* @property {string} user
* @property {string} password
* @property {number} [id]
* @property {string} [auth_type]
* @property {string} [ssh_key_path]
*/
/**
@@ -295,6 +298,20 @@ class ScriptExecutionHandler {
});
}
/**
* Resolve full server from DB when client sends server with id but no ssh_key_path (e.g. for Shell/Update over SSH).
* @param {ServerInfo|null} server - Server from WebSocket message
* @returns {Promise<ServerInfo|null>} Same server or full server from DB
*/
async resolveServerForSSH(server) {
if (!server?.id) return server;
if (server.auth_type === 'key' && (!server.ssh_key_path || !existsSync(server.ssh_key_path))) {
const full = await this.db.getServerById(server.id);
return /** @type {ServerInfo|null} */ (full ?? server);
}
return server;
}
/**
* @param {ExtendedWebSocket} ws
* @param {WebSocketMessage} message
@@ -305,16 +322,21 @@ class ScriptExecutionHandler {
switch (action) {
case 'start':
if (scriptPath && executionId) {
let serverToUse = server;
if (serverToUse?.id) {
serverToUse = await this.resolveServerForSSH(serverToUse) ?? serverToUse;
}
const resolved = serverToUse ?? server;
if (isClone && containerId && storage && server && cloneCount && hostnames && containerType) {
await this.startSSHCloneExecution(ws, containerId, executionId, storage, /** @type {ServerInfo} */ (resolved), containerType, cloneCount, hostnames);
} else if (isBackup && containerId && storage) {
await this.startBackupExecution(ws, containerId, executionId, storage, mode, resolved);
} else if (isUpdate && containerId) {
await this.startUpdateExecution(ws, containerId, executionId, mode, resolved, backupStorage);
} else if (isShell && containerId) {
await this.startShellExecution(ws, containerId, executionId, mode, resolved, containerType);
} else {
await this.startScriptExecution(ws, scriptPath, executionId, mode, resolved, envVars);
}
} else {
this.sendMessage(ws, {
@@ -1153,10 +1175,11 @@ class ScriptExecutionHandler {
const hostname = hostnames[i];
try {
// Read config file to get hostname/name (node-specific path)
const nodeName = server.name;
const configPath = containerType === 'lxc'
? `/etc/pve/nodes/${nodeName}/lxc/${nextId}.conf`
: `/etc/pve/nodes/${nodeName}/qemu-server/${nextId}.conf`;
let configContent = '';
await new Promise(/** @type {(resolve: (value?: void) => void) => void} */ ((resolve) => {
@@ -1474,21 +1497,21 @@ class ScriptExecutionHandler {
* @param {string} executionId
* @param {string} mode
* @param {ServerInfo|null} server
* @param {'lxc'|'vm'} [containerType='lxc']
*/
async startShellExecution(ws, containerId, executionId, mode = 'local', server = null, containerType = 'lxc') {
try {
const typeLabel = containerType === 'vm' ? 'VM' : 'container';
// Send start message
this.sendMessage(ws, {
type: 'start',
data: `Starting shell session for ${typeLabel} ${containerId}...`,
timestamp: Date.now()
});
if (mode === 'ssh' && server) {
await this.startSSHShellExecution(ws, containerId, executionId, server, containerType);
} else {
await this.startLocalShellExecution(ws, containerId, executionId, containerType);
}
} catch (error) { } catch (error) {
@@ -1505,12 +1528,12 @@ class ScriptExecutionHandler {
* @param {ExtendedWebSocket} ws
* @param {string} containerId
* @param {string} executionId
+ * @param {'lxc'|'vm'} [containerType='lxc']
*/
- async startLocalShellExecution(ws, containerId, executionId) {
+ async startLocalShellExecution(ws, containerId, executionId, containerType = 'lxc') {
const { spawn } = await import('node-pty');
- // Create a shell process that will run pct enter
- const childProcess = spawn('bash', ['-c', `pct enter ${containerId}`], {
+ const shellCommand = containerType === 'vm' ? `qm terminal ${containerId}` : `pct enter ${containerId}`;
+ const childProcess = spawn('bash', ['-c', shellCommand], {
name: 'xterm-color',
cols: 80,
rows: 24,
@@ -1553,14 +1576,15 @@ class ScriptExecutionHandler {
* @param {string} containerId
* @param {string} executionId
* @param {ServerInfo} server
+ * @param {'lxc'|'vm'} [containerType='lxc']
*/
- async startSSHShellExecution(ws, containerId, executionId, server) {
+ async startSSHShellExecution(ws, containerId, executionId, server, containerType = 'lxc') {
const sshService = getSSHExecutionService();
+ const shellCommand = containerType === 'vm' ? `qm terminal ${containerId}` : `pct enter ${containerId}`;
try {
const execution = await sshService.executeCommand(
server,
- `pct enter ${containerId}`,
+ shellCommand,
/** @param {string} data */
(data) => {
this.sendMessage(ws, {
@@ -1610,6 +1634,7 @@ class ScriptExecutionHandler {
// TerminalHandler removed - not used by current application
app.prepare().then(() => {
+ console.log('> Next.js app prepared successfully');
const httpServer = createServer(async (req, res) => {
try {
// Be sure to pass `true` as the second argument to `url.parse`.
@@ -1715,4 +1740,9 @@ app.prepare().then(() => {
autoSyncModule.setupGracefulShutdown();
}
});
+ }).catch((err) => {
+ console.error('> Failed to start server:', err.message);
+ console.error('> If you see "Could not find a production build", run: npm run build');
+ console.error('> Full error:', err);
+ process.exit(1);
});
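Note: the body of resolveServerForSSH() does not appear in these hunks; per the fix for #466 it reloads the full server row (including ssh_key_path) from the database when the client sends key auth without a usable key path. A minimal sketch of that idea, not the shipped implementation (the getDatabase/getServerById accessors are borrowed from the discover-ssh-keys route further down):

// Hypothetical sketch only, assuming a ServerInfo shape with id, auth_type, ssh_key_path.
async function resolveServerForSSH(server: ServerInfo | null): Promise<ServerInfo | null> {
  if (!server?.id) return server; // nothing to look up without a DB id
  // Key auth but no usable key path: the client likely sent a stripped-down object
  const needsKeyPath = server.auth_type === 'key' && !server.ssh_key_path?.trim();
  if (!needsKeyPath) return server;
  const full = (await getDatabase().getServerById(server.id)) as ServerInfo | null;
  return full ?? server; // fall back to the client-sent object if the row is gone
}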

View File

@@ -58,6 +58,11 @@ export function ConfigurationModal({
// Advanced mode state
const [advancedVars, setAdvancedVars] = useState<EnvVars>({});
+ // Discovered SSH keys on the Proxmox host (advanced mode only)
+ const [discoveredSshKeys, setDiscoveredSshKeys] = useState<string[]>([]);
+ const [discoveredSshKeysLoading, setDiscoveredSshKeysLoading] = useState(false);
+ const [discoveredSshKeysError, setDiscoveredSshKeysError] = useState<string | null>(null);
// Validation errors
const [errors, setErrors] = useState<Record<string, string>>({});
@@ -104,6 +109,7 @@ export function ConfigurationModal({
var_mknod: 0,
var_mount_fs: '',
var_protection: 'no',
+ var_tun: 'no',
// System
var_timezone: '',
@@ -119,6 +125,38 @@ export function ConfigurationModal({
}
}, [actualScript, server, mode, resources, slug]);
+ // Discover SSH keys on the Proxmox host when advanced mode is open
+ useEffect(() => {
+ if (!server?.id || !isOpen || mode !== 'advanced') {
+ setDiscoveredSshKeys([]);
+ setDiscoveredSshKeysError(null);
+ return;
+ }
+ let cancelled = false;
+ setDiscoveredSshKeysLoading(true);
+ setDiscoveredSshKeysError(null);
+ fetch(`/api/servers/${server.id}/discover-ssh-keys`)
+ .then((res) => {
+ if (!res.ok) throw new Error(res.status === 404 ? 'Server not found' : res.statusText);
+ return res.json();
+ })
+ .then((data: { keys?: string[] }) => {
+ if (!cancelled && Array.isArray(data.keys)) setDiscoveredSshKeys(data.keys);
+ })
+ .catch((err) => {
+ if (!cancelled) {
+ setDiscoveredSshKeys([]);
+ setDiscoveredSshKeysError(err instanceof Error ? err.message : 'Could not detect keys');
+ }
+ })
+ .finally(() => {
+ if (!cancelled) setDiscoveredSshKeysLoading(false);
+ });
+ return () => {
+ cancelled = true;
+ };
+ }, [server?.id, isOpen, mode]);
// Validation functions
const validateIPv4 = (ip: string): boolean => {
if (!ip) return true; // Empty is allowed (auto)
@@ -161,6 +199,17 @@ export function ConfigurationModal({
return !isNaN(num) && num > 0;
};
+ const validateHostname = (hostname: string): boolean => {
+ if (!hostname || hostname.length > 253) return false;
+ const label = /^[a-zA-Z0-9]([a-zA-Z0-9-]*[a-zA-Z0-9])?$/;
+ const labels = hostname.split('.');
+ return labels.length >= 1 && labels.every(l => l.length >= 1 && l.length <= 63 && label.test(l));
+ };
+ const validateAptCacherAddress = (value: string): boolean => {
+ return validateIPv4(value) || validateHostname(value);
+ };
const validateForm = (): boolean => {
const newErrors: Record<string, string> = {};
@@ -178,8 +227,8 @@ export function ConfigurationModal({
if (advancedVars.var_ns && !validateIPv4(advancedVars.var_ns as string)) {
newErrors.var_ns = 'Invalid IPv4 address';
}
- if (advancedVars.var_apt_cacher_ip && !validateIPv4(advancedVars.var_apt_cacher_ip as string)) {
- newErrors.var_apt_cacher_ip = 'Invalid IPv4 address';
+ if (advancedVars.var_apt_cacher_ip && !validateAptCacherAddress(advancedVars.var_apt_cacher_ip as string)) {
+ newErrors.var_apt_cacher_ip = 'Invalid IPv4 address or hostname';
}
// Validate IPv4 CIDR if network mode is static
const netValue = advancedVars.var_net;
@@ -275,6 +324,16 @@ export function ConfigurationModal({
if ((hasPassword || hasSSHKey) && envVars.var_ssh !== 'no') {
envVars.var_ssh = 'yes';
}
+ // Normalize var_tags: accept both comma and semicolon, output comma-separated
+ const rawTags = envVars.var_tags;
+ if (typeof rawTags === 'string' && rawTags.trim() !== '') {
+ envVars.var_tags = rawTags
+ .split(/[,;]/)
+ .map((s) => s.trim())
+ .filter(Boolean)
+ .join(',');
+ }
}
// Remove empty string values (but keep 0, false, etc.)
@@ -644,13 +703,13 @@ export function ConfigurationModal({
</div>
<div className="col-span-2">
<label className="block text-sm font-medium text-foreground mb-2">
- Tags (comma-separated)
+ Tags (comma or semicolon separated)
</label>
<Input
type="text"
value={typeof advancedVars.var_tags === 'boolean' ? '' : String(advancedVars.var_tags ?? '')}
onChange={(e) => updateAdvancedVar('var_tags', e.target.value)}
- placeholder="community-script"
+ placeholder="e.g. tag1; tag2"
/>
</div>
</div>
@@ -677,11 +736,40 @@ export function ConfigurationModal({
<label className="block text-sm font-medium text-foreground mb-2">
SSH Authorized Key
</label>
+ {discoveredSshKeysLoading && (
+ <p className="text-sm text-muted-foreground mb-2">Detecting SSH keys...</p>
+ )}
+ {discoveredSshKeysError && !discoveredSshKeysLoading && (
+ <p className="text-sm text-muted-foreground mb-2">Could not detect keys on host</p>
+ )}
+ {discoveredSshKeys.length > 0 && !discoveredSshKeysLoading && (
+ <div className="mb-2">
+ <label htmlFor="discover-ssh-key" className="sr-only">Use detected key</label>
+ <select
+ id="discover-ssh-key"
+ className="w-full rounded-md border border-input bg-background px-3 py-2 text-sm text-foreground focus:ring-2 focus:ring-ring focus:outline-none mb-2"
+ value=""
+ onChange={(e) => {
+ const idx = e.target.value;
+ if (idx === '') return;
+ const key = discoveredSshKeys[Number(idx)];
+ if (key) updateAdvancedVar('var_ssh_authorized_key', key);
+ }}
+ >
+ <option value=""> Select or paste below </option>
+ {discoveredSshKeys.map((key, i) => (
+ <option key={i} value={i}>
+ {key.length > 44 ? `${key.slice(0, 44)}...` : key}
+ </option>
+ ))}
+ </select>
+ </div>
+ )}
<Input
type="text"
value={typeof advancedVars.var_ssh_authorized_key === 'boolean' ? '' : String(advancedVars.var_ssh_authorized_key ?? '')}
onChange={(e) => updateAdvancedVar('var_ssh_authorized_key', e.target.value)}
- placeholder="ssh-rsa AAAA..."
+ placeholder="Or paste a public key: ssh-rsa AAAA..."
/>
</div>
</div>
@@ -730,6 +818,20 @@ export function ConfigurationModal({
<option value={1}>Enabled</option>
</select>
</div>
+ <div>
+ <label className="block text-sm font-medium text-foreground mb-2">
+ TUN/TAP (VPN)
+ </label>
+ <select
+ value={typeof advancedVars.var_tun === 'boolean' ? (advancedVars.var_tun ? 'yes' : 'no') : String(advancedVars.var_tun ?? 'no')}
+ onChange={(e) => updateAdvancedVar('var_tun', e.target.value)}
+ className="w-full rounded-md border border-input bg-background px-3 py-2 text-sm text-foreground focus:ring-2 focus:ring-ring focus:outline-none"
+ >
+ <option value="no">No</option>
+ <option value="yes">Yes</option>
+ </select>
+ <p className="text-xs text-muted-foreground mt-1">For Tailscale, WireGuard, OpenVPN</p>
+ </div>
<div>
<label className="block text-sm font-medium text-foreground mb-2">
Mknod
@@ -813,13 +915,13 @@ export function ConfigurationModal({
</div>
<div>
<label className="block text-sm font-medium text-foreground mb-2">
- APT Cacher IP
+ APT Cacher host or IP
</label>
<Input
type="text"
value={typeof advancedVars.var_apt_cacher_ip === 'boolean' ? '' : String(advancedVars.var_apt_cacher_ip ?? '')}
onChange={(e) => updateAdvancedVar('var_apt_cacher_ip', e.target.value)}
- placeholder="192.168.1.10"
+ placeholder="192.168.1.10 or apt-cacher.internal"
className={errors.var_apt_cacher_ip ? 'border-destructive' : ''}
/>
{errors.var_apt_cacher_ip && (
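The var_tags normalization added above is easy to sanity-check in isolation; the same logic as a standalone helper (the normalizeTags name is mine, the body is copied from the hunk), with one example input:

// Mirrors the var_tags handling: split on comma or semicolon, trim,
// drop empties, re-join comma-separated.
function normalizeTags(raw: string): string {
  return raw
    .split(/[,;]/)
    .map((s) => s.trim())
    .filter(Boolean)
    .join(',');
}

normalizeTags('tag1; tag2,  tag3;;'); // => 'tag1,tag2,tag3'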

View File

@@ -8,7 +8,9 @@ import { ScriptDetailModal } from "./ScriptDetailModal";
import { CategorySidebar } from "./CategorySidebar";
import { FilterBar, type FilterState } from "./FilterBar";
import { ViewToggle } from "./ViewToggle";
+ import { ConfirmationModal } from "./ConfirmationModal";
import { Button } from "./ui/button";
+ import { RefreshCw } from "lucide-react";
import type { ScriptCard as ScriptCardType } from "~/types/script";
import type { Server } from "~/types/server";
import { getDefaultFilters, mergeFiltersWithDefaults } from "./filterUtils";
@@ -32,8 +34,15 @@ export function DownloadedScriptsTab({
const [filters, setFilters] = useState<FilterState>(getDefaultFilters());
const [saveFiltersEnabled, setSaveFiltersEnabled] = useState(false);
const [isLoadingFilters, setIsLoadingFilters] = useState(true);
+ const [updateAllConfirmOpen, setUpdateAllConfirmOpen] = useState(false);
+ const [updateResult, setUpdateResult] = useState<{
+ successCount: number;
+ failCount: number;
+ failed: { slug: string; error: string }[];
+ } | null>(null);
const gridRef = useRef<HTMLDivElement>(null);
+ const utils = api.useUtils();
const {
data: scriptCardsData,
isLoading: githubLoading,
@@ -50,6 +59,30 @@ export function DownloadedScriptsTab({
{ enabled: !!selectedSlug },
);
+ const loadMultipleScriptsMutation = api.scripts.loadMultipleScripts.useMutation({
+ onSuccess: (data) => {
+ void utils.scripts.getAllDownloadedScripts.invalidate();
+ void utils.scripts.getScriptCardsWithCategories.invalidate();
+ setUpdateResult({
+ successCount: data.successful?.length ?? 0,
+ failCount: data.failed?.length ?? 0,
+ failed: (data.failed ?? []).map((f) => ({
+ slug: f.slug,
+ error: f.error ?? "Unknown error",
+ })),
+ });
+ setTimeout(() => setUpdateResult(null), 8000);
+ },
+ onError: (error) => {
+ setUpdateResult({
+ successCount: 0,
+ failCount: 1,
+ failed: [{ slug: "Request failed", error: error.message }],
+ });
+ setTimeout(() => setUpdateResult(null), 8000);
+ },
+ });
// Load SAVE_FILTER setting, saved filters, and view mode on component mount
useEffect(() => {
const loadSettings = async () => {
@@ -416,6 +449,21 @@ export function DownloadedScriptsTab({
setSelectedSlug(null);
};
+ const handleUpdateAllClick = () => {
+ setUpdateResult(null);
+ setUpdateAllConfirmOpen(true);
+ };
+ const handleUpdateAllConfirm = () => {
+ setUpdateAllConfirmOpen(false);
+ const slugs = downloadedScripts
+ .map((s) => s.slug)
+ .filter((slug): slug is string => Boolean(slug));
+ if (slugs.length > 0) {
+ loadMultipleScriptsMutation.mutate({ slugs });
+ }
+ };
if (githubLoading || localLoading) {
return (
<div className="flex items-center justify-center py-12">
@@ -508,6 +556,43 @@ export function DownloadedScriptsTab({
{/* Main Content */}
<div className="order-1 min-w-0 flex-1 lg:order-2" ref={gridRef}>
+ {/* Update all downloaded scripts */}
+ <div className="mb-4 flex flex-wrap items-center gap-3">
+ <Button
+ onClick={handleUpdateAllClick}
+ disabled={loadMultipleScriptsMutation.isPending}
+ variant="secondary"
+ size="default"
+ className="flex items-center gap-2"
+ >
+ {loadMultipleScriptsMutation.isPending ? (
+ <>
+ <RefreshCw className="h-4 w-4 animate-spin" />
+ <span>Updating...</span>
+ </>
+ ) : (
+ <>
+ <RefreshCw className="h-4 w-4" />
+ <span>Update all downloaded scripts</span>
+ </>
+ )}
+ </Button>
+ {updateResult && (
+ <span className="text-muted-foreground text-sm">
+ Updated {updateResult.successCount} successfully
+ {updateResult.failCount > 0
+ ? `, ${updateResult.failCount} failed`
+ : ""}
+ .
+ {updateResult.failCount > 0 && updateResult.failed.length > 0 && (
+ <span className="ml-1" title={updateResult.failed.map((f) => `${f.slug}: ${f.error}`).join("\n")}>
+ (hover for details)
+ </span>
+ )}
+ </span>
+ )}
+ </div>
{/* Enhanced Filter Bar */}
<FilterBar
filters={filters}
@@ -621,6 +706,17 @@ export function DownloadedScriptsTab({
onClose={handleCloseModal}
onInstallScript={onInstallScript}
/>
+ <ConfirmationModal
+ isOpen={updateAllConfirmOpen}
+ onClose={() => setUpdateAllConfirmOpen(false)}
+ onConfirm={handleUpdateAllConfirm}
+ title="Update all downloaded scripts"
+ message={`Update all ${downloadedScripts.length} downloaded scripts? This may take several minutes.`}
+ variant="simple"
+ confirmButtonText="Update all"
+ cancelButtonText="Cancel"
+ />
</div>
</div>
</div>
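One small detail in handleUpdateAllConfirm above: the (slug): slug is string => Boolean(slug) filter is a TypeScript type predicate, which is what narrows the mapped array of possibly-undefined slugs down to the string[] the mutation input expects. In isolation:

const maybeSlugs: (string | undefined)[] = ['nginx', undefined, 'pihole'];
// Without the predicate, filter(Boolean) would still be typed (string | undefined)[]
const slugs: string[] = maybeSlugs.filter((slug): slug is string => Boolean(slug));
// slugs => ['nginx', 'pihole']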

View File

@@ -1617,7 +1617,7 @@ export function GeneralSettingsModal({
<Input
id="new-repo-url"
type="url"
- placeholder="https://github.com/owner/repo"
+ placeholder="https://github.com/owner/repo or https://git.example.com/owner/repo"
value={newRepoUrl}
onChange={(e: React.ChangeEvent<HTMLInputElement>) =>
setNewRepoUrl(e.target.value)
@@ -1626,8 +1626,9 @@ export function GeneralSettingsModal({
className="w-full"
/>
<p className="text-muted-foreground mt-1 text-xs">
- Enter a GitHub repository URL (e.g.,
- https://github.com/owner/repo)
+ Supported: GitHub, GitLab, Bitbucket, or custom Git
+ servers (e.g. https://github.com/owner/repo,
+ https://gitlab.com/owner/repo)
</p>
</div>
<div className="border-border flex items-center justify-between gap-3 rounded-lg border p-3">

View File

@@ -80,6 +80,7 @@ export function InstalledScriptsTab() {
id: number;
containerId: string;
server?: any;
+ containerType?: 'lxc' | 'vm';
} | null>(null);
const [showBackupPrompt, setShowBackupPrompt] = useState(false);
const [showStorageSelection, setShowStorageSelection] = useState(false);
@@ -1167,6 +1168,7 @@ export function InstalledScriptsTab() {
id: script.id,
containerId: script.container_id,
server: server,
+ containerType: script.is_vm ? 'vm' : 'lxc',
});
};
@@ -1452,6 +1454,13 @@ export function InstalledScriptsTab() {
{/* Shell Terminal */}
{openingShell && (
<div className="mb-8" data-terminal="shell">
+ {openingShell.containerType === 'vm' && (
+ <p className="text-muted-foreground mb-2 text-sm">
+ VM shell uses the Proxmox serial console. The VM must have a
+ serial port configured (e.g. <code className="bg-muted rounded px-1">qm set {openingShell.containerId} -serial0 socket</code>).
+ Detach with <kbd className="bg-muted rounded px-1">Ctrl+O</kbd>.
+ </p>
+ )}
<Terminal
scriptPath={`shell-${openingShell.containerId}`}
onClose={handleCloseShellTerminal}
@@ -1459,6 +1468,7 @@ export function InstalledScriptsTab() {
server={openingShell.server}
isShell={true}
containerId={openingShell.containerId}
+ containerType={openingShell.containerType}
/>
</div>
)}
@@ -1538,7 +1548,7 @@ export function InstalledScriptsTab() {
>
{showAutoDetectForm
? "Cancel Auto-Detect"
- : '🔍 Auto-Detect LXC Containers (Must contain a tag with "community-script")'}
+ : '🔍 Auto-Detect Containers & VMs (tag: community-script)'}
</Button>
<Button
onClick={() => {
@@ -1764,12 +1774,11 @@ export function InstalledScriptsTab() {
</div>
)}
- {/* Auto-Detect LXC Containers Form */}
+ {/* Auto-Detect Containers & VMs Form */}
{showAutoDetectForm && (
<div className="bg-card border-border mb-6 rounded-lg border p-4 shadow-sm sm:p-6">
<h3 className="text-foreground mb-4 text-lg font-semibold sm:mb-6">
- Auto-Detect LXC Containers (Must contain a tag with
- &quot;community-script&quot;)
+ Auto-Detect Containers &amp; VMs (tag: community-script)
</h3>
<div className="space-y-4 sm:space-y-6">
<div className="bg-muted/30 border-muted rounded-lg border p-4">
@@ -1795,12 +1804,12 @@ export function InstalledScriptsTab() {
<p>This feature will:</p>
<ul className="mt-1 list-inside list-disc space-y-1">
<li>Connect to the selected server via SSH</li>
- <li>Scan all LXC config files in /etc/pve/lxc/</li>
+ <li>Scan LXC configs in /etc/pve/lxc/ and VM configs in /etc/pve/qemu-server/</li>
<li>
- Find containers with &quot;community-script&quot; in
+ Find containers and VMs with &quot;community-script&quot; in
their tags
</li>
- <li>Extract the container ID and hostname</li>
+ <li>Extract the container/VM ID and hostname or name</li>
<li>Add them as installed script entries</li>
</ul>
</div>
@@ -2302,6 +2311,11 @@
"stopped"
}
className="text-muted-foreground hover:text-foreground hover:bg-muted/20 focus:bg-muted/20"
+ title={
+ script.is_vm
+ ? "VM serial console (requires serial port; detach with Ctrl+O)"
+ : undefined
+ }
>
Shell
</DropdownMenuItem>
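The auto-detect flow described above looks for a community-script entry in the tags: line of each Proxmox config (Proxmox stores tags semicolon-separated, e.g. tags: community-script;media). A rough sketch of that check, assuming the config content has already been read over SSH; the helper name and shape are mine, not the router's exact code:

function hasCommunityScriptTag(configContent: string): boolean {
  // Find the "tags:" line of a pct/qm config and split its value on ';'
  const tagLine = configContent.split('\n').find((line) => line.startsWith('tags:'));
  if (!tagLine) return false;
  return tagLine
    .slice('tags:'.length)
    .split(';')
    .map((t) => t.trim())
    .some((t) => t === 'community-script');
}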

View File

@@ -270,22 +270,21 @@ export function PBSCredentialsModal({
htmlFor="pbs-fingerprint"
className="text-foreground mb-1 block text-sm font-medium"
>
- Fingerprint <span className="text-error">*</span>
+ Fingerprint
</label>
<input
type="text"
id="pbs-fingerprint"
value={pbsFingerprint}
onChange={(e) => setPbsFingerprint(e.target.value)}
- required
disabled={isLoading}
className="bg-card text-foreground placeholder-muted-foreground focus:ring-ring focus:border-ring border-border w-full rounded-md border px-3 py-2 shadow-sm focus:ring-2 focus:outline-none"
placeholder="e.g., 7b:e5:87:38:5e:16:05:d1:12:22:7f:73:d2:e2:d0:cf:8c:cb:28:e2:74:0c:78:91:1a:71:74:2e:79:20:5a:02"
/>
<p className="text-muted-foreground mt-1 text-xs">
- Server fingerprint for auto-acceptance. You can find this on
- your PBS dashboard by clicking the &quot;Show Fingerprint&quot;
- button.
+ Leave empty if PBS uses a trusted CA (e.g. Let&apos;s Encrypt).
+ For self-signed certificates, enter the server fingerprint from
+ the PBS dashboard (&quot;Show Fingerprint&quot;).
</p>
</div>

View File

@@ -438,6 +438,11 @@ export function ServerForm({
{errors.password && (
<p className="text-destructive mt-1 text-sm">{errors.password}</p>
)}
+ <p className="text-muted-foreground mt-1 text-xs">
+ SSH key is recommended when possible. Special characters (e.g.{" "}
+ <code className="rounded bg-muted px-0.5">{"{ } $ \" '"}</code>) are
+ supported.
+ </p>
</div>
)}
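This hint refers to the sshpass change from the same batch of fixes (#312): the password is written to a temp file and passed with sshpass -f, so it never appears on a shell command line. A rough sketch of that pattern under stated assumptions (file layout, naming, and cleanup details are mine, not the shipped code):

import { writeFile, unlink } from 'fs/promises';
import { tmpdir } from 'os';
import { join } from 'path';
import { randomBytes } from 'crypto';

// Write the password to a mode-0600 temp file and hand sshpass the path,
// so characters like { } $ " ' never go through shell quoting.
async function withPasswordFile<T>(password: string, fn: (path: string) => Promise<T>): Promise<T> {
  const path = join(tmpdir(), `sshpass-${randomBytes(8).toString('hex')}`);
  await writeFile(path, password, { mode: 0o600 });
  try {
    return await fn(path); // e.g. spawn('sshpass', ['-f', path, 'scp', ...])
  } finally {
    await unlink(path).catch(() => {}); // best-effort cleanup
  }
}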

View File

@@ -0,0 +1,96 @@
import type { NextRequest } from 'next/server';
import { NextResponse } from 'next/server';
import { getDatabase } from '../../../../../server/database-prisma';
import { getSSHExecutionService } from '../../../../../server/ssh-execution-service';
import type { Server } from '~/types/server';
const DISCOVER_TIMEOUT_MS = 10_000;
/** Match lines that look like SSH public keys (same as build.func) */
const SSH_PUBKEY_RE = /^(ssh-(rsa|ed25519)|ecdsa-sha2-nistp256|sk-(ssh-ed25519|ecdsa-sha2-nistp256))\s+/;
/**
* Run a command on the Proxmox host and return buffered stdout.
* Resolves when the process exits or rejects on timeout/spawn error.
*/
function runRemoteCommand(
server: Server,
command: string,
timeoutMs: number
): Promise<{ stdout: string; exitCode: number }> {
const ssh = getSSHExecutionService();
return new Promise((resolve, reject) => {
const chunks: string[] = [];
let settled = false;
const finish = (stdout: string, exitCode: number) => {
if (settled) return;
settled = true;
clearTimeout(timer);
resolve({ stdout, exitCode });
};
const timer = setTimeout(() => {
if (settled) return;
settled = true;
reject(new Error('SSH discover keys timeout'));
}, timeoutMs);
ssh
.executeCommand(
server,
command,
(data: string) => chunks.push(data),
() => {},
(code: number) => finish(chunks.join(''), code)
)
.catch((err) => {
if (!settled) {
settled = true;
clearTimeout(timer);
reject(err);
}
});
});
}
export async function GET(
_request: NextRequest,
{ params }: { params: Promise<{ id: string }> }
) {
try {
const { id: idParam } = await params;
const id = parseInt(idParam);
if (isNaN(id)) {
return NextResponse.json({ error: 'Invalid server ID' }, { status: 400 });
}
const db = getDatabase();
const server = await db.getServerById(id) as Server | null;
if (!server) {
return NextResponse.json({ error: 'Server not found' }, { status: 404 });
}
// Same paths as native build.func ssh_discover_default_files()
const remoteScript = `bash -c 'for f in /root/.ssh/authorized_keys /root/.ssh/authorized_keys2 /root/.ssh/*.pub /etc/ssh/authorized_keys /etc/ssh/authorized_keys.d/*; do [ -f "$f" ] && [ -r "$f" ] && grep -E "^(ssh-(rsa|ed25519)|ecdsa-sha2-nistp256|sk-)" "$f" 2>/dev/null; done | sort -u'`;
const { stdout } = await runRemoteCommand(server, remoteScript, DISCOVER_TIMEOUT_MS);
const keys = stdout
.split(/\r?\n/)
.map((line) => line.trim())
.filter((line) => line.length > 0 && SSH_PUBKEY_RE.test(line));
return NextResponse.json({ keys });
} catch (error) {
console.error('Error discovering SSH keys:', error);
return NextResponse.json(
{
success: false,
error: error instanceof Error ? error.message : String(error),
},
{ status: 500 }
);
}
}
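runRemoteCommand wraps the callback-style executeCommand into a promise with a hard timeout; the settled flag guards against the timer and the exit callback racing to resolve or reject twice. Usage then reads as a one-liner (the command string here is just an example):

// Example: read the Proxmox node's hostname over the same SSH service.
const { stdout, exitCode } = await runRemoteCommand(server, 'hostname', 5_000);
if (exitCode === 0) {
  console.log('node name:', stdout.trim());
}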

View File

@@ -23,8 +23,11 @@ export const env = createEnv({
ALLOWED_SCRIPT_PATHS: z.string().default("scripts/"),
// WebSocket Configuration
WEBSOCKET_PORT: z.string().default("3001"),
- // GitHub Configuration
+ // Git provider tokens (optional, for private repos)
GITHUB_TOKEN: z.string().optional(),
+ GITLAB_TOKEN: z.string().optional(),
+ BITBUCKET_APP_PASSWORD: z.string().optional(),
+ BITBUCKET_TOKEN: z.string().optional(),
// Authentication Configuration
AUTH_USERNAME: z.string().optional(),
AUTH_PASSWORD_HASH: z.string().optional(),
@@ -62,8 +65,10 @@ export const env = createEnv({
ALLOWED_SCRIPT_PATHS: process.env.ALLOWED_SCRIPT_PATHS,
// WebSocket Configuration
WEBSOCKET_PORT: process.env.WEBSOCKET_PORT,
- // GitHub Configuration
GITHUB_TOKEN: process.env.GITHUB_TOKEN,
+ GITLAB_TOKEN: process.env.GITLAB_TOKEN,
+ BITBUCKET_APP_PASSWORD: process.env.BITBUCKET_APP_PASSWORD,
+ BITBUCKET_TOKEN: process.env.BITBUCKET_TOKEN,
// Authentication Configuration
AUTH_USERNAME: process.env.AUTH_USERNAME,
AUTH_PASSWORD_HASH: process.env.AUTH_PASSWORD_HASH,

View File

@@ -418,44 +418,46 @@ async function isVM(scriptId: number, containerId: string, serverId: number | nu
return false; // Default to LXC if SSH fails
}
- // Check both config file paths
- const vmConfigPath = `/etc/pve/qemu-server/${containerId}.conf`;
- const lxcConfigPath = `/etc/pve/lxc/${containerId}.conf`;
- // Check VM config file
- let vmConfigExists = false;
- await new Promise<void>((resolve) => {
- void sshExecutionService.executeCommand(
- server as Server,
- `test -f "${vmConfigPath}" && echo "exists" || echo "not_exists"`,
- (data: string) => {
- if (data.includes('exists')) {
- vmConfigExists = true;
- }
- },
- () => resolve(),
- () => resolve()
- );
- });
- if (vmConfigExists) {
- return true; // VM config file exists
- }
- // Check LXC config file (not needed for return value, but check for completeness)
- await new Promise<void>((resolve) => {
- void sshExecutionService.executeCommand(
- server as Server,
- `test -f "${lxcConfigPath}" && echo "exists" || echo "not_exists"`,
- (_data: string) => {
- // Data handler not needed - just checking if file exists
- },
- () => resolve(),
- () => resolve()
- );
- });
- return false; // Always LXC since VM config doesn't exist
+ // Node-specific paths (multi-node Proxmox: /etc/pve/nodes/NODENAME/...)
+ const nodeName = (server as Server).name;
+ const vmConfigPathNode = `/etc/pve/nodes/${nodeName}/qemu-server/${containerId}.conf`;
+ const lxcConfigPathNode = `/etc/pve/nodes/${nodeName}/lxc/${containerId}.conf`;
+ // Fallback for single-node or when server.name is not the Proxmox node name
+ const vmConfigPathFallback = `/etc/pve/qemu-server/${containerId}.conf`;
+ const lxcConfigPathFallback = `/etc/pve/lxc/${containerId}.conf`;
+ const checkPathExists = (path: string): Promise<boolean> =>
+ new Promise<boolean>((resolve) => {
+ let exists = false;
+ void sshExecutionService.executeCommand(
+ server as Server,
+ `test -f "${path}" && echo "exists" || echo "not_exists"`,
+ (data: string) => {
+ if (data.includes('exists')) exists = true;
+ },
+ () => resolve(exists),
+ () => resolve(exists)
+ );
+ });
+ // Prefer node-specific paths first
+ const vmConfigExistsNode = await checkPathExists(vmConfigPathNode);
+ if (vmConfigExistsNode) {
+ return true; // VM config file exists on node
+ }
+ const lxcConfigExistsNode = await checkPathExists(lxcConfigPathNode);
+ if (lxcConfigExistsNode) {
+ return false; // LXC config file exists on node
+ }
+ // Fallback: single-node or server.name not matching Proxmox node name
+ const vmConfigExistsFallback = await checkPathExists(vmConfigPathFallback);
+ if (vmConfigExistsFallback) {
+ return true;
+ }
+ return false; // LXC (or neither path exists)
} catch (error) {
console.error('Error determining container type:', error);
return false; // Default to LXC on error
@@ -971,10 +973,11 @@ export const installedScriptsRouter = createTRPCRouter({
};
// Helper function to check config file for community-script tag and extract hostname/name
+ const nodeName = (server as Server).name;
const checkConfigAndExtractInfo = async (id: string, isVM: boolean): Promise<any> => {
const configPath = isVM
- ? `/etc/pve/qemu-server/${id}.conf`
- : `/etc/pve/lxc/${id}.conf`;
+ ? `/etc/pve/nodes/${nodeName}/qemu-server/${id}.conf`
+ : `/etc/pve/nodes/${nodeName}/lxc/${id}.conf`;
const readCommand = `cat "${configPath}" 2>/dev/null`;
@@ -1060,7 +1063,7 @@ export const installedScriptsRouter = createTRPCRouter({
reject(new Error(`pct list failed: ${error}`));
},
(_exitCode: number) => {
- resolve();
+ setImmediate(() => resolve());
}
);
});
@@ -1079,7 +1082,7 @@ export const installedScriptsRouter = createTRPCRouter({
reject(new Error(`qm list failed: ${error}`));
},
(_exitCode: number) => {
- resolve();
+ setImmediate(() => resolve());
}
);
});
@@ -1318,10 +1321,10 @@ export const installedScriptsRouter = createTRPCRouter({
// Check if ID exists in either pct list (containers) or qm list (VMs)
if (!existingIds.has(containerId)) {
- // Also verify config file doesn't exist as a double-check
- // Check both container and VM config paths
- const checkContainerCommand = `test -f "/etc/pve/lxc/${containerId}.conf" && echo "exists" || echo "not_found"`;
- const checkVMCommand = `test -f "/etc/pve/qemu-server/${containerId}.conf" && echo "exists" || echo "not_found"`;
+ // Also verify config file doesn't exist as a double-check (node-specific paths)
+ const nodeName = (server as Server).name;
+ const checkContainerCommand = `test -f "/etc/pve/nodes/${nodeName}/lxc/${containerId}.conf" && echo "exists" || echo "not_found"`;
+ const checkVMCommand = `test -f "/etc/pve/nodes/${nodeName}/qemu-server/${containerId}.conf" && echo "exists" || echo "not_found"`;
const configExists = await new Promise<boolean>((resolve) => {
let combinedOutput = '';
@@ -2068,32 +2071,72 @@ export const installedScriptsRouter = createTRPCRouter({
};
}
- // Get the script's interface_port from metadata (prioritize metadata over existing database values)
- let detectedPort = 80; // Default fallback
+ // Resolve app slug from /usr/bin/update (community-scripts) when available; else from hostname/suffix.
+ let slugFromUpdate: string | null = null;
+ try {
+ const updateCommand = `pct exec ${scriptData.container_id} -- cat /usr/bin/update 2>/dev/null`;
+ let updateOutput = '';
+ await new Promise<void>((resolve) => {
+ void sshExecutionService.executeCommand(
+ server as Server,
+ updateCommand,
+ (data: string) => { updateOutput += data; },
+ () => {},
+ () => resolve()
+ );
+ });
+ const ctSlugMatch = /ct\/([a-zA-Z0-9_.-]+)\.sh/.exec(updateOutput);
+ if (ctSlugMatch?.[1]) {
+ slugFromUpdate = ctSlugMatch[1].trim().toLowerCase();
+ console.log('🔍 Slug from /usr/bin/update:', slugFromUpdate);
+ }
+ } catch {
+ // Container may not be from community-scripts; use hostname fallback
+ }
+ // Get the script's interface_port from metadata. Primary: slug from /usr/bin/update; fallback: hostname/suffix.
+ let detectedPort = 80; // Default fallback
try {
- // Import localScriptsService to get script metadata
const { localScriptsService } = await import('~/server/services/localScripts');
- // Get all scripts and find the one matching our script name
const allScripts = await localScriptsService.getAllScripts();
- // Extract script slug from script_name (remove .sh extension)
- const scriptSlug = scriptData.script_name.replace(/\.sh$/, '');
- console.log('🔍 Looking for script with slug:', scriptSlug);
- const scriptMetadata = allScripts.find(script => script.slug === scriptSlug);
+ const nameFromHostname = scriptData.script_name.replace(/\.sh$/, '').toLowerCase();
+ // Primary: slug from /usr/bin/update (community-scripts)
+ let scriptMetadata =
+ slugFromUpdate != null
+ ? allScripts.find((s) => s.slug === slugFromUpdate)
+ : undefined;
+ if (scriptMetadata) {
+ console.log('🔍 Using slug from /usr/bin/update for metadata:', scriptMetadata.slug);
+ }
+ // Fallback: exact hostname then hostname ends with slug (longest wins)
+ if (!scriptMetadata) {
+ scriptMetadata = allScripts.find((script) => script.slug === nameFromHostname);
+ if (!scriptMetadata) {
+ const suffixMatches = allScripts.filter((script) => nameFromHostname.endsWith(script.slug));
+ scriptMetadata =
+ suffixMatches.length > 0
+ ? suffixMatches.reduce((a, b) => (a.slug.length >= b.slug.length ? a : b))
+ : undefined;
+ if (scriptMetadata) {
+ console.log('🔍 Matched metadata by slug suffix in hostname:', scriptMetadata.slug);
+ }
+ }
+ }
if (scriptMetadata?.interface_port) {
detectedPort = scriptMetadata.interface_port;
console.log('📋 Found interface_port in metadata:', detectedPort);
} else {
console.log('📋 No interface_port found in metadata, using default port 80');
- detectedPort = 80; // Default to port 80 if no metadata port found
+ detectedPort = 80;
}
} catch (error) {
console.log('⚠️ Error getting script metadata, using default port 80:', error);
- detectedPort = 80; // Default to port 80 if metadata lookup fails
+ detectedPort = 80;
}
console.log('🎯 Final detected port:', detectedPort);
@@ -2197,8 +2240,9 @@ export const installedScriptsRouter = createTRPCRouter({
};
}
- // Read config file
- const configPath = `/etc/pve/lxc/${script.container_id}.conf`;
+ // Read config file (node-specific path)
+ const nodeName = (server as Server).name;
+ const configPath = `/etc/pve/nodes/${nodeName}/lxc/${script.container_id}.conf`;
const readCommand = `cat "${configPath}" 2>/dev/null`;
let rawConfig = '';
@@ -2328,8 +2372,9 @@ export const installedScriptsRouter = createTRPCRouter({
};
}
- // Write config file using heredoc for safe escaping
- const configPath = `/etc/pve/lxc/${script.container_id}.conf`;
+ // Write config file using heredoc for safe escaping (node-specific path)
+ const nodeName = (server as Server).name;
+ const configPath = `/etc/pve/nodes/${nodeName}/lxc/${script.container_id}.conf`;
const writeCommand = `cat > "${configPath}" << 'EOFCONFIG'
${rawConfig}
EOFCONFIG`;
@@ -2737,9 +2782,10 @@ EOFCONFIG`;
const { getSSHExecutionService } = await import('~/server/ssh-execution-service');
const sshExecutionService = getSSHExecutionService();
+ const nodeName = (server as Server).name;
const configPath = input.containerType === 'lxc'
- ? `/etc/pve/lxc/${input.containerId}.conf`
- : `/etc/pve/qemu-server/${input.containerId}.conf`;
+ ? `/etc/pve/nodes/${nodeName}/lxc/${input.containerId}.conf`
+ : `/etc/pve/nodes/${nodeName}/qemu-server/${input.containerId}.conf`;
let configContent = '';
await new Promise<void>((resolve) => {
@@ -3131,10 +3177,11 @@ EOFCONFIG`;
const { getSSHExecutionService } = await import('~/server/ssh-execution-service');
const sshExecutionService = getSSHExecutionService();
- // Read config file to get hostname/name
+ // Read config file to get hostname/name (node-specific path)
+ const nodeName = (server as Server).name;
const configPath = input.containerType === 'lxc'
- ? `/etc/pve/lxc/${input.containerId}.conf`
- : `/etc/pve/qemu-server/${input.containerId}.conf`;
+ ? `/etc/pve/nodes/${nodeName}/lxc/${input.containerId}.conf`
+ : `/etc/pve/nodes/${nodeName}/qemu-server/${input.containerId}.conf`;
let configContent = '';
await new Promise<void>((resolve) => {
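The hostname fallback in the port-detection hunk above picks, among all scripts whose slug is a suffix of the container hostname, the longest slug. Extracted as a standalone helper (names are mine) to make the "longest wins" rule concrete:

type ScriptMeta = { slug: string; interface_port?: number };

// Among scripts whose slug the hostname ends with, prefer the longest slug,
// so "immich-public-proxy" beats "proxy" for hostname "immich-public-proxy".
function matchBySlugSuffix(scripts: ScriptMeta[], hostname: string): ScriptMeta | undefined {
  const matches = scripts.filter((s) => hostname.endsWith(s.slug));
  return matches.length > 0
    ? matches.reduce((a, b) => (a.slug.length >= b.slug.length ? a : b))
    : undefined;
}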

View File

@@ -238,6 +238,27 @@ export const versionRouter = createTRPCRouter({
// Clear/create the log file
await writeFile(logPath, '', 'utf-8');
+ // Always fetch the latest update.sh from GitHub before running
+ // This ensures we always use the newest update script, avoiding
+ // the "chicken-and-egg" problem where old scripts can't update properly
+ const updateScriptUrl = 'https://raw.githubusercontent.com/community-scripts/ProxmoxVE-Local/main/update.sh';
+ try {
+ const response = await fetch(updateScriptUrl);
+ if (response.ok) {
+ const latestScript = await response.text();
+ await writeFile(updateScriptPath, latestScript, { mode: 0o755 });
+ // Log that we fetched the latest script
+ await writeFile(logPath, '[INFO] Fetched latest update.sh from GitHub\n', { flag: 'a' });
+ } else {
+ // If fetch fails, log warning but continue with local script
+ await writeFile(logPath, `[WARNING] Could not fetch latest update.sh (HTTP ${response.status}), using local version\n`, { flag: 'a' });
+ }
+ } catch (fetchError) {
+ // If fetch fails, log warning but continue with local script
+ const errorMsg = fetchError instanceof Error ? fetchError.message : 'Unknown error';
+ await writeFile(logPath, `[WARNING] Could not fetch latest update.sh: ${errorMsg}, using local version\n`, { flag: 'a' });
+ }
// Spawn the update script as a detached process using nohup
// This allows it to run independently and kill the parent Node.js process
// Redirect output to log file

View File

@@ -0,0 +1,55 @@
import type { DirEntry, GitProvider } from './types';
import { parseRepoUrl } from '../repositoryUrlValidation';
export class BitbucketProvider implements GitProvider {
async listDirectory(repoUrl: string, path: string, branch: string): Promise<DirEntry[]> {
const { owner, repo } = parseRepoUrl(repoUrl);
const listUrl = `https://api.bitbucket.org/2.0/repositories/${owner}/${repo}/src/${encodeURIComponent(branch)}/${path}`;
const headers: Record<string, string> = {
'User-Agent': 'PVEScripts-Local/1.0',
};
const token = process.env.BITBUCKET_APP_PASSWORD ?? process.env.BITBUCKET_TOKEN;
if (token) {
const auth = Buffer.from(`:${token}`).toString('base64');
headers.Authorization = `Basic ${auth}`;
}
const response = await fetch(listUrl, { headers });
if (!response.ok) {
throw new Error(`Bitbucket API error: ${response.status} ${response.statusText}`);
}
const body = (await response.json()) as { values?: { path: string; type: string }[] };
const data = body.values ?? (Array.isArray(body) ? body : []);
if (!Array.isArray(data)) {
throw new Error('Bitbucket API returned unexpected response');
}
return data.map((item: { path: string; type: string }) => {
const name = item.path.split('/').pop() ?? item.path;
return {
name,
path: item.path,
type: item.type === 'commit_directory' ? ('dir' as const) : ('file' as const),
};
});
}
async downloadRawFile(repoUrl: string, filePath: string, branch: string): Promise<string> {
const { owner, repo } = parseRepoUrl(repoUrl);
const rawUrl = `https://api.bitbucket.org/2.0/repositories/${owner}/${repo}/src/${encodeURIComponent(branch)}/${filePath}`;
const headers: Record<string, string> = {
'User-Agent': 'PVEScripts-Local/1.0',
};
const token = process.env.BITBUCKET_APP_PASSWORD ?? process.env.BITBUCKET_TOKEN;
if (token) {
const auth = Buffer.from(`:${token}`).toString('base64');
headers.Authorization = `Basic ${auth}`;
}
const response = await fetch(rawUrl, { headers });
if (!response.ok) {
throw new Error(`Failed to download ${filePath}: ${response.status} ${response.statusText}`);
}
return response.text();
}
}

View File

@@ -0,0 +1,44 @@
import type { DirEntry, GitProvider } from "./types";
import { parseRepoUrl } from "../repositoryUrlValidation";
export class CustomProvider implements GitProvider {
async listDirectory(repoUrl: string, path: string, branch: string): Promise<DirEntry[]> {
const { origin, owner, repo } = parseRepoUrl(repoUrl);
const apiUrl = `${origin}/api/v1/repos/${owner}/${repo}/contents/${path}?ref=${encodeURIComponent(branch)}`;
const headers: Record<string, string> = { "User-Agent": "PVEScripts-Local/1.0" };
const token = process.env.GITEA_TOKEN ?? process.env.GIT_TOKEN;
if (token) headers.Authorization = `token ${token}`;
const response = await fetch(apiUrl, { headers });
if (!response.ok) {
throw new Error(`Custom Git server: list directory failed (${response.status}).`);
}
const data = (await response.json()) as { type: string; name: string; path: string }[];
if (!Array.isArray(data)) {
const single = data as unknown as { type?: string; name?: string; path?: string };
if (single?.name) {
return [{ name: single.name, path: single.path ?? path, type: single.type === "dir" ? "dir" : "file" }];
}
throw new Error("Custom Git server returned unexpected response");
}
return data.map((item) => ({
name: item.name,
path: item.path,
type: item.type === "dir" ? ("dir" as const) : ("file" as const),
}));
}
async downloadRawFile(repoUrl: string, filePath: string, branch: string): Promise<string> {
const { origin, owner, repo } = parseRepoUrl(repoUrl);
const rawUrl = `${origin}/${owner}/${repo}/raw/${encodeURIComponent(branch)}/${filePath}`;
const headers: Record<string, string> = { "User-Agent": "PVEScripts-Local/1.0" };
const token = process.env.GITEA_TOKEN ?? process.env.GIT_TOKEN;
if (token) headers.Authorization = `token ${token}`;
const response = await fetch(rawUrl, { headers });
if (!response.ok) {
throw new Error(`Failed to download ${filePath} from custom Git server (${response.status}).`);
}
return response.text();
}
}

View File

@@ -0,0 +1,60 @@
import type { DirEntry, GitProvider } from './types';
import { parseRepoUrl } from '../repositoryUrlValidation';
export class GitHubProvider implements GitProvider {
async listDirectory(repoUrl: string, path: string, branch: string): Promise<DirEntry[]> {
const { owner, repo } = parseRepoUrl(repoUrl);
const apiUrl = `https://api.github.com/repos/${owner}/${repo}/contents/${path}?ref=${encodeURIComponent(branch)}`;
const headers: Record<string, string> = {
Accept: 'application/vnd.github.v3+json',
'User-Agent': 'PVEScripts-Local/1.0',
};
const token = process.env.GITHUB_TOKEN;
if (token) headers.Authorization = `token ${token}`;
const response = await fetch(apiUrl, { headers });
if (!response.ok) {
if (response.status === 403) {
const err = new Error(
`GitHub API rate limit exceeded. Consider setting GITHUB_TOKEN. Status: ${response.status} ${response.statusText}`
);
(err as Error & { name: string }).name = 'RateLimitError';
throw err;
}
throw new Error(`GitHub API error: ${response.status} ${response.statusText}`);
}
const data = (await response.json()) as { type: string; name: string; path: string }[];
if (!Array.isArray(data)) {
throw new Error('GitHub API returned unexpected response');
}
return data.map((item) => ({
name: item.name,
path: item.path,
type: item.type === 'dir' ? ('dir' as const) : ('file' as const),
}));
}
async downloadRawFile(repoUrl: string, filePath: string, branch: string): Promise<string> {
const { owner, repo } = parseRepoUrl(repoUrl);
const rawUrl = `https://raw.githubusercontent.com/${owner}/${repo}/${encodeURIComponent(branch)}/${filePath}`;
const headers: Record<string, string> = {
'User-Agent': 'PVEScripts-Local/1.0',
};
const token = process.env.GITHUB_TOKEN;
if (token) headers.Authorization = `token ${token}`;
const response = await fetch(rawUrl, { headers });
if (!response.ok) {
if (response.status === 403) {
const err = new Error(
`GitHub rate limit exceeded while downloading ${filePath}. Consider setting GITHUB_TOKEN.`
);
(err as Error & { name: string }).name = 'RateLimitError';
throw err;
}
throw new Error(`Failed to download ${filePath}: ${response.status} ${response.statusText}`);
}
return response.text();
}
}

View File

@@ -0,0 +1,58 @@
import type { DirEntry, GitProvider } from './types';
import { parseRepoUrl } from '../repositoryUrlValidation';
export class GitLabProvider implements GitProvider {
private getBaseUrl(repoUrl: string): string {
const { origin } = parseRepoUrl(repoUrl);
return origin;
}
private getProjectId(repoUrl: string): string {
const { owner, repo } = parseRepoUrl(repoUrl);
return encodeURIComponent(`${owner}/${repo}`);
}
async listDirectory(repoUrl: string, path: string, branch: string): Promise<DirEntry[]> {
const baseUrl = this.getBaseUrl(repoUrl);
const projectId = this.getProjectId(repoUrl);
const apiUrl = `${baseUrl}/api/v4/projects/${projectId}/repository/tree?path=${encodeURIComponent(path)}&ref=${encodeURIComponent(branch)}&per_page=100`;
const headers: Record<string, string> = {
'User-Agent': 'PVEScripts-Local/1.0',
};
const token = process.env.GITLAB_TOKEN;
if (token) headers['PRIVATE-TOKEN'] = token;
const response = await fetch(apiUrl, { headers });
if (!response.ok) {
throw new Error(`GitLab API error: ${response.status} ${response.statusText}`);
}
const data = (await response.json()) as { type: string; name: string; path: string }[];
if (!Array.isArray(data)) {
throw new Error('GitLab API returned unexpected response');
}
return data.map((item) => ({
name: item.name,
path: item.path,
type: item.type === 'tree' ? ('dir' as const) : ('file' as const),
}));
}
async downloadRawFile(repoUrl: string, filePath: string, branch: string): Promise<string> {
const baseUrl = this.getBaseUrl(repoUrl);
const projectId = this.getProjectId(repoUrl);
const encodedPath = encodeURIComponent(filePath);
const rawUrl = `${baseUrl}/api/v4/projects/${projectId}/repository/files/${encodedPath}/raw?ref=${encodeURIComponent(branch)}`;
const headers: Record<string, string> = {
'User-Agent': 'PVEScripts-Local/1.0',
};
const token = process.env.GITLAB_TOKEN;
if (token) headers['PRIVATE-TOKEN'] = token;
const response = await fetch(rawUrl, { headers });
if (!response.ok) {
throw new Error(`Failed to download ${filePath}: ${response.status} ${response.statusText}`);
}
return response.text();
}
}

View File

@@ -0,0 +1 @@
export { listDirectory, downloadRawFile, getRepoProvider } from "./index.ts";

View File

@@ -0,0 +1,28 @@
import type { DirEntry, GitProvider } from "./types";
import { getRepoProvider } from "../repositoryUrlValidation";
import { GitHubProvider } from "./github";
import { GitLabProvider } from "./gitlab";
import { BitbucketProvider } from "./bitbucket";
import { CustomProvider } from "./custom";
const providers: Record<string, GitProvider> = {
github: new GitHubProvider(),
gitlab: new GitLabProvider(),
bitbucket: new BitbucketProvider(),
custom: new CustomProvider(),
};
export type { DirEntry, GitProvider };
export { getRepoProvider };
export function getGitProvider(repoUrl: string): GitProvider {
return providers[getRepoProvider(repoUrl)]!;
}
export async function listDirectory(repoUrl: string, path: string, branch: string): Promise<DirEntry[]> {
return getGitProvider(repoUrl).listDirectory(repoUrl, path, branch);
}
export async function downloadRawFile(repoUrl: string, filePath: string, branch: string): Promise<string> {
return getGitProvider(repoUrl).downloadRawFile(repoUrl, filePath, branch);
}
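Callers never touch the provider classes directly; they go through these two functions and let getRepoProvider pick the implementation from the URL. For example (repo URL and paths are illustrative):

// Lists the ct/ directory of a GitHub repo, then downloads the first script found.
const entries = await listDirectory('https://github.com/community-scripts/ProxmoxVE', 'ct', 'main');
const scripts = entries.filter((e) => e.type === 'file' && e.name.endsWith('.sh'));
if (scripts[0]) {
  const content = await downloadRawFile(
    'https://github.com/community-scripts/ProxmoxVE',
    scripts[0].path,
    'main',
  );
  console.log(content.slice(0, 80));
}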

View File

@@ -0,0 +1,14 @@
/**
* Git provider interface for listing and downloading repository files.
*/
export type DirEntry = {
name: string;
path: string;
type: 'file' | 'dir';
};
export interface GitProvider {
listDirectory(repoUrl: string, path: string, branch: string): Promise<DirEntry[]>;
downloadRawFile(repoUrl: string, filePath: string, branch: string): Promise<string>;
}

View File

@@ -0,0 +1,37 @@
/**
* Repository URL validation (JS mirror for server.js).
*/
const VALID_REPO_URL =
/^(https?:\/\/)(github\.com|gitlab\.com|bitbucket\.org|[^/]+)\/[^/]+\/[^/]+$/;
export const REPO_URL_ERROR_MESSAGE =
'Invalid repository URL. Supported: GitHub, GitLab, Bitbucket, and custom Git servers (e.g. https://host/owner/repo).';
export function isValidRepositoryUrl(url) {
if (typeof url !== 'string' || !url.trim()) return false;
return VALID_REPO_URL.test(url.trim());
}
export function getRepoProvider(url) {
if (!isValidRepositoryUrl(url)) throw new Error(REPO_URL_ERROR_MESSAGE);
const normalized = url.trim().toLowerCase();
if (normalized.includes('github.com')) return 'github';
if (normalized.includes('gitlab.com')) return 'gitlab';
if (normalized.includes('bitbucket.org')) return 'bitbucket';
return 'custom';
}
export function parseRepoUrl(url) {
if (!isValidRepositoryUrl(url)) throw new Error(REPO_URL_ERROR_MESSAGE);
try {
const u = new URL(url.trim());
const pathParts = u.pathname.replace(/^\/+/, '').replace(/\.git\/?$/, '').split('/');
return {
origin: u.origin,
owner: pathParts[0] ?? '',
repo: pathParts[1] ?? '',
};
} catch {
throw new Error(REPO_URL_ERROR_MESSAGE);
}
}

View File

@@ -0,0 +1,57 @@
/**
* Repository URL validation and provider detection.
* Supports GitHub, GitLab, Bitbucket, and custom Git servers.
*/
const VALID_REPO_URL =
/^(https?:\/\/)(github\.com|gitlab\.com|bitbucket\.org|[^/]+)\/[^/]+\/[^/]+$/;
export const REPO_URL_ERROR_MESSAGE =
'Invalid repository URL. Supported: GitHub, GitLab, Bitbucket, and custom Git servers (e.g. https://host/owner/repo).';
export type RepoProvider = 'github' | 'gitlab' | 'bitbucket' | 'custom';
/**
* Check if a string is a valid repository URL (format only).
*/
export function isValidRepositoryUrl(url: string): boolean {
if (typeof url !== 'string' || !url.trim()) return false;
return VALID_REPO_URL.test(url.trim());
}
/**
* Detect the Git provider from a repository URL.
*/
export function getRepoProvider(url: string): RepoProvider {
if (!isValidRepositoryUrl(url)) {
throw new Error(REPO_URL_ERROR_MESSAGE);
}
const normalized = url.trim().toLowerCase();
if (normalized.includes('github.com')) return 'github';
if (normalized.includes('gitlab.com')) return 'gitlab';
if (normalized.includes('bitbucket.org')) return 'bitbucket';
return 'custom';
}
/**
* Parse owner and repo from a repository URL (path segments).
* Works for GitHub, GitLab, Bitbucket, and custom (host/owner/repo).
*/
export function parseRepoUrl(url: string): { origin: string; owner: string; repo: string } {
if (!isValidRepositoryUrl(url)) {
throw new Error(REPO_URL_ERROR_MESSAGE);
}
try {
const u = new URL(url.trim());
const pathParts = u.pathname.replace(/^\/+/, '').replace(/\.git\/?$/, '').split('/');
const owner = pathParts[0] ?? '';
const repo = pathParts[1] ?? '';
return {
origin: u.origin,
owner,
repo,
};
} catch {
throw new Error(REPO_URL_ERROR_MESSAGE);
}
}
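A few concrete inputs make the regex and the .git stripping easy to verify:

getRepoProvider('https://github.com/owner/repo');      // => 'github'
getRepoProvider('https://git.example.com/owner/repo'); // => 'custom'
parseRepoUrl('https://gitlab.com/owner/repo.git');
// => { origin: 'https://gitlab.com', owner: 'owner', repo: 'repo' }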

View File

@@ -327,13 +327,16 @@ class BackupService {
// PBS supports PBS_PASSWORD and PBS_REPOSITORY environment variables for non-interactive login
const repository = `root@pam@${pbsIp}:${pbsDatastore}`;
- // Escape password for shell safety (single quotes)
+ // Escape password and fingerprint for shell safety (single quotes)
const escapedPassword = credential.pbs_password.replace(/'/g, "'\\''");
+ const fingerprint = credential.pbs_fingerprint?.trim() ?? '';
+ const escapedFingerprint = fingerprint ? fingerprint.replace(/'/g, "'\\''") : '';
- // Use PBS_PASSWORD environment variable for non-interactive authentication
- // Auto-accept fingerprint by piping "y" to stdin
- // PBS will use PBS_PASSWORD env var if available, avoiding interactive prompt
- const fullCommand = `echo "y" | PBS_PASSWORD='${escapedPassword}' PBS_REPOSITORY='${repository}' timeout 10 proxmox-backup-client login --repository ${repository} 2>&1`;
+ const envParts = [`PBS_PASSWORD='${escapedPassword}'`, `PBS_REPOSITORY='${repository}'`];
+ if (escapedFingerprint) {
+ envParts.push(`PBS_FINGERPRINT='${escapedFingerprint}'`);
+ }
+ const envStr = envParts.join(' ');
+ const fullCommand = `${envStr} timeout 10 proxmox-backup-client login --repository ${repository} 2>&1`;
console.log(`[BackupService] Logging into PBS: ${repository}`);
@@ -419,9 +422,12 @@ class BackupService {
// Build full repository string: root@pam@<IP>:<DATASTORE> // Build full repository string: root@pam@<IP>:<DATASTORE>
const repository = `root@pam@${pbsIp}:${pbsDatastore}`; const repository = `root@pam@${pbsIp}:${pbsDatastore}`;
const fingerprint = credential.pbs_fingerprint?.trim() ?? '';
const escapedFingerprint = fingerprint ? fingerprint.replace(/'/g, "'\\''") : '';
const snapshotEnvParts = escapedFingerprint ? [`PBS_FINGERPRINT='${escapedFingerprint}'`] : [];
const snapshotEnvStr = snapshotEnvParts.length ? snapshotEnvParts.join(' ') + ' ' : '';
// Use correct command: snapshot list ct/<CT_ID> --repository <full_repo_string> // Use correct command: snapshot list ct/<CT_ID> --repository <full_repo_string>
const command = `timeout 30 proxmox-backup-client snapshot list ct/${ctId} --repository ${repository} 2>&1 || echo "PBS_ERROR"`; const command = `${snapshotEnvStr}timeout 30 proxmox-backup-client snapshot list ct/${ctId} --repository ${repository} 2>&1 || echo "PBS_ERROR"`;
let output = ''; let output = '';
console.log(`[BackupService] Discovering PBS backups for CT ${ctId} on repository ${repository}`); console.log(`[BackupService] Discovering PBS backups for CT ${ctId} on repository ${repository}`);
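The escaping used above is the standard POSIX trick of closing the single-quoted string, emitting an escaped quote, and reopening it. A minimal sketch of the pattern in isolation (the helper name is hypothetical, not part of the service):

// A literal ' cannot appear inside a single-quoted shell string, so each one becomes '\''.
const escapeForSingleQuotes = (s: string) => s.replace(/'/g, "'\\''");
console.log(`PBS_PASSWORD='${escapeForSingleQuotes("pa'ss")}'`); // PBS_PASSWORD='pa'\''ss'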

View File

@@ -1,7 +1,8 @@
 // JavaScript wrapper for githubJsonService (for use with node server.js)
-import { writeFile, mkdir, readdir, readFile } from 'fs/promises';
+import { writeFile, mkdir, readdir, readFile, unlink } from 'fs/promises';
 import { join } from 'path';
 import { repositoryService } from './repositoryService.js';
+import { listDirectory, downloadRawFile } from '../lib/gitProvider/index.js';
 // Get environment variables
 const getEnv = () => ({
@@ -28,76 +29,9 @@ class GitHubJsonService {
     }
   }
-  getBaseUrl(repoUrl) {
-    const urlMatch = /github\.com\/([^\/]+)\/([^\/]+)/.exec(repoUrl);
-    if (!urlMatch) {
-      throw new Error(`Invalid GitHub repository URL: ${repoUrl}`);
-    }
-    const [, owner, repo] = urlMatch;
-    return `https://api.github.com/repos/${owner}/${repo}`;
-  }
-  extractRepoPath(repoUrl) {
-    const match = /github\.com\/([^\/]+)\/([^\/]+)/.exec(repoUrl);
-    if (!match) {
-      throw new Error('Invalid GitHub repository URL');
-    }
-    return `${match[1]}/${match[2]}`;
-  }
-  async fetchFromGitHub(repoUrl, endpoint) {
-    const baseUrl = this.getBaseUrl(repoUrl);
-    const env = getEnv();
-    const headers = {
-      'Accept': 'application/vnd.github.v3+json',
-      'User-Agent': 'PVEScripts-Local/1.0',
-    };
-    if (env.GITHUB_TOKEN) {
-      headers.Authorization = `token ${env.GITHUB_TOKEN}`;
-    }
-    const response = await fetch(`${baseUrl}${endpoint}`, { headers });
-    if (!response.ok) {
-      if (response.status === 403) {
-        const error = new Error(`GitHub API rate limit exceeded. Consider setting GITHUB_TOKEN for higher limits. Status: ${response.status} ${response.statusText}`);
-        error.name = 'RateLimitError';
-        throw error;
-      }
-      throw new Error(`GitHub API error: ${response.status} ${response.statusText}`);
-    }
-    return response.json();
-  }
   async downloadJsonFile(repoUrl, filePath) {
     this.initializeConfig();
-    const repoPath = this.extractRepoPath(repoUrl);
-    const rawUrl = `https://raw.githubusercontent.com/${repoPath}/${this.branch}/${filePath}`;
-    const env = getEnv();
-    const headers = {
-      'User-Agent': 'PVEScripts-Local/1.0',
-    };
-    if (env.GITHUB_TOKEN) {
-      headers.Authorization = `token ${env.GITHUB_TOKEN}`;
-    }
-    const response = await fetch(rawUrl, { headers });
-    if (!response.ok) {
-      if (response.status === 403) {
-        const error = new Error(`GitHub rate limit exceeded while downloading ${filePath}. Consider setting GITHUB_TOKEN for higher limits.`);
-        error.name = 'RateLimitError';
-        throw error;
-      }
-      throw new Error(`Failed to download ${filePath}: ${response.status} ${response.statusText}`);
-    }
-    const content = await response.text();
+    const content = await downloadRawFile(repoUrl, filePath, this.branch);
     const script = JSON.parse(content);
     script.repository_url = repoUrl;
     return script;
@@ -105,16 +39,13 @@ class GitHubJsonService {
   async getJsonFiles(repoUrl) {
     this.initializeConfig();
     try {
-      const files = await this.fetchFromGitHub(
-        repoUrl,
-        `/contents/${this.jsonFolder}?ref=${this.branch}`
-      );
-      return files.filter(file => file.name.endsWith('.json'));
+      const entries = await listDirectory(repoUrl, this.jsonFolder, this.branch);
+      return entries
+        .filter((e) => e.type === 'file' && e.name.endsWith('.json'))
+        .map((e) => ({ name: e.name, path: e.path }));
     } catch (error) {
-      console.error(`Error fetching JSON files from GitHub (${repoUrl}):`, error);
+      console.error(`Error fetching JSON files from repository (${repoUrl}):`, error);
       throw new Error(`Failed to fetch script files from repository: ${repoUrl}`);
     }
   }
@@ -232,25 +163,42 @@ class GitHubJsonService {
       const localFiles = await this.getLocalJsonFiles();
       console.log(`Found ${localFiles.length} local JSON files`);
+      // Delete local JSON files that belong to this repo but are no longer in the remote
+      const remoteFilenames = new Set(githubFiles.map((f) => f.name));
+      const deletedFiles = await this.deleteLocalFilesRemovedFromRepo(repoUrl, remoteFilenames);
+      if (deletedFiles.length > 0) {
+        console.log(`Removed ${deletedFiles.length} obsolete JSON file(s) no longer in ${repoUrl}`);
+      }
       const filesToSync = await this.findFilesToSyncForRepo(repoUrl, githubFiles, localFiles);
       console.log(`Found ${filesToSync.length} files that need syncing from ${repoUrl}`);
       if (filesToSync.length === 0) {
+        const msg =
+          deletedFiles.length > 0
+            ? `All JSON files are up to date for repository: ${repoUrl}. Removed ${deletedFiles.length} obsolete file(s).`
+            : `All JSON files are up to date for repository: ${repoUrl}`;
         return {
           success: true,
-          message: `All JSON files are up to date for repository: ${repoUrl}`,
+          message: msg,
           count: 0,
-          syncedFiles: []
+          syncedFiles: [],
+          deletedFiles
         };
       }
       const syncedFiles = await this.syncSpecificFiles(repoUrl, filesToSync);
+      const msg =
+        deletedFiles.length > 0
+          ? `Successfully synced ${syncedFiles.length} JSON files from ${repoUrl}, removed ${deletedFiles.length} obsolete file(s).`
+          : `Successfully synced ${syncedFiles.length} JSON files from ${repoUrl}`;
       return {
         success: true,
-        message: `Successfully synced ${syncedFiles.length} JSON files from ${repoUrl}`,
+        message: msg,
         count: syncedFiles.length,
-        syncedFiles
+        syncedFiles,
+        deletedFiles
       };
     } catch (error) {
       console.error(`JSON sync failed for ${repoUrl}:`, error);
@@ -258,7 +206,8 @@ class GitHubJsonService {
         success: false,
         message: `Failed to sync JSON files from ${repoUrl}: ${error instanceof Error ? error.message : 'Unknown error'}`,
         count: 0,
-        syncedFiles: []
+        syncedFiles: [],
+        deletedFiles: []
       };
     }
   }
@@ -274,13 +223,15 @@ class GitHubJsonService {
           success: false,
           message: 'No enabled repositories found',
           count: 0,
-          syncedFiles: []
+          syncedFiles: [],
+          deletedFiles: []
         };
       }
       console.log(`Found ${enabledRepos.length} enabled repositories`);
       const allSyncedFiles = [];
+      const allDeletedFiles = [];
       const processedSlugs = new Set();
       let totalSynced = 0;
@@ -291,6 +242,7 @@ class GitHubJsonService {
         const result = await this.syncJsonFilesForRepo(repo.url);
         if (result.success) {
+          allDeletedFiles.push(...(result.deletedFiles ?? []));
           const newFiles = result.syncedFiles.filter(file => {
             const slug = file.replace('.json', '');
             if (processedSlugs.has(slug)) {
@@ -312,11 +264,16 @@ class GitHubJsonService {
       await this.updateExistingFilesWithRepositoryUrl();
+      const msg =
+        allDeletedFiles.length > 0
+          ? `Successfully synced ${totalSynced} JSON files from ${enabledRepos.length} repositories, removed ${allDeletedFiles.length} obsolete file(s).`
+          : `Successfully synced ${totalSynced} JSON files from ${enabledRepos.length} repositories`;
       return {
         success: true,
-        message: `Successfully synced ${totalSynced} JSON files from ${enabledRepos.length} repositories`,
+        message: msg,
         count: totalSynced,
-        syncedFiles: allSyncedFiles
+        syncedFiles: allSyncedFiles,
+        deletedFiles: allDeletedFiles
       };
     } catch (error) {
       console.error('Multi-repository JSON sync failed:', error);
@@ -324,7 +281,8 @@ class GitHubJsonService {
         success: false,
         message: `Failed to sync JSON files: ${error instanceof Error ? error.message : 'Unknown error'}`,
         count: 0,
-        syncedFiles: []
+        syncedFiles: [],
+        deletedFiles: []
       };
     }
   }
@@ -366,6 +324,32 @@ class GitHubJsonService {
     }
   }
+  async deleteLocalFilesRemovedFromRepo(repoUrl, remoteFilenames) {
+    this.initializeConfig();
+    const localFiles = await this.getLocalJsonFiles();
+    const deletedFiles = [];
+    for (const file of localFiles) {
+      try {
+        const filePath = join(this.localJsonDirectory, file);
+        const content = await readFile(filePath, 'utf-8');
+        const script = JSON.parse(content);
+        if (script.repository_url === repoUrl && !remoteFilenames.has(file)) {
+          await unlink(filePath);
+          const slug = file.replace(/\.json$/, '');
+          this.scriptCache.delete(slug);
+          deletedFiles.push(file);
+          console.log(`Removed obsolete script JSON: ${file} (no longer in ${repoUrl})`);
+        }
+      } catch {
+        // If we can't read or parse the file, skip (do not delete)
+      }
+    }
+    return deletedFiles;
+  }
   async findFilesToSyncForRepo(repoUrl, githubFiles, localFiles) {
     const filesToSync = [];
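Callers can now surface removals alongside additions. A hedged sketch of consuming the extended result, assuming an exported gitHubJsonService singleton (the instance name is hypothetical):

const result = await gitHubJsonService.syncJsonFiles();
if (result.success) {
  // deletedFiles lists local JSONs dropped because the remote no longer has them.
  console.log(`${result.count} synced, ${result.deletedFiles.length} removed`);
}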

View File

@@ -1,8 +1,9 @@
-import { writeFile, mkdir, readdir, readFile } from 'fs/promises';
+import { writeFile, mkdir, readdir, readFile, unlink } from 'fs/promises';
 import { join } from 'path';
 import { env } from '../../env.js';
 import type { Script, ScriptCard, GitHubFile } from '../../types/script';
 import { repositoryService } from './repositoryService';
+import { listDirectory, downloadRawFile } from '~/server/lib/gitProvider';
 export class GitHubJsonService {
   private branch: string | null = null;
@@ -22,96 +23,24 @@ export class GitHubJsonService {
     }
   }
-  private getBaseUrl(repoUrl: string): string {
-    const urlMatch = /github\.com\/([^\/]+)\/([^\/]+)/.exec(repoUrl);
-    if (!urlMatch) {
-      throw new Error(`Invalid GitHub repository URL: ${repoUrl}`);
-    }
-    const [, owner, repo] = urlMatch;
-    return `https://api.github.com/repos/${owner}/${repo}`;
-  }
-  private extractRepoPath(repoUrl: string): string {
-    const match = /github\.com\/([^\/]+)\/([^\/]+)/.exec(repoUrl);
-    if (!match) {
-      throw new Error('Invalid GitHub repository URL');
-    }
-    return `${match[1]}/${match[2]}`;
-  }
-  private async fetchFromGitHub<T>(repoUrl: string, endpoint: string): Promise<T> {
-    const baseUrl = this.getBaseUrl(repoUrl);
-    const headers: HeadersInit = {
-      'Accept': 'application/vnd.github.v3+json',
-      'User-Agent': 'PVEScripts-Local/1.0',
-    };
-    // Add GitHub token authentication if available
-    if (env.GITHUB_TOKEN) {
-      headers.Authorization = `token ${env.GITHUB_TOKEN}`;
-    }
-    const response = await fetch(`${baseUrl}${endpoint}`, { headers });
-    if (!response.ok) {
-      if (response.status === 403) {
-        const error = new Error(`GitHub API rate limit exceeded. Consider setting GITHUB_TOKEN for higher limits. Status: ${response.status} ${response.statusText}`);
-        error.name = 'RateLimitError';
-        throw error;
-      }
-      throw new Error(`GitHub API error: ${response.status} ${response.statusText}`);
-    }
-    const data = await response.json();
-    return data as T;
-  }
   private async downloadJsonFile(repoUrl: string, filePath: string): Promise<Script> {
     this.initializeConfig();
-    const repoPath = this.extractRepoPath(repoUrl);
-    const rawUrl = `https://raw.githubusercontent.com/${repoPath}/${this.branch!}/${filePath}`;
-    const headers: HeadersInit = {
-      'User-Agent': 'PVEScripts-Local/1.0',
-    };
-    // Add GitHub token authentication if available
-    if (env.GITHUB_TOKEN) {
-      headers.Authorization = `token ${env.GITHUB_TOKEN}`;
-    }
-    const response = await fetch(rawUrl, { headers });
-    if (!response.ok) {
-      if (response.status === 403) {
-        const error = new Error(`GitHub rate limit exceeded while downloading ${filePath}. Consider setting GITHUB_TOKEN for higher limits. Status: ${response.status} ${response.statusText}`);
-        error.name = 'RateLimitError';
-        throw error;
-      }
-      throw new Error(`Failed to download ${filePath}: ${response.status} ${response.statusText}`);
-    }
-    const content = await response.text();
+    const content = await downloadRawFile(repoUrl, filePath, this.branch!);
     const script = JSON.parse(content) as Script;
-    // Add repository_url to script
     script.repository_url = repoUrl;
     return script;
   }
   async getJsonFiles(repoUrl: string): Promise<GitHubFile[]> {
     this.initializeConfig();
     try {
-      const files = await this.fetchFromGitHub<GitHubFile[]>(
-        repoUrl,
-        `/contents/${this.jsonFolder!}?ref=${this.branch!}`
-      );
-      // Filter for JSON files only
-      return files.filter(file => file.name.endsWith('.json'));
+      const entries = await listDirectory(repoUrl, this.jsonFolder!, this.branch!);
+      const files: GitHubFile[] = entries
+        .filter((e) => e.type === 'file' && e.name.endsWith('.json'))
+        .map((e) => ({ name: e.name, path: e.path } as GitHubFile));
+      return files;
     } catch (error) {
-      console.error(`Error fetching JSON files from GitHub (${repoUrl}):`, error);
+      console.error(`Error fetching JSON files from repository (${repoUrl}):`, error);
       throw new Error(`Failed to fetch script files from repository: ${repoUrl}`);
     }
   }
@@ -229,12 +158,11 @@ export class GitHubJsonService {
   /**
    * Sync JSON files from a specific repository
    */
-  async syncJsonFilesForRepo(repoUrl: string): Promise<{ success: boolean; message: string; count: number; syncedFiles: string[] }> {
+  async syncJsonFilesForRepo(repoUrl: string): Promise<{ success: boolean; message: string; count: number; syncedFiles: string[]; deletedFiles: string[] }> {
     try {
       console.log(`Starting JSON sync from repository: ${repoUrl}`);
-      // Get file list from GitHub
-      console.log(`Fetching file list from GitHub (${repoUrl})...`);
+      console.log(`Fetching file list from repository (${repoUrl})...`);
       const githubFiles = await this.getJsonFiles(repoUrl);
       console.log(`Found ${githubFiles.length} JSON files in repository ${repoUrl}`);
@@ -242,28 +170,45 @@ export class GitHubJsonService {
       const localFiles = await this.getLocalJsonFiles();
       console.log(`Found ${localFiles.length} local JSON files`);
+      // Delete local JSON files that belong to this repo but are no longer in the remote
+      const remoteFilenames = new Set(githubFiles.map((f) => f.name));
+      const deletedFiles = await this.deleteLocalFilesRemovedFromRepo(repoUrl, remoteFilenames);
+      if (deletedFiles.length > 0) {
+        console.log(`Removed ${deletedFiles.length} obsolete JSON file(s) no longer in ${repoUrl}`);
+      }
       // Compare and find files that need syncing
       // For multi-repo support, we need to check if file exists AND if it's from this repo
       const filesToSync = await this.findFilesToSyncForRepo(repoUrl, githubFiles, localFiles);
       console.log(`Found ${filesToSync.length} files that need syncing from ${repoUrl}`);
       if (filesToSync.length === 0) {
+        const msg =
+          deletedFiles.length > 0
+            ? `All JSON files are up to date for repository: ${repoUrl}. Removed ${deletedFiles.length} obsolete file(s).`
+            : `All JSON files are up to date for repository: ${repoUrl}`;
         return {
           success: true,
-          message: `All JSON files are up to date for repository: ${repoUrl}`,
+          message: msg,
           count: 0,
-          syncedFiles: []
+          syncedFiles: [],
+          deletedFiles
         };
       }
       // Download and save only the files that need syncing
       const syncedFiles = await this.syncSpecificFiles(repoUrl, filesToSync);
+      const msg =
+        deletedFiles.length > 0
+          ? `Successfully synced ${syncedFiles.length} JSON files from ${repoUrl}, removed ${deletedFiles.length} obsolete file(s).`
+          : `Successfully synced ${syncedFiles.length} JSON files from ${repoUrl}`;
       return {
         success: true,
-        message: `Successfully synced ${syncedFiles.length} JSON files from ${repoUrl}`,
+        message: msg,
         count: syncedFiles.length,
-        syncedFiles
+        syncedFiles,
+        deletedFiles
       };
     } catch (error) {
       console.error(`JSON sync failed for ${repoUrl}:`, error);
@@ -271,7 +216,8 @@ export class GitHubJsonService {
         success: false,
         message: `Failed to sync JSON files from ${repoUrl}: ${error instanceof Error ? error.message : 'Unknown error'}`,
         count: 0,
-        syncedFiles: []
+        syncedFiles: [],
+        deletedFiles: []
       };
     }
   }
@@ -279,7 +225,7 @@ export class GitHubJsonService {
   /**
    * Sync JSON files from all enabled repositories (main repo has priority)
    */
-  async syncJsonFiles(): Promise<{ success: boolean; message: string; count: number; syncedFiles: string[] }> {
+  async syncJsonFiles(): Promise<{ success: boolean; message: string; count: number; syncedFiles: string[]; deletedFiles: string[] }> {
     try {
       console.log('Starting multi-repository JSON sync...');
@@ -290,13 +236,15 @@ export class GitHubJsonService {
           success: false,
           message: 'No enabled repositories found',
           count: 0,
-          syncedFiles: []
+          syncedFiles: [],
+          deletedFiles: []
         };
       }
       console.log(`Found ${enabledRepos.length} enabled repositories`);
       const allSyncedFiles: string[] = [];
+      const allDeletedFiles: string[] = [];
       const processedSlugs = new Set<string>(); // Track slugs we've already processed
       let totalSynced = 0;
@@ -308,6 +256,7 @@ export class GitHubJsonService {
         const result = await this.syncJsonFilesForRepo(repo.url);
         if (result.success) {
+          allDeletedFiles.push(...(result.deletedFiles ?? []));
           // Only count files that weren't already processed from a higher priority repo
           const newFiles = result.syncedFiles.filter(file => {
             const slug = file.replace('.json', '');
@@ -331,11 +280,16 @@ export class GitHubJsonService {
       // Also update existing files that don't have repository_url set (backward compatibility)
       await this.updateExistingFilesWithRepositoryUrl();
+      const msg =
+        allDeletedFiles.length > 0
+          ? `Successfully synced ${totalSynced} JSON files from ${enabledRepos.length} repositories, removed ${allDeletedFiles.length} obsolete file(s).`
+          : `Successfully synced ${totalSynced} JSON files from ${enabledRepos.length} repositories`;
       return {
         success: true,
-        message: `Successfully synced ${totalSynced} JSON files from ${enabledRepos.length} repositories`,
+        message: msg,
         count: totalSynced,
-        syncedFiles: allSyncedFiles
+        syncedFiles: allSyncedFiles,
+        deletedFiles: allDeletedFiles
       };
     } catch (error) {
       console.error('Multi-repository JSON sync failed:', error);
@@ -343,7 +297,8 @@ export class GitHubJsonService {
         success: false,
         message: `Failed to sync JSON files: ${error instanceof Error ? error.message : 'Unknown error'}`,
         count: 0,
-        syncedFiles: []
+        syncedFiles: [],
+        deletedFiles: []
       };
     }
   }
@@ -388,6 +343,36 @@ export class GitHubJsonService {
     }
   }
+  /**
+   * Delete local JSON files that belong to this repo but are no longer in the remote list.
+   * Returns the list of deleted filenames.
+   */
+  private async deleteLocalFilesRemovedFromRepo(repoUrl: string, remoteFilenames: Set<string>): Promise<string[]> {
+    this.initializeConfig();
+    const localFiles = await this.getLocalJsonFiles();
+    const deletedFiles: string[] = [];
+    for (const file of localFiles) {
+      try {
+        const filePath = join(this.localJsonDirectory!, file);
+        const content = await readFile(filePath, 'utf-8');
+        const script = JSON.parse(content) as Script;
+        if (script.repository_url === repoUrl && !remoteFilenames.has(file)) {
+          await unlink(filePath);
+          const slug = file.replace(/\.json$/, '');
+          this.scriptCache.delete(slug);
+          deletedFiles.push(file);
+          console.log(`Removed obsolete script JSON: ${file} (no longer in ${repoUrl})`);
+        }
+      } catch {
+        // If we can't read or parse the file, skip (do not delete)
+      }
+    }
+    return deletedFiles;
+  }
   /**
    * Find files that need syncing for a specific repository
    * This checks if file exists locally AND if it's from the same repository
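Both services lean on the same `listDirectory` contract. The entry shape below is an assumption inferred from the filtering code, not taken from the gitProvider module itself:

// Assumed entry shape; getJsonFiles keeps only files named *.json.
interface RepoEntry { type: 'file' | 'dir'; name: string; path: string }
const jsonOnly = (entries: RepoEntry[]) =>
  entries.filter((e) => e.type === 'file' && e.name.endsWith('.json'));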

View File

@@ -1,5 +1,6 @@
 // JavaScript wrapper for repositoryService (for use with node server.js)
 import { prisma } from '../db.js';
+import { isValidRepositoryUrl, REPO_URL_ERROR_MESSAGE } from '../lib/repositoryUrlValidation.js';
 class RepositoryService {
   /**
@@ -89,9 +90,8 @@ class RepositoryService {
    * Create a new repository
    */
   async createRepository(data) {
-    // Validate GitHub URL
-    if (!data.url.match(/^https:\/\/github\.com\/[^\/]+\/[^\/]+$/)) {
-      throw new Error('Invalid GitHub repository URL. Format: https://github.com/owner/repo');
+    if (!isValidRepositoryUrl(data.url)) {
+      throw new Error(REPO_URL_ERROR_MESSAGE);
     }
     // Check for duplicates
@@ -122,10 +122,9 @@ class RepositoryService {
    * Update repository
    */
   async updateRepository(id, data) {
-    // If updating URL, validate it
     if (data.url) {
-      if (!data.url.match(/^https:\/\/github\.com\/[^\/]+\/[^\/]+$/)) {
-        throw new Error('Invalid GitHub repository URL. Format: https://github.com/owner/repo');
+      if (!isValidRepositoryUrl(data.url)) {
+        throw new Error(REPO_URL_ERROR_MESSAGE);
       }
       // Check for duplicates (excluding current repo)

View File

@@ -1,5 +1,5 @@
-/* eslint-disable @typescript-eslint/prefer-regexp-exec */
 import { prisma } from '../db';
+import { isValidRepositoryUrl, REPO_URL_ERROR_MESSAGE } from '../lib/repositoryUrlValidation';
 export class RepositoryService {
   /**
@@ -93,9 +93,8 @@ export class RepositoryService {
     enabled?: boolean;
     priority?: number;
   }) {
-    // Validate GitHub URL
-    if (!data.url.match(/^https:\/\/github\.com\/[^\/]+\/[^\/]+$/)) {
-      throw new Error('Invalid GitHub repository URL. Format: https://github.com/owner/repo');
+    if (!isValidRepositoryUrl(data.url)) {
+      throw new Error(REPO_URL_ERROR_MESSAGE);
     }
     // Check for duplicates
@@ -130,10 +129,9 @@ export class RepositoryService {
     url?: string;
     priority?: number;
   }) {
-    // If updating URL, validate it
     if (data.url) {
-      if (!data.url.match(/^https:\/\/github\.com\/[^\/]+\/[^\/]+$/)) {
-        throw new Error('Invalid GitHub repository URL. Format: https://github.com/owner/repo');
+      if (!isValidRepositoryUrl(data.url)) {
+        throw new Error(REPO_URL_ERROR_MESSAGE);
       }
       // Check for duplicates (excluding current repo)
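With both services sharing one validator, a malformed URL fails fast with the provider-agnostic message. A hypothetical rejection (argument shape abbreviated; validation runs before any other field is read):

// SSH-style remotes are not URLs and are rejected by the format regex.
try {
  await repositoryService.createRepository({ url: 'git@github.com:owner/repo.git' });
} catch (e) {
  console.error(e); // Invalid repository URL. Supported: GitHub, GitLab, Bitbucket, ...
}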

View File

@@ -250,9 +250,16 @@ class RestoreService {
     const targetFolder = `/var/lib/vz/dump/vzdump-lxc-${ctId}-${snapshotNameForPath}`;
     const targetTar = `${targetFolder}.tar`;
-    // Use PBS_PASSWORD env var and add timeout for long downloads
+    // Use PBS_PASSWORD env var and add timeout for long downloads; PBS_FINGERPRINT when set for cert validation
     const escapedPassword = credential.pbs_password.replace(/'/g, "'\\''");
-    const restoreCommand = `PBS_PASSWORD='${escapedPassword}' PBS_REPOSITORY='${repository}' timeout 300 proxmox-backup-client restore "${snapshotPath}" root.pxar "${targetFolder}" --repository '${repository}' 2>&1`;
+    const fingerprint = credential.pbs_fingerprint?.trim() ?? '';
+    const escapedFingerprint = fingerprint ? fingerprint.replace(/'/g, "'\\''") : '';
+    const restoreEnvParts = [`PBS_PASSWORD='${escapedPassword}'`, `PBS_REPOSITORY='${repository}'`];
+    if (escapedFingerprint) {
+      restoreEnvParts.push(`PBS_FINGERPRINT='${escapedFingerprint}'`);
+    }
+    const restoreEnvStr = restoreEnvParts.join(' ');
+    const restoreCommand = `${restoreEnvStr} timeout 300 proxmox-backup-client restore "${snapshotPath}" root.pxar "${targetFolder}" --repository '${repository}' 2>&1`;
     let output = '';
     let exitCode = 0;

View File

@@ -1,6 +1,7 @@
 // Real JavaScript implementation for script downloading
 import { join } from 'path';
 import { writeFile, mkdir, access, readFile, unlink } from 'fs/promises';
+import { downloadRawFile } from '../lib/gitProvider/index.js';
 export class ScriptDownloaderService {
   constructor() {
@@ -82,51 +83,18 @@ export class ScriptDownloaderService {
   }
   /**
-   * Extract repository path from GitHub URL
-   * @param {string} repoUrl - The GitHub repository URL
-   * @returns {string}
-   */
-  extractRepoPath(repoUrl) {
-    const match = /github\.com\/([^\/]+)\/([^\/]+)/.exec(repoUrl);
-    if (!match) {
-      throw new Error(`Invalid GitHub repository URL: ${repoUrl}`);
-    }
-    return `${match[1]}/${match[2]}`;
-  }
-  /**
-   * Download a file from GitHub
-   * @param {string} repoUrl - The GitHub repository URL
+   * Download a file from the repository (GitHub, GitLab, Bitbucket, or custom)
+   * @param {string} repoUrl - The repository URL
    * @param {string} filePath - The file path within the repository
    * @param {string} [branch] - The branch to download from
    * @returns {Promise<string>}
    */
-  async downloadFileFromGitHub(repoUrl, filePath, branch = 'main') {
-    this.initializeConfig();
+  async downloadFileFromRepo(repoUrl, filePath, branch = 'main') {
     if (!repoUrl) {
       throw new Error('Repository URL is not set');
     }
-    const repoPath = this.extractRepoPath(repoUrl);
-    const url = `https://raw.githubusercontent.com/${repoPath}/${branch}/${filePath}`;
-    /** @type {Record<string, string>} */
-    const headers = {
-      'User-Agent': 'PVEScripts-Local/1.0',
-    };
-    // Add GitHub token authentication if available
-    if (process.env.GITHUB_TOKEN) {
-      headers.Authorization = `token ${process.env.GITHUB_TOKEN}`;
-    }
-    console.log(`Downloading from GitHub: ${url}`);
-    const response = await fetch(url, { headers });
-    if (!response.ok) {
-      throw new Error(`Failed to download ${filePath} from ${repoUrl}: ${response.status} ${response.statusText}`);
-    }
-    return response.text();
+    console.log(`Downloading from repository: ${repoUrl} (${filePath})`);
+    return downloadRawFile(repoUrl, filePath, branch);
   }
   /**
@@ -184,9 +152,8 @@ export class ScriptDownloaderService {
       const fileName = scriptPath.split('/').pop();
       if (fileName) {
-        // Download from GitHub using the script's repository URL
         console.log(`Downloading script file: ${scriptPath} from ${repoUrl}`);
-        const content = await this.downloadFileFromGitHub(repoUrl, scriptPath, branch);
+        const content = await this.downloadFileFromRepo(repoUrl, scriptPath, branch);
         // Determine target directory based on script path
         let targetDir;
@@ -250,7 +217,7 @@ export class ScriptDownloaderService {
       const installScriptName = `${script.slug}-install.sh`;
       try {
         console.log(`Downloading install script: install/${installScriptName} from ${repoUrl}`);
-        const installContent = await this.downloadFileFromGitHub(repoUrl, `install/${installScriptName}`, branch);
+        const installContent = await this.downloadFileFromRepo(repoUrl, `install/${installScriptName}`, branch);
         const localInstallPath = join(this.scriptsDirectory, 'install', installScriptName);
         await writeFile(localInstallPath, installContent, 'utf-8');
         files.push(`install/${installScriptName}`);
@@ -274,7 +241,7 @@ export class ScriptDownloaderService {
       const alpineInstallScriptName = `alpine-${script.slug}-install.sh`;
       try {
         console.log(`[${script.slug}] Downloading alpine install script: install/${alpineInstallScriptName} from ${repoUrl}`);
-        const alpineInstallContent = await this.downloadFileFromGitHub(repoUrl, `install/${alpineInstallScriptName}`, branch);
+        const alpineInstallContent = await this.downloadFileFromRepo(repoUrl, `install/${alpineInstallScriptName}`, branch);
         const localAlpineInstallPath = join(this.scriptsDirectory, 'install', alpineInstallScriptName);
         await writeFile(localAlpineInstallPath, alpineInstallContent, 'utf-8');
         files.push(`install/${alpineInstallScriptName}`);
@@ -681,7 +648,7 @@ export class ScriptDownloaderService {
       console.log(`[Comparison] Local file size: ${localContent.length} bytes`);
       // Download remote content from the script's repository
-      const remoteContent = await this.downloadFileFromGitHub(repoUrl, remotePath, branch);
+      const remoteContent = await this.downloadFileFromRepo(repoUrl, remotePath, branch);
       console.log(`[Comparison] Remote file size: ${remoteContent.length} bytes`);
       // Apply modification only for CT scripts, not for other script types
@@ -739,7 +706,7 @@ export class ScriptDownloaderService {
         // Find the corresponding script path in install_methods
         const method = script.install_methods?.find(m => m.script === filePath);
         if (method?.script) {
-          const downloadedContent = await this.downloadFileFromGitHub(repoUrl, method.script, branch);
+          const downloadedContent = await this.downloadFileFromRepo(repoUrl, method.script, branch);
           remoteContent = this.modifyScriptContent(downloadedContent);
         }
       } catch {
@@ -756,7 +723,7 @@ export class ScriptDownloaderService {
       }
       try {
-        remoteContent = await this.downloadFileFromGitHub(repoUrl, filePath, branch);
+        remoteContent = await this.downloadFileFromRepo(repoUrl, filePath, branch);
       } catch {
         // Error downloading remote install script
       }
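Every call site now goes through the provider-agnostic `downloadFileFromRepo`, so a GitLab or custom host works the same as GitHub. A minimal sketch (repo URL and file path are hypothetical):

const svc = new ScriptDownloaderService();
// downloadRawFile resolves the provider-specific raw-content URL behind this call.
const body = await svc.downloadFileFromRepo('https://gitlab.com/owner/repo', 'ct/example.sh', 'main');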

View File

@@ -1,6 +1,8 @@
 import { spawn } from 'child_process';
 import { spawn as ptySpawn } from 'node-pty';
-import { existsSync } from 'fs';
+import { existsSync, writeFileSync, chmodSync, unlinkSync } from 'fs';
+import { join } from 'path';
+import { tmpdir } from 'os';
 /**
@@ -194,26 +196,45 @@ class SSHExecutionService {
    */
   async transferScriptsFolder(server, onData, onError) {
     const { ip, user, password, auth_type = 'password', ssh_key_passphrase, ssh_key_path, ssh_port = 22 } = server;
+    const cleanupTempFile = (/** @type {string | null} */ tempPath) => {
+      if (tempPath) {
+        try {
+          unlinkSync(tempPath);
+        } catch (_) {
+          // ignore
+        }
+      }
+    };
     return new Promise((resolve, reject) => {
+      /** @type {string | null} */
+      let tempPath = null;
       try {
-        // Build rsync command based on authentication type
+        // Build rsync command based on authentication type.
+        // Use sshpass -f with a temp file so password/passphrase never go through the shell (safe for special chars like {, $, ").
         let rshCommand;
         if (auth_type === 'key') {
           if (!ssh_key_path || !existsSync(ssh_key_path)) {
             throw new Error('SSH key file not found');
           }
           if (ssh_key_passphrase) {
-            rshCommand = `sshpass -P passphrase -p ${ssh_key_passphrase} ssh -i ${ssh_key_path} -p ${ssh_port} -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null`;
+            tempPath = join(tmpdir(), `sshpass-${process.pid}-${Date.now()}.tmp`);
+            writeFileSync(tempPath, ssh_key_passphrase);
+            chmodSync(tempPath, 0o600);
+            rshCommand = `sshpass -P passphrase -f ${tempPath} ssh -i ${ssh_key_path} -p ${ssh_port} -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null`;
           } else {
             rshCommand = `ssh -i ${ssh_key_path} -p ${ssh_port} -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null`;
           }
         } else {
           // Password authentication
-          rshCommand = `sshpass -p ${password} ssh -p ${ssh_port} -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null`;
+          tempPath = join(tmpdir(), `sshpass-${process.pid}-${Date.now()}.tmp`);
+          writeFileSync(tempPath, password ?? '');
+          chmodSync(tempPath, 0o600);
+          rshCommand = `sshpass -f ${tempPath} ssh -p ${ssh_port} -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null`;
         }
         const rsyncCommand = spawn('rsync', [
           '-avz',
           '--delete',
@@ -226,31 +247,31 @@ class SSHExecutionService {
           stdio: ['pipe', 'pipe', 'pipe']
         });
         rsyncCommand.stdout.on('data', (/** @type {Buffer} */ data) => {
-          // Ensure proper UTF-8 encoding for ANSI colors
           const output = data.toString('utf8');
           onData(output);
         });
         rsyncCommand.stderr.on('data', (/** @type {Buffer} */ data) => {
-          // Ensure proper UTF-8 encoding for ANSI colors
           const output = data.toString('utf8');
           onError(output);
         });
         rsyncCommand.on('close', (code) => {
+          cleanupTempFile(tempPath);
           if (code === 0) {
             resolve();
           } else {
             reject(new Error(`rsync failed with code ${code}`));
           }
         });
         rsyncCommand.on('error', (error) => {
+          cleanupTempFile(tempPath);
           reject(error);
         });
       } catch (error) {
+        cleanupTempFile(tempPath);
        reject(error);
      }
    });
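The temp-file handoff matters because `sshpass -p` exposes the secret on the command line and in the process list. A minimal sketch of the pattern in isolation (secret value hypothetical):

import { writeFileSync, chmodSync, unlinkSync } from 'fs';
import { join } from 'path';
import { tmpdir } from 'os';

const tempPath = join(tmpdir(), `sshpass-${process.pid}-${Date.now()}.tmp`);
writeFileSync(tempPath, 'p@ss{wo}rd$'); // the secret only ever touches this file
chmodSync(tempPath, 0o600);             // readable by the owner only
try {
  // ... spawn rsync with `sshpass -f ${tempPath} ssh ...` as the remote shell ...
} finally {
  unlinkSync(tempPath);                 // always remove, success or failure
}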

View File

@@ -169,16 +169,17 @@ class SSHService {
     const timeout = 10000;
     let resolved = false;
+    // Pass password via env so it is not embedded in the script (safe for special chars like {, $, ").
     const expectScript = `#!/usr/bin/expect -f
 set timeout 10
 spawn ssh -p ${ssh_port} -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o LogLevel=ERROR -o PasswordAuthentication=yes -o PubkeyAuthentication=no ${user}@${ip} "echo SSH_LOGIN_SUCCESS"
 expect {
     "password:" {
-        send "${password}\r"
+        send "$env(SSH_PASSWORD)\\r"
         exp_continue
     }
     "Password:" {
-        send "${password}\r"
+        send "$env(SSH_PASSWORD)\\r"
         exp_continue
     }
     "SSH_LOGIN_SUCCESS" {
@@ -193,7 +194,8 @@ class SSHService {
 }`;
     const expectCommand = spawn('expect', ['-c', expectScript], {
-      stdio: ['pipe', 'pipe', 'pipe']
+      stdio: ['pipe', 'pipe', 'pipe'],
+      env: { ...process.env, SSH_PASSWORD: password ?? '' }
     });
     const timer = setTimeout(() => {
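Reading `$env(SSH_PASSWORD)` inside expect keeps the secret out of the script text entirely, so Tcl never re-parses characters like { } $ ". A reduced sketch of the same handoff (password value hypothetical):

import { spawn } from 'child_process';
// The child sees SSH_PASSWORD in its environment; the -c script contains no secret.
const child = spawn('expect', ['-c', 'puts $env(SSH_PASSWORD)'], {
  env: { ...process.env, SSH_PASSWORD: 'we{ird}$pa"ss' },
});
child.stdout.on('data', (d) => process.stdout.write(d));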

View File

@@ -710,11 +710,14 @@ install_and_build() {
   log "Building application..."
   # Set NODE_ENV to production for build
   export NODE_ENV=production
+  # Unset TURBOPACK to prevent "Multiple bundler flags" error with --webpack
+  unset TURBOPACK 2>/dev/null || true
+  export TURBOPACK=''
   # Create temporary file for npm build output
   local build_log="/tmp/npm_build_$$.log"
-  if ! npm run build >"$build_log" 2>&1; then
+  if ! TURBOPACK='' npm run build >"$build_log" 2>&1; then
     log_error "Failed to build application"
     log_error "npm run build output:"
     cat "$build_log" | while read -r line; do
@@ -781,6 +784,23 @@ start_with_npm() {
   fi
 }
+# Re-enable the systemd service on failure to prevent users from being locked out
+re_enable_service_on_failure() {
+  if check_service; then
+    log "Re-enabling systemd service after failure..."
+    if systemctl enable pvescriptslocal.service 2>/dev/null; then
+      log_success "Service re-enabled"
+      if systemctl start pvescriptslocal.service 2>/dev/null; then
+        log_success "Service started"
+      else
+        log_warning "Failed to start service - manual intervention may be required"
+      fi
+    else
+      log_warning "Failed to re-enable service - manual intervention may be required"
+    fi
+  fi
+}
 # Rollback function
 rollback() {
   log_warning "Rolling back to previous version..."
@@ -852,6 +872,9 @@ rollback() {
     log_error "No backup directory found for rollback"
   fi
+  # Re-enable the service so users aren't locked out
+  re_enable_service_on_failure
   log_error "Update failed. Please check the logs and try again."
   exit 1
 }
@@ -870,14 +893,14 @@ check_node_version() {
   log "Detected Node.js version: $current"
-  if ((major_version < 24)); then
+  if ((major_version == 24)); then
+    log_success "Node.js 24 already installed"
+  elif ((major_version < 24)); then
     log_warning "Node.js < 24 detected → upgrading to Node.js 24 LTS..."
     upgrade_node_to_24
-  elif ((major_version > 24)); then
+  else
     log_warning "Node.js > 24 detected → script tested only up to Node 24"
     log "Continuing anyway…"
-  else
-    log_success "Node.js 24 already installed"
   fi
 }
@@ -885,22 +908,39 @@ check_node_version() {
 upgrade_node_to_24() {
   log "Preparing Node.js 24 upgrade…"
-  # Remove old nodesource repo if it exists
+  # Remove old nodesource repo files if they exist
   if [ -f /etc/apt/sources.list.d/nodesource.list ]; then
+    log "Removing old nodesource.list file..."
     rm -f /etc/apt/sources.list.d/nodesource.list
   fi
+  if [ -f /etc/apt/sources.list.d/nodesource.sources ]; then
+    log "Removing old nodesource.sources file..."
+    rm -f /etc/apt/sources.list.d/nodesource.sources
+  fi
+  # Update apt cache first
+  log "Updating apt cache..."
+  apt-get update >>"$LOG_FILE" 2>&1 || true
   # Install NodeSource repo for Node.js 24
-  curl -fsSL https://deb.nodesource.com/setup_24.x -o /tmp/node24_setup.sh
+  log "Downloading Node.js 24 setup script..."
+  if ! curl -fsSL https://deb.nodesource.com/setup_24.x -o /tmp/node24_setup.sh; then
+    log_error "Failed to download Node.js 24 setup script"
+    re_enable_service_on_failure
+    exit 1
+  fi
   if ! bash /tmp/node24_setup.sh >/tmp/node24_setup.log 2>&1; then
     log_error "Failed to configure Node.js 24 repository"
     tail -20 /tmp/node24_setup.log | while read -r line; do log_error "$line"; done
+    re_enable_service_on_failure
     exit 1
   fi
   log "Installing Node.js 24…"
   if ! apt-get install -y nodejs >>"$LOG_FILE" 2>&1; then
     log_error "Failed to install Node.js 24"
+    re_enable_service_on_failure
     exit 1
   fi