Compare commits

...

45 Commits

Author SHA1 Message Date
a0cca97d6f Implement a new rough comparison function for sorting ranges within a template-determined percent error
2024-09-03 22:31:28 +02:00
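To illustrate the idea in the commit above — a minimal Python sketch of a "rough" comparison that treats two values as equal when they fall within a template-determined percent error. The function and parameter names are assumptions for illustration; the game's actual implementation lives in its C++/JS simulation code.

```python
def roughly_equal(a, b, percent_error):
    """Treat a and b as equal if they differ by at most percent_error
    percent of the larger magnitude (hypothetical illustration)."""
    tolerance = max(abs(a), abs(b)) * percent_error / 100.0
    return abs(a - b) <= tolerance


def rough_compare(a, b, percent_error):
    """Three-way comparison for sorting: 0 means "close enough"."""
    if roughly_equal(a, b, percent_error):
        return 0
    return -1 if a < b else 1


print(rough_compare(100.0, 101.5, percent_error=2))  # 0: within 2% of each other
print(rough_compare(100.0, 150.0, percent_error=2))  # -1: clearly smaller
```

Note that a tolerance-based comparator is not a strict weak ordering, so it suits approximate bucketing of near-equal ranges rather than exact sorting.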
19e7d14506
Use PEP 8 naming conventions for fontbuilder2 2024-09-03 14:01:47 +02:00
4a049c5f3a
Use PEP 8 naming conventions for i18n tools 2024-09-03 13:51:27 +02:00
80f808df4a
Markdown format for fontbuilder2 README
The fontbuilder2 README file was already partially formatted with
Markdown, but didn't indicate that via its file extension. This commit
changes that and improves the formatting of the README itself.
2024-09-03 13:39:28 +02:00
9fd05e38a4 Update libraries to fix some build errors on recent macOS.
Fixes #6797
Fixes #6902
Refs #6915
Fixes #6916
Refs #4362
2024-09-03 11:17:12 +02:00
f7630b155c Do not hardcode JOBS in CI pipelines
This build environment variable is now set agent-wide and corresponds to
the number of CPUs of each agent.
2024-09-02 21:47:08 +02:00
670f68c1c5 Fix remnant from old Jenkins configuration 2024-09-02 21:47:08 +02:00
f87dfebc71 build-macos-libs.sh: move to posix shell
This removes the remaining bashisms and changes the shebang.

Also remove command chaining in subshells, as this allows comments to go
where they belong and shellcheck directives to be more specific.

Signed-off-by: Ralph Sennhauser <ralph.sennhauser@gmail.com>
2024-09-02 15:37:44 +02:00
79127ec59d Automatically try to unbreak CI incremental builds
This entirely reverts ee3318309b and removes the [CLEANBUILD] feature,
which had to be used consistently to be effective.

Instead, in case of build failure, the CI automatically starts a
non-incremental second build by running `make clean` (or the MSBuild
equivalent) before retrying the build.
2024-09-01 13:29:17 +02:00
bcf97b608b
Enable ruff rules for docstrings and comments
This enables some ruff rules for docstrings and comments. The idea is to
not enforce the presence of docstrings, but to ensure they are properly
formatted if they're present.

For comments, this adds checks that they don't contain code and verifies
the formatting of comments with "TODO" tags.

As part of this, some commented-out code which hasn't been touched in the
past 10 years gets removed as well.

The rules enabled by this are:

- check formatting of existing docstrings (D200-)
- check comments for code (ERA)
- check formatting of TODO tags (TD001, TD004-)
2024-08-31 21:09:20 +02:00
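As an illustration (not taken from the commit itself), this is the kind of Python the newly enabled rules flag:

```python
def load_config():
    """
    Read the configuration file.
    """  # D200: a one-line docstring should sit on a single line


# ERA001: commented-out code gets flagged so it is removed instead of rotting.
# config = load_config()

# TD004: a TODO tag is expected to be followed by a colon, e.g.:
# TODO: add error handling for missing files
```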
91ab55d6ea
Convert license_dbghelp.txt to UTF-8 2024-08-31 17:04:05 +02:00
9ac60514c3 build-macos-libs.sh: partial move to posix shell
Change all non-POSIX constructs except for stacked pushd/popd, for the
sake of reviewability.

The remainder will be done in a separate commit.

Signed-off-by: Ralph Sennhauser <ralph.sennhauser@gmail.com>
2024-08-30 18:36:09 +02:00
3564512a63 android/setup-libs.sh: move to posix shell
Convert various non-POSIX constructs and change the shebang to /bin/sh.

Signed-off-by: Ralph Sennhauser <ralph.sennhauser@gmail.com>
2024-08-29 12:55:48 +02:00
2634f8762e build-unix-win32.sh: move to posix shell
Convert non-POSIX shell constructs and change the shebang to /bin/sh.

Signed-off-by: Ralph Sennhauser <ralph.sennhauser@gmail.com>
2024-08-29 12:55:26 +02:00
1c4a32baa4 build-archives.sh: move to posix shell
Convert various non-POSIX constructs and change the shebang to /bin/sh.

Signed-off-by: Ralph Sennhauser <ralph.sennhauser@gmail.com>
2024-08-29 12:55:26 +02:00
8ed1a0cb5a templatessorter.sh: move to posix shell
This one is already clean, so just change the shebang.

Signed-off-by: Ralph Sennhauser <ralph.sennhauser@gmail.com>
2024-08-29 12:55:26 +02:00
f2bef8388a
Use UTF-8 as encoding when working with files
This explicitly uses UTF-8 encoding when reading or writing files with
Python. This is necessary as the default locale varies between
operating systems.
2024-08-29 07:22:46 +02:00
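A minimal before/after sketch of the pattern this commit applies (the file name is a placeholder):

```python
from pathlib import Path

# Before: the encoding silently depends on the OS locale
# (e.g. cp1252 on Windows, UTF-8 on most Linux systems):
#     with open("strings.txt") as fh: ...

# After: the encoding is explicit and identical on every platform.
with open("strings.txt", "w", encoding="utf-8") as fh:
    fh.write("héllo\n")

print(Path("strings.txt").read_text(encoding="utf-8"))
```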
c3b99feb60
Enable ruff rules for code simplification
This enables ruff rules which check for code that can be simplified to
improve readability.

The additional rules enabled by this are:

- remove unnecessary nesting of if-statements (SIM102)
- use contextlib.suppress() for no-op exception handling (SIM105)
- use enumerate() for counting in loops (SIM113)
- use context managers for opening files (SIM115)
2024-08-29 07:00:43 +02:00
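For illustration, minimal examples of the patterns these rules rewrite (file names are placeholders):

```python
import contextlib
import os

# SIM105: use contextlib.suppress() instead of an empty try/except block.
with contextlib.suppress(FileNotFoundError):
    os.remove("stale.lock")

# SIM113: use enumerate() instead of a manually incremented counter.
for index, line in enumerate(["a", "b", "c"]):
    print(index, line)

# SIM115: open files via a context manager so they are closed on all paths.
with open("example.txt", "w", encoding="utf-8") as fh:
    fh.write("example\n")
```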
028ec40165
Add a RC file to add metadata to the pyrogenesis executable 2024-08-28 23:37:36 +02:00
631f7b292e Add pre-commit hook for shellcheck
Using shellcheck-py[1] instead of the official shellcheck-precommit[2]
to avoid a dependency on Docker.

[1] https://github.com/shellcheck-py/shellcheck-py
[2] https://github.com/koalaman/shellcheck-precommit

Signed-off-by: Ralph Sennhauser <ralph.sennhauser@gmail.com>
2024-08-28 18:23:18 +02:00
6f5ac7c4ae Fix issues pointed out by shellcheck
Some are real bugs, some are bashisms, but most are zealous quoting of
variables.

Signed-off-by: Ralph Sennhauser <ralph.sennhauser@gmail.com>
2024-08-28 18:23:18 +02:00
0feeb5884a
Cache Python dependencies in pre-commit workflow
This should speed up installation of pre-commit as part of the Gitea
Action a bit, as it caches the Python packages required for it.

The cache functionality of actions/setup-python is unsuitable for this
use case, as we don't have a requirements file to use as the cache key.
Instead, we just want to cache whatever is downloaded via pip from PyPI
and keep it for further invocations. This commit achieves that by using
the same cache key for every workflow run. The cache is updated even if
pre-commit fails, so we always keep the Python packages used for
installing pre-commit cached.
2024-08-28 07:42:08 +02:00
97e6691c76
Fix Atlas in the nightly build
The Windows autobuilder contains two built versions of wxWidgets. One
corresponds to the oldest supported version of the library, and is used
for CI. However, the nightly build should use a recent version of the
library, compiled with the same toolset as the main game.

This commit adapts the nightly-build pipeline to use the rebuilt recent
copy of wxWidgets.
2024-08-27 21:38:37 +02:00
ea647067f0
Enable ruff rules to check for ambiguous code
This enables some ruff rules to check for ambiguous and dead Python
code, which might cause unintended side-effects.

The enabled rules are:

- a bunch of rules related to shadowing of builtin structures (A)
- a bunch of rules checking for unused arguments (ARG)
- a rule checking for useless expressions (B018)
- a rule checking for unbound loop variables (B023)
- a rule checking redefined function parameters (PLR1704)
2024-08-27 19:28:11 +02:00
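Two of these patterns, shown in a short illustrative snippet (not from the commit):

```python
# A001: assigning to `list` shadows the builtin of the same name.
list = ["shadowed"]
del list  # restore access to the builtin

# B023: every lambda captures the loop variable by reference, so all of
# them observe its final value once the loop has finished.
callbacks = [lambda: i for i in range(3)]
print([cb() for cb in callbacks])  # [2, 2, 2]

# Fix: bind the current value via a default argument.
callbacks = [lambda i=i: i for i in range(3)]
print([cb() for cb in callbacks])  # [0, 1, 2]
```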
ee3318309b
Allow forcing non-incremental Jenkins builds
In all the builds using precompiled headers, an update to libraries will
break the CI, as those headers become outdated. It is then necessary to
force a full rebuild of the game by cleaning up the workspaces.

This commit allows this CI behavior to be triggered by specifying
[CLEANBUILD] in the first line of the commit message.
This also provides an opportunity to inform users that they need to
clean their workspaces when pulling the update.
2024-08-27 17:13:49 +02:00
2a67c9a503
Abort previous builds in pull requests
This will save precious build time by stopping the build of obsolete
commits.
2024-08-27 14:39:19 +02:00
f8ac0e4f68 Run checkrefs.py as Gitea Actions workflow 2024-08-27 13:33:21 +02:00
183a0b13f3 Add .already-built for premake to .gitignore
Signed-off-by: Ralph Sennhauser <ralph.sennhauser@gmail.com>
2024-08-27 12:55:17 +02:00
13da3457bc
Delete wrongly committed file from abdda50892 2024-08-27 12:46:33 +02:00
ae3fad73ce
Add a markdownlint pre-commit hook
This adds a pre-commit hook to lint Markdown files and fixes all
reported findings.
2024-08-27 10:06:31 +02:00
af6cda9073 Add shfmt to .pre-commit-config
Run shfmt as part of the CI workflow.

Signed-off-by: Ralph Sennhauser <ralph.sennhauser@gmail.com>
2024-08-26 09:03:00 +02:00
a04a5f70c7 Add global .editorconfig
This sets some reasonable defaults for all files.

It also adds settings for *.sh files, which double as configuration for shfmt.

Signed-off-by: Ralph Sennhauser <ralph.sennhauser@gmail.com>
2024-08-26 09:03:00 +02:00
abdda50892 Format shell scripts using shfmt
This updates shell scripts to use a consistent style that can be enforced
via pre-commit hook.

As for choosing tabs over spaces, some arguments are:

- tabs can help people with visual impairment
- tabs allow for indenting heredocs in bash
- tabs are the default for the tool shfmt

Signed-off-by: Ralph Sennhauser <ralph.sennhauser@gmail.com>
2024-08-26 09:03:00 +02:00
c763c010b8
Add a bunch of additional pre-commit hooks 2024-08-26 07:46:41 +02:00
1800b5f1c0
Add executable bits for files with shebangs 2024-08-26 07:46:38 +02:00
f58dc9b485
Remove unnecessary executable bits
This removes the executable bits from files which aren't supposed to
have them.

Also removes shebangs for files which aren't supposed to be executable.
2024-08-26 07:46:34 +02:00
05e708f987
Refactor check_* functions for better readability 2024-08-25 21:24:00 +02:00
75949e1f5a
Replace use of os.path with pathlib 2024-08-25 21:24:00 +02:00
4b77d7bb74
Compile regex pattern once
This should slightly improve performance, as the pattern only has to be
compiled once instead of during every loop iteration.
2024-08-25 21:24:00 +02:00
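A small sketch of the change (hypothetical pattern and function):

```python
import re

# Compiled once at import time...
WORD_RE = re.compile(r"\w+")


def count_words(lines):
    # ...instead of calling re.findall(r"\w+", line) inside the loop,
    # which re-resolves the pattern (via re's cache) on every iteration.
    return sum(len(WORD_RE.findall(line)) for line in lines)


print(count_words(["one two", "three"]))  # 3
```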
87029f2a91
Replace uses of re.split() with str.split() 2024-08-25 21:23:59 +02:00
24e67746f9
Log a warning when a specified mod can't be found 2024-08-25 21:23:59 +02:00
92d92fac1b
Remove unnecessary default value for dict.get() 2024-08-25 21:23:59 +02:00
0dea22285e
Fix the exit codes of checkrefs.py 2024-08-25 21:23:59 +02:00
39f2889cf7
Support calling checkrefs.py from other dirs 2024-08-25 21:23:59 +02:00
f22d0d899e Add final to classes introduced in f9114a87f2
A class which cannot be derived from should state that in its
declaration.
2024-08-25 13:38:09 +02:00
109 changed files with 2504 additions and 2259 deletions

.editorconfig Normal file (+14)

@ -0,0 +1,14 @@
root = true
[*]
charset = utf-8
insert_final_newline = true
trim_trailing_whitespace = true
[*.sh]
indent_style = tab
function_next_line = true
switch_case_indent = true
[build/premake/premake5/**]
ignore = true


@ -26,6 +26,7 @@ All details about each step are documented in [ReleaseProcess](wiki/ReleaseProce
**All the following issues must be fixed in `main` (which closes the issue) and then cherry-picked to the release branch (after that, you can tick the checkbox below).**
Here are the Release Blocking issues currently delaying the release:
- [x] None currently
## Progress Tracking
@ -56,6 +57,7 @@ Here are the Release Blocking issues currently delaying the release:
---
Before moving on with Full Freeze, make sure that:
- [ ] At least 10 days have passed since the string freeze announcement
- [ ] Only this ticket remains in the Milestone
- [ ] All previous checkboxes are ticked


@ -0,0 +1,20 @@
---
name: checkrefs
on:
- push
- pull_request
jobs:
checkrefs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
- name: Install lxml
run: pip3 install lxml
- name: Workaround for authentication problem with LFS
# https://gitea.com/gitea/act_runner/issues/164
run: git config --local http.${{ gitea.server_url }}/${{ gitea.repository }}.git/info/lfs/objects/.extraheader ''
- name: Download necessary LFS assets
run: git lfs pull -I binaries/data/mods/public/maps
- name: Check for missing references
run: ./source/tools/entity/checkrefs.py


@ -9,4 +9,14 @@ jobs:
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
- id: restore-pip-cache
uses: actions/cache/restore@v4
with:
key: pip-cache-v1-${{ github.workflow }}
path: ~/.cache/pip
- uses: pre-commit/action@v3.0.1
- uses: actions/cache/save@v4
if: steps.restore-pip-cache.outcome == 'success'
with:
key: pip-cache-v1-${{ github.workflow }}
path: ~/.cache/pip

.gitignore vendored (+1)

@ -35,6 +35,7 @@ build/premake/premake5/build/vs200*/
build/premake/premake5/build/vs201[0-5]/
build/premake/premake5/build/xcode4/
build/premake/premake5/packages/
build/premake/.already-built
build/premake/.gccmachine.tmp
build/premake/.gccver.tmp

.markdownlint.yaml Normal file (+8)

@ -0,0 +1,8 @@
---
commands-show-output: false
default: true
no-bare-urls: false
line-length:
line_length: 100
fenced-code-language: false
no-space-in-emphasis: false


@ -1,5 +1,26 @@
---
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.6.0
hooks:
- id: check-case-conflict
- id: check-executables-have-shebangs
exclude: ^build/premake/premake5
- id: check-json
- id: check-merge-conflict
- id: check-shebang-scripts-are-executable
exclude: ^build/premake/premake5
- id: check-symlinks
- id: check-toml
- id: check-xml
exclude: |
(?x)(
^binaries/data/mods/_test.xero/a/b/test1.xml|
^binaries/data/mods/_test.xero/test1.xml|
^binaries/data/mods/_test.sim/simulation/templates.illformed.xml|
^binaries/data/mods/public/maps/.*\.xml
)
- id: check-yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.6.1
hooks:
@ -24,3 +45,27 @@ repos:
types: [text]
files: ^binaries/
exclude: (^binaries/data/mods/(mod|public)/art/.*\.xml|\.dae$)
- repo: https://github.com/scop/pre-commit-shfmt
rev: v3.9.0-1
hooks:
- id: shfmt
args:
- --diff
- --simplify
stages: [pre-commit]
exclude: ^build/premake/premake5/
- repo: https://github.com/shellcheck-py/shellcheck-py
rev: v0.10.0.1
hooks:
- id: shellcheck
exclude: ^build/premake/premake5/
- repo: https://github.com/igorshubovych/markdownlint-cli
rev: v0.41.0
hooks:
- id: markdownlint
exclude: |
(?x)(
^.gitea/ISSUE_TEMPLATE/|
^build/premake/|
^source/third_party/
)


@ -15,7 +15,6 @@ There are several ways to contact us and find more information:
- Gitea (development info, bug tracker): https://gitea.wildfiregames.com/
- IRC: #0ad on irc.quakenet.org
## Running precompiled binaries on Windows
A precompiled, ready-to-play development version of the game is available from
@ -27,33 +26,28 @@ In a checkout of the `nightly-build` SVN repository, open the "binaries\system"
- To launch the game: Run pyrogenesis.exe
- To launch the map editor: Run Atlas.bat or "pyrogenesis.exe -editor"
## Compiling the game from source code
The instructions for compiling the game on Windows, Linux and OS X are at
[BuildInstructions](https://gitea.wildfiregames.com/0ad/0ad/wiki/BuildInstructions).
## Reporting problems
Bugs should be reported on Gitea. For information on reporting problems
and finding logs, see [ReportingErrors](https://gitea.wildfiregames.com/0ad/0ad/wiki/ReportingErrors).
## Contributing Code
If you want to help out programming for the game, have a look at
[GettingStartedProgrammers](https://gitea.wildfiregames.com/0ad/0ad/wiki/GettingStartedProgrammers)
or contact us on #0ad-dev on irc.quakenet.org
## Contributing Artwork
If you want to make artwork for the game, have a look at
[For Artists](https://gitea.wildfiregames.com/0ad/0ad/wiki#for-artists)
or visit [the forums](https://www.wildfiregames.com/forum).
## Translating
You can help translating the game at https://www.transifex.com/projects/p/0ad

binaries/data/mods/_test.gui/gui/event/event.js Executable file → Normal file

binaries/data/mods/_test.gui/gui/event/event.xml Executable file → Normal file

binaries/data/mods/_test.gui/gui/gui.rng Executable file → Normal file

binaries/data/mods/_test.gui/gui/gui_page.rng Executable file → Normal file

binaries/data/mods/_test.gui/gui/hotkey/hotkey.js Executable file → Normal file

binaries/data/mods/_test.gui/gui/hotkey/hotkey.xml Executable file → Normal file

binaries/data/mods/_test.gui/gui/hotkey/page_hotkey.xml Executable file → Normal file


binaries/data/mods/public/maps/scenarios/combat_demo.pmp Executable file → Normal file

binaries/data/mods/public/maps/scenarios/combat_demo.xml Executable file → Normal file

binaries/data/mods/public/maps/scenarios/combat_demo_huge.xml Executable file → Normal file


@ -42,6 +42,11 @@ UnitAI.prototype.Schema =
"<data type='nonNegativeInteger'/>" +
"</element>" +
"</optional>" +
"<optional>" +
"<element name='Precision'>" +
"<data type='nonNegativeInteger'/>" +
"</element>" +
"</optional>" +
"<optional>" +
"<interleave>" +
"<element name='RoamDistance'>" +
@ -3462,6 +3467,7 @@ UnitAI.prototype.Init = function()
this.formationAnimationVariant = undefined;
this.cheeringTime = +(this.template.CheeringTime || 0);
this.precision = +(this.template.Precision || 0);
this.SetStance(this.template.DefaultStance);
};
@ -3776,7 +3782,7 @@ UnitAI.prototype.SetupLOSRangeQuery = function(enable = true)
// Do not compensate for entity sizes: LOS doesn't, and UnitAI relies on that.
this.losRangeQuery = cmpRangeManager.CreateActiveQuery(this.entity,
range.min, range.max, players, IID_Identity,
cmpRangeManager.GetEntityFlagMask("normal"), false);
cmpRangeManager.GetEntityFlagMask("normal"), false, this.precision);
if (enable)
cmpRangeManager.EnableActiveQuery(this.losRangeQuery);
@ -3841,7 +3847,7 @@ UnitAI.prototype.SetupAttackRangeQuery = function(enable = true)
// Do not compensate for entity sizes: LOS doesn't, and UnitAI relies on that.
this.losAttackRangeQuery = cmpRangeManager.CreateActiveQuery(this.entity,
range.min, range.max, players, IID_Resistance,
cmpRangeManager.GetEntityFlagMask("normal"), false);
cmpRangeManager.GetEntityFlagMask("normal"), false, this.precision);
if (enable)
cmpRangeManager.EnableActiveQuery(this.losAttackRangeQuery);


@ -109,6 +109,7 @@
<CanPatrol>true</CanPatrol>
<PatrolWaitTime>1</PatrolWaitTime>
<CheeringTime>2800</CheeringTime>
<Precision>2</Precision>
<Formations datatype="tokens">
special/formations/null
special/formations/box


@ -1,4 +1,4 @@
#!/bin/bash
#!/bin/sh
set -e
set -o nounset
@ -21,150 +21,149 @@ build_js185=true
JOBS=${JOBS:="-j4"}
SYSROOT=$TOOLCHAIN/sysroot
export PATH=$TOOLCHAIN/bin:$PATH
export PATH="$TOOLCHAIN/bin:$PATH"
CFLAGS="-mandroid -mthumb -mcpu=cortex-a9 -mfpu=neon -mfloat-abi=softfp"
mkdir -p files
pushd files
cd files
if [ ! -e boost_1_45_0.tar.bz2 ]; then
wget http://downloads.sourceforge.net/project/boost/boost/1.45.0/boost_1_45_0.tar.bz2
wget http://downloads.sourceforge.net/project/boost/boost/1.45.0/boost_1_45_0.tar.bz2
fi
if [ ! -e curl-7.33.0.tar.bz2 ]; then
wget http://curl.haxx.se/download/curl-7.33.0.tar.bz2
wget http://curl.haxx.se/download/curl-7.33.0.tar.bz2
fi
if [ ! -e MysticTreeGames-Boost-for-Android-70838fc.tar.gz ]; then
wget https://github.com/MysticTreeGames/Boost-for-Android/tarball/70838fcfba930646cac724a77f5c626e930431f6 -O MysticTreeGames-Boost-for-Android-70838fc.tar.gz
wget https://github.com/MysticTreeGames/Boost-for-Android/tarball/70838fcfba930646cac724a77f5c626e930431f6 -O MysticTreeGames-Boost-for-Android-70838fc.tar.gz
fi
if [ ! -e enet-1.3.3.tar.gz ]; then
wget http://enet.bespin.org/download/enet-1.3.3.tar.gz
wget http://enet.bespin.org/download/enet-1.3.3.tar.gz
fi
if [ ! -e js185-1.0.0.tar.gz ]; then
cp ../../../libraries/source/spidermonkey/js185-1.0.0.tar.gz .
cp ../../../libraries/source/spidermonkey/js185-1.0.0.tar.gz .
fi
if [ ! -e libjpeg-turbo-1.3.0.tar.gz ]; then
wget http://downloads.sourceforge.net/project/libjpeg-turbo/1.3.0/libjpeg-turbo-1.3.0.tar.gz
wget http://downloads.sourceforge.net/project/libjpeg-turbo/1.3.0/libjpeg-turbo-1.3.0.tar.gz
fi
if [ ! -e libpng-1.5.8.tar.xz ]; then
wget http://prdownloads.sourceforge.net/libpng/libpng-1.5.8.tar.xz
wget http://prdownloads.sourceforge.net/libpng/libpng-1.5.8.tar.xz
fi
if [ ! -e libxml2-2.7.8.tar.gz ]; then
wget ftp://xmlsoft.org/libxml2/libxml2-2.7.8.tar.gz
wget ftp://xmlsoft.org/libxml2/libxml2-2.7.8.tar.gz
fi
popd
cd -
if [ "$build_toolchain" = "true" ]; then
rm -r $TOOLCHAIN || true
$NDK/build/tools/make-standalone-toolchain.sh --platform=android-14 --toolchain=arm-linux-androideabi-4.6 --install-dir=$TOOLCHAIN --system=linux-x86_64
rm -r "$TOOLCHAIN" || true
"$NDK"/build/tools/make-standalone-toolchain.sh --platform=android-14 --toolchain=arm-linux-androideabi-4.6 --install-dir="$TOOLCHAIN" --system=linux-x86_64
mkdir -p $SYSROOT/usr/local
mkdir -p "$SYSROOT"/usr/local
# Set up some symlinks to make the SpiderMonkey build system happy
ln -sfT ../platforms $NDK/build/platforms
for f in $TOOLCHAIN/bin/arm-linux-androideabi-*; do
ln -sf $f ${f/arm-linux-androideabi-/arm-eabi-}
done
# Set up some symlinks to make the SpiderMonkey build system happy
ln -sfT ../platforms "$NDK"/build/platforms
for f in "$TOOLCHAIN"/bin/arm-linux-androideabi-*; do
ln -sf "$f" "$(printf '%s' "$f" | sed "s/arm-linux-androideabi-/arm-eabi-/")"
done
# Set up some symlinks for the typical autoconf-based build systems
for f in $TOOLCHAIN/bin/arm-linux-androideabi-*; do
ln -sf $f ${f/arm-linux-androideabi-/arm-linux-eabi-}
done
# Set up some symlinks for the typical autoconf-based build systems
for f in "$TOOLCHAIN"/bin/arm-linux-androideabi-*; do
ln -sf "$f" "$(printf '%s' "$f" | sed "s/arm-linux-androideabi-/arm-linux-eabi-/")"
done
fi
mkdir -p temp
if [ "$build_boost" = "true" ]; then
rm -rf temp/MysticTreeGames-Boost-for-Android-70838fc
tar xvf files/MysticTreeGames-Boost-for-Android-70838fc.tar.gz -C temp/
cp files/boost_1_45_0.tar.bz2 temp/MysticTreeGames-Boost-for-Android-70838fc/
patch temp/MysticTreeGames-Boost-for-Android-70838fc/build-android.sh < boost-android-build.patch
pushd temp/MysticTreeGames-Boost-for-Android-70838fc
./build-android.sh $TOOLCHAIN
cp -rv build/{include,lib} $SYSROOT/usr/local/
popd
rm -rf temp/MysticTreeGames-Boost-for-Android-70838fc
tar xvf files/MysticTreeGames-Boost-for-Android-70838fc.tar.gz -C temp/
cp files/boost_1_45_0.tar.bz2 temp/MysticTreeGames-Boost-for-Android-70838fc/
patch temp/MysticTreeGames-Boost-for-Android-70838fc/build-android.sh <boost-android-build.patch
cd temp/MysticTreeGames-Boost-for-Android-70838fc
./build-android.sh "$TOOLCHAIN"
cp -rv build/include build/lib "$SYSROOT"/usr/local/
cd -
fi
if [ "$build_curl" = "true" ]; then
rm -rf temp/curl-7.33.0
tar xvf files/curl-7.33.0.tar.bz2 -C temp/
pushd temp/curl-7.33.0
./configure --host=arm-linux-androideabi --with-sysroot=$SYSROOT --prefix=$SYSROOT/usr/local CFLAGS="$CFLAGS" LDFLAGS="-lm" --disable-shared
make -j3
make install
popd
rm -rf temp/curl-7.33.0
tar xvf files/curl-7.33.0.tar.bz2 -C temp/
cd temp/curl-7.33.0
./configure --host=arm-linux-androideabi --with-sysroot="$SYSROOT" --prefix="$SYSROOT"/usr/local CFLAGS="$CFLAGS" LDFLAGS="-lm" --disable-shared
make -j3
make install
cd -
fi
if [ "$build_libpng" = "true" ]; then
rm -rf temp/libpng-1.5.8
tar xvf files/libpng-1.5.8.tar.xz -C temp/
pushd temp/libpng-1.5.8
./configure --host=arm-linux-eabi --with-sysroot=$SYSROOT --prefix=$SYSROOT/usr/local CFLAGS="$CFLAGS"
make $JOBS
make install
popd
rm -rf temp/libpng-1.5.8
tar xvf files/libpng-1.5.8.tar.xz -C temp/
cd temp/libpng-1.5.8
./configure --host=arm-linux-eabi --with-sysroot="$SYSROOT" --prefix="$SYSROOT"/usr/local CFLAGS="$CFLAGS"
make "$JOBS"
make install
cd -
fi
if [ "$build_libjpeg" = "true" ]; then
rm -rf temp/libjpeg-turbo-1.3.0
tar xvf files/libjpeg-turbo-1.3.0.tar.gz -C temp/
pushd temp/libjpeg-turbo-1.3.0
./configure --host=arm-linux-eabi --with-sysroot=$SYSROOT --prefix=$SYSROOT/usr/local CFLAGS="$CFLAGS" LDFLAGS="-lm"
make $JOBS
make install
popd
rm -rf temp/libjpeg-turbo-1.3.0
tar xvf files/libjpeg-turbo-1.3.0.tar.gz -C temp/
cd temp/libjpeg-turbo-1.3.0
./configure --host=arm-linux-eabi --with-sysroot="$SYSROOT" --prefix="$SYSROOT"/usr/local CFLAGS="$CFLAGS" LDFLAGS="-lm"
make "$JOBS"
make install
cd -
fi
if [ "$build_libxml2" = "true" ]; then
rm -rf temp/libxml2-2.7.8
tar xvf files/libxml2-2.7.8.tar.gz -C temp/
patch temp/libxml2-2.7.8/Makefile.in < libxml2-android-build.patch
pushd temp/libxml2-2.7.8
./configure --host=arm-linux-eabi --with-sysroot=$SYSROOT --prefix=$SYSROOT/usr/local CFLAGS="$CFLAGS"
make $JOBS
make install
popd
rm -rf temp/libxml2-2.7.8
tar xvf files/libxml2-2.7.8.tar.gz -C temp/
patch temp/libxml2-2.7.8/Makefile.in <libxml2-android-build.patch
cd temp/libxml2-2.7.8
./configure --host=arm-linux-eabi --with-sysroot="$SYSROOT" --prefix="$SYSROOT"/usr/local CFLAGS="$CFLAGS"
make "$JOBS"
make install
cd -
fi
if [ "$build_enet" = "true" ]; then
rm -rf temp/enet-1.3.3
tar xvf files/enet-1.3.3.tar.gz -C temp/
pushd temp/enet-1.3.3
./configure --host=arm-linux-eabi --with-sysroot=$SYSROOT --prefix=$SYSROOT/usr/local CFLAGS="$CFLAGS"
make $JOBS
make install
popd
rm -rf temp/enet-1.3.3
tar xvf files/enet-1.3.3.tar.gz -C temp/
cd temp/enet-1.3.3
./configure --host=arm-linux-eabi --with-sysroot="$SYSROOT" --prefix="$SYSROOT"/usr/local CFLAGS="$CFLAGS"
make "$JOBS"
make install
cd -
fi
if [ "$build_js185" = "true" ]; then
rm -rf temp/js-1.8.5
tar xvf files/js185-1.0.0.tar.gz -C temp/
patch temp/js-1.8.5/js/src/assembler/wtf/Platform.h < js185-android-build.patch
pushd temp/js-1.8.5/js/src
CXXFLAGS="-I $TOOLCHAIN/arm-linux-androideabi/include/c++/4.4.3/arm-linux-androideabi" \
HOST_CXXFLAGS="-DFORCE_LITTLE_ENDIAN" \
./configure \
--prefix=$SYSROOT/usr/local \
--target=arm-android-eabi \
--with-android-ndk=$NDK \
--with-android-sdk=$SDK \
--with-android-toolchain=$TOOLCHAIN \
--with-android-version=9 \
--disable-tests \
--disable-shared-js \
--with-arm-kuser \
CFLAGS="$CFLAGS"
make $JOBS JS_DISABLE_SHELL=1
make install JS_DISABLE_SHELL=1
popd
rm -rf temp/js-1.8.5
tar xvf files/js185-1.0.0.tar.gz -C temp/
patch temp/js-1.8.5/js/src/assembler/wtf/Platform.h <js185-android-build.patch
cd temp/js-1.8.5/js/src
CXXFLAGS="-I $TOOLCHAIN/arm-linux-androideabi/include/c++/4.4.3/arm-linux-androideabi" \
HOST_CXXFLAGS="-DFORCE_LITTLE_ENDIAN" \
./configure \
--prefix="$SYSROOT"/usr/local \
--target=arm-android-eabi \
--with-android-ndk="$NDK" \
--with-android-sdk="$SDK" \
--with-android-toolchain="$TOOLCHAIN" \
--with-android-version=9 \
--disable-tests \
--disable-shared-js \
--with-arm-kuser \
CFLAGS="$CFLAGS"
make "$JOBS" JS_DISABLE_SHELL=1
make install JS_DISABLE_SHELL=1
cd -
fi


@ -1,21 +1,21 @@
# Linting
This folder contains tools for linting 0 A.D. code
Linting is done via Arcanist: https://secure.phabricator.com/book/phabricator/article/arcanist_lint/
Linting is done via Arcanist:
https://secure.phabricator.com/book/phabricator/article/arcanist_lint/
## Linters
- `text` is configured to detect whitespace issues.
- `json` detects JSON syntax errors.
- `project-name` detects misspellings of the project name "0 A.D.". In particular the non-breaking space.
- `licence-year` detects Copyright header years and compares against modification time.
- `eslint`, if installed, will run on javascript files.
- `cppcheck`, if installed, will run on C++ files.
## Installation
This assumes you have arcanist already installed. If not, consult https://trac.wildfiregames.com/wiki/Phabricator#UsingArcanist .
This assumes you have arcanist already installed. If not, consult
https://trac.wildfiregames.com/wiki/Phabricator#UsingArcanist .
The linting is done via custom PHP scripts, residing in `pyrolint/`.
Configuration is at the root of the project, under `.arclint`.
@ -26,8 +26,12 @@ We provide dummy replacement for external linters, so that they are not required
#### eslint
Installation via npm is recommended. The linter assumes a global installation of both eslint and the "brace-rules" plugin.
`npm install -g eslint@latest eslint-plugin-brace-rules`
Installation via npm is recommended. The linter assumes a global installation
of both eslint and the "brace-rules" plugin.
```
npm install -g eslint@latest eslint-plugin-brace-rules
```
See also https://eslint.org/docs/user-guide/getting-started

build/arclint/dummies/cppcheck.bat Executable file → Normal file

build/arclint/dummies/eslint.bat Executable file → Normal file


@ -8,20 +8,23 @@
rm -f app.info
for APPDIR in ../workspaces/gcc/obj/*; do
lcov -d "$APPDIR" --zerocounters
lcov -d "$APPDIR" -b ../workspaces/gcc --capture --initial -o temp.info
if [ -e app.info ]; then
lcov -a app.info -a temp.info -o app.info
else
mv temp.info app.info
fi
lcov -d "$APPDIR" --zerocounters
lcov -d "$APPDIR" -b ../workspaces/gcc --capture --initial -o temp.info
if [ -e app.info ]; then
lcov -a app.info -a temp.info -o app.info
else
mv temp.info app.info
fi
done
(cd ../../binaries/system/; ./test_dbg)
(
cd ../../binaries/system/ || exit 1
./test_dbg
) || exit 1
for APPDIR in ../workspaces/gcc/obj/*; do
lcov -d "$APPDIR" -b ../workspaces/gcc --capture -o temp.info &&
lcov -a app.info -a temp.info -o app.info
lcov -d "$APPDIR" -b ../workspaces/gcc --capture -o temp.info &&
lcov -a app.info -a temp.info -o app.info
done
lcov -r app.info '/usr/*' -o app.info
@ -30,4 +33,7 @@ lcov -r app.info '*/third_party/*' -o app.info
rm -rf output
mkdir output
(cd output; genhtml ../app.info)
(
cd output || exit 1
genhtml ../app.info
) || exit 1


@ -1,13 +0,0 @@
FROM python:3.12-slim-bookworm
ARG DEBIAN_FRONTEND=noninteractive
ARG DEBCONF_NOWARNINGS="yes"
RUN apt -qq update && apt install -qqy --no-install-recommends \
git \
git-lfs \
subversion \
&& apt clean
RUN git lfs install --system --skip-smudge
RUN pip3 install --upgrade lxml


@ -1,45 +0,0 @@
/* Copyright (C) 2024 Wildfire Games.
* This file is part of 0 A.D.
*
* 0 A.D. is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 2 of the License, or
* (at your option) any later version.
*
* 0 A.D. is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with 0 A.D. If not, see <http://www.gnu.org/licenses/>.
*/
// This pipeline runs source/tools/entity/checkrefs.py to identify
// mistakes and oversights in references to game content (assets,
// templates, etc).
pipeline {
agent {
dockerfile {
label 'LinuxAgent'
customWorkspace 'workspace/checkrefs'
dir 'build/jenkins/dockerfiles'
filename 'tools.Dockerfile'
}
}
stages {
stage("Asset download") {
steps {
sh "git lfs pull -I binaries/data/mods/public/maps"
}
}
stage("Data checks") {
steps {
sh "cd source/tools/entity/ && python3 checkrefs.py -tax"
}
}
}
}


@ -19,6 +19,9 @@
// Due to a compilation bug with SpiderMonkey, the game is only built with the release configuration.
pipeline {
// Stop previous build in pull requests, but not in branches
options { disableConcurrentBuilds(abortPrevious: env.CHANGE_ID != null) }
agent {
node {
label 'FreeBSDAgent'
@ -39,8 +42,8 @@ pipeline {
sh "git lfs pull -I binaries/data/tests"
sh "git lfs pull -I \"binaries/data/mods/_test.*\""
sh "libraries/build-source-libs.sh -j2 2> freebsd-prebuild-errors.log"
sh "build/workspaces/update-workspaces.sh -j2 --jenkins-tests 2>> freebsd-prebuild-errors.log"
sh "libraries/build-source-libs.sh 2> freebsd-prebuild-errors.log"
sh "build/workspaces/update-workspaces.sh --jenkins-tests 2>> freebsd-prebuild-errors.log"
}
post {
failure {
@ -51,7 +54,15 @@ pipeline {
stage ("Release Build") {
steps {
sh "cd build/workspaces/gcc/ && gmake -j2 config=release"
retry (2) {
script {
try { sh "cd build/workspaces/gcc/ && gmake config=release" }
catch(e) {
sh "cd build/workspaces/gcc/ && gmake clean config=release"
throw e
}
}
}
timeout(time: 15) {
sh "cd binaries/system/ && ./test > cxxtest-release.xml"


@ -18,6 +18,9 @@
// This pipeline builds the game on Linux (with min and max supported versions of GCC and clang) and runs tests.
pipeline {
// Stop previous build in pull requests, but not in branches
options { disableConcurrentBuilds(abortPrevious: env.CHANGE_ID != null) }
agent none
stages {
stage("Setup") {
@ -62,12 +65,13 @@ pipeline {
sh "git lfs pull -I binaries/data/tests"
sh "git lfs pull -I \"binaries/data/mods/_test.*\""
sh "libraries/build-source-libs.sh -j1 2> ${JENKINS_COMPILER}-prebuild-errors.log"
sh "libraries/build-source-libs.sh 2> ${JENKINS_COMPILER}-prebuild-errors.log"
script {
if (env.JENKINS_PCH == "no-pch") {
sh "build/workspaces/update-workspaces.sh -j1 --jenkins-tests --without-pch 2>> ${JENKINS_COMPILER}-prebuild-errors.log"
sh "build/workspaces/update-workspaces.sh --jenkins-tests --without-pch 2>> ${JENKINS_COMPILER}-prebuild-errors.log"
} else {
sh "build/workspaces/update-workspaces.sh -j1 --jenkins-tests 2>> ${JENKINS_COMPILER}-prebuild-errors.log"
sh "build/workspaces/update-workspaces.sh --jenkins-tests 2>> ${JENKINS_COMPILER}-prebuild-errors.log"
}
}
}
@ -80,7 +84,15 @@ pipeline {
stage("Debug Build") {
steps {
sh "cd build/workspaces/gcc/ && make -j1 config=debug"
retry(2) {
script {
try { sh "cd build/workspaces/gcc/ && make config=debug" }
catch(e) {
sh "cd build/workspaces/gcc/ && make clean config=debug"
throw e
}
}
}
timeout(time: 15) {
sh "cd binaries/system/ && ./test_dbg > cxxtest-debug.xml"
}
@ -94,7 +106,15 @@ pipeline {
stage("Release Build") {
steps {
sh "cd build/workspaces/gcc/ && make -j1 config=release"
retry(2) {
script {
try { sh "cd build/workspaces/gcc/ && make config=release" }
catch(e) {
sh "cd build/workspaces/gcc/ && make clean config=release"
throw e
}
}
}
timeout(time: 15) {
sh "cd binaries/system/ && ./test > cxxtest-release.xml"
}
@ -110,7 +130,7 @@ pipeline {
post {
always {
node('LinuxSlave') { ws('workspace/linux') {
node('LinuxAgent') { ws('workspace/linux') {
recordIssues enabledForFailure: true, qualityGates: [[threshold: 1, type: 'NEW']], tools: [clang(), gcc()]
}}
}


@ -18,6 +18,9 @@
// This pipeline builds the game on macOS (which uses the clang-10 compiler) and runs tests.
pipeline {
// Stop previous build in pull requests, but not in branches
options { disableConcurrentBuilds(abortPrevious: env.CHANGE_ID != null) }
agent {
node {
label 'macOSAgent'
@ -33,8 +36,8 @@ pipeline {
sh "git lfs pull -I binaries/data/tests"
sh "git lfs pull -I \"binaries/data/mods/_test.*\""
sh "libraries/build-macos-libs.sh -j4 2> macos-prebuild-errors.log"
sh "build/workspaces/update-workspaces.sh -j4 --jenkins-tests 2>> macos-prebuild-errors.log"
sh "libraries/build-macos-libs.sh 2> macos-prebuild-errors.log"
sh "build/workspaces/update-workspaces.sh --jenkins-tests 2>> macos-prebuild-errors.log"
}
post {
failure {
@ -45,7 +48,15 @@ pipeline {
stage("Debug Build") {
steps {
sh "cd build/workspaces/gcc/ && make -j4 config=debug"
retry(2) {
script {
try { sh "cd build/workspaces/gcc/ && make config=debug" }
catch(e) {
sh "cd build/workspaces/gcc/ && make clean config=debug"
throw e
}
}
}
timeout(time: 15) {
sh "cd binaries/system/ && ./test_dbg > cxxtest-debug.xml"
}
@ -59,7 +70,15 @@ pipeline {
stage("Release Build") {
steps {
sh "cd build/workspaces/gcc/ && make -j4 config=release"
retry(2) {
script {
try { sh "cd build/workspaces/gcc/ && make config=release" }
catch(e) {
sh "cd build/workspaces/gcc/ && make clean config=release"
throw e
}
}
}
timeout(time: 15) {
sh "cd binaries/system/ && ./test > cxxtest-release.xml"
}


@ -63,8 +63,8 @@ pipeline {
stage ("Pre-build") {
steps {
bat "cd libraries && get-windows-libs.bat"
bat "(robocopy C:\\wxwidgets3.0.4\\lib libraries\\win32\\wxwidgets\\lib /MIR /NDL /NJH /NJS /NP /NS /NC) ^& IF %ERRORLEVEL% LEQ 1 exit 0"
bat "(robocopy C:\\wxwidgets3.0.4\\include libraries\\win32\\wxwidgets\\include /MIR /NDL /NJH /NJS /NP /NS /NC) ^& IF %ERRORLEVEL% LEQ 1 exit 0"
bat "(robocopy C:\\wxwidgets3.2.5\\lib libraries\\win32\\wxwidgets\\lib /MIR /NDL /NJH /NJS /NP /NS /NC) ^& IF %ERRORLEVEL% LEQ 1 exit 0"
bat "(robocopy C:\\wxwidgets3.2.5\\include libraries\\win32\\wxwidgets\\include /MIR /NDL /NJH /NJS /NP /NS /NC) ^& IF %ERRORLEVEL% LEQ 1 exit 0"
bat "cd build\\workspaces && update-workspaces.bat --atlas --without-pch --without-tests"
}
}
@ -133,16 +133,16 @@ pipeline {
stage("Update translations") {
steps {
ws("workspace/nightly-svn") {
bat "cd source\\tools\\i18n && python updateTemplates.py"
bat "cd source\\tools\\i18n && python update_templates.py"
withCredentials([string(credentialsId: 'TX_TOKEN', variable: 'TX_TOKEN')]) {
bat "cd source\\tools\\i18n && python pullTranslations.py"
bat "cd source\\tools\\i18n && python pull_translations.py"
}
bat "cd source\\tools\\i18n && python generateDebugTranslation.py --long"
bat "cd source\\tools\\i18n && python cleanTranslationFiles.py"
bat "cd source\\tools\\i18n && python generate_debug_translation.py --long"
bat "cd source\\tools\\i18n && python clean_translation_files.py"
script { if (!params.NEW_REPO) {
bat "python source\\tools\\i18n\\checkDiff.py --verbose"
bat "python source\\tools\\i18n\\check_diff.py --verbose"
}}
bat "cd source\\tools\\i18n && python creditTranslators.py"
bat "cd source\\tools\\i18n && python credit_translators.py"
}
}
}


@ -18,9 +18,12 @@
// This pipeline builds the game on Windows (which uses the MSVC 15.0 compiler from Visual Studio 2017) and runs tests.
def visualStudioPath = "\"C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Community\\MSBuild\\15.0\\Bin\\MSBuild.exe\""
def buildOptions = "/p:PlatformToolset=v141_xp /p:XPDeprecationWarning=false /t:pyrogenesis /t:AtlasUI /t:test /m:2 /nologo -clp:NoSummary"
def buildOptions = "/p:PlatformToolset=v141_xp /p:XPDeprecationWarning=false /t:pyrogenesis /t:AtlasUI /t:test /nologo -clp:NoSummary"
pipeline {
// Stop previous build in pull requests, but not in branches
options { disableConcurrentBuilds(abortPrevious: env.CHANGE_ID != null) }
agent {
node {
label 'WindowsAgent'
@ -45,7 +48,15 @@ pipeline {
stage("Debug Build") {
steps {
bat("cd build\\workspaces\\vs2017 && ${visualStudioPath} pyrogenesis.sln /p:Configuration=Debug ${buildOptions}")
retry(2) {
script {
try { bat("cd build\\workspaces\\vs2017 && ${visualStudioPath} pyrogenesis.sln /p:Configuration=Debug ${buildOptions}") }
catch(e) {
bat("cd build\\workspaces\\vs2017 && ${visualStudioPath} pyrogenesis.sln /p:Configuration=Debug /t:Clean")
throw e
}
}
}
timeout(time: 15) {
bat "cd binaries\\system && test_dbg.exe > cxxtest-debug.xml"
}
@ -59,7 +70,15 @@ pipeline {
stage ("Release Build") {
steps {
bat("cd build\\workspaces\\vs2017 && ${visualStudioPath} pyrogenesis.sln /p:Configuration=Release ${buildOptions}")
retry(2) {
script {
try { bat("cd build\\workspaces\\vs2017 && ${visualStudioPath} pyrogenesis.sln /p:Configuration=Release ${buildOptions}") }
catch(e) {
bat("cd build\\workspaces\\vs2017 && ${visualStudioPath} pyrogenesis.sln /p:Configuration=Release /t:Clean")
throw e
}
}
}
timeout(time: 5) {
bat "cd binaries\\system && test.exe > cxxtest-release.xml"
}


@ -3,10 +3,9 @@ set -e
LIB_VERSION="premake-5-alpha-14+wildfiregames.1"
if [ -e .already-built ] && [ "$(cat .already-built)" = "${LIB_VERSION}" ]
then
echo "Premake 5 is already up to date."
exit
if [ -e .already-built ] && [ "$(cat .already-built)" = "${LIB_VERSION}" ]; then
echo "Premake 5 is already up to date."
exit
fi
JOBS=${JOBS:="-j2"}
@ -20,16 +19,16 @@ echo
PREMAKE_BUILD_DIR=premake5/build/gmake2.unix
# BSD and OS X need different Makefiles
case "$OS" in
"GNU/kFreeBSD" )
# use default gmake2.unix (needs -ldl as we have a GNU userland and libc)
;;
*"BSD" )
PREMAKE_BUILD_DIR=premake5/build/gmake2.bsd
;;
"Darwin" )
PREMAKE_BUILD_DIR=premake5/build/gmake2.macosx
;;
"GNU/kFreeBSD")
# use default gmake2.unix (needs -ldl as we have a GNU userland and libc)
;;
*"BSD")
PREMAKE_BUILD_DIR=premake5/build/gmake2.bsd
;;
"Darwin")
PREMAKE_BUILD_DIR=premake5/build/gmake2.macosx
;;
esac
${MAKE} -C "${PREMAKE_BUILD_DIR}" "${JOBS}" CFLAGS="$CFLAGS" config=release
echo "${LIB_VERSION}" > .already-built
echo "${LIB_VERSION}" >.already-built


@ -1070,6 +1070,7 @@ function setup_main_exe ()
if os.istarget("windows") then
files { source_root.."lib/sysdep/os/win/icon.rc" }
files { source_root.."lib/sysdep/os/win/pyrogenesis.rc" }
-- from "lowlevel" static lib; must be added here to be linked in
files { source_root.."lib/sysdep/os/win/error_dialog.rc" }


@ -1,9 +1,9 @@
#!/bin/sh
pyrogenesis=$(which pyrogenesis 2> /dev/null)
if [ -x "$pyrogenesis" ] ; then
"$pyrogenesis" "$@"
pyrogenesis=$(which pyrogenesis 2>/dev/null)
if [ -x "$pyrogenesis" ]; then
"$pyrogenesis" "$@"
else
echo "Error: pyrogenesis not found in ($PATH)"
exit 1
echo "Error: pyrogenesis not found in ($PATH)"
exit 1
fi


@ -3,7 +3,7 @@
# This script remains for backwards compatibility, but consists of a single
# git command.
cd "$(dirname $0)/../../"
cd "$(dirname "$0")/../../" || exit 1
git clean -fx build source
echo "WARNING: This script has been split with libraries/clean-source-libs.sh"


@ -1,14 +1,14 @@
#!/bin/sh
if [ "$(id -u)" = "0" ]; then
echo "Running as root will mess up file permissions. Aborting ..." 1>&2
exit 1
echo "Running as root will mess up file permissions. Aborting ..." 1>&2
exit 1
fi
die()
{
echo ERROR: $*
exit 1
echo ERROR: "$*"
exit 1
}
OS=${OS:="$(uname -s)"}
@ -16,15 +16,15 @@ OS=${OS:="$(uname -s)"}
# Some of our makefiles depend on GNU make, so we set some sane defaults if MAKE
# is not set.
case "$OS" in
"FreeBSD" | "OpenBSD" )
MAKE=${MAKE:="gmake"}
;;
* )
MAKE=${MAKE:="make"}
;;
"FreeBSD" | "OpenBSD")
MAKE=${MAKE:="gmake"}
;;
*)
MAKE=${MAKE:="make"}
;;
esac
cd "$(dirname $0)"
cd "$(dirname "$0")" || die
# Now in build/workspaces/ (where we assume this script resides)
# Parse command-line options:
@ -35,49 +35,47 @@ enable_atlas=true
JOBS=${JOBS:="-j2"}
for i in "$@"
do
case $i in
--with-system-premake5 ) with_system_premake5=true ;;
--enable-atlas ) enable_atlas=true ;;
--disable-atlas ) enable_atlas=false ;;
-j* ) JOBS=$i ;;
# Assume any other --options are for Premake
--* ) premake_args="${premake_args} $i" ;;
esac
for i in "$@"; do
case $i in
--with-system-premake5) with_system_premake5=true ;;
--enable-atlas) enable_atlas=true ;;
--disable-atlas) enable_atlas=false ;;
-j*) JOBS=$i ;;
# Assume any other --options are for Premake
--*) premake_args="${premake_args} $i" ;;
esac
done
if [ "$enable_atlas" = "true" ]; then
premake_args="${premake_args} --atlas"
if [ "$OS" = "Darwin" ]; then
# Provide path to wx-config on OS X (as wxwidgets doesn't support pkgconfig)
export WX_CONFIG="${WX_CONFIG:="$(pwd)/../../libraries/macos/wxwidgets/bin/wx-config"}"
else
export WX_CONFIG="${WX_CONFIG:="wx-config"}"
fi
premake_args="${premake_args} --atlas"
if [ "$OS" = "Darwin" ]; then
# Provide path to wx-config on OS X (as wxwidgets doesn't support pkgconfig)
export WX_CONFIG="${WX_CONFIG:="$(pwd)/../../libraries/macos/wxwidgets/bin/wx-config"}"
else
export WX_CONFIG="${WX_CONFIG:="wx-config"}"
fi
if [ ! -x "$(command -v $WX_CONFIG)" ]
then
echo 'WX_CONFIG must be set and valid or wx-config must be present when --atlas is passed as argument.'
exit 1
fi
if [ ! -x "$(command -v "$WX_CONFIG")" ]; then
echo 'WX_CONFIG must be set and valid or wx-config must be present when --atlas is passed as argument.'
exit 1
fi
fi
if [ "$OS" = "Darwin" ]; then
# Set minimal SDK version
export MIN_OSX_VERSION=${MIN_OSX_VERSION:="10.12"}
# Set minimal SDK version
export MIN_OSX_VERSION="${MIN_OSX_VERSION:="10.12"}"
fi
# Now build Premake or use system's.
cd ../premake
cd ../premake || die
premake_command="premake5"
if [ "$with_system_premake5" = "false" ]; then
# Build bundled premake
MAKE=${MAKE} JOBS=${JOBS} ./build.sh || die "Premake 5 build failed"
# Build bundled premake
MAKE=${MAKE} JOBS=${JOBS} ./build.sh || die "Premake 5 build failed"
premake_command="premake5/bin/release/premake5"
premake_command="premake5/bin/release/premake5"
fi
echo
@ -87,11 +85,14 @@ export HOSTTYPE="$HOSTTYPE"
# Now run Premake to create the makefiles
echo "Premake args: ${premake_args}"
if [ "$OS" != "Darwin" ]; then
${premake_command} --file="premake5.lua" --outpath="../workspaces/gcc/" ${premake_args} gmake || die "Premake failed"
# shellcheck disable=SC2086
${premake_command} --file="premake5.lua" --outpath="../workspaces/gcc/" ${premake_args} gmake || die "Premake failed"
else
${premake_command} --file="premake5.lua" --outpath="../workspaces/gcc/" --macosx-version-min="${MIN_OSX_VERSION}" ${premake_args} gmake || die "Premake failed"
# Also generate xcode workspaces if on OS X
${premake_command} --file="premake5.lua" --outpath="../workspaces/xcode4" --macosx-version-min="${MIN_OSX_VERSION}" ${premake_args} xcode4 || die "Premake failed"
# shellcheck disable=SC2086
${premake_command} --file="premake5.lua" --outpath="../workspaces/gcc/" --macosx-version-min="${MIN_OSX_VERSION}" ${premake_args} gmake || die "Premake failed"
# Also generate xcode workspaces if on OS X
# shellcheck disable=SC2086
${premake_command} --file="premake5.lua" --outpath="../workspaces/xcode4" --macosx-version-min="${MIN_OSX_VERSION}" ${premake_args} xcode4 || die "Premake failed"
fi
# test_root.cpp gets generated by cxxtestgen and passing different arguments to premake could require a regeneration of this file.

File diff suppressed because it is too large


@ -2,32 +2,31 @@
die()
{
echo ERROR: $*
exit 1
echo ERROR: "$*"
exit 1
}
# SVN revision to checkout for source-libs
# Update this line when you commit an update to source-libs
source_svnrev="28207"
if [ "`uname -s`" = "Darwin" ]; then
echo 'This script should not be used on macOS: use build-macos-libs.sh instead.'
exit 1
if [ "$(uname -s)" = "Darwin" ]; then
die "This script should not be used on macOS: use build-macos-libs.sh instead."
fi
cd "$(dirname $0)"
cd "$(dirname "$0")" || die
# Now in libraries/ (where we assume this script resides)
# Check for whitespace in absolute path; this will cause problems in the
# SpiderMonkey build and maybe elsewhere, so we just forbid it
# Use perl as an alternative to readlink -f, which isn't available on BSD
SCRIPTPATH=`perl -MCwd -e 'print Cwd::abs_path shift' "$0"`
SCRIPTPATH=$(perl -MCwd -e 'print Cwd::abs_path shift' "$0")
case "$SCRIPTPATH" in
*\ * )
die "Absolute path contains whitespace, which will break the build - move the game to a path without spaces" ;;
*\ *)
die "Absolute path contains whitespace, which will break the build - move the game to a path without spaces"
;;
esac
# Parse command-line options (download options and build options):
source_libs_dir="source"
@ -37,25 +36,24 @@ with_system_mozjs=false
JOBS=${JOBS:="-j2"}
for i in "$@"
do
case $i in
--source-libs-dir=* ) source_libs_dir=${1#*=} ;;
--source-libs-dir ) die "correct syntax is --source-libs-dir=/path/to/dir" ;;
--without-nvtt ) without_nvtt=true ;;
--with-system-nvtt ) with_system_nvtt=true ;;
--with-system-mozjs ) with_system_mozjs=true ;;
-j* ) JOBS=$i ;;
esac
for i in "$@"; do
case $i in
--source-libs-dir=*) source_libs_dir=${1#*=} ;;
--source-libs-dir) die "correct syntax is --source-libs-dir=/path/to/dir" ;;
--without-nvtt) without_nvtt=true ;;
--with-system-nvtt) with_system_nvtt=true ;;
--with-system-mozjs) with_system_mozjs=true ;;
-j*) JOBS=$i ;;
esac
done
# Download source libs
echo "Downloading source libs..."
echo
if [ -e ${source_libs_dir}/.svn ]; then
(cd ${source_libs_dir} && svn cleanup && svn up -r $source_svnrev)
if [ -e "${source_libs_dir}"/.svn ]; then
(cd "${source_libs_dir}" && svn cleanup && svn up -r $source_svnrev)
else
svn co -r $source_svnrev https://svn.wildfiregames.com/public/source-libs/trunk ${source_libs_dir}
svn co -r $source_svnrev https://svn.wildfiregames.com/public/source-libs/trunk "${source_libs_dir}"
fi
# Build/update bundled external libraries
@ -64,53 +62,57 @@ echo
# Some of our makefiles depend on GNU make, so we set some sane defaults if MAKE
# is not set.
case "`uname -s`" in
"FreeBSD" | "OpenBSD" )
MAKE=${MAKE:="gmake"}
;;
* )
MAKE=${MAKE:="make"}
;;
case "$(uname -s)" in
"FreeBSD" | "OpenBSD")
MAKE=${MAKE:="gmake"}
;;
*)
MAKE=${MAKE:="make"}
;;
esac
(cd ${source_libs_dir}/fcollada && MAKE=${MAKE} JOBS=${JOBS} ./build.sh) || die "FCollada build failed"
(cd "${source_libs_dir}"/fcollada && MAKE=${MAKE} JOBS=${JOBS} ./build.sh) || die "FCollada build failed"
echo
if [ "$with_system_nvtt" = "false" ] && [ "$without_nvtt" = "false" ]; then
(cd ${source_libs_dir}/nvtt && MAKE=${MAKE} JOBS=${JOBS} ./build.sh) || die "NVTT build failed"
(cd "${source_libs_dir}"/nvtt && MAKE=${MAKE} JOBS=${JOBS} ./build.sh) || die "NVTT build failed"
fi
echo
if [ "$with_system_mozjs" = "false" ]; then
(cd ${source_libs_dir}/spidermonkey && MAKE=${MAKE} JOBS=${JOBS} ./build.sh) || die "SpiderMonkey build failed"
(cd "${source_libs_dir}"/spidermonkey && MAKE=${MAKE} JOBS=${JOBS} ./build.sh) || die "SpiderMonkey build failed"
fi
echo
echo "Copying built files..."
# Copy built binaries to binaries/system/
cp ${source_libs_dir}/fcollada/bin/* ../binaries/system/
cp "${source_libs_dir}"/fcollada/bin/* ../binaries/system/
if [ "$with_system_nvtt" = "false" ] && [ "$without_nvtt" = "false" ]; then
cp ${source_libs_dir}/nvtt/bin/* ../binaries/system/
cp "${source_libs_dir}"/nvtt/bin/* ../binaries/system/
fi
if [ "$with_system_mozjs" = "false" ]; then
cp ${source_libs_dir}/spidermonkey/bin/* ../binaries/system/
cp "${source_libs_dir}"/spidermonkey/bin/* ../binaries/system/
fi
# If a custom source-libs dir was used, includes and static libs should be copied to libraries/source/
# and all other bundled content should be copied.
if [ "$source_libs_dir" != "source" ]; then
rsync -avzq \
--exclude fcollada \
--exclude nvtt \
--exclude spidermonkey \
${source_libs_dir}/ source
rsync -avzq \
--exclude fcollada \
--exclude nvtt \
--exclude spidermonkey \
"${source_libs_dir}"/ source
mkdir -p source/fcollada
cp -r ${source_libs_dir}/fcollada/{include,lib} source/fcollada/
if [ "$with_system_nvtt" = "false" ] && [ "$without_nvtt" = "false" ]; then
mkdir -p source/nvtt
cp -r ${source_libs_dir}/nvtt/{include,lib} source/nvtt/
fi
if [ "$with_system_mozjs" = "false" ]; then
mkdir -p source/spidermonkey
cp -r ${source_libs_dir}/spidermonkey/{include-unix-debug,include-unix-release,lib} source/spidermonkey/
fi
mkdir -p source/fcollada
cp -r "${source_libs_dir}"/fcollada/include source/fcollada/
cp -r "${source_libs_dir}"/fcollada/lib source/fcollada/
if [ "$with_system_nvtt" = "false" ] && [ "$without_nvtt" = "false" ]; then
mkdir -p source/nvtt
cp -r "${source_libs_dir}"/nvtt/include source/nvtt/
cp -r "${source_libs_dir}"/nvtt/lib source/nvtt/
fi
if [ "$with_system_mozjs" = "false" ]; then
mkdir -p source/spidermonkey
cp -r "${source_libs_dir}"/spidermonkey/include-unix-debug source/spidermonkey/
cp -r "${source_libs_dir}"/spidermonkey/include-unix-release source/spidermonkey/
cp -r "${source_libs_dir}"/spidermonkey/lib source/spidermonkey/
fi
fi
echo "Done."


@ -6,15 +6,15 @@
# will still be there, etc. This is mostly just to quickly fix problems in the
# bundled dependencies.)
cd "$(dirname $0)"
cd "$(dirname "$0")" || exit 1
# Now in libraries/ (where we assume this script resides)
echo "Cleaning bundled third-party dependencies..."
if [ -e source/.svn ]; then
(cd source && svn revert -R . && svn st --no-ignore | cut -c 9- | xargs rm -rf)
(cd source && svn revert -R . && svn st --no-ignore | cut -c 9- | xargs rm -rf)
else
rm -rf source
rm -rf source
fi
echo


@ -1,138 +1,138 @@
MICROSOFT SOFTWARE LICENSE TERMS
MICROSOFT DEBUGGING TOOLS FOR WINDOWS
These license terms are an agreement between Microsoft Corporation (or based on where you live, one of
its affiliates) and you. Please read them. They apply to the software named above, which includes the
MICROSOFT SOFTWARE LICENSE TERMS
MICROSOFT DEBUGGING TOOLS FOR WINDOWS
These license terms are an agreement between Microsoft Corporation (or based on where you live, one of
its affiliates) and you. Please read them. They apply to the software named above, which includes the
media on which you received it, if any. The terms also apply to any Microsoft
* updates,
* supplements,
* Internet-based services
* Internet-based services
* support services, and
* Debugging symbol files that you may access over the internet
* Debugging symbol files that you may access over the internet
for this software, unless other terms accompany those items. If so, those terms apply.
By using the software, you accept these terms. If you do not accept them, do not use the
By using the software, you accept these terms. If you do not accept them, do not use the
software.
If you comply with these license terms, you have the rights below.
1. INSTALLATION AND USE RIGHTS. One user may install and use any number of copies of the
1. INSTALLATION AND USE RIGHTS. One user may install and use any number of copies of the
software on your devices to design, develop, debug and test your programs.
2. ADDITIONAL LICENSING REQUIREMENTS AND/OR USE RIGHTS.
a. Distributable Code. The software contains code that you are permitted to distribute in programs
a. Distributable Code. The software contains code that you are permitted to distribute in programs
you develop if you comply with the terms below.
i. Right to Use and Distribute. The code and text files listed below are “Distributable Code.”
* REDIST.TXT Files. You may copy and distribute the object code form of code listed in
i. Right to Use and Distribute. The code and text files listed below are “Distributable Code.”
* REDIST.TXT Files. You may copy and distribute the object code form of code listed in
REDIST.TXT files.
* Sample Code. You may modify, copy, and distribute the source and object code form of
code marked as “sample.”
* Third Party Distribution. You may permit distributors of your programs to copy and
* Sample Code. You may modify, copy, and distribute the source and object code form of
code marked as “sample.”
* Third Party Distribution. You may permit distributors of your programs to copy and
distribute the Distributable Code as part of those programs.
ii. Distribution Requirements. For any Distributable Code you distribute, you must
* add significant primary functionality to it in your programs;
* require distributors and external end users to agree to terms that protect it at least as much
as this agreement;
* require distributors and external end users to agree to terms that protect it at least as much
as this agreement;
* display your valid copyright notice on your programs; and
* indemnify, defend, and hold harmless Microsoft from any claims, including attorneys’ fees,
* indemnify, defend, and hold harmless Microsoft from any claims, including attorneys’ fees,
related to the distribution or use of your programs.
iii. Distribution Restrictions. You may not
* alter any copyright, trademark or patent notice in the Distributable Code;
* distribute any symbol files which you may access or use under these license terms for the
* alter any copyright, trademark or patent notice in the Distributable Code;
* distribute any symbol files which you may access or use under these license terms for the
software;
* use Microsoft’s trademarks in your programs’ names or in a way that suggests your
programs come from or are endorsed by Microsoft;
* use Microsoft’s trademarks in your programs’ names or in a way that suggests your
programs come from or are endorsed by Microsoft;
* distribute Distributable Code to run on a platform other than the Windows platform;
* include Distributable Code in malicious, deceptive or unlawful programs; or
* modify or distribute the source code of any Distributable Code so that any part of it
becomes subject to an Excluded License. An Excluded License is one that requires, as a
* modify or distribute the source code of any Distributable Code so that any part of it
becomes subject to an Excluded License. An Excluded License is one that requires, as a
condition of use, modification or distribution, that
* the code be disclosed or distributed in source code form; or
* the code be disclosed or distributed in source code form; or
* others have the right to modify it.
3. SCOPE OF LICENSE. The software is licensed, not sold. This agreement only gives you some rights
to use the software. Microsoft reserves all other rights. Unless applicable law gives you more rights
despite this limitation, you may use the software only as expressly permitted in this agreement. In
doing so, you must comply with any technical limitations in the software that only allow you to use it in
3. SCOPE OF LICENSE. The software is licensed, not sold. This agreement only gives you some rights
to use the software. Microsoft reserves all other rights. Unless applicable law gives you more rights
despite this limitation, you may use the software only as expressly permitted in this agreement. In
doing so, you must comply with any technical limitations in the software that only allow you to use it in
certain ways. You may not
* work around any technical limitations in the software;
* reverse engineer, decompile or disassemble the software, except and only to the extent that
applicable law expressly permits, despite this limitation;
* make more copies of the software than specified in this agreement or allowed by applicable law,
despite this limitation;
* publish the software for others to copy;
* rent, lease or lend the software;
* transfer the software or this agreement to any third party; or
* use the software for commercial software hosting services.
4. INTERNET-BASED SERVICES. Microsoft provides Internet-based services with the software. It may
change or cancel them at any time.
a. Consent for Internet-Based Services. The software contains features which may connect to
Microsoft or service provider computer systems over the Internet. In some cases, you will not
receive a separate notice when they connect. You may switch these features on or you may
choose not to use them. For more information about these features, see
http://www.microsoft.com/info/privacy/default.mspx. By using these features, you consent to the transmission of
this information. Microsoft does not use the information to identify or contact you.
b. Misuse of Internet-based Services. You may not use these services in any way that could
harm them or impair anyone else’s use of them. You may not use the services to try to gain
unauthorized access to any service, data, account or network by any means.
5. BACKUP COPY. You may make one backup copy of the software. You may use it only to reinstall the
software.
6. DOCUMENTATION. Any person that has valid access to your computer or internal network may copy
and use the documentation for your internal, reference purposes.
7. EXPORT RESTRICTIONS. The software is subject to United States export laws and regulations. You
must comply with all domestic and international export laws and regulations that apply to the software.
These laws include restrictions on destinations, end users and end use. For additional information, see
www.microsoft.com/exporting.
8. SUPPORT SERVICES. Because this software is “as is,” we may not provide support services for it.
9. ENTIRE AGREEMENT. This agreement, and the terms for supplements, updates, Internet-based
services and support services that you use, are the entire agreement for the software and support
services.
10. APPLICABLE LAW.
a. United States. If you acquired the software in the United States, Washington state law governs
the interpretation of this agreement and applies to claims for breach of it, regardless of conflict of
laws principles. The laws of the state where you live govern all other claims, including claims under
state consumer protection laws, unfair competition laws, and in tort.
b. Outside the United States. If you acquired the software in any other country, the laws of that
country apply.
11. LEGAL EFFECT. This agreement describes certain legal rights. You may have other rights under the
laws of your country. You may also have rights with respect to the party from whom you acquired the
software. This agreement does not change your rights under the laws of your country if the laws of
your country do not permit it to do so.
12. DISCLAIMER OF WARRANTY. The software is licensed “as-is.” You bear the risk of using
it. Microsoft gives no express warranties, guarantees or conditions. You may have
additional consumer rights under your local laws which this agreement cannot change. To
the extent permitted under your local laws, Microsoft excludes the implied warranties of
merchantability, fitness for a particular purpose and non-infringement.
13. LIMITATION ON AND EXCLUSION OF REMEDIES AND DAMAGES. You can recover from
Microsoft and its suppliers only direct damages up to U.S. $5.00. You cannot recover any
other damages, including consequential, lost profits, special, indirect or incidental
damages.
This limitation applies to
* anything related to the software, services, content (including code) on third party Internet sites, or
third party programs; and
* claims for breach of contract, breach of warranty, guarantee or condition, strict liability, negligence,
or other tort to the extent permitted by applicable law.
It also applies even if Microsoft knew or should have known about the possibility of the damages. The
above limitation or exclusion may not apply to you because your country may not allow the exclusion or
limitation of incidental, consequential or other damages.
Please note: As this software is distributed in Quebec, Canada, some of the clauses in this
agreement are provided below in French.
Remarque : Ce logiciel étant distribué au Québec, Canada, certaines des clauses dans ce
contrat sont fournies ci-dessous en français.
EXONÉRATION DE GARANTIE. Le logiciel visé par une licence est offert « tel quel ». Toute utilisation de
ce logiciel est à votre seule risque et péril. Microsoft n’accorde aucune autre garantie expresse. Vous pouvez
bénéficier de droits additionnels en vertu du droit local sur la protection des consommateurs, que ce contrat
ne peut modifier. La ou elles sont permises par le droit locale, les garanties implicites de qualité marchande,
d’adéquation à un usage particulier et d’absence de contrefaçon sont exclues.
LIMITATION DES DOMMAGES-INTÉRÊTS ET EXCLUSION DE RESPONSABILITÉ POUR LES
DOMMAGES. Vous pouvez obtenir de Microsoft et de ses fournisseurs une indemnisation en cas de
dommages directs uniquement à hauteur de 5,00 $ US. Vous ne pouvez prétendre à aucune indemnisation
pour les autres dommages, y compris les dommages spéciaux, indirects ou accessoires et pertes de
bénéfices.
Cette limitation concerne :
* tout ce qui est relié au logiciel, aux services ou au contenu (y compris le code) figurant sur des
sites Internet tiers ou dans des programmes tiers ; et
* les réclamations au titre de violation de contrat ou de garantie, ou au titre de responsabilité stricte,
de négligence ou d’une autre faute dans la limite autorisée par la loi en vigueur.
Elle s’applique également, même si Microsoft connaissait ou devrait connaître l’éventualité d’un tel
dommage. Si votre pays n’autorise pas l’exclusion ou la limitation de responsabilité pour les dommages
indirects, accessoires ou de quelque nature que ce soit, il se peut que la limitation ou l’exclusion ci-dessus ne
s’appliquera pas à votre égard.
EFFET JURIDIQUE. Le présent contrat décrit certains droits juridiques. Vous pourriez avoir d’autres droits
prévus par les lois de votre pays. Le présent contrat ne modifie pas les droits que vous confèrent les lois de
votre pays si celles-ci ne le permettent pas.

View File

@ -6,17 +6,12 @@ line-ending = "lf"
[lint]
select = ["ALL"]
ignore = [
"A",
"ARG",
"ANN",
"B018",
"B023",
"C90",
"COM812",
"D",
"D10",
"DTZ005",
"EM",
"ERA",
"FA",
"FIX",
"FBT",
@ -27,7 +22,6 @@ ignore = [
"PLR0912",
"PLR0913",
"PLR0915",
"PLR1704",
"PLR2004",
"PLW2901",
"PT",
@ -40,12 +34,9 @@ ignore = [
"S320",
"S603",
"S607",
"SIM102",
"SIM105",
"SIM113",
"SIM115",
"T20",
"TD",
"TD002",
"TD003",
"TRY002",
"TRY003",
"TRY004",

View File

@ -4,6 +4,7 @@
import os
import xml.etree.ElementTree as ET
from contextlib import suppress
from ctypes import *
@ -30,15 +31,16 @@ def log(severity, message):
clog = CFUNCTYPE(None, c_int, c_char_p)(log)
# (the CFUNCTYPE must not be GC'd, so try to keep a reference)
library.set_logger(clog)
skeleton_definitions = open(f"{binaries}/data/tests/collada/skeletons.xml").read()
with open(f"{binaries}/data/tests/collada/skeletons.xml", encoding="utf-8") as fd:
skeleton_definitions = fd.read()
library.set_skeleton_definitions(skeleton_definitions, len(skeleton_definitions))
def _convert_dae(func, filename, expected_status=0):
output = []
def cb(cbdata, str, len):
output.append(string_at(str, len))
def cb(_, ptr, size):
output.append(string_at(ptr, size))
cbtype = CFUNCTYPE(None, POINTER(None), POINTER(c_char), c_uint)
status = func(filename, cbtype(cb), None)
@ -63,10 +65,8 @@ def clean_dir(path):
except OSError:
pass # (ignore errors if files are in use)
# Make sure the directory exists
try:
with suppress(OSError):
os.makedirs(path)
except OSError:
pass # (ignore errors if it already exists)
def create_actor(mesh, texture, anims, props_):
@ -127,9 +127,13 @@ for test_file in ["xsitest3c", "xsitest3e", "jav2d", "jav2d2"]:
input_filename = f"{test_data}/{test_file}.dae"
output_filename = f"{test_mod}/art/meshes/{test_file}.pmd"
input = open(input_filename).read()
output = convert_dae_to_pmd(input)
open(output_filename, "wb").write(output)
with (
open(input_filename, encoding="utf-8") as input_fd,
open(output_filename, "wb") as output_fd,
):
file_input = input_fd.read()
file_output = convert_dae_to_pmd(file_input)
output_fd.write(file_output)
xml = create_actor(
test_file,
@ -142,10 +146,12 @@ for test_file in ["xsitest3c", "xsitest3e", "jav2d", "jav2d2"]:
],
[("helmet", "teapot_basic_static")],
)
open(f"{test_mod}/art/actors/{test_file}.xml", "w").write(xml)
with open(f"{test_mod}/art/actors/{test_file}.xml", "w", encoding="utf-8") as fd:
fd.write(xml)
xml = create_actor_static(test_file, "male")
open(f"{test_mod}/art/actors/{test_file}_static.xml", "w").write(xml)
with open(f"{test_mod}/art/actors/{test_file}_static.xml", "w", encoding="utf-8") as fd:
fd.write(xml)
# for test_file in ['jav2','jav2b', 'jav2d']:
for test_file in ["xsitest3c", "xsitest3e", "jav2d", "jav2d2"]:
@ -155,6 +161,10 @@ for test_file in ["xsitest3c", "xsitest3e", "jav2d", "jav2d2"]:
input_filename = f"{test_data}/{test_file}.dae"
output_filename = f"{test_mod}/art/animation/{test_file}.psa"
input = open(input_filename).read()
output = convert_dae_to_psa(input)
open(output_filename, "wb").write(output)
with (
open(input_filename, encoding="utf-8") as input_fd,
open(output_filename, "wb") as output_fd,
):
file_input = input_fd.read()
file_output = convert_dae_to_psa(file_input)
output_fd.write(file_output)
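The cleanups above replace bare open()/read() chains with context managers and a try/except-pass with contextlib.suppress. A minimal standalone sketch of the suppress pattern, with an illustrative directory name:

import os
from contextlib import suppress

# Equivalent to try: os.makedirs(...) / except OSError: pass, but self-documenting.
with suppress(OSError):
    os.makedirs("some/output/dir")

Note that suppress(OSError) intentionally swallows any OSError (including permission failures), matching the old behaviour exactly; os.makedirs(path, exist_ok=True) would be stricter.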

View File

@ -20,4 +20,6 @@
* SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
#define PYROGENESIS_VERSION "0.0.27.0"
#define PYROGENESIS_VERSION_WORD 0,0,27,0
extern wchar_t build_version[];

View File

@ -0,0 +1,38 @@
#include "lib/build_version.h"
#include <winver.h>
#ifndef DEBUG
#define VER_DEBUG 0
#else
#define VER_DEBUG VS_FF_DEBUG
#endif
VS_VERSION_INFO VERSIONINFO
FILEVERSION PYROGENESIS_VERSION_WORD
PRODUCTVERSION PYROGENESIS_VERSION_WORD
FILEFLAGSMASK VS_FFI_FILEFLAGSMASK
FILEFLAGS VER_DEBUG
FILEOS VOS_NT_WINDOWS32
FILETYPE VFT_APP
FILESUBTYPE VFT2_UNKNOWN
BEGIN
BLOCK "StringFileInfo"
BEGIN
BLOCK "040904E4"
BEGIN
VALUE "CompanyName", "Wildfire Games"
VALUE "FileDescription", "Pyrogenesis engine"
VALUE "FileVersion", PYROGENESIS_VERSION
VALUE "InternalName", "pyrogenesis.rc"
VALUE "LegalCopyright", "Copyright (C) 2024 Wildfire Games"
VALUE "OriginalFilename", "pyrogenesis.rc"
VALUE "ProductName", "Pyrogenesis"
VALUE "ProductVersion", PYROGENESIS_VERSION
END
END
BLOCK "VarFileInfo"
BEGIN
VALUE "Translation", 0x409, 0x4E4
END
END
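The new resource script consumes both macros from build_version.h: FILEVERSION and PRODUCTVERSION take the comma-separated PYROGENESIS_VERSION_WORD, while the StringFileInfo entries take the dotted PYROGENESIS_VERSION string. A sketch of the invariant between the two forms (plain Python, purely illustrative):

version = "0.0.27.0"  # PYROGENESIS_VERSION
version_word = tuple(int(part) for part in version.split("."))  # PYROGENESIS_VERSION_WORD -> (0, 0, 27, 0)
# VERSIONINFO expects exactly four numeric components, hence the four-part word form.
assert len(version_word) == 4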

View File

@ -1,4 +1,4 @@
/* Copyright (C) 2020 Wildfire Games.
/* Copyright (C) 2024 Wildfire Games.
* This file is part of 0 A.D.
*
* 0 A.D. is free software: you can redistribute it and/or modify
@ -76,13 +76,13 @@ public:
/// Scalar multiplication by an integer
CFixedVector2D operator*(int n) const
{
return CFixedVector2D(X*n, Y*n);
return CFixedVector2D(X * n, Y * n);
}
/// Scalar division by an integer. Must not have n == 0.
CFixedVector2D operator/(int n) const
{
return CFixedVector2D(X/n, Y/n);
return CFixedVector2D(X / n, Y / n);
}
/**
@ -106,10 +106,10 @@ public:
u64 d2 = xx + yy;
CheckUnsignedAdditionOverflow(d2, xx, L"Overflow in CFixedVector2D::Length() part 1")
u32 d = isqrt64(d2);
u32 d = isqrt64(d2);
CheckU32CastOverflow(d, i32, L"Overflow in CFixedVector2D::Length() part 2")
fixed r;
fixed r;
r.SetInternalValue(static_cast<i32>(d));
return r;
}
@ -169,6 +169,25 @@ public:
return 0;
}
/**
* Returns -1, 0, +1 depending on whether length estimate is less/equal/greater
* than the argument's length.
* Uses a precision value P to add a multiplier of +/- 10%, 20%, or 30% for P = 1,2,3 respectively.
*/
int CompareLengthRough(const CFixedVector2D& other, u8 P) const
{
u64 d2 = SQUARE_U64_FIXED(X) + SQUARE_U64_FIXED(Y);
u64 od2 = SQUARE_U64_FIXED(other.X) + SQUARE_U64_FIXED(other.Y);
d2 = d2 * (10-P + d2 % P*2); //overflow risk with long ranges (designed for unit ranges)
od2 = od2 * 10;
if (d2 < od2)
return -1;
if (d2 > od2)
return +1;
return 0;
}
bool IsZero() const
{
return X.IsZero() && Y.IsZero();
@ -211,11 +230,11 @@ public:
i64 x = MUL_I64_I32_I32(X.GetInternalValue(), v.X.GetInternalValue());
i64 y = MUL_I64_I32_I32(Y.GetInternalValue(), v.Y.GetInternalValue());
CheckSignedAdditionOverflow(i64, x, y, L"Overflow in CFixedVector2D::Dot() part 1", L"Underflow in CFixedVector2D::Dot() part 1")
i64 sum = x + y;
i64 sum = x + y;
sum >>= fixed::fract_bits;
CheckCastOverflow(sum, i32, L"Overflow in CFixedVector2D::Dot() part 2", L"Underflow in CFixedVector2D::Dot() part 2")
fixed ret;
fixed ret;
ret.SetInternalValue(static_cast<i32>(sum));
return ret;
}
@ -246,4 +265,4 @@ public:
}
};
#endif // INCLUDED_FIXED_VECTOR2D
#endif // INCLUDED_FIXED_VECTOR2D
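The CompareLengthRough addition above avoids square roots by comparing squared lengths, scaling one side with a P-dependent multiplier so that similar lengths compare as equal within a tolerance band. A rough Python model of the arithmetic, not the engine's fixed-point types; note that C++ precedence makes the committed expression read as 10 - P + (d2 % P) * 2, and the function is only meaningful for P > 0:

def compare_length_rough(ax, ay, bx, by, p):
    """Return -1/0/+1 comparing |a| against |b| within a P-scaled error band."""
    d2 = ax * ax + ay * ay
    od2 = bx * bx + by * by
    # (d2 % p) is a cheap data-dependent term in [0, p), so the factor lies in
    # [10 - p, 10 + p - 2]; comparing against od2 * 10 gives a roughly
    # P-proportional tolerance on the squared lengths.
    d2 *= 10 - p + (d2 % p) * 2
    od2 *= 10
    return (d2 > od2) - (d2 < od2)

As the in-source comment warns, the multiplications can overflow u64 for long ranges, so this is intended for unit-scale ranges only.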

View File

@ -24,7 +24,7 @@
#include "lib/sysdep/sysdep.h"
#include "lib/build_version.h"
const char* engine_version = "0.0.27";
const char* engine_version = PYROGENESIS_VERSION;
const char* main_window_name = "0 A.D.";
// convert contents of file <in_filename> from char to wchar_t and

View File

@ -85,7 +85,7 @@ bool JobQueue::empty() const
js::UniquePtr<JS::JobQueue::SavedJobQueue> JobQueue::saveJobQueue(JSContext*)
{
class SavedJobQueue : public JS::JobQueue::SavedJobQueue
class SavedJobQueue final : public JS::JobQueue::SavedJobQueue
{
public:
SavedJobQueue(QueueType& queue) :

View File

@ -35,7 +35,7 @@ namespace Script
void UnhandledRejectedPromise(JSContext* cx, bool, JS::HandleObject promise,
JS::PromiseRejectionHandlingState state, void*);
class JobQueue : public JS::JobQueue
class JobQueue final : public JS::JobQueue
{
public:
~JobQueue() final = default;

View File

@ -1,4 +1,4 @@
/* Copyright (C) 2022 Wildfire Games.
/* Copyright (C) 2024 Wildfire Games.
* This file is part of 0 A.D.
*
* 0 A.D. is free software: you can redistribute it and/or modify
@ -162,6 +162,7 @@ struct Query
u32 ownersMask;
i32 interface;
u8 flagsMask;
u8 P;
bool enabled;
bool parabolic;
bool accountForSize; // If true, the query accounts for unit sizes, otherwise it treats all entities as points.
@ -255,8 +256,8 @@ static_assert(sizeof(EntityData) == 24);
class EntityDistanceOrdering
{
public:
EntityDistanceOrdering(const EntityMap<EntityData>& entities, const CFixedVector2D& source) :
m_EntityData(entities), m_Source(source)
EntityDistanceOrdering(const EntityMap<EntityData>& entities, const CFixedVector2D& source, u8 P = 0) :
m_EntityData(entities), m_Source(source), m_P(P)
{
}
@ -268,11 +269,12 @@ public:
const EntityData& db = m_EntityData.find(b)->second;
CFixedVector2D vecA = CFixedVector2D(da.x, da.z) - m_Source;
CFixedVector2D vecB = CFixedVector2D(db.x, db.z) - m_Source;
return (vecA.CompareLength(vecB) < 0);
return m_P > 0 ? vecA.CompareLengthRough(vecB, m_P) < 0 : vecA.CompareLength(vecB) < 0;
}
const EntityMap<EntityData>& m_EntityData;
CFixedVector2D m_Source;
u8 m_P;
private:
EntityDistanceOrdering& operator=(const EntityDistanceOrdering&);
@ -294,6 +296,7 @@ struct SerializeHelper<Query>
serialize.NumberU32_Unbounded("owners mask", value.ownersMask);
serialize.NumberI32_Unbounded("interface", value.interface);
Serializer(serialize, "last match", value.lastMatch);
serialize.NumberU8_Unbounded("precision", value.P);
serialize.NumberU8_Unbounded("flagsMask", value.flagsMask);
serialize.Bool("enabled", value.enabled);
serialize.Bool("parabolic",value.parabolic);
@ -923,20 +926,20 @@ public:
tag_t CreateActiveQuery(entity_id_t source,
entity_pos_t minRange, entity_pos_t maxRange,
const std::vector<int>& owners, int requiredInterface, u8 flags, bool accountForSize) override
const std::vector<int>& owners, int requiredInterface, u8 flags, bool accountForSize, u8 P) override
{
tag_t id = m_QueryNext++;
m_Queries[id] = ConstructQuery(source, minRange, maxRange, owners, requiredInterface, flags, accountForSize);
m_Queries[id] = ConstructQuery(source, minRange, maxRange, owners, requiredInterface, flags, accountForSize, P);
return id;
}
tag_t CreateActiveParabolicQuery(entity_id_t source,
entity_pos_t minRange, entity_pos_t maxRange, entity_pos_t yOrigin,
const std::vector<int>& owners, int requiredInterface, u8 flags) override
const std::vector<int>& owners, int requiredInterface, u8 flags, u8 P) override
{
tag_t id = m_QueryNext++;
m_Queries[id] = ConstructParabolicQuery(source, minRange, maxRange, yOrigin, owners, requiredInterface, flags, true);
m_Queries[id] = ConstructParabolicQuery(source, minRange, maxRange, yOrigin, owners, requiredInterface, flags, true, P);
return id;
}
@ -993,25 +996,25 @@ public:
std::vector<entity_id_t> ExecuteQueryAroundPos(const CFixedVector2D& pos,
entity_pos_t minRange, entity_pos_t maxRange,
const std::vector<int>& owners, int requiredInterface, bool accountForSize) override
const std::vector<int>& owners, int requiredInterface, bool accountForSize, u8 P) override
{
Query q = ConstructQuery(INVALID_ENTITY, minRange, maxRange, owners, requiredInterface, GetEntityFlagMask("normal"), accountForSize);
Query q = ConstructQuery(INVALID_ENTITY, minRange, maxRange, owners, requiredInterface, GetEntityFlagMask("normal"), accountForSize, P);
std::vector<entity_id_t> r;
PerformQuery(q, r, pos);
// Return the list sorted by distance from the entity
std::stable_sort(r.begin(), r.end(), EntityDistanceOrdering(m_EntityData, pos));
std::stable_sort(r.begin(), r.end(), EntityDistanceOrdering(m_EntityData, pos, q.P));
return r;
}
std::vector<entity_id_t> ExecuteQuery(entity_id_t source,
entity_pos_t minRange, entity_pos_t maxRange,
const std::vector<int>& owners, int requiredInterface, bool accountForSize) override
const std::vector<int>& owners, int requiredInterface, bool accountForSize, u8 P) override
{
PROFILE("ExecuteQuery");
Query q = ConstructQuery(source, minRange, maxRange, owners, requiredInterface, GetEntityFlagMask("normal"), accountForSize);
Query q = ConstructQuery(source, minRange, maxRange, owners, requiredInterface, GetEntityFlagMask("normal"), accountForSize, P);
std::vector<entity_id_t> r;
@ -1026,7 +1029,7 @@ public:
PerformQuery(q, r, pos);
// Return the list sorted by distance from the entity
std::stable_sort(r.begin(), r.end(), EntityDistanceOrdering(m_EntityData, pos));
std::stable_sort(r.begin(), r.end(), EntityDistanceOrdering(m_EntityData, pos, q.P));
return r;
}
@ -1061,7 +1064,7 @@ public:
q.lastMatch = r;
// Return the list sorted by distance from the entity
std::stable_sort(r.begin(), r.end(), EntityDistanceOrdering(m_EntityData, pos));
std::stable_sort(r.begin(), r.end(), EntityDistanceOrdering(m_EntityData, pos, q.P));
return r;
}
@ -1145,7 +1148,7 @@ public:
continue;
if (cmpSourcePosition && cmpSourcePosition->IsInWorld())
std::stable_sort(added.begin(), added.end(), EntityDistanceOrdering(m_EntityData, cmpSourcePosition->GetPosition2D()));
std::stable_sort(added.begin(), added.end(), EntityDistanceOrdering(m_EntityData, cmpSourcePosition->GetPosition2D(),query.P));
messages.resize(messages.size() + 1);
std::pair<entity_id_t, CMessageRangeUpdate>& back = messages.back();
@ -1404,7 +1407,7 @@ public:
Query ConstructQuery(entity_id_t source,
entity_pos_t minRange, entity_pos_t maxRange,
const std::vector<int>& owners, int requiredInterface, u8 flagsMask, bool accountForSize) const
const std::vector<int>& owners, int requiredInterface, u8 flagsMask, bool accountForSize, u8 P = 0) const
{
// Min range must be non-negative.
if (minRange < entity_pos_t::Zero())
@ -1453,6 +1456,7 @@ public:
LOGWARNING("CCmpRangeManager: No owners in query for entity %u", source);
q.interface = requiredInterface;
q.P = P;
q.flagsMask = flagsMask;
return q;
@ -1460,9 +1464,9 @@ public:
Query ConstructParabolicQuery(entity_id_t source,
entity_pos_t minRange, entity_pos_t maxRange, entity_pos_t yOrigin,
const std::vector<int>& owners, int requiredInterface, u8 flagsMask, bool accountForSize) const
const std::vector<int>& owners, int requiredInterface, u8 flagsMask, bool accountForSize, u8 P = 0) const
{
Query q = ConstructQuery(source, minRange, maxRange, owners, requiredInterface, flagsMask, accountForSize);
Query q = ConstructQuery(source, minRange, maxRange, owners, requiredInterface, flagsMask, accountForSize, P);
q.parabolic = true;
q.yOrigin = yOrigin;
return q;
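EntityDistanceOrdering now threads the query's precision into the comparator and falls back to the exact comparison when P is zero, mirroring `m_P > 0 ? vecA.CompareLengthRough(vecB, m_P) < 0 : vecA.CompareLength(vecB) < 0` above. A simplified Python sketch of that dispatch, reusing compare_length_rough from the earlier sketch (coordinates are illustrative):

from functools import cmp_to_key

def distance_cmp(source, p):
    """Compare two positions by distance from source, roughly when p > 0."""
    sx, sy = source
    def cmp(a, b):
        ax, ay = a[0] - sx, a[1] - sy
        bx, by = b[0] - sx, b[1] - sy
        if p > 0:
            return compare_length_rough(ax, ay, bx, by, p)
        da, db = ax * ax + ay * ay, bx * bx + by * by
        return (da > db) - (da < db)  # exact squared-length comparison
    return cmp

# Python's sort is stable, like the std::stable_sort calls above.
entities = [(3, 4), (1, 1), (6, 0)]
entities.sort(key=cmp_to_key(distance_cmp((0, 0), p=2)))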

View File

@ -1,4 +1,4 @@
/* Copyright (C) 2022 Wildfire Games.
/* Copyright (C) 2024 Wildfire Games.
* This file is part of 0 A.D.
*
* 0 A.D. is free software: you can redistribute it and/or modify
@ -132,7 +132,7 @@ public:
* @return list of entities matching the query, ordered by increasing distance from the source entity.
*/
virtual std::vector<entity_id_t> ExecuteQuery(entity_id_t source, entity_pos_t minRange, entity_pos_t maxRange,
const std::vector<int>& owners, int requiredInterface, bool accountForSize) = 0;
const std::vector<int>& owners, int requiredInterface, bool accountForSize, u8 P = 0) = 0;
/**
* Execute a passive query.
@ -145,7 +145,7 @@ public:
* @return list of entities matching the query, ordered by increasing distance from the source entity.
*/
virtual std::vector<entity_id_t> ExecuteQueryAroundPos(const CFixedVector2D& pos, entity_pos_t minRange, entity_pos_t maxRange,
const std::vector<int>& owners, int requiredInterface, bool accountForSize) = 0;
const std::vector<int>& owners, int requiredInterface, bool accountForSize, u8 P = 0) = 0;
/**
* Construct an active query. The query will be disabled by default.
@ -159,7 +159,7 @@ public:
* @return unique non-zero identifier of query.
*/
virtual tag_t CreateActiveQuery(entity_id_t source, entity_pos_t minRange, entity_pos_t maxRange,
const std::vector<int>& owners, int requiredInterface, u8 flags, bool accountForSize) = 0;
const std::vector<int>& owners, int requiredInterface, u8 flags, bool accountForSize, u8 P = 0) = 0;
/**
* Construct an active query of a parabolic form around the unit.
@ -177,7 +177,7 @@ public:
* @return unique non-zero identifier of query.
*/
virtual tag_t CreateActiveParabolicQuery(entity_id_t source, entity_pos_t minRange, entity_pos_t maxRange, entity_pos_t yOrigin,
const std::vector<int>& owners, int requiredInterface, u8 flags) = 0;
const std::vector<int>& owners, int requiredInterface, u8 flags, u8 P = 0) = 0;
/**

View File

@ -32,8 +32,8 @@ in particular, let us know and we can try to clarify it.
fontbuilder2
MIT
unspecified (FontLoader.py)
IBM CPL (Packer.py)
unspecified (font_loader.py)
IBM CPL (packer.py)
i18n
GPL version 2 (or later)

source/tools/cmpgraph/cmpgraph.pl Normal file → Executable file
View File

View File

@ -1,9 +1,9 @@
#!/bin/bash
#!/bin/sh
set -e
die()
{
echo ERROR: $*
echo ERROR: "$*"
exit 1
}
@ -17,26 +17,26 @@ echo "Filtering languages"
# CJK languages are excluded, as they are in mods.
# Note: Needs to be edited manually at each release.
# Keep in sync with the installer languages in 0ad.nsi.
LANGS=("ast" "ca" "cs" "de" "el" "en_GB" "es" "eu" "fi" "fr" "gd" "hu" "id" "it" "nl" "pl" "pt_BR" "ru" "sk" "sv" "tr" "uk")
LANGS="ast ca cs de el en_GB es eu fi fr gd hu id it nl pl pt_BR ru sk sv tr uk"
REGEX=$(printf "\|%s" "${LANGS[@]}")
REGEX=".*/\("${REGEX:2}"\)\.[-A-Za-z0-9_.]\+\.po"
# shellcheck disable=SC2086
REGEX=$(printf "\|%s" ${LANGS} | cut -c 2-)
REGEX=".*/\(${REGEX}\)\.[-A-Za-z0-9_.]\+\.po"
find binaries/ -name "*.po" | grep -v "$REGEX" | xargs rm -v || die "Error filtering languages."
# Build archive(s) - don't archive the _test.* mods
pushd binaries/data/mods > /dev/null
cd binaries/data/mods || die
archives=""
ONLY_MOD="${ONLY_MOD:=false}"
if [ "${ONLY_MOD}" = true ]; then
archives="mod"
else
for modname in [a-zA-Z0-9]*
do
for modname in [a-zA-Z0-9]*; do
archives="${archives} ${modname}"
done
fi
popd > /dev/null
cd - || die
BUILD_SHADERS="${BUILD_SHADERS:=true}"
if [ "${BUILD_SHADERS}" = true ]; then
@ -48,35 +48,31 @@ if [ "${BUILD_SHADERS}" = true ]; then
[ -n "${GLSLC}" ] || die "Error: glslc is not available. Install it with the Vulkan SDK before proceeding."
[ -n "${SPIRV_REFLECT}" ] || die "Error: spirv-reflect is not available. Install it with the Vulkan SDK before proceeding."
pushd "source/tools/spirv" > /dev/null
cd source/tools/spirv || die
ENGINE_VERSION=${ENGINE_VERSION:="0.0.xx"}
rulesFile="rules.${ENGINE_VERSION}.json"
if [ ! -e "$rulesFile" ]
then
if [ ! -e "$rulesFile" ]; then
# The rules.json file should be present in release tarballs, for
# some Linux CIs don't have access to the internet.
download="$(command -v wget || echo "curl -sLo ""${rulesFile}""")"
$download "https://releases.wildfiregames.com/spir-v/$rulesFile"
fi
for modname in $archives
do
for modname in $archives; do
modLocation="../../../binaries/data/mods/${modname}"
if [ -e "${modLocation}/shaders/spirv/" ]
then
if [ -e "${modLocation}/shaders/spirv/" ]; then
echo "Removing existing spirv shaders for '${modname}'..."
rm -rf "${modLocation}/shaders/spirv"
fi
echo "Building shader for '${modname}'..."
$PYTHON compile.py "$modLocation" "$rulesFile" "$modLocation" --dependency "../../../binaries/data/mods/mod/"
done
popd > /dev/null
cd - || die
fi
for modname in $archives
do
for modname in $archives; do
echo "Building archive for '${modname}'..."
ARCHIVEBUILD_INPUT="binaries/data/mods/${modname}"
ARCHIVEBUILD_OUTPUT="archives/${modname}"
@ -84,5 +80,5 @@ do
mkdir -p "${ARCHIVEBUILD_OUTPUT}"
(./binaries/system/pyrogenesis -mod=mod -archivebuild="${ARCHIVEBUILD_INPUT}" -archivebuild-output="${ARCHIVEBUILD_OUTPUT}/${modname}.zip") || die "Archive build for '${modname}' failed!"
cp "${ARCHIVEBUILD_INPUT}/mod.json" "${ARCHIVEBUILD_OUTPUT}" &> /dev/null || true
cp "${ARCHIVEBUILD_INPUT}/mod.json" "${ARCHIVEBUILD_OUTPUT}" >/dev/null 2>&1 || true
done
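The POSIX rewrite above drops the bash array: printf prepends the BRE alternation separator `\|` to every language name, and the leading separator is then trimmed off, yielding one grep pattern covering all kept languages. The same construction expressed in Python, using the list from the script:

langs = "ast ca cs de el en_GB es eu fi fr gd hu id it nl pl pt_BR ru sk sv tr uk"
# printf "\|%s" ${LANGS} joins the names with \| once the leading separator is cut away.
alternation = r"\|".join(langs.split())
regex = r".*/\(" + alternation + r"\)\.[-A-Za-z0-9_.]\+\.po"
print(regex)  # matches paths such as .../de.public-gui.po in a BRE-flavoured grep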

View File

@ -2,39 +2,39 @@
# Build the Pyrogenesis executable, used to create the bundle and run the archiver.
export ARCH=${ARCH:=$(uname -m)}
export ARCH="${ARCH:=$(uname -m)}"
# Set minimum required OS X version, SDK location and tools
# Old SDKs can be found at https://github.com/phracker/MacOSX-SDKs
export MIN_OSX_VERSION=${MIN_OSX_VERSION:="10.12"}
export MIN_OSX_VERSION="${MIN_OSX_VERSION:="10.12"}"
# Note that the 10.12 SDK is known to be too old for FMT 7.
export SYSROOT=${SYSROOT:="/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk"}
export CC=${CC:="clang"} CXX=${CXX:="clang++"}
export SYSROOT="${SYSROOT:="/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk"}"
export CC="${CC:="clang"}" CXX="${CXX:="clang++"}"
die()
{
echo ERROR: $*
exit 1
echo ERROR: "$*"
exit 1
}
# Check that we're actually on OS X
if [ "`uname -s`" != "Darwin" ]; then
die "This script is intended for OS X only"
if [ "$(uname -s)" != "Darwin" ]; then
die "This script is intended for OS X only"
fi
# Check SDK exists
if [ ! -d "${SYSROOT}" ]; then
die "${SYSROOT} does not exist! You probably need to install Xcode"
die "${SYSROOT} does not exist! You probably need to install Xcode"
fi
cd "build/workspaces/"
cd "build/workspaces/" || die
JOBS=${JOBS:="-j5"}
# Toggle whether this is a full rebuild, including libraries (takes much longer).
FULL_REBUILD=${FULL_REBUILD:=false}
if $FULL_REBUILD = true; then
if [ "$FULL_REBUILD" = true ]; then
CLEAN_WORKSPACE_ARGS=""
BUILD_LIBS_ARGS="--force-rebuild"
else
@ -45,26 +45,33 @@ fi
./clean-workspaces.sh "${CLEAN_WORKSPACE_ARGS}"
# Build libraries against SDK
echo "\nBuilding libraries\n"
pushd ../../libraries/osx > /dev/null
echo
echo "Building libraries"
echo
cd ../../libraries/osx || die
SYSROOT="${SYSROOT}" MIN_OSX_VERSION="${MIN_OSX_VERSION}" \
./build-osx-libs.sh $JOBS "${BUILD_LIBS_ARGS}" || die "Libraries build script failed"
popd > /dev/null
./build-osx-libs.sh "$JOBS" "${BUILD_LIBS_ARGS}" || die "Libraries build script failed"
cd - || die
# Update workspaces
echo "\nGenerating workspaces\n"
echo
echo "Generating workspaces"
echo
# Pass OS X options through to Premake
(SYSROOT="${SYSROOT}" MIN_OSX_VERSION="${MIN_OSX_VERSION}" \
./update-workspaces.sh --sysroot="${SYSROOT}" --macosx-version-min="${MIN_OSX_VERSION}") || die "update-workspaces.sh failed!"
(./update-workspaces.sh --sysroot="${SYSROOT}" --macosx-version-min="${MIN_OSX_VERSION}") || die "update-workspaces.sh failed!"
pushd gcc > /dev/null
echo "\nBuilding game\n"
(make clean && CC="$CC -arch $ARCH" CXX="$CXX -arch $ARCH" make ${JOBS}) || die "Game build failed!"
popd > /dev/null
echo
echo "Building game"
echo
cd gcc || die
(make clean && CC="$CC -arch $ARCH" CXX="$CXX -arch $ARCH" make "${JOBS}") || die "Game build failed!"
cd - || die
# Run test to confirm all is OK
pushd ../../binaries/system > /dev/null
echo "\nRunning tests\n"
echo
echo "Running tests"
echo
cd ../../binaries/system || die
./test || die "Post-build testing failed!"
popd > /dev/null
cd - || die

View File

@ -1,4 +1,4 @@
#!/bin/bash
#!/bin/sh
set -ev
XZOPTS="-9 -e"
@ -8,47 +8,55 @@ BUNDLE_VERSION=${BUNDLE_VERSION:="0.0.xxx"}
PREFIX="0ad-${BUNDLE_VERSION}-alpha"
# Collect the relevant files
tar cf $PREFIX-unix-build.tar \
tar cf "$PREFIX"-unix-build.tar \
--exclude='*.bat' --exclude='*.dll' --exclude='*.exe' --exclude='*.lib' \
--exclude='libraries/source/fcollada/src/FCollada/FColladaTest' \
--exclude='libraries/source/spidermonkey/include-*' \
--exclude='libraries/source/spidermonkey/lib*' \
--exclude='source/test_root.cpp' \
-s "|.|$PREFIX/~|" \
{source,build,libraries/source,binaries/system/readme.txt,binaries/data/l10n,binaries/data/tests,binaries/data/mods/_test.*,*.txt}
source build libraries/source binaries/system/readme.txt binaries/data/l10n binaries/data/tests binaries/data/mods/_test.* ./*.txt
tar cf $PREFIX-unix-data.tar \
tar cf "$PREFIX"-unix-data.tar \
--exclude='binaries/data/config/dev.cfg' \
-s "|archives|$PREFIX/binaries/data/mods|" \
-s "|binaries|$PREFIX/binaries|" \
binaries/data/{config,tools} archives/
-s "|archives|$PREFIX/binaries/data/mods|" \
-s "|binaries|$PREFIX/binaries|" \
binaries/data/config binaries/data/tools archives/
# TODO: ought to include generated docs in here, perhaps?
# Compress
xz -kv ${XZOPTS} $PREFIX-unix-build.tar
xz -kv ${XZOPTS} $PREFIX-unix-data.tar
# shellcheck disable=SC2086
xz -kv ${XZOPTS} "$PREFIX"-unix-build.tar
# shellcheck disable=SC2086
xz -kv ${XZOPTS} "$PREFIX"-unix-data.tar
DO_GZIP=${DO_GZIP:=true}
if $DO_GZIP = true; then
7z a ${GZIP7ZOPTS} $PREFIX-unix-build.tar.gz $PREFIX-unix-build.tar
7z a ${GZIP7ZOPTS} $PREFIX-unix-data.tar.gz $PREFIX-unix-data.tar
if [ "$DO_GZIP" = true ]; then
7z a ${GZIP7ZOPTS} "$PREFIX"-unix-build.tar.gz "$PREFIX"-unix-build.tar
7z a ${GZIP7ZOPTS} "$PREFIX"-unix-data.tar.gz "$PREFIX"-unix-data.tar
fi
# Create Windows installer
# This needs nsisbi for files > 2GB
makensis -V4 -nocd \
-dcheckoutpath="." \
-dversion=${BUNDLE_VERSION} \
-dprefix=${PREFIX} \
-dversion="${BUNDLE_VERSION}" \
-dprefix="${PREFIX}" \
-darchive_path="archives/" \
source/tools/dist/0ad.nsi
# Fix permissions
chmod -f 644 ${PREFIX}-{unix-{build,data}.tar.xz,win32.exe}
chmod -f 644 "${PREFIX}-unix-build.tar.xz"
chmod -f 644 "${PREFIX}-unix-data.tar.xz"
chmod -f 644 "${PREFIX}-win32.exe"
# Print digests for copying into wiki page
shasum -a 1 ${PREFIX}-{unix-{build,data}.tar.xz,win32.exe}
shasum -a 1 "${PREFIX}-unix-build.tar.xz"
shasum -a 1 "${PREFIX}-unix-data.tar.xz"
shasum -a 1 "${PREFIX}-win32.exe"
if $DO_GZIP = true; then
chmod -f 644 ${PREFIX}-unix-{build,data}.tar.gz
shasum -a 1 ${PREFIX}-unix-{build,data}.tar.gz
if [ "$DO_GZIP" = true ]; then
chmod -f 644 "${PREFIX}-unix-build.tar.gz"
chmod -f 644 "${PREFIX}-unix-data.tar.gz"
shasum -a 1 "${PREFIX}-unix-build.tar.gz"
shasum -a 1 "${PREFIX}-unix-data.tar.gz"
fi

View File

@ -1,13 +1,14 @@
#!/usr/bin/env python3
import re
import sys
from argparse import ArgumentParser
from collections import defaultdict
from io import BytesIO
from json import load, loads
from logging import INFO, WARNING, Filter, Formatter, StreamHandler, getLogger
from os.path import basename, exists, sep
from pathlib import Path
from re import match, split
from struct import calcsize, unpack
from typing import Dict, List, Set, Tuple
from xml.etree import ElementTree as ET
from scriptlib import SimulTemplateEntity, find_files
@ -26,22 +27,18 @@ class SingleLevelFilter(Filter):
class CheckRefs:
def __init__(self):
# list of relative root file:str
self.files = []
# list of relative file:str
self.roots = []
# list of tuple (parent_file:str, dep_file:str)
self.deps = []
self.files: List[Path] = []
self.roots: List[Path] = []
self.deps: List[Tuple[Path, Path]] = []
self.vfs_root = Path(__file__).resolve().parents[3] / "binaries" / "data" / "mods"
self.supportedTextureFormats = ("dds", "png")
self.supportedMeshesFormats = ("pmd", "dae")
self.supportedAnimationFormats = ("psa", "dae")
self.supportedAudioFormats = "ogg"
self.mods = []
self.__init_logger
self.__init_logger()
self.inError = False
@property
def __init_logger(self):
logger = getLogger(__name__)
logger.setLevel(INFO)
@ -131,27 +128,28 @@ class CheckRefs:
if args.check_unused:
self.check_unused()
if args.validate_templates:
sys.path.append("../xmlvalidator/")
sys.path.append(str(Path(__file__).parent.parent / "xmlvalidator"))
from validate_grammar import RelaxNGValidator
validate = RelaxNGValidator(self.vfs_root, self.mods)
if not validate.run():
self.inError = True
if args.validate_actors:
sys.path.append("../xmlvalidator/")
sys.path.append(str(Path(__file__).parent.parent / "xmlvalidator"))
from validator import Validator
validator = Validator(self.vfs_root, self.mods)
if not validator.run():
self.inError = True
return self.inError
return not self.inError
def get_mod_dependencies(self, *mods):
modjsondeps = []
for mod in mods:
mod_json_path = self.vfs_root / mod / "mod.json"
if not exists(mod_json_path):
if not mod_json_path.exists():
self.logger.warning('Failed to find the mod.json for "%s"', mod)
continue
with open(mod_json_path, encoding="utf-8") as f:
@ -184,8 +182,8 @@ class CheckRefs:
actor_prefix = "actor|"
resource_prefix = "resource|"
for fp, ffp in sorted(mapfiles):
self.files.append(str(fp))
self.roots.append(str(fp))
self.files.append(fp)
self.roots.append(fp)
et_map = ET.parse(ffp).getroot()
entities = et_map.find("Entities")
used = (
@ -195,18 +193,18 @@ class CheckRefs:
)
for template in used:
if template.startswith(actor_prefix):
self.deps.append((str(fp), f"art/actors/{template[len(actor_prefix):]}"))
self.deps.append((fp, Path(f"art/actors/{template[len(actor_prefix) :]}")))
elif template.startswith(resource_prefix):
self.deps.append(
(str(fp), f"simulation/templates/{template[len(resource_prefix):]}.xml")
(fp, Path(f"simulation/templates/{template[len(resource_prefix):]}.xml"))
)
else:
self.deps.append((str(fp), f"simulation/templates/{template}.xml"))
self.deps.append((fp, Path(f"simulation/templates/{template}.xml")))
# Map previews
settings = loads(et_map.find("ScriptSettings").text)
if settings.get("Preview", None):
if settings.get("Preview"):
self.deps.append(
(str(fp), f'art/textures/ui/session/icons/mappreview/{settings["Preview"]}')
(fp, Path(f"art/textures/ui/session/icons/mappreview/{settings['Preview']}"))
)
def add_maps_pmp(self):
@ -229,8 +227,8 @@ class CheckRefs:
mapfiles = self.find_files("maps/scenarios", "pmp")
mapfiles.extend(self.find_files("maps/skirmishes", "pmp"))
for fp, ffp in sorted(mapfiles):
self.files.append(str(fp))
self.roots.append(str(fp))
self.files.append(fp)
self.roots.append(fp)
with open(ffp, "rb") as f:
expected_header = b"PSMP"
header = f.read(len(expected_header))
@ -250,8 +248,12 @@ class CheckRefs:
terrain_name = f.read(length).decode("ascii") # suppose ascii encoding
self.deps.append(
(
str(fp),
terrains.get(terrain_name, f"art/terrains/(unknown)/{terrain_name}"),
fp,
Path(
terrains.get(
terrain_name, f"art/terrains/(unknown)/{terrain_name}"
)
),
)
)
@ -260,7 +262,7 @@ class CheckRefs:
for _, ffp in sorted(self.find_files("simulation/data/civs", "json")):
with open(ffp, encoding="utf-8") as f:
civ = load(f)
code = civ.get("Code", None)
code = civ.get("Code")
if code is not None:
existing_civs.add(code)
@ -271,12 +273,14 @@ class CheckRefs:
custom_phase_techs = []
for fp, _ in self.find_files("simulation/data/technologies", "json"):
path_str = str(fp)
if "phase" in path_str:
# Get the last part of the phase tech name.
if basename(path_str).split("_")[-1].split(".")[0] in existing_civs:
custom_phase_techs.append(
str(fp.relative_to("simulation/data/technologies")).replace(sep, "/")
)
if "phase" not in path_str:
continue
# Get the last part of the phase tech name.
if Path(path_str).stem.split("_")[-1] in existing_civs:
custom_phase_techs.append(
fp.relative_to("simulation/data/technologies").as_posix()
)
return custom_phase_techs
@ -288,47 +292,46 @@ class CheckRefs:
simul_template_entity = SimulTemplateEntity(self.vfs_root, self.logger)
custom_phase_techs = self.get_custom_phase_techs()
for fp, _ in sorted(self.find_files(simul_templates_path, "xml")):
self.files.append(str(fp))
self.files.append(fp)
entity = simul_template_entity.load_inherited(
simul_templates_path, str(fp.relative_to(simul_templates_path)), self.mods
)
if entity.get("parent"):
for parent in entity.get("parent").split("|"):
self.deps.append((str(fp), str(simul_templates_path / (parent + ".xml"))))
self.deps.append((fp, simul_templates_path / (parent + ".xml")))
if not str(fp).startswith("template_"):
self.roots.append(str(fp))
self.roots.append(fp)
if (
entity.find("VisualActor") is not None
and entity.find("VisualActor").find("Actor") is not None
and entity.find("Identity") is not None
):
if entity.find("Identity") is not None:
phenotype_tag = entity.find("Identity").find("Phenotype")
phenotypes = split(
r"\s",
phenotype_tag.text
if phenotype_tag is not None and phenotype_tag.text
else "default",
)
actor = entity.find("VisualActor").find("Actor")
if "{phenotype}" in actor.text:
for phenotype in phenotypes:
# See simulation2/components/CCmpVisualActor.cpp and Identity.js
# for explanation.
actor_path = actor.text.replace("{phenotype}", phenotype)
self.deps.append((str(fp), f"art/actors/{actor_path}"))
else:
actor_path = actor.text
self.deps.append((str(fp), f"art/actors/{actor_path}"))
foundation_actor = entity.find("VisualActor").find("FoundationActor")
if foundation_actor is not None:
self.deps.append((str(fp), f"art/actors/{foundation_actor.text}"))
phenotype_tag = entity.find("Identity").find("Phenotype")
phenotypes = (
phenotype_tag.text.split()
if (phenotype_tag is not None and phenotype_tag.text)
else ["default"]
)
actor = entity.find("VisualActor").find("Actor")
if "{phenotype}" in actor.text:
for phenotype in phenotypes:
# See simulation2/components/CCmpVisualActor.cpp and Identity.js
# for explanation.
actor_path = actor.text.replace("{phenotype}", phenotype)
self.deps.append((fp, Path(f"art/actors/{actor_path}")))
else:
actor_path = actor.text
self.deps.append((fp, Path(f"art/actors/{actor_path}")))
foundation_actor = entity.find("VisualActor").find("FoundationActor")
if foundation_actor is not None:
self.deps.append((fp, Path(f"art/actors/{foundation_actor.text}")))
if entity.find("Sound") is not None:
phenotype_tag = entity.find("Identity").find("Phenotype")
phenotypes = split(
r"\s",
phenotype_tag.text
if phenotype_tag is not None and phenotype_tag.text
else "default",
phenotypes = (
phenotype_tag.text.split()
if (phenotype_tag is not None and phenotype_tag.text)
else ["default"]
)
lang_tag = entity.find("Identity").find("Lang")
lang = lang_tag.text if lang_tag is not None and lang_tag.text else "greek"
@ -343,20 +346,20 @@ class CheckRefs:
sound_path = sound_group.text.replace(
"{phenotype}", phenotype
).replace("{lang}", lang)
self.deps.append((str(fp), f"audio/{sound_path}"))
self.deps.append((fp, Path(f"audio/{sound_path}")))
else:
sound_path = sound_group.text.replace("{lang}", lang)
self.deps.append((str(fp), f"audio/{sound_path}"))
self.deps.append((fp, Path(f"audio/{sound_path}")))
if entity.find("Identity") is not None:
icon = entity.find("Identity").find("Icon")
if icon is not None and icon.text:
if entity.find("Formation") is not None:
self.deps.append(
(str(fp), f"art/textures/ui/session/icons/{icon.text}")
(fp, Path(f"art/textures/ui/session/icons/{icon.text}"))
)
else:
self.deps.append(
(str(fp), f"art/textures/ui/session/portraits/{icon.text}")
(fp, Path(f"art/textures/ui/session/portraits/{icon.text}"))
)
if (
entity.find("Heal") is not None
@ -366,7 +369,7 @@ class CheckRefs:
for tag in ("LineTexture", "LineTextureMask"):
elem = range_overlay.find(tag)
if elem is not None and elem.text:
self.deps.append((str(fp), f"art/textures/selection/{elem.text}"))
self.deps.append((fp, Path(f"art/textures/selection/{elem.text}")))
if (
entity.find("Selectable") is not None
and entity.find("Selectable").find("Overlay") is not None
@ -376,51 +379,51 @@ class CheckRefs:
for tag in ("MainTexture", "MainTextureMask"):
elem = texture.find(tag)
if elem is not None and elem.text:
self.deps.append((str(fp), f"art/textures/selection/{elem.text}"))
self.deps.append((fp, Path(f"art/textures/selection/{elem.text}")))
if entity.find("Formation") is not None:
icon = entity.find("Formation").find("Icon")
if icon is not None and icon.text:
self.deps.append((str(fp), f"art/textures/ui/session/icons/{icon.text}"))
self.deps.append((fp, Path(f"art/textures/ui/session/icons/{icon.text}")))
cmp_auras = entity.find("Auras")
if cmp_auras is not None:
auraString = cmp_auras.text
for aura in split(r"\s+", auraString):
for aura in auraString.split():
if not aura:
continue
if aura.startswith("-"):
continue
self.deps.append((str(fp), f"simulation/data/auras/{aura}.json"))
self.deps.append((fp, Path(f"simulation/data/auras/{aura}.json")))
cmp_identity = entity.find("Identity")
if cmp_identity is not None:
reqTag = cmp_identity.find("Requirements")
if reqTag is not None:
def parse_requirements(req, recursionDepth=1):
def parse_requirements(fp, req, recursionDepth=1):
techsTag = req.find("Techs")
if techsTag is not None:
for techTag in techsTag.text.split():
self.deps.append(
(str(fp), f"simulation/data/technologies/{techTag}.json")
(fp, Path(f"simulation/data/technologies/{techTag}.json"))
)
if recursionDepth > 0:
recursionDepth -= 1
allReqTag = req.find("All")
if allReqTag is not None:
parse_requirements(allReqTag, recursionDepth)
parse_requirements(fp, allReqTag, recursionDepth)
anyReqTag = req.find("Any")
if anyReqTag is not None:
parse_requirements(anyReqTag, recursionDepth)
parse_requirements(fp, anyReqTag, recursionDepth)
parse_requirements(reqTag)
parse_requirements(fp, reqTag)
cmp_researcher = entity.find("Researcher")
if cmp_researcher is not None:
techString = cmp_researcher.find("Technologies")
if techString is not None:
for tech in split(r"\s+", techString.text):
for tech in techString.text.split():
if not tech:
continue
if tech.startswith("-"):
@ -442,7 +445,7 @@ class CheckRefs:
civ = "generic"
tech = tech.replace("{civ}", civ)
self.deps.append(
(str(fp), f"simulation/data/technologies/{tech}.json")
(fp, Path(f"simulation/data/technologies/{tech}.json"))
)
def append_variant_dependencies(self, variant, fp):
@ -465,17 +468,17 @@ class CheckRefs:
else []
)
if variant_file:
self.deps.append((str(fp), f"art/variants/{variant_file}"))
self.deps.append((fp, Path(f"art/variants/{variant_file}")))
if mesh is not None and mesh.text:
self.deps.append((str(fp), f"art/meshes/{mesh.text}"))
self.deps.append((fp, Path(f"art/meshes/{mesh.text}")))
if particles is not None and particles.get("file"):
self.deps.append((str(fp), f'art/particles/{particles.get("file")}'))
self.deps.append((fp, Path(f"art/particles/{particles.get('file')}")))
for texture_file in [x for x in texture_files if x]:
self.deps.append((str(fp), f"art/textures/skins/{texture_file}"))
self.deps.append((fp, Path(f"art/textures/skins/{texture_file}")))
for prop_actor in [x for x in prop_actors if x]:
self.deps.append((str(fp), f"art/actors/{prop_actor}"))
self.deps.append((fp, Path(f"art/actors/{prop_actor}")))
for animation_file in [x for x in animation_files if x]:
self.deps.append((str(fp), f"art/animation/{animation_file}"))
self.deps.append((fp, Path(f"art/animation/{animation_file}")))
def append_actor_dependencies(self, actor, fp):
for group in actor.findall("group"):
@ -483,13 +486,13 @@ class CheckRefs:
self.append_variant_dependencies(variant, fp)
material = actor.find("material")
if material is not None and material.text:
self.deps.append((str(fp), f"art/materials/{material.text}"))
self.deps.append((fp, Path(f"art/materials/{material.text}")))
def add_actors(self):
self.logger.info("Loading actors...")
for fp, ffp in sorted(self.find_files("art/actors", "xml")):
self.files.append(str(fp))
self.roots.append(str(fp))
self.files.append(fp)
self.roots.append(fp)
root = ET.parse(ffp).getroot()
if root.tag == "actor":
self.append_actor_dependencies(root, fp)
@ -505,8 +508,8 @@ class CheckRefs:
def add_variants(self):
self.logger.info("Loading variants...")
for fp, ffp in sorted(self.find_files("art/variants", "xml")):
self.files.append(str(fp))
self.roots.append(str(fp))
self.files.append(fp)
self.roots.append(fp)
variant = ET.parse(ffp).getroot()
self.append_variant_dependencies(variant, fp)
@ -514,7 +517,7 @@ class CheckRefs:
self.logger.info("Loading art files...")
self.files.extend(
[
str(fp)
fp
for (fp, ffp) in self.find_files(
"art/textures/particles", *self.supportedTextureFormats
)
@ -522,7 +525,7 @@ class CheckRefs:
)
self.files.extend(
[
str(fp)
fp
for (fp, ffp) in self.find_files(
"art/textures/terrain", *self.supportedTextureFormats
)
@ -530,56 +533,53 @@ class CheckRefs:
)
self.files.extend(
[
str(fp)
fp
for (fp, ffp) in self.find_files(
"art/textures/skins", *self.supportedTextureFormats
)
]
)
self.files.extend(
[str(fp) for (fp, ffp) in self.find_files("art/meshes", *self.supportedMeshesFormats)]
[fp for (fp, ffp) in self.find_files("art/meshes", *self.supportedMeshesFormats)]
)
self.files.extend(
[
str(fp)
for (fp, ffp) in self.find_files("art/animation", *self.supportedAnimationFormats)
]
[fp for (fp, ffp) in self.find_files("art/animation", *self.supportedAnimationFormats)]
)
def add_materials(self):
self.logger.info("Loading materials...")
for fp, ffp in sorted(self.find_files("art/materials", "xml")):
self.files.append(str(fp))
self.files.append(fp)
material_elem = ET.parse(ffp).getroot()
for alternative in material_elem.findall("alternative"):
material = alternative.get("material")
if material is not None:
self.deps.append((str(fp), f"art/materials/{material}"))
self.deps.append((fp, Path(f"art/materials/{material}")))
def add_particles(self):
self.logger.info("Loading particles...")
for fp, ffp in sorted(self.find_files("art/particles", "xml")):
self.files.append(str(fp))
self.roots.append(str(fp))
self.files.append(fp)
self.roots.append(fp)
particle = ET.parse(ffp).getroot()
texture = particle.find("texture")
if texture is not None:
self.deps.append((str(fp), texture.text))
self.deps.append((fp, Path(texture.text)))
def add_soundgroups(self):
self.logger.info("Loading sound groups...")
for fp, ffp in sorted(self.find_files("audio", "xml")):
self.files.append(str(fp))
self.roots.append(str(fp))
self.files.append(fp)
self.roots.append(fp)
sound_group = ET.parse(ffp).getroot()
path = sound_group.find("Path").text.rstrip("/")
for sound in sound_group.findall("Sound"):
self.deps.append((str(fp), f"{path}/{sound.text}"))
self.deps.append((fp, Path(f"{path}/{sound.text}")))
def add_audio(self):
self.logger.info("Loading audio files...")
self.files.extend(
[str(fp) for (fp, ffp) in self.find_files("audio/", self.supportedAudioFormats)]
[fp for (fp, ffp) in self.find_files("audio/", self.supportedAudioFormats)]
)
def add_gui_object_repeat(self, obj, fp):
@ -597,7 +597,7 @@ class CheckRefs:
for include in obj.findall("include"):
included_file = include.get("file")
if included_file:
self.deps.append((str(fp), f"{included_file}"))
self.deps.append((fp, Path(included_file)))
def add_gui_object(self, parent, fp):
if parent is None:
@ -616,39 +616,40 @@ class CheckRefs:
def add_gui_xml(self):
self.logger.info("Loading GUI XML...")
gui_page_regex = re.compile(r".*[\\\/]page(_[^.\/\\]+)?\.xml$")
for fp, ffp in sorted(self.find_files("gui", "xml")):
self.files.append(str(fp))
self.files.append(fp)
# GUI page definitions are assumed to be named page_[something].xml and alone in that.
if match(r".*[\\\/]page(_[^.\/\\]+)?\.xml$", str(fp)):
self.roots.append(str(fp))
if gui_page_regex.match(str(fp)):
self.roots.append(fp)
root_xml = ET.parse(ffp).getroot()
for include in root_xml.findall("include"):
# If including an entire directory, find all the *.xml files
if include.text.endswith("/"):
self.deps.extend(
[
(str(fp), str(sub_fp))
(fp, sub_fp)
for (sub_fp, sub_ffp) in self.find_files(
f"gui/{include.text}", "xml"
)
]
)
else:
self.deps.append((str(fp), f"gui/{include.text}"))
self.deps.append((fp, Path(f"gui/{include.text}")))
else:
xml = ET.parse(ffp)
root_xml = xml.getroot()
name = root_xml.tag
self.roots.append(str(fp))
self.roots.append(fp)
if name in ("objects", "object"):
for script in root_xml.findall("script"):
if script.get("file"):
self.deps.append((str(fp), script.get("file")))
self.deps.append((fp, Path(script.get("file"))))
if script.get("directory"):
# If including an entire directory, find all the *.js files
self.deps.extend(
[
(str(fp), str(sub_fp))
(fp, sub_fp)
for (sub_fp, sub_ffp) in self.find_files(
script.get("directory"), "js"
)
@ -661,20 +662,20 @@ class CheckRefs:
elif name == "styles":
for style in root_xml.findall("style"):
if style.get("sound_opened"):
self.deps.append((str(fp), f"{style.get('sound_opened')}"))
self.deps.append((fp, Path(f"{style.get('sound_opened')}")))
if style.get("sound_closed"):
self.deps.append((str(fp), f"{style.get('sound_closed')}"))
self.deps.append((fp, Path(f"{style.get('sound_closed')}")))
if style.get("sound_selected"):
self.deps.append((str(fp), f"{style.get('sound_selected')}"))
self.deps.append((fp, Path(f"{style.get('sound_selected')}")))
if style.get("sound_disabled"):
self.deps.append((str(fp), f"{style.get('sound_disabled')}"))
self.deps.append((fp, Path(f"{style.get('sound_disabled')}")))
# TODO: look at sprites, styles, etc
elif name == "sprites":
for sprite in root_xml.findall("sprite"):
for image in sprite.findall("image"):
if image.get("texture"):
self.deps.append(
(str(fp), f"art/textures/ui/{image.get('texture')}")
(fp, Path(f"art/textures/ui/{image.get('texture')}"))
)
else:
bio = BytesIO()
@ -686,19 +687,16 @@ class CheckRefs:
def add_gui_data(self):
self.logger.info("Loading GUI data...")
self.files.extend([str(fp) for (fp, ffp) in self.find_files("gui", "js")])
self.files.extend([str(fp) for (fp, ffp) in self.find_files("gamesettings", "js")])
self.files.extend([str(fp) for (fp, ffp) in self.find_files("autostart", "js")])
self.roots.extend([str(fp) for (fp, ffp) in self.find_files("autostart", "js")])
self.files.extend([fp for (fp, ffp) in self.find_files("gui", "js")])
self.files.extend([fp for (fp, ffp) in self.find_files("gamesettings", "js")])
self.files.extend([fp for (fp, ffp) in self.find_files("autostart", "js")])
self.roots.extend([fp for (fp, ffp) in self.find_files("autostart", "js")])
self.files.extend(
[
str(fp)
for (fp, ffp) in self.find_files("art/textures/ui", *self.supportedTextureFormats)
]
[fp for (fp, ffp) in self.find_files("art/textures/ui", *self.supportedTextureFormats)]
)
self.files.extend(
[
str(fp)
fp
for (fp, ffp) in self.find_files(
"art/textures/selection", *self.supportedTextureFormats
)
@ -708,65 +706,59 @@ class CheckRefs:
def add_civs(self):
self.logger.info("Loading civs...")
for fp, ffp in sorted(self.find_files("simulation/data/civs", "json")):
self.files.append(str(fp))
self.roots.append(str(fp))
self.files.append(fp)
self.roots.append(fp)
with open(ffp, encoding="utf-8") as f:
civ = load(f)
for music in civ.get("Music", []):
self.deps.append((str(fp), f"audio/music/{music['File']}"))
self.deps.append((fp, Path(f"audio/music/{music['File']}")))
def add_tips(self):
self.logger.info("Loading tips...")
for fp, _ffp in sorted(self.find_files("gui/text/tips", "txt")):
relative_path = str(fp)
self.files.append(relative_path)
self.roots.append(relative_path)
self.deps.append(
(
relative_path,
f"art/textures/ui/loading/tips/{basename(relative_path).split('.')[0]}.png",
)
)
self.files.append(fp)
self.roots.append(fp)
self.deps.append((fp, Path(f"art/textures/ui/loading/tips/{fp.stem}.png")))
def add_rms(self):
self.logger.info("Loading random maps...")
self.files.extend([str(fp) for (fp, ffp) in self.find_files("maps/random", "js")])
self.files.extend([fp for (fp, ffp) in self.find_files("maps/random", "js")])
for fp, ffp in sorted(self.find_files("maps/random", "json")):
if str(fp).startswith("maps/random/rmbiome"):
continue
self.files.append(str(fp))
self.roots.append(str(fp))
self.files.append(fp)
self.roots.append(fp)
with open(ffp, encoding="utf-8") as f:
randmap = load(f)
settings = randmap.get("settings", {})
if settings.get("Script", None):
self.deps.append((str(fp), f"maps/random/{settings['Script']}"))
if settings.get("Script"):
self.deps.append((fp, Path(f"maps/random/{settings['Script']}")))
# Map previews
if settings.get("Preview", None):
if settings.get("Preview"):
self.deps.append(
(str(fp), f'art/textures/ui/session/icons/mappreview/{settings["Preview"]}')
(fp, Path(f"art/textures/ui/session/icons/mappreview/{settings['Preview']}"))
)
def add_techs(self):
self.logger.info("Loading techs...")
for fp, ffp in sorted(self.find_files("simulation/data/technologies", "json")):
self.files.append(str(fp))
self.files.append(fp)
with open(ffp, encoding="utf-8") as f:
tech = load(f)
if tech.get("autoResearch", None):
self.roots.append(str(fp))
if tech.get("icon", None):
if tech.get("autoResearch"):
self.roots.append(fp)
if tech.get("icon"):
self.deps.append(
(str(fp), f"art/textures/ui/session/portraits/technologies/{tech['icon']}")
(fp, Path(f"art/textures/ui/session/portraits/technologies/{tech['icon']}"))
)
if tech.get("supersedes", None):
if tech.get("supersedes"):
self.deps.append(
(str(fp), f"simulation/data/technologies/{tech['supersedes']}.json")
(fp, Path(f"simulation/data/technologies/{tech['supersedes']}.json"))
)
if tech.get("top", None):
self.deps.append((str(fp), f"simulation/data/technologies/{tech['top']}.json"))
if tech.get("bottom", None):
self.deps.append((str(fp), f"simulation/data/technologies/{tech['bottom']}.json"))
if tech.get("top"):
self.deps.append((fp, Path(f"simulation/data/technologies/{tech['top']}.json")))
if tech.get("bottom"):
self.deps.append((fp, Path(f"simulation/data/technologies/{tech['bottom']}.json")))
def add_terrains(self):
self.logger.info("Loading terrains...")
@ -774,81 +766,70 @@ class CheckRefs:
# ignore terrains.xml
if str(fp).endswith("terrains.xml"):
continue
self.files.append(str(fp))
self.roots.append(str(fp))
self.files.append(fp)
self.roots.append(fp)
terrain = ET.parse(ffp).getroot()
for texture in terrain.find("textures").findall("texture"):
if texture.get("file"):
self.deps.append((str(fp), f"art/textures/terrain/{texture.get('file')}"))
self.deps.append((fp, Path(f"art/textures/terrain/{texture.get('file')}")))
if terrain.find("material") is not None:
material = terrain.find("material").text
self.deps.append((str(fp), f"art/materials/{material}"))
self.deps.append((fp, Path(f"art/materials/{material}")))
def add_auras(self):
self.logger.info("Loading auras...")
for fp, ffp in sorted(self.find_files("simulation/data/auras", "json")):
self.files.append(str(fp))
self.files.append(fp)
with open(ffp, encoding="utf-8") as f:
aura = load(f)
if aura.get("overlayIcon", None):
self.deps.append((str(fp), aura["overlayIcon"]))
if aura.get("overlayIcon"):
self.deps.append((fp, Path(aura["overlayIcon"])))
range_overlay = aura.get("rangeOverlay", {})
for prop in ("lineTexture", "lineTextureMask"):
if range_overlay.get(prop, None):
self.deps.append((str(fp), f"art/textures/selection/{range_overlay[prop]}"))
if range_overlay.get(prop):
self.deps.append((fp, Path(f"art/textures/selection/{range_overlay[prop]}")))
def check_deps(self):
self.logger.info("Looking for missing files...")
uniq_files = set(self.files)
uniq_files = [r.replace(sep, "/") for r in uniq_files]
uniq_files = {r.as_posix() for r in self.files}
lower_case_files = {f.lower(): f for f in uniq_files}
reverse_deps = {}
for parent, dep in self.deps:
if sep != "/":
parent = parent.replace(sep, "/")
dep = dep.replace(sep, "/")
if dep not in reverse_deps:
reverse_deps[dep] = {parent}
else:
reverse_deps[dep].add(parent)
for dep in sorted(reverse_deps.keys()):
if "simulation/templates" in dep and (
dep.replace("templates/", "template/special/filter/") in uniq_files
or dep.replace("templates/", "template/mixins/") in uniq_files
missing_files: Dict[str, Set[str]] = defaultdict(set)
for parent, dep in self.deps:
dep_str = dep.as_posix()
if "simulation/templates" in dep_str and (
dep_str.replace("templates/", "template/special/filter/") in uniq_files
or dep_str.replace("templates/", "template/mixins/") in uniq_files
):
continue
if dep in uniq_files:
if dep_str in uniq_files:
continue
callers = [str(self.vfs_to_relative_to_mods(ref)) for ref in reverse_deps[dep]]
missing_files[dep_str].add(parent.as_posix())
for dep, parents in sorted(missing_files.items()):
callers = [str(self.vfs_to_relative_to_mods(ref)) for ref in parents]
self.logger.error(
"Missing file '%s' referenced by: %s", dep, ", ".join(sorted(callers))
)
self.inError = True
if dep.lower() in lower_case_files:
self.logger.warning(
"### Case-insensitive match (found '%s')", lower_case_files[dep.lower()]
)
self.inError = True
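The lookup above flags dependencies that exist only under a different case, which would break on case-sensitive filesystems. A minimal sketch of the check, with invented paths:
```python
# Sketch of the case-insensitive fallback: a missing dep that matches an
# existing file when lower-cased is most likely a case typo in a template.
uniq_files = {"art/textures/ui/Icon.png"}
lower_case_files = {f.lower(): f for f in uniq_files}

dep = "art/textures/ui/icon.png"
if dep not in uniq_files and dep.lower() in lower_case_files:
    print(f"Case-insensitive match (found '{lower_case_files[dep.lower()]}')")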
def check_unused(self):
self.logger.info("Looking for unused files...")
deps = {}
deps = defaultdict(set)
for parent, dep in self.deps:
if sep != "/":
parent = parent.replace(sep, "/")
dep = dep.replace(sep, "/")
deps[parent.as_posix()].add(dep.as_posix())
if parent not in deps:
deps[parent] = {dep}
else:
deps[parent].add(dep)
uniq_files = set(self.files)
uniq_files = [r.replace(sep, "/") for r in uniq_files]
reachable = list(set(self.roots))
reachable = [r.replace(sep, "/") for r in reachable]
uniq_files = {r.as_posix() for r in self.files}
reachable = [r.as_posix() for r in set(self.roots)]
while True:
new_reachable = []
for r in reachable:
@ -859,13 +840,7 @@ class CheckRefs:
break
for f in sorted(uniq_files):
if any(
(
f in reachable,
"art/terrains/" in f,
"maps/random/" in f,
)
):
if f in reachable or f.startswith(("art/terrains/", "maps/random/")):
continue
self.logger.warning("Unused file '%s'", str(self.vfs_to_relative_to_mods(f)))

View File

@ -20,7 +20,7 @@ def main():
vfs_root = Path(__file__).resolve().parents[3] / "binaries" / "data" / "mods"
simul_templates_path = Path("simulation/templates")
simul_template_entity = SimulTemplateEntity(vfs_root)
with open("creation.dot", "w") as dot_f:
with open("creation.dot", "w", encoding="utf-8") as dot_f:
dot_f.write("digraph G {\n")
files = sorted(find_entities(vfs_root))
for f in files:

View File

@ -20,7 +20,7 @@ Please create the file: {}
You can do that by running 'pyrogenesis -dumpSchema' in the 'system' directory
"""
XMLLINT_ERROR_MSG = (
"xmllint not found in your PATH, please install it " "(usually in libxml2 package)"
"xmllint not found in your PATH, please install it (usually in libxml2 package)"
)

View File

@ -2,7 +2,9 @@
## Description
This script checks the game files for missing dependencies, unused files, and for file integrity. If mods are specified, all their dependencies are also checked recursively. This script is particularly useful to detect broken actors or templates.
This script checks the game files for missing dependencies, unused files, and for file integrity.
If mods are specified, all their dependencies are also checked recursively. This script is
particularly useful to detect broken actors or templates.
## Requirements
@ -20,13 +22,15 @@ Checks the game files for missing dependencies, unused files, and for file integrity.
options:
-h, --help show this help message and exit
-u, --check-unused check for all the unused files in the given mods and their dependencies. Implies --check-map-
xml. Currently yields a lot of false positives.
-u, --check-unused check for all the unused files in the given mods and their dependencies.
Implies --check-map-xml. Currently yields a lot of false positives.
-x, --check-map-xml check maps for missing actor and templates.
-a, --validate-actors
run the validator.py script to check if the actors files have extra or missing textures.
run the validator.py script to check if the actors files have extra or
missing textures.
-t, --validate-templates
run the validator.py script to check if the xml files match their (.rng) grammar file.
run the validator.py script to check if the xml files match their (.rng)
grammar file.
-m MOD [MOD ...], --mods MOD [MOD ...]
specify which mods to check. Default to public.
```
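A typical invocation, assuming the script is run from its own directory in a checkout (path and mod name shown for illustration):
```sh
# Check the public mod and validate maps for missing actors and templates
python3 checkrefs.py --check-map-xml --mods public
```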

View File

@ -33,9 +33,7 @@ class SimulTemplateEntity:
return main_mod
def apply_layer(self, base_tag, tag):
"""
apply tag layer to base_tag
"""
"""Apply tag layer to base_tag."""
if tag.get("datatype") == "tokens":
base_tokens = split(r"\s+", base_tag.text or "")
tokens = split(r"\s+", tag.text or "")
@ -89,9 +87,7 @@ class SimulTemplateEntity:
return entity
def _load_inherited(self, base_path, vfs_path, mods, base=None):
"""
vfs_path should be relative to base_path in a mod
"""
# vfs_path should be relative to base_path in a mod
if "|" in vfs_path:
paths = vfs_path.split("|", 1)
base = self._load_inherited(base_path, paths[1], mods, base)
@ -119,15 +115,16 @@ class SimulTemplateEntity:
def find_files(vfs_root, mods, vfs_path, *ext_list):
"""
returns a list of 2-size tuple with:
"""Find files.
Returns a list of 2-tuples with:
- Path relative to the mod base
- full Path
"""
full_exts = ["." + ext for ext in ext_list]
def find_recursive(dp, base):
"""(relative Path, full Path) generator"""
"""(relative Path, full Path) generator."""
if dp.is_dir():
if dp.name not in (".svn", ".git") and not dp.name.endswith("~"):
for fp in dp.iterdir():
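As the reworked docstring says, `find_files` yields pairs of (path relative to the mod base, full path). A simplified stand-in showing the documented shape, not the module's actual traversal (which, as above, also skips `.svn`/`.git` directories and `~` backups):
```python
# Simplified stand-in mirroring find_files' documented return shape:
# (Path relative to the mod base, full Path). Names are illustrative.
from pathlib import Path

def find_files_sketch(vfs_root, mods, vfs_path, *ext_list):
    exts = {"." + ext for ext in ext_list}
    for mod in mods:
        base = Path(vfs_root) / mod
        for ffp in sorted((base / vfs_path).rglob("*")):
            if ffp.is_file() and ffp.suffix in exts:
                yield ffp.relative_to(base), ffp

# for fp, ffp in find_files_sketch("binaries/data/mods", ["public"], "gui", "xml"):
#     print(fp.as_posix(), "->", ffp)
```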

View File

@ -1,35 +1,34 @@
0 A.D. Font Builder
====================
# 0 A.D. Font Builder
The Font Builder generates pre-rendered font glyphs for use in the game engine. Its output for each font consists
of an 8-bit greyscale PNG image and a descriptor .fnt file that describes and locates each individual glyph in the image
(see fileformat.txt for details).
The Font Builder generates pre-rendered font glyphs for use in the game engine. Its output for each
font consists of an 8-bit greyscale PNG image and a descriptor .fnt file that describes and locates
each individual glyph in the image (see `fileformat.txt` for details).
See the wiki page for more information:
http://trac.wildfiregames.com/wiki/Font_Builder2
Prerequisites
-------------
## Prerequisites
The main prerequisite for the fontbuilder is the Cairo imaging library and its Python bindings, PyCairo. On most
Linux distributions, this should be merely a matter of installing a package (e.g. 'python-cairo' for Debian/Ubuntu),
but on Windows it's more involved.
The main prerequisite for the fontbuilder is the Cairo imaging library and its Python bindings,
PyCairo. On most Linux distributions, this should be merely a matter of installing a package (e.g.
`python3-cairo` for Debian/Ubuntu), but on Windows it's more involved.
We'll demonstrate the process for Windows 32-bit first. Grab a Win32 binary for PyCairo from
http://ftp.gnome.org/pub/GNOME/binaries/win32/pycairo/1.8/
and install it using the installer. There are installers available for Python versions 2.6 and 2.7. The installer
extracts the necessary files into Lib\site-packages\cairo within your Python installation directory.
and install it using the installer. There are installers available for Python versions 2.6 and 2.7.
The installer extracts the necessary files into Lib\site-packages\cairo within your Python
installation directory.
Next is Cairo itself, and some dependencies which are required for Cairo to work. Head to
http://ftp.gnome.org/pub/gnome/binaries/win32/dependencies/
and get the following binaries. Listed next to each are their version numbers at the time of writing; these may vary
over time, so be adaptive!
and get the following binaries. Listed next to each are their version numbers at the time of
writing; these may vary over time, so be adaptive!
- Cairo (cairo_1.8.10-3_win32.zip)
- Fontconfig (fontconfig_2.8.0-2_win32.zip)
- Freetype (freetype_2.4.4-1_win32.zip)
@ -37,36 +36,35 @@ over time, so be adaptive!
- libpng (libpng_1.4.3-1_win32.zip)
- zlib (zlib_1.2.5-2_win32.zip).
Each ZIP file will contain a bin subfolder with a DLL file in it. Put the following DLLs in Lib\site-packages\cairo
within your Python installation:
Each ZIP file will contain a bin subfolder with a DLL file in it. Put the following DLLs in
`Lib\site-packages\cairo` within your Python installation:
libcairo-2.dll (from cairo_1.8.10-3_win32.zip)
libfontconfig-1.dll (from fontconfig_2.8.0-2_win32.zip)
freetype6.dll (from freetype_2.4.4-1_win32.zip)
libexpat-1.dll (from expat_2.0.1-1_win32.zip)
libpng14-14.dll (from libpng_1.4.3-1_win32.zip)
zlib1.dll (from zlib_1.2.5-2_win32.zip).
zlib1.dll (from zlib_1.2.5-2_win32.zip)
You should be all set now. To test whether PyCairo installed successfully, try running the following command on a
command line:
You should be all set now. To test whether PyCairo installed successfully, try running the
following command on a command line:
python -c "import cairo"
python -c "import cairo"
If it doesn't complain, then it's installed successfully.
On Windows 64-bit, the process is similar, but no pre-built PyCairo executable appears to be available from Gnome at
the time of writing. Instead, you can install PyGTK+ for 64-bit Windows, which includes Cairo, PyCairo, and the
same set of dependencies. See this page for details:
On Windows 64-bit, the process is similar, but no pre-built PyCairo executable appears to be
available from Gnome at the time of writing. Instead, you can install PyGTK+ for 64-bit Windows,
which includes Cairo, PyCairo, and the same set of dependencies. See this page for details:
http://www.pygtk.org/downloads.html
Running
-------
## Running
Running the font-builder is fairly straight-forward; there are no configuration options. One caveat is that you must
run it from its own directory as the current directory.
Running the font-builder is fairly straightforward; there are no configuration options. One
caveat is that you must run it from its own directory as the current directory.
python fontbuilder.py
This will generate the output .png and .fnt files straight into the binaries/data/mods/mod/fonts directory, ready
for in-game use.
This will generate the output .png and .fnt files straight into the `binaries/data/mods/mod/fonts`
directory, ready for in-game use.

View File

@ -2,12 +2,12 @@
# list of decimal codepoints (from U+0001 to U+FFFF) for which that font
# contains some glyph data.
import FontLoader
import font_loader
def dump_font(ttf):
(face, indexes) = FontLoader.create_cairo_font_face_for_file(
f"../../../binaries/data/tools/fontbuilder/fonts/{ttf}", 0, FontLoader.FT_LOAD_DEFAULT
(face, indexes) = font_loader.create_cairo_font_face_for_file(
f"../../../binaries/data/tools/fontbuilder/fonts/{ttf}", 0, font_loader.FT_LOAD_DEFAULT
)
mappings = [(c, indexes(chr(c))) for c in range(1, 65535)]

View File

@ -1,11 +1,10 @@
#!/usr/bin/env python3
import codecs
import math
import cairo
import FontLoader
import Packer
import font_loader
import packer
# Representation of a rendered glyph
@ -51,13 +50,8 @@ class Glyph:
self.w = bb[2] - bb[0]
self.h = bb[3] - bb[1]
# Force multiple of 4, to avoid leakage across S3TC blocks
# (TODO: is this useful?)
# self.w += (4 - (self.w % 4)) % 4
# self.h += (4 - (self.h % 4)) % 4
def pack(self, packer):
self.pos = packer.Pack(self.w, self.h)
def pack(self, packer_instance):
self.pos = packer_instance.pack(self.w, self.h)
def render(self, ctx):
if ctx.get_font_face() != self.face:
@ -87,56 +81,55 @@ class Glyph:
# Load the set of characters contained in the given text file
def load_char_list(filename):
f = codecs.open(filename, "r", "utf-8")
chars = f.read()
f.close()
with open(filename, encoding="utf-8") as f:
chars = f.read()
return set(chars)
# Construct a Cairo context and surface for rendering text with the given parameters
def setup_context(width, height, renderstyle):
format = cairo.FORMAT_ARGB32 if "colour" in renderstyle else cairo.FORMAT_A8
surface = cairo.ImageSurface(format, width, height)
surface_format = cairo.FORMAT_ARGB32 if "colour" in renderstyle else cairo.FORMAT_A8
surface = cairo.ImageSurface(surface_format, width, height)
ctx = cairo.Context(surface)
ctx.set_line_join(cairo.LINE_JOIN_ROUND)
return ctx, surface
def generate_font(outname, ttfNames, loadopts, size, renderstyle, dsizes):
faceList = []
indexList = []
for i in range(len(ttfNames)):
(face, indices) = FontLoader.create_cairo_font_face_for_file(
f"../../../binaries/data/tools/fontbuilder/fonts/{ttfNames[i]}", 0, loadopts
def generate_font(outname, ttf_names, loadopts, size, renderstyle, dsizes):
face_list = []
index_list = []
for i in range(len(ttf_names)):
(face, indices) = font_loader.create_cairo_font_face_for_file(
f"../../../binaries/data/tools/fontbuilder/fonts/{ttf_names[i]}", 0, loadopts
)
faceList.append(face)
if ttfNames[i] not in dsizes:
dsizes[ttfNames[i]] = 0
indexList.append(indices)
face_list.append(face)
if ttf_names[i] not in dsizes:
dsizes[ttf_names[i]] = 0
index_list.append(indices)
(ctx, _) = setup_context(1, 1, renderstyle)
# TODO this gets the line height from the default font
# TODO: this gets the line height from the default font
# while entire texts can be in the fallback font
ctx.set_font_face(faceList[0])
ctx.set_font_size(size + dsizes[ttfNames[0]])
ctx.set_font_face(face_list[0])
ctx.set_font_size(size + dsizes[ttf_names[0]])
(_, _, linespacing, _, _) = ctx.font_extents()
# Estimate the 'average' height of text, for vertical center alignment
charheight = round(ctx.glyph_extents([(indexList[0]("I"), 0.0, 0.0)])[3])
charheight = round(ctx.glyph_extents([(index_list[0]("I"), 0.0, 0.0)])[3])
# Translate all the characters into glyphs
# (This is inefficient if multiple characters have the same glyph)
glyphs = []
# for c in chars:
for c in range(0x20, 0xFFFE):
for i in range(len(indexList)):
idx = indexList[i](chr(c))
for i in range(len(index_list)):
idx = index_list[i](chr(c))
if c == 0xFFFD and idx == 0: # use "?" if the missing-glyph glyph is missing
idx = indexList[i]("?")
idx = index_list[i]("?")
if idx:
glyphs.append(
Glyph(ctx, renderstyle, chr(c), idx, faceList[i], size + dsizes[ttfNames[i]])
Glyph(ctx, renderstyle, chr(c), idx, face_list[i], size + dsizes[ttf_names[i]])
)
break
@ -156,11 +149,10 @@ def generate_font(outname, ttfNames, loadopts, size, renderstyle, dsizes):
try:
# Using the dumb packer usually creates bigger textures, but runs faster.
# In practice the size difference is so small it always ends up the same size.
packer = Packer.DumbRectanglePacker(w, h)
# packer = Packer.CygonRectanglePacker(w, h)
packer_instance = packer.DumbRectanglePacker(w, h)
for g in glyphs:
g.pack(packer)
except Packer.OutOfSpaceError:
g.pack(packer_instance)
except packer.OutOfSpaceError:
continue
ctx, surface = setup_context(w, h, renderstyle)
@ -169,33 +161,29 @@ def generate_font(outname, ttfNames, loadopts, size, renderstyle, dsizes):
surface.write_to_png(f"{outname}.png")
# Output the .fnt file with all the glyph positions etc
fnt = open(f"{outname}.fnt", "w")
fnt.write("101\n")
fnt.write("%d %d\n" % (w, h))
fnt.write("%s\n" % ("rgba" if "colour" in renderstyle else "a"))
fnt.write("%d\n" % len(glyphs))
fnt.write("%d\n" % linespacing)
fnt.write("%d\n" % charheight)
# sorting unneeded, as glyphs are added in increasing order
# glyphs.sort(key = lambda g: ord(g.char))
for g in glyphs:
x0 = g.x0
y0 = g.y0
# UGLY HACK: see http://trac.wildfiregames.com/ticket/1039 ;
# to handle a-macron-acute characters without the hassle of
# doing proper OpenType GPOS layout (which the font
# doesn't support anyway), we'll just shift the combining acute
# glyph by an arbitrary amount to make it roughly the right
# place when used after an a-macron glyph.
if ord(g.char) == 0x0301:
y0 += charheight / 3
with open(f"{outname}.fnt", "w", encoding="utf-8") as fnt:
fnt.write("101\n")
fnt.write("%d %d\n" % (w, h))
fnt.write("%s\n" % ("rgba" if "colour" in renderstyle else "a"))
fnt.write("%d\n" % len(glyphs))
fnt.write("%d\n" % linespacing)
fnt.write("%d\n" % charheight)
for g in glyphs:
x0 = g.x0
y0 = g.y0
# UGLY HACK: see http://trac.wildfiregames.com/ticket/1039 ;
# to handle a-macron-acute characters without the hassle of
# doing proper OpenType GPOS layout (which the font
# doesn't support anyway), we'll just shift the combining acute
# glyph by an arbitrary amount to make it roughly the right
# place when used after an a-macron glyph.
if ord(g.char) == 0x0301:
y0 += charheight / 3
fnt.write(
"%d %d %d %d %d %d %d %d\n"
% (ord(g.char), g.pos.x, h - g.pos.y, g.w, g.h, -x0, y0, g.xadvance)
)
fnt.close()
fnt.write(
"%d %d %d %d %d %d %d %d\n"
% (ord(g.char), g.pos.x, h - g.pos.y, g.w, g.h, -x0, y0, g.xadvance)
)
return
print("Failed to fit glyphs in texture")
@ -211,12 +199,12 @@ stroked2 = {"colour": True, "stroke": [((0, 0, 0, 1), 2.0)], "fill": [(1, 1, 1,
stroked3 = {"colour": True, "stroke": [((0, 0, 0, 1), 2.5)], "fill": [(1, 1, 1, 1), (1, 1, 1, 1)]}
# For extra glyph support, add your preferred font to the font array
Sans = (["LinBiolinum_Rah.ttf", "FreeSans.ttf"], FontLoader.FT_LOAD_DEFAULT)
Sans_Bold = (["LinBiolinum_RBah.ttf", "FreeSansBold.ttf"], FontLoader.FT_LOAD_DEFAULT)
Sans_Italic = (["LinBiolinum_RIah.ttf", "FreeSansOblique.ttf"], FontLoader.FT_LOAD_DEFAULT)
SansMono = (["DejaVuSansMono.ttf", "FreeMono.ttf"], FontLoader.FT_LOAD_DEFAULT)
Serif = (["texgyrepagella-regular.otf", "FreeSerif.ttf"], FontLoader.FT_LOAD_NO_HINTING)
Serif_Bold = (["texgyrepagella-bold.otf", "FreeSerifBold.ttf"], FontLoader.FT_LOAD_NO_HINTING)
Sans = (["LinBiolinum_Rah.ttf", "FreeSans.ttf"], font_loader.FT_LOAD_DEFAULT)
Sans_Bold = (["LinBiolinum_RBah.ttf", "FreeSansBold.ttf"], font_loader.FT_LOAD_DEFAULT)
Sans_Italic = (["LinBiolinum_RIah.ttf", "FreeSansOblique.ttf"], font_loader.FT_LOAD_DEFAULT)
SansMono = (["DejaVuSansMono.ttf", "FreeMono.ttf"], font_loader.FT_LOAD_DEFAULT)
Serif = (["texgyrepagella-regular.otf", "FreeSerif.ttf"], font_loader.FT_LOAD_NO_HINTING)
Serif_Bold = (["texgyrepagella-bold.otf", "FreeSerifBold.ttf"], font_loader.FT_LOAD_NO_HINTING)
# Define the size differences used to render different fallback fonts
# I.e. when adding a fallback font has smaller glyphs than the original, you can bump it

View File

@ -29,12 +29,12 @@ class Point:
self.y = y
def __cmp__(self, other):
"""Compares the starting position of height slices"""
"""Compare the starting position of height slices."""
return self.x - other.x
class RectanglePacker:
"""Base class for rectangle packing algorithms
"""Base class for rectangle packing algorithms.
By uniting all rectangle packers under this common base class, you can
easily switch between different algorithms to find the most efficient or
@ -44,32 +44,32 @@ class RectanglePacker:
http://www.csc.liv.ac.uk/~epa/surveyhtml.html
"""
def __init__(self, packingAreaWidth, packingAreaHeight):
"""Initializes a new rectangle packer
def __init__(self, packing_area_width, packing_area_height):
"""Initialize a new rectangle packer.
packingAreaWidth: Maximum width of the packing area
packingAreaHeight: Maximum height of the packing area
"""
self.packingAreaWidth = packingAreaWidth
self.packingAreaHeight = packingAreaHeight
self.packingAreaWidth = packing_area_width
self.packingAreaHeight = packing_area_height
def Pack(self, rectangleWidth, rectangleHeight):
"""Allocates space for a rectangle in the packing area
def pack(self, rectangle_width, rectangle_height):
"""Allocate space for a rectangle in the packing area.
rectangleWidth: Width of the rectangle to allocate
rectangleHeight: Height of the rectangle to allocate
Returns the location at which the rectangle has been placed
"""
point = self.TryPack(rectangleWidth, rectangleHeight)
point = self.try_pack(rectangle_width, rectangle_height)
if not point:
raise OutOfSpaceError("Rectangle does not fit in packing area")
return point
def TryPack(self, rectangleWidth, rectangleHeight):
"""Tries to allocate space for a rectangle in the packing area
def try_pack(self, rectangle_width, rectangle_height):
"""Try to allocate space for a rectangle in the packing area.
rectangleWidth: Width of the rectangle to allocate
rectangleHeight: Height of the rectangle to allocate
@ -81,29 +81,28 @@ class RectanglePacker:
class DumbRectanglePacker(RectanglePacker):
def __init__(self, packingAreaWidth, packingAreaHeight):
RectanglePacker.__init__(self, packingAreaWidth, packingAreaHeight)
def __init__(self, packing_area_width, packing_area_height):
RectanglePacker.__init__(self, packing_area_width, packing_area_height)
self.x = 0
self.y = 0
self.rowh = 0
def TryPack(self, rectangleWidth, rectangleHeight):
if self.x + rectangleWidth >= self.packingAreaWidth:
def try_pack(self, rectangle_width, rectangle_height):
if self.x + rectangle_width >= self.packingAreaWidth:
self.x = 0
self.y += self.rowh
self.rowh = 0
if self.y + rectangleHeight >= self.packingAreaHeight:
if self.y + rectangle_height >= self.packingAreaHeight:
return None
r = Point(self.x, self.y)
self.x += rectangleWidth
self.rowh = max(self.rowh, rectangleHeight)
self.x += rectangle_width
self.rowh = max(self.rowh, rectangle_height)
return r
class CygonRectanglePacker(RectanglePacker):
"""
Packer using a custom algorithm by Markus 'Cygon' Ewald
"""Packer using a custom algorithm by Markus 'Cygon' Ewald.
Algorithm conceived by Markus Ewald (cygon at nuclex dot org), though
I'm quite sure I'm not the first one to come up with it :)
@ -119,22 +118,22 @@ class CygonRectanglePacker(RectanglePacker):
analyzed to find the position where the rectangle would achieve the lowest
"""
def __init__(self, packingAreaWidth, packingAreaHeight):
"""Initializes a new rectangle packer
def __init__(self, packing_area_width, packing_area_height):
"""Initialize a new rectangle packer.
packingAreaWidth: Maximum width of the packing area
packingAreaHeight: Maximum height of the packing area
"""
RectanglePacker.__init__(self, packingAreaWidth, packingAreaHeight)
RectanglePacker.__init__(self, packing_area_width, packing_area_height)
# Stores the height silhouette of the rectangles
self.heightSlices = []
self.height_slices = []
# At the beginning, the packing area is a single slice of height 0
self.heightSlices.append(Point(0, 0))
self.height_slices.append(Point(0, 0))
def TryPack(self, rectangleWidth, rectangleHeight):
"""Tries to allocate space for a rectangle in the packing area
def try_pack(self, rectangle_width, rectangle_height):
"""Try to allocate space for a rectangle in the packing area.
rectangleWidth: Width of the rectangle to allocate
rectangleHeight: Height of the rectangle to allocate
@ -146,21 +145,21 @@ class CygonRectanglePacker(RectanglePacker):
# If the rectangle is larger than the packing area in any dimension,
# it will never fit!
if rectangleWidth > self.packingAreaWidth or rectangleHeight > self.packingAreaHeight:
if rectangle_width > self.packingAreaWidth or rectangle_height > self.packingAreaHeight:
return None
# Determine the placement for the new rectangle
placement = self.tryFindBestPlacement(rectangleWidth, rectangleHeight)
placement = self.try_find_best_placement(rectangle_width, rectangle_height)
# If a place for the rectangle could be found, update the height slice
# table to mark the region of the rectangle as being taken.
if placement:
self.integrateRectangle(placement.x, rectangleWidth, placement.y + rectangleHeight)
self.integrate_rectangle(placement.x, rectangle_width, placement.y + rectangle_height)
return placement
def tryFindBestPlacement(self, rectangleWidth, rectangleHeight):
"""Finds the best position for a rectangle of the given dimensions
def try_find_best_placement(self, rectangle_width, rectangle_height):
"""Find the best position for a rectangle of the given dimensions.
rectangleWidth: Width of the rectangle to find a position for
rectangleHeight: Height of the rectangle to find a position for
@ -170,117 +169,117 @@ class CygonRectanglePacker(RectanglePacker):
"""
# Slice index, vertical position and score of the best placement we
# could find
bestSliceIndex = -1 # Slice index where the best placement was found
bestSliceY = 0 # Y position of the best placement found
best_slice_index = -1 # Slice index where the best placement was found
best_slice_y = 0 # Y position of the best placement found
# lower == better!
bestScore = self.packingAreaHeight
best_score = self.packingAreaHeight
# This is the counter for the currently checked position. The search
# works by skipping from slice to slice, determining the suitability
# of the location for the placement of the rectangle.
leftSliceIndex = 0
left_slice_index = 0
# Determine the slice in which the right end of the rectangle is located
rightSliceIndex = bisect_left(self.heightSlices, Point(rectangleWidth, 0))
right_slice_index = bisect_left(self.height_slices, Point(rectangle_width, 0))
while rightSliceIndex <= len(self.heightSlices):
while right_slice_index <= len(self.height_slices):
# Determine the highest slice within the slices covered by the
# rectangle at its current placement. We cannot put the rectangle
# any lower than this without overlapping the other rectangles.
highest = self.heightSlices[leftSliceIndex].y
for index in range(leftSliceIndex + 1, rightSliceIndex):
highest = max(self.heightSlices[index].y, highest)
highest = self.height_slices[left_slice_index].y
for index in range(left_slice_index + 1, right_slice_index):
highest = max(self.height_slices[index].y, highest)
# Only process this position if it doesn't leave the packing area
if highest + rectangleHeight < self.packingAreaHeight:
if highest + rectangle_height < self.packingAreaHeight:
score = highest
if score < bestScore:
bestSliceIndex = leftSliceIndex
bestSliceY = highest
bestScore = score
if score < best_score:
best_slice_index = left_slice_index
best_slice_y = highest
best_score = score
# Advance the starting slice to the next slice start
leftSliceIndex += 1
if leftSliceIndex >= len(self.heightSlices):
left_slice_index += 1
if left_slice_index >= len(self.height_slices):
break
# Advance the ending slice until we're on the proper slice again,
# given the new starting position of the rectangle.
rightRectangleEnd = self.heightSlices[leftSliceIndex].x + rectangleWidth
while rightSliceIndex <= len(self.heightSlices):
if rightSliceIndex == len(self.heightSlices):
rightSliceStart = self.packingAreaWidth
right_rectangle_end = self.height_slices[left_slice_index].x + rectangle_width
while right_slice_index <= len(self.height_slices):
if right_slice_index == len(self.height_slices):
right_slice_start = self.packingAreaWidth
else:
rightSliceStart = self.heightSlices[rightSliceIndex].x
right_slice_start = self.height_slices[right_slice_index].x
# Is this the slice we're looking for?
if rightSliceStart > rightRectangleEnd:
if right_slice_start > right_rectangle_end:
break
rightSliceIndex += 1
right_slice_index += 1
# If we crossed the end of the slice array, the rectangle's right
# end has left the packing area, and thus, our search ends.
if rightSliceIndex > len(self.heightSlices):
if right_slice_index > len(self.height_slices):
break
# Return the best placement we found for this rectangle. If the
# rectangle didn't fit anywhere, the slice index will still have its
# initialization value of -1 and we can report that no placement
# could be found.
if bestSliceIndex == -1:
if best_slice_index == -1:
return None
return Point(self.heightSlices[bestSliceIndex].x, bestSliceY)
return Point(self.height_slices[best_slice_index].x, best_slice_y)
def integrateRectangle(self, left, width, bottom):
"""Integrates a new rectangle into the height slice table
def integrate_rectangle(self, left, width, bottom):
"""Integrate a new rectangle into the height slice table.
left: Position of the rectangle's left side
width: Width of the rectangle
bottom: Position of the rectangle's lower side
"""
# Find the first slice that is touched by the rectangle
startSlice = bisect_left(self.heightSlices, Point(left, 0))
start_slice = bisect_left(self.height_slices, Point(left, 0))
# We scored a direct hit, so we can replace the slice we have hit
firstSliceOriginalHeight = self.heightSlices[startSlice].y
self.heightSlices[startSlice] = Point(left, bottom)
first_slice_original_height = self.height_slices[start_slice].y
self.height_slices[start_slice] = Point(left, bottom)
right = left + width
startSlice += 1
start_slice += 1
# Special case, the rectangle started on the last slice, so we cannot
# use the start slice + 1 for the binary search and the possibly
# already modified start slice height now only remains in our temporary
# firstSliceOriginalHeight variable
if startSlice >= len(self.heightSlices):
# first_slice_original_height variable
if start_slice >= len(self.height_slices):
# If the slice ends within the last slice (usual case, unless it
# has the exact same width the packing area has), add another slice
# to return to the original height at the end of the rectangle.
if right < self.packingAreaWidth:
self.heightSlices.append(Point(right, firstSliceOriginalHeight))
self.height_slices.append(Point(right, first_slice_original_height))
else: # The rectangle doesn't start on the last slice
endSlice = bisect_left(
self.heightSlices, Point(right, 0), startSlice, len(self.heightSlices)
end_slice = bisect_left(
self.height_slices, Point(right, 0), start_slice, len(self.height_slices)
)
# Another direct hit on the final slice's end?
if endSlice < len(self.heightSlices) and not (
Point(right, 0) < self.heightSlices[endSlice]
if end_slice < len(self.height_slices) and not (
Point(right, 0) < self.height_slices[end_slice]
):
del self.heightSlices[startSlice:endSlice]
del self.height_slices[start_slice:end_slice]
else: # No direct hit, rectangle ends inside another slice
# Find out to which height we need to return at the right end of
# the rectangle
if endSlice == startSlice:
returnHeight = firstSliceOriginalHeight
if end_slice == start_slice:
return_height = first_slice_original_height
else:
returnHeight = self.heightSlices[endSlice - 1].y
return_height = self.height_slices[end_slice - 1].y
# Remove all slices covered by the rectangle and begin a new
# slice at its end to return back to the height of the slice on
# which the rectangle ends.
del self.heightSlices[startSlice:endSlice]
del self.height_slices[start_slice:end_slice]
if right < self.packingAreaWidth:
self.heightSlices.insert(startSlice, Point(right, returnHeight))
self.height_slices.insert(start_slice, Point(right, return_height))
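With the PEP 8 rename, callers use `pack()`/`try_pack()` instead of `Pack()`/`TryPack()`. A minimal usage sketch, assuming the module is importable as `packer` (as fontbuilder.py now does):
```python
# pack() returns a Point on success and raises OutOfSpaceError once
# try_pack() can no longer place the rectangle in the packing area.
from packer import DumbRectanglePacker, OutOfSpaceError

rect_packer = DumbRectanglePacker(256, 256)
try:
    for w, h in [(100, 40), (100, 40), (100, 40)]:
        pos = rect_packer.pack(w, h)
        print(f"placed {w}x{h} at ({pos.x}, {pos.y})")
except OutOfSpaceError:
    print("Packing area full; try a larger texture")
```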

View File

@ -4,7 +4,8 @@ This is a collection of scripts to automate 0 A.D.'s i18n process.
See `maintenanceTasks.sh` for the full process.
### Run tests
## Run tests
```sh
pip3 install pytest
python3 -m pytest

View File

@ -23,12 +23,12 @@ import os
import subprocess
from typing import List
from i18n_helper import projectRootDirectory
from i18n_helper import PROJECT_ROOT_DIRECTORY
def get_diff():
"""Return a diff using svn diff"""
os.chdir(projectRootDirectory)
"""Return a diff using svn diff."""
os.chdir(PROJECT_ROOT_DIRECTORY)
diff_process = subprocess.run(["svn", "diff", "binaries"], capture_output=True, check=False)
if diff_process.returncode != 0:
@ -37,8 +37,10 @@ def get_diff():
return io.StringIO(diff_process.stdout.decode("utf-8"))
def check_diff(diff: io.StringIO, verbose=False) -> List[str]:
"""Run through a diff of .po files and check that some of the changes
def check_diff(diff: io.StringIO) -> List[str]:
"""Check a diff of .po files for meaningful changes.
Run through a diff of .po files and check that some of the changes
are real translation changes and not just noise (line changes, ...).
The algorithm isn't extremely clever, but it is quite fast.
"""
@ -97,7 +99,7 @@ def revert_files(files: List[str], verbose=False):
def add_untracked(verbose=False):
"""Add untracked .po files to svn"""
"""Add untracked .po files to svn."""
diff_process = subprocess.run(["svn", "st", "binaries"], capture_output=True, check=False)
if diff_process.stderr != b"":
print(f"Error running svn st: {diff_process.stderr.decode('utf-8')}. Exiting.")
@ -126,6 +128,6 @@ if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--verbose", help="Print reverted files.", action="store_true")
args = parser.parse_args()
need_revert = check_diff(get_diff(), args.verbose)
need_revert = check_diff(get_diff())
revert_files(need_revert, args.verbose)
add_untracked(args.verbose)

View File

@ -21,9 +21,9 @@ import os
import re
import sys
from i18n_helper import l10nFolderName, projectRootDirectory
from i18n_helper import L10N_FOLDER_NAME, PROJECT_ROOT_DIRECTORY
from i18n_helper.catalog import Catalog
from i18n_helper.globber import getCatalogs
from i18n_helper.globber import get_catalogs
VERBOSE = 0
@ -36,76 +36,76 @@ class MessageChecker:
self.regex = re.compile(regex, re.IGNORECASE)
self.human_name = human_name
def check(self, inputFilePath, templateMessage, translatedCatalogs):
def check(self, input_file_path, template_message, translated_catalogs):
patterns = set(
self.regex.findall(
templateMessage.id[0] if templateMessage.pluralizable else templateMessage.id
template_message.id[0] if template_message.pluralizable else template_message.id
)
)
# As a sanity check, verify that the template message is coherent.
# Note that these tend to be false positives.
# TODO: the possible tags are usually comments; we ought to be able to find them.
if templateMessage.pluralizable:
pluralUrls = set(self.regex.findall(templateMessage.id[1]))
if pluralUrls.difference(patterns):
if template_message.pluralizable:
plural_urls = set(self.regex.findall(template_message.id[1]))
if plural_urls.difference(patterns):
print(
f"{inputFilePath} - Different {self.human_name} in "
f"{input_file_path} - Different {self.human_name} in "
f"singular and plural source strings "
f"for '{templateMessage}' in '{inputFilePath}'"
f"for '{template_message}' in '{input_file_path}'"
)
for translationCatalog in translatedCatalogs:
translationMessage = translationCatalog.get(
templateMessage.id, templateMessage.context
for translation_catalog in translated_catalogs:
translation_message = translation_catalog.get(
template_message.id, template_message.context
)
if not translationMessage:
if not translation_message:
continue
translatedPatterns = set(
translated_patterns = set(
self.regex.findall(
translationMessage.string[0]
if translationMessage.pluralizable
else translationMessage.string
translation_message.string[0]
if translation_message.pluralizable
else translation_message.string
)
)
unknown_patterns = translatedPatterns.difference(patterns)
unknown_patterns = translated_patterns.difference(patterns)
if unknown_patterns:
print(
f'{inputFilePath} - {translationCatalog.locale}: '
f'{input_file_path} - {translation_catalog.locale}: '
f'Found unknown {self.human_name} '
f'{", ".join(["`" + x + "`" for x in unknown_patterns])} '
f'in the translation which do not match any of the URLs '
f'in the template: {", ".join(["`" + x + "`" for x in patterns])}'
)
if templateMessage.pluralizable and translationMessage.pluralizable:
for indx, val in enumerate(translationMessage.string):
if template_message.pluralizable and translation_message.pluralizable:
for indx, val in enumerate(translation_message.string):
if indx == 0:
continue
translatedPatternsMulti = set(self.regex.findall(val))
unknown_patterns_multi = translatedPatternsMulti.difference(pluralUrls)
translated_patterns_multi = set(self.regex.findall(val))
unknown_patterns_multi = translated_patterns_multi.difference(plural_urls)
if unknown_patterns_multi:
print(
f'{inputFilePath} - {translationCatalog.locale}: '
f'{input_file_path} - {translation_catalog.locale}: '
f'Found unknown {self.human_name} '
f'{", ".join(["`" + x + "`" for x in unknown_patterns_multi])} '
f'in the pluralised translation which do not '
f'match any of the URLs in the template: '
f'{", ".join(["`" + x + "`" for x in pluralUrls])}'
f'{", ".join(["`" + x + "`" for x in plural_urls])}'
)
def check_translations(inputFilePath):
def check_translations(input_file_path):
if VERBOSE:
print(f"Checking {inputFilePath}")
templateCatalog = Catalog.readFrom(inputFilePath)
print(f"Checking {input_file_path}")
template_catalog = Catalog.read_from(input_file_path)
# If language codes were specified on the command line, filter by those.
filters = sys.argv[1:]
# Load existing translation catalogs.
existingTranslationCatalogs = getCatalogs(inputFilePath, filters)
existing_translation_catalogs = get_catalogs(input_file_path, filters)
spam = MessageChecker("url", r"https?://(?:[a-z0-9-_$@./&+]|(?:%[0-9a-fA-F][0-9a-fA-F]))+")
sprintf = MessageChecker("sprintf", r"%\([^)]+\)s")
@ -115,37 +115,37 @@ def check_translations(inputFilePath):
# Loop through all messages in the .POT catalog for URLs.
# For each, check for the corresponding key in the .PO catalogs.
# If found, check that URLS in the .PO keys are the same as those in the .POT key.
for templateMessage in templateCatalog:
spam.check(inputFilePath, templateMessage, existingTranslationCatalogs)
sprintf.check(inputFilePath, templateMessage, existingTranslationCatalogs)
tags.check(inputFilePath, templateMessage, existingTranslationCatalogs)
for template_message in template_catalog:
spam.check(input_file_path, template_message, existing_translation_catalogs)
sprintf.check(input_file_path, template_message, existing_translation_catalogs)
tags.check(input_file_path, template_message, existing_translation_catalogs)
if VERBOSE:
print(f"Done checking {inputFilePath}")
print(f"Done checking {input_file_path}")
def main():
print(
"\n\tWARNING: Remember to regenerate the POT files with “updateTemplates.py” "
"\n\tWARNING: Remember to regenerate the POT files with “update_templates.py” "
"before you run this script.\n\tPOT files are not in the repository.\n"
)
foundPots = 0
for root, _folders, filenames in os.walk(projectRootDirectory):
found_pots = 0
for root, _folders, filenames in os.walk(PROJECT_ROOT_DIRECTORY):
for filename in filenames:
if (
len(filename) > 4
and filename[-4:] == ".pot"
and os.path.basename(root) == l10nFolderName
and os.path.basename(root) == L10N_FOLDER_NAME
):
foundPots += 1
found_pots += 1
multiprocessing.Process(
target=check_translations, args=(os.path.join(root, filename),)
).start()
if foundPots == 0:
if found_pots == 0:
print(
"This script did not work because no '.pot' files were found. "
"Please run 'updateTemplates.py' to generate the '.pot' files, "
"and run 'pullTranslations.py' to pull the latest translations from Transifex. "
"Please run 'update_templates.py' to generate the '.pot' files, "
"and run 'pull_translations.py' to pull the latest translations from Transifex. "
"Then you can run this script to check for spam in translations."
)
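The checker compares the set of regex hits in each translation against those in the template string; anything extra is reported. A standalone sketch of that core comparison, using the sprintf pattern defined above:
```python
# Sketch of MessageChecker's core comparison: placeholders that appear
# in the translation but not in the template are flagged as unknown.
import re

SPRINTF = re.compile(r"%\([^)]+\)s")

def unknown_placeholders(template: str, translation: str) -> set:
    return set(SPRINTF.findall(translation)) - set(SPRINTF.findall(template))

print(unknown_placeholders("Hello %(name)s", "Bonjour %(nom)s"))
# -> {'%(nom)s'}: the translation invented a placeholder
```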

View File

@ -1,73 +0,0 @@
#!/usr/bin/env python3
#
# Copyright (C) 2024 Wildfire Games.
# This file is part of 0 A.D.
#
# 0 A.D. is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 2 of the License, or
# (at your option) any later version.
#
# 0 A.D. is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with 0 A.D. If not, see <http://www.gnu.org/licenses/>.
"""
This file removes unneeded personal data from the translators. Most notably
the e-mail addresses. We need to translators' nicks for the credits, but no
more data is required.
TODO: ideally we don't even pull the e-mail addresses in the .po files.
However that needs to be fixed on the transifex side, see rP25896. For now
strip the e-mails using this script.
"""
import fileinput
import glob
import os
import re
import sys
from i18n_helper import l10nFolderName, projectRootDirectory, transifexClientFolder
def main():
translatorMatch = re.compile(r"^(#\s+[^,<]*)\s+<.*>(.*)")
lastTranslatorMatch = re.compile(r"^(\"Last-Translator:[^,<]*)\s+<.*>(.*)")
for root, folders, _ in os.walk(projectRootDirectory):
for folder in folders:
if folder == l10nFolderName:
if os.path.exists(os.path.join(root, folder, transifexClientFolder)):
path = os.path.join(root, folder, "*.po")
files = glob.glob(path)
for file in files:
usernames = []
reached = False
for line in fileinput.input(
file.replace("\\", "/"), inplace=True, encoding="utf-8"
):
if reached:
if line == "# \n":
line = ""
m = translatorMatch.match(line)
if m:
if m.group(1) in usernames:
line = ""
else:
line = m.group(1) + m.group(2) + "\n"
usernames.append(m.group(1))
m2 = lastTranslatorMatch.match(line)
if m2:
line = re.sub(lastTranslatorMatch, r"\1\2", line)
elif line.strip() == "# Translators:":
reached = True
sys.stdout.write(line)
if __name__ == "__main__":
main()

View File

@ -0,0 +1,78 @@
#!/usr/bin/env python3
#
# Copyright (C) 2024 Wildfire Games.
# This file is part of 0 A.D.
#
# 0 A.D. is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 2 of the License, or
# (at your option) any later version.
#
# 0 A.D. is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with 0 A.D. If not, see <http://www.gnu.org/licenses/>.
"""Remove unnecessary personal data of translators.
This file removes unneeded personal data from the translators. Most notably
the e-mail addresses. We need the translators' nicks for the credits, but no
more data is required.
TODO: ideally we don't even pull the e-mail addresses in the .po files.
However that needs to be fixed on the transifex side, see rP25896. For now
strip the e-mails using this script.
"""
import fileinput
import glob
import os
import re
import sys
from i18n_helper import L10N_FOLDER_NAME, PROJECT_ROOT_DIRECTORY, TRANSIFEX_CLIENT_FOLDER
def main():
translator_match = re.compile(r"^(#\s+[^,<]*)\s+<.*>(.*)")
last_translator_match = re.compile(r"^(\"Last-Translator:[^,<]*)\s+<.*>(.*)")
for root, folders, _ in os.walk(PROJECT_ROOT_DIRECTORY):
for folder in folders:
if folder != L10N_FOLDER_NAME:
continue
if not os.path.exists(os.path.join(root, folder, TRANSIFEX_CLIENT_FOLDER)):
continue
path = os.path.join(root, folder, "*.po")
files = glob.glob(path)
for file in files:
usernames = []
reached = False
for line in fileinput.input(
file.replace("\\", "/"), inplace=True, encoding="utf-8"
):
if reached:
if line == "# \n":
line = ""
m = translator_match.match(line)
if m:
if m.group(1) in usernames:
line = ""
else:
line = m.group(1) + m.group(2) + "\n"
usernames.append(m.group(1))
m2 = last_translator_match.match(line)
if m2:
line = re.sub(last_translator_match, r"\1\2", line)
elif line.strip() == "# Translators:":
reached = True
sys.stdout.write(line)
if __name__ == "__main__":
main()
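What the two regexes strip, shown on invented sample lines: the `<email>` part goes, the nick and any trailing text stay.
```python
# The credit-line regex keeps the name and drops the "<email>" part;
# the Last-Translator regex does the same for the PO header.
import re

translator_match = re.compile(r"^(#\s+[^,<]*)\s+<.*>(.*)")
last_translator_match = re.compile(r"^(\"Last-Translator:[^,<]*)\s+<.*>(.*)")

line = "# Jane Doe <jane@example.org>, 2024"
m = translator_match.match(line)
print(m.group(1) + m.group(2))  # -> # Jane Doe, 2024

header = '"Last-Translator: Jane Doe <jane@example.org>\\n"'
print(re.sub(last_translator_match, r"\1\2", header))
# -> "Last-Translator: Jane Doe\n"
```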

View File

@ -16,9 +16,10 @@
# You should have received a copy of the GNU General Public License
# along with 0 A.D. If not, see <http://www.gnu.org/licenses/>.
"""
This file updates the translators credits located in the public mod GUI files, using
translators names from the .po files.
"""Update the translator credits.
This file updates the translator credits located in the public mod GUI files, using
translator names from the .po files.
If translators change their names on Transifex, the script will remove the old names.
TODO: It should be possible to add people in the list manually, and protect them against
@ -26,7 +27,7 @@ automatic deletion. This has not been needed so far. A possibility would be to add an
optional boolean entry to the dictionary containing the name.
Translatable strings will be extracted from the generated file, so this should be run
once before updateTemplates.py.
once before update_templates.py.
"""
import json
@ -36,18 +37,20 @@ from collections import defaultdict
from pathlib import Path
from babel import Locale, UnknownLocaleError
from i18n_helper import l10nFolderName, projectRootDirectory, transifexClientFolder
from i18n_helper import L10N_FOLDER_NAME, PROJECT_ROOT_DIRECTORY, TRANSIFEX_CLIENT_FOLDER
poLocations = []
for root, folders, _filenames in os.walk(projectRootDirectory):
po_locations = []
for root, folders, _filenames in os.walk(PROJECT_ROOT_DIRECTORY):
for folder in folders:
if folder == l10nFolderName:
if os.path.exists(os.path.join(root, folder, transifexClientFolder)):
poLocations.append(os.path.join(root, folder))
if folder != L10N_FOLDER_NAME:
continue
creditsLocation = os.path.join(
projectRootDirectory,
if os.path.exists(os.path.join(root, folder, TRANSIFEX_CLIENT_FOLDER)):
po_locations.append(os.path.join(root, folder))
credits_location = os.path.join(
PROJECT_ROOT_DIRECTORY,
"binaries",
"data",
"mods",
@ -59,19 +62,19 @@ creditsLocation = os.path.join(
)
# This dictionary will hold creditors lists for each language, indexed by code
langsLists = defaultdict(list)
langs_lists = defaultdict(list)
# Create the new JSON data
newJSONData = {"Title": "Translators", "Content": []}
new_json_data = {"Title": "Translators", "Content": []}
# Now go through the list of languages and search the .po files for people
# Prepare some regexes
translatorMatch = re.compile(r"^#\s+([^,<]*)")
deletedUsernameMatch = re.compile(r"[0-9a-f]{32}(_[0-9a-f]{7})?")
translator_match = re.compile(r"^#\s+([^,<]*)")
deleted_username_match = re.compile(r"[0-9a-f]{32}(_[0-9a-f]{7})?")
# Search
for location in poLocations:
for location in po_locations:
files = Path(location).glob("*.po")
for file in files:
@ -81,47 +84,46 @@ for location in poLocations:
if lang in ("debug", "long"):
continue
with file.open(encoding="utf-8") as poFile:
with file.open(encoding="utf-8") as po_file:
reached = False
for line in poFile:
for line in po_file:
if reached:
m = translatorMatch.match(line)
m = translator_match.match(line)
if not m:
break
username = m.group(1)
if not deletedUsernameMatch.fullmatch(username):
langsLists[lang].append(username)
if not deleted_username_match.fullmatch(username):
langs_lists[lang].append(username)
if line.strip() == "# Translators:":
reached = True
# Sort translator names and remove duplicates
# Sorting should ignore case, but prefer versions of names starting
# with an upper case letter to have a neat credits list.
for lang in langsLists:
for lang in langs_lists:
translators = {}
for name in sorted(langsLists[lang], reverse=True):
for name in sorted(langs_lists[lang], reverse=True):
if name.lower() not in translators or name.istitle():
translators[name.lower()] = name
langsLists[lang] = sorted(translators.values(), key=lambda s: s.lower())
langs_lists[lang] = sorted(translators.values(), key=lambda s: s.lower())
# Now insert the new data into the new JSON file
for langCode, langList in sorted(langsLists.items()):
for lang_code, lang_list in sorted(langs_lists.items()):
try:
lang_name = Locale.parse(langCode).english_name
lang_name = Locale.parse(lang_code).english_name
except UnknownLocaleError:
lang_name = Locale.parse("en").languages.get(langCode)
lang_name = Locale.parse("en").languages.get(lang_code)
if not lang_name:
raise
translators = [{"name": name} for name in langList]
newJSONData["Content"].append({"LangName": lang_name, "List": translators})
translators = [{"name": name} for name in lang_list]
new_json_data["Content"].append({"LangName": lang_name, "List": translators})
# Sort languages by their English names
newJSONData["Content"] = sorted(newJSONData["Content"], key=lambda x: x["LangName"])
new_json_data["Content"] = sorted(new_json_data["Content"], key=lambda x: x["LangName"])
# Save the JSON data to the credits file
creditsFile = open(creditsLocation, "w", encoding="utf-8")
json.dump(newJSONData, creditsFile, indent=4)
creditsFile.close()
with open(credits_location, "w", encoding="utf-8") as credits_file:
json.dump(new_json_data, credits_file, indent=4)
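The dedup above is subtle: sorting in reverse means lower-case variants are seen first, and a title-cased variant encountered later overwrites them. A quick standalone illustration with invented names:
```python
# Ignore case when deduplicating, but prefer the title-cased variant;
# finally sort case-insensitively for a neat credits list.
names = ["alice", "Alice", "BOB", "bob", "carol"]

translators = {}
for name in sorted(names, reverse=True):
    if name.lower() not in translators or name.istitle():
        translators[name.lower()] = name

print(sorted(translators.values(), key=lambda s: s.lower()))
# -> ['Alice', 'bob', 'carol']
```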

185 source/tools/i18n/extractors/extractors.py Executable file → Normal file
View File

@ -1,5 +1,3 @@
#!/usr/bin/env python3
#
# Copyright (C) 2024 Wildfire Games.
# All rights reserved.
#
@ -26,7 +24,7 @@
# ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import codecs
import json as jsonParser
import json
import os
import re
import sys
@ -34,7 +32,7 @@ from textwrap import dedent
def pathmatch(mask, path):
"""Matches paths to a mask, where the mask supports * and **.
"""Match paths to a mask, where the mask supports * and **.
Paths use / as the separator
* matches a sequence of characters without /.
@ -56,8 +54,8 @@ def pathmatch(mask, path):
class Extractor:
def __init__(self, directoryPath, filemasks, options):
self.directoryPath = directoryPath
def __init__(self, directory_path, filemasks, options):
self.directoryPath = directory_path
self.options = options
if isinstance(filemasks, dict):
@ -68,15 +66,15 @@ class Extractor:
self.excludeMasks = []
def run(self):
"""Extracts messages.
"""Extract messages.
:return: An iterator over ``(message, plural, context, (location, pos), comment)``
tuples.
:rtype: ``iterator``
"""
empty_string_pattern = re.compile(r"^\s*$")
directoryAbsolutePath = os.path.abspath(self.directoryPath)
for root, folders, filenames in os.walk(directoryAbsolutePath):
directory_absolute_path = os.path.abspath(self.directoryPath)
for root, folders, filenames in os.walk(directory_absolute_path):
for subdir in folders:
if subdir.startswith((".", "_")):
folders.remove(subdir)
@ -92,14 +90,14 @@ class Extractor:
else:
for filemask in self.includeMasks:
if pathmatch(filemask, filename):
filepath = os.path.join(directoryAbsolutePath, filename)
filepath = os.path.join(directory_absolute_path, filename)
for (
message,
plural,
context,
position,
comments,
) in self.extractFromFile(filepath):
) in self.extract_from_file(filepath):
if empty_string_pattern.match(message):
continue
@ -107,15 +105,15 @@ class Extractor:
filename = "\u2068" + filename + "\u2069"
yield message, plural, context, (filename, position), comments
def extractFromFile(self, filepath):
"""Extracts messages from a specific file.
def extract_from_file(self, filepath):
"""Extract messages from a specific file.
:return: An iterator over ``(message, plural, context, position, comments)`` tuples.
:rtype: ``iterator``
"""
class javascript(Extractor):
class JavascriptExtractor(Extractor):
"""Extract messages from JavaScript source code."""
empty_msgid_warning = (
@ -123,7 +121,7 @@ class javascript(Extractor):
"returns the header entry with meta information, not the empty string."
)
def extractJavascriptFromFile(self, fileObject):
def extract_javascript_from_file(self, file_object):
from babel.messages.jslexer import tokenize, unquote_string
funcname = message_lineno = None
@ -136,7 +134,7 @@ class javascript(Extractor):
comment_tags = self.options.get("commentTags", [])
keywords = self.options.get("keywords", {}).keys()
for token in tokenize(fileObject.read(), dotted=False):
for token in tokenize(file_object.read(), dotted=False):
if token.type == "operator" and (
token.value == "(" or (call_stack != -1 and (token.value in ("[", "{")))
):
@ -238,9 +236,11 @@ class javascript(Extractor):
last_token = token
def extractFromFile(self, filepath):
with codecs.open(filepath, "r", encoding="utf-8-sig") as fileObject:
for lineno, funcname, messages, comments in self.extractJavascriptFromFile(fileObject):
def extract_from_file(self, filepath):
with codecs.open(filepath, "r", encoding="utf-8-sig") as file_object:
for lineno, funcname, messages, comments in self.extract_javascript_from_file(
file_object
):
spec = self.options.get("keywords", {})[funcname] or (1,) if funcname else (1,)
if not isinstance(messages, (list, tuple)):
messages = [messages]
@ -278,7 +278,7 @@ class javascript(Extractor):
if not messages[first_msg_index]:
# An empty string msgid isn't valid, emit a warning
where = "%s:%i" % (
hasattr(fileObject, "name") and fileObject.name or "(unknown)",
hasattr(file_object, "name") and file_object.name or "(unknown)",
lineno,
)
print(self.empty_msgid_warning % where, file=sys.stderr)
@ -293,54 +293,54 @@ class javascript(Extractor):
yield message, plural, context, lineno, comments
class cpp(javascript):
class CppExtractor(JavascriptExtractor):
"""Extract messages from C++ source code."""
class txt(Extractor):
class TxtExtractor(Extractor):
"""Extract messages from plain text files."""
def extractFromFile(self, filepath):
with codecs.open(filepath, "r", encoding="utf-8-sig") as fileObject:
lineno = 0
for line in [line.strip("\n\r") for line in fileObject.readlines()]:
lineno += 1
def extract_from_file(self, filepath):
with codecs.open(filepath, "r", encoding="utf-8-sig") as file_object:
for lineno, line in enumerate(
[line.strip("\n\r") for line in file_object.readlines()], start=1
):
if line:
yield line, None, None, lineno, []
class json(Extractor):
class JsonExtractor(Extractor):
"""Extract messages from JSON files."""
def __init__(self, directoryPath=None, filemasks=None, options=None):
def __init__(self, directory_path=None, filemasks=None, options=None):
if options is None:
options = {}
if filemasks is None:
filemasks = []
super().__init__(directoryPath, filemasks, options)
super().__init__(directory_path, filemasks, options)
self.keywords = self.options.get("keywords", {})
self.context = self.options.get("context", None)
self.comments = self.options.get("comments", [])
def setOptions(self, options):
def set_options(self, options):
self.options = options
self.keywords = self.options.get("keywords", {})
self.context = self.options.get("context", None)
self.comments = self.options.get("comments", [])
def extractFromFile(self, filepath):
with codecs.open(filepath, "r", "utf-8") as fileObject:
for message, context in self.extractFromString(fileObject.read()):
def extract_from_file(self, filepath):
with codecs.open(filepath, "r", "utf-8") as file_object:
for message, context in self.extract_from_string(file_object.read()):
yield message, None, context, None, self.comments
def extractFromString(self, string):
jsonDocument = jsonParser.loads(string)
if isinstance(jsonDocument, list):
for message, context in self.parseList(jsonDocument):
def extract_from_string(self, string):
json_document = json.loads(string)
if isinstance(json_document, list):
for message, context in self.parse_list(json_document):
if message: # Skip empty strings.
yield message, context
elif isinstance(jsonDocument, dict):
for message, context in self.parseDictionary(jsonDocument):
elif isinstance(json_document, dict):
for message, context in self.parse_dictionary(json_document):
if message: # Skip empty strings.
yield message, context
else:
@ -349,24 +349,22 @@ class json(Extractor):
"You must extend the JSON extractor to support it."
)
def parseList(self, itemsList):
index = 0
for listItem in itemsList:
if isinstance(listItem, list):
for message, context in self.parseList(listItem):
def parse_list(self, items_list):
for list_item in items_list:
if isinstance(list_item, list):
for message, context in self.parse_list(list_item):
yield message, context
elif isinstance(listItem, dict):
for message, context in self.parseDictionary(listItem):
elif isinstance(list_item, dict):
for message, context in self.parse_dictionary(list_item):
yield message, context
index += 1
def parseDictionary(self, dictionary):
def parse_dictionary(self, dictionary):
for keyword in dictionary:
if keyword in self.keywords:
if isinstance(dictionary[keyword], str):
yield self.extractString(dictionary[keyword], keyword)
yield self.extract_string(dictionary[keyword], keyword)
elif isinstance(dictionary[keyword], list):
for message, context in self.extractList(dictionary[keyword], keyword):
for message, context in self.extract_list(dictionary[keyword], keyword):
yield message, context
elif isinstance(dictionary[keyword], dict):
extract = None
@ -374,22 +372,22 @@ class json(Extractor):
"extractFromInnerKeys" in self.keywords[keyword]
and self.keywords[keyword]["extractFromInnerKeys"]
):
for message, context in self.extractDictionaryInnerKeys(
for message, context in self.extract_dictionary_inner_keys(
dictionary[keyword], keyword
):
yield message, context
else:
extract = self.extractDictionary(dictionary[keyword], keyword)
extract = self.extract_dictionary(dictionary[keyword], keyword)
if extract:
yield extract
elif isinstance(dictionary[keyword], list):
for message, context in self.parseList(dictionary[keyword]):
for message, context in self.parse_list(dictionary[keyword]):
yield message, context
elif isinstance(dictionary[keyword], dict):
for message, context in self.parseDictionary(dictionary[keyword]):
for message, context in self.parse_dictionary(dictionary[keyword]):
yield message, context
def extractString(self, string, keyword):
def extract_string(self, string, keyword):
context = None
if "tagAsContext" in self.keywords[keyword]:
context = keyword
@ -399,18 +397,16 @@ class json(Extractor):
context = self.context
return string, context
def extractList(self, itemsList, keyword):
index = 0
for listItem in itemsList:
if isinstance(listItem, str):
yield self.extractString(listItem, keyword)
elif isinstance(listItem, dict):
extract = self.extractDictionary(listItem[keyword], keyword)
def extract_list(self, items_list, keyword):
for list_item in items_list:
if isinstance(list_item, str):
yield self.extract_string(list_item, keyword)
elif isinstance(list_item, dict):
extract = self.extract_dictionary(list_item[keyword], keyword)
if extract:
yield extract
index += 1
def extractDictionary(self, dictionary, keyword):
def extract_dictionary(self, dictionary, keyword):
message = dictionary.get("_string", None)
if message and isinstance(message, str):
context = None
@ -425,45 +421,47 @@ class json(Extractor):
return message, context
return None
def extractDictionaryInnerKeys(self, dictionary, keyword):
for innerKeyword in dictionary:
if isinstance(dictionary[innerKeyword], str):
yield self.extractString(dictionary[innerKeyword], keyword)
elif isinstance(dictionary[innerKeyword], list):
yield from self.extractList(dictionary[innerKeyword], keyword)
elif isinstance(dictionary[innerKeyword], dict):
extract = self.extractDictionary(dictionary[innerKeyword], keyword)
def extract_dictionary_inner_keys(self, dictionary, keyword):
for inner_keyword in dictionary:
if isinstance(dictionary[inner_keyword], str):
yield self.extract_string(dictionary[inner_keyword], keyword)
elif isinstance(dictionary[inner_keyword], list):
yield from self.extract_list(dictionary[inner_keyword], keyword)
elif isinstance(dictionary[inner_keyword], dict):
extract = self.extract_dictionary(dictionary[inner_keyword], keyword)
if extract:
yield extract
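Before the XML extractor, a hedged usage sketch of the JSON extractor above (the sample document and keyword options are invented for illustration):

```python
# 'tagAsContext' makes the matched keyword double as the message context.
extractor = JsonExtractor(options={"keywords": {"Name": {"tagAsContext": True}}})
sample = '{"Name": "Athens", "Population": 300}'
for message, context in extractor.extract_from_string(sample):
    print(message, context)  # Athens Name
```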
class xml(Extractor):
class XmlExtractor(Extractor):
"""Extract messages from XML files."""
def __init__(self, directoryPath, filemasks, options):
super().__init__(directoryPath, filemasks, options)
def __init__(self, directory_path, filemasks, options):
super().__init__(directory_path, filemasks, options)
self.keywords = self.options.get("keywords", {})
self.jsonExtractor = None
def getJsonExtractor(self):
def get_json_extractor(self):
if not self.jsonExtractor:
self.jsonExtractor = json()
self.jsonExtractor = JsonExtractor()
return self.jsonExtractor
def extractFromFile(self, filepath):
def extract_from_file(self, filepath):
from lxml import etree
with codecs.open(filepath, "r", encoding="utf-8-sig") as fileObject:
xmlDocument = etree.parse(fileObject)
with codecs.open(filepath, "r", encoding="utf-8-sig") as file_object:
xml_document = etree.parse(file_object)
for keyword in self.keywords:
for element in xmlDocument.iter(keyword):
for element in xml_document.iter(keyword):
lineno = element.sourceline
if element.text is not None:
comments = []
if "extractJson" in self.keywords[keyword]:
jsonExtractor = self.getJsonExtractor()
jsonExtractor.setOptions(self.keywords[keyword]["extractJson"])
for message, context in jsonExtractor.extractFromString(element.text):
json_extractor = self.get_json_extractor()
json_extractor.set_options(self.keywords[keyword]["extractJson"])
for message, context in json_extractor.extract_from_string(
element.text
):
yield message, None, context, lineno, comments
else:
context = None
@ -480,12 +478,12 @@ class xml(Extractor):
) # Remove tabs, line breaks and unnecessary spaces.
comments.append(comment)
if "splitOnWhitespace" in self.keywords[keyword]:
for splitText in element.text.split():
for split_text in element.text.split():
# splitOnWhitespace is used for token lists; there, a leading
# '-' marks a token for removal, so such tokens are not
# processed here either.
if splitText[0] != "-":
yield str(splitText), None, context, lineno, comments
if split_text[0] != "-":
yield str(split_text), None, context, lineno, comments
else:
yield str(element.text), None, context, lineno, comments
@ -506,18 +504,19 @@ class FakeSectionHeader:
return self.fp.readline()
class ini(Extractor):
class IniExtractor(Extractor):
"""Extract messages from INI files."""
def __init__(self, directoryPath, filemasks, options):
super().__init__(directoryPath, filemasks, options)
def __init__(self, directory_path, filemasks, options):
super().__init__(directory_path, filemasks, options)
self.keywords = self.options.get("keywords", [])
def extractFromFile(self, filepath):
def extract_from_file(self, filepath):
import ConfigParser
config = ConfigParser.RawConfigParser()
config.readfp(FakeSectionHeader(open(filepath)))
with open(filepath, encoding="utf-8") as fd:
config.read_file(FakeSectionHeader(fd))
for keyword in self.keywords:
message = config.get("root", keyword).strip('"').strip("'")
context = None
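The `FakeSectionHeader` wrapper exists because `configparser` rejects INI files without a section header; a hypothetical stand-alone equivalent of the same trick:

```python
import configparser
import io

def with_fake_section(fp, section="root"):
    # Prepend a section header so configparser accepts header-less INI content.
    return io.StringIO(f"[{section}]\n" + fp.read())

config = configparser.RawConfigParser()
config.read_file(with_fake_section(io.StringIO('greeting = "Hello"\n')))
print(config.get("root", "greeting").strip('"'))  # Hello
```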

View File

@ -21,17 +21,17 @@ import multiprocessing
import os
import sys
from i18n_helper import l10nFolderName, projectRootDirectory
from i18n_helper import L10N_FOLDER_NAME, PROJECT_ROOT_DIRECTORY
from i18n_helper.catalog import Catalog
from i18n_helper.globber import getCatalogs
from i18n_helper.globber import get_catalogs
DEBUG_PREFIX = "X_X "
def generate_long_strings(root_path, input_file_name, output_file_name, languages=None):
"""
Generate the 'long strings' debug catalog.
"""Generate the 'long strings' debug catalog.
This catalog contains the longest singular and plural string,
found amongst all translated languages or a filtered subset.
It can be used to check if GUI elements are large enough.
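As a toy illustration of that selection rule (a hypothetical helper, not part of the tool):

```python
def longest(*candidates):
    # Keep the variant with the most characters, to stress-test GUI sizing.
    return max((c for c in candidates if c), key=len, default="")

assert longest("Cavalry", "Cavalerie", "Kavallerie") == "Kavallerie"
```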
@ -41,7 +41,7 @@ def generate_long_strings(root_path, input_file_name, output_file_name, language
input_file_path = os.path.join(root_path, input_file_name)
output_file_path = os.path.join(root_path, output_file_name)
template_catalog = Catalog.readFrom(input_file_path)
template_catalog = Catalog.read_from(input_file_path)
# Pretend we write English to get plurals.
long_string_catalog = Catalog(locale="en")
@ -55,7 +55,7 @@ def generate_long_strings(root_path, input_file_name, output_file_name, language
)
# Load existing translation catalogs.
existing_translation_catalogs = getCatalogs(input_file_path, languages)
existing_translation_catalogs = get_catalogs(input_file_path, languages)
# If any existing translation has more characters than the average expansion, use that instead.
for translation_catalog in existing_translation_catalogs:
@ -100,12 +100,12 @@ def generate_long_strings(root_path, input_file_name, output_file_name, language
longest_plural_string,
]
translation_message = long_string_catalog_message
long_string_catalog.writeTo(output_file_path)
long_string_catalog.write_to(output_file_path)
def generate_debug(root_path, input_file_name, output_file_name):
"""
Generate a debug catalog to identify untranslated strings.
"""Generate a debug catalog to identify untranslated strings.
This prefixes all strings with DEBUG_PREFIX, to easily identify
untranslated strings while still making the game navigable.
The catalog is debug.*.po
@ -114,7 +114,7 @@ def generate_debug(root_path, input_file_name, output_file_name):
input_file_path = os.path.join(root_path, input_file_name)
output_file_path = os.path.join(root_path, output_file_name)
template_catalog = Catalog.readFrom(input_file_path)
template_catalog = Catalog.read_from(input_file_path)
# Pretend we write English to get plurals.
out_catalog = Catalog(locale="en")
@ -134,7 +134,7 @@ def generate_debug(root_path, input_file_name, output_file_name):
auto_comments=message.auto_comments,
)
out_catalog.writeTo(output_file_path)
out_catalog.write_to(output_file_path)
def main():
@ -159,12 +159,12 @@ def main():
sys.exit(0)
found_pot_files = 0
for root, _, filenames in os.walk(projectRootDirectory):
for root, _, filenames in os.walk(PROJECT_ROOT_DIRECTORY):
for filename in filenames:
if (
len(filename) > 4
and filename[-4:] == ".pot"
and os.path.basename(root) == l10nFolderName
and os.path.basename(root) == L10N_FOLDER_NAME
):
found_pot_files += 1
if args.debug:
@ -180,8 +180,8 @@ def main():
if found_pot_files == 0:
print(
"This script did not work because no '.pot' files were found. "
"Please, run 'updateTemplates.py' to generate the '.pot' files, and run "
"'pullTranslations.py' to pull the latest translations from Transifex. "
"Please, run 'update_templates.py' to generate the '.pot' files, and run "
"'pull_translations.py' to pull the latest translations from Transifex. "
"Then you can run this script to generate '.po' files with obvious debug strings."
)

View File

@ -4,12 +4,12 @@ set -ev
# Download translations from the latest nightly build
# This will overwrite any uncommitted changes to messages.json files
cd "$(dirname $0)"
cd "$(dirname "$0")"
svn export --force --depth files https://svn.wildfiregames.com/nightly-build/trunk/binaries/data/l10n ../../../binaries/data/l10n
for m in "mod" "public"; do
svn export --force --depth files https://svn.wildfiregames.com/nightly-build/trunk/binaries/data/mods/${m}/l10n ../../../binaries/data/mods/${m}/l10n
svn export --force --depth files https://svn.wildfiregames.com/nightly-build/trunk/binaries/data/mods/${m}/l10n ../../../binaries/data/mods/${m}/l10n
done
svn export --force https://svn.wildfiregames.com/nightly-build/trunk/binaries/data/mods/public/gui/credits/texts/translators.json ../../../binaries/data/mods/public/gui/credits/texts/translators.json

View File

@ -1,9 +1,9 @@
import os
l10nFolderName = "l10n"
transifexClientFolder = ".tx"
l10nToolsDirectory = os.path.dirname(os.path.realpath(__file__))
projectRootDirectory = os.path.abspath(
os.path.join(l10nToolsDirectory, os.pardir, os.pardir, os.pardir, os.pardir)
L10N_FOLDER_NAME = "l10n"
TRANSIFEX_CLIENT_FOLDER = ".tx"
L10N_TOOLS_DIRECTORY = os.path.dirname(os.path.realpath(__file__))
PROJECT_ROOT_DIRECTORY = os.path.abspath(
os.path.join(L10N_TOOLS_DIRECTORY, os.pardir, os.pardir, os.pardir, os.pardir)
)

View File

@ -1,4 +1,4 @@
"""Wrapper around babel Catalog / .po handling"""
"""Wrapper around babel Catalog / .po handling."""
from datetime import datetime
@ -44,13 +44,10 @@ class Catalog(BabelCatalog):
return [("Project-Id-Version", self._project), *headers]
@staticmethod
def readFrom(file_path, locale=None):
return read_po(open(file_path, "r+", encoding="utf-8"), locale=locale)
def read_from(file_path, locale=None):
with open(file_path, "r+", encoding="utf-8") as fd:
return read_po(fd, locale=locale)
def writeTo(self, file_path):
return write_po(
fileobj=open(file_path, "wb+"),
catalog=self,
width=90,
sort_by_file=True,
)
def write_to(self, file_path):
with open(file_path, "wb+") as fd:
return write_po(fileobj=fd, catalog=self, width=90, sort_by_file=True)
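A hedged sketch of the renamed helpers in use, mirroring the debug-catalog generation above (the paths are illustrative):

```python
template = Catalog.read_from("binaries/data/l10n/engine.pot")
debug = Catalog(locale="en")
for message in template:
    if message.id and not message.pluralizable:
        # Prefix each singular string so untranslated text is easy to spot.
        debug.add(id=message.id, string="X_X " + message.id)
debug.write_to("binaries/data/l10n/debug.engine.po")
```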

View File

@ -1,4 +1,4 @@
"""Utils to list .po"""
"""Utils to list .po."""
import os
from typing import List, Optional
@ -6,22 +6,22 @@ from typing import List, Optional
from i18n_helper.catalog import Catalog
def getCatalogs(inputFilePath, filters: Optional[List[str]] = None) -> List[Catalog]:
"""Returns a list of "real" catalogs (.po) in the given folder."""
existingTranslationCatalogs = []
l10nFolderPath = os.path.dirname(inputFilePath)
inputFileName = os.path.basename(inputFilePath)
def get_catalogs(input_file_path, filters: Optional[List[str]] = None) -> List[Catalog]:
"""Return a list of "real" catalogs (.po) in the given folder."""
existing_translation_catalogs = []
l10n_folder_path = os.path.dirname(input_file_path)
input_file_name = os.path.basename(input_file_path)
for filename in os.listdir(str(l10nFolderPath)):
for filename in os.listdir(str(l10n_folder_path)):
if filename.startswith("long") or not filename.endswith(".po"):
continue
if filename.split(".")[1] != inputFileName.split(".")[0]:
if filename.split(".")[1] != input_file_name.split(".")[0]:
continue
if not filters or filename.split(".")[0] in filters:
existingTranslationCatalogs.append(
Catalog.readFrom(
os.path.join(l10nFolderPath, filename), locale=filename.split(".")[0]
existing_translation_catalogs.append(
Catalog.read_from(
os.path.join(l10n_folder_path, filename), locale=filename.split(".")[0]
)
)
return existingTranslationCatalogs
return existing_translation_catalogs
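A hedged usage sketch (path and filters are illustrative): for an input template `public.pot`, this picks up files such as `fr.public.po` while skipping the generated `long.*` catalogs.

```python
catalogs = get_catalogs("binaries/data/l10n/public.pot", filters=["fr", "de"])
for catalog in catalogs:
    print(catalog.locale, len(catalog))
```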

View File

@ -19,18 +19,20 @@
import os
import subprocess
from i18n_helper import l10nFolderName, projectRootDirectory, transifexClientFolder
from i18n_helper import L10N_FOLDER_NAME, PROJECT_ROOT_DIRECTORY, TRANSIFEX_CLIENT_FOLDER
def main():
for root, folders, _ in os.walk(projectRootDirectory):
for root, folders, _ in os.walk(PROJECT_ROOT_DIRECTORY):
for folder in folders:
if folder == l10nFolderName:
if os.path.exists(os.path.join(root, folder, transifexClientFolder)):
path = os.path.join(root, folder)
os.chdir(path)
print(f"INFO: Starting to pull translations in {path}...")
subprocess.run(["tx", "pull", "-a", "-f"], check=False)
if folder != L10N_FOLDER_NAME:
continue
if os.path.exists(os.path.join(root, folder, TRANSIFEX_CLIENT_FOLDER)):
path = os.path.join(root, folder)
os.chdir(path)
print(f"INFO: Starting to pull translations in {path}...")
subprocess.run(["tx", "pull", "-a", "-f"], check=False)
if __name__ == "__main__":

View File

@ -1,7 +1,7 @@
import io
import pytest
from checkDiff import check_diff
from check_diff import check_diff
PATCHES = [

View File

@ -1,134 +0,0 @@
#!/usr/bin/env python3
#
# Copyright (C) 2022 Wildfire Games.
# This file is part of 0 A.D.
#
# 0 A.D. is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 2 of the License, or
# (at your option) any later version.
#
# 0 A.D. is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with 0 A.D. If not, see <http://www.gnu.org/licenses/>.
import json
import multiprocessing
import os
from importlib import import_module
from i18n_helper import l10nFolderName, projectRootDirectory
from i18n_helper.catalog import Catalog
messagesFilename = "messages.json"
def warnAboutUntouchedMods():
"""
Warn about mods that are not properly configured to get their messages extracted.
"""
modsRootFolder = os.path.join(projectRootDirectory, "binaries", "data", "mods")
untouchedMods = {}
for modFolder in os.listdir(modsRootFolder):
if modFolder[0] != "_" and modFolder[0] != ".":
if not os.path.exists(os.path.join(modsRootFolder, modFolder, l10nFolderName)):
untouchedMods[modFolder] = (
f"There is no '{l10nFolderName}' folder in the root folder of this mod."
)
elif not os.path.exists(
os.path.join(modsRootFolder, modFolder, l10nFolderName, messagesFilename)
):
untouchedMods[modFolder] = (
f"There is no '{messagesFilename}' file within the '{l10nFolderName}' folder "
f"in the root folder of this mod."
)
if untouchedMods:
print("" "Warning: No messages were extracted from the following mods:" "")
for mod in untouchedMods:
print(f"• {mod}: {untouchedMods[mod]}")
print(
""
f"For this script to extract messages from a mod folder, this mod folder must contain "
f"a '{l10nFolderName}' folder, and this folder must contain a '{messagesFilename}' "
f"file that describes how to extract messages for the mod. See the folder of the main "
f"mod ('public') for an example, and see the documentation for more information."
)
def generatePOT(templateSettings, rootPath):
if "skip" in templateSettings and templateSettings["skip"] == "yes":
return
inputRootPath = rootPath
if "inputRoot" in templateSettings:
inputRootPath = os.path.join(rootPath, templateSettings["inputRoot"])
template = Catalog(
project=templateSettings["project"],
copyright_holder=templateSettings["copyrightHolder"],
locale="en",
)
for rule in templateSettings["rules"]:
if "skip" in rule and rule["skip"] == "yes":
return
options = rule.get("options", {})
extractorClass = getattr(import_module("extractors.extractors"), rule["extractor"])
extractor = extractorClass(inputRootPath, rule["filemasks"], options)
formatFlag = None
if "format" in options:
formatFlag = options["format"]
for message, plural, context, location, comments in extractor.run():
message_id = (message, plural) if plural else message
saved_message = template.get(message_id, context) or template.add(
id=message_id,
context=context,
auto_comments=comments,
flags=[formatFlag] if formatFlag and message.find("%") != -1 else [],
)
saved_message.locations.append(location)
saved_message.flags.discard("python-format")
template.writeTo(os.path.join(rootPath, templateSettings["output"]))
print('Generated "{}" with {} messages.'.format(templateSettings["output"], len(template)))
def generateTemplatesForMessagesFile(messagesFilePath):
with open(messagesFilePath) as fileObject:
settings = json.load(fileObject)
for templateSettings in settings:
multiprocessing.Process(
target=generatePOT, args=(templateSettings, os.path.dirname(messagesFilePath))
).start()
def main():
import argparse
parser = argparse.ArgumentParser()
parser.add_argument(
"--scandir",
help="Directory to start scanning for l10n folders in. "
"Type '.' for current working directory",
)
args = parser.parse_args()
for root, folders, _filenames in os.walk(args.scandir or projectRootDirectory):
for folder in folders:
if folder == l10nFolderName:
messagesFilePath = os.path.join(root, folder, messagesFilename)
if os.path.exists(messagesFilePath):
generateTemplatesForMessagesFile(messagesFilePath)
warnAboutUntouchedMods()
if __name__ == "__main__":
main()

View File

@ -0,0 +1,134 @@
#!/usr/bin/env python3
#
# Copyright (C) 2022 Wildfire Games.
# This file is part of 0 A.D.
#
# 0 A.D. is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 2 of the License, or
# (at your option) any later version.
#
# 0 A.D. is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with 0 A.D. If not, see <http://www.gnu.org/licenses/>.
import json
import multiprocessing
import os
from importlib import import_module
from i18n_helper import L10N_FOLDER_NAME, PROJECT_ROOT_DIRECTORY
from i18n_helper.catalog import Catalog
messages_filename = "messages.json"
def warn_about_untouched_mods():
"""Warn about mods that are not properly configured to get their messages extracted."""
mods_root_folder = os.path.join(PROJECT_ROOT_DIRECTORY, "binaries", "data", "mods")
untouched_mods = {}
for mod_folder in os.listdir(mods_root_folder):
if mod_folder[0] != "_" and mod_folder[0] != ".":
if not os.path.exists(os.path.join(mods_root_folder, mod_folder, L10N_FOLDER_NAME)):
untouched_mods[mod_folder] = (
f"There is no '{L10N_FOLDER_NAME}' folder in the root folder of this mod."
)
elif not os.path.exists(
os.path.join(mods_root_folder, mod_folder, L10N_FOLDER_NAME, messages_filename)
):
untouched_mods[mod_folder] = (
f"There is no '{messages_filename}' file within the '{L10N_FOLDER_NAME}' "
f"folder in the root folder of this mod."
)
if untouched_mods:
print("Warning: No messages were extracted from the following mods:")
for mod in untouched_mods:
print(f"• {mod}: {untouched_mods[mod]}")
print(
""
f"For this script to extract messages from a mod folder, this mod folder must contain "
f"a '{L10N_FOLDER_NAME}' folder, and this folder must contain a '{messages_filename}' "
f"file that describes how to extract messages for the mod. See the folder of the main "
f"mod ('public') for an example, and see the documentation for more information."
)
def generate_pot(template_settings, root_path):
if "skip" in template_settings and template_settings["skip"] == "yes":
return
input_root_path = root_path
if "inputRoot" in template_settings:
input_root_path = os.path.join(root_path, template_settings["inputRoot"])
template = Catalog(
project=template_settings["project"],
copyright_holder=template_settings["copyrightHolder"],
locale="en",
)
for rule in template_settings["rules"]:
if "skip" in rule and rule["skip"] == "yes":
return
options = rule.get("options", {})
extractor_class = getattr(
import_module("extractors.extractors"), f'{rule["extractor"].title()}Extractor'
)
extractor = extractor_class(input_root_path, rule["filemasks"], options)
format_flag = None
if "format" in options:
format_flag = options["format"]
for message, plural, context, location, comments in extractor.run():
message_id = (message, plural) if plural else message
saved_message = template.get(message_id, context) or template.add(
id=message_id,
context=context,
auto_comments=comments,
flags=[format_flag] if format_flag and message.find("%") != -1 else [],
)
saved_message.locations.append(location)
saved_message.flags.discard("python-format")
template.write_to(os.path.join(root_path, template_settings["output"]))
print('Generated "{}" with {} messages.'.format(template_settings["output"], len(template)))
def generate_templates_for_messages_file(messages_file_path):
with open(messages_file_path, encoding="utf-8") as file_object:
settings = json.load(file_object)
for template_settings in settings:
multiprocessing.Process(
target=generate_pot, args=(template_settings, os.path.dirname(messages_file_path))
).start()
def main():
import argparse
parser = argparse.ArgumentParser()
parser.add_argument(
"--scandir",
help="Directory to start scanning for l10n folders in. "
"Type '.' for current working directory",
)
args = parser.parse_args()
for root, folders, _filenames in os.walk(args.scandir or PROJECT_ROOT_DIRECTORY):
for folder in folders:
if folder == L10N_FOLDER_NAME:
messages_file_path = os.path.join(root, folder, messages_filename)
if os.path.exists(messages_file_path):
generate_templates_for_messages_file(messages_file_path)
warn_about_untouched_mods()
if __name__ == "__main__":
main()
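The `extractor_class` lookup above derives the class name from a rule's `extractor` value via `str.title()`; a quick check of that mapping (and the reason the plain-text extractor must be named `TxtExtractor`):

```python
for rule_name in ("javascript", "cpp", "txt", "json", "xml", "ini"):
    print(f"{rule_name.title()}Extractor")
# JavascriptExtractor, CppExtractor, TxtExtractor, JsonExtractor, XmlExtractor, IniExtractor
```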

View File

@ -126,7 +126,10 @@ for xmlFile in args.files:
os.rename(pmpFile + "~", pmpFile)
if os.path.isfile(xmlFile):
with open(xmlFile) as f1, open(xmlFile + "~", "w") as f2:
with (
open(xmlFile, encoding="utf-8") as f1,
open(xmlFile + "~", "w", encoding="utf-8") as f2,
):
data = f1.read()
# bump version number (rely on how Atlas formats the XML)

source/tools/replayprofile/extract.pl Normal file → Executable file
View File

View File

@ -1,21 +1,29 @@
# 0 AD Python Client
This directory contains `zero_ad`, a python client for 0 AD which enables users to control the environment headlessly.
This directory contains `zero_ad`, a Python client for 0 AD that enables users to control the
environment headlessly.
## Installation
`zero_ad` can be installed with `pip` by running the following from the current directory:
```
pip install .
```
Development dependencies can be installed with `pip install -r requirements-dev.txt`. Tests are using pytest and can be run with `python -m pytest`.
Development dependencies can be installed with `pip install -r requirements-dev.txt`. Tests use
pytest and can be run with `python -m pytest`.
## Basic Usage
If there is no running instance of 0 AD, first start 0 AD with the RL interface enabled:
```
pyrogenesis --rl-interface=127.0.0.1:6000
```
Next, the python client can be connected with:
```
import zero_ad
from zero_ad import ZeroAD
@ -32,7 +40,10 @@ with open('./samples/arcadia.json', 'r') as f:
state = game.reset(arcadia_config)
```
where `./samples/arcadia.json` is the path to a game configuration JSON (included in the first line of the commands.txt file in a game replay directory) and `state` contains the initial game state for the given map. The game engine can be stepped (optionally applying actions at each step) with:
where `./samples/arcadia.json` is the path to a game configuration JSON (included in the first
line of the commands.txt file in a game replay directory) and `state` contains the initial game
state for the given map. The game engine can be stepped (optionally applying actions at each step)
with:
```
state = game.step()

View File

@ -33,7 +33,7 @@ game = zero_ad.ZeroAD("http://localhost:6000")
# Load the Arcadia map
samples_dir = path.dirname(path.realpath(__file__))
scenario_config_path = path.join(samples_dir, "arcadia.json")
with open(scenario_config_path, encoding="utf8") as f:
with open(scenario_config_path, encoding="utf-8") as f:
arcadia_config = f.read()
state = game.reset(arcadia_config)
@ -42,14 +42,14 @@ state = game.reset(arcadia_config)
state = game.step()
# Units can be queried from the game state
citizen_soldiers = state.units(owner=1, type="infantry")
citizen_soldiers = state.units(owner=1, entity_type="infantry")
# (including gaia units like trees or other resources)
nearby_tree = closest(state.units(owner=0, type="tree"), center(citizen_soldiers))
nearby_tree = closest(state.units(owner=0, entity_type="tree"), center(citizen_soldiers))
# Action commands can be created using zero_ad.actions
collect_wood = zero_ad.actions.gather(citizen_soldiers, nearby_tree)
female_citizens = state.units(owner=1, type="female_citizen")
female_citizens = state.units(owner=1, entity_type="female_citizen")
house_tpl = "structures/spart/house"
x = 680
z = 640
@ -69,7 +69,7 @@ print("female citizen's max health is", female_citizen.max_health())
print(female_citizen.data)
# Units can be built using the "train action"
civic_center = state.units(owner=1, type="civil_centre")[0]
civic_center = state.units(owner=1, entity_type="civil_centre")[0]
spearman_type = "units/spart/infantry_spearman_b"
train_spearmen = zero_ad.actions.train([civic_center], spearman_type)
@ -96,11 +96,11 @@ for _ in range(150):
# Let's attack with our entire military
state = game.step([zero_ad.actions.chat("An attack is coming!")])
while len(state.units(owner=2, type="unit")) > 0:
while len(state.units(owner=2, entity_type="unit")) > 0:
attack_units = [
unit for unit in state.units(owner=1, type="unit") if "female" not in unit.type()
unit for unit in state.units(owner=1, entity_type="unit") if "female" not in unit.type()
]
target = closest(state.units(owner=2, type="unit"), center(attack_units))
target = closest(state.units(owner=2, entity_type="unit"), center(attack_units))
state = game.step([zero_ad.actions.attack(attack_units, target)])
while state.unit(target.id()):

View File

@ -6,7 +6,7 @@ import zero_ad
game = zero_ad.ZeroAD("http://localhost:6000")
scriptdir = path.dirname(path.realpath(__file__))
with open(path.join(scriptdir, "..", "samples", "arcadia.json"), encoding="utf8") as f:
with open(path.join(scriptdir, "..", "samples", "arcadia.json"), encoding="utf-8") as f:
config = f.read()
@ -33,23 +33,23 @@ def closest(units, position):
def test_construct():
state = game.reset(config)
female_citizens = state.units(owner=1, type="female_citizen")
female_citizens = state.units(owner=1, entity_type="female_citizen")
house_tpl = "structures/spart/house"
house_count = len(state.units(owner=1, type=house_tpl))
house_count = len(state.units(owner=1, entity_type=house_tpl))
x = 680
z = 640
build_house = zero_ad.actions.construct(female_citizens, house_tpl, x, z, autocontinue=True)
# Check that they start building the house
state = game.step([build_house])
while len(state.units(owner=1, type=house_tpl)) == house_count:
while len(state.units(owner=1, entity_type=house_tpl)) == house_count:
state = game.step()
def test_gather():
state = game.reset(config)
female_citizen = state.units(owner=1, type="female_citizen")[0]
state.units(owner=0, type="tree")
nearby_tree = closest(state.units(owner=0, type="tree"), female_citizen.position())
female_citizen = state.units(owner=1, entity_type="female_citizen")[0]
state.units(owner=0, entity_type="tree")
nearby_tree = closest(state.units(owner=0, entity_type="tree"), female_citizen.position())
collect_wood = zero_ad.actions.gather([female_citizen], nearby_tree)
state = game.step([collect_wood])
@ -59,19 +59,19 @@ def test_gather():
def test_train():
state = game.reset(config)
civic_centers = state.units(owner=1, type="civil_centre")
civic_centers = state.units(owner=1, entity_type="civil_centre")
spearman_type = "units/spart/infantry_spearman_b"
spearman_count = len(state.units(owner=1, type=spearman_type))
spearman_count = len(state.units(owner=1, entity_type=spearman_type))
train_spearmen = zero_ad.actions.train(civic_centers, spearman_type)
state = game.step([train_spearmen])
while len(state.units(owner=1, type=spearman_type)) == spearman_count:
while len(state.units(owner=1, entity_type=spearman_type)) == spearman_count:
state = game.step()
def test_walk():
state = game.reset(config)
female_citizens = state.units(owner=1, type="female_citizen")
female_citizens = state.units(owner=1, entity_type="female_citizen")
x = 680
z = 640
initial_distance = dist(center(female_citizens), [x, z])
@ -81,14 +81,14 @@ def test_walk():
distance = initial_distance
while distance >= initial_distance:
state = game.step()
female_citizens = state.units(owner=1, type="female_citizen")
female_citizens = state.units(owner=1, entity_type="female_citizen")
distance = dist(center(female_citizens), [x, z])
def test_attack():
state = game.reset(config)
unit = state.units(owner=1, type="cavalry")[0]
target = state.units(owner=2, type="female_citizen")[0]
unit = state.units(owner=1, entity_type="cavalry")[0]
target = state.units(owner=2, entity_type="female_citizen")[0]
initial_health_target = target.health()
initial_health_unit = unit.health()

View File

@ -5,10 +5,10 @@ import zero_ad
game = zero_ad.ZeroAD("http://localhost:6000")
scriptdir = path.dirname(path.realpath(__file__))
with open(path.join(scriptdir, "..", "samples", "arcadia.json")) as f:
with open(path.join(scriptdir, "..", "samples", "arcadia.json"), encoding="utf-8") as f:
config = f.read()
with open(path.join(scriptdir, "fastactions.js")) as f:
with open(path.join(scriptdir, "fastactions.js"), encoding="utf-8") as f:
fastactions = f.read()

View File

@ -58,18 +58,20 @@ class GameState:
self.game = game
self.mapSize = self.data["mapSize"]
def units(self, owner=None, type=None):
def units(self, owner=None, entity_type=None):
def filter_fn(e):
return (owner is None or e["owner"] == owner) and (
type is None or type in e["template"]
entity_type is None or entity_type in e["template"]
)
return [Entity(e, self.game) for e in self.data["entities"].values() if filter_fn(e)]
def unit(self, id):
id = str(id)
def unit(self, entity_id):
entity_id = str(entity_id)
return (
Entity(self.data["entities"][id], self.game) if id in self.data["entities"] else None
Entity(self.data["entities"][entity_id], self.game)
if entity_id in self.data["entities"]
else None
)
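A hedged sketch of the renamed parameters in use (assumes a connected `game`, as in the README above):

```python
state = game.step()
trees = state.units(owner=0, entity_type="tree")  # gaia entities
infantry = state.units(owner=1, entity_type="infantry")
if infantry:
    same_unit = state.unit(infantry[0].id())  # ids are stringified internally
```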

View File

@ -1,6 +1,7 @@
## Instructions
# Instructions
Install Python 3 and the Python dependencies
```sh
pip install -r requirements.txt
```
@ -8,7 +9,7 @@ pip install -r requirements.txt
Install glslc and spirv-tools 2023+ (the easiest way is to install the Vulkan SDK)
Run the compile.py script
```sh
python compile.py path-to-folder-with-input-mods mod-output-path rules-path
```

View File

@ -88,9 +88,7 @@ def resolve_if(defines, expression):
return False
def compile_and_reflect(
input_mod_path, output_mod_path, dependencies, stage, path, out_path, defines
):
def compile_and_reflect(input_mod_path, dependencies, stage, path, out_path, defines):
keep_debug = False
input_path = os.path.normpath(path)
output_path = os.path.normpath(out_path)
@ -296,7 +294,7 @@ def compile_and_reflect(
def output_xml_tree(tree, path):
"""We use a simple custom printer to have the same output for all platforms."""
with open(path, "w") as handle:
with open(path, "w", encoding="utf-8") as handle:
handle.write('<?xml version="1.0" encoding="utf-8"?>\n')
handle.write(f"<!-- DO NOT EDIT: GENERATED BY SCRIPT {os.path.basename(__file__)} -->\n")
@ -443,7 +441,6 @@ def build(rules, input_mod_path, output_mod_path, dependencies, program_name):
reflection = compile_and_reflect(
input_mod_path,
output_mod_path,
dependencies,
shader["type"],
input_glsl_path,
@ -558,7 +555,7 @@ def run():
if not os.path.isfile(args.rules_path):
sys.stderr.write(f'Rules "{args.rules_path}" are not found\n')
return
with open(args.rules_path) as handle:
with open(args.rules_path, encoding="utf-8") as handle:
rules = json.load(handle)
if not os.path.isdir(args.input_mod_path):

Some files were not shown because too many files have changed in this diff.