forked from 0ad/0ad

Compare commits

46 Commits

Author SHA1 Message Date
c5eab43423 Disable cutoff distance option when cover whole map is ON
Gray out the "Cutoff distance" slider on the "Graphics (advanced)" tab
of the game settings screen when this distance is ignored because the
"Cover whole map" configuration option is ON.
2024-09-24 20:39:57 +02:00
288d5002f3 Add support for vendored spirv-reflect
This allows build-archives.sh to fall back to vendored spirv-reflect if
it can't be found in PATH.

Also update error messages to suggest additional alternatives.

Signed-off-by: Ralph Sennhauser <ralph.sennhauser@gmail.com>
2024-09-24 18:45:22 +02:00
39f3fa7d5b Package spirv-reflect for building shaders
spirv-reflect is usually not available in Linux distributions, as it
isn't customary to package the whole SDK; only the needed bits are
packaged as separate packages.

Signed-off-by: Ralph Sennhauser <ralph.sennhauser@gmail.com>
2024-09-24 18:45:22 +02:00
cf909a81db Add support for vendored spirv-reflect to compile.py
The build-archives.sh script sets a SPIRV_REFLECT variable and even
respects it if it's set in the environment, but without support for it
in compile.py there isn't much point.

This commit adds support for SPIRV_REFLECT in compile.py and adds a
fallback to the vendored spirv-reflect for when the envvar is unset and
the tool can't be found in PATH.

Signed-off-by: Ralph Sennhauser <ralph.sennhauser@gmail.com>
2024-09-24 18:45:22 +02:00
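The lookup order described above (environment variable, then PATH, then the vendored copy) can be sketched as follows; the function name and paths are hypothetical, not the actual compile.py code:

```python
import os
import shutil


def find_spirv_reflect(vendored_path):
    """Resolve the spirv-reflect executable: envvar, then PATH, then vendored copy."""
    # 1. Respect an explicit override from the environment.
    override = os.environ.get("SPIRV_REFLECT")
    if override:
        return override
    # 2. Fall back to whatever is available on PATH.
    on_path = shutil.which("spirv-reflect")
    if on_path:
        return on_path
    # 3. Finally, use the vendored copy if it exists.
    if os.path.isfile(vendored_path):
        return vendored_path
    raise FileNotFoundError("spirv-reflect not found; set SPIRV_REFLECT or install it")
```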
f0d99f8f54 Add output of build-archives to gitignore
build-archives.sh generates its output in archives; add that directory
to gitignore, along with the rules files for SPIR-V shaders, which
might be fetched during the build.

Signed-off-by: Ralph Sennhauser <ralph.sennhauser@gmail.com>
2024-09-24 18:45:22 +02:00
9770391bc4 Make failure messages visible for build-archives
Due to "set -e" the script terminates when the required tools aren't
found, so the arguably helpful messages later on won't be printed.

Make the initial check for tools non-fatal and let the later check take
care of missing tools.

Signed-off-by: Ralph Sennhauser <ralph.sennhauser@gmail.com>
2024-09-24 18:45:22 +02:00
eb328fc2df Allow building archives without translations
Currently translations have to be fetched first so that
build-archives.sh can filter them without the script failing.

Signed-off-by: Ralph Sennhauser <ralph.sennhauser@gmail.com>
2024-09-24 18:45:22 +02:00
7e91f70e02 Add some more AI names
A total of 16 possible AI names are added to four civilizations.

Athenians: Callias, Chabrias, Conon, Ephialtes, Nicias, Phocion, and
Pisistratus

Britons: Cassivellaunus, Imanuentius, and Mandubracius

Iberians: Litennon

Spartans: Anaxandridas, Cleomenes, Eurybiades, Gylippus, and
Leotychidas

The lists are sorted alphabetically.
2024-09-24 17:30:56 +02:00
9c72741e69
Fix x86_64 cross-compilation on macOS 2024-09-22 16:16:40 +02:00
660dd63792
Delete existing SPIR-V shaders before regeneration
As compile.py only creates shaders for program combinations which don't
already have shaders on disk, remove all SPIR-V shaders prior to
rebuilding to ensure they actually get recreated.
2024-09-22 10:49:46 +02:00
57308bb847
Avoid unnecessary computations
This refactors the script for cleaning the translations to get the
same result by doing less. This is achieved by the following changes:

- Use glob patterns to find the files to clean more efficiently,
  without needing to exclude collected files again.
- Only write files which actually get modified (previously this script
  rewrote all portable object files, whether or not anything changed).
- Stop searching a file for sections to clean up once they have
  already been passed.
2024-09-22 07:59:07 +02:00
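The first two changes above can be sketched as follows; `clean_po_files` and the cleaner callback are illustrative names, not the actual script:

```python
from pathlib import Path


def clean_po_files(root, clean_content):
    """Rewrite only the portable object files whose cleaned content differs."""
    # Glob pattern finds the files directly, with no separate exclusion pass.
    for po_file in Path(root).glob("**/*.po"):
        original = po_file.read_text(encoding="utf-8")
        cleaned = clean_content(original)  # the actual cleanup step is hypothetical here
        if cleaned != original:  # unchanged files are not rewritten
            po_file.write_text(cleaned, encoding="utf-8")
```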
c59030857d
Rebuild SPIR-V shaders when compile script changes 2024-09-22 06:28:58 +02:00
8d70ced693
Add myself as code owner for ruff.toml 2024-09-21 20:54:32 +02:00
0ea6d32fa5
Enable various ruff rules
This commit enables a bunch of unrelated ruff rules, which only require
minimal changes to the code base to enable them.

The rules enabled by this commit are:

- check the use of datetime objects without timezone (DTZ005)
- check the performance of try-except in loops (PERF203)
- check the number of function arguments (PLR0913)
- check for mutable class defaults (RUF012)
- check for the use of insecure hashing algorithms (S324)
- check for raising base exceptions (TRY002)
- check for raising other exceptions where TypeErrors should be raised
  (TRY004)
2024-09-21 20:54:30 +02:00
c0232c6b5f
Specify the Python target version in ruff.toml
This ensures the same Python target version used for `ruff format`
is used for `ruff check` as well. It also allows ruff, even if it's not
run through pre-commit, to use the correct target Python version.
2024-09-21 20:54:24 +02:00
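Based on the pre-commit change in this series (which drops the `--target-version py311` arguments), such a ruff.toml stanza might look like this sketch:

```toml
# Pin the interpreter version once, so `ruff check` and `ruff format`
# agree even when ruff is not run through pre-commit.
target-version = "py311"
```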
265ed76131
Simplify check for identical shaders
Previously, when checking whether two SPIR-V shaders are identical, the
hashes of their file contents were compared and afterwards their
(unhashed) file contents as well. Comparing the file contents isn't
necessary, as the hash function used is a cryptographic one, which
guarantees the hash can serve as a representative of the hashed data.
2024-09-21 20:39:59 +02:00
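A sketch of the simplified comparison, assuming a SHA-1 digest (the hash actually used by the script may differ):

```python
import hashlib


def shaders_identical(path_a, path_b):
    """Compare two shader files by digest only: with a cryptographic hash,
    a digest match is for practical purposes a content match, so a second
    byte-for-byte comparison is redundant."""
    def digest(path):
        h = hashlib.sha1()
        with open(path, "rb") as f:
            # Read in chunks so large shader files don't load fully into memory.
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.digest()

    return digest(path_a) == digest(path_b)
```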
668ae8a20e Increase height of middle panel to prevent icon overflow
The icon/portrait of the middle panel when a single entity was selected
was very subtly (4 px) overflowing its lane and invading that of the
entity's name. This change fixes that by:

 - Raising the top of the middle panel by 4 px to leave room for the
   portrait/icon. This avoids having to shrink it and lose quality.

 - Distributing the 4 px of difference in height in the statistics area
   by lowering 1 px the top bar, 1 px the middle bar (if any), and 3 px
   the bottom bar (if any). The rest of the elements are lowered 4 px,
   and therefore remain in the same place.

 - Increasing the height of the minimap panel by 4 px so that it remains
   aligned with the middle panel, vertically centering the minimap, and
   making the necessary adjustments to the position of its buttons.

Additionally, a couple of minor changes are applied:

 - The separators between the statistics area and the attack/resistance
   and resources area, and between the attack/resistance and resources
   area and the entity name area, which had different heights, are set
   to the same height/thickness.

 - The attack/resistance icon, which was very close to the entity
   icon/portrait, is moved 1 px to the right, and the
   resourceCarryingText (in the same area), which was very far from the
   resourceCarryingIcon, is also moved 3 px to the right.

Fixes #7029
2024-09-18 13:38:05 +02:00
b15eb6909e Remove unnecessary comments in selection_details.js
Related: #7012
2024-09-18 06:46:51 +02:00
798cff1f6f Left-click the portrait to follow the entity
- Left-clicking the portrait of a unit will make the camera follow that
   unit (before: no action). A tooltip informs the player of this
   possibility.

 - Left-clicking the portrait of a structure will make the camera focus on
   that structure (before: no action). A tooltip informs the player of
   this possibility.

 - Double-clicking a hero/treasure icon will make the camera follow that
   hero/treasure (before: just focus on that hero/treasure).

 - Some minor related changes.

Fixes #6879
2024-09-18 06:46:51 +02:00
230c7ca27d
Add EditorConfig options for Python
While the desired options for indent size and style are Python's
defaults, let's make it explicit by specifying it in the EditorConfig.

As part of this, this also removes unnecessary inline formatting options
for Python files.
2024-09-17 11:03:15 +02:00
e56ebb3f46
Enable ruff naming rules 2024-09-13 11:04:07 +02:00
cd8b4266a4
Fix class name in xmlvalidator 2024-09-13 11:04:06 +02:00
8c7cc7373d
Fix variable names in SPIRV compile.py 2024-09-13 11:04:06 +02:00
0d3e3fbc29
Rename simple-example.py 2024-09-13 11:04:05 +02:00
661328ab15
Fix variable naming for map compatibility file 2024-09-13 11:04:05 +02:00
616f2e134b
Fix variable names in checkrefs.py 2024-09-13 11:04:04 +02:00
ea4b580527
Simplify JSON parsing 2024-09-11 17:52:10 +02:00
0e84957979
Simplify XML parsing by iterating only once
This simplifies the XML parsing, by iterating over the DOM tree only
once. Curiously this doesn't result in significant performance gains.

As the keywords are now found in the order they appear in the
document instead of the order they are mentioned in messages.json, the
order of a few strings in the PO templates changes as a result of this
commit.
2024-09-11 17:52:10 +02:00
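The single-pass idea can be sketched with Python's standard library; the names here are illustrative, not the actual extractor code:

```python
import xml.etree.ElementTree as ET


def extract_keywords(xml_text, keywords):
    """Collect matching element texts in one walk over the DOM tree,
    yielding them in document order rather than keyword-list order."""
    wanted = set(keywords)
    found = []
    for elem in ET.fromstring(xml_text).iter():  # single pass over all elements
        if elem.tag in wanted and elem.text:
            found.append((elem.tag, elem.text))
    return found
```

Note that the results follow document order, which is exactly why a few strings in the PO templates move around.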
eeb502c115
Simplify code by making use of early returns 2024-09-11 17:52:10 +02:00
f4c40b740c
Remove unnecessary extractors package
This simplifies the code structure, by removing the extractors package,
which only contained a single module, the extractors module. This module
is now located in the i18n_helper package.
2024-09-11 17:52:09 +02:00
20ab96a0f4
Make some attribute names PEP 8 compatible 2024-09-11 17:52:09 +02:00
ac48b72550
Move imports to the top of the file 2024-09-11 17:52:09 +02:00
4d3be23bac
Remove broken and unused ini-file extractor
The ini-file extractor has been broken since the transition to Python 3
and nobody noticed, because it isn't used nowadays. Therefore, let's
remove it.
2024-09-11 17:52:09 +02:00
f856a7663f
Add a cache for mask patterns
This increases the performance of updating the PO-templates
significantly by adding a cache for the building of mask patterns. In
non-representative tests it increased the performance of updating the
PO-templates by >25%.
2024-09-11 17:52:09 +02:00
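Such a cache can be as simple as memoizing the pattern builder; this is an illustrative sketch under the assumption that masks are glob-like strings, not the actual implementation:

```python
import functools
import re


@functools.lru_cache(maxsize=None)
def build_mask_pattern(mask):
    """Compile a glob-like mask into a regex once; repeated calls with the
    same mask are cache hits and skip the (relatively costly) compilation."""
    return re.compile(re.escape(mask).replace(r"\*", ".*"))
```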
7575dfe3c8
Remove unnecessary use of codecs module 2024-09-11 17:52:08 +02:00
e86fd58524
Simplify and speed up finding of messages.json 2024-09-11 17:51:58 +02:00
04aa01a39b
Speed up fetching of translations from Transifex
This increases the number of workers to use when fetching translations
from Transifex from 5 (the default) to 12. While the Transifex CLI
allows up to 20 workers, using more workers results in frequent request
throttling, which hurts performance more than it improves it.
2024-09-10 07:34:48 +02:00
ccb1d747f0
Use PEP 8 naming conventions for templatesanalyzer 2024-09-10 07:29:33 +02:00
5bea0a0f97 Add the large address aware flag to the nightly build. 2024-09-09 11:46:31 +03:00
26994b156b Split source package downloads
Instead of fetching the whole svn repo containing all current
dependencies into source, fetch them individually into subdirectories of
source.

Adds a wrapper script to each package to place the build output where
it's currently expected.

This allows adding and removing dependencies, as well as changing the
origin of those packages, on a case-by-case basis.

It's recommended to run "git clean -dxf libraries/source" to free up
extra space and remove untracked files no longer covered by changes in
.gitignore.

Signed-off-by: Ralph Sennhauser <ralph.sennhauser@gmail.com>
2024-09-08 22:17:08 +02:00
2b5ecd02a7 build-source-libs.sh: drop Perl dependency
Instead of using Perl to get the absolute path (a workaround for the
differing behaviour of readlink -f implementations), use realpath.
Since its inclusion in coreutils this should be a valid alternative.

Also check for [[:space:]] instead of only \s, as if one of those
characters is an issue the others are as well.

Signed-off-by: Ralph Sennhauser <ralph.sennhauser@gmail.com>
2024-09-08 17:51:16 +02:00
bf82871ca8 build-source-libs.sh: remove --source-libs-dir
During the migration the option apparently looked potentially useful at
some point, but in the end it remained unused.

Furthermore, clean-source-libs.sh doesn't handle the case where
build-source-libs.sh was invoked with --source-libs-dir.

Finally, once individual packages are supported, this option becomes
very complex to support.

For all the above reasons, remove the option.

Signed-off-by: Ralph Sennhauser <ralph.sennhauser@gmail.com>
2024-09-08 17:51:16 +02:00
33134af6c3 Stop using the source-libs repository on Windows
All prebuilt files for Windows libraries are now stored in the
windows-libs SVN repository for the foreseeable future.
2024-09-08 17:51:15 +02:00
966d859050 Add yamllint to pre-commit
Add a yamllint hook to pre-commit to enforce a consistent style, and
remove the check-yaml hook, which only checks whether a document is
parsable and thus becomes redundant.

Signed-off-by: Ralph Sennhauser <ralph.sennhauser@gmail.com>
2024-09-08 13:08:11 +02:00
87f667732c Format yaml files
The schema at https://json.schemastore.org/gitea-issue-forms.json
doesn't accept empty titles, so just remove the key, which has the same
effect as a zero-length string.

Add document start markers where they are missing.

Use a max line length of 100 as discussed with Dunedan.

Signed-off-by: Ralph Sennhauser <ralph.sennhauser@gmail.com>
2024-09-08 13:08:11 +02:00
10e7513bba
Revert changes in check_diff.py in batches
This is to avoid running into errors caused by the limited length of
command line input when reverting lots of files.
2024-09-08 11:39:08 +02:00
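The batching idea can be sketched as follows; `revert_in_batches` is a hypothetical helper, not the actual check_diff.py code, and the batch size is an arbitrary assumption:

```python
import subprocess


def revert_in_batches(files, batch_size=100, run=subprocess.run):
    """Run `git checkout --` over the files in fixed-size batches, so no
    single invocation exceeds the OS command-line length limit."""
    for i in range(0, len(files), batch_size):
        run(["git", "checkout", "--"] + files[i:i + batch_size], check=True)
```

Injecting `run` keeps the helper testable without touching a real repository.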
58 changed files with 977 additions and 855 deletions

View File

@ -5,6 +5,10 @@ charset = utf-8
insert_final_newline = true
trim_trailing_whitespace = true
[*.py]
indent_size = 4
indent_style = space
[*.sh]
indent_style = tab
function_next_line = true

View File

@ -3,6 +3,7 @@
\\.gitea/.* @Stan @Itms
## Linting
\\.pre-commit-config\\.yaml @Dunedan
ruff\\.toml @Dunedan
## == Build & Libraries
(build|libraries)/.* @Itms @Stan

View File

@ -1,19 +1,24 @@
---
name: Defect
about: Report an issue with the game. Errors, crashes, unexpected behaviour should be reported this way.
title: ""
about: >
Report an issue with the game. Errors, crashes, unexpected behaviour should be reported this way.
labels:
- "Type/Defect"
- "Priority/3: Should Have"
body:
- type: markdown
attributes:
value: |
**Please select a Theme label that corresponds best to your issue. You can also adjust the Priority label.**
value: >
**Please select a Theme label that corresponds best to your issue. You can also adjust the
Priority label.**
- type: checkboxes
attributes:
label: Reporting Errors
description: For crashes and errors, you must read the [ReportingErrors](wiki/ReportingErrors) page. In particular, if you are reporting a crash, you must upload crashlog files in the Description field below.
description: >
For crashes and errors, you must read the [ReportingErrors](wiki/ReportingErrors) page. In
particular, if you are reporting a crash, you must upload crashlog files in the Description
field below.
options:
- label: I have read the ReportingErrors wiki page.
required: true
@ -29,6 +34,8 @@ body:
- type: input
attributes:
label: Version
description: Type the version of the game you are running (displayed at the bottom of the main menu, or the Alpha version, or "nightly-build").
description: >
Type the version of the game you are running (displayed at the bottom of the main menu, or
the Alpha version, or "nightly-build").
validations:
required: true

View File

@ -1,14 +1,15 @@
---
name: Enhancement
about: Ask us for an improvement you wish to see in the game.
title: ""
labels:
- "Type/Enhancement"
- "Priority/3: Should Have"
body:
- type: markdown
attributes:
value: |
**Please select a Theme label that corresponds best to your issue. You can also adjust the Priority label.**
value: >
**Please select a Theme label that corresponds best to your issue. You can also adjust the
Priority label.**
- type: textarea
attributes:
@ -18,7 +19,9 @@ body:
- type: markdown
attributes:
value: |
**Important Note:** Gameplay and balance changes require preliminary discussion, and consensus must be reached with the Balancing team.<br>
If this is a gameplay change, please add the *Needs Design Input* label, and open a forum topic for discussing your proposal.<br>
value: >
**Important Note:** Gameplay and balance changes require preliminary discussion, and
consensus must be reached with the Balancing team.<br>
If this is a gameplay change, please add the *Needs Design Input* label, and open a forum
topic for discussing your proposal.<br>
You should link that forum topic in the ticket Description above.

View File

@ -13,7 +13,9 @@ jobs:
run: pip3 install lxml
- name: Workaround for authentication problem with LFS
# https://gitea.com/gitea/act_runner/issues/164
run: git config --local http.${{ gitea.server_url }}/${{ gitea.repository }}.git/info/lfs/objects/.extraheader ''
run: >
git config --local
http.${{ gitea.server_url }}/${{ gitea.repository }}.git/info/lfs/objects/.extraheader ''
- name: Download necessary LFS assets
run: git lfs pull -I binaries/data/mods/public/maps
- name: Check for missing references

.gitignore vendored
View File

@ -14,9 +14,20 @@ build/workspaces/vs2017
# Libraries
libraries/macos
libraries/source
libraries/win32
libraries/source/cxxtest-4.4/*
libraries/source/fcollada/*
libraries/source/nvtt/*
libraries/source/spidermonkey/*
libraries/source/spirv-reflect/*
!libraries/source/**/build.sh
!libraries/source/**/patches/
# Tools
archives/
source/tools/spirv/rules.*.json
# premake5 build files, adapted from upstream
build/premake/premake5/**/.gitignore
build/premake/premake5/**/.travis.yml

View File

@ -20,7 +20,6 @@ repos:
^binaries/data/mods/_test.sim/simulation/templates.illformed.xml|
^binaries/data/mods/public/maps/.*\.xml
)
- id: check-yaml
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.6.1
hooks:
@ -31,8 +30,6 @@ repos:
- id: ruff-format
args:
- --check
- --target-version
- py311
exclude: ^source/tools/webservices/
- repo: local
hooks:
@ -57,8 +54,8 @@ repos:
- repo: https://github.com/shellcheck-py/shellcheck-py
rev: v0.10.0.1
hooks:
- id: shellcheck
exclude: ^build/premake/premake5/
- id: shellcheck
exclude: ^build/premake/premake5/
- repo: https://github.com/igorshubovych/markdownlint-cli
rev: v0.41.0
hooks:
@ -69,3 +66,10 @@ repos:
^build/premake/|
^source/third_party/
)
- repo: https://github.com/adrienverge/yamllint
rev: v1.35.1
hooks:
- id: yamllint
args:
- --strict
exclude: ^build/premake/premake5/

.yamllint.yaml Normal file
View File

@ -0,0 +1,8 @@
---
# https://yamllint.readthedocs.io/en/stable/index.html
extends: default
rules:
line-length:
max: 100
truthy:
check-keys: false

View File

@ -1219,16 +1219,28 @@ function getResourceDropsiteTooltip(template)
});
}
function showTemplateViewerOnRightClickTooltip()
function getFocusOnLeftClickTooltip()
{
// Translation: Appears in a tooltip to indicate that right-clicking the corresponding GUI element will open the Template Details GUI page.
return translate("Right-click to view more information.");
// Translation: Appears in a tooltip to indicate that left-clicking the corresponding GUI element will center the view on the selected entity.
return translate("Left-click to focus.");
}
function showTemplateViewerOnClickTooltip()
function getFollowOnLeftClickTooltip()
{
// Translation: Appears in a tooltip to indicate that left-clicking the corresponding GUI element will make the camera follow the selected unit.
return translate("Left-click to follow.");
}
function getTemplateViewerOnRightClickTooltip()
{
// Translation: Appears in a tooltip to indicate that right-clicking the corresponding GUI element will open the Template Details GUI page.
return translate("Right-click for more information.");
}
function getTemplateViewerOnClickTooltip()
{
// Translation: Appears in a tooltip to indicate that clicking the corresponding GUI element will open the Template Details GUI page.
return translate("Click to view more information.");
return translate("Click for more information.");
}
/**

View File

@ -344,15 +344,6 @@
"dependencies": ["shadows"],
"config": "shadowpcf"
},
{
"type": "slider",
"label": "Cutoff distance",
"tooltip": "Hides shadows beyond a certain distance from a camera.",
"dependencies": ["shadows"],
"config": "shadowscutoffdistance",
"min": 100,
"max": 1500
},
{
"type": "boolean",
"label": "Cover whole map",
@ -360,6 +351,15 @@
"dependencies": ["shadows"],
"config": "shadowscovermap"
},
{
"type": "slider",
"label": "Cutoff distance",
"tooltip": "Hides shadows beyond a certain distance from a camera.",
"dependencies": ["shadows", { "config": "shadowscovermap", "value": "false" }],
"config": "shadowscutoffdistance",
"min": 100,
"max": 1500
},
{
"type": "boolean",
"label": "Water effects",

View File

@ -45,7 +45,7 @@ class EntityBox
static compileTooltip(template)
{
return ReferencePage.buildText(template, this.prototype.TooltipFunctions) + "\n" + showTemplateViewerOnClickTooltip();
return ReferencePage.buildText(template, this.prototype.TooltipFunctions) + "\n" + getTemplateViewerOnClickTooltip();
}
/**

View File

@ -16,7 +16,7 @@
<!-- Background circle -->
<object
type="image"
size="4 4 100%-4 100%-4"
size="4 6 100%-4 100%-6"
sprite="stretched:session/minimap_circle_modern.png"
ghost="true"
/>
@ -24,7 +24,7 @@
<!-- Idle Worker Button -->
<object name="idleWorkerButton"
type="button"
size="100%-119 100%-120 100%-4 100%-5"
size="100%-119 100%-121 100%-4 100%-6"
tooltip_style="sessionToolTip"
hotkey="selection.idleworker"
sprite="stretched:session/minimap-idle.png"
@ -45,7 +45,7 @@
<!-- Diplomacy Colors Button -->
<object name="diplomacyColorsButton"
type="button"
size="4 100%-120 119 100%-5"
size="3 100%-121 118 100%-6"
tooltip_style="sessionToolTip"
hotkey="session.diplomacycolors"
/>
@ -54,7 +54,7 @@
<object
name="flareButton"
type="button"
size="3 3 118 118"
size="2 4 117 119"
tooltip_style="sessionToolTip"
hotkey="session.flareactivate"
sprite="stretched:session/minimap-flare.png"
@ -66,7 +66,7 @@
<!-- MiniMap -->
<object
name="minimap"
size="8 8 100%-8 100%-8"
size="8 10 100%-8 100%-10"
type="minimap"
mask="true"
flare_texture_count="16"

View File

@ -479,8 +479,8 @@ EntitySelection.prototype.selectAndMoveTo = function(entityID)
this.reset();
this.addList([entityID]);
Engine.CameraMoveTo(entState.position.x, entState.position.z);
}
setCameraFollow(entityID);
};
/**
* Adds the formation members of a selected entities to the selection.

View File

@ -324,9 +324,17 @@ function displaySingle(entState)
// TODO: we should require all entities to have icons
Engine.GetGUIObjectByName("icon").sprite = template.icon ? ("stretched:session/portraits/" + template.icon) : "BackgroundBlack";
if (template.icon)
Engine.GetGUIObjectByName("iconBorder").onPressRight = () => {
{
const iconBorder = Engine.GetGUIObjectByName("iconBorder");
iconBorder.onPress = () => {
setCameraFollow(entState.id);
};
iconBorder.onPressRight = () => {
showTemplateDetails(entState.template, playerState.civ);
};
}
let detailedTooltip = [
getAttackTooltip,
@ -357,10 +365,12 @@ function displaySingle(entState)
getVisibleEntityClassesFormatted,
getAurasTooltip,
getEntityTooltip,
getTreasureTooltip,
showTemplateViewerOnRightClickTooltip
getTreasureTooltip
].map(func => func(template)));
const leftClickTooltip = hasClass(entState, "Unit") ? getFollowOnLeftClickTooltip() : getFocusOnLeftClickTooltip();
iconTooltips.push(leftClickTooltip + " " + getTemplateViewerOnRightClickTooltip());
Engine.GetGUIObjectByName("iconBorder").tooltip = iconTooltips.filter(tip => tip).join("\n");
Engine.GetGUIObjectByName("detailsAreaSingle").hidden = false;

View File

@ -207,7 +207,7 @@ g_SelectionPanels.Construction = {
getGarrisonTooltip(template),
getTurretsTooltip(template),
getPopulationBonusTooltip(template),
showTemplateViewerOnRightClickTooltip(template)
getTemplateViewerOnRightClickTooltip(template)
);
@ -578,7 +578,7 @@ g_SelectionPanels.Queue = {
"neededSlots": queuedItem.neededSlots
}));
}
tooltips.push(showTemplateViewerOnRightClickTooltip(template));
tooltips.push(getTemplateViewerOnRightClickTooltip(template));
data.button.tooltip = tooltips.join("\n");
data.countDisplay.caption = queuedItem.count > 1 ? queuedItem.count : "";
@ -763,7 +763,7 @@ g_SelectionPanels.Research = {
getEntityNamesFormatted,
getEntityTooltip,
getEntityCostTooltip,
showTemplateViewerOnRightClickTooltip
getTemplateViewerOnRightClickTooltip
].map(func => func(template));
if (!requirementsPassed)
@ -1058,7 +1058,7 @@ g_SelectionPanels.Training = {
getResourceDropsiteTooltip
].map(func => func(template)));
tooltips.push(showTemplateViewerOnRightClickTooltip());
tooltips.push(getTemplateViewerOnRightClickTooltip());
tooltips.push(
formatBatchTrainingString(buildingsCountToTrainFullBatch, fullBatchSize, remainderBatch),
getRequirementsTooltip(requirementsMet, template.requirements, GetSimState().players[data.player].civ),
@ -1180,7 +1180,8 @@ g_SelectionPanels.Upgrade = {
formatMatchLimitString(limits.matchLimit, limits.matchCount, limits.type),
getRequirementsTooltip(requirementsMet, data.item.requirements, GetSimState().players[data.player].civ),
getNeededResourcesTooltip(neededResources),
showTemplateViewerOnRightClickTooltip());
getTemplateViewerOnRightClickTooltip()
);
tooltip = tooltips.filter(tip => tip).join("\n");

View File

@ -386,14 +386,16 @@ function cancelUpgradeEntity()
}
/**
* Set the camera to follow the given entity if it's a unit.
* Otherwise stop following.
* Focus the camera on the entity and follow if it's a unit.
* If that's not possible, stop any current follow.
*/
function setCameraFollow(entity)
{
let entState = entity && GetEntityState(entity);
const entState = entity && GetEntityState(entity);
if (entState && hasClass(entState, "Unit"))
Engine.CameraFollow(entity);
else if (entState?.position)
Engine.CameraMoveTo(entState.position.x, entState.position.z);
else
Engine.CameraFollow(0);
}

View File

@ -5,7 +5,7 @@
>
<!-- Names and civilization emblem etc. (This must come before the attack and resistance icon to avoid clipping issues.) -->
<object size="0 92 100% 100%" name="statsArea" type="image" sprite="edgedPanelShader">
<object size="0 96 100% 100%" name="statsArea" type="image" sprite="edgedPanelShader">
<!-- Civilization tooltip. -->
<object size="0 38 100% 62" tooltip_style="sessionToolTip">
<!-- Civilization emblem. -->
@ -26,14 +26,14 @@
</object>
<!-- Stats Bars -->
<object size= "0 0 100% 96" type="image" tooltip_style="sessionToolTip">
<object size= "0 0 100% 100" type="image" tooltip_style="sessionToolTip">
<object size="0 0 100% 60" type="image" sprite="topEdgedPanelShader">
<object size="0 0 100% 65" type="image" sprite="topEdgedPanelShader">
<!-- Placeholders storing the position for the bars. -->
<object size="96 1 100% 24" name="sectionPosTop" hidden="true"/>
<object size="96 11 100% 34" name="sectionPosMiddle" hidden="true"/>
<object size="96 32 100% 55" name="sectionPosBottom" hidden="true"/>
<object size="96 2 100% 24" name="sectionPosTop" hidden="true"/>
<object size="96 12 100% 34" name="sectionPosMiddle" hidden="true"/>
<object size="96 35 100% 55" name="sectionPosBottom" hidden="true"/>
<!-- Capture bar -->
<object name="captureSection">
@ -78,15 +78,15 @@
</object>
</object>
<object size="0 59 100% 95" type="image" sprite="edgedPanelShader">
<object size="0 63 100% 98" type="image" sprite="edgedPanelShader">
<!-- Attack and Resistance -->
<object size="96 0 128 32" name="attackAndResistanceStats" type="image" sprite="stretched:session/icons/stances/defensive.png" tooltip_style="sessionToolTipInstantly">
<object size="97 0 129 32" name="attackAndResistanceStats" type="image" sprite="stretched:session/icons/stances/defensive.png" tooltip_style="sessionToolTipInstantly">
<translatableAttribute id="tooltip">Attack and Resistance</translatableAttribute>
</object>
<!-- Resource carrying icon/counter -->
<!-- Used also for number of gatherers/builders -->
<object size="100%-96 0 100%-36 32" type="text" name="resourceCarryingText" style="CarryingTextRight"/>
<object size="100%-96 0 100%-33 32" type="text" name="resourceCarryingText" style="CarryingTextRight"/>
<object size="100%-32 0 100% 32" type="image" name="resourceCarryingIcon" tooltip_style="sessionToolTip"/>
</object>

View File

@ -79,7 +79,7 @@
<!-- Limit to the minimal supported width of 1024 pixels. -->
<object size="50%-512 0 50%+512 100%">
<object size="50%-512 100%-200 50%-312 100%">
<object size="50%-512 100%-204 50%-312 100%">
<include file="gui/session/minimap/MiniMap.xml"/>
</object>
@ -95,7 +95,7 @@
<!-- Selection Details Panel (middle). -->
<object name="selectionDetails"
size="50%-114 100%-200 50%+114 100%"
size="50%-114 100%-204 50%+114 100%"
sprite="selectionDetailsPanel"
type="image"
>

View File

@ -56,18 +56,25 @@
}
],
"AINames": [
"Cimon",
"Aristides",
"Xenophon",
"Hippias",
"Cleisthenes",
"Thucydides",
"Alcibiades",
"Miltiades",
"Aristides",
"Callias",
"Chabrias",
"Cimon",
"Cleisthenes",
"Cleon",
"Cleophon",
"Conon",
"Demosthenes",
"Ephialtes",
"Hippias",
"Miltiades",
"Nicias",
"Phocion",
"Pisistratus",
"Thrasybulus",
"Demosthenes"
"Thucydides",
"Xenophon"
],
"SkirmishReplacements": {
"skirmish/units/default_infantry_ranged_b": "units/athen/infantry_slinger_b",

View File

@ -76,13 +76,16 @@
}
],
"AINames": [
"Prasutagus",
"Venutius",
"Adminius",
"Cassivellaunus",
"Cogidubnus",
"Commius",
"Comux",
"Adminius",
"Dubnovellaunus",
"Imanuentius",
"Mandubracius",
"Prasutagus",
"Venutius",
"Vosenius"
],
"SkirmishReplacements": {

View File

@ -63,6 +63,7 @@
"Audax",
"Calcus",
"Ditalcus",
"Litennon",
"Minurus",
"Olyndicus",
"Orison",

View File

@ -60,16 +60,21 @@
}
],
"AINames": [
"Dienekes",
"Agis II",
"Archidamus",
"Lysander",
"Pausanias",
"Agesilaus",
"Agesipolis",
"Agis II",
"Anaxandridas",
"Archidamus",
"Cleomenes",
"Dienekes",
"Echestratus",
"Eurycrates",
"Eucleidas",
"Agesipolis"
"Eurybiades",
"Eurycrates",
"Gylippus",
"Leotychidas",
"Lysander",
"Pausanias"
],
"SkirmishReplacements": {
"skirmish/structures/default_house_10": "structures/{civ}/house",

View File

@ -53,7 +53,10 @@ pipeline {
stage("Check for shader changes") {
when {
changeset 'binaries/data/mods/**/shaders/**/*.xml'
anyOf {
changeset 'binaries/data/mods/**/shaders/**/*.xml'
changeset 'source/tools/spirv/compile.py'
}
}
steps {
script { buildSPIRV = true }
@ -65,7 +68,7 @@ pipeline {
bat "cd libraries && get-windows-libs.bat"
bat "(robocopy C:\\wxwidgets3.2.5\\lib libraries\\win32\\wxwidgets\\lib /MIR /NDL /NJH /NJS /NP /NS /NC) ^& IF %ERRORLEVEL% LEQ 1 exit 0"
bat "(robocopy C:\\wxwidgets3.2.5\\include libraries\\win32\\wxwidgets\\include /MIR /NDL /NJH /NJS /NP /NS /NC) ^& IF %ERRORLEVEL% LEQ 1 exit 0"
bat "cd build\\workspaces && update-workspaces.bat --atlas --without-pch --without-tests"
bat "cd build\\workspaces && update-workspaces.bat --atlas --without-pch --large-address-aware --without-tests"
}
}
@ -124,6 +127,8 @@ pipeline {
}
steps {
ws("workspace/nightly-svn") {
bat "del /s /q binaries\\data\\mods\\mod\\shaders\\spirv"
bat "del /s /q binaries\\data\\mods\\public\\shaders\\spirv"
bat "python source/tools/spirv/compile.py -d binaries/data/mods/mod binaries/data/mods/mod source/tools/spirv/rules.json binaries/data/mods/mod"
bat "python source/tools/spirv/compile.py -d binaries/data/mods/mod binaries/data/mods/public source/tools/spirv/rules.json binaries/data/mods/public"
}

View File

@ -55,7 +55,7 @@ pipeline {
stage("Template Analyzer") {
steps {
ws("/zpool0/entity-docs"){
sh "cd source/tools/templatesanalyzer/ && python3 unitTables.py"
sh "cd source/tools/templatesanalyzer/ && python3 unit_tables.py"
}
}
}

View File

@ -14,7 +14,11 @@ else
-- No Unix-specific libs yet (use source directory instead!)
end
-- directory for shared, bundled libraries
libraries_source_dir = rootdir.."/libraries/source/"
if os.istarget("windows") then
libraries_source_dir = rootdir.."/libraries/win32/"
else
libraries_source_dir = rootdir.."/libraries/source/"
end
third_party_source_dir = rootdir.."/source/third_party/"
local function add_default_lib_paths(extern_lib)

View File

@ -55,7 +55,6 @@ MOLTENVK_VERSION="1.2.2"
# * NVTT
# * FCollada
# --------------------------------------------------------------
source_svnrev="28207"
# --------------------------------------------------------------
# Provided by OS X:
# * OpenAL
@ -812,6 +811,7 @@ GNUTLS_DIR="$(pwd)/gnutls"
HOGWEED_LIBS="-L${NETTLE_DIR}/lib -lhogweed" \
GMP_CFLAGS="-I${GMP_DIR}/include" \
GMP_LIBS="-L${GMP_DIR}/lib -lgmp" \
"$HOST_PLATFORM" \
--prefix="$INSTALL_DIR" \
--enable-shared=no \
--without-idn \
@@ -819,8 +819,10 @@ GNUTLS_DIR="$(pwd)/gnutls"
--with-included-libtasn1 \
--without-p11-kit \
--without-brotli \
--without-zstd \
--without-tpm2 \
--disable-libdane \
--disable-tests \
--disable-guile \
--disable-doc \
--disable-tools \
--disable-nls
@@ -1120,55 +1122,27 @@ echo "Building Molten VK..."
# --------------------------------------------------------------------
# The following libraries are shared on different OSes and may
# be customized, so we build and install them from bundled sources
# (served over SVN)
# (served over SVN or other sources)
# --------------------------------------------------------------------
if [ -e ../source/.svn ]; then
(
cd ../source
svn cleanup
svn up -r $source_svnrev
) || die "Failed update of source libs"
else
svn co -r $source_svnrev https://svn.wildfiregames.com/public/source-libs/trunk ../source
fi
# SpiderMonkey - bundled, no download
(
cd ../source/spidermonkey/
if [ $force_rebuild = "true" ]; then
rm -f .already-built
fi
# Use the regular build script for SM.
JOBS="$JOBS" ZLIB_DIR="$ZLIB_DIR" ARCH="$ARCH" ./build.sh || die "Error building spidermonkey"
cp bin/* ../../../binaries/system/
) || die "Failed to build spidermonkey"
export ARCH CXXFLAGS CFLAGS LDFLAGS CMAKE_FLAGS JOBS
# --------------------------------------------------------------
# NVTT - bundled, no download
(
cd ../source/nvtt
echo "Building cxxtest..."
if [ $force_rebuild = "true" ]; then
rm -f .already-built
fi
CXXFLAGS="$CXXFLAGS" CFLAGS="$CFLAGS" LDFLAGS="$LDFLAGS" CMAKE_FLAGS=$CMAKE_FLAGS JOBS="$JOBS" ./build.sh || die "Error building NVTT"
cp bin/* ../../../binaries/system/
) || die "Failed to build nvtt"
./../source/cxxtest-4.4/build.sh || die "cxxtest build failed"
# --------------------------------------------------------------
# FCollada - bundled, no download
(
cd ../source/fcollada/
echo "Building FCollada..."
if [ $force_rebuild = "true" ]; then
rm -f .already-built
fi
./../source/fcollada/build.sh || die "FCollada build failed"
CXXFLAGS="$CXXFLAGS" CFLAGS="$CFLAGS" LDFLAGS="$LDFLAGS" JOBS="$JOBS" ./build.sh || die "Error building FCollada"
cp bin/* ../../../binaries/system/
) || die "Failed to build fcollada"
# --------------------------------------------------------------
echo "Building nvtt..."
./../source/nvtt/build.sh || die "NVTT build failed"
# --------------------------------------------------------------
echo "Building Spidermonkey..."
./../source/spidermonkey/build.sh || die "SpiderMonkey build failed"


@@ -6,10 +6,6 @@ die()
exit 1
}
# SVN revision to checkout for source-libs
# Update this line when you commit an update to source-libs
source_svnrev="28207"
if [ "$(uname -s)" = "Darwin" ]; then
die "This script should not be used on macOS: use build-macos-libs.sh instead."
fi
@@ -18,48 +14,30 @@ cd "$(dirname "$0")" || die
# Now in libraries/ (where we assume this script resides)
# Check for whitespace in absolute path; this will cause problems in the
# SpiderMonkey build and maybe elsewhere, so we just forbid it
# Use perl as an alternative to readlink -f, which isn't available on BSD
SCRIPTPATH=$(perl -MCwd -e 'print Cwd::abs_path shift' "$0")
case "$SCRIPTPATH" in
*\ *)
# SpiderMonkey build and maybe elsewhere, so we just forbid it.
case "$(realpath .)" in
*[[:space:]]*)
die "Absolute path contains whitespace, which will break the build - move the game to a path without spaces"
;;
esac
# Parse command-line options (download options and build options):
source_libs_dir="source"
without_nvtt=false
with_system_nvtt=false
with_system_mozjs=false
with_spirv_reflect=false
JOBS=${JOBS:="-j2"}
for i in "$@"; do
case $i in
--source-libs-dir=*) source_libs_dir=${1#*=} ;;
--source-libs-dir) die "correct syntax is --source-libs-dir=/path/to/dir" ;;
--without-nvtt) without_nvtt=true ;;
--with-system-nvtt) with_system_nvtt=true ;;
--with-system-mozjs) with_system_mozjs=true ;;
--with-spirv-reflect) with_spirv_reflect=true ;;
-j*) JOBS=$i ;;
esac
done
# Download source libs
echo "Downloading source libs..."
echo
if [ -e "${source_libs_dir}"/.svn ]; then
(cd "${source_libs_dir}" && svn cleanup && svn up -r $source_svnrev)
else
svn co -r $source_svnrev https://svn.wildfiregames.com/public/source-libs/trunk "${source_libs_dir}"
fi
# Build/update bundled external libraries
echo "Building third-party dependencies..."
echo
# Some of our makefiles depend on GNU make, so we set some sane defaults if MAKE
# is not set.
case "$(uname -s)" in
@@ -71,48 +49,28 @@ case "$(uname -s)" in
;;
esac
(cd "${source_libs_dir}"/fcollada && MAKE=${MAKE} JOBS=${JOBS} ./build.sh) || die "FCollada build failed"
export MAKE JOBS
# Build/update bundled external libraries
echo "Building third-party dependencies..."
echo
./source/cxxtest-4.4/build.sh || die "cxxtest build failed"
echo
./source/fcollada/build.sh || die "FCollada build failed"
echo
if [ "$with_system_nvtt" = "false" ] && [ "$without_nvtt" = "false" ]; then
(cd "${source_libs_dir}"/nvtt && MAKE=${MAKE} JOBS=${JOBS} ./build.sh) || die "NVTT build failed"
./source/nvtt/build.sh || die "NVTT build failed"
cp source/nvtt/bin/* ../binaries/system/
fi
echo
if [ "$with_system_mozjs" = "false" ]; then
(cd "${source_libs_dir}"/spidermonkey && MAKE=${MAKE} JOBS=${JOBS} ./build.sh) || die "SpiderMonkey build failed"
./source/spidermonkey/build.sh || die "SpiderMonkey build failed"
cp source/spidermonkey/bin/* ../binaries/system/
fi
echo
if [ "$with_spirv_reflect" = "true" ]; then
./source/spirv-reflect/build.sh || die "spirv-reflect build failed"
fi
echo "Copying built files..."
# Copy built binaries to binaries/system/
cp "${source_libs_dir}"/fcollada/bin/* ../binaries/system/
if [ "$with_system_nvtt" = "false" ] && [ "$without_nvtt" = "false" ]; then
cp "${source_libs_dir}"/nvtt/bin/* ../binaries/system/
fi
if [ "$with_system_mozjs" = "false" ]; then
cp "${source_libs_dir}"/spidermonkey/bin/* ../binaries/system/
fi
# If a custom source-libs dir was used, includes and static libs should be copied to libraries/source/
# and all other bundled content should be copied.
if [ "$source_libs_dir" != "source" ]; then
rsync -avzq \
--exclude fcollada \
--exclude nvtt \
--exclude spidermonkey \
"${source_libs_dir}"/ source
mkdir -p source/fcollada
cp -r "${source_libs_dir}"/fcollada/include source/fcollada/
cp -r "${source_libs_dir}"/fcollada/lib source/fcollada/
if [ "$with_system_nvtt" = "false" ] && [ "$without_nvtt" = "false" ]; then
mkdir -p source/nvtt
cp -r "${source_libs_dir}"/nvtt/include source/nvtt/
cp -r "${source_libs_dir}"/nvtt/lib source/nvtt/
fi
if [ "$with_system_mozjs" = "false" ]; then
mkdir -p source/spidermonkey
cp -r "${source_libs_dir}"/spidermonkey/include-unix-debug source/spidermonkey/
cp -r "${source_libs_dir}"/spidermonkey/include-unix-release source/spidermonkey/
cp -r "${source_libs_dir}"/spidermonkey/lib source/spidermonkey/
fi
fi
echo "Done."


@@ -1,24 +1,11 @@
rem **Download sources and binaries of libraries**
rem **SVN revision to checkout for source-libs and windows-libs**
rem **Update this line when you commit an update to source-libs or windows-libs**
set "svnrev=28207"
rem **SVN revision to checkout for windows-libs**
rem **Update this line when you commit an update to windows-libs**
set "svnrev=28209"
if exist source\.svn (
cd source && svn cleanup && svn up -r %svnrev% && cd ..
) else (
svn co -r %svnrev% https://svn.wildfiregames.com/public/source-libs/trunk source
)
if exist win32\.svn (
cd win32 && svn cleanup && svn up -r %svnrev% && cd ..
) else (
svn co -r %svnrev% https://svn.wildfiregames.com/public/windows-libs/trunk win32
)
svn co https://svn.wildfiregames.com/public/windows-libs/trunk@%svnrev% win32
rem **Copy binaries to binaries/system/**
copy source\fcollada\bin\* ..\binaries\system\
copy source\nvtt\bin\* ..\binaries\system\
copy source\spidermonkey\bin\* ..\binaries\system\
for /d %%l in (win32\*) do (if exist %%l\bin copy /y %%l\bin\* ..\binaries\system\)


@@ -0,0 +1,27 @@
#!/bin/sh
set -e
cd "$(dirname "$0")"
LIB_VERSION=28209
if [ -e .already-built ] && [ "$(cat .already-built)" = "${LIB_VERSION}" ]; then
echo "cxxtest-4.4 is already up to date."
exit
fi
# fetch
svn co https://svn.wildfiregames.com/public/source-libs/trunk/cxxtest-4.4@${LIB_VERSION} cxxtest-4.4-svn
# unpack
rm -Rf cxxtest-4.4-build
cp -R cxxtest-4.4-svn cxxtest-4.4-build
# nothing to actually build
# built as part of building tests
# install
rm -Rf bin cxxtest python
cp -R cxxtest-4.4-build/bin cxxtest-4.4-build/cxxtest cxxtest-4.4-build/python .
echo "${LIB_VERSION}" >.already-built


@@ -0,0 +1,30 @@
#!/bin/sh
set -e
cd "$(dirname "$0")"
LIB_VERSION=28209
if [ -e .already-built ] && [ "$(cat .already-built)" = "${LIB_VERSION}" ]; then
echo "FCollada is already up to date."
exit
fi
# fetch
svn co "https://svn.wildfiregames.com/public/source-libs/trunk/fcollada@${LIB_VERSION}" fcollada-svn
# unpack
rm -Rf fcollada-build
cp -R fcollada-svn fcollada-build
# build
(
cd fcollada-build
./build.sh
)
# install
rm -Rf include lib
cp -R fcollada-build/include fcollada-build/lib .
echo "${LIB_VERSION}" >.already-built

libraries/source/nvtt/build.sh Executable file

@@ -0,0 +1,31 @@
#!/bin/sh
set -e
cd "$(dirname "$0")"
LIB_VERSION=28209
if [ -e .already-built ] && [ "$(cat .already-built)" = "${LIB_VERSION}" ]; then
echo "NVTT is already up to date."
exit
fi
# fetch
svn co "https://svn.wildfiregames.com/public/source-libs/trunk/nvtt@${LIB_VERSION}" nvtt-svn
# unpack
rm -Rf nvtt-build
cp -R nvtt-svn nvtt-build
# build
(
cd nvtt-build
mkdir bin lib
./build.sh
)
# install
rm -Rf bin include lib
cp -R nvtt-build/bin nvtt-build/include nvtt-build/lib .
echo "${LIB_VERSION}" >.already-built


@@ -0,0 +1,34 @@
#!/bin/sh
set -e
cd "$(dirname "$0")"
LIB_VERSION=28209
if [ -e .already-built ] && [ "$(cat .already-built)" = "${LIB_VERSION}" ]; then
echo "Spidermonkey is already up to date."
exit
fi
# fetch
svn co "https://svn.wildfiregames.com/public/source-libs/trunk/spidermonkey@${LIB_VERSION}" spidermonkey-svn
# unpack
rm -Rf spidermonkey-build
cp -R spidermonkey-svn spidermonkey-build
# build
(
cd spidermonkey-build
mkdir bin lib
./build.sh
)
# install
rm -Rf bin include-unix-debug include-unix-release lib
cp -R spidermonkey-build/bin spidermonkey-build/include-unix-release spidermonkey-build/lib .
if [ "$(uname -s)" != "FreeBSD" ]; then
cp -R spidermonkey-build/include-unix-debug .
fi
echo "${LIB_VERSION}" >.already-built


@@ -0,0 +1,35 @@
#!/bin/sh
set -e
cd "$(dirname "$0")"
PV=1.3.290.0
LIB_VERSION=${PV}
if [ -e .already-built ] && [ "$(cat .already-built || true)" = "${LIB_VERSION}" ]; then
echo "spirv-reflect is already up to date."
exit
fi
# fetch
if [ ! -e "vulkan-sdk-${PV}.tar.gz" ]; then
curl -fLo "vulkan-sdk-${PV}.tar.gz" \
"https://github.com/KhronosGroup/SPIRV-Reflect/archive/refs/tags/vulkan-sdk-${PV}.tar.gz"
fi
# unpack
rm -Rf "SPIRV-Reflect-vulkan-sdk-${PV}"
tar xf "vulkan-sdk-${PV}.tar.gz"
# configure
cmake -B build -S "SPIRV-Reflect-vulkan-sdk-${PV}" \
-DCMAKE_INSTALL_PREFIX="$(realpath . || true)"
# build
cmake --build build
# install
rm -Rf bin
cmake --install build
echo "${LIB_VERSION}" >.already-built


@@ -1,5 +1,7 @@
line-length = 99
target-version = "py311"
[format]
line-ending = "lf"
@@ -10,36 +12,29 @@ ignore = [
"C90",
"COM812",
"D10",
"DTZ005",
"EM",
"FA",
"FIX",
"FBT",
"ISC001",
"N",
"PERF203",
"N817",
"PERF401",
"PLR0912",
"PLR0913",
"PLR0915",
"PLR2004",
"PLW2901",
"PT",
"PTH",
"RUF012",
"S101",
"S310",
"S314",
"S324",
"S320",
"S603",
"S607",
"T20",
"TD002",
"TD003",
"TRY002",
"TRY003",
"TRY004",
"UP038",
"W505"
]
@@ -52,3 +47,6 @@ max-doc-length = 72
[lint.pydocstyle]
convention = "pep257"
[lint.pylint]
max-args = 8


@@ -23,7 +23,7 @@ LANGS="ast ca cs de el en_GB es eu fi fr gd hu id it nl pl pt_BR ru sk sv tr uk"
REGEX=$(printf "\|%s" ${LANGS} | cut -c 2-)
REGEX=".*/\(${REGEX}\)\.[-A-Za-z0-9_.]\+\.po"
find binaries/ -name "*.po" | grep -v "$REGEX" | xargs rm -v || die "Error filtering languages."
find binaries/ -name "*.po" | grep -v "$REGEX" | xargs rm -fv || die "Error filtering languages."
# Build archive(s) - don't archive the _test.* mods
cd binaries/data/mods || die
@@ -40,13 +40,21 @@ cd - || die
BUILD_SHADERS="${BUILD_SHADERS:=true}"
if [ "${BUILD_SHADERS}" = true ]; then
PYTHON=${PYTHON:=$(command -v python3 || command -v python)}
GLSLC=${GLSLC:=$(command -v glslc)}
SPIRV_REFLECT=${SPIRV_REFLECT:=$(command -v spirv-reflect)}
PYTHON=${PYTHON:=$(command -v python3 || command -v python || true)}
GLSLC=${GLSLC:=$(command -v glslc || true)}
SPIRV_REFLECT=${SPIRV_REFLECT:=$(command -v spirv-reflect || true)}
if [ -e "$(realpath libraries/source/spirv-reflect/bin/spirv-reflect || true)" ]; then
: "${SPIRV_REFLECT:=$(realpath libraries/source/spirv-reflect/bin/spirv-reflect || true)}"
fi
export SPIRV_REFLECT
[ -n "${PYTHON}" ] || die "Error: python is not available. Install it before proceeding."
[ -n "${GLSLC}" ] || die "Error: glslc is not available. Install it with the Vulkan SDK before proceeding."
[ -n "${SPIRV_REFLECT}" ] || die "Error: spirv-reflect is not available. Install it with the Vulkan SDK before proceeding."
[ -n "${GLSLC}" ] ||
die "Error: glslc is not available." \
" Install it with the Vulkan SDK or shaderc package before proceeding."
[ -n "${SPIRV_REFLECT}" ] ||
die "Error: spirv-reflect is not available." \
" Install it with the Vulkan SDK or build vendored spirv-reflect before proceeding."
cd source/tools/spirv || die
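The fallback chain above (explicit env var, then PATH, then the vendored build) is written in POSIX shell, but the lookup order itself is easy to state in isolation. A minimal Python sketch of the same resolution strategy — `find_tool` and its arguments are hypothetical helpers, not part of the repository:

```python
import os
import shutil


def find_tool(name, vendored_path=None):
    """Resolve a tool: an explicit env var wins, then PATH, then a vendored copy."""
    # e.g. "spirv-reflect" -> env var "SPIRV_REFLECT", as build-archives.sh does
    tool = os.environ.get(name.upper().replace("-", "_"))
    if not tool:
        tool = shutil.which(name)
    if not tool and vendored_path and os.path.exists(vendored_path):
        tool = vendored_path
    return tool  # None means: fail with an install hint, as the script does


print(find_tool("glslc"))  # a path if glslc is installed, otherwise None
```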


@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
import os.path
#


@@ -8,7 +8,6 @@ from json import load, loads
from logging import INFO, WARNING, Filter, Formatter, StreamHandler, getLogger
from pathlib import Path
from struct import calcsize, unpack
from typing import Dict, List, Set, Tuple
from xml.etree import ElementTree as ET
from scriptlib import SimulTemplateEntity, find_files
@@ -27,9 +26,9 @@ class SingleLevelFilter(Filter):
class CheckRefs:
def __init__(self):
self.files: List[Path] = []
self.roots: List[Path] = []
self.deps: List[Tuple[Path, Path]] = []
self.files: list[Path] = []
self.roots: list[Path] = []
self.deps: list[tuple[Path, Path]] = []
self.vfs_root = Path(__file__).resolve().parents[3] / "binaries" / "data" / "mods"
self.supportedTextureFormats = ("dds", "png")
self.supportedMeshesFormats = ("pmd", "dae")
@@ -387,8 +386,8 @@ class CheckRefs:
cmp_auras = entity.find("Auras")
if cmp_auras is not None:
auraString = cmp_auras.text
for aura in auraString.split():
aura_string = cmp_auras.text
for aura in aura_string.split():
if not aura:
continue
if aura.startswith("-"):
@@ -397,33 +396,33 @@
cmp_identity = entity.find("Identity")
if cmp_identity is not None:
reqTag = cmp_identity.find("Requirements")
if reqTag is not None:
req_tag = cmp_identity.find("Requirements")
if req_tag is not None:
def parse_requirements(fp, req, recursionDepth=1):
techsTag = req.find("Techs")
if techsTag is not None:
for techTag in techsTag.text.split():
def parse_requirements(fp, req, recursion_depth=1):
techs_tag = req.find("Techs")
if techs_tag is not None:
for tech_tag in techs_tag.text.split():
self.deps.append(
(fp, Path(f"simulation/data/technologies/{techTag}.json"))
(fp, Path(f"simulation/data/technologies/{tech_tag}.json"))
)
if recursionDepth > 0:
recursionDepth -= 1
allReqTag = req.find("All")
if allReqTag is not None:
parse_requirements(fp, allReqTag, recursionDepth)
anyReqTag = req.find("Any")
if anyReqTag is not None:
parse_requirements(fp, anyReqTag, recursionDepth)
if recursion_depth > 0:
recursion_depth -= 1
all_req_tag = req.find("All")
if all_req_tag is not None:
parse_requirements(fp, all_req_tag, recursion_depth)
any_req_tag = req.find("Any")
if any_req_tag is not None:
parse_requirements(fp, any_req_tag, recursion_depth)
parse_requirements(fp, reqTag)
parse_requirements(fp, req_tag)
cmp_researcher = entity.find("Researcher")
if cmp_researcher is not None:
techString = cmp_researcher.find("Technologies")
if techString is not None:
for tech in techString.text.split():
tech_string = cmp_researcher.find("Technologies")
if tech_string is not None:
for tech in tech_string.text.split():
if not tech:
continue
if tech.startswith("-"):
@@ -794,7 +793,7 @@ class CheckRefs:
uniq_files = {r.as_posix() for r in self.files}
lower_case_files = {f.lower(): f for f in uniq_files}
missing_files: Dict[str, Set[str]] = defaultdict(set)
missing_files: dict[str, set[str]] = defaultdict(set)
for parent, dep in self.deps:
dep_str = dep.as_posix()


@@ -7,7 +7,6 @@ import shutil
import sys
from pathlib import Path
from subprocess import CalledProcessError, run
from typing import Sequence
from xml.etree import ElementTree as ET
from scriptlib import SimulTemplateEntity, find_files
@@ -50,7 +49,7 @@ errorch.setFormatter(logging.Formatter("%(levelname)s - %(message)s"))
logger.addHandler(errorch)
def main(argv: Sequence[str] | None = None) -> int:
def main() -> int:
parser = argparse.ArgumentParser(description="Validate templates")
parser.add_argument("-m", "--mod-name", required=True, help="The name of the mod to validate.")
parser.add_argument(
@@ -73,7 +72,7 @@ def main(argv: Sequence[str] | None = None) -> int:
)
parser.add_argument("-v", "--verbose", help="Be verbose about the output.", default=False)
args = parser.parse_args(argv)
args = parser.parse_args()
if not args.relaxng_schema.exists():
logging.error(RELAXNG_SCHEMA_ERROR_MSG.format(args.relaxng_schema))


@@ -1,7 +1,10 @@
# Adapted from http://cairographics.org/freetypepython/
# ruff: noqa: TRY002
import ctypes
import sys
from typing import ClassVar
import cairo
@@ -40,7 +43,7 @@ _surface = cairo.ImageSurface(cairo.FORMAT_A8, 0, 0)
class PycairoContext(ctypes.Structure):
_fields_ = [
_fields_: ClassVar = [
("PyObject_HEAD", ctypes.c_byte * object.__basicsize__),
("ctx", ctypes.c_void_p),
("base", ctypes.c_void_p),


@@ -21,7 +21,7 @@
import io
import os
import subprocess
from typing import List
from itertools import islice
from i18n_helper import PROJECT_ROOT_DIRECTORY
@@ -37,7 +37,7 @@ def get_diff():
return io.StringIO(diff_process.stdout.decode("utf-8"))
def check_diff(diff: io.StringIO) -> List[str]:
def check_diff(diff: io.StringIO) -> list[str]:
"""Check a diff of .po files for meaningful changes.
Run through a diff of .po files and check that some of the changes
@@ -86,16 +86,31 @@ check_diff(diff: io.StringIO) -> List[str]:
return list(files.difference(keep))
def revert_files(files: List[str], verbose=False):
revert_process = subprocess.run(["svn", "revert", *files], capture_output=True, check=False)
if revert_process.returncode != 0:
print(
"Warning: Some files could not be reverted. "
f"Error: {revert_process.stderr.decode('utf-8')}"
def revert_files(files: list[str], verbose=False):
def batched(iterable, n):
"""Split an iterable in equally sized chunks.
Can be removed in favor of itertools.batched(), once Python
3.12 is the minimum required Python version.
"""
iterable = iter(iterable)
return iter(lambda: tuple(islice(iterable, n)), ())
errors = []
for batch in batched(files, 100):
revert_process = subprocess.run(
["svn", "revert", *batch], capture_output=True, check=False
)
if revert_process.returncode != 0:
errors.append(revert_process.stderr.decode())
if verbose:
for file in files:
print(f"Reverted {file}")
if errors:
print()
print("Warning: Some files could not be reverted. Errors:")
print("\n".join(errors))
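The `batched()` helper introduced in this diff relies on the two-argument sentinel form of `iter()`. It can be exercised standalone; this sketch mirrors the diff's implementation:

```python
from itertools import islice


def batched(iterable, n):
    """Split an iterable into n-sized tuples (last one may be shorter)."""
    # iter(callable, sentinel) keeps calling the lambda until it returns
    # the sentinel (), i.e. until islice finds no more items.
    iterable = iter(iterable)
    return iter(lambda: tuple(islice(iterable, n)), ())


print(list(batched(range(7), 3)))  # [(0, 1, 2), (3, 4, 5), (6,)]
```

As the docstring in the diff notes, this is a stand-in for `itertools.batched()`, available from Python 3.12.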
def add_untracked(verbose=False):


@@ -27,51 +27,64 @@ However that needs to be fixed on the transifex side, see rP25896. For now
strip the e-mails using this script.
"""
import fileinput
import glob
import os
import re
import sys
from i18n_helper import L10N_FOLDER_NAME, PROJECT_ROOT_DIRECTORY, TRANSIFEX_CLIENT_FOLDER
TRANSLATOR_REGEX = re.compile(r"^(#\s+[^,<]*)\s+<.*>(.*)$")
LAST_TRANSLATION_REGEX = re.compile(r"^(\"Last-Translator:[^,<]*)\s+<.*>(.*)$")
def main():
translator_match = re.compile(r"^(#\s+[^,<]*)\s+<.*>(.*)")
last_translator_match = re.compile(r"^(\"Last-Translator:[^,<]*)\s+<.*>(.*)")
for folder in glob.iglob(
f"**/{L10N_FOLDER_NAME}/{TRANSIFEX_CLIENT_FOLDER}/",
root_dir=PROJECT_ROOT_DIRECTORY,
recursive=True,
):
for file in glob.iglob(
f"{os.path.join(folder, os.pardir)}/*.po", root_dir=PROJECT_ROOT_DIRECTORY
):
absolute_file_path = os.path.abspath(f"{PROJECT_ROOT_DIRECTORY}/{file}")
for root, folders, _ in os.walk(PROJECT_ROOT_DIRECTORY):
for folder in folders:
if folder != L10N_FOLDER_NAME:
continue
if not os.path.exists(os.path.join(root, folder, TRANSIFEX_CLIENT_FOLDER)):
continue
path = os.path.join(root, folder, "*.po")
files = glob.glob(path)
for file in files:
usernames = []
reached = False
for line in fileinput.input(
file.replace("\\", "/"), inplace=True, encoding="utf-8"
):
if reached:
file_content = []
usernames = []
changes = False
in_translators = False
found_last_translator = False
with open(absolute_file_path, "r+", encoding="utf-8") as fd:
for line in fd:
if line.strip() == "# Translators:":
in_translators = True
elif not line.strip().startswith("#"):
in_translators = False
elif in_translators:
if line == "# \n":
line = ""
m = translator_match.match(line)
if m:
if m.group(1) in usernames:
line = ""
else:
line = m.group(1) + m.group(2) + "\n"
usernames.append(m.group(1))
m2 = last_translator_match.match(line)
if m2:
line = re.sub(last_translator_match, r"\1\2", line)
elif line.strip() == "# Translators:":
reached = True
sys.stdout.write(line)
changes = True
continue
translator_match = TRANSLATOR_REGEX.match(line)
if translator_match:
changes = True
if translator_match.group(1) in usernames:
continue
line = TRANSLATOR_REGEX.sub(r"\1\2", line)
usernames.append(translator_match.group(1))
if not in_translators and not found_last_translator:
last_translator_match = LAST_TRANSLATION_REGEX.match(line)
if last_translator_match:
found_last_translator = True
changes = True
line = LAST_TRANSLATION_REGEX.sub(r"\1\2", line)
file_content.append(line)
if changes:
fd.seek(0)
fd.truncate()
fd.writelines(file_content)
if __name__ == "__main__":


@@ -1,6 +1,6 @@
"""Wrapper around babel Catalog / .po handling."""
from datetime import datetime
from datetime import UTC, datetime
from babel.messages.catalog import Catalog as BabelCatalog
from babel.messages.pofile import read_po, write_po
@@ -10,7 +10,7 @@ class Catalog(BabelCatalog):
"""Wraps a BabelCatalog for convenience."""
def __init__(self, *args, project=None, copyright_holder=None, **other_kwargs):
date = datetime.now()
date = datetime.now(tz=UTC)
super().__init__(
*args,
header_comment=(


@@ -23,13 +23,32 @@
# OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
# ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import codecs
import json
import os
import re
import sys
from functools import lru_cache
from textwrap import dedent
from babel.messages.jslexer import tokenize, unquote_string
from lxml import etree
@lru_cache
def get_mask_pattern(mask: str) -> re.Pattern:
"""Build a regex pattern for matching file paths."""
parts = re.split(r"([*][*]?)", mask)
pattern = ""
for i, part in enumerate(parts):
if i % 2 != 0:
pattern += "[^/]+"
if len(part) == 2:
pattern += "(/[^/]+)*"
else:
pattern += re.escape(part)
pattern += "$"
return re.compile(pattern)
def pathmatch(mask, path):
"""Match paths to a mask, where the mask supports * and **.
@@ -38,32 +57,22 @@ def pathmatch(mask, path):
* matches a sequence of characters without /.
** matches a sequence of characters without / followed by a / and
sequence of characters without /
:return: true iff path matches the mask, false otherwise
:return: true if path matches the mask, false otherwise
"""
s = re.split(r"([*][*]?)", mask)
p = ""
for i in range(len(s)):
if i % 2 != 0:
p = p + "[^/]+"
if len(s[i]) == 2:
p = p + "(/[^/]+)*"
else:
p = p + re.escape(s[i])
p = p + "$"
return re.match(p, path) is not None
return get_mask_pattern(mask).match(path) is not None
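The cached mask compiler above translates `*` and `**` globs into a regex, per the docstring's rules (`*` matches one path segment, `**` also matches nested segments). A self-contained sketch of the same logic, with example masks that are illustrative rather than taken from the repo:

```python
import re
from functools import lru_cache


@lru_cache
def get_mask_pattern(mask: str) -> re.Pattern:
    # '*' -> one path segment ([^/]+); '**' -> one segment optionally
    # followed by further '/'-separated segments.
    parts = re.split(r"([*][*]?)", mask)
    pattern = ""
    for i, part in enumerate(parts):
        if i % 2 != 0:
            pattern += "[^/]+"
            if len(part) == 2:
                pattern += "(/[^/]+)*"
        else:
            pattern += re.escape(part)
    return re.compile(pattern + "$")


def pathmatch(mask, path):
    return get_mask_pattern(mask).match(path) is not None


print(pathmatch("gui/**.js", "gui/common/setup.js"))  # True
print(pathmatch("gui/*.js", "gui/common/setup.js"))   # False
```

The `lru_cache` means each distinct mask is compiled once per run instead of on every call, which is the point of the refactor.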
class Extractor:
def __init__(self, directory_path, filemasks, options):
self.directoryPath = directory_path
self.directory_path = directory_path
self.options = options
if isinstance(filemasks, dict):
self.includeMasks = filemasks["includeMasks"]
self.excludeMasks = filemasks["excludeMasks"]
self.include_masks = filemasks["includeMasks"]
self.exclude_masks = filemasks["excludeMasks"]
else:
self.includeMasks = filemasks
self.excludeMasks = []
self.include_masks = filemasks
self.exclude_masks = []
def run(self):
"""Extract messages.
@@ -73,7 +82,7 @@ class Extractor:
:rtype: ``iterator``
"""
empty_string_pattern = re.compile(r"^\s*$")
directory_absolute_path = os.path.abspath(self.directoryPath)
directory_absolute_path = os.path.abspath(self.directory_path)
for root, folders, filenames in os.walk(directory_absolute_path):
for subdir in folders:
if subdir.startswith((".", "_")):
@@ -82,28 +91,26 @@
filenames.sort()
for filename in filenames:
filename = os.path.relpath(
os.path.join(root, filename), self.directoryPath
os.path.join(root, filename), self.directory_path
).replace(os.sep, "/")
for filemask in self.excludeMasks:
for filemask in self.exclude_masks:
if pathmatch(filemask, filename):
break
else:
for filemask in self.includeMasks:
if pathmatch(filemask, filename):
filepath = os.path.join(directory_absolute_path, filename)
for (
message,
plural,
context,
position,
comments,
) in self.extract_from_file(filepath):
if empty_string_pattern.match(message):
continue
for filemask in self.include_masks:
if not pathmatch(filemask, filename):
continue
if " " in filename or "\t" in filename:
filename = "\u2068" + filename + "\u2069"
yield message, plural, context, (filename, position), comments
filepath = os.path.join(directory_absolute_path, filename)
for message, plural, context, position, comments in self.extract_from_file(
filepath
):
if empty_string_pattern.match(message):
continue
if " " in filename or "\t" in filename:
filename = "\u2068" + filename + "\u2069"
yield message, plural, context, (filename, position), comments
def extract_from_file(self, filepath):
"""Extract messages from a specific file.
@@ -122,8 +129,6 @@ class JavascriptExtractor(Extractor):
)
def extract_javascript_from_file(self, file_object):
from babel.messages.jslexer import tokenize, unquote_string
funcname = message_lineno = None
messages = []
last_argument = None
@@ -237,7 +242,7 @@
last_token = token
def extract_from_file(self, filepath):
with codecs.open(filepath, "r", encoding="utf-8-sig") as file_object:
with open(filepath, encoding="utf-8-sig") as file_object:
for lineno, funcname, messages, comments in self.extract_javascript_from_file(
file_object
):
@@ -301,10 +306,8 @@ class TxtExtractor(Extractor):
"""Extract messages from plain text files."""
def extract_from_file(self, filepath):
with codecs.open(filepath, "r", encoding="utf-8-sig") as file_object:
for lineno, line in enumerate(
[line.strip("\n\r") for line in file_object.readlines()], start=1
):
with open(filepath, encoding="utf-8-sig") as file_object:
for lineno, line in enumerate([line.strip("\n\r") for line in file_object], start=1):
if line:
yield line, None, None, lineno, []
@@ -329,66 +332,38 @@
self.comments = self.options.get("comments", [])
def extract_from_file(self, filepath):
with codecs.open(filepath, "r", "utf-8") as file_object:
with open(filepath, encoding="utf-8") as file_object:
for message, context in self.extract_from_string(file_object.read()):
yield message, None, context, None, self.comments
def extract_from_string(self, string):
json_document = json.loads(string)
if isinstance(json_document, list):
for message, context in self.parse_list(json_document):
if message: # Skip empty strings.
yield message, context
elif isinstance(json_document, dict):
for message, context in self.parse_dictionary(json_document):
if message: # Skip empty strings.
yield message, context
else:
raise Exception(
"Unexpected JSON document parent structure (not a list or a dictionary). "
"You must extend the JSON extractor to support it."
)
yield from self.parse(json_document)
def parse_list(self, items_list):
for list_item in items_list:
if isinstance(list_item, list):
for message, context in self.parse_list(list_item):
yield message, context
elif isinstance(list_item, dict):
for message, context in self.parse_dictionary(list_item):
yield message, context
def parse_dictionary(self, dictionary):
for keyword in dictionary:
if keyword in self.keywords:
if isinstance(dictionary[keyword], str):
yield self.extract_string(dictionary[keyword], keyword)
elif isinstance(dictionary[keyword], list):
for message, context in self.extract_list(dictionary[keyword], keyword):
yield message, context
elif isinstance(dictionary[keyword], dict):
extract = None
if (
"extractFromInnerKeys" in self.keywords[keyword]
and self.keywords[keyword]["extractFromInnerKeys"]
):
for message, context in self.extract_dictionary_inner_keys(
dictionary[keyword], keyword
):
yield message, context
else:
extract = self.extract_dictionary(dictionary[keyword], keyword)
if extract:
yield extract
elif isinstance(dictionary[keyword], list):
for message, context in self.parse_list(dictionary[keyword]):
yield message, context
elif isinstance(dictionary[keyword], dict):
for message, context in self.parse_dictionary(dictionary[keyword]):
yield message, context
def parse(self, data, key=None):
"""Recursively parse JSON data and extract strings."""
if isinstance(data, list):
for item in data:
yield from self.parse(item)
elif isinstance(data, dict):
for key2, value in data.items():
if key2 in self.keywords:
if isinstance(value, str):
yield self.extract_string(value, key2)
elif isinstance(value, list):
yield from self.extract_list(value, key2)
elif isinstance(value, dict):
if self.keywords[key2].get("extractFromInnerKeys"):
for value2 in value.values():
yield from self.parse(value2, key2)
else:
yield from self.extract_dictionary(value, key2)
else:
yield from self.parse(value, key2)
elif isinstance(data, str) and key in self.keywords:
yield self.extract_string(data, key)
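The unified `parse()` above replaces the separate `parse_list`/`parse_dictionary` pair with one recursive walk. A simplified standalone sketch of that strategy — `KEYWORDS` and the sample document are invented for illustration (the real extractor drives this from per-mod option files and also handles `_string` dictionaries and context tagging):

```python
# Hypothetical keyword set; the real one comes from extractor options.
KEYWORDS = {"Name", "History"}


def parse(data, key=None):
    """Recursively walk lists/dicts, yielding strings found under keyword keys."""
    if isinstance(data, list):
        for item in data:
            yield from parse(item, key)
    elif isinstance(data, dict):
        for k, value in data.items():
            if k in KEYWORDS and isinstance(value, str):
                yield value
            else:
                yield from parse(value, k)
    elif isinstance(data, str) and key in KEYWORDS:
        yield data


doc = {"Name": "Athens", "units": [{"History": "Hoplite lore", "cost": 50}]}
print(list(parse(doc)))  # ['Athens', 'Hoplite lore']
```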
def extract_string(self, string, keyword):
context = None
if "tagAsContext" in self.keywords[keyword]:
context = keyword
elif "customContext" in self.keywords[keyword]:
@@ -409,7 +384,6 @@ class JsonExtractor(Extractor):
def extract_dictionary(self, dictionary, keyword):
message = dictionary.get("_string", None)
if message and isinstance(message, str):
context = None
if "context" in dictionary:
context = str(dictionary["context"])
elif "tagAsContext" in self.keywords[keyword]:
@@ -418,19 +392,7 @@
context = self.keywords[keyword]["customContext"]
else:
context = self.context
return message, context
return None
def extract_dictionary_inner_keys(self, dictionary, keyword):
for inner_keyword in dictionary:
if isinstance(dictionary[inner_keyword], str):
yield self.extract_string(dictionary[inner_keyword], keyword)
elif isinstance(dictionary[inner_keyword], list):
yield from self.extract_list(dictionary[inner_keyword], keyword)
elif isinstance(dictionary[inner_keyword], dict):
extract = self.extract_dictionary(dictionary[inner_keyword], keyword)
if extract:
yield extract
yield message, context
class XmlExtractor(Extractor):
@@ -439,86 +401,50 @@
def __init__(self, directory_path, filemasks, options):
super().__init__(directory_path, filemasks, options)
self.keywords = self.options.get("keywords", {})
self.jsonExtractor = None
self.json_extractor = None
def get_json_extractor(self):
if not self.jsonExtractor:
self.jsonExtractor = JsonExtractor()
return self.jsonExtractor
if not self.json_extractor:
self.json_extractor = JsonExtractor(self.directory_path)
return self.json_extractor
def extract_from_file(self, filepath):
from lxml import etree
with codecs.open(filepath, "r", encoding="utf-8-sig") as file_object:
with open(filepath, encoding="utf-8-sig") as file_object:
xml_document = etree.parse(file_object)
for keyword in self.keywords:
for element in xml_document.iter(keyword):
lineno = element.sourceline
if element.text is not None:
comments = []
if "extractJson" in self.keywords[keyword]:
json_extractor = self.get_json_extractor()
json_extractor.set_options(self.keywords[keyword]["extractJson"])
for message, context in json_extractor.extract_from_string(
element.text
):
yield message, None, context, lineno, comments
else:
context = None
if "context" in element.attrib:
context = str(element.get("context"))
elif "tagAsContext" in self.keywords[keyword]:
context = keyword
elif "customContext" in self.keywords[keyword]:
context = self.keywords[keyword]["customContext"]
if "comment" in element.attrib:
comment = element.get("comment")
comment = " ".join(
comment.split()
) # Remove tabs, line breaks and unecessary spaces.
comments.append(comment)
if "splitOnWhitespace" in self.keywords[keyword]:
for split_text in element.text.split():
# split on whitespace is used for token lists, there, a
# leading '-' means the token has to be removed, so it's not
# to be processed here either
if split_text[0] != "-":
yield str(split_text), None, context, lineno, comments
else:
yield str(element.text), None, context, lineno, comments
for element in xml_document.iter(*self.keywords.keys()):
keyword = element.tag
# Hack from http://stackoverflow.com/a/2819788
class FakeSectionHeader:
def __init__(self, fp):
self.fp = fp
self.sechead = "[root]\n"
lineno = element.sourceline
if element.text is None:
continue
def readline(self):
if self.sechead:
try:
return self.sechead
finally:
self.sechead = None
else:
return self.fp.readline()
class IniExtractor(Extractor):
"""Extract messages from INI files."""
def __init__(self, directory_path, filemasks, options):
super().__init__(directory_path, filemasks, options)
self.keywords = self.options.get("keywords", [])
def extract_from_file(self, filepath):
import ConfigParser
config = ConfigParser.RawConfigParser()
with open(filepath, encoding="utf-8") as fd:
config.read_file(FakeSectionHeader(fd))
for keyword in self.keywords:
message = config.get("root", keyword).strip('"').strip("'")
context = None
comments = []
yield message, None, context, None, comments
if "extractJson" in self.keywords[keyword]:
json_extractor = self.get_json_extractor()
json_extractor.set_options(self.keywords[keyword]["extractJson"])
for message, context in json_extractor.extract_from_string(element.text):
yield message, None, context, lineno, comments
else:
context = None
if "context" in element.attrib:
context = str(element.get("context"))
elif "tagAsContext" in self.keywords[keyword]:
context = keyword
elif "customContext" in self.keywords[keyword]:
context = self.keywords[keyword]["customContext"]
if "comment" in element.attrib:
comment = element.get("comment")
comment = " ".join(
comment.split()
) # Remove tabs, line breaks and unnecessary spaces.
comments.append(comment)
if "splitOnWhitespace" in self.keywords[keyword]:
for split_text in element.text.split():
# split on whitespace is used for token lists, there, a
# leading '-' means the token has to be removed, so it's not
# to be processed here either
if split_text[0] != "-":
yield str(split_text), None, context, lineno, comments
else:
yield str(element.text), None, context, lineno, comments
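The `FakeSectionHeader` wrapper deleted above exists because `configparser` refuses INI files that contain no section header at all. A minimal sketch of the same idea in Python 3, using `read_string` instead of a file-like wrapper (the key/value content below is illustrative only):

```python
import configparser

# configparser rejects section-less INI input, so prepend a fake
# "[root]" header -- the same trick FakeSectionHeader.readline() played.
ini_text = 'title = "0 A.D."\ntagline = "Empires Ascendant"\n'

config = configparser.RawConfigParser()
# read_string() (Python 3.2+) is the modern equivalent of wrapping a
# file object just to inject the header.
config.read_string("[root]\n" + ini_text)

print(config.get("root", "title").strip('"'))  # 0 A.D.
```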

View File

@ -1,12 +1,11 @@
"""Utils to list .po files."""
import os
from typing import List, Optional
from i18n_helper.catalog import Catalog
def get_catalogs(input_file_path, filters: Optional[List[str]] = None) -> List[Catalog]:
def get_catalogs(input_file_path, filters: list[str] | None = None) -> list[Catalog]:
"""Return a list of "real" catalogs (.po) in the given folder."""
existing_translation_catalogs = []
l10n_folder_path = os.path.dirname(input_file_path)

View File

@ -32,7 +32,7 @@ def main():
path = os.path.join(root, folder)
os.chdir(path)
print(f"INFO: Starting to pull translations in {path}...")
subprocess.run(["tx", "pull", "-a", "-f"], check=False)
subprocess.run(["tx", "pull", "--all", "--force", "--workers=12"], check=False)
if __name__ == "__main__":

View File

@ -97,7 +97,7 @@ PATCHES_EXPECT_REVERT = [
]
@pytest.fixture(params=zip(PATCHES, PATCHES_EXPECT_REVERT))
@pytest.fixture(params=zip(PATCHES, PATCHES_EXPECT_REVERT, strict=False))
def patch(request):
return [io.StringIO(request.param[0]), request.param[1]]

View File

@ -16,6 +16,8 @@
# You should have received a copy of the GNU General Public License
# along with 0 A.D. If not, see <http://www.gnu.org/licenses/>.
import argparse
import glob
import json
import multiprocessing
import os
@ -33,18 +35,21 @@ def warn_about_untouched_mods():
mods_root_folder = os.path.join(PROJECT_ROOT_DIRECTORY, "binaries", "data", "mods")
untouched_mods = {}
for mod_folder in os.listdir(mods_root_folder):
if mod_folder[0] != "_" and mod_folder[0] != ".":
if not os.path.exists(os.path.join(mods_root_folder, mod_folder, L10N_FOLDER_NAME)):
untouched_mods[mod_folder] = (
f"There is no '{L10N_FOLDER_NAME}' folder in the root folder of this mod."
)
elif not os.path.exists(
os.path.join(mods_root_folder, mod_folder, L10N_FOLDER_NAME, messages_filename)
):
untouched_mods[mod_folder] = (
f"There is no '{messages_filename}' file within the '{L10N_FOLDER_NAME}' "
f"folder in the root folder of this mod."
)
if mod_folder.startswith(("_", ".")):
continue
if not os.path.exists(os.path.join(mods_root_folder, mod_folder, L10N_FOLDER_NAME)):
untouched_mods[mod_folder] = (
f"There is no '{L10N_FOLDER_NAME}' folder in the root folder of this mod."
)
elif not os.path.exists(
os.path.join(mods_root_folder, mod_folder, L10N_FOLDER_NAME, messages_filename)
):
untouched_mods[mod_folder] = (
f"There is no '{messages_filename}' file within the '{L10N_FOLDER_NAME}' "
f"folder in the root folder of this mod."
)
if untouched_mods:
print("Warning: No messages were extracted from the following mods:")
for mod in untouched_mods:
@ -78,7 +83,7 @@ def generate_pot(template_settings, root_path):
options = rule.get("options", {})
extractor_class = getattr(
import_module("extractors.extractors"), f'{rule["extractor"].title()}Extractor'
import_module("i18n_helper.extractors"), f'{rule["extractor"].title()}Extractor'
)
extractor = extractor_class(input_root_path, rule["filemasks"], options)
format_flag = None
@ -111,8 +116,6 @@ def generate_templates_for_messages_file(messages_file_path):
def main():
import argparse
parser = argparse.ArgumentParser()
parser.add_argument(
"--scandir",
@ -120,12 +123,13 @@ def main():
"Type '.' for current working directory",
)
args = parser.parse_args()
for root, folders, _filenames in os.walk(args.scandir or PROJECT_ROOT_DIRECTORY):
for folder in folders:
if folder == L10N_FOLDER_NAME:
messages_file_path = os.path.join(root, folder, messages_filename)
if os.path.exists(messages_file_path):
generate_templates_for_messages_file(messages_file_path)
dir_to_scan = args.scandir or PROJECT_ROOT_DIRECTORY
for messages_file_path in glob.glob(
f"**/{L10N_FOLDER_NAME}/{messages_filename}", root_dir=dir_to_scan, recursive=True
):
generate_templates_for_messages_file(
os.path.abspath(f"{dir_to_scan}/{messages_file_path}")
)
warn_about_untouched_mods()

View File

@ -56,13 +56,13 @@ args = parser.parse_args()
HEIGHTMAP_BIT_SHIFT = 3
for xmlFile in args.files:
pmpFile = xmlFile[:-3] + "pmp"
for xml_file in args.files:
pmp_file = xml_file[:-3] + "pmp"
print("Processing " + xmlFile + " ...")
print("Processing " + xml_file + " ...")
if os.path.isfile(pmpFile):
with open(pmpFile, "rb") as f1, open(pmpFile + "~", "wb") as f2:
if os.path.isfile(pmp_file):
with open(pmp_file, "rb") as f1, open(pmp_file + "~", "wb") as f2:
# 4 bytes PSMP to start the file
f2.write(f1.read(4))
@ -73,7 +73,7 @@ for xmlFile in args.files:
elif args.reverse:
if version != 6:
print(
f"Warning: File {pmpFile} was not at version 6, while a negative version "
f"Warning: File {pmp_file} was not at version 6, while a negative version "
f"bump was requested.\nABORTING ..."
)
continue
@ -81,7 +81,7 @@ for xmlFile in args.files:
else:
if version != 5:
print(
f"Warning: File {pmpFile} was not at version 5, while a version bump was "
f"Warning: File {pmp_file} was not at version 5, while a version bump was "
f"requested.\nABORTING ..."
)
continue
@ -122,13 +122,13 @@ for xmlFile in args.files:
f1.close()
# replace the old file, comment to see both files
os.remove(pmpFile)
os.rename(pmpFile + "~", pmpFile)
os.remove(pmp_file)
os.rename(pmp_file + "~", pmp_file)
if os.path.isfile(xmlFile):
if os.path.isfile(xml_file):
with (
open(xmlFile, encoding="utf-8") as f1,
open(xmlFile + "~", "w", encoding="utf-8") as f2,
open(xml_file, encoding="utf-8") as f1,
open(xml_file + "~", "w", encoding="utf-8") as f2,
):
data = f1.read()
@ -137,7 +137,7 @@ for xmlFile in args.files:
if args.reverse:
if data.find('<Scenario version="6">') == -1:
print(
f"Warning: File {xmlFile} was not at version 6, while a negative "
f"Warning: File {xml_file} was not at version 6, while a negative "
f"version bump was requested.\nABORTING ..."
)
sys.exit()
@ -145,7 +145,7 @@ for xmlFile in args.files:
data = data.replace('<Scenario version="6">', '<Scenario version="5">')
elif data.find('<Scenario version="5">') == -1:
print(
f"Warning: File {xmlFile} was not at version 5, while a version bump "
f"Warning: File {xml_file} was not at version 5, while a version bump "
f"was requested.\nABORTING ..."
)
sys.exit()
@ -164,5 +164,5 @@ for xmlFile in args.files:
f2.close()
# replace the old file, comment to see both files
os.remove(xmlFile)
os.rename(xmlFile + "~", xmlFile)
os.remove(xml_file)
os.rename(xml_file + "~", xml_file)

View File

@ -58,4 +58,4 @@ actions = [zero_ad.actions.attack(my_units, enemy_units[0])]
state = game.step(actions)
```
For a more thorough example, check out samples/simple-example.py!
For a more thorough example, check out samples/simple_example.py!

View File

@ -7,11 +7,11 @@ import zero_ad
def dist(p1, p2):
return math.sqrt(sum(math.pow(x2 - x1, 2) for (x1, x2) in zip(p1, p2)))
return math.sqrt(sum(math.pow(x2 - x1, 2) for (x1, x2) in zip(p1, p2, strict=False)))
def center(units):
sum_position = map(sum, zip(*(u.position() for u in units)))
sum_position = map(sum, zip(*(u.position() for u in units), strict=False))
return [x / len(units) for x in sum_position]

View File

@ -11,11 +11,11 @@ with open(path.join(scriptdir, "..", "samples", "arcadia.json"), encoding="utf-8
def dist(p1, p2):
return math.sqrt(sum(math.pow(x2 - x1, 2) for (x1, x2) in zip(p1, p2)))
return math.sqrt(sum(math.pow(x2 - x1, 2) for (x1, x2) in zip(p1, p2, strict=False)))
def center(units):
sum_position = map(sum, zip(*(u.position() for u in units)))
sum_position = map(sum, zip(*(u.position() for u in units), strict=False))
return [x / len(units) for x in sum_position]

View File

@ -26,7 +26,7 @@ class RLAPI:
def get_templates(self, names):
post_data = "\n".join(names)
response = self.post("templates", post_data)
return zip(names, response.decode().split("\n"))
return zip(names, response.decode().split("\n"), strict=False)
def evaluate(self, code):
response = self.post("evaluate", code)

View File

@ -17,7 +17,7 @@ class ZeroAD:
actions = []
player_ids = cycle([self.player_id]) if player is None else cycle(player)
cmds = zip(player_ids, actions)
cmds = zip(player_ids, actions, strict=False)
cmds = ((player, action) for (player, action) in cmds if action is not None)
state_json = self.api.step(cmds)
self.current_state = GameState(json.loads(state_json), self)

View File

@ -1,7 +1,6 @@
#!/usr/bin/env python3
# -*- mode: python-mode; python-indent-offset: 4; -*-
#
# Copyright (C) 2023 Wildfire Games.
# Copyright (C) 2024 Wildfire Games.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
@ -26,9 +25,12 @@ import hashlib
import itertools
import json
import os
import shutil
import subprocess
import sys
import xml.etree.ElementTree as ET
from enum import Enum
from pathlib import Path
import yaml
@ -46,15 +48,7 @@ def execute(command):
def calculate_hash(path):
assert os.path.isfile(path)
with open(path, "rb") as handle:
return hashlib.sha1(handle.read()).hexdigest()
def compare_spirv(path1, path2):
with open(path1, "rb") as handle:
spirv1 = handle.read()
with open(path2, "rb") as handle:
spirv2 = handle.read()
return spirv1 == spirv2
return hashlib.sha256(handle.read()).hexdigest()
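This hunk swaps SHA-1 for SHA-256 and deletes `compare_spirv()`: with a collision-resistant hash, equal digests can be treated as equal files, so the extra byte-for-byte check is no longer needed. A sketch of the surviving helper on a throwaway file:

```python
import hashlib
import os
import tempfile

# Hash a file's full contents with SHA-256; matching digests are then
# trusted without a separate byte-for-byte comparison.
def calculate_hash(path):
    with open(path, "rb") as handle:
        return hashlib.sha256(handle.read()).hexdigest()

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"spirv bytes")
print(len(calculate_hash(tmp.name)))  # 64 (hex digits in a SHA-256 digest)
os.remove(tmp.name)
```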
def resolve_if(defines, expression):
@ -132,7 +126,17 @@ def compile_and_reflect(input_mod_path, dependencies, stage, path, out_path, def
)
execute(command[:-2] + ["-g", "-E", "-o", preprocessor_output_path])
raise ValueError(err)
ret, out, err = execute(["spirv-reflect", "-y", "-v", "1", output_path])
spirv_reflect = os.getenv("SPIRV_REFLECT", "spirv-reflect")
if shutil.which(spirv_reflect) is None:
spirv_reflect = (
Path(__file__).resolve().parent.parent.parent.parent
/ "libraries"
/ "source"
/ "spirv-reflect"
/ "bin"
/ "spirv-reflect"
)
ret, out, err = execute([spirv_reflect, "-y", "-v", "1", output_path])
if ret:
sys.stderr.write(
"Command returned {}:\nCommand: {}\nInput path: {}\nOutput path: {}\n"
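The `spirv_reflect` lookup added above follows a three-step order: an explicit `SPIRV_REFLECT` environment override, then `PATH` via `shutil.which`, then the vendored binary under `libraries/source/spirv-reflect/bin/`. A generic sketch of that pattern (the `find_tool` helper name is mine, not the script's):

```python
import os
import shutil

# Resolve a tool: env override first, then PATH, then a vendored copy.
def find_tool(name, env_var, vendored_path):
    tool = os.getenv(env_var, name)
    if shutil.which(tool) is not None:
        return tool
    return vendored_path

tool = find_tool(
    "spirv-reflect",
    "SPIRV_REFLECT",
    "libraries/source/spirv-reflect/bin/spirv-reflect",
)
print(tool)
```

Note the vendored fallback is returned without checking it exists; as in the diff, a missing vendored binary surfaces later as a failed `execute()` call.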
@ -169,18 +173,21 @@ def compile_and_reflect(input_mod_path, dependencies, stage, path, out_path, def
add_push_constants(module["push_constants"][0], push_constants)
descriptor_sets = []
if module.get("descriptor_sets"):
VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER = 1
VK_DESCRIPTOR_TYPE_STORAGE_IMAGE = 3
VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER = 6
VK_DESCRIPTOR_TYPE_STORAGE_BUFFER = 7
class VkDescriptorType(Enum):
COMBINED_IMAGE_SAMPLER = 1
STORAGE_IMAGE = 3
UNIFORM_BUFFER = 6
STORAGE_BUFFER = 7
for descriptor_set in module["descriptor_sets"]:
UNIFORM_SET = 1 if use_descriptor_indexing else 0
STORAGE_SET = 2
uniform_set = 1 if use_descriptor_indexing else 0
storage_set = 2
bindings = []
if descriptor_set["set"] == UNIFORM_SET:
if descriptor_set["set"] == uniform_set:
assert descriptor_set["binding_count"] > 0
for binding in descriptor_set["bindings"]:
assert binding["set"] == UNIFORM_SET
assert binding["set"] == uniform_set
block = binding["block"]
members = []
for member in block["members"]:
@ -200,15 +207,15 @@ def compile_and_reflect(input_mod_path, dependencies, stage, path, out_path, def
}
)
binding = descriptor_set["bindings"][0]
assert binding["descriptor_type"] == VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER
elif descriptor_set["set"] == STORAGE_SET:
assert binding["descriptor_type"] == VkDescriptorType.UNIFORM_BUFFER.value
elif descriptor_set["set"] == storage_set:
assert descriptor_set["binding_count"] > 0
for binding in descriptor_set["bindings"]:
is_storage_image = (
binding["descriptor_type"] == VK_DESCRIPTOR_TYPE_STORAGE_IMAGE
binding["descriptor_type"] == VkDescriptorType.STORAGE_IMAGE.value
)
is_storage_buffer = (
binding["descriptor_type"] == VK_DESCRIPTOR_TYPE_STORAGE_BUFFER
binding["descriptor_type"] == VkDescriptorType.STORAGE_BUFFER.value
)
assert is_storage_image or is_storage_buffer
assert (
@ -217,13 +224,13 @@ def compile_and_reflect(input_mod_path, dependencies, stage, path, out_path, def
)
assert binding["image"]["arrayed"] == 0
assert binding["image"]["ms"] == 0
bindingType = "storageImage"
binding_type = "storageImage"
if is_storage_buffer:
bindingType = "storageBuffer"
binding_type = "storageBuffer"
bindings.append(
{
"binding": binding["binding"],
"type": bindingType,
"type": binding_type,
"name": binding["name"],
}
)
@ -232,7 +239,8 @@ def compile_and_reflect(input_mod_path, dependencies, stage, path, out_path, def
assert descriptor_set["binding_count"] >= 1
for binding in descriptor_set["bindings"]:
assert (
binding["descriptor_type"] == VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER
binding["descriptor_type"]
== VkDescriptorType.COMBINED_IMAGE_SAMPLER.value
)
assert binding["array"]["dims"][0] == 16384
if binding["binding"] == 0:
@ -246,7 +254,9 @@ def compile_and_reflect(input_mod_path, dependencies, stage, path, out_path, def
else:
assert descriptor_set["binding_count"] > 0
for binding in descriptor_set["bindings"]:
assert binding["descriptor_type"] == VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER
assert (
binding["descriptor_type"] == VkDescriptorType.COMBINED_IMAGE_SAMPLER.value
)
assert binding["image"]["sampled"] == 1
assert binding["image"]["arrayed"] == 0
assert binding["image"]["ms"] == 0
@ -450,19 +460,10 @@ def build(rules, input_mod_path, output_mod_path, dependencies, program_name):
spirv_hash = calculate_hash(output_spirv_path)
if spirv_hash not in hashed_cache:
hashed_cache[spirv_hash] = [file_name]
hashed_cache[spirv_hash] = file_name
else:
found_candidate = False
for candidate_name in hashed_cache[spirv_hash]:
candidate_path = os.path.join(output_spirv_mod_path, candidate_name)
if compare_spirv(output_spirv_path, candidate_path):
found_candidate = True
file_name = candidate_name
break
if found_candidate:
os.remove(output_spirv_path)
else:
hashed_cache[spirv_hash].append(file_name)
file_name = hashed_cache[spirv_hash]
os.remove(output_spirv_path)
shader_element = ET.SubElement(program_root, shader["type"])
shader_element.set("file", "spirv/" + file_name)
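The diff above also replaces the bare `VK_DESCRIPTOR_TYPE_*` integers with a `VkDescriptorType` enum; the raw integers from the spirv-reflect dump are then compared via `.value`. A self-contained sketch of that pattern:

```python
from enum import Enum

# Named members replace magic Vulkan descriptor-type integers.
class VkDescriptorType(Enum):
    COMBINED_IMAGE_SAMPLER = 1
    STORAGE_IMAGE = 3
    UNIFORM_BUFFER = 6
    STORAGE_BUFFER = 7

raw_descriptor_type = 6  # as a reflection dump would report it
# Compare raw ints against .value, or convert for a readable name.
assert raw_descriptor_type == VkDescriptorType.UNIFORM_BUFFER.value
print(VkDescriptorType(raw_descriptor_type).name)  # UNIFORM_BUFFER
```

Constructing `VkDescriptorType(raw_value)` also raises `ValueError` on unknown integers, which gives slightly stricter validation than an `==` against a bare constant.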

View File

This Python tool was written by wraitii and updated to 0ad A25 by hyiltiz.
Its purpose is to help with unit and civ balancing by allowing quick comparison
of important template data.
Run it using `python unitTables.py` or `pypy unitTables.py` (if you have pypy
Run it using `python unit_tables.py` or `pypy unit_tables.py` (if you have pypy
installed). The output will be located in an HTML file called
`unit_summary_table.html` in this folder.
@ -17,15 +17,15 @@ The script generates 3 informative tables:
You can customize the script by changing the units to include (loading all units
might make it slightly unreadable). To do so, edit the
`LoadTemplatesIfParent` variable. You can also consider only some civilizations.
`LOAD_TEMPLATES_IF_PARENT` constant. You can also consider only some civilizations.
You may also filter some templates based on their name, if you want to remove
specific templates. By default it loads all citizen soldiers and all champions,
and ignores non-interesting units for the comparison/efficienicy table (2nd
and ignores non-interesting units for the comparison/efficiency table (2nd
table).
The HTML page comes with a JavaScript extension that allows filtering and sorting
in place, to help with comparisons. You can disable this by disabling JavaScript
or by changing the `AddSortingOverlay` parameter in the script. This JS
or by changing the `ADD_SORTING_OVERLAY` constant in the script. This JS
extension, called TableFilter, is released under the MIT license. The version
used can be found at https://github.com/koalyptus/TableFilter/
@ -62,13 +62,13 @@ getting familiarized with the analyzer. Note that you'll need `dot` engine provi
by the `graphviz` package. You can install `graphviz` using your system's package manager.
pip3 install pyan3==1.1.1
python3 -m pyan unitTables.py --uses --no-defines --colored \
python3 -m pyan unit_tables.py --uses --no-defines --colored \
--grouped --annotated --html > fundeps.html
Alternatively, only create the `.dot` file using the following line, and render
it with an online renderer like http://viz-js.com/
python3 -m pyan unitTables.py --uses --no-defines --colored \
python3 -m pyan unit_tables.py --uses --no-defines --colored \
--grouped --annotated --dot > fundeps.dot
Enjoy!

View File

@ -1,7 +1,6 @@
#!/usr/bin/env python3
# -*- mode: python-mode; python-indent-offset: 4; -*-
#
# Copyright (C) 2023 Wildfire Games.
# Copyright (C) 2024 Wildfire Games.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
@ -34,14 +33,14 @@ sys.path.append("../entity")
from scriptlib import SimulTemplateEntity
AttackTypes = ["Hack", "Pierce", "Crush", "Poison", "Fire"]
Resources = ["food", "wood", "stone", "metal"]
ATTACK_TYPES = ["Hack", "Pierce", "Crush", "Poison", "Fire"]
RESOURCES = ["food", "wood", "stone", "metal"]
# Generic templates to load
# The way this works is it tries all generic templates
# but only loads those that have one of the following parents.
# E.g. adding "template_unit.xml" will load all units.
LoadTemplatesIfParent = [
LOAD_TEMPLATES_IF_PARENT = [
"template_unit_infantry.xml",
"template_unit_cavalry.xml",
"template_unit_champion.xml",
@ -51,7 +50,7 @@ LoadTemplatesIfParent = [
# Those describe Civs to analyze.
# The script will load all entities that derive (to the nth degree) from one of
# the above templates.
Civs = [
CIVS = [
"athen",
"brit",
"cart",
@ -70,16 +69,16 @@ Civs = [
]
# Remove Civ templates with these strings in their name.
FilterOut = ["marian", "thureophoros", "thorakites", "kardakes"]
FILTER_OUT = ["marian", "thureophoros", "thorakites", "kardakes"]
# In the Civilization specific units table, do you want to only show the units
# that are different from the generic templates?
showChangedOnly = True
SHOW_CHANGED_ONLY = True
# Sorting parameters for the "roster variety" table
ComparativeSortByCav = True
ComparativeSortByChamp = True
ClassesUsedForSort = [
COMPARATIVE_SORT_BY_CAV = True
COMPARATIVE_SORT_BY_CHAMP = True
CLASSES_USED_FOR_SORT = [
"Support",
"Pike",
"Spear",
@ -92,16 +91,16 @@ ClassesUsedForSort = [
# Disable if you want the more compact basic data. Enable to allow filtering and
# sorting in-place.
AddSortingOverlay = True
ADD_SORTING_OVERLAY = True
# This is the path to the /templates/ folder to consider. Change this for mod
# support.
modsFolder = Path(__file__).resolve().parents[3] / "binaries" / "data" / "mods"
basePath = modsFolder / "public" / "simulation" / "templates"
mods_folder = Path(__file__).resolve().parents[3] / "binaries" / "data" / "mods"
base_path = mods_folder / "public" / "simulation" / "templates"
# For performance purposes, cache opened templates files.
globalTemplatesList = {}
sim_entity = SimulTemplateEntity(modsFolder, None)
global_templates_list = {}
sim_entity = SimulTemplateEntity(mods_folder, None)
def htbout(file, balise, value):
@ -112,27 +111,27 @@ def htout(file, value):
file.write("<p>" + value + "</p>\n")
def fastParse(template_name):
def fast_parse(template_name):
"""Run ET.parse() with memoising in a global table."""
if template_name in globalTemplatesList:
return globalTemplatesList[template_name]
if template_name in global_templates_list:
return global_templates_list[template_name]
parent_string = ET.parse(template_name).getroot().get("parent")
globalTemplatesList[template_name] = sim_entity.load_inherited(
global_templates_list[template_name] = sim_entity.load_inherited(
"simulation/templates/", str(template_name), ["public"]
)
globalTemplatesList[template_name].set("parent", parent_string)
return globalTemplatesList[template_name]
global_templates_list[template_name].set("parent", parent_string)
return global_templates_list[template_name]
def getParents(template_name):
template_data = fastParse(template_name)
def get_parents(template_name):
template_data = fast_parse(template_name)
parents_string = template_data.get("parent")
if parents_string is None:
return set()
parents = set()
for parent in parents_string.split("|"):
parents.add(parent)
for element in getParents(
for element in get_parents(
sim_entity.get_file("simulation/templates/", parent + ".xml", "public")
):
parents.add(element)
@ -140,18 +139,18 @@ def getParents(template_name):
return parents
def ExtractValue(value):
def extract_value(value):
return float(value.text) if value is not None else 0.0
# This function checks that a template has the given parent.
def hasParentTemplate(template_name, parentName):
return any(parentName == parent + ".xml" for parent in getParents(template_name))
def has_parent_template(template_name, parent_name):
return any(parent_name == parent + ".xml" for parent in get_parents(template_name))
def CalcUnit(UnitName, existingUnit=None):
if existingUnit is not None:
unit = existingUnit
def calc_unit(unit_name, existing_unit=None):
if existing_unit is not None:
unit = existing_unit
else:
unit = {
"HP": 0,
@ -180,177 +179,177 @@ def CalcUnit(UnitName, existingUnit=None):
"Civ": None,
}
Template = fastParse(UnitName)
template = fast_parse(unit_name)
# 0ad started using unit class/category prefixed to the unit name
# separated by |, known as mixins since A25 (rP25223)
# We strip these categories for now
# This can be used later for classification
unit["Parent"] = Template.get("parent").split("|")[-1] + ".xml"
unit["Civ"] = Template.find("./Identity/Civ").text
unit["HP"] = ExtractValue(Template.find("./Health/Max"))
unit["BuildTime"] = ExtractValue(Template.find("./Cost/BuildTime"))
unit["Cost"]["population"] = ExtractValue(Template.find("./Cost/Population"))
unit["Parent"] = template.get("parent").split("|")[-1] + ".xml"
unit["Civ"] = template.find("./Identity/Civ").text
unit["HP"] = extract_value(template.find("./Health/Max"))
unit["BuildTime"] = extract_value(template.find("./Cost/BuildTime"))
unit["Cost"]["population"] = extract_value(template.find("./Cost/Population"))
resource_cost = Template.find("./Cost/Resources")
resource_cost = template.find("./Cost/Resources")
if resource_cost is not None:
for resource_type in list(resource_cost):
unit["Cost"][resource_type.tag] = ExtractValue(resource_type)
unit["Cost"][resource_type.tag] = extract_value(resource_type)
if Template.find("./Attack/Melee") is not None:
unit["RepeatRate"]["Melee"] = ExtractValue(Template.find("./Attack/Melee/RepeatTime"))
unit["PrepRate"]["Melee"] = ExtractValue(Template.find("./Attack/Melee/PrepareTime"))
if template.find("./Attack/Melee") is not None:
unit["RepeatRate"]["Melee"] = extract_value(template.find("./Attack/Melee/RepeatTime"))
unit["PrepRate"]["Melee"] = extract_value(template.find("./Attack/Melee/PrepareTime"))
for atttype in AttackTypes:
unit["Attack"]["Melee"][atttype] = ExtractValue(
Template.find("./Attack/Melee/Damage/" + atttype)
for atttype in ATTACK_TYPES:
unit["Attack"]["Melee"][atttype] = extract_value(
template.find("./Attack/Melee/Damage/" + atttype)
)
attack_melee_bonus = Template.find("./Attack/Melee/Bonuses")
attack_melee_bonus = template.find("./Attack/Melee/Bonuses")
if attack_melee_bonus is not None:
for Bonus in attack_melee_bonus:
Against = []
CivAg = []
if Bonus.find("Classes") is not None and Bonus.find("Classes").text is not None:
Against = Bonus.find("Classes").text.split(" ")
if Bonus.find("Civ") is not None and Bonus.find("Civ").text is not None:
CivAg = Bonus.find("Civ").text.split(" ")
Val = float(Bonus.find("Multiplier").text)
unit["AttackBonuses"][Bonus.tag] = {
"Classes": Against,
"Civs": CivAg,
"Multiplier": Val,
for bonus in attack_melee_bonus:
against = []
civ_ag = []
if bonus.find("Classes") is not None and bonus.find("Classes").text is not None:
against = bonus.find("Classes").text.split(" ")
if bonus.find("Civ") is not None and bonus.find("Civ").text is not None:
civ_ag = bonus.find("Civ").text.split(" ")
val = float(bonus.find("Multiplier").text)
unit["AttackBonuses"][bonus.tag] = {
"Classes": against,
"Civs": civ_ag,
"Multiplier": val,
}
attack_restricted_classes = Template.find("./Attack/Melee/RestrictedClasses")
attack_restricted_classes = template.find("./Attack/Melee/RestrictedClasses")
if attack_restricted_classes is not None:
newClasses = attack_restricted_classes.text.split(" ")
for elem in newClasses:
new_classes = attack_restricted_classes.text.split(" ")
for elem in new_classes:
if elem.find("-") != -1:
newClasses.pop(newClasses.index(elem))
new_classes.pop(new_classes.index(elem))
if elem in unit["Restricted"]:
unit["Restricted"].pop(newClasses.index(elem))
unit["Restricted"] += newClasses
unit["Restricted"].pop(new_classes.index(elem))
unit["Restricted"] += new_classes
elif Template.find("./Attack/Ranged") is not None:
elif template.find("./Attack/Ranged") is not None:
unit["Ranged"] = True
unit["Range"] = ExtractValue(Template.find("./Attack/Ranged/MaxRange"))
unit["Spread"] = ExtractValue(Template.find("./Attack/Ranged/Projectile/Spread"))
unit["RepeatRate"]["Ranged"] = ExtractValue(Template.find("./Attack/Ranged/RepeatTime"))
unit["PrepRate"]["Ranged"] = ExtractValue(Template.find("./Attack/Ranged/PrepareTime"))
unit["Range"] = extract_value(template.find("./Attack/Ranged/MaxRange"))
unit["Spread"] = extract_value(template.find("./Attack/Ranged/Projectile/Spread"))
unit["RepeatRate"]["Ranged"] = extract_value(template.find("./Attack/Ranged/RepeatTime"))
unit["PrepRate"]["Ranged"] = extract_value(template.find("./Attack/Ranged/PrepareTime"))
for atttype in AttackTypes:
unit["Attack"]["Ranged"][atttype] = ExtractValue(
Template.find("./Attack/Ranged/Damage/" + atttype)
for atttype in ATTACK_TYPES:
unit["Attack"]["Ranged"][atttype] = extract_value(
template.find("./Attack/Ranged/Damage/" + atttype)
)
if Template.find("./Attack/Ranged/Bonuses") is not None:
for Bonus in Template.find("./Attack/Ranged/Bonuses"):
Against = []
CivAg = []
if Bonus.find("Classes") is not None and Bonus.find("Classes").text is not None:
Against = Bonus.find("Classes").text.split(" ")
if Bonus.find("Civ") is not None and Bonus.find("Civ").text is not None:
CivAg = Bonus.find("Civ").text.split(" ")
Val = float(Bonus.find("Multiplier").text)
unit["AttackBonuses"][Bonus.tag] = {
"Classes": Against,
"Civs": CivAg,
"Multiplier": Val,
if template.find("./Attack/Ranged/Bonuses") is not None:
for bonus in template.find("./Attack/Ranged/Bonuses"):
against = []
civ_ag = []
if bonus.find("Classes") is not None and bonus.find("Classes").text is not None:
against = bonus.find("Classes").text.split(" ")
if bonus.find("Civ") is not None and bonus.find("Civ").text is not None:
civ_ag = bonus.find("Civ").text.split(" ")
val = float(bonus.find("Multiplier").text)
unit["AttackBonuses"][bonus.tag] = {
"Classes": against,
"Civs": civ_ag,
"Multiplier": val,
}
if Template.find("./Attack/Melee/RestrictedClasses") is not None:
newClasses = Template.find("./Attack/Melee/RestrictedClasses").text.split(" ")
for elem in newClasses:
if template.find("./Attack/Melee/RestrictedClasses") is not None:
new_classes = template.find("./Attack/Melee/RestrictedClasses").text.split(" ")
for elem in new_classes:
if elem.find("-") != -1:
newClasses.pop(newClasses.index(elem))
new_classes.pop(new_classes.index(elem))
if elem in unit["Restricted"]:
unit["Restricted"].pop(newClasses.index(elem))
unit["Restricted"] += newClasses
unit["Restricted"].pop(new_classes.index(elem))
unit["Restricted"] += new_classes
if Template.find("Resistance") is not None:
for atttype in AttackTypes:
unit["Resistance"][atttype] = ExtractValue(
Template.find("./Resistance/Entity/Damage/" + atttype)
if template.find("Resistance") is not None:
for atttype in ATTACK_TYPES:
unit["Resistance"][atttype] = extract_value(
template.find("./Resistance/Entity/Damage/" + atttype)
)
if (
Template.find("./UnitMotion") is not None
and Template.find("./UnitMotion/WalkSpeed") is not None
template.find("./UnitMotion") is not None
and template.find("./UnitMotion/WalkSpeed") is not None
):
unit["WalkSpeed"] = ExtractValue(Template.find("./UnitMotion/WalkSpeed"))
unit["WalkSpeed"] = extract_value(template.find("./UnitMotion/WalkSpeed"))
if Template.find("./Identity/VisibleClasses") is not None:
newClasses = Template.find("./Identity/VisibleClasses").text.split(" ")
for elem in newClasses:
if template.find("./Identity/VisibleClasses") is not None:
new_classes = template.find("./Identity/VisibleClasses").text.split(" ")
for elem in new_classes:
if elem.find("-") != -1:
newClasses.pop(newClasses.index(elem))
new_classes.pop(new_classes.index(elem))
if elem in unit["Classes"]:
unit["Classes"].pop(newClasses.index(elem))
unit["Classes"] += newClasses
unit["Classes"].pop(new_classes.index(elem))
unit["Classes"] += new_classes
    if template.find("./Identity/Classes") is not None:
        new_classes = template.find("./Identity/Classes").text.split(" ")
        for elem in new_classes:
            if elem.find("-") != -1:
                new_classes.pop(new_classes.index(elem))
                if elem in unit["Classes"]:
                    unit["Classes"].pop(new_classes.index(elem))
        unit["Classes"] += new_classes

    return unit
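The class-merge rule used above (a token prefixed with "-" removes a class from the accumulated list instead of adding one) can be sketched in isolation. This is an illustration only, not part of unitTables.py; `merge_classes` is a hypothetical helper name:

```python
def merge_classes(current, new_tokens):
    """Merge a space-separated class token list into `current`.

    A token like "-Cavalry" removes that class if present; any other
    token is appended. Sketch of the rule, not the script's exact code.
    """
    result = list(current)
    additions = []
    for token in new_tokens:
        if token.startswith("-"):
            name = token[1:]
            if name in result:
                result.remove(name)
        else:
            additions.append(token)
    return result + additions


print(merge_classes(["Infantry", "Cavalry"], ["-Cavalry", "Champion"]))
# ['Infantry', 'Champion']
```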

def write_unit(name, unit_dict):
    ret = "<tr>"
    ret += '<td class="Sub">' + name + "</td>"
    ret += "<td>" + str("{:.0f}".format(float(unit_dict["HP"]))) + "</td>"
    ret += "<td>" + str("{:.0f}".format(float(unit_dict["BuildTime"]))) + "</td>"
    ret += "<td>" + str("{:.1f}".format(float(unit_dict["WalkSpeed"]))) + "</td>"
    for atype in ATTACK_TYPES:
        percent_value = 1.0 - (0.9 ** float(unit_dict["Resistance"][atype]))
        ret += (
            "<td>"
            + str("{:.0f}".format(float(unit_dict["Resistance"][atype])))
            + " / "
            + str("%.0f" % (percent_value * 100.0))
            + "%</td>"
        )
    att_type = "Ranged" if unit_dict["Ranged"] is True else "Melee"
    if unit_dict["RepeatRate"][att_type] != "0":
        for atype in ATTACK_TYPES:
            repeat_time = float(unit_dict["RepeatRate"][att_type]) / 1000.0
            ret += (
                "<td>"
                + str("%.1f" % (float(unit_dict["Attack"][att_type][atype]) / repeat_time))
                + "</td>"
            )
        ret += "<td>" + str("%.1f" % (float(unit_dict["RepeatRate"][att_type]) / 1000.0)) + "</td>"
    else:
        for _ in ATTACK_TYPES:
            ret += "<td> - </td>"
        ret += "<td> - </td>"
    if unit_dict["Ranged"] is True and unit_dict["Range"] > 0:
        ret += "<td>" + str("{:.1f}".format(float(unit_dict["Range"]))) + "</td>"
        spread = float(unit_dict["Spread"])
        ret += "<td>" + str(f"{spread:.1f}") + "</td>"
    else:
        ret += "<td> - </td><td> - </td>"
    for rtype in RESOURCES:
        ret += "<td>" + str("{:.0f}".format(float(unit_dict["Cost"][rtype]))) + "</td>"
    ret += "<td>" + str("{:.0f}".format(float(unit_dict["Cost"]["population"]))) + "</td>"
    ret += '<td style="text-align:left;">'
    for bonus in unit_dict["AttackBonuses"]:
        ret += "["
        for classe in unit_dict["AttackBonuses"][bonus]["Classes"]:
            ret += classe + " "
        ret += ": {}] ".format(unit_dict["AttackBonuses"][bonus]["Multiplier"])
    ret += "</td>"
    ret += "</tr>\n"
@@ -358,37 +357,37 @@ def WriteUnit(Name, UnitDict):
# Sort the templates dictionary.
def sort_fn(a):
    sort_val = 0
    for classe in CLASSES_USED_FOR_SORT:
        sort_val += 1
        if classe in a[1]["Classes"]:
            break
    if COMPARATIVE_SORT_BY_CHAMP is True and a[0].find("champion") == -1:
        sort_val -= 20
    if COMPARATIVE_SORT_BY_CAV is True and a[0].find("cavalry") == -1:
        sort_val -= 10
    if a[1]["Civ"] is not None and a[1]["Civ"] in CIVS:
        sort_val += 100 * CIVS.index(a[1]["Civ"])
    return sort_val
def write_coloured_diff(file, diff, is_changed):
    """Help to write coloured text.

    The diff value must always be computed as unit_spec - unit_generic.
    A positive imaginary part represents an advantageous trait.
    """

    def clever_parse(diff):
        if float(diff) - int(diff) < 0.001:
            return str(int(diff))
        return str(f"{float(diff):.1f}")

    is_advantageous = diff.imag > 0
    diff = diff.real
    if diff != 0:
        is_changed = True
    else:
        # do not change its value if one parameter is not changed (yet);
        # some other parameter might be different
@@ -396,32 +395,32 @@ def WriteColouredDiff(file, diff, isChanged):
    if diff == 0:
        rgb_str = "200,200,200"
    elif is_advantageous and diff > 0 or (not is_advantageous) and diff < 0:
        rgb_str = "180,0,0"
    else:
        rgb_str = "0,150,0"

    file.write(
        f"""<td><span style="color:rgb({rgb_str});">{clever_parse(diff)}</span></td>
"""
    )
    return is_changed
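The complex-number convention used throughout the script can be sketched separately: the real part is the spec-minus-generic difference, while the sign of the imaginary constant (`-1j` for stats where more is better, such as HP and speed; `+1j` for costs, where more is worse) tells the colouring logic which direction is disadvantageous. A minimal sketch of that decision, illustration only (`colour_for` is a hypothetical name and returns colour names rather than the script's rgb strings):

```python
def colour_for(diff: complex) -> str:
    """Classify a stat diff using the script's complex-number convention.

    diff.real is the spec-minus-generic difference; diff.imag's sign
    encodes the direction that counts as a disadvantage.
    """
    is_advantageous = diff.imag > 0
    value = diff.real
    if value == 0:
        return "grey"
    if (is_advantageous and value > 0) or (not is_advantageous and value < 0):
        return "red"
    return "green"


print(colour_for(-1j + 20))  # +20 HP with the -1j convention → "green"
print(colour_for(+1j + 20))  # +20 cost with the +1j convention → "red"
```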

def compute_unit_efficiency_diff(templates_by_parent, civs):
    efficiency_table = {}
    for parent in templates_by_parent:
        for template in [
            template for template in templates_by_parent[parent] if template[1]["Civ"] not in civs
        ]:
            print(template)
        templates_by_parent[parent] = [
            template for template in templates_by_parent[parent] if template[1]["Civ"] in civs
        ]
        templates_by_parent[parent].sort(key=lambda x: civs.index(x[1]["Civ"]))
        for tp in templates_by_parent[parent]:
            # HP
            diff = -1j + (int(tp[1]["HP"]) - int(templates[parent]["HP"]))
            efficiency_table[(parent, tp[0], "HP")] = diff
@@ -436,7 +435,7 @@ def computeUnitEfficiencyDiff(TemplatesByParent, Civs):
            efficiency_table[(parent, tp[0], "WalkSpeed")] = diff

            # Resistance
            for atype in ATTACK_TYPES:
                diff = -1j + (
                    float(tp[1]["Resistance"][atype])
                    - float(templates[parent]["Resistance"][atype])
@@ -444,35 +443,37 @@ def computeUnitEfficiencyDiff(TemplatesByParent, Civs):
                efficiency_table[(parent, tp[0], "Resistance/" + atype)] = diff

            # Attack types (DPS) and rate.
            att_type = "Ranged" if tp[1]["Ranged"] is True else "Melee"
            if tp[1]["RepeatRate"][att_type] != "0":
                for atype in ATTACK_TYPES:
                    my_dps = float(tp[1]["Attack"][att_type][atype]) / (
                        float(tp[1]["RepeatRate"][att_type]) / 1000.0
                    )
                    parent_dps = float(templates[parent]["Attack"][att_type][atype]) / (
                        float(templates[parent]["RepeatRate"][att_type]) / 1000.0
                    )
                    diff = -1j + (my_dps - parent_dps)
                    efficiency_table[(parent, tp[0], "Attack/" + att_type + "/" + atype)] = diff
                    diff = -1j + (
                        float(tp[1]["RepeatRate"][att_type]) / 1000.0
                        - float(templates[parent]["RepeatRate"][att_type]) / 1000.0
                    )
                    efficiency_table[
                        (parent, tp[0], "Attack/" + att_type + "/" + atype + "/RepeatRate")
                    ] = diff

                # range and spread
                if tp[1]["Ranged"] is True:
                    diff = -1j + (float(tp[1]["Range"]) - float(templates[parent]["Range"]))
                    efficiency_table[(parent, tp[0], "Attack/" + att_type + "/Ranged/Range")] = (
                        diff
                    )
                    diff = float(tp[1]["Spread"]) - float(templates[parent]["Spread"])
                    efficiency_table[(parent, tp[0], "Attack/" + att_type + "/Ranged/Spread")] = (
                        diff
                    )

            for rtype in RESOURCES:
                diff = +1j + (
                    float(tp[1]["Cost"][rtype]) - float(templates[parent]["Cost"][rtype])
                )
@@ -486,25 +487,25 @@ def computeUnitEfficiencyDiff(TemplatesByParent, Civs):
    return efficiency_table


def compute_templates(load_templates_if_parent):
    """Loop over template XMLs and selectively insert them into the templates dict."""
    pwd = os.getcwd()
    os.chdir(base_path)
    templates = {}
    for template in list(glob.glob("template_*.xml")):
        if os.path.isfile(template):
            found = False
            for poss_parent in load_templates_if_parent:
                if has_parent_template(template, poss_parent):
                    found = True
                    break
            if found is True:
                templates[template] = calc_unit(template)
    os.chdir(pwd)
    return templates

def compute_civ_templates(civs: list):
    """Load Civ specific templates.

    NOTE: whether a Civ can train a certain unit is not recorded in the unit
@@ -518,87 +519,83 @@ def computeCivTemplates(Civs: list):
    up with the game engine.
    """
    pwd = os.getcwd()
    os.chdir(base_path)

    civ_templates = {}
    for civ in civs:
        civ_templates[civ] = {}
        # Load all templates that start with that civ indicator
        # TODO: consider adding mixin/civs here too
        civ_list = list(glob.glob("units/" + civ + "/*.xml"))
        for template in civ_list:
            if os.path.isfile(template):
                # filter based on FILTER_OUT
                break_it = False
                for civ_filter in FILTER_OUT:
                    if template.find(civ_filter) != -1:
                        break_it = True
                if break_it:
                    continue
                # filter based on loaded generic templates
                break_it = True
                for poss_parent in LOAD_TEMPLATES_IF_PARENT:
                    if has_parent_template(template, poss_parent):
                        break_it = False
                        break
                if break_it:
                    continue
                unit = calc_unit(template)
                # Remove variants for now
                if unit["Parent"].find("template_") == -1:
                    continue
                # load template
                civ_templates[civ][template] = unit
    os.chdir(pwd)
    return civ_templates

def compute_templates_by_parent(templates: dict, civs: list, civ_templates: dict):
    """Get them in the array."""
    # civs:list -> civ_templates:dict -> templates:dict -> templates_by_parent
    templates_by_parent = {}
    for civ in civs:
        for civ_unit_template in civ_templates[civ]:
            parent = civ_templates[civ][civ_unit_template]["Parent"]
            # We have the following constant equality
            # templates[*]["Civ"] === gaia
            # if parent in templates and templates[parent]["Civ"] == None:
            if parent in templates:
                if parent not in templates_by_parent:
                    templates_by_parent[parent] = []
                templates_by_parent[parent].append(
                    (civ_unit_template, civ_templates[civ][civ_unit_template])
                )
    # debug after CivTemplates are non-empty
    return templates_by_parent
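The grouping performed by compute_templates_by_parent can be shown on toy data: each civ-specific unit is filed under the generic template it inherits from, provided that parent was loaded. Illustration only, with invented template names and a trimmed-down unit dict:

```python
# Toy inputs mirroring the shapes used by the script (names are invented).
civ_templates = {
    "athen": {"units/athen/infantry.xml": {"Parent": "template_unit_infantry.xml"}},
    "rome": {"units/rome/infantry.xml": {"Parent": "template_unit_infantry.xml"}},
}
templates = {"template_unit_infantry.xml": {"Civ": None}}

# Group (template_name, unit_dict) pairs under their generic parent.
by_parent = {}
for civ, units in civ_templates.items():
    for name, unit in units.items():
        parent = unit["Parent"]
        if parent in templates:
            by_parent.setdefault(parent, []).append((name, unit))

print(len(by_parent["template_unit_infantry.xml"]))  # 2
```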

############################################################
## Pre-compute all tables
templates = compute_templates(LOAD_TEMPLATES_IF_PARENT)
CivTemplates = compute_civ_templates(CIVS)
TemplatesByParent = compute_templates_by_parent(templates, CIVS, CivTemplates)

# Not used; use it for your own custom analysis
efficiency_table = compute_unit_efficiency_diff(TemplatesByParent, CIVS)
############################################################

def write_html():
    """Create the HTML file."""
    f = open(Path(__file__).parent / "unit_summary_table.html", "w", encoding="utf8")
    f.write(
        """
@@ -639,7 +636,7 @@ def writeHTML():
"""
    )
    for template in templates:
        f.write(write_unit(template, templates[template]))
    f.write("</table>")

    # Write unit specialization
@@ -681,14 +678,10 @@ differences between the two.
"""
    )
    for parent in TemplatesByParent:
        TemplatesByParent[parent].sort(key=lambda x: CIVS.index(x[1]["Civ"]))
        for tp in TemplatesByParent[parent]:
            is_changed = False
            ff = open(Path(__file__).parent / ".cache", "w", encoding="utf8")
            ff.write("<tr>")
            ff.write(
@@ -702,54 +695,56 @@ differences between the two.
            # HP
            diff = -1j + (int(tp[1]["HP"]) - int(templates[parent]["HP"]))
            is_changed = write_coloured_diff(ff, diff, is_changed)

            # Build Time
            diff = +1j + (int(tp[1]["BuildTime"]) - int(templates[parent]["BuildTime"]))
            is_changed = write_coloured_diff(ff, diff, is_changed)

            # walk speed
            diff = -1j + (float(tp[1]["WalkSpeed"]) - float(templates[parent]["WalkSpeed"]))
            is_changed = write_coloured_diff(ff, diff, is_changed)

            # Resistance
            for atype in ATTACK_TYPES:
                diff = -1j + (
                    float(tp[1]["Resistance"][atype])
                    - float(templates[parent]["Resistance"][atype])
                )
                is_changed = write_coloured_diff(ff, diff, is_changed)

            # Attack types (DPS) and rate.
            att_type = "Ranged" if tp[1]["Ranged"] is True else "Melee"
            if tp[1]["RepeatRate"][att_type] != "0":
                for atype in ATTACK_TYPES:
                    my_dps = float(tp[1]["Attack"][att_type][atype]) / (
                        float(tp[1]["RepeatRate"][att_type]) / 1000.0
                    )
                    parent_dps = float(templates[parent]["Attack"][att_type][atype]) / (
                        float(templates[parent]["RepeatRate"][att_type]) / 1000.0
                    )
                    is_changed = write_coloured_diff(ff, -1j + (my_dps - parent_dps), is_changed)
                    is_changed = write_coloured_diff(
                        ff,
                        -1j
                        + (
                            float(tp[1]["RepeatRate"][att_type]) / 1000.0
                            - float(templates[parent]["RepeatRate"][att_type]) / 1000.0
                        ),
                        is_changed,
                    )

                # range and spread
                if tp[1]["Ranged"] is True:
                    is_changed = write_coloured_diff(
                        ff,
                        -1j + (float(tp[1]["Range"]) - float(templates[parent]["Range"])),
                        is_changed,
                    )
                    my_spread = float(tp[1]["Spread"])
                    parent_spread = float(templates[parent]["Spread"])
                    is_changed = write_coloured_diff(
                        ff, +1j + (my_spread - parent_spread), is_changed
                    )
                else:
                    ff.write(
                        "<td><span style='color:rgb(200,200,200);'>-</span></td><td>"
@@ -758,39 +753,36 @@ differences between the two.
            else:
                ff.write("<td></td><td></td><td></td><td></td><td></td><td></td>")
            for rtype in RESOURCES:
                is_changed = write_coloured_diff(
                    ff,
                    +1j + (float(tp[1]["Cost"][rtype]) - float(templates[parent]["Cost"][rtype])),
                    is_changed,
                )

            is_changed = write_coloured_diff(
                ff,
                +1j
                + (
                    float(tp[1]["Cost"]["population"])
                    - float(templates[parent]["Cost"]["population"])
                ),
                is_changed,
            )

            ff.write("<td>" + tp[1]["Civ"] + "</td>")
            ff.write("</tr>\n")
            ff.close()  # to actually write into the file

            with open(Path(__file__).parent / ".cache", encoding="utf-8") as ff:
                unit_str = ff.read()

            if SHOW_CHANGED_ONLY:
                if is_changed:
                    f.write(unit_str)
            else:
                # print the full table if SHOW_CHANGED_ONLY is false
                f.write(unit_str)

    f.write("<table/>")
@@ -813,17 +805,17 @@ each loaded generic template.
<th>Template </th>
"""
    )
    for civ in CIVS:
        f.write('<td class="vertical-text">' + civ + "</td>\n")
    f.write("</tr>\n")

    sorted_dict = sorted(templates.items(), key=sort_fn)

    for tp in sorted_dict:
        if tp[0] not in TemplatesByParent:
            continue
        f.write("<tr><td>" + tp[0] + "</td>\n")
        for civ in CIVS:
            found = 0
            for temp in TemplatesByParent[tp[0]]:
                if temp[1]["Civ"] == civ:
@@ -841,7 +833,7 @@ each loaded generic template.
        '<tr style="margin-top:2px;border-top:2px #aaa solid;">\
        <th style="text-align:right; padding-right:10px;">Total:</th>\n'
    )
    for civ in CIVS:
        count = 0
        for _units in CivTemplates[civ]:
            count += 1
@@ -853,7 +845,7 @@ each loaded generic template.
    # Add a simple script to allow filtering on sorting directly in the HTML
    # page.
    if ADD_SORTING_OVERLAY:
        f.write(
            """
<script src="tablefilter/tablefilter.js"></script>
@@ -941,4 +933,4 @@ tf2.init();
if __name__ == "__main__":
    write_html()


@@ -21,7 +21,7 @@ class SingleLevelFilter(Filter):
        return record.levelno == self.passlevel


class VFSFile:
    def __init__(self, mod_name, vfs_path):
        self.mod_name = mod_name
        self.vfs_path = vfs_path
@@ -201,7 +201,7 @@ class RelaxNGValidator:
            try:
                doc = lxml.etree.parse(str(file[1]))
                relaxng.assertValid(doc)
            except (lxml.etree.DocumentInvalid, lxml.etree.XMLSyntaxError):
                error_count = error_count + 1
                self.logger.exception(file[1])