You might be the best people to suggest a workflow for this.
I have an engine, and part of it is a compiled plugin. Since git repos shouldn’t contain compiled code, every time we cache the apps we have to copy or compile the required plugin into the install location for it to function. If this engine were an official release, I doubt people using it would want to do that manually, so I need to automate the process somehow.
I have thought of a few solutions to this problem, but want to run them past you guys, who might have a better idea.
Option 1: Create a release of the repo and attach, as an asset, a zipped version of it that includes the compiled plugin.
The github_release descriptor could be changed to let the user specify which asset to download and install instead of the standard zipped source code. This method creates a lot of duplication, since there could be multiple builds of the asset against different libraries and operating systems. It would require the user to know exactly which asset to pull, and they would be in charge of updating their configs accordingly. It also means more work for the developer (unless we can use CI jobs to build all variants and attach them).
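To reduce the burden on the user here, the descriptor (or a wrapper around it) could derive the asset name from the running platform instead of having it hard-coded in the config. A minimal sketch, assuming a hypothetical asset-naming convention of `<base>-<version>-<os>-<arch>.zip` (nothing in Toolkit mandates this; the function name and scheme are made up for illustration):

```python
import platform
import sys


def asset_name_for_platform(version, base="engine-plugin"):
    """Build a release-asset filename for the current platform.

    Assumes the hypothetical naming scheme
    <base>-<version>-<os>-<arch>.zip, so that a CI job which
    uploads assets and the descriptor that downloads them agree
    on the name without user intervention.
    """
    os_tag = {"win32": "windows", "darwin": "macos"}.get(sys.platform, "linux")
    arch = platform.machine().lower()  # e.g. "x86_64", "arm64"
    return "{}-{}-{}-{}.zip".format(base, version, os_tag, arch)
```

The CI jobs would just need to follow the same convention when attaching builds to the release.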
Option 2: Attach assets that are just the compiled plugins, and have the descriptor know where to put them.
This would require a fair bit of standardisation to work: there would have to be a specific place to put built code, and that would have to be reflected in each engine’s startup.py so paths are set up correctly. The user would still need to specify which asset to download, unless they download all assets and we resolve the paths from the launch context (OS, application version, etc.), but we would have to cover a lot of builds for that to be feasible and flexible enough to work seamlessly on all platforms and configurations.
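The "resolve from launch context" part could look something like the sketch below: a fixed folder layout that startup.py walks to find the right build. The layout (`plugins/<os>/<app_version>/`) is an assumption I’m making for illustration, not an existing Toolkit convention:

```python
import os


def plugin_install_path(bundle_root, os_name, app_version):
    """Map launch-context info to a standardised plugin folder.

    Assumes a made-up layout of <bundle_root>/plugins/<os>/<app_version>/
    which an engine's startup.py would then append to the relevant
    plugin-path environment variable for the DCC being launched.
    """
    return os.path.join(bundle_root, "plugins", os_name, app_version)
```

If all assets are downloaded, startup.py only ever adds the one folder matching the current OS and application version to the path, so the other builds sit there unused but harmless.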
Option 3: Trigger a build on the user’s side as a post_download operation.
Again, this would need some standardisation to fully automate. I reckon a build.sh or setup.py file at the root of the bundle could get triggered if it exists. The issues I can see: caching apps will take a lot longer, since it will include build time, and we would have to figure out how to build against the user’s system consistently (finding lib and include locations, picking up the correct software versions, etc.), which might not be possible where studios have their own environment setup, like rez. Docker is one solution, but it would have to be a prerequisite of the engine for this to work.
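The trigger itself is simple enough to sketch. Assuming the convention above (a build.sh or setup.py at the bundle root; `run_post_download_build` is a hypothetical hook name, not an existing Toolkit API), the post-download step could be:

```python
import os
import subprocess
import sys


def run_post_download_build(bundle_root):
    """If the freshly cached bundle ships a build script, run it.

    Purely illustrative: looks for build.sh (preferred) or setup.py
    at the bundle root and invokes it. Real code would need logging,
    error handling, and a way to inject the studio's toolchain
    environment (compilers, include paths, rez packages, ...).
    """
    if os.path.exists(os.path.join(bundle_root, "build.sh")):
        subprocess.check_call(["bash", "build.sh"], cwd=bundle_root)
    elif os.path.exists(os.path.join(bundle_root, "setup.py")):
        subprocess.check_call([sys.executable, "setup.py", "build"],
                              cwd=bundle_root)
    # else: nothing to build; it's a plain Python bundle
```

The hard part is everything hidden behind "the studio's toolchain environment"; the hook is trivial, making the build reproducible on arbitrary machines is not.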
Option 4: Keep the plugin separate from the engine.
The plugin may only rarely need recompiling, so maybe it can live within a studio’s environment, just like all other applications and plugins. It may, however, depend on the engine code to function, so the user would have to make sure the engine and plugin are kept in sync somehow whenever either changes.
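The "kept in sync" check could at least be automated so the engine fails loudly rather than mysteriously. A naive sketch, assuming semver-ish version strings and a made-up rule that matching major.minor means compatible (this convention and the function name are assumptions, not Toolkit behaviour):

```python
def check_compatibility(engine_version, plugin_built_against):
    """Naive compatibility check for a separately shipped plugin.

    Treats a matching major.minor as compatible, so the engine can
    refuse to load a plugin built against a different engine series
    and tell the user to rebuild, instead of crashing at runtime.
    """
    def major_minor(version):
        parts = version.lstrip("v").split(".")
        return tuple(parts[:2])

    return major_minor(engine_version) == major_minor(plugin_built_against)
```

The plugin would need to record the engine version it was built against (e.g. baked in at compile time) for this to work.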
Option 5: Screw it; studios just have to build it on their own.
One issue with all of the above is that it might completely undermine the system-agnostic approach that Shotgun sets out to achieve. If a localized config is being run, for example, plugins for different operating systems can’t coexist within the same version, and we can’t run multiple versions of the same engine concurrently without creating multiple launchers, which defeats the object a little.
Does anybody have any ideas on how this might work? With applications like Katana and USD only allowing C++ plugins, this is going to be an issue going forward.
Thanks in advance