Getting the pipeline configuration name from within pipeline_configuration_init.py?

How can I get the pipeline config name from within the pipeline_configuration_init hook?

Similar to what self.parent.configuration_name would output in tank_init (which doesn’t work from within this hook).

EDIT:
Also, I can’t seem to find how to get the actual project root. I can get the storage roots, but I need the project path.

# Get project path (note: these are internal, underscore-prefixed
# attributes, so they may break in a future core update)
import os

dp = self.parent._storage_roots.default_path  # path object for the default storage root
project_name = self.parent._project_name
project_path = os.path.join(dp.current_os, project_name)

As for the config name… what does self.log.info(dir(self.parent)) expose? There might be something useful in there, e.g. a descriptor. Post the result here if you can.


I don’t know if pipeline_configuration_init.py is too early in the chain of events for this to work, but maybe this is a possibility?:

import tank
pc_path = tank.pipelineconfig_utils.get_path_to_current_core()
pc_meta = tank.pipelineconfig_utils.get_metadata(pc_path)
pc_name = pc_meta['pc_name']

If not, apologies for the noise.


This seems to be what I need!

Still toying with the pipeline_config name but I may get somewhere with your suggestion! :slight_smile:


I think I tried that and it was too early.

Also, I think those commands are now deprecated, so I kinda want to stay future-proof :slight_smile:


Hi,

The safest way to get the tank name for your pipeline configuration inside this hook is to access self.parent.get_project_disk_name().

With that being said, I’m not fond of this hook, to be honest. The pipeline configuration object is not meant for public access, as documented here. It’s unfortunate that this hook was introduced all these years ago.

If you look at our developer docs for tk-core, there is no documentation for this class. That is by design, not an oversight. If something is not part of the documentation, it is not part of our public-facing API and is prone to break at some point, since we offer no compatibility guarantee for the internal API. We’ve been working very hard to never break our public-facing API (we may have introduced bugs over time, obviously!), so if you base your code on those APIs, you are pretty much guaranteed your pipeline will keep running after an update.

May I ask what you are trying to achieve @Ricardo_Musch? Maybe there is a different way you could achieve the desired result using the official APIs we have put in place. For example, the Sgtk object has the property project_path, which gives the path to the primary storage suffixed with the tank_name field. This hook is called once the sgtk object has been created by the tank_init hook, after the templates have been read from disk but before your environment files and includes have been parsed. Could that work?
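To make the project_path description above concrete, here is a minimal stdlib-only illustration (not sgtk itself) of what the property resolves to: the primary storage root joined with the project's tank_name field. The function name and example paths are purely illustrative.

```python
import os

def project_path(primary_storage_root, tank_name):
    # Mirrors what the docs describe for Sgtk.project_path: the primary
    # storage root suffixed with the project's tank_name field.
    return os.path.join(primary_storage_root, tank_name)

print(project_path("/mnt/projects", "my_project"))
```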


Yeah, I read those comments and I’m trying to avoid using those functions!

I’m trying to achieve the override system we are discussing in the other topic :wink:

That’s my problem.
If I add an include to templates.yml that references an environment variable that needs to access the project, this will only work if that variable is set before the templates are read.

As far as I know the pipeline_configuration_init hook is the only hook that runs before the templates are read. If there is another place I should know about I’d love to know!
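The idea described above could be sketched roughly like this: set the variable inside the pipeline_configuration_init core hook so that it exists by the time templates.yml (and any env-var-based includes in it) is parsed. The variable name and path here are hypothetical placeholders, not a real convention, and this is a sketch rather than the actual hook signature.

```python
import os

def execute():
    # Hypothetical body of pipeline_configuration_init.py. Because this
    # hook runs before templates.yml is read, an env var set here is
    # visible to $STUDIO_PROJECT_OVERRIDES-style references in includes.
    os.environ["STUDIO_PROJECT_OVERRIDES"] = "/path/to/project/overrides"
```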

I think I have suggested before adding a hook that runs just after selecting the project inside SG Desktop, so we can set environment variables for the whole project instead of only in hooks like before_app_launch.
Sometimes we need (or it’s just easiest to have) certain variables inside things like the standalone publisher app, which doesn’t execute the before_app_launch hook.
We can do it in core hooks, yes, but maybe for readability a before_config_load hook would be a handy addition to the core hooks?

But yes, basically I was also building my override system and I need to do that as early on in the load process as possible to be able to override the reading of templates and env files.


There is a hook that gets executed before a project is launched in Desktop, called the launch python hook, but unfortunately we’re not passing project information into that hook directly. You could unpickle the file we’re passing in to get to it, but that’s also hacky. :frowning: I think the right feature here would be to pass the project entity into the hook. That would actually be helpful…
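For completeness, the "unpickle the file" workaround mentioned above amounts to nothing more than reading a pickled payload from disk, something like the stdlib sketch below. The helper name is hypothetical, and (as noted) the structure of what Desktop serializes is undocumented and subject to change, which is exactly why this is hacky.

```python
import pickle

def read_launch_data(pickle_path):
    # Load whatever was serialized into the file handed to the launch
    # python hook. The payload's structure is an internal detail, so
    # anything you read out of it may break on the next update.
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```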


Right, that sounds like a good plan!.. and one assumes a fairly trivial feature to implement? :)…


Hehe, it should be, but as with everything, there’s a cost to any feature, even trivial ones. :smile: We’ll keep it in mind however!


Please don’t break the current pipeline_configuration_init :pray:

:stuck_out_tongue:


Since the launch_python hook doesn’t get the project, a potentially reasonable moment to set an env var might be when the tk-desktop engine is launched in a project context, via the engine_init hook.

That is assuming you wouldn’t need any customization done through environment variables for that engine itself. This way you could set the variable then and let Maya/Nuke/Houdini inherit it.
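The engine_init idea above could look roughly like this sketch: only act when the engine being initialized is tk-desktop and a project context is present, and set a process-wide env var that child DCC processes will inherit. The function name, parameters, and variable name are all illustrative, not the real core hook API.

```python
import os

def on_engine_init(engine_name, project_id):
    # Hypothetical engine_init-style callback: when tk-desktop starts in
    # a project context, export a project-scoped env var so DCCs launched
    # from Desktop (Maya/Nuke/Houdini) inherit it.
    if engine_name == "tk-desktop" and project_id is not None:
        os.environ["STUDIO_PROJECT_ID"] = str(project_id)
```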

How does that sound?

No, env vars used in settings YAMLs may need the project path, so they have to be set before engine init; otherwise the config will error, or will not form correctly because the project env vars aren’t set yet.


Yeah, I agree. The current way I’m overriding YML files means I’m literally setting paths to point them either at the installed config or at an external folder where the overrides live, depending on whether an override exists.

Would love to have an official override method though.

Digging this one up as it came up in a search related to wanting to identify the pipeline configuration that had been initialised.

As far as overriding yml files for a project goes, this PR for optional includes works nicely: if an include path points generically at a project path via an env var, the file is included when it exists and skipped when it doesn’t.
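The optional-include behaviour described above boils down to simple logic like the following stdlib sketch: expand any env vars in the include path, and only treat it as an include if the resulting file actually exists. The function name is hypothetical; this is an illustration of the idea, not the PR's implementation.

```python
import os

def resolve_optional_include(include_path):
    # Expand $VARS in the include path; return the resolved path only if
    # the file exists on disk, otherwise None (i.e. the include is skipped).
    expanded = os.path.expandvars(include_path)
    return expanded if os.path.exists(expanded) else None
```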