Launching an application via launchapp from the command line

Hi,

So far I’ve been unsuccessful in finding a way to launch an application from the command line in the same way it gets initialised through Shotgun Desktop/tk-multi-launchapp, setting up an environment via the before_app_launch hook, etc. My configuration has all my apps set up as Software entities, which works well in Shotgun Desktop and in the web browser.

My goal here is to be able to send a job to the farm that initialises a Nuke environment and starts Nuke with a script that generates a quicktime and uploads it to Shotgun as a Version (hopefully using tk-multi-reviewsubmission?). I want it to initialise a Nuke environment on the farm, since I’d like to be able to use this tool from any app (e.g. to be able to publish renders from Houdini) - in that case I won’t already be in a Nuke environment that I could copy to the farm environment.

I found the SoftwareLauncher API, but it doesn’t show any software versions when I run scan_software():

(sgtk imported, with an authenticated user)
>>> tk = sgtk.sgtk_from_path("/path_to_project")
>>> context = tk.context_from_path("/path_to_project")
>>> launcher = sgtk.platform.create_engine_launcher(tk, context, "tk-nuke")
>>> software_versions = launcher.scan_software()
>>> software_versions
[]

Could this be because all my software is in Shotgun in the form of Software entities?

3 Likes

Hi Matt

There are a few ways of doing this:

  1. Usually, on the farm (in my experience), the farm manager is responsible for launching the software. In this situation, you would need to provide the job with a startup script so that once the software has been launched the script can bootstrap the engine and run your process. There is an example I posted here:
    Distributed Config and Render Farm Setup
    You could set environment variables on your job that the bootstrap script can look for and use, such as the context you want to bootstrap into (there's a rough sketch of this kind of startup script after this list).
  2. If you are going to manage how to start the software yourself, then you have a couple of options. If you have an installed config, you can use the tank command to launch the software. Something like:
    /path/to/my/my_project_config/tank.bat Task 1234 nuke_11.3v2
    That will use the tk-multi-launchapp to start the software.
    The tricky bit here is that you need to provide a path to the config for that project and pass it the Task you want it to operate on. You would also need to provide a startup script for Nuke so that it runs your processing script.
  3. Instead of using the tank command, you could use the bootstrap script again, but this time you would bootstrap the tk-shell engine in Python before launching the software. Once the tk-shell engine starts, you would run engine.execute_command("nuke_11.3v2", []) to have tk-multi-launchapp start the software. Again, you would need to provide a startup script to Nuke to run your processing once it has launched.
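
For reference, here is a very rough sketch of the kind of startup script option 1 implies, i.e. something Nuke itself runs once the farm manager has launched it. The env var names, the script-user authentication and the plugin id are just examples, not anything standard:

# Hypothetical startup script run by Nuke on the farm (e.g. picked up via NUKE_PATH)
import os
import sgtk

# Authenticate as a script user; these env var names are just examples
sa = sgtk.authentication.ShotgunAuthenticator()
user = sa.create_script_user(
    api_script=os.environ["SGTK_SCRIPT_NAME"],
    api_key=os.environ["SGTK_SCRIPT_KEY"],
    host=os.environ["SGTK_SITE_URL"],
)

mgr = sgtk.bootstrap.ToolkitManager(sg_user=user)
mgr.plugin_id = "basic.nuke"

# Context serialized into the job's environment at submission time
ctx = sgtk.context.deserialize(os.environ["TANK_CONTEXT"])
engine = mgr.bootstrap_engine("tk-nuke", entity=ctx.task or ctx.project)

# ...your quicktime/publish processing would run here...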

Best
Phil

4 Likes

Thanks Philip.

We’re already using before_app_launch.py in SGTK to do all the pre-launch environment setup, per application: e.g. setting license details, setting HOUDINI_PATH or NUKE_PATH, plugin paths, etc. I’m hoping that I can re-use that same infrastructure to get a working environment ready on the farm.
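
For anyone finding this later, a stripped-down before_app_launch hook of that shape looks roughly like this (the license value and paths below are placeholders, not our real values):

# hooks/before_app_launch.py - trimmed sketch of per-application setup
import os
import sgtk

HookBaseClass = sgtk.get_hook_baseclass()

class BeforeAppLaunch(HookBaseClass):
    def execute(self, app_path, app_args, version, engine_name, **kwargs):
        # Runs just before tk-multi-launchapp starts the application
        if engine_name == "tk-nuke":
            os.environ["foundry_LICENSE"] = "4101@license-server"  # placeholder
            os.environ["NUKE_PATH"] = "/studio/tools/nuke"         # placeholder
        elif engine_name == "tk-houdini":
            os.environ["HOUDINI_PATH"] = os.pathsep.join(
                ["/studio/tools/houdini", "&"]  # "&" keeps Houdini's defaults
            )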

Currently our farm submission process copies the existing environment to the farm job so it is available when the job starts, and all it needs is the path to an executable. But this only works when I want to run the same type of environment I’m already in (e.g. submitting a Nuke job from a local Nuke). In my current situation I want to be able to, for example, submit a Nuke job from a local Houdini. In that case I want to spin up a Nuke environment with all our customised stuff, the same ‘official’ way as if it was launched via Shotgun Desktop.

One option for this could be to submit a generic script job to the farm that takes care of launching Nuke correctly with all the SGTK environment available.

For 1. I already have a bootstrap setup working properly, but it gets its environment presented to it already, so I’d still need a way to initialise the environment first.

For 3, I saw a post of yours previously mentioning something like this, but I can’t see an execute_command() method in the Engine API docs or in the tk-core Python files?

But 2 seems interesting: I just tried it and it seemed to work fine. I didn’t even need a Task; it seemed happy to start up in a Project context. I could pass along an environment variable with the path to the tank executable for each pipeline configuration on disk, so that aspect should be OK.
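
As a rough sketch, the farm-side wrapper for option 2 could be as simple as this (SGTK_TANK_PATH is just an example variable name I’d set at submission time, pointing at the project config’s tank command):

import os
import subprocess

# Hypothetical env var set by the submitter with the path to the project's tank command
tank_cmd = os.environ["SGTK_TANK_PATH"]

# Running the project config's tank command without an entity starts the app in the
# Project context; add e.g. "Task", "1234" before the command name to target a Task
subprocess.check_call([tank_cmd, "nuke_11.3v2"])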

One thing though: from what I’ve read while hunting around online, it seems the tank command is a bit of a legacy thing, and perhaps deprecated? It seems to be associated with some older workflows. Is that the case, and is it safe to rely on it moving forward?

thanks!

1 Like

Yeah, that is true. Unfortunately, this is an area that is inconsistent across engines. The tk-shell engine has an execute_command method which isn’t documented. Some other engines have a public method for executing commands and some don’t; it’s a bit hit and miss. But it should be fine for tk-shell.

That’s also true: you could pass a project id and then switch the context post-bootstrap to match that of the file you’re processing. It’s up to you whether you launch in the right context off the bat or switch post-launch. I also guessed at Task because most work is done in an asset step or shot step environment.
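
If you go the post-launch route, the context switch is only a couple of lines once the engine is up (the file path here is obviously just a placeholder):

import sgtk

# Assumes an engine (tk-shell, tk-nuke, ...) has already been bootstrapped
engine = sgtk.platform.current_engine()

# Work out the context of the file being processed and switch the engine to it
new_ctx = engine.sgtk.context_from_path("/projects/spaceship/shots/sh010/comp/work/sh010_comp_v012.nk")
sgtk.platform.change_context(new_ctx)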

> One thing though: from what I’ve read while hunting around online, it seems the tank command is a bit of a legacy thing, and perhaps deprecated? It seems to be associated with some older workflows. Is that the case, and is it safe to rely on it moving forward?

It is not currently deprecated, though it is certainly older tech. The tank command is also not really available with distributed (not installed) configs. Deprecation is something I could potentially foresee happening, but we have no current plans.

I would say probably option 3 is your best bet then.

2 Likes

Hi,

So I’ve been looking into this technique again, to bootstrap a Nuke process on the farm from scratch without needing an existing environment (as long as it has access to the sgtk module, that is). I can happily report that I just got the method using the tk-shell engine.execute_command() working, but with one exception.

I have a Software entity named ‘nuke_batch’ which just calls /path/to/nuke -t. I’m using a script that’s roughly something like this:

import os, sys
import sgtk

# Authenticate however suits your setup, e.g. via a ShotgunAuthenticator
user = sgtk.authentication.ShotgunAuthenticator().get_user()

mgr = sgtk.bootstrap.ToolkitManager(sg_user=user)
mgr.plugin_id = "basic.shell"
# Context serialized into the job's environment at submission time
ctx = sgtk.context.deserialize(os.environ["TANK_CONTEXT"])
e = mgr.bootstrap_engine("tk-shell", entity=ctx.project)
args = " ".join(sys.argv[1:])
e.execute_command("nuke_batch", [args])

I can then run the script with whatever args I like, which get passed through to Nuke. Nuke seems to successfully run in the bootstrap session and initialises its environment using the standard sequence in before_app_launch.py, which is great!
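
For example, assuming the script above is saved as launch_nuke_batch.py (the name is just illustrative), the farm job can simply run:

python launch_nuke_batch.py /path/to/make_quicktime.py /path/to/shot_comp.nk

…and those arguments end up passed through to the nuke_batch command (i.e. straight on to nuke -t).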

The only problem is that it doesn’t work on the farm, because by default the process launched by execute_command() gets executed in the background - it doesn’t wait for completion. This means the farm job just gets terminated early when the above script terminates, not later on when the Nuke process terminates.

After hunting things down for a while I realised that the reason why is very simple - in the default app_launch.py hook for tk-multi-launchapp (which is what starts the application), the line:

cmd = "%s %s &" % (app_path, app_args)

…is run via os.system(), which launches the process in the background because of the trailing &. If I override this hook in my config and remove the &, everything works fine.

This is good for me now, but for the convenience of others, is that & really necessary in the default config? At least when launching apps out of the tk-desktop GUI it doesn’t seem to make a difference whether it’s there or not. If there are no side-effects it would make life much simpler to not launch apps in the background (or perhaps at least to be able to choose whether to or not with a hook setting).

3 Likes

Hi Matt – Glad to hear you’ve been making good progress here! As for the Launcher running things in the background by default, the simple answer is that it simplifies our code and means we don’t have to track the life of the subprocess – we don’t have to worry about closing Desktop (or the terminal, if you’re launching from the command line) shutting down your DCC.

This seems to work fine in the vast majority of cases, but the logic, as you’ve found, is in a hook (namely the app_launch hook), and that’s by design. We know this is a place that people tend to want to customize, so you can easily take over the hook and remove the ampersand if that suits your needs.

1 Like

As it turns out, removing that ampersand seemed to be causing some problems with the way other applications were launching from Shotgun Desktop.

My solution was to create a new ‘tk-shell-launch’ environment by duplicating and stripping down the tk-shell.yml file, with a new tk-multi-launchapp configuration that includes the custom app_launch.py hook only in this environment. Then when I bootstrap, I do it into tk-shell-launch, which restricts the non-background launching to this case only, rather than affecting the way other applications are normally launched.

It’s still a little bit ugly, and I still really wish it was cleaner and easier out of the box to run software headless (e.g. on a farm) in a Shotgun environment, but at least this is a solution that’s working well for now.

thanks!

3 Likes

Perhaps a simpler approach would be to have a custom app_launch.py that only removes the & character from the launch command when it detects a certain environment variable.
Then on your farm you could set an env var that the hook looks for. That would save you splitting out your environments, although there is nothing wrong with that solution either, other than it being more complex.
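
For illustration, that conditional hook might look something like this (a sketch based on the structure of the default app_launch.py; ON_FARM is a hypothetical variable your submitter would set on the job, and the Windows/macOS branches are omitted for brevity):

# hooks/app_launch.py - sketch: only block on the process when running on the farm
import os
import sys
import sgtk

HookBaseClass = sgtk.get_hook_baseclass()

class AppLaunch(HookBaseClass):
    def execute(self, app_path, app_args, version, engine_name, **kwargs):
        # ON_FARM is a hypothetical env var set by the farm submitter
        on_farm = bool(os.environ.get("ON_FARM"))

        if sys.platform.startswith("linux"):
            # Drop the trailing "&" on the farm so os.system() blocks until the
            # application exits; keep it for normal interactive launches
            cmd = "%s %s" % (app_path, app_args) if on_farm else "%s %s &" % (app_path, app_args)
        else:
            # Windows/macOS handling omitted for brevity - see the default hook
            cmd = "%s %s" % (app_path, app_args)

        exit_code = os.system(cmd)
        return {"command": cmd, "return_code": exit_code}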

1 Like