Launching an application via launchapp from the command line


I’ve been trying, so far unsuccessfully, to find a way to launch an application from the command line in the same way it would get initialised through Shotgun Desktop/tk-multi-launchapp: setting up an environment via the before_app_launch hook, etc. My configuration has all my apps set up as Software entities, which works well in Shotgun Desktop and in the web browser.

My goal here is to be able to send a job to the farm that initialises a Nuke environment, and starts Nuke with a script that generates a quicktime and uploads it to Shotgun as a Version (hopefully using tk-multi-reviewsubmission?). I want it to initialise a Nuke environment on the farm, since I’d like to be able to use this tool from any app (e.g. to be able to publish renders from Houdini) — in that case I won’t already be in a Nuke environment that I could copy to the farm environment.

I found the SoftwareLauncher API, but it doesn’t show any software versions when I run scan_software():

(sgtk imported, with an authenticated user)
>>> tk = sgtk.sgtk_from_path("/path_to_project")
>>> context = tk.context_from_path("/path_to_project")
>>> launcher = sgtk.platform.create_engine_launcher(tk, context, "tk-nuke")
>>> software_versions = launcher.scan_software()
>>> software_versions

Could this be because all my software is defined in Shotgun in the form of Software entities?
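For context, here’s a sketch of what I’m hoping to end up with, assuming the SoftwareLauncher’s prepare_launch() can build the launch environment from a known executable path, skipping scan_software() entirely (the paths below are placeholders, not real ones from my setup):

```python
# Sketch: build a launch environment for a farm job from a known Nuke
# executable, rather than relying on scan_software(). Paths are placeholders.

def build_nuke_launch(project_path, nuke_exe):
    """Return (path, args, environment) for a farm job to use."""
    import sgtk  # deferred so the sketch stays importable without sgtk

    tk = sgtk.sgtk_from_path(project_path)
    context = tk.context_from_path(project_path)
    launcher = sgtk.platform.create_engine_launcher(tk, context, "tk-nuke")

    # prepare_launch() returns a LaunchInformation object carrying the
    # executable path, args and environment needed to start the engine.
    info = launcher.prepare_launch(nuke_exe, "")
    return info.path, info.args, info.environment
```

The farm job could then export that environment dict before starting the returned executable.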


Hi Matt

There are a few ways of doing this:

  1. Usually, on the farm (in my experience), the farm manager is responsible for launching the software. In this situation, you would need to provide the job with a startup script so that once the software had been launched the script could bootstrap the engine, and run your process. There is an example I posted here:
    Distributed Config and Render Farm Setup
    You would maybe set environment variables on your job, which the bootstrap script can look for and use, such as the context you want to bootstrap into.
  2. If you are going to manage starting the software yourself, then you have a couple of options. If you have an installed config, you can use the tank command to launch the software. Something like:
    /path/to/my/my_project_config/tank.bat Task 1234 nuke_11.3v2
    That will use the tk-multi-launchapp to start the software.
    The tricky bit here is that you need to provide a path to that project’s config and pass it the Task you want it to operate on. You would also need to provide a startup script for Nuke so that it runs your processing script.
  3. Instead of using the tank command, you could use the bootstrap script again, but this time you would bootstrap the tk-shell engine in Python before launching the software. Once the tk-shell engine starts, you would run engine.execute_command("nuke_11.3v2", []) to use tk-multi-launchapp to start the software. Again, you would need to provide Nuke with a startup script to run your processing once it has launched.
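A minimal sketch of option 3, assuming the ToolkitManager bootstrap API and the (undocumented) execute_command method on the tk-shell engine; the plugin id, project id and command name below are illustrative, not taken from any particular config:

```python
# Sketch of option 3: bootstrap the tk-shell engine, then hand off to
# tk-multi-launchapp via execute_command(). All ids and names are examples.

def launch_via_shell_engine(sg_user, project_id, command="nuke_11.3v2"):
    """Bootstrap tk-shell for the given project and run a launch command."""
    import sgtk  # deferred so this sketch stays importable without sgtk

    mgr = sgtk.bootstrap.ToolkitManager(sg_user)
    mgr.plugin_id = "basic.shell"  # assumption: adjust to your config's plugin ids
    engine = mgr.bootstrap_engine(
        "tk-shell", entity={"type": "Project", "id": project_id}
    )
    # The second argument is the list of args forwarded to the registered command.
    engine.execute_command(command, [])
```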



Thanks Philip.

We’re already using SGTK to do all the pre-launch environment setup, per application — e.g. setting license details, setting HOUDINI_PATH or NUKE_PATH, plugin paths, etc. I’m hoping that I can re-use that same infrastructure to get a working environment ready on the farm.

Currently our farm submission process copies the existing environment to the farm job so it is available when the job starts, and all it needs is the path to an executable. But this only works when I want to run the same type of environment I’m already in (e.g. submitting a Nuke job from a local Nuke). In my current situation I want to be able to, for example, submit a Nuke job from a local Houdini. In that case I want to be able to spin up a Nuke environment with all our customised stuff, the same ‘official’ way as if it had been launched via Shotgun Desktop.

One option for this could be to submit a generic script job to the farm that takes care of launching Nuke correctly with all the SGTK environment available.

For 1., I already have a bootstrap setup working properly, but it gets its environment handed to it, so I’d still need a way to initialise the environment first.

For 3., I saw a post of yours previously mentioning something like this, but I can’t see an execute_command() method in the Engine API docs or in the tk-core Python files?

But 2. seems interesting; I just tried it and it seemed to work fine. I didn’t even need a Task, it was happy to start up in a Project context. I could pass an environment variable with the path to the tank executable per pipeline configuration on disk, so that aspect should be ok.
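To make option 2 concrete, a farm-job wrapper could look roughly like this sketch; the variable names, default paths and the launch command name are all assumptions to be adjusted per pipeline configuration:

```shell
#!/bin/sh
# Sketch of a farm-job wrapper for option 2 (the tank command).
# TANK_EXECUTABLE, the entity type/id and the command name are examples;
# the submitter would set them as environment variables on the job.
: "${TANK_EXECUTABLE:=/path/to/my/my_project_config/tank}"
: "${SG_ENTITY_TYPE:=Project}"
: "${SG_ENTITY_ID:=123}"
LAUNCH="$TANK_EXECUTABLE $SG_ENTITY_TYPE $SG_ENTITY_ID nuke_11.3v2"
# Dry run: echo the command; on the farm, replace echo with exec.
echo "$LAUNCH"
```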

One thing, though: from what I’ve read while hunting around online, it seemed the tank command was a bit of a legacy thing, and perhaps deprecated? It seems to be associated with some older workflows. Is that the case, and is it safe to rely on it moving forward?



Yeah, that is true. Unfortunately, that is an area that is inconsistent across engines. The tk-shell engine has an execute_command method, which isn’t documented. Some engines have a public method for executing commands and some don’t; it’s a bit hit and miss. But it should be fine for tk-shell.

That’s also true; you could pass a project id and then switch the context post-bootstrap to match that of the file you’re processing. It’s up to you whether you launch in the context off the bat or switch post-launch. I guessed at Task since most work is done in an asset step or shot step environment.

One thing, though: from what I’ve read while hunting around online, it seemed the tank command was a bit of a legacy thing, and perhaps deprecated? It seems to be associated with some older workflows. Is that the case, and is it safe to rely on it moving forward?

It is not currently deprecated, though it is certainly older tech. The tank command is also not really available with distributed (not installed) configs. Deprecation is something I could potentially foresee happening, but we have no current plans.

I would say probably option 3 is your best bet then.
