Distributed Config and Render Farm Setup

I have set up a project with a distributed config and included the Shotgun python API in that config so that remote artists shouldn’t have to download anything. Ideally they just log in to SG, everything downloads to the bundle cache folder, and then they are up and running.

I am now trying to get Shotgun working on our farm (Nuke) and was wondering how bootstrapping works with a distributed config on a farm. I’m basically trying to treat my render nodes as freelance artists, so that they automatically download the bundle cache locally, launch in the Shotgun environment, and then execute the render/publish. Has anyone done this? I can’t seem to figure it out. I’m also interested in all the caveats that go along with this approach.



Welcome to the forums and thanks for posting here!

So I’m actually working on a doc/guide on this very thing right now, though unfortunately, it’s not yet in a place where I can properly share it with you. However, I do have an earlier unreleased draft of a different doc I was working on which I can share.

I just thought I’d point out that tk-core (the sgtk API) comes bundled with a copy of the Shotgun python API, and you can access it through the sgtk API. For example, you can get an authenticated Shotgun instance via the engine:

import sgtk

# get the engine we are currently running in
current_engine = sgtk.platform.current_engine()

# get hold of the shotgun api instance used by the engine
# (or we could have created a new one)
sg = current_engine.shotgun

Here is an extract taken from one of my much earlier drafts on bootstrapping:

Distributed configs are handled a bit differently from centralized configs as you don’t necessarily have a config stored on disk for your project yet. The approach here is to use a standalone copy of the sgtk API to bootstrap into an engine.

The bootstrap process will take care of ensuring everything is cached locally and swap out the previously imported sgtk package for the one belonging to the project you’re bootstrapping.

(Note: for this to work you need to have downloaded a standalone copy of the sgtk API.)

The bootstrap process will start an engine, and you will want to pick the engine appropriate for the environment you’re running this script in. If you’re running the script in a python interpreter outside of software like Maya or Nuke, then the tk-shell engine serves this purpose nicely. Here is an example of how to do that:

(Note this also works for centralized configs.)

import sys

# import a standalone sgtk API; you don't need to insert the path if you
# pip installed the API
sys.path.insert(0, "/path/to/tk-core/python")
import sgtk

sa = sgtk.authentication.ShotgunAuthenticator()

# get pre-cached user credentials
# user = sa.get_user()

# or authenticate using script credentials
user = sa.create_script_user(api_script="MYSCRIPTNAME",
                             api_key="MYSCRIPTKEY")

project = {"type": "Project", "id": 176}

mgr = sgtk.bootstrap.ToolkitManager(sg_user=user)
mgr.plugin_id = "basic."
# you don't need to specify a config; the bootstrap process will
# automatically pick one if you don't
mgr.pipeline_configuration = "dev"

engine = mgr.bootstrap_engine("tk-shell", entity=project)

# As we imported sgtk prior to bootstrapping, we should import it again
# now, since the bootstrap process swapped the standalone sgtk out for
# the project's sgtk.
import sgtk

print("engine", engine)
print("context", engine.context)
print("sgtk instance", engine.sgtk)
print("Shotgun API instance", engine.shotgun)

If this is running on the farm I would recommend you use script credentials rather than user credentials, as obviously there won’t be a user present to log in.

I would recommend against bootstrapping in each render frame/task, as it will (a) slow each render task down, and (b) potentially DDoS your site if you have many render nodes running it simultaneously.

Instead, it’s better to have a pre or post job that can perform any Toolkit processing so that it is limited to once per farm job.
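The once-per-job pattern can be sketched like this. Nothing below is sgtk-specific, and the file name and fields are made up: the point is that only the pre-job ever bootstraps and talks to the site, while each render task just reads a cached manifest from shared or local storage.

```python
import json
import os
import tempfile

def write_job_manifest(manifest_path, data):
    # Pre-job: runs once per farm job. This is where you would bootstrap
    # sgtk, resolve the context, and cache anything the tasks need.
    with open(manifest_path, "w") as f:
        json.dump(data, f)

def read_job_manifest(manifest_path):
    # Per-frame task: read the cached data instead of bootstrapping
    # and hitting the Shotgun site again.
    with open(manifest_path) as f:
        return json.load(f)

manifest = os.path.join(tempfile.gettempdir(), "sg_job_manifest.json")
write_job_manifest(manifest, {"project_id": 176, "config": "dev"})
data = read_job_manifest(manifest)
```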

Let me know if you have any further questions!



Looking forward to seeing that guide, it would be great to compare and contrast best practices vs what various studios have come up with to solve this issue.

On the script user topic: we want to replicate the user environment on the farm, and our farm is set up to run processes as the submitting user. So we’re passing along Shotgun user info via a SHOTGUN_DESKTOP_CURRENT_USER environment variable and then authenticating with it at bootstrap time, with:

# Authenticate using the supplied user.
serialized_user = os.environ['SHOTGUN_DESKTOP_CURRENT_USER']
user = sgtk.authentication.deserialize_user(serialized_user)

# Start up a Toolkit Manager with our authenticated user
mgr = sgtk.bootstrap.ToolkitManager(sg_user=user)

I’m not sure if this has unexpected negative consequences, but it’s been working fine for us so far.
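As a generic illustration of that handoff pattern (sgtk’s serialize_user/deserialize_user play this role for real user objects; the variable name and payload below are made up):

```python
import json
import os

# Submission side: serialize what the render node will need and stash it
# in the job's environment (stand-in for sgtk.authentication.serialize_user).
payload = {"login": "jane.doe", "host": "https://mysite.shotgunstudio.com"}
os.environ["SG_USER_PAYLOAD_DEMO"] = json.dumps(payload)

# Render node side: rebuild the object from the environment
# (stand-in for sgtk.authentication.deserialize_user).
restored = json.loads(os.environ["SG_USER_PAYLOAD_DEMO"])
```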


I’m not 100% certain, but I imagine it would fail if the job was left in the queue for too long, as the session token would expire and it wouldn’t be able to authenticate without the user re-entering their username and password.
Another approach you could take would be to use script authentication and then do something similar to what’s covered here:

Essentially the get_current_login.py hook could return back the SG user name that you could provide in an env var.
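The core of that hook logic could look something like this (pure-Python sketch; SHOTGUN_FARM_USER is a hypothetical variable name, and in the real hook you would return this value from the hook’s method rather than a free function — check the default get_current_login.py shipped with tk-core for the exact signature):

```python
import os
import sys

def get_current_login():
    # Prefer a login passed in by the farm via an env var (hypothetical
    # name), falling back to the OS-level login.
    farm_user = os.environ.get("SHOTGUN_FARM_USER")
    if farm_user:
        return farm_user
    if sys.platform == "win32":
        return os.environ.get("USERNAME")
    return os.environ.get("USER")
```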


In regards to the python api coming with tk-core is there another place to call the code below from? Or a better way?

import shotgun_api3
self.sg = shotgun_api3.Shotgun("sgwebsite", login="user", password="pw")

I was using this in before_app_launch.py to eventually query the website for my software fields.


If you’re in the before_app_launch.py hook, then you can access an already authenticated Shotgun object via

self.sg = self.parent.shotgun

self.parent will return the App, and the App has a shotgun property.

However, if you want to create a new Shotgun API instance, perhaps because you want different permissions from the currently authenticated SG instance, then you could do the following:

from tank_vendor import shotgun_api3
self.sg = shotgun_api3.Shotgun("sgwebsite", script_name="test", api_key="asdfhqwery2349hciauy43")

This imports the Shotgun API bundled with tk-core.


I am finally getting a chance to test this stuff out, and I am struggling a bit. I have used your code from above to create a tk-nuke engine on a render node. I am calling it from the Nuke command line, feeding it the python script from above with a few tweaks for the project and configuration, and tacking on some Nuke commands to open the script and render a frame. Below is my example Nuke command.

/usr/local/Nuke11.1v2/Nuke11.1 -t /path/shotgun_bootstrap.py /path/to/nuke/script.nk

The output when running this is what I expect. I can see that the engine is starting and pulling the config down to the render node. However, it dies when trying to write the frame with this error:

ShotgunWrite2: 'WriteTank': unknown command. This is most likely from a corrupt .nk file, or from a missing or unlicensed plug-in. Can't render from that Node.

I am guessing this is because I am missing the part on how to launch a DCC properly (with environment) once the bootstrap process has been completed. Can you point me in the right direction for what I am missing?


Awesome, sounds like you’re on the right path, and yes, I think you’re right. The code I provided above will start the engine in a project context. Your config is most likely set up so that the Shotgun write node isn’t present in the project context, so you would need to switch to an appropriate context for the file.

After bootstrap I wonder if you could do something like:

new_context = engine.sgtk.context_from_path(current_script_path)

Or, if you can pass the context up front to the job, even better. You could either pass a serialized context, or just a Task entity id and then use engine.sgtk.context_from_entity("Task", task_id)
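If you go the Task-id route, the farm-side plumbing is trivial. A minimal sketch (the --task-id flag is a made-up convention) of pulling the id out of the render command’s arguments, which you would then feed to engine.sgtk.context_from_entity after bootstrapping:

```python
import sys

def parse_task_id(argv):
    # Hypothetical convention: the submitter appends "--task-id <id>" to
    # the render command so the bootstrap script can rebuild the context.
    idx = argv.index("--task-id")
    return int(argv[idx + 1])

task_id = parse_task_id(["shotgun_bootstrap.py", "script.nk", "--task-id", "123"])
```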
