QA - Unit test and integration test

Hi everyone,

I started developing some unit tests and integration tests to cover all our Shotgun custom apps, and I’m integrating that with a Jenkins service. I would like to start mayapy (the Maya standalone process) with a Shotgun context. Is it possible? How can I do that?

I’m thinking of registering mayapy as a new Shotgun Software entity, indicating tk-maya as the engine, and then launching it with the tank command. But, considering how Jenkins works, this may not be a good idea: I will not be running Shotgun Desktop, and I will not be logging in to Shotgun with a user-based login but with script token-based authentication.



Interesting question! I’ll check with the Toolkit folks here to see if they’ve ever done anything like this.


Ok @brandon.foster , thank you!



At Shotgun, we tend to consider our tests more like render jobs than launching a DCC. In a render farm, a Python script is launched from a Python interpreter to do some work.

When it comes to testing, your test scripts are executed through the mayapy interpreter. Assuming you are using pytest, you could take care of bootstrapping Toolkit via the API and pytest would run your different tests. Further down on that page you’ll see how you can start the engine, and then your tests can use the engine.change_context method to switch to the right context for a specific test.

As for user vs script based authentication, you can refer to those credentials via environment variables and have Jenkins set them when launching your automated tests. This is how some of our tests work on Azure Pipelines.
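As a small sketch of that pattern, the variable names below are assumptions; use whatever names your Jenkins jobs actually export:

```python
import os


def get_script_credentials():
    """Read the script-user credentials that CI injects as environment variables.

    SHOTGUN_HOST, SHOTGUN_SCRIPT_NAME and SHOTGUN_SCRIPT_KEY are hypothetical
    names -- match them to whatever your Jenkins configuration sets.
    """
    return (
        os.environ["SHOTGUN_HOST"],
        os.environ["SHOTGUN_SCRIPT_NAME"],
        os.environ["SHOTGUN_SCRIPT_KEY"],
    )


# Later, inside your test bootstrap (requires sgtk, so shown as a comment):
#     host, script_name, script_key = get_script_credentials()
#     user = ShotgunAuthenticator().create_script_user(script_name, script_key, host)
```

Keeping the credentials out of the repository this way also means the same test code runs unchanged on a developer machine, as long as the variables are exported locally.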

And no, these credentials aren’t available from forks or pull requests. :smile:




Hi @jfboismenu

I’ll try it the way you said.

I started my tests with unittest because there is no need to install any external dependencies, and I confess that I have never used the pytest lib, but I can switch to pytest. I had already seen that Shotgun uses pytest in its Azure Pipelines. Are there many differences between pytest and unittest? Is it mandatory to use pytest to test Shotgun Toolkit scripts?

Thank you!


Hello again!

No, there is no need to use pytest if you’re more comfortable with unittest2. If you look closely at our tests, we started writing them with unittest2 many years ago, and the vast, vast, VAST majority of our tests are written using the unittest2 framework. Only the more recent tests are written using pytest fixtures.

As far as running the tests goes, however, we’ve indeed moved away from our custom tk-core/tests/ launcher and switched to pytest plus a custom pytest plugin distributed with tk-toolchain, which is approximately a billion times better than the homegrown solution we had.

When dealing with unittest2-based tests for a specific engine, it can make sense to have something like this (not actually tested, so you’ll definitely have to review the code!):

import unittest2

import sgtk
from sgtk import bootstrap
from sgtk.authentication import ShotgunAuthenticator


class BaseMayaTest(unittest2.TestCase):
    def setUp(self):
        # Bootstrap is slow, so we'll only do it once and ensure the engine
        # is started only once. Each test should call `self.engine.change_context`.
        if sgtk.platform.current_engine() is None:
            self._bootstrap()
        self.engine = sgtk.platform.current_engine()

    def _bootstrap(self):
        auth = ShotgunAuthenticator()
        # create_script_user takes (api_script, api_key, host);
        # SCRIPT_NAME, SCRIPT_KEY and HOST are placeholders for your values.
        user = auth.create_script_user(SCRIPT_NAME, SCRIPT_KEY, HOST)
        manager = bootstrap.ToolkitManager(user)
        manager.bootstrap_engine("tk-maya", some_context)

Our test suite is actually a bit more complicated than that and uses mocked configurations created on the fly, but the whole thing is not properly documented so you’ll have to follow the breadcrumb trail.

Here’s how the Max engine tests set up automation for the publisher hooks.

Note that we call start_engine in setUp and destroyEngine in tearDown. This code is built on top of our TankTestBase testing class. It suffers from a lack of documentation, but hopefully it is well commented enough for you to follow what is going on.
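The lifecycle pattern described there can be sketched with plain unittest and a stand-in engine object; in the real Toolkit tests the engine would come from sgtk.platform.start_engine(...) and be torn down with engine.destroy(), so FakeEngine here is purely illustrative:

```python
import unittest


class FakeEngine(object):
    """Illustrative stand-in for a Toolkit engine (sgtk is not imported here)."""

    def __init__(self):
        self.destroyed = False

    def destroy(self):
        self.destroyed = True


class EngineLifecycleTest(unittest.TestCase):
    """Start the engine in setUp, destroy it in tearDown."""

    def setUp(self):
        # In a real Toolkit test: sgtk.platform.start_engine(...)
        self.engine = FakeEngine()

    def tearDown(self):
        # In a real Toolkit test: self.engine.destroy()
        self.engine.destroy()

    def test_engine_is_available(self):
        self.assertFalse(self.engine.destroyed)
```

The point of the pattern is symmetry: every test gets a live engine, and no engine instance leaks between tests.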




I hope you guys don’t mind me hijacking this conversation, but I am also struggling to build unit tests. However, my unit tests are a bit different. We have created an event-driven pipeline that uses events to drive Action Menu Items via the large payload option and Webhooks. As a result, we expect the Event to persist for unit testing and deployment reasons. Regretfully, Events in SG are somewhat ephemeral: either the Event gets buried so deep under a legacy of Events that it takes too long to query, or it gets purged based on the preferences in SG (cloud-hosted SG instance). It would be great if we could tag certain Events to serve unit tests, so they are NEVER purged. I am tempted to make an Action Menu Item tool that runs on the Event Log table and replicates the data into another table that matches the Event Log, so the entries are never lost. However, this solution also poses its own set of problems in that Action Menu Items cannot be explicitly registered on the Event Log table. Thoughts?


Hi Romey,

EventLogEntry records should indeed not be relied upon for testing, as they are not everlasting. You’re also correct that you can’t select EventLogEntry as a target for AMIs - I’m not sure about the history behind this.

If the value of the EventLogEntry is a known quantity, to the point that you could save its data into another entity type (by copying it with your AMI), could you not get away with mocking things?

If you want to go the route of duplicating EventLogEntry records into records of another custom type used solely for testing, do you foresee doing this so often that you’d need an AMI? Could a command-line tool where you specify the EventLogEntry id fit the bill?
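A command-line tool along those lines could be sketched like this. Everything here is an assumption to review against your site: CustomEntity05 stands for whichever custom entity type is free, `sg_meta` is a hypothetical text field you would add to hold the payload, and `sg` is an authenticated shotgun_api3.Shotgun connection:

```python
def copy_event(sg, event_id, target_entity="CustomEntity05"):
    """Read one EventLogEntry and persist its payload into a custom entity.

    `sg` is an authenticated shotgun_api3.Shotgun connection. The target
    entity type and its `sg_meta` field are placeholders for this sketch.
    """
    event = sg.find_one(
        "EventLogEntry",
        [["id", "is", event_id]],
        ["event_type", "meta", "entity", "user", "created_at"],
    )
    if event is None:
        raise ValueError("EventLogEntry %d not found (already purged?)" % event_id)
    return sg.create(
        target_entity,
        {
            # `code` is the custom entity's name field.
            "code": "%s #%d" % (event["event_type"], event_id),
            # Stash the raw payload so tests can replay it later.
            "sg_meta": repr(event["meta"]),
        },
    )
```

Wrapped in a small argparse front end that takes the EventLogEntry id, this would let you archive a known-good event on demand, without needing an AMI on the Event Log table at all.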