GitLab CI/CD and the Shotgun event daemon

Hi all,

I use GitLab CI/CD and pytest 2.7 to test my updates to the Shotgun event daemon. It behaves strangely: the same tests will sometimes fail and sometimes pass. I don't change a thing between the two runs; I simply restart the test step.
I'm not too worried, as the tests always pass locally (in fact, I don't push my code to the repository if they don't).
So I have a few questions:

  • has any of you experienced this?
  • my feeling is that it is related to the amount of work the Shotgun event daemon has to deal with when the pipeline runs; how does Shotgun handle non-event-generating scripts? i.e. do these scripts generate events anyway that are filtered out at some point?
  • is it related to GitLab runners (I know this is not the correct place to ask :wink: )?

Best Regards,
Francois

3 Likes

Hi François,

There’s not much I can say on the subject of GitLab CI/CD or pytest 2.7. I can, however, chime in on your second bullet point.

When a script entity is marked as not generating events, Shotgun won’t generate events for any of that script’s actions in the first place. The code that would have generated the events is bypassed as early as possible in the process.

I’m not sure how helpful this is to the greater problem you’re facing, but hopefully it’s one less thing to check: scripts that don’t generate events don’t impact, in any way, a listening Shotgun event daemon.

4 Likes

Hi @François_Touvet,

Could you perhaps elaborate on the tests that are behaving erratically? Erratic behaviour suggests there is some dynamic aspect (perhaps a network request, perhaps asserting on a response, etc.); it would be difficult to identify the issue without knowing the particular test case.

I use Gitlab Runners (docker executors) extensively and have never come across these kinds of issues.

I have, however, encountered similar issues running pytest on some Flask applications.

Our design pattern, though, is to lint and run tests in pre-commit git hooks, never on the server; this lets us pre-emptively handle such erratic behaviour should it arise.

Hope this helps you debug.

1 Like

Hi Patrick,

thanks for clearing this up; it’s good to know the problem isn’t related to a misunderstanding of event handling.

Best Regards,
Francois

1 Like

Hi Sreenathan_Nair, and thanks for your input!

Yes, I know this wasn’t a very detailed request.
The tests that fail are random tests from our task-dependency updates. We don’t use the Shotgun-embedded task dependencies in our productions yet, so we have to handle them through our event handler.
Basically we use something like:

  • a task is approved
  • check some other tasks’ status/some field values
  • update the next tasks’ status

In our tests we set the first task’s status to the trigger value (updating our Shotgun db), run the code, then check that the next tasks have been updated.
So my bet here is that once in a while the Shotgun db is not updated quickly enough for the tests, and they fail.
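If the db-lag theory is right, one workaround (short of mocking) is to poll the final assertion for a short grace period instead of checking it once. A minimal sketch, assuming a shotgun_api3-style handle that exposes `find_one`; the names `sg`, `sg_status_list`, and the status codes are illustrative, not your actual schema:

```python
import time

def wait_for_status(sg, task_id, expected, timeout=10.0, interval=0.5):
    """Poll Shotgun until the task reaches the expected status, or time out.

    `sg` is assumed to expose a shotgun_api3-style
    find_one(entity_type, filters, fields) call; all field names here
    are illustrative.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        task = sg.find_one("Task", [["id", "is", task_id]], ["sg_status_list"])
        if task and task["sg_status_list"] == expected:
            return True
        time.sleep(interval)
    return False
```

A test would then assert `wait_for_status(sg, next_task_id, "rdy")` rather than reading the status once, which tolerates a slow server-side update without hiding a genuine logic failure.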

Best Regards,
Francois

1 Like

Hi @François_Touvet,

I agree with you that it’s probably the Shotgun db update. For these situations, rather than making a real network request, the usual testing pattern is to mock the request and response data. That way you can be 100% certain that a test passes or fails only because of the business logic. My suggestion would be to refactor the tests to do this. (Unless you’re doing integration testing, but I don’t believe that applies here?)
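As a minimal sketch of that pattern with `unittest.mock`: the handler and all names below (`advance_next_task`, the status codes, the field name) are hypothetical stand-ins for the real dependency logic, and the Shotgun connection is replaced by a mock with a canned response, so no network is involved:

```python
from unittest import mock

def advance_next_task(sg, task_id, next_task_id):
    """Hypothetical handler logic: if the first task is approved,
    set the next task's status to 'rdy'."""
    task = sg.find_one("Task", [["id", "is", task_id]], ["sg_status_list"])
    if task["sg_status_list"] == "apr":
        sg.update("Task", next_task_id, {"sg_status_list": "rdy"})

def test_advance_next_task():
    # The Shotgun handle is a MagicMock: find_one returns a canned dict,
    # and we assert on how update() was called.
    sg = mock.MagicMock()
    sg.find_one.return_value = {"id": 1, "sg_status_list": "apr"}
    advance_next_task(sg, 1, 2)
    sg.update.assert_called_once_with("Task", 2, {"sg_status_list": "rdy"})
```

Because the mock answers instantly and deterministically, this kind of test can never flake on server-side update timing.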

As for the db update itself, due to network factors or a range of other issues, it might be difficult to pinpoint the exact cause (though raising the connection timeout is a good place to start).

Good luck to you, hope you are able to fix the issue soon.

FYI, if you’re interested in the mocking pattern: https://realpython.com/testing-third-party-apis-with-mocks/

1 Like

Hi again,

I hadn’t thought about mocking my test results; that could indeed be a way to deal with it. Thanks for the support!

Best Regards,
Francois

2 Likes