To successfully run a job in aiWARE using GraphQL, you create the job, poll the job's status, and get the job's results.
Creating a job requires the creation of one or more tasks. Each task specifies an engine, a payload, and input and output folders. When multiple tasks are involved, you must also specify an order.
Before you begin
Steps
- You create a job by using the createJob mutation. The following example shows how to set up and run a transcription job on an .mp4 file (passed in the payload attribute). Every task has an engineId and one or more ioFolders. The routes array defines how data is routed between tasks.
The example also shows two ways to specify the Temporal Data Object (TDO) the job operates on: you can dynamically create a TDO by passing a target block to your createJob mutation, or you can use an existing TDO by passing its ID in the targetId field.
mutation {
  createJob(input: {
    # Supply target-block info if you want to create a TDO on the fly.
    target: {
      startDateTime: 1574311000
      stopDateTime: 1574315000
    }
    # targetId: "890661001" # Supply the TDO ID here if you have one (in which case, do not use the target block above).
    # Tasks and their ioFolders
    tasks: [
      {
        # Webstream Adapter (WSA)
        engineId: "9e611ad7-2d3b-48f6-a51b-0a1ba40fe255"
        payload: {
          url: "https://s3.amazonaws.com/src-veritone-tests/stage/20190505/0_40_Eric%20Knox%20BWC%20Video_40secs.mp4"
        }
        ioFolders: [
          {
            referenceId: "wsaOutputFolder"
            mode: stream
            type: output
          }
        ]
      }
      {
        # Playback engine to store playback segments
        engineId: "352556c7-de07-4d55-b33f-74b1cf237f25"
        ioFolders: [
          {
            referenceId: "playbackInputFolder"
            mode: stream
            type: input
          }
        ]
        executionPreferences: {
          parentCompleteBeforeStarting: true
        }
      }
      {
        # Chunk engine to split the stream into audio chunks
        engineId: "8bdb0e3b-ff28-4f6e-a3ba-887bd06e6440"
        payload: {
          ffmpegTemplate: "audio"
          customFFMPEGProperties: {
            chunkSizeInSeconds: "20"
          }
        }
        ioFolders: [
          {
            referenceId: "chunkAudioInputFolder"
            mode: stream
            type: input
          }
          {
            referenceId: "chunkAudioOutputFolder"
            mode: chunk
            type: output
          }
        ]
        executionPreferences: {
          parentCompleteBeforeStarting: true
        }
      }
      {
        # Speechmatics engine
        engineId: "c0e55cde-340b-44d7-bb42-2e0d65e98255"
        ioFolders: [
          {
            referenceId: "transcriptionInputFolder"
            mode: chunk
            type: input
          }
          {
            referenceId: "transcriptionOutputFolder"
            mode: chunk
            type: output
          }
        ]
      }
      {
        # Output Writer (to collate VTN-Standard output)
        engineId: "8eccf9cc-6b6d-4d7d-8cb3-7ebf4950c5f3"
        ioFolders: [
          {
            referenceId: "owInputFolderFromTranscription"
            mode: chunk
            type: input
          }
        ]
      }
    ]
    # Routes: a route connects a parent output folder to a child input folder.
    routes: [
      { # WSA --> Playback
        parentIoFolderReferenceId: "wsaOutputFolder"
        childIoFolderReferenceId: "playbackInputFolder"
        options: {}
      }
      { # WSA --> chunkAudio
        parentIoFolderReferenceId: "wsaOutputFolder"
        childIoFolderReferenceId: "chunkAudioInputFolder"
        options: {}
      }
      { # chunkAudio --> Transcription
        parentIoFolderReferenceId: "chunkAudioOutputFolder"
        childIoFolderReferenceId: "transcriptionInputFolder"
        options: {}
      }
      { # Transcription --> Output Writer
        parentIoFolderReferenceId: "transcriptionOutputFolder"
        childIoFolderReferenceId: "owInputFolderFromTranscription"
        options: {}
      }
    ]
  }) {
    id
    targetId
    clusterId
    tasks {
      records {
        id
        engineId
        payload
        taskPayload
        status
        output
        ioFolders {
          referenceId
          type
          mode
        }
      }
    }
    routes {
      parentIoFolderReferenceId
      childIoFolderReferenceId
    }
  }
}
The job id returned can be used to poll the job for status.
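Over plain HTTP, a GraphQL mutation or query is just a POST of a JSON body with a `query` field. Here is a minimal Python sketch of that transport; the endpoint URL and the bearer token are assumptions you must replace with your environment's values.

```python
import json
import urllib.request

# Assumed values -- substitute your own endpoint and API token.
GRAPHQL_ENDPOINT = "https://api.veritone.com/v3/graphql"
API_TOKEN = "YOUR_API_TOKEN"


def build_request_body(query: str) -> bytes:
    """Encode a GraphQL document as the JSON body the endpoint expects."""
    return json.dumps({"query": query}).encode("utf-8")


def run_query(query: str) -> dict:
    """POST a GraphQL query or mutation and return the decoded response."""
    req = urllib.request.Request(
        GRAPHQL_ENDPOINT,
        data=build_request_body(query),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_TOKEN}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))


# Example (requires valid credentials):
# result = run_query(CREATE_JOB_MUTATION)
# job_id = result["data"]["createJob"]["id"]
```

The same `run_query` helper works for every query and mutation on this page, since they differ only in the GraphQL document string.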
- You can review the status of the job you created by polling it for status. The possible statuses of a job are:
- Pending - The job definition has been validated, the job is created, and it is waiting to be picked up by an AI Processing node.
- Queued - The job has been selected by an AI Processing node and is awaiting available resources for execution.
- Running - The job is currently in progress and has at least one task being executed.
- Complete - The job is complete and the job results are now available.
- Canceled - The job has been canceled and will not be completed. See Canceling jobs for more information.
- Failed - An error occurred and the job has failed. Errors occur at the individual task level, but you can retry the failed job once the error has been addressed.
To poll a job status, run the following job query by passing the job's id.
query jobStatus {
  job(id: "19093817_Mdsq3lksrB") {
    status
    createdDateTime
    targetId
    tasks {
      records {
        log {
          uri
          text
        }
        status
        taskOutput
        createdDateTime
        modifiedDateTime
        id
        engine {
          id
          name
          category {
            name
          }
        }
      }
    }
  }
}
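The polling itself can be written transport-agnostically. This sketch accepts any callable that returns the job's current status string (for example, a wrapper that runs the jobStatus query above and extracts data["job"]["status"]); the terminal-status set and case-insensitive comparison are assumptions based on the status list above.

```python
import time


def poll_job(fetch_status, job_id, interval_s=10.0, max_attempts=60,
             sleep=time.sleep):
    """Call fetch_status(job_id) repeatedly until a terminal status appears.

    fetch_status: callable returning the job's current status string.
    Assumed terminal statuses: complete, failed, canceled.
    """
    terminal = {"complete", "failed", "canceled"}
    for _ in range(max_attempts):
        status = fetch_status(job_id)
        if status.lower() in terminal:
            return status
        sleep(interval_s)  # wait between polls to avoid hammering the API
    raise TimeoutError(
        f"job {job_id} did not reach a terminal status after {max_attempts} polls"
    )
```

Injecting `sleep` keeps the helper testable and lets you swap in exponential backoff if your jobs run long.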
- Once the job is completed, you can query for the results of the job based on engineId or jobId.
query {
  engineResults(jobId: "YOUR_JOB_ID") {
    records {
      tdoId
      engineId
      startOffsetMs
      stopOffsetMs
      jsondata
      assetId
    }
  }
}
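If the records' jsondata follows vtn-standard (an assumption about the transcription engine's output: a "series" list of time slices, each carrying candidate "words" with "word" and "confidence" fields), the transcript text can be flattened out of the response like this:

```python
def best_transcript(engine_results: dict) -> str:
    """Join the highest-confidence word from each time slice.

    Assumes the engineResults response shape shown above and a
    vtn-standard transcript layout inside jsondata.
    """
    words = []
    for record in engine_results["data"]["engineResults"]["records"]:
        series = (record.get("jsondata") or {}).get("series", [])
        for time_slice in series:
            candidates = time_slice.get("words") or []
            if candidates:
                # Words in a slice are alternatives; keep the most confident.
                best = max(candidates, key=lambda w: w.get("confidence", 0))
                words.append(best["word"])
    return " ".join(words)
```

This is a sketch of the common transcript shape, not a parser for every engine's jsondata; check your engine's output schema before relying on it.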
- (Optional) You can export the results in specific output formats (such as ttml or srt for captioning) using the createExportRequest mutation.
mutation createExportRequest {
  createExportRequest(input: {
    includeMedia: false
    tdoData: [{ tdoId: "431011721" }]
    outputConfigurations: [{
      engineId: "71ab1ba9-e0b8-4215-b4f9-0fc1a1d2b44d"
      formats: [{
        extension: "vtt"
        options: { newLineOnPunctuation: false }
      }]
    }]
  }) {
    id
    status
    organizationId
    createdDateTime
    modifiedDateTime
    requestorId
    assetUri
  }
}
Below is a sample response:
{
  "data": {
    "createExportRequest": {
      "id": "a2efc2bb-e09f-40bf-a2bc-1d25297ac2f7",
      "status": "incomplete",
      "organizationId": "17532",
      "createdDateTime": "2019-04-25T20:45:20.784Z",
      "modifiedDateTime": "2019-04-25T20:45:20.784Z",
      "requestorId": "960b3fa8-1812-4303-b58d-4f0d227f2afc",
      "assetUri": null
    }
  }
}
An export request may take time to process. You can poll the request status using its id until the status is complete.
query exportRequest {
  exportRequest(id: "a2efc2bb-e09f-40bf-a2bc-1d25297ac2f7") {
    status
    assetUri
    requestorId
  }
}
Below is a sample response where the export is incomplete:
{
  "data": {
    "exportRequest": {
      "status": "incomplete",
      "assetUri": null,
      "requestorId": "960b3fa8-1812-4303-b58d-4f0d227f2afc"
    }
  }
}
When the status changes to complete, you can retrieve the results at the URL returned in the assetUri field.
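Working from the sample responses above, a small helper can decide when the export is ready to download; the status strings ("incomplete", "complete") are the ones shown in those responses.

```python
def export_asset_uri(response: dict):
    """Return assetUri from an exportRequest response once the export
    is complete, or None while it is still processing."""
    request = response["data"]["exportRequest"]
    return request["assetUri"] if request["status"] == "complete" else None
```

You would call this on each poll of the exportRequest query and stop once it returns a URL.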