| Capability | Supported |
| ---------- | --------- |
| API        | yes       |
| Search     | yes       |
| UI         | yes       |
Face detection engines detect human faces in media assets and locate them within the visual frame using a bounding polygon. Unlike a face recognition engine, a face detection engine only determines whether a face is present; it does not try to identify the face or match it to other data.
## Engine output
Face detection engine output should be stored as objects in the time-based `series` array in `.aion`. Each object should be of type `face` and should include the bounding polygon and, optionally, a `label`. The label can be used to group together multiple faces that likely belong to the same individual (e.g. "Person 1").
Here is an example of the simplest type of face detection output:
```json
{
  "schemaId": "https://docs.veritone.com/schemas/vtn-standard/master.json",
  "validationContracts": [
    "face"
  ],
  "series": [
    {
      "startTimeMs": 9800,
      "stopTimeMs": 17200,
      "object": {
        "type": "face",
        "label": "face 1",
        "confidence": 0.95,
        "boundingPoly": [
          { "x": 0.1, "y": 0.1 },
          { "x": 0.1, "y": 0.5 },
          { "x": 0.5, "y": 0.5 },
          { "x": 0.5, "y": 0.1 }
        ]
      }
    }
  ]
}
```
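An engine typically produces detections in pixel coordinates, while the example above uses coordinates normalized to the 0–1 range. The sketch below shows one way an engine could assemble this output in Python; the helper name, the `(left, top, right, bottom)` pixel-box format, and the frame dimensions are illustrative assumptions, not part of the standard.

```python
import json

def face_series_entry(start_ms, stop_ms, box_px, frame_w, frame_h,
                      confidence, label=None):
    """Build one time-based series entry with a normalized boundingPoly.

    box_px is a hypothetical (left, top, right, bottom) tuple in pixels;
    the vtn-standard example above expects vertices normalized to 0-1.
    """
    left, top, right, bottom = box_px
    poly = [
        {"x": left / frame_w, "y": top / frame_h},
        {"x": left / frame_w, "y": bottom / frame_h},
        {"x": right / frame_w, "y": bottom / frame_h},
        {"x": right / frame_w, "y": top / frame_h},
    ]
    obj = {"type": "face", "confidence": confidence, "boundingPoly": poly}
    if label is not None:
        # Optional grouping label, e.g. "Person 1"
        obj["label"] = label
    return {"startTimeMs": start_ms, "stopTimeMs": stop_ms, "object": obj}

# A detection at (192, 108)-(960, 540) in a 1920x1080 frame normalizes to
# the 0.1/0.5 polygon shown in the example output above.
output = {
    "schemaId": "https://docs.veritone.com/schemas/vtn-standard/master.json",
    "validationContracts": ["face"],
    "series": [
        face_series_entry(9800, 17200, (192, 108, 960, 540),
                          1920, 1080, 0.95, label="face 1"),
    ],
}
print(json.dumps(output, indent=2))
```

Serializing with `json.dumps` yields a document with the same shape as the example output shown earlier.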