On-screen Portrayal Evaluation Framework Unveiled
Computer vision is revolutionising the way we measure on-screen representation, focusing on three key aspects: presence, prominence, and portrayal. This framework is relevant to media regulators, broadcasters, researchers, and film and TV audiences alike, and it can both bring new value and prompt new questions.
Presence, Prominence, and Portrayal
By analysing these dimensions, computer vision can provide quantitative insights into the diversity of on-screen characters. Presence refers to detecting whether a person or object appears on screen, while prominence assesses how visually or spatially dominant they are (e.g., size, position, or focus). Portrayal involves analysing facial expressions, poses, or interaction context to infer affect or role.
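To make these dimensions concrete, the sketch below computes simple presence and prominence scores from pre-extracted face detections. It assumes bounding boxes have already been produced by a face detector; the `Detection` structure and the prominence formula are illustrative, not part of the framework itself.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    frame: int   # frame index
    x: float     # top-left corner, normalised to [0, 1]
    y: float
    w: float     # box width, normalised
    h: float     # box height, normalised

def presence(detections: list[Detection], total_frames: int) -> float:
    """Fraction of frames in which at least one face is detected."""
    frames_with_face = {d.frame for d in detections}
    return len(frames_with_face) / total_frames

def prominence(d: Detection) -> float:
    """Toy score: larger, more centrally placed faces score higher."""
    area = d.w * d.h
    cx, cy = d.x + d.w / 2, d.y + d.h / 2
    centrality = 1 - (abs(cx - 0.5) + abs(cy - 0.5))  # 1.0 at frame centre
    return area * centrality

dets = [Detection(0, 0.4, 0.3, 0.2, 0.3), Detection(2, 0.05, 0.05, 0.08, 0.1)]
print(presence(dets, total_frames=4))           # 0.5
print([round(prominence(d), 3) for d in dets])  # the central face scores higher
```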
Ethical and Logistical Considerations
Addressing ethical concerns is essential for responsible implementation. Models can reinforce stereotypes if their training data is unbalanced, affecting the accuracy and fairness of portrayal analysis. Analysing individuals on screen raises privacy issues, especially without consent. The impact of portrayal on viewer perceptions must be critically assessed to avoid perpetuating harmful stereotypes or misrepresentations.
Logistical challenges include ensuring data quality and diversity, optimising algorithms for real-time analysis, employing interpretable models, and supervising computer vision models with human-annotated data.
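As a small illustration of that last point, model outputs can be routinely checked against a human-annotated sample; the clip identifiers and labels below are invented for the sketch.

```python
from collections import Counter

# Human-annotated ground truth vs. model predictions (toy data).
human = {"clip_01": "woman", "clip_02": "man", "clip_03": "woman"}
model = {"clip_01": "woman", "clip_02": "woman", "clip_03": "woman"}

agreement = sum(model[k] == human[k] for k in human) / len(human)
confusions = Counter((human[k], model[k]) for k in human if model[k] != human[k])
print(f"agreement: {agreement:.0%}")  # 67%
print("confusions:", confusions)      # which labels get mixed up
```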
Interdisciplinary Collaboration
Interdisciplinary efforts are crucial for thoughtfully deploying computational methods to generate richer and more regular data about representation. Computer vision research, particularly on algorithmic fairness, has grown significantly in recent years.
Choosing the Right Method
The final part of the framework helps determine the most appropriate method for identifying character occurrences, weighing factors such as speed, accuracy, verification checks, guidance on defining categories, and rights and access to the required data. The choice of face detection model can also vary with the type of programme being analysed.
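One hypothetical decision helper along these lines is sketched below; the thresholds and detector descriptions are placeholders rather than recommendations.

```python
def choose_detector(avg_face_size_px: float, needs_realtime: bool) -> str:
    """Pick a detector class based on programme characteristics (illustrative)."""
    if needs_realtime:
        return "lightweight single-shot detector (fast, lower recall)"
    if avg_face_size_px < 40:  # crowd scenes with small faces
        return "multi-scale detector on high-resolution frames (slower, better recall)"
    return "general-purpose detector with default settings"

print(choose_detector(avg_face_size_px=25, needs_realtime=False))
```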
Examples of Metrics and Potential Biases
Examples of presence metrics include the make-up of the cast by gender or ethnicity. Prominence metrics could include duration of screen time, the likelihood of appearing as a solo face on screen, and how central or influential a character is relative to others. Portrayal metrics might include the emotions expressed by faces or in a character's dialogue, and the likelihood of appearing next to particular objects such as weapons or drinks.
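Two of the prominence metrics above, screen-time duration and the likelihood of appearing as a solo face, can be computed directly from per-frame character tracks. The frame-to-characters mapping below is an assumed format with toy data.

```python
from collections import defaultdict

# frame index -> character identities visible in that frame (toy data)
frames = {0: ["A"], 1: ["A", "B"], 2: ["B"], 3: ["A"], 4: []}

screen_time = defaultdict(int)  # frames in which a character appears
solo = defaultdict(int)         # frames in which they appear alone

for chars in frames.values():
    for c in chars:
        screen_time[c] += 1
        if len(chars) == 1:
            solo[c] += 1

for c in sorted(screen_time):
    share = screen_time[c] / len(frames)
    solo_rate = solo[c] / screen_time[c]
    print(f"{c}: screen-time share {share:.0%}, solo-face rate {solo_rate:.0%}")
```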
Potential biases and ethical concerns include the perpetuation of bias, particularly in commercial face detection models, and the need for careful deliberation over fairness and transparency criteria before any models are used.
Limitations and Future Directions
Research is needed into when face detections are missed and why, and into the factors that cause different faces to be mistaken for the same person. It is also important to acknowledge whether intersectionality can be captured at all, with evidence identifying the need for more insight into the intersectional dynamics of underrepresented groups.
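One way to investigate missed detections is to stratify recall over a human-annotated ground truth, for example by face size; the record format below is assumed.

```python
# Ground-truth faces and whether the model detected them (toy data).
ground_truth = [
    {"size_px": 120, "detected": True},
    {"size_px": 30,  "detected": False},
    {"size_px": 25,  "detected": False},
    {"size_px": 90,  "detected": True},
]

for label, in_group in [("small (<50px)", lambda s: s < 50),
                        ("large (>=50px)", lambda s: s >= 50)]:
    group = [g for g in ground_truth if in_group(g["size_px"])]
    recall = sum(g["detected"] for g in group) / len(group)
    print(f"{label}: recall {recall:.0%} over {len(group)} faces")
```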
Quantitative analysis can complement qualitative discussions about representation, but it's essential to remember that these methods are not without limitations. The second part of the framework raises considerations around feasibility (can we) and ethics (should we) when tracking a group using a purely visual approach.
Applicability to Equality Act Protected Characteristics
This framework is particularly useful for measuring groups covered by the Equality Act's protected characteristics, such as gender, gender identity, age, ethnicity, sexual orientation, and disability. However, conventional diversity-form categories do not map well onto the categories available in labelled data, especially for demographic traits like ethnicity and non-visible disabilities.
Data Compilation Methods and Standards
Data compilation methods for representation analysis vary and may capture different aspects of diversity. Technical recommendations and data standards specific to representation metrics can be developed in future. Programmes with many recurring frontal faces allow face 'tracks' (character appearances) to be clustered more easily, making computer vision more feasible to apply. Programmes with greater variance in viewpoint, more crowds (and therefore smaller faces), and darker lighting may yield less reliable face clusters.
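A minimal sketch of that clustering step follows, assuming a face-embedding model has already produced one vector per track; DBSCAN with a cosine metric is one common choice, and the `eps` value here is illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Toy embeddings: two underlying "characters", three noisy tracks each.
base = rng.normal(size=(2, 128))
tracks = np.vstack([base[i] + 0.05 * rng.normal(size=(3, 128)) for i in (0, 1)])

labels = DBSCAN(eps=0.3, min_samples=2, metric="cosine").fit_predict(tracks)
print(labels)  # tracks sharing a label are treated as the same character;
               # -1 marks tracks left unclustered (e.g. poor lighting, crowds)
```

In practice, programmes with cleaner frontal faces yield tighter embedding clusters, which is exactly the feasibility gradient described above.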
In conclusion, computer vision offers a powerful tool for measuring on-screen representation. By addressing ethical concerns and logistical challenges, and by fostering interdisciplinary collaboration, we can ensure responsible and effective implementation, providing valuable insights into diversity and representation in media.
- Computer vision can offer quantitative insights into the diversity of on-screen characters, focusing on presence, prominence, and portrayal.
- Addressing ethical concerns is vital, as unbalanced training data may reinforce stereotypes and affect portrayal accuracy and fairness.
- Analysing individuals on screen raises privacy issues, and the impact of portrayal on viewer perceptions should be critically assessed.
- Interdisciplinary efforts are crucial for deploying computational methods responsibly, with research on algorithmic fairness gaining significance.
- The choice of face detection model can vary, considering factors like speed, accuracy, verification checks, and the type of programme being analysed.
- Potential biases and ethical concerns include the perpetuation of bias in commercial face detection models and the need for careful consideration of fairness and transparency criteria.
- This framework is particularly useful for measuring groups falling under Equality Act protected characteristics, but data compilation methods and standards specific to representation metrics will need to be developed.