Projects:2021s1-13291 Automated Content Moderation

Revision as of 14:51, 13 April 2021 by A1754148

Abstract

Open-source intelligence refers to information or data legally derived from publicly available sources. Within the context of social media, this includes many kinds of open-source data, such as images, videos and text. Using such media, however, carries risk: content uploaded by unknown individuals frequently exposes the viewer to material that is not safe for work. Platforms must therefore employ algorithms to detect explicit media and remove posts that violate the law or the standards and expectations of the community. For the investigator, automatic classification has the opposite requirement: rather than removing extreme content, we wish to isolate it for further study and possible action on behalf of law enforcement. In this project, an analysis of current machine learning and signal processing methods will be conducted to create an effective system capable of extracting, filtering, flagging and storing content from social media sites such as Snapchat, TikTok, Facebook and Reddit.

Introduction

Project team

Project students

  • Linyu Xu
  • Sanjana Tanuku
  • Siyu Wang

Supervisors

  • Matthew Sorell
  • Richard Matthews


Objectives

The objective of the project is to create an effective system capable of extracting, filtering, flagging and storing unsafe content from social media sites such as Snapchat, TikTok, Facebook and Reddit.
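The extract, filter, flag and store workflow described above can be sketched as a minimal pipeline. Everything here is an illustrative assumption: the class names, the keyword-based scorer and the threshold are placeholders standing in for the machine learning and signal processing classifiers the project will actually evaluate.

```python
from dataclasses import dataclass

# Hypothetical sketch of the extract -> filter -> flag -> store pipeline.
# The keyword scorer below is a stand-in for a trained classifier.

@dataclass
class Post:
    source: str          # e.g. "reddit", "tiktok"
    text: str
    score: float = 0.0
    flagged: bool = False

def extract(raw_items):
    """Extraction stage: wrap raw scraped items as Post objects."""
    return [Post(source=i["source"], text=i["text"]) for i in raw_items]

UNSAFE_TERMS = {"explicit", "graphic"}  # placeholder keyword list

def score_post(post):
    """Toy scorer: fraction of words matching the unsafe list.
    A real system would run an ML model per modality (image, video, text)."""
    words = post.text.lower().split()
    if not words:
        return 0.0
    return sum(w in UNSAFE_TERMS for w in words) / len(words)

def filter_and_flag(posts, threshold=0.2):
    """Filtering/flagging stage: mark posts whose score exceeds the threshold."""
    for p in posts:
        p.score = score_post(p)
        p.flagged = p.score >= threshold
    return posts

def store(posts, archive):
    """Storage stage: persist flagged posts for investigator review."""
    archive.extend(p for p in posts if p.flagged)
    return archive

raw = [{"source": "reddit", "text": "explicit graphic material"},
       {"source": "tiktok", "text": "a harmless cooking video"}]
archive = []
store(filter_and_flag(extract(raw)), archive)
print([p.source for p in archive])  # only flagged posts reach the archive
```

Note the inversion of the usual moderation goal: instead of deleting flagged posts, the pipeline archives them so that extreme content is preserved for further study.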

Background

Topic 1

Method

Results

Conclusion

References
