The federal government cracked down on platforms failing to protect users after alarming statistics revealed three in four users had experienced some form of dating app-facilitated abuse and one in five reported being threatened.
Match Group - which operates dating platforms Hinge, Tinder, Plenty of Fish, OkCupid and Match.com - banned or suspended 660,000 accounts in the 2024/25 reporting year for breaching its rules.
Almost 11,000 accounts were banned for abuse or harassment and 11,500 for "off-platform misconduct", which covers allegations of abuse, harassment, sexual exploitation or financial harm.
Action was taken against 2126 accounts for violence and hate across the company's five dating platforms.
There were 34,300 complaints about abuse and harassment, almost 50,000 for off-platform misconduct and 18,860 for violence and hate across the five dating platforms.
It also received 71 legal orders across all platforms, such as law enforcement requesting information for investigations.
This included 40 for allegations of off-platform misconduct, 17 for spam and fake accounts and nine for abuse and harassment.
Tinder accounted for the bulk of the legal inquiries, including almost half of those for off-platform misconduct and seven for abuse and harassment.
Match Group head of trust and safety Yoel Roth said the company was committed to protecting users, increasingly using technology to proactively take action.
"Millions of interactions happen on our platforms every day," Mr Roth told AAP.
"While the vast majority of those interactions are positive, sometimes things don't go as they should.
"That's why we are constantly working to address safety challenges by investing in new technology and partnering with experts to keep users safe."
Scams and spam remain the biggest area the company is tackling, with more than 610,000 accounts banned or suspended.
More than 90 per cent were fake accounts.
"There's an increasing presence of sophisticated bad actors," Mr Roth said in reference to online scams.
Automatic detection rates are highest for spam content - above 90 per cent on some platforms - but fall into the low single digits for abuse and harassment on several of the dating sites.
Match Group attributed the discrepancy to how it handles specific issues: the company is more comfortable using AI to ban spam accounts but prefers human review of more serious harms.
The data comes from the company's first transparency report since the voluntary dating app safety code came into force in October 2024.
The code requires online dating platforms to take proactive steps to protect users and take action against perpetrators.
The eSafety commissioner is reviewing the code's effectiveness and signatories' mandatory compliance reports after the federal government threatened to regulate the space if enough wasn't being done.
Match Group uses AI to detect photos that may include young people, and to scan messages in which users reveal they're underage or use language associated with grooming minors.
Live selfies for identity verification and government-issued IDs are two ways Match Group is attempting to boost safety.
1800 RESPECT (1800 737 732)
National Sexual Abuse and Redress Support Service 1800 211 028
Lifeline 13 11 14
beyondblue 1300 22 4636