Welcome to this month’s edition of Cyber Insights!
In this issue, we cover the latest myGov and ATO phishing scams, examine recent data breaches, and outline simple steps to reduce the risk of AI‑driven cyber threats.
Latest Scams
You receive an email or text message that appears to be from myGov or the Australian Taxation Office (ATO). The message claims your taxable income has been recalculated and you are owed a refund, or that there is an issue with your account that needs urgent attention. It includes a link for you to click so you can log in and resolve the issue.
However, the message is actually a phishing scam. The link takes you to a fake myGov sign-in page that looks almost identical to the real one. If you enter your login details, cybercriminals will steal them and use them to access your real myGov account. Once inside, they can lodge fraudulent tax returns in your name and change your bank details so that any refunds or government payments are redirected to their accounts.
These scams are extremely common right now. The ATO reports that fake myGov emails account for roughly 75% of all email scams reported to them over the past six months. Scammers are also using AI tools to make their messages more convincing, with fewer spelling mistakes and more realistic-looking layouts. Some victims have reported receiving phone calls from scammers pretending to be ATO officers, pressuring them to share personal details or make payments immediately.
Follow these tips to avoid falling victim to this scam:
- Never click links in emails or text messages claiming to be from the ATO or myGov. Instead, open a new browser window and go directly to my.gov.au to log in.
- Remember that the ATO will never send you an unsolicited message asking you to share personal information, passwords, or bank details by email or text.
- If you receive a suspicious message, report it to the ATO by forwarding the email to ReportEmailFraud@ato.gov.au or by calling the ATO scam hotline on 1800 008 540.
Latest Breaches
Nissan Motor Corporation
Incident Overview: On 10 January 2026, the Everest ransomware group claimed it had broken into Nissan’s internal systems and stolen approximately 900GB of data. The group posted samples of the stolen files on a dark web leak site and threatened to publish the full dataset unless Nissan responded within five days. This is not Nissan’s first breach. In 2024, a separate ransomware attack affected 100,000 customers in Australia and New Zealand, and in December 2025, another incident exposed the details of 21,000 customers.
Impact Analysis: The stolen data reportedly includes dealership records, employee information, internal business documents, and confidential program files. While Nissan has not publicly confirmed the full extent of the breach, the exposure of dealer and employee information creates risks of identity theft and targeted phishing for those affected. For Australian Nissan customers, the repeated breaches highlight the ongoing risk that personal data shared with large organisations can be accessed by cybercriminals. Read more here
Victorian Department of Education
Incident Overview: On 14 January 2026, the Victorian Department of Education confirmed that an unauthorised third party had breached a school’s network and gained access to a department database. The attack impacted all 1,700 Victorian government schools, affecting hundreds of thousands of current and former students. The department temporarily disabled systems to prevent further access and reset all student passwords as a precaution, just weeks before the start of the 2026 school year.
Impact Analysis: The accessed data included student names, school-issued email addresses, encrypted passwords, school names, and year levels. More sensitive information such as dates of birth, home addresses, and phone numbers was not compromised. While there is no evidence that the data has been publicly released, the exposure of student email addresses creates a significant risk of targeted phishing attacks aimed at young people and their families. Read more here
AI and Deepfake Scams
Could you tell the difference between a real video call with your boss and one created by artificial intelligence? AI-powered deepfake technology is making it harder than ever to trust what you see and hear online. Cybercriminals are now using AI to create realistic fake videos, voice recordings, and even live video calls to trick people into transferring money or sharing sensitive information. In one recent case in Australia, a bank had to step in to stop a customer from transferring $100,000 to someone who appeared, on a video call, to be a famous actor.
How to Protect Yourself from AI and Deepfake Scams
Follow the tips below to stay safe:
- Verify unexpected requests through a separate channel. If someone asks you for money or sensitive information, even on a video call, hang up and contact them directly using a phone number or email address you already have on file.
- Be suspicious of urgency. Scammers using AI-generated content will often pressure you to act quickly, saying things like “this needs to happen right now” or “don’t tell anyone about this.” Legitimate requests can wait for you to verify them.
- Watch for subtle signs of fakes. AI-generated videos may have unnatural eye movements, odd lighting, lips that don’t quite sync with the audio, or a slightly robotic tone. While these signs are becoming harder to spot, they can still be a clue that something isn’t right.
- Limit what you share publicly. Cybercriminals can use photos, videos, and audio clips you’ve posted on social media to create deepfakes of you or people you know. Think carefully about what personal content you share online.
- Talk to your team about this threat. Make sure your colleagues know that AI-generated scams exist. If your organisation handles financial transactions or sensitive data, establish a process for verifying unusual requests that doesn’t rely solely on email or video calls.
Find out more about cybersecurity for your business here, or book a complimentary consultation with our Chief Information Security Officer, Chris Haigh, here.