Agents working for the Australian Federal Police have their personal information exposed in Colombian data leak

October 17, 2022

One of the most challenging issues for any organisation is securing information it has collected and then provided to a third party.  It makes financial sense for data to be processed overseas, and call centres serving Australian companies on the subcontinent and in Asia are ubiquitous.  Under Australian Privacy Principle 8, cross-border disclosure of personal information, organisations covered by the Privacy Act must take “reasonable steps” to ensure that personal information disclosed to overseas entities is not handled in breach of the Australian Privacy Principles. More particularly, an organisation or government agency that discloses personal information is liable for a breach of the Australian Privacy Principles by the overseas entity handling that information.

The Australian Federal Police is an APP entity under the Act.

It appears that information about agents engaged by the Australian Federal Police overseas, including their identities, has been stolen from the Colombian Government in a cyber attack.  Given that the murder rate in Colombia is three times that of the United States and drug cartels still operate in that part of Latin America, the safety of those agents is more important than a possible breach of the Privacy Act.  There would be a real issue about Read the rest of this entry »

Woolies suffers a data breach through its MyDeal customers

October 16, 2022

Loyalty and rewards programs are just sophisticated data gathering machines.  Whatever benefits clients obtain from these programs, the price is the constant collection of data.  The adage applies: “If you are getting something for free, you are the product.” And so it is with Woolies’ MyDeal program.  It needs data, lots of it, to help Woolies make offers and identify trends.

There is nothing exceptional in any of that, except when there is a cyber attack.  Which is what has happened, as the Guardian reports in Woolworths says 2.2 million MyDeal customers’ details exposed in data breach.

The Guardian article provides:

Millions of customers’ details have been exposed in a major data breach at an online shopping site owned by the retail giant Woolworths.

The company says a compromised user credential was used to get access to customer information from the MyDeal website. Read the rest of this entry »

Medibank Private suffers a cyber security breach

October 13, 2022

As if to underscore the need for better cyber security and privacy reform, Medibank reportedly suffered a cyber attack yesterday, according to itnews in Medibank takes systems offline after ‘cyber incident’.  In response Medibank shut down two customer facing systems.  According to the ABC, the insurer says there is no evidence that sensitive data has been accessed.

Interestingly, the ABC reports on a surge of interest in cyber security professionals with Since the Optus data breach, Australia is desperate for cybersecurity professionals. You could become one without a university degree, which is quite general in its coverage.  The awareness of the need for cyber experts, actually privacy experts, has been growing for Read the rest of this entry »

Federal Government to expedite 3 reforms to the Privacy Act in light of the Optus data breach

In a speech at the National Press Club the Attorney General, Mark Dreyfus, announced 3 privacy reforms ahead of a more comprehensive amendment of the Privacy Act.  Those reforms are:

  • tougher penalties,
  • data retention limits and
  • anti-fraud measures

Each of the above reforms is welcome.  Legislating them outside a broader and more comprehensive amendment to the Privacy Act is not best practice by any means.  Legislating tougher penalties is long overdue, but increasing penalties when the legislation is going to be amended within 12 months has little practical impact.  A case brought today would not be resolved within 12 months given the current state of the Federal Court list.  Data retention limits is Read the rest of this entry »

Another poll on privacy finds that Australians care about their privacy and want tougher rules

October 12, 2022

Today’s Sydney Morning Herald reports on a Resolve Political Monitor poll that finds that a clear majority of voters want tougher privacy rules.  The findings themselves are hardly new.  Wherever and whenever there have been polls on privacy, people have consistently expressed concern about the lack of privacy, the use of their personal information and the need for stronger rules. The attitudes and concerns of Australians and Americans do not differ markedly.  That has not resulted in governments doing much to improve privacy protections.

Even if the poll does not reveal anything radically new, the timing is significant after the Optus data breach.

The article Read the rest of this entry »

The Australian Information Commissioner opens an investigation into Optus regarding its data breach

October 11, 2022

Today the Australian Information Commissioner initiated an investigation into Optus.  In other jurisdictions this step by a regulator is quite common.  It is far less so in Australia.  It is clearly required given the size of the data breach, the likely cause and the consequential events as Optus has struggled to remediate the damage.

The Commissioner’s statement Read the rest of this entry »

Singtel subsidiary, Dialog, suffers a data breach involving the personal information of 1,020 people

Singtel’s woes continue.  Singtel’s Australian IT firm Dialog has announced that it suffered a data breach just weeks after the Optus breach, involving 1,000 employees and 20 clients.  As is the way of it, the media coverage has been considerable and unwelcome (to Singtel), one of the almost inevitable effects of a data breach.

Dialog released a statement which provided:

The Dialog Group (Dialog) today confirmed that the company has experienced a cyber security incident in which an unauthorised third party may have accessed company data, potentially affecting fewer than 20 clients and 1,000 current Dialog employees as well as former employees.

Dialog has notified the relevant authorities and is supporting those who may be impacted to protect against the risk of fraudulent activity.

On Saturday 10 September 2022, Dialog detected unauthorised access on our servers, which were then shut down as a preventative measure. Within two business days, our servers were restored and fully operational.

We contracted a leading cyber security specialist to work with our IT team to undertake a deep forensic investigation and continuous monitoring of the Dark Web. Our ongoing investigations showed no evidence of unauthorised downloading of data.

On Friday 7 October 2022 we became aware that a very small sample of Dialog’s data, including some employee personal information, was published on the Dark Web.

We are doing our utmost to address the situation and, as a precaution, we are actively engaging with potentially impacted stakeholders to share information, support and advice.

It is not a particularly informative statement Read the rest of this entry »

European Union Commission proposes an Artificial Intelligence Directive to complement the Artificial Intelligence Act

October 9, 2022

Artificial intelligence is now a key policy challenge across a range of disciplines: administration of justice, privacy, access to services, insurance and other forms of risk assessment, medicine, construction and manufacturing.  It is transformative and will continue to be so. It also poses questions as to liability for products. The EU has proposed a legal framework on AI.

The reasons for that framework are described by the EU as follows:

The Commission is proposing the first-ever legal framework on AI, which addresses the risks of AI and positions Europe to play a leading role globally.

The regulatory proposal aims to provide AI developers, deployers and users with clear requirements and obligations regarding specific uses of AI. At the same time, the proposal seeks to reduce administrative and financial burdens for business, in particular small and medium-sized enterprises (SMEs).

The proposal is part of a wider AI package, which also includes the updated Coordinated Plan on AI. Together, the Regulatory framework and Coordinated Plan will guarantee the safety and fundamental rights of people and businesses when it comes to AI. And, they will strengthen uptake, investment and innovation in AI across the EU.

Why do we need rules on AI?

The proposed AI regulation ensures that Europeans can trust what AI has to offer. While most AI systems pose limited to no risk and can contribute to solving many societal challenges, certain AI systems create risks that we must address to avoid undesirable outcomes.

For example, it is often not possible to find out why an AI system has made a decision or prediction and taken a particular action. So, it may become difficult to assess whether someone has been unfairly disadvantaged, such as in a hiring decision or in an application for a public benefit scheme.

Although existing legislation provides some protection, it is insufficient to address the specific challenges AI systems may bring.

The proposed rules will:

    • address risks specifically created by AI applications;
    • propose a list of high-risk applications;
    • set clear requirements for AI systems for high risk applications;
    • define specific obligations for AI users and providers of high risk applications;
    • propose a conformity assessment before the AI system is put into service or placed on the market;
    • propose enforcement after such an AI system is placed in the market;
    • propose a governance structure at European and national level.

A risk-based approach

[Image: pyramid showing the four levels of risk, from unacceptable risk at the top, through high risk and limited risk, to minimal or no risk at the base]

The Regulatory Framework defines 4 levels of risk in AI:

    • Unacceptable risk
    • High risk
    • Limited risk
    • Minimal or no risk

Unacceptable risk

All AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned, from social scoring by governments to toys using voice assistance that encourages dangerous behaviour.

High risk

AI systems identified as high-risk include AI technology used in:

    • critical infrastructures (e.g. transport), that could put the life and health of citizens at risk;
    • educational or vocational training, that may determine the access to education and professional course of someone’s life (e.g. scoring of exams);
    • safety components of products (e.g. AI application in robot-assisted surgery);
    • employment, management of workers and access to self-employment (e.g. CV-sorting software for recruitment procedures);
    • essential private and public services (e.g. credit scoring denying citizens opportunity to obtain a loan);
    • law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence);
    • migration, asylum and border control management (e.g. verification of authenticity of travel documents);
    • administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).

High-risk AI systems will be subject to strict obligations before they can be put on the market:

    • adequate risk assessment and mitigation systems;
    • high quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
    • logging of activity to ensure traceability of results;
    • detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
    • clear and adequate information to the user;
    • appropriate human oversight measures to minimise risk;
    • high level of robustness, security and accuracy.

All remote biometric identification systems are considered high risk and subject to strict requirements. The use of remote biometric identification in publicly accessible spaces for law enforcement purposes is, in principle, prohibited.

Narrow exceptions are strictly defined and regulated, such as when necessary to search for a missing child, to prevent a specific and imminent terrorist threat or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offence.

Such use is subject to authorisation by a judicial or other independent body and to appropriate limits in time, geographic reach and the databases searched.

Limited risk

Limited risk refers to AI systems with specific transparency obligations. When using AI systems such as chatbots, users should be aware that they are interacting with a machine so they can take an informed decision to continue or step back.

Minimal or no risk

The proposal allows the free use of minimal-risk AI. This includes applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category.
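At its core, the tiering described above is a lookup from application type to risk level (and hence to obligations). The following is a purely illustrative sketch, not part of the EU proposal or any official tooling: the tier names and example applications are drawn from the Commission’s summary quoted above, but the table and the `risk_tier` function are hypothetical constructs for illustration only.

```python
# Illustrative only: maps the Commission's example applications to the
# four risk tiers described in the proposal. Real classification under
# the AI Act depends on detailed legal criteria, not a simple lookup.

RISK_TIERS = {
    "unacceptable": [
        "social scoring by governments",
        "toys using voice assistance that encourage dangerous behaviour",
    ],
    "high": [
        "critical infrastructure",
        "exam scoring",
        "robot-assisted surgery",
        "CV-sorting software",
        "credit scoring",
        "remote biometric identification",
    ],
    "limited": ["chatbots"],
    "minimal": ["AI-enabled video games", "spam filters"],
}


def risk_tier(application: str) -> str:
    """Return the risk tier for a known example application."""
    for tier, examples in RISK_TIERS.items():
        if application in examples:
            return tier
    return "unclassified"


print(risk_tier("chatbots"))      # limited
print(risk_tier("spam filters"))  # minimal
```

The point of the sketch is simply that obligations scale with the tier: a "minimal" result attracts no new duties, while a "high" result triggers the conformity-assessment and documentation requirements listed above.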

[Image: step-by-step process for the declaration of conformity]
How does it all work in practice for providers of high risk AI systems?

Once an AI system is on the market, authorities are in charge of market surveillance, users ensure human oversight and monitoring, and providers have a post-market monitoring system in place. Providers and users will also report serious incidents and malfunctioning.

Future-proof legislation

As AI is a fast evolving technology, the proposal has a future-proof approach, allowing rules to adapt to technological change. AI applications should remain trustworthy even after they have been placed on the market. This requires ongoing quality and risk management by providers.

And on that note the EU has Read the rest of this entry »

35.6 million records breached in September 2022

IT Governance has listed 88 publicly disclosed security incidents in September, involving 35,566,046 records.  That is an improvement on the August figure of 97 million records, but remains grim reading.

Some notable incidents include:

  • Wolfe Eye Clinic in the United States suffered a data breach through a third party which affected 542,776 patients.
  • The US NFL franchise the 49ers suffered a data breach involving more than 20,000 social security numbers.
  • Medical Associates of the Lehigh Valley suffered a data breach affecting the social security numbers and personal health information of 75,628 patients.
  • the personal information of 16 million people was compromised in a data breach of the Indian Swatchhta Platform.
  • 8 hotels in the Shangri-La Group were hacked, compromising the personal information of 290,000 hotel guests.  Three of the Shangri-La hotels are located in Hong Kong.  The group waited 2 months to tell the Hong Kong Privacy Commissioner of the breaches affecting the Hong Kong hotels, something that raised the Commissioner’s ire. As a result the Privacy Commissioner commenced a compliance check.
  • Morgan Stanley paid a $35 million fine to the SEC for failing to properly dispose of hard drives and servers containing the personal information of 15 million customers.

The UK Home Office reprimanded by the UK Information Commissioner’s Office for leaving sensitive documents at a public venue in London…an old school data breach

A data breach is not confined to a cyber attack resulting in theft of personal information or the insertion of ransomware.  A data breach includes loss of paper documents in a public place or documents stored on a mobile device or memory stick.

The Information Commissioner issued a formal reprimand to the Home Office after sensitive documents were found at a public London venue in September 2021. It involved 4 documents in an envelope.

As is commonly the way of it, the documents were handed to police in September 2021.  The documents included two Extremism Analysis Unit Home Office reports and a Counter Terrorism Policing report. The reports contained personal data, including that of Metropolitan Police staff.

As often happens, the initial data breach is only the start of the organisation’s troubles.  The regulator found the Home Office’s processes lacking.

Not surprisingly, the ICO found that the Home Office had failed to ensure an appropriate level of security for personal data and, even where documents were classified as ‘Official Sensitive’, did not have a specific sign-out process for the removal of documents from the premises.

The reprimand relevantly Read the rest of this entry »