Long-awaited Productivity Commission report on Data Availability and Use released yesterday
May 9, 2017
The Productivity Commission’s long-running investigation into data use gave rise to a very significant interim report. Yesterday the Productivity Commission publicly released the final report it provided to the Government on 31 March 2017. The final report, a behemoth at over 658 pages, is found here, while the overview, not exactly a slimline edition at 76 pages, is found here.
This is a very thoughtful and comprehensive report, even for those who do not agree with all of its methodology and recommendations. The Productivity Commission is recommending a complete overhaul of the way data is collected, stored, used and transferred. While it calls for an opening up of data use and data sharing, it seeks to incorporate privacy protections and to limit the controls government has over the flow of data.
If the recommendations are accepted and incorporated into law there will be a significant overhaul of privacy and data protection laws. The recommendations would also require a more serious and comprehensive attempt to establish effective regulation of data usage. Hopefully that will mean real, rather than illusory, enforcement in response to breaches. And that is the rub, given Australia’s lacklustre history in this area.
The key points the Productivity Commission identified are:
- Extraordinary growth in data generation and usability has enabled a kaleidoscope of new business models, products and insights. Data frameworks and protections developed prior to sweeping digitisation need reform. This is a global phenomenon and Australia, to its detriment, is not yet participating.
- Improved data access and use can enable new products and services that transform everyday life, drive efficiency and safety, create productivity gains and allow better decision making.
- The substantive argument for making data more available is that opportunities to use it are largely unknown until the data sources themselves are better known, and until data users have been able to undertake discovery of data.
- Lack of trust by both data custodians and users in existing data access processes and protections and numerous hurdles to sharing and releasing data are choking the use and value of Australia’s data. In fact, improving trust community-wide is a key objective.
- Marginal changes to existing structures and legislation will not suffice. Recommended reforms are aimed at moving from a system based on risk aversion and avoidance, to one based on transparency and confidence in data processes, treating data as an asset and not a threat. Significant change is needed for Australia’s open government agenda and the rights of consumers to data to catch up with achievements in competing economies.
- At the centre of recommended reforms is a new Data Sharing and Release Act, and a National Data Custodian to guide and monitor new access and use arrangements, including proactively managing risks and broader ethical considerations around data use.
- A new Comprehensive Right for consumers would give individuals and small/medium businesses opportunities for active use of their own data and represent fundamental reform to Australia’s competition policy in a digital world. This right would create for consumers:
- powers comparable to those in the Privacy Act to view, request edits or corrections, and be advised of the trade to third parties of consumer information held on them
- a new right to have a machine-readable copy of their consumer data provided either to them or directly to a nominated third party, such as a new service provider.
- A key facet of the recommended reforms is the creation of a data sharing and release structure that indicates to all data custodians a strong and clear cultural shift towards better data use that can be dialled up for the sharing or release of higher-risk datasets.
- For datasets designated as national interest, all restrictions to access and use contained in a variety of national and state legislation, and other program-specific policies, would be replaced by new arrangements under the Data Sharing and Release Act. National Interest Datasets would be resourced by the Commonwealth as national assets.
- A suite of Accredited Release Authorities would be sectoral hubs of expertise and enable the ongoing maintenance of, and streamlined access to, National Interest Datasets as well as to other datasets to be linked and shared or released.
- A streamlining of ethics committee approval processes would provide more timely access to identifiable data for research and policy development purposes.
- Incremental costs of more open data access and use — including those associated with better risk management and alterations to business data systems — will exist but should be substantially outweighed by the opportunities presented.
- Governments that ignore potential gains through consumer data rights will make the task of garnering social licence needed for other data reforms more difficult. Decoupling elements of this Framework runs the risk of limiting benefits to, and support from, the wider public.
The findings and recommendations are:
Finding 1.1
Australia’s provision of open access to public sector data is below comparable countries with similar governance structures, including the United States, the United Kingdom and New Zealand.
While there remains considerable scope to improve the range of datasets published (and, correspondingly, the diversity of agencies and research bodies publicly releasing data), poor formatting and the lack of frequency with which data is publicly updated are reducing data usability.
Finding 2.1
The benefits from greater access to data would be widespread, but consumers, in particular, have much to gain, collectively, from action on Australia’s data sharing and release arrangements.
Finding 3.1
Individuals are likely to be more willing to allow data about themselves to be used by private and public organisations, provided they understand why and how the data is being used, can see tangible benefits, and have control over who the data is shared with.
Finding 3.2
A wide range of more than 500 secrecy provisions in Commonwealth legislation plus other policies and guidelines impose considerable limitations on the availability and use of identifiable data. While some may remain valid, they are rarely reviewed or modified. Many would no longer be fit for purpose.
Incremental change to data management frameworks is unlikely to be effective or timely, given the proliferation of these restrictions.
Finding 3.3
Data integration in some jurisdictions (particularly Western Australia and New South Wales) has progressed in some fields, but highlights a lack of action in equivalent fields at both Commonwealth and State level, and reveals the large unmet potential in data integration opportunities.
Finding 3.4
The boundaries of personal information are constantly shifting in response to technological advances and new digital products, along with community expectations.
The legal definition of personal information, contained in the Privacy Act 1988 (Cth), has always had an element of uncertainty, and is managed by guidelines. In the face of rapid changes in sources and types of data, outcome-focused data definitions remain essential. But practical guidance (that data custodians and users can rely on) is required on what sorts of data are covered by the definitions.
Finding 3.5
Despite recent statements in favour of greater openness, many areas of Australia’s public sector continue to exhibit a reluctance to share or release data.
The entrenched culture of risk aversion, reinforced by a range of policy requirements and approval processes, and often perverse incentives, greatly inhibits data discovery, analysis and use.
The lack of public release and data sharing between government entities has contributed to fragmentation and duplication of data collection activities. This not only wastes public and private sector resources but also places a larger than necessary reporting burden on individuals and organisations.
Finding 3.6
Large volumes of identifiable information are already published online by individuals or collected by various organisations, with or without explicit consent.
Breaches of personal data, often compounded by individuals’ unwary approach to offering data, are largely dominated by malicious database hacking or criminal activity. By comparison, breaches due to sharing or release are rare.
Finding 4.1
Comprehensive reform of Australia’s data infrastructure is needed to signal that permission is granted for active data sharing and release and that data infrastructure and assets are a priority. Reforms should be underpinned by:
- clear and consistent leadership
- transparency and accountability for release and risk management
- reformed policies and legislation
- institutional change.
Finding 4.2
Community trust and acceptance will be vital for the implementation of any reforms to Australia’s data infrastructure. These can be built through enhancement of consumer rights, genuine safeguards, transparency, and effective management of risk.
Recommendations
Recommendation 5.1
Consumer data must be provided on request to consumers or directly to a designated third party in order to exercise a number of rights, summarised as the Comprehensive Right to access and use digital data. This Comprehensive Right would enable consumers to:
- share in perpetuity joint access to and use of their consumer data with the data holder
- receive a copy of their consumer data
- request edits or corrections to it for reasons of accuracy
- be informed of the trade or other disclosure of consumer data to third parties
- direct data holders to transfer data in machine-readable form, either to the individual or to a nominated third party.
Where a transfer is requested outside of an industry (such as from a medical service provider to an insurance provider) and the agreed scope of consumer data is different in the source industry and the destination industry, the scope that applies would be that of the data sender.
Recommendation 5.2
The Australian Government should introduce an outcome-based definition of consumer data that is, as an overarching objective, data that is sufficient to enable the provision of a competing or complementary service or product for a consumer.
In the relevant service or product context, consumer data is digital data, provided in machine-readable format, that is:
- held by a product or service provider, and
- identified with a consumer, and
- associated with a product or service provided to that consumer.
Participants in an industry should determine the scope of consumer data relevant to their industry (where an industry in this context would be determined by a broad description of the service). This should be in the form of a data-specification agreement.
Data-specification agreements should also articulate: transfer mechanisms, and security of data, to ensure that data use is practical and robust to technology updates; and the requirements necessary to authenticate a consumer request prior to any transfer.
These agreements should be registered with the ACCC, which may offer interim approval where an agreement has been reached but other industry agreements have been prioritised for approval.
In the absence of such agreement, consumer data must be in machine-readable form and include all of:
- personal information, as defined in the Privacy Act 1988 (Cth), that is in digital form
- information posted online by the consumer
- data created from consumers’ online transactions, Internet-connected activity, or digital devices
- data purchased or obtained from a third party that is about the identified consumer
- other data associated with transactions or activity that is relevant to the transfer of data to a nominated third party.
Data that is solely imputed by a data holder to be about a consumer may only be included with industry-negotiated agreement. Data that is collected for security purposes or is subject to intellectual property rights would be excluded from consumer data.
A consumer for the purposes of consumer data should include a natural person and an ABN holder with a turnover of less than $3m pa in the most recent financial year.
Data that is not able to be re-identified to a consumer in the normal course of business within a data holder should not be considered consumer data.
The definition should be included in a new Act for data sharing and release (Recommendation 8.1). Given the need for consumer data to have broad applicability, the outer boundary definition and reference to ACCC registered industry-specific definitions should also be included within the Acts Interpretation Act 1901 (Cth). Consequential amendments to other legislation in the future would ensure harmonisation across federal laws.
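To make the machine-readable requirement concrete, here is a minimal sketch of what an export under this default, outer-boundary scope might look like. The JSON structure, field names and helper function are my own illustration and assumptions; the report prescribes the scope of consumer data, not any particular format.

```python
import json

# Hypothetical sketch of a consumer-data export under the outer-boundary
# definition in Recommendation 5.2. All field names are illustrative only.
def build_consumer_data_export(consumer_id: str) -> str:
    export = {
        "consumer_id": consumer_id,
        "personal_information": {   # Privacy Act personal information held in digital form
            "name": "Jane Citizen",
            "email": "jane@example.com",
        },
        "posted_online": [          # information posted online by the consumer
            {"date": "2017-03-01", "content": "review text ..."},
        ],
        "transaction_data": [       # data created from online transactions, activity, devices
            {"date": "2017-04-12", "product": "home loan", "amount": 350000},
        ],
        "third_party_data": [       # data purchased or obtained about the identified consumer
            {"source": "credit bureau", "summary": "repayment history"},
        ],
        # Solely imputed data, security data and IP-protected data are excluded
        # unless an industry-negotiated agreement provides otherwise.
    }
    return json.dumps(export, indent=2)

print(build_consumer_data_export("C-12345"))
```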
Recommendation 5.3
All holders of consumer data should include in their privacy policies, terms and conditions, or on their websites a list of parties to whom consumer data has been traded or otherwise disclosed over the past 12 months.
On the windup of an entity that holds consumer data, consumers should be informed if data to which they hold a joint right has been traded or transferred to another entity. For businesses entering formal insolvency processes, insolvency practitioners should ensure consumers have been informed. For businesses closing but not in insolvency proceedings, the entity acquiring consumer data should inform consumers of this fact and give them the opportunity for data collection to cease.
Recommendation 5.4
The Australian Government should provide for broad oversight and complaints handling functions relating to the use of the Comprehensive Right. Accordingly, the Australian Competition and Consumer Commission (ACCC) should be resourced to undertake the following additional responsibilities:
- approving and registering industry data-specification agreements and standards
- handling complaints in relation to a data holder’s failure to meet the terms of the Comprehensive Right, including in regard to the scope of consumer data
- educating consumers (in conjunction with State and Territory fair trading offices) on their rights and responsibilities under the Comprehensive Right
- assessing the validity, when requested or at their discretion, of charges levied by data holders for application of the Comprehensive Right.
The Office of the Australian Information Commissioner and industry ombudsmen should, in order to ensure a ‘no wrong door’ approach to handling consumer engagement, coordinate with the ACCC on the receipt and handling of consumer complaints on data access and use.
Recommendation 5.5
The Australian Government should adopt a minimum target for voluntary participation in Comprehensive Credit Reporting of 40% of all active credit accounts, provided by Australian Securities and Investments Commission (ASIC)-licensed credit providers, for which comprehensive data is supplied to the credit bureaux in public mode.
If this target is not achieved by 30 June 2017, the Government should circulate draft legislation by 31 December 2017, to impose mandatory participation in Comprehensive Credit Reporting (including the reporting of repayment history) by ASIC-licensed credit providers in 2018.
The Office of the Australian Information Commissioner and ASIC should consult with other regulators, industry groups and consumer advocates to collaboratively consider whether there is a need for a hardship flag in credit reporting.
The Department of the Treasury should be given responsibility for monitoring and publicly reporting on a regular basis on participation in Comprehensive Credit Reporting.
Recommendation 6.1
As an immediate objective, all Australian governments should direct the early release of all non-sensitive publicly funded datasets — whether held by a government agency or other body receiving public funding for data collection activities.
A realistic assessment of the risks attached to public release of identifiable information that is already public (in a less accessible form) should be undertaken by all governments, with the intention of releasing low risk data, and mitigating risks where possible to enable far greater public release of data, including that which could be used for program or agency performance management purposes.
Agencies should report annually on the proportions of their datasets made publicly available, shared, and not available for release.
Recommendation 6.2
Additional qualified entities should be accredited to undertake data linkage.
State-based data linkage units should be able to apply for accreditation by the National Data Custodian (Recommendation 6.6) to allow them to link Australian Government data.
Recommendation 6.3
All Australian governments entering into contracts with the private sector that involve the creation of datasets in the course of delivering public services should assess the strategic significance and public interest value of the data as part of the contracting process.
Where data is assessed to be valuable, governments should retain the right to access or purchase that data in machine-readable form and to subsequently apply any analysis and release strategy that is in the public interest.
The Australian Government Department of Finance should modify template contracts to, by default, vest access and purchase rights in governments, and avoid the need for negotiating separate rights in each contract. State and Territory governments should adopt a similar approach.
Recommendation 6.4
Publicly funded entities, including all Australian Government agencies, should create comprehensive, easy to access registers of data, including metadata and linked datasets, that they fund or hold. These registers should be published on data.gov.au. Where datasets are held or funded but are not available for access or release, the register should indicate this and the reasons why this is so.
States and Territories should create an equivalent model for their agencies where such registers do not exist. These should, in turn, be linked to data.gov.au.
A reasonable timeframe in which to achieve this is within one year (by March 2018).
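Purely by way of illustration, a register entry of the kind contemplated could be as simple as the record sketched below. The field names are my assumptions, loosely modelled on common open-data catalogue metadata, not a schema drawn from the report.

```python
# Illustrative register entry for a dataset that is held but not released.
# Field names and the URL are hypothetical, not prescribed by the report.
register_entry = {
    "title": "Hospital admitted patient care 2015-16",
    "custodian": "Department of Health",
    "metadata_url": "https://example.gov.au/metadata/12345",  # hypothetical
    "linked_datasets": ["MBS claims", "PBS claims"],
    "availability": "not available for release",
    "reason_withheld": "contains identifiable patient-level records",
    "last_updated": "2017-03-31",
}

for field, value in register_entry.items():
    print(f"{field}: {value}")
```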
Recommendation 6.5
In determining datasets for public release, a central government agency in each jurisdiction with overarching policy responsibility for data should offer a public process whereby datasets or combinations of datasets can be nominated, with a public interest case made, for release.
A list of requested datasets, and decisions regarding dataset release or otherwise, should be transparent and published online — in the Commonwealth’s case, on data.gov.au.
Recommendation 6.6
The Australian Government should establish an Office of the National Data Custodian (NDC) to take overall responsibility for the implementation of data management policy, in consultation with all levels of Government.
The Office of the NDC should have responsibility for:
- broad oversight and ongoing monitoring of and public reporting on Australia’s national data system and the operation of the new Data Sharing and Release Act (recommendation 8.1)
- preliminary assessments for, and recommending designation of, National Interest Datasets (recommendation 7.1)
- accrediting release authorities, being party to determining a funding agreement for Accredited Release Authority (ARA) activities, and promoting cooperation between ARAs
- managing complaints about ARA processes
- providing practical guidance material for ARAs and data custodians on matters such as risk management, data curation and metadata, data security, data de-identification and trusted user models
- advising on ethics and emerging risks and opportunities in data use.
The Office of the NDC should include a small advisory board, comprising members with technical skills related to the NDC’s activities, and a dedicated ethics adviser.
The NDC role should be filled administratively by the end of 2017 to be operational by the time that new draft legislation for data access is completed for public consultation (Recommendation 10.2).
Recommendation 6.7
The National Data Custodian should streamline approval processes for access to data by:
- issuing clear guidance to all Australian Government data custodians on their rights and responsibilities, ensuring that requests for access to data they hold are dealt with in a timely and efficient manner and are consistent with the risk management approach to be adopted by Accredited Release Authorities (ARAs)
- requiring that these data custodians report annually on their handling of requests for data access, including requests from ARAs.
State and Territory governments may opt in to these approaches to enable use of data for jurisdictional comparisons and cross-jurisdictional research.
Recommendation 6.8
Selected public sector and public interest entities should be accredited as release authorities. Accreditation should be determined based on sectoral expertise, capability, governance structures, and include consultation throughout the relevant sector.
Accredited Release Authorities (ARAs) would be responsible for:
- deciding (in consultation with original data custodians) whether a dataset is available for public release or limited sharing with trusted users
- collating, curating, linking and ensuring the timely updating of National Interest Datasets and other datasets
- offering advice, services and assistance on matters such as dataset curation, de-identification and linking
- providing risk-based access to trusted users.
ARAs should be fully operational from the beginning of 2019.
Recommendation 6.9
All Accredited Release Authorities must have and publish formal risk management processes to effectively assess and manage the risks associated with sharing and release of data under their control.
Standardised, access-friendly Data Sharing Agreements should be implemented with external data providers and users to formalise the activities that can take place with identifiable and de-identified data.
Risk management processes should be regularly reviewed and revised to account for new and emerging risks.
Recommendation 6.10
Funding of Accredited Release Authorities (ARAs), for the purposes of data management, curation, storage and access should be set via a funding agreement with the National Data Custodian.
ARAs should have the power to charge fees sufficient to recoup costs where ARAs undertake requested work beyond that envisaged in their funding arrangement with the National Data Custodian.
In assessing the scope to undertake such activities, ARAs must ensure they do not detract from their primary focus on the public benefits of enabling greater access to, and use of, data (which is the basis for their accreditation and funding).
Recommendation 6.11
The Office of the National Data Custodian should be afforded the power to require an audit of a data custodian’s de-identification processes and issue assurance of de-identification practices used.
Recommendation 6.12
Accredited Release Authorities (ARAs) should be given responsibility to grant, on a continuing program-wide basis, data access to trusted users from a range of potential entities that:
- have the necessary governance structures and processes in place to address the risks of inappropriate data use associated with particular datasets, including access to secure computing infrastructure, and
- have a signed legal undertaking that sets out safeguards for data use and recognises relevant privacy requirements.
In assessing trusted user access, the ARAs should accept existing current approvals of the trusted user’s work environment.
Trusted user status for use of identifiable data would cease for that user when they leave the approved environment, when a program is completed, or if a data breach or mishandling occurs in that same environment and/or program.
Recommendation 6.13
Accredited Release Authorities (ARAs) and data custodians should be required to refer suspected and actual violations of data use conditions that have system-wide implications to the National Data Custodian.
Clarification should be issued detailing how this process would interact with the Privacy Amendment (Notifiable Data Breaches) Act 2017 (Cth).
Recommendation 6.14
Progress by individual research institutions receiving Australian Government funding in making their unique research data and metadata widely available to others should be openly published by those institutions, with reference to past performance.
All bodies channelling public funds for research, such as the National Health and Medical Research Council and Australian Research Council, should similarly require in future funding agreements with research applicants that data and metadata is to be publicly available, and publish the results of progress on this for their funded projects.
On completion of projects, research institutions should include in their reports details of when and how other researchers can access the project’s data and metadata.
Recommendation 6.15
Processes for obtaining approval from human research ethics committees (HRECs) should be streamlined.
To achieve this in the health sector:
- All HRECs should be required to register with the National Health and Medical Research Council (NHMRC). The NHMRC should receive funding to expand its current registration process, to include audits of registered HRECs.
- To maintain their registration, HRECs must implement efficient and timely approval processes, which ensure projects are not unduly delayed. The time taken to consider and review projects should be reported to the NHMRC, and included in the annual report on HREC activity.
- As a condition for registration, all HRECs and the institutions they operate in would be required to accept approvals issued by certified HRECs for multi-site projects, without additional reviews. The Australian Health Ethics Committee should develop uniform review processes to be used by certified HRECs.
The Council of Australian Governments’ Health Council should sign an intergovernmental agreement that extends the existing National Mutual Acceptance Scheme to all jurisdictions, including the Commonwealth, and all types of projects. As part of this agreement, all jurisdictions should also implement streamlined governance approvals.
Recommendation 6.16
The Privacy Act 1988 (Cth) exceptions that allow access to identifiable information for the purposes of health and medical research without seeking individuals’ agreement, should be expanded in the legislative package that implements these reforms to apply to all research that is determined by the National Data Custodian to be in the public interest.
Recommendation 6.17
The Australian Government should abolish its requirement to destroy linked datasets and statistical linkage keys at the completion of researchers’ data integration projects. Where an Accredited Release Authority is undertaking multiple linkage projects, it should work towards creating enduring linkage systems to increase the efficiency of linkage processes.
Data custodians should be advised as part of early implementation of this reform package to use a risk-based approach to determine how to enable ongoing use of linked datasets. The value added to original datasets by researchers should be retained and made available to other dataset users.
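For readers unfamiliar with statistical linkage keys, the SLK-581 key long used in Australian health and community-services data linkage gives a concrete sense of what would now endure rather than be destroyed. The sketch below is my own simplification of the published format, not a compliant implementation.

```python
# Rough sketch of an SLK-581-style statistical linkage key: selected letters
# of the family and given names, date of birth and a sex code. Two records
# for the same person yield the same key without storing the name itself.
def slk_581(family: str, given: str, dob_ddmmyyyy: str, sex_code: str) -> str:
    def pick(name: str, positions: list) -> str:
        name = "".join(c for c in name.upper() if c.isalpha())
        # '2' is used as a filler character where the name is too short.
        return "".join(name[p - 1] if p <= len(name) else "2" for p in positions)

    return pick(family, [2, 3, 5]) + pick(given, [2, 3]) + dob_ddmmyyyy + sex_code

print(slk_581("Citizen", "Jane", "01021980", "2"))  # -> "ITZAN010219802"
```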
Recommendation 7.1
The Australian Government, in consultation with State and Territory governments, should establish a process whereby public (and in some exceptional cases, private) datasets are nominated and designated as National Interest Datasets (NIDs).
This process should be public, driven by the National Data Custodian, and involve:
- The National Data Custodian accepting nominations for NIDs, assessing their public interest merits and, after consideration by the Government, referring selected nominations to a public scrutiny process. Designation would occur via a disallowable instrument on the recommendation of the National Data Custodian.
- The establishment of a parliamentary committee, or addition of such a role to the work of an existing parliamentary committee, to conduct public scrutiny of nominations for NIDs.
The process of nomination should be open to the States and Territories in order to cover linked datasets.
This process should be in place by the end of 2018, as part of the legislative package to implement these reforms.
Recommendation 7.2
In considering nominations for National Interest Datasets (NIDs), the National Data Custodian’s public interest test should establish that through sharing or release, the designation of a dataset would be likely to generate significant additional community-wide net benefits beyond those obtained by the original data holder.
Once designated, NIDs that contain non-sensitive data should be made available for immediate release.
NIDs that include data on individuals would be available to trusted users only in a manner that reflects the accreditation processes of the relevant Accredited Release Authority, as established and updated by the National Data Custodian, to respect privacy and confidentiality.
Where data from the private and/or not-for-profit sectors is recommended to be included in a NID, the analysis prior to designation should specifically note the ways the designation addresses genuine commercial sensitivity associated with the information and costs (including those related to ongoing dataset maintenance).
Recommendation 7.3
Trusted users should be accredited by the relevant Accredited Release Authority (ARA) for access to those National Interest Datasets (NIDs) that are not publicly released, under processes accredited and updated as needed by the National Data Custodian.
Trusted users should be personnel from a range of potential entities that:
- have the necessary governance structures and processes in place to address the risks of inappropriate data use associated with particular datasets, including access to secure computing infrastructure, and
- have a signed legal undertaking that sets out safeguards for data use and recognises relevant privacy requirements.
The default position should be that after applicants and their institution establish capability to respect the processes and obligations of the ARA’s accredited standard, an individual researcher from one of these organisations would be readily approved for access.
For trusted users of NIDs, this status should provide an ongoing access arrangement to specified unreleased datasets that would only cease on completion of a researcher’s engagement with their relevant institution, or a loss of trust in the user or their organisation (via processes also established in accreditation of the ARA by the National Data Custodian).
Recommendation 7.4
The Australian Government should make provision, in select circumstances as approved by the funding Minister, for the National Data Custodian to pay for access or linkage to private sector datasets (Recommendation 9.4).
Equally, the National Data Custodian may consider applying charges for access to National Interest Datasets where this would not be inconsistent with the public interest purpose of the National Interest Dataset.
It is expected this would not be a common occurrence, in either case.
Recommendation 8.1
New Commonwealth legislation — the Data Sharing and Release Act — should be passed drawing on the full range of Commonwealth powers to regulate digital data, in order to authorise the better sharing and release of data.
The new Act should also establish the Comprehensive Right of consumers to access their data from government and private data holders alike, for the purposes of improving the services that are offered to them by alternative providers.
Recommendation 8.2
The Data Sharing and Release Act should establish the risk-based approach to data sharing and release and accompanying institutional frameworks.
- All non-sensitive data held by agencies and Accredited Release Authorities (ARAs) should be explicitly presumed to be made public, consistent with the Australian Government’s Public Data Policy Statement.
- Data custodians and ARAs would be authorised to provide sensitive data to trusted users in a secure environment, with de-identification where necessary for risk management of the data.
The National Data Custodian should have the authority to issue guidance on how the risks of all sharing of identifiable data between entities should be managed. This guidance should be updated where it judges the risks have shifted.
Recommendation 8.3
The Data Sharing and Release Act (DSR Act) would, where possible, override secrecy provisions or restrictions on use that prevent original custodians actively providing access to data to other public sector data custodians and Accredited Release Authorities (ARAs).
Access should be governed by Data Sharing Agreements that embed the trusted user principles, actively assist data sharing and create clarity of understanding amongst the parties. The National Data Custodian (NDC) should issue a model Data Sharing Agreement early in its life, and update it from time to time.
The DSR Act should establish modern, clear and supportive standards — the new ‘rules of the game’ — for data sharing and release. The Commonwealth Privacy Act would continue to apply, as well as any residual obligations emanating from the original data custodian’s legislation.
Existing protections would remain on datasets that do not utilise the DSR Act, in order to ensure there is no gap between the accountability obligations on original public sector data custodians and the ARA.
In limited exceptional circumstances as the DSR Act transitions to becoming nationally effective, it may be necessary to provide access to data shared under the new Act to a party that has yet to adopt its provisions. The NDC should be provided with the power to use a disallowable instrument to allow access or sharing for such transitional purposes.
Recommendation 8.4
The Australian Government’s Protective Security Policy Framework (and equivalent State and Territory policies) should be amended to recognise that the risk and therefore the classification needed for data can be reduced by:
- transforming a dataset, for example through de-identification, such that the risks of misuse on dataset release are reduced
- only making the transformed data available to trusted researchers in a secure computing environment, with usage monitored and output checked for disclosiveness.
This would align the Protective Security Policy Framework with the current legal environment.
The Australian Government should consider doing this as part of its response to the Belcher Review.
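The reform package leaves the choice of de-identification technique to data custodians and the National Data Custodian’s guidance. As one illustration of the kind of transformation Recommendation 8.4 contemplates, the toy sketch below applies k-anonymity-style generalisation and suppression; any real release would demand far more careful treatment.

```python
from collections import Counter

# Toy k-anonymity-style transformation: drop direct identifiers, generalise
# age into ten-year bands, coarsen postcodes, and suppress records whose
# quasi-identifier combination occurs fewer than k times. Illustrative only.
def deidentify(records: list, k: int = 3) -> list:
    generalised = [
        {"age_band": f"{(r['age'] // 10) * 10}-{(r['age'] // 10) * 10 + 9}",
         "postcode": r["postcode"][:2] + "XX",  # coarsened postcode
         "diagnosis": r["diagnosis"]}           # retained analytic value
        for r in records                        # names etc. simply not copied
    ]
    counts = Counter((g["age_band"], g["postcode"]) for g in generalised)
    return [g for g in generalised if counts[(g["age_band"], g["postcode"])] >= k]

sample = [{"name": "A", "age": 34, "postcode": "2000", "diagnosis": "flu"}] * 3
print(deidentify(sample))
```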
Recommendation 8.5
Legislative reform to implement the Commission’s recommendations would need to be undertaken in two parts, moving forward together:
- the first part is the passage of the Data Sharing and Release Act (DSR Act) itself, that authorises to the greatest extent practical in a single statute, the sharing and release of data for the purposes of the Act and removes existing Commonwealth and State restrictions on integrating, linking and research uses of datasets by Accredited Release Authorities
- the second part is a further legislative amendment process that may be necessary, depending on the particular characteristics of, for example, National Interest Datasets, in order to address residual restrictions on the use of specific datasets that were not able to be effected by the DSR Act itself.
The National Data Custodian should be asked to identify residual legislative restrictions that need removal in its consideration of National Interest Datasets.
Recommendation 8.6
The Data Sharing and Release Act (DSR Act) should have national reach — to create a simplified and transparent one-stop location for a national framework for data volunteered, declared or acquired for inclusion under the DSR Act.
The Act should allow for the acquisition of private datasets via disallowable instruments as part of the process of creating National Interest Datasets (NIDs). Acquisition should only occur on just terms after parliamentary scrutiny determines the benefits are demonstrable.
An initial set of NIDs should be identified by the National Data Custodian to accompany the DSR Bill, following processes to establish additionality and public interest.
The DSR Act should apply Commonwealth privacy legislation to datasets managed by Accredited Release Authorities where feasible. It should be drafted with reference to (and with the intention of being consistent with) the Data Sharing (Government Sector) Act 2015 (NSW) and the Public Sector (Data Sharing) Act 2016 (SA) to the extent possible.
Recommendation 8.7
The Australian Competition and Consumer Commission (ACCC) and the Office of the Australian Information Commissioner should enter into working arrangements with each other, industry ombudsmen and other relevant bodies at all levels of government to support a ‘no wrong door’ approach to how individuals (including small businesses) pursue complaints or queries regarding their rights as consumers to data held on them.
Where an industry data-specification agreement (Recommendation 5.2) seeks to use a recognised industry ombudsman to address consumer complaints, this should be considered by the ACCC as part of its acceptance or rejection of a proposed industry agreement.
Finding 9.1
There is no single pricing approach that could act as a model for guiding public sector data release decisions.
The identification by agencies of the grounds for undertaking each release would have a direct bearing on the choice of price approach.
Cost recovery, long considered to be the default option in the public sector, is only one of a range of approaches and not necessarily to be preferred.
Recommendation 9.1
The emphasis for government agencies in handling data should be on making data available at a ‘fit for release’ standard in a timely manner. Beyond this, agencies should only transform data beyond the basic level if there is a clearly identified public interest purpose or legislative requirement for the agency to undertake additional transformation, or:
- the agency can perform the transformation more efficiently than either any private sector entities or end users of the data; and
- users have a demonstrable willingness to pay for the value added product; and
- the agency has the capability and capacity in?house or under existing contract; and
- the information technology upgrade risk is assessed and found to be small.
Recommendation 9.2
The pricing of public sector datasets for public interest research purposes should be the subject of an independent review.
Recommendation 9.3
Minimally processed public sector datasets should be made freely available or priced at marginal cost of release.
Where data has been transformed, the transformed dataset may be priced above the marginal cost of release. Data custodians should experiment with low prices initially to gauge the price sensitivity of demand, with a view to sustaining lower prices if demand proves to be reasonably price sensitive.
Recommendation 9.4
Funding should be provided to agencies for the curation and release of those datasets determined through the central data agencies’ public request process (Recommendation 6.5) to be of high value with a strong public interest case for their release. This funding should be limited and supplemental in nature, payable only in the event that agencies make the datasets available through public release.
Funding would also be required for the Office of the National Data Custodian, for functions undertaken by Accredited Release Authorities and, in some cases, for the purchase and ongoing maintenance of National Interest Datasets. Additional responsibilities required of the Australian Competition and Consumer Commission in regard to the Comprehensive Right should also be resourced.
Aside from these purposes, no additional supplementary funding appears warranted for agencies’ activities related to their data holdings as a consequence of this report.
Recommendation 10.1
The Australian Government should engage actively with the community on matters related to data availability and use.
At a minimum, the National Data Custodian should regularly convene forums for consultation, to ensure community concerns about increased use of data are addressed.
Recommendation 10.2
The Australian Government should set an ambitious — but realistic — timeline for implementation of the Commission’s recommended reforms.
A set of actions in this Report can be completed in 2017, to ensure they deliver benefits to the community in the short term.
Passage of the Data Sharing and Release Act and supporting Part 2 amendments for an initial suite of National Interest Datasets should be in place by the end of 2018.
A central agency with data responsibility should actively support the progress made against the implementation plan until the Office of the National Data Custodian is legislatively established.
Once established, the National Data Custodian should assume responsibility for monitoring and evaluating the effects of the new data Framework, reporting annually on progress and with a formal evaluation after three years’ experience of the Framework’s reforms.
Recommendation 10.3
Government agencies should adopt and implement data management standards to support increased data availability and use as part of their implementation of the Australian Government’s Public Data Policy Statement.
These standards should:
- be published on agency websites
- be adopted in consultation with data users and draw on existing standards where feasible
- deal effectively with sector-specific differences in data collection and use
- support the sharing of data across Australian governments and agencies
- enable all digitally collected data and metadata to be available in commonly used machine-readable formats (that are relevant to the function or field in which the data was collected or would likely be most commonly used), including where relevant and authorised, for machine-to-machine interaction.
Policy documents outlining the standards and how they would be implemented should be available in draft form for consultation by the end of 2017, with standards implemented by the end of 2020.
Agencies that do not adopt agreed sector-specific standards would be noted as not fully implementing the Australian Government’s Public Data Policy and would be required to work under a nominated Accredited Release Authority to improve the quality of their data holdings.
Recommendation 10.4
The private sector is likely to be best placed to determine sector-specific standards for data sharing between firms, where required by reforms recommended under the new data Framework.
In the event that cooperative approaches to determining standards and data quality do not emerge or adequately enable data access and transfer (including where sought by consumers), governments should facilitate this.
The release of the Productivity Commission report coincides with a very interesting, and relevant, briefing paper by the Australian Strategic Policy Institute titled Cyber information sharing: lessons for Australia. The synopsis of the paper provides:
Sharing information on the cyber landscape is a necessary and efficient way to benefit from mutual exposure to cyber threats and boost collective defensive capacity.
The US has been pursuing cyber information sharing since the late 1990s, when the federal government directed the creation of public–private partnerships for critical infrastructure protection. The now decades-long development of a variety of information sharing models in the US provides case studies and lessons for the Australian cybersecurity community as it pursues deeper information sharing mechanisms.
This paper draws on the examples, issues and recommendations discussed in the MITRE Corporation report, Building a national cyber information sharing ecosystem, by Bruce J Bakis and Edward D Wang. This paper offers recommendations for the development of Australia’s national cyber information sharing system.
And, again by coincidence, the Economist has a leading story on data titled “The world’s most valuable resource is no longer oil, but data” and a briefing titled “Data is giving rise to a new economy”, the underlying thesis being that new thinking is required to properly regulate the use of data, much as new thinking was required to deal with the monopolistic behaviour of oil producers in the early twentieth century. As always, both are a thoughtful consideration of a tricky issue.
The “Data is giving rise to a new economy” article provides:
Fuel of the future
Data is giving rise to a new economy
AN OIL refinery is an industrial cathedral, a place of power, drama and dark recesses: ornate cracking towers its gothic pinnacles, flaring gas its stained glass, the stench of hydrocarbons its heady incense. Data centres, in contrast, offer a less obvious spectacle: windowless grey buildings that boast no height or ornament, they seem to stretch to infinity.
Yet the two have much in common. For one thing, both are stuffed with pipes. In refineries these collect petrol, propane and other components of crude oil, which have been separated by heat. In big data centres they transport air to cool tens of thousands of computers which extract value—patterns, predictions and other insights—from raw digital information.
Both also fulfil the same role: producing crucial feedstocks for the world economy. Whether cars, plastics or many drugs—without the components of crude, much of modern life would not exist. The distillations of data centres, for their part, power all kinds of online services and, increasingly, the real world as devices become more and more connected.
Data are to this century what oil was to the last one: a driver of growth and change. Flows of data have created new infrastructure, new businesses, new monopolies, new politics and—crucially—new economics. Digital information is unlike any previous resource; it is extracted, refined, valued, bought and sold in different ways. It changes the rules for markets and it demands new approaches from regulators. Many a battle will be fought over who should own, and benefit from, data.
There is an awful lot to scrap over. IDC, a market-research firm, predicts that the “digital universe” (the data created and copied every year) will reach 180 zettabytes (180 followed by 21 zeros) in 2025 (see chart). Pumping it all through a broadband internet connection would take over 450m years. To speed the transfer into its data centres, Amazon, an e-commerce giant with a fast-growing cloud-computing arm, uses trucks pulling shipping containers each packed with storage devices holding 100 petabytes (a mere 15 zeros). To ingest it all, firms are speedily building data refineries. In 2016 Amazon, Alphabet and Microsoft together racked up nearly $32bn in capital expenditure and capital leases, up by 22% from the previous year, according to the Wall Street Journal.
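As an aside from me rather than the article, the 450m-year figure is easy to sanity-check. Assuming a 100 Mbit/s broadband link (my assumption; the article does not state a speed), the arithmetic lands at roughly 456 million years:

```python
# Sanity check of the "over 450m years" claim, assuming a 100 Mbit/s link.
digital_universe_bytes = 180e21           # 180 zettabytes
link_bits_per_second = 100e6              # assumed broadband speed
seconds = digital_universe_bytes * 8 / link_bits_per_second
years = seconds / (365.25 * 24 * 3600)
print(f"{years / 1e6:.0f} million years")  # -> about 456 million years
```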
The quality of data has changed, too. They are no longer mainly stocks of digital information—databases of names and other well-defined personal data, such as age, sex and income. The new economy is more about analysing rapid real-time flows of often unstructured data: the streams of photos and videos generated by users of social networks, the reams of information produced by commuters on their way to work, the flood of data from hundreds of sensors in a jet engine.
From subway trains and wind turbines to toilet seats and toasters—all sorts of devices are becoming sources of data. The world will bristle with connected sensors, so that people will leave a digital trail wherever they go, even if they are not connected to the internet. As Paul Sonderegger, a big-data strategist at Oracle, a software-maker, puts it: “Data will be the ultimate externality: we will generate them whatever we do.”
It is what you know
Most important, the value of data is increasing. Facebook and Google initially used the data they collected from users to target advertising better. But in recent years they have discovered that data can be turned into any number of artificial-intelligence (AI) or “cognitive” services, some of which will generate new sources of revenue. These services include translation, visual recognition and assessing someone’s personality by sifting through their writings—all of which can be sold to other firms to use in their own products.
Although signs of the data economy are everywhere, its shape is only now becoming clear. And it would look pretty familiar to J.R. Ewing. There are the data majors, a growing number of wildcatters and plenty of other firms trying to get a piece of the action. All are out to exploit a powerful economic engine called the “data-network effect”—using data to attract more users, who then generate more data, which help to improve services, which attracts more users.
The majors pump from the most bountiful reservoirs. The more users write comments, “like” posts and otherwise engage with Facebook, for example, the more it learns about those users and the better targeted the ads on newsfeeds become. Similarly, the more people search on Google, the better its search results turn out.
These firms are always looking for new wells of information. Facebook gets its users to train some of its algorithms, for instance when they upload and tag pictures of friends. This explains why its computers can now recognise hundreds of millions of people with 98% accuracy. Google’s digital butler, called “Assistant”, gets better at performing tasks and answering questions the more it is used.
Uber, for its part, is best known for its cheap taxi rides. But if the firm is worth an estimated $68bn, it is in part because it owns the biggest pool of data about supply (drivers) and demand (passengers) for personal transportation. Similarly, for most people Tesla is a maker of fancy electric cars. But its latest models collect mountains of data, which allow the firm to optimise its self-driving algorithms and then update the software accordingly. By the end of last year, the firm had gathered 1.3bn miles-worth of driving data—orders of magnitude more than Waymo, Alphabet’s self-driving-car division.
“Data-driven” startups are the wildcatters of the new economy: they prospect for digital oil, extract it and turn it into clever new services, from analysing X-rays and CAT scans to determining where to spray herbicide on a field. Nexar, an Israeli startup, has devised a clever way to use drivers as data sources. Its app turns their smartphones into dashcams that tag footage of their travels via actions they normally perform. If many unexpectedly hit the brake at the same spot on the road, this signals a pothole or another obstacle. As compensation for using Nexar’s app, drivers get a free dashcam and services, such as a detailed report if they have an accident. The firm’s goal is to offer all sorts of services that help drivers avoid accidents—and for which they, or their insurers, will pay. One such is alerts about potholes or when a car around a blind corner suddenly stops.
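The pothole inference attributed to Nexar is, at heart, simple spatial aggregation. The toy sketch below is my own reconstruction of the idea; the firm’s actual pipeline is not described in that level of detail.

```python
from collections import Counter

# Toy reconstruction of the pothole-inference idea: bucket hard-braking
# events into coarse GPS cells and flag cells where many independent drivers
# brake at the same spot. Cell size and threshold are arbitrary assumptions.
def flag_obstacles(brake_events: list, min_events: int = 5) -> list:
    cells = Counter(
        (round(lat, 3), round(lon, 3))  # roughly 100 m cells
        for lat, lon in brake_events
    )
    return [cell for cell, n in cells.items() if n >= min_events]

events = [(-33.8688, 151.2093)] * 6 + [(-33.8700, 151.2100)]
print(flag_obstacles(events))  # -> [(-33.869, 151.209)]
```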
Non-tech firms are trying to sink digital wells, too. GE, for instance, has developed an “operating system for the industrial internet”, called Predix, to help customers control their machinery. Predix is also a data-collection system: it pools data from devices it is connected to, mixes these with other data, and then trains algorithms that can help improve the operations of a power plant, when to maintain a jet engine before it breaks down and the like.
As in oil markets, bigger data firms keep taking over smaller ones (see table). But another aspect of the data economy would look strange to dealers in black gold. Oil is the world’s most traded commodity by value. Data, by contrast, are hardly traded at all, at least not for money. That is a far cry from what many had in mind when they talked about data as a “new asset class”, as the World Economic Forum (WEF), the Davos conference-organiser-cum-think-tank, did in a report published in 2011. The data economy, that term suggests, will consist of thriving markets for bits and bytes. But as it stands, it is mostly a collection of independent silos.
Keep it to yourself
This absence of markets is the result of the same factors that have given rise to firms. All sorts of “transaction costs” on markets—searching for information, negotiating deals, enforcing contracts and so on—make it simpler and more efficient simply to bring these activities in-house. Likewise, it is often more profitable to generate and use data inside a company than to buy and sell them on an open market.
Their abundance notwithstanding, flows of data are not a commodity: each stream of information is different, in terms of timeliness, for example, or how complete it may be. This lack of “fungibility”, in economic lingo, makes it difficult for buyers to find a specific set of data and to put a price on it: the value of each sort is hard to compare with other data. There is a disincentive to trade as each side will worry that it is getting the short end of the stick.
Researchers have only just begun to develop pricing methodologies, something Gartner, a consultancy, calls “infonomics”. One of its pioneers, Jim Short of the University of California in San Diego, studies cases where a decision has been made about how much data are worth. One such involves a subsidiary of Caesars Entertainment, a gambling group, that filed for bankruptcy in 2015. Its most valuable asset, at $1bn, was determined to be the data it is said to hold on the 45m customers who had joined the company’s customer-loyalty programme over the previous 17 years.
The pricing difficulty is an important reason why one firm might find it simpler to buy another, even if it is mainly interested in data. This was the case in 2015 when IBM reportedly spent $2bn on the Weather Company, to get its hands on mountains of weather data as well as the infrastructure to collect them. Another fudge is barter deals: parts of Britain’s National Health Service and DeepMind, Alphabet’s AI division, have agreed to swap access to anonymous patient data for medical insights extracted from them.
The fact that digital information, unlike oil, is also “non-rivalrous”, meaning that it can be copied and used by more than one person (or algorithm) at a time, creates further complications. It means that data can easily be used for other purposes than those agreed. And it adds to the confusion about who owns data (in the case of an autonomous car, it could be the carmaker, the supplier of the sensors, the passenger and, in time, if self-driving cars become self-owning ones, the vehicle itself).
“Trading data is tedious,” says Alexander Linden of Gartner. As a result, data deals are often bilateral and ad hoc. They are not for the fainthearted: data contracts often run over dozens of pages of dense legalese, with language specifying allowed uses and how data are to be protected. A senior executive of a big bank recently told Mr Linden that he has better things to do than sign off on such documents—even if the data have great value.
In the case of personal data, things are even more tricky. “A regulated national information market could allow personal information to be bought and sold, conferring on the seller the right to determine how much information is divulged,” Kenneth Laudon of New York University wrote in an influential article entitled “Markets and Privacy” in 1996. More recently, the WEF proposed the concept of a data bank account. A person’s data, it suggested, should “reside in an account where it would be controlled, managed, exchanged and accounted for”.
The idea seems elegant, but neither a market nor data accounts have materialised yet. The problem is the opposite to that with corporate data: people give personal data away too readily in return for “free” services. The terms of trade have become the norm almost by accident, says Glen Weyl, an economist at Microsoft Research. After the dotcom bubble burst in the early 2000s, firms badly needed a way to make money. Gathering data for targeted advertising was the quickest fix. Only recently have they realised that data could be turned into any number of AI services.
Slave to the algorithm
Whether this makes the trade of data for free services an unfair exchange largely depends on the source of the value of these services: is it the data or the algorithms that crunch them? Data, argues Hal Varian, Google’s chief economist, exhibit “decreasing returns to scale”, meaning that each additional piece of data is somewhat less valuable and at some point collecting more does not add anything. What matters more, he says, is the quality of the algorithms that crunch the data and the talent a firm has hired to develop them. Google’s success “is about recipes, not ingredients.”
That may have been true in the early days of online search but seems wrong in the brave new world of AI. Algorithms are increasingly self-teaching—the more and the fresher data they are fed, the better. And marginal returns from data may actually go up as applications multiply, says Mr Weyl. After a ride-hailing firm has collected enough data to offer one service—real-time traffic information, say—more data may not add much value. But if it keeps collecting data, at some point it may be able to offer more services, such as route planning.
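The two views can be captured in a stylised numerical sketch. Below, a single service’s value grows logarithmically in the amount of data (Mr Varian’s diminishing returns), while crossing a data threshold unlocks a further service whose value curve starts climbing afresh (Mr Weyl’s point). The functional forms, thresholds and numbers are purely illustrative assumptions, not anything either economist has endorsed.

```python
import math

def single_service_value(n):
    """One-service view: value grows logarithmically, so each extra
    data point is worth a little less than the one before."""
    return math.log(1 + n)

def multi_service_value(n, thresholds=(0, 10_000, 100_000)):
    """Multi-service view: enough data unlocks a new service whose own
    value curve starts climbing afresh, so marginal returns can rise."""
    return sum(math.log(1 + max(0, n - t)) for t in thresholds)

for n in (1_000, 10_000, 100_000, 1_000_000):
    d_single = single_service_value(n + 1) - single_service_value(n)
    d_multi = multi_service_value(n + 1) - multi_service_value(n)
    print(f"n={n:>9,}  marginal (one service)={d_single:.2e}  "
          f"marginal (multi-service)={d_multi:.2e}")
```

Running this shows the single-service marginal value shrinking steadily, while the multi-service marginal value spikes each time a threshold is crossed: the same data point is worth little to a mature service but a lot to a nascent one.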
Such debates, as well as the lack of a thriving trade in data, may be teething problems. It took decades for well-functioning markets for oil to emerge. Ironically, it was Standard Oil, the monopoly created by John D. Rockefeller in the late-19th century, that speeded things up: it helped create the technology and—the firm’s name was its programme—the standards that made it possible for the new resource to be traded.
Markets have long existed for personal data that are of high value or easy to standardise. So-called “data brokers” do a swift trade in certain types of data. In other areas, markets, or something akin to them, are starting to develop. Oracle, which dominates the market for corporate databases, for example, is developing what amounts to an exchange for data assets. It wants its customers to trade data, combine them with sets provided by Oracle and extract insights—all in the safe environment of the firm’s computing cloud, where it can make sure, among other things, that information is not misused. Cognitive Logic, a startup, has come up with a similar product, but leaves the data in separate IT systems.
Other young firms hope to give consumers more of a stake in their data. Citizenme allows users to pull all their online information together in one place and earn a small fee if they share it with brands. Datacoup, another startup, is selling insights from personal data and passing on part of the proceeds to its users.
So far none of these efforts has really taken off; those focusing on personal data in particular may never do so. By now consumers and online giants are locked in an awkward embrace. People do not know how much their data are worth, nor do they really want to deal with the hassle of managing them, says Alessandro Acquisti of Carnegie Mellon University. But they are also showing symptoms of what is called “learned helplessness”: terms and conditions for services are often impenetrable, and users have no choice but to accept them (smartphone apps quit immediately if one does not tap on “I agree”).
For their part, online firms have become dependent on the drug of free data: they have no interest in fundamentally changing the deal with their users. Paying for data and building expensive systems to track contributions would make data refiners much less profitable.
Data would not be the only important resource which is not widely traded; witness radio spectrum and water rights. But for data this is likely to create inefficiencies, argues Mr Weyl. If digital information lacks a price, valuable data may never be generated. And if data remain stuck in silos, much value may never get extracted. The big data refineries have no monopoly on innovation; other firms may be better placed to find ways to exploit information.
The dearth of data markets will also make it more difficult to solve knotty policy problems. Three stand out: antitrust, privacy and social equality. The most pressing one, arguably, is antitrust—as was the case with oil. In 1911 America’s Supreme Court upheld a lower-court ruling to break up Standard Oil, which then controlled around 90% of oil refining in the country.
Some are already calling for a similar break-up of the likes of Google, including Jonathan Taplin of the University of Southern California in his new book “Move Fast and Break Things”. But such a radical remedy would not really solve the problem. A break-up would be highly disruptive and slow down innovation. It is likely that a Googlet or a Babyface would quickly become dominant again.
Yet calls for action are growing. The “super-platforms” wield too much power, says Ariel Ezrachi of the University of Oxford, who recently published a book entitled “Virtual Competition” with Maurice Stucke of the University of Tennessee. With many more and fresher data than others, he argues, they can quickly detect competitive threats. Their deep pockets allow them to buy startups that could one day become rivals. They can also manipulate the markets they host by, for example, having their algorithms quickly react so that competitors have no chance of gaining customers by lowering prices. “The invisible hand is becoming a digital one,” says Mr Ezrachi.
Beware the digital hand
At a minimum, trustbusters have to sharpen their tools for the digital age. The European Commission did not block the merger of Facebook and WhatsApp. It argued that, although the two firms operated the largest text-messaging services, plenty of others were around, and that the deal would not add to Facebook’s data hoard because WhatsApp did not collect much information about its users. But Facebook was buying a firm it feared might evolve into a serious rival. WhatsApp had built an alternative “social graph”, the network of connections between friends that is Facebook’s most valuable asset. During the merger’s approval process Facebook pledged that it would not combine the two user bases, but it started doing so last year, which has led the commission to threaten it with fines.
The frustration with Facebook helps explain why some countries in Europe have already started to upgrade their competition laws. In Germany, legislation winding through parliament would allow the Federal Cartel Office to intervene in cases in which network effects and data assets play a role. The agency has already taken a special interest in the data economy: it has launched an investigation into whether Facebook is abusing its dominant position to impose certain privacy policies. Andreas Mundt, its president, wants to do more: “Can we further optimise our investigation techniques? How can we better integrate dynamic effects into our analyses?”
A good general rule for regulators is to be as inventive as the companies they keep an eye on. In a recent paper Messrs Ezrachi and Stucke proposed that antitrust authorities should operate what they call “tacit collusion incubators”. To find out whether pricing algorithms manipulate markets or even collude, regulators should run simulations on their own computers.
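What such an incubator might look for can be sketched in a few lines. In the toy market below, two sellers run simple reactive pricing rules: match any rival’s cut immediately, otherwise creep back upward. Because a cut is neutralised at once, undercutting never wins customers for long, and prices settle near the ceiling rather than near cost, with no agreement ever struck. The market model and all numbers are illustrative assumptions, not the authors’ actual proposal.

```python
COST = 5.00       # assumed marginal cost, i.e. the competitive price
CEILING = 10.00   # assumed monopoly-level price

def react(own, rival):
    """Match a rival's price cut immediately; otherwise edge upward."""
    if rival < own:
        return max(rival, COST)        # neutralise the cut at once
    return min(own * 1.01, CEILING)    # creep back toward the ceiling

a, b = CEILING, 8.00   # seller b opens with an aggressive undercut
for _ in range(200):
    a = react(a, b)
    b = react(b, a)

print(f"settled prices: a={a:.2f}, b={b:.2f} (competitive level: {COST:.2f})")
```

Despite the opening undercut, both prices drift back to 10.00, double the competitive level; running many such rule combinations is exactly the kind of experiment a regulator’s simulation could automate.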
Another idea is to promote alternatives to centralised piles of data. Governments could give away more of the data they collect, creating opportunities for smaller firms. They could also support “data co-operatives”. In Switzerland a project called Midata collects health data from patients, who can then decide whether they want them to be included in research projects.
Distributing the data
For some crucial classes of data, sharing may even need to be made mandatory. Ben Thompson, who publishes Stratechery, a newsletter, recently suggested that dominant social networks should be required to allow access to their social graphs. Instagram, a photo-sharing service which has also been swallowed by Facebook, got off the ground by having new users import the list of their followers from Twitter. “Social networks have long since made this impossible, making it that much more difficult for competitors to arise,” Mr Thompson points out.
Mandatory data sharing is not unheard of: Germany requires insurers jointly to maintain a set of statistics, including on car accidents, which smaller firms would not be able to compile on their own. The European Union’s new General Data Protection Regulation (GDPR), which will start to apply in May 2018, requires online services to make it easy for customers to transfer their information to other providers and even competitors.
But “data portability”, as well as data sharing, highlights the second policy problem: the tension between data markets and privacy. If personal data are traded or shared, they are more likely to leak. To reduce this risk, the GDPR strengthens people’s control over their data: it requires that firms get explicit consent for how they use data. Fines for violations will be steep: up to 4% of global revenues or €20m ($22m), whichever is greater.
Such rules will be hard to enforce in a world in which streams of data are mixed and matched. And there is another tension between tighter data protection and more competition: big companies not only have greater means to comply with costly privacy regulation; such regulation also allows them to control data more tightly.
In time new technology, which goes beyond simple, easy-to-undo anonymisation, may ease such tensions. Bitmark, another startup, uses the same “blockchain” technology behind bitcoin, a digital currency, to keep track of who has accessed data. But legal innovation will be needed too, says Viktor Mayer-Schönberger of the University of Oxford. He and other data experts argue that not only the collection of data but also its use should be regulated. Just as foodmakers are barred from using certain ingredients, online firms could be prohibited from using certain data, or from using them in ways that could harm an individual. This, he argues, would shift responsibility towards data collectors and data users, who would be held accountable for how they manage data, rather than relying on obtaining individual consent.
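The core data structure behind such access-tracking tools is an append-only, hash-chained log: each entry commits to the previous one, so rewriting history is detectable. The sketch below shows generic hash-chaining under our own assumptions; it is not Bitmark’s actual design, and the accessor and dataset names are invented.

```python
import hashlib
import json
import time

def append_entry(log, accessor, dataset):
    """Record an access event, chaining it to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"accessor": accessor, "dataset": dataset,
             "time": time.time(), "prev": prev_hash}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)

def verify(log):
    """Recompute every hash; any retroactive edit breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        if entry["prev"] != prev_hash:
            return False
        body = {k: v for k, v in entry.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "hospital-a", "patient-records-2017")   # names invented
append_entry(log, "research-lab", "patient-records-2017")
print(verify(log))                    # True
log[0]["accessor"] = "someone-else"   # tamper with history...
print(verify(log))                    # ...and verification fails: False
```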
Such “use-based” regulation would be just as hard to police as the conventional rules of notice and consent which currently govern what data are collected and how they are used. It is also likely to worsen what some see as the third big challenge of the data economy in its current form: that some will benefit far more than others, both socially and geographically.
For personal data, at least, the current model seems barely sustainable. As data become more valuable and the data economy grows in importance, data refineries will make all the money. Those who generate the data may balk at an unequal exchange that only sees them getting free services. The first to point this out was Jaron Lanier, who also works for Microsoft Research, in his book “Who Owns the Future?”, published in 2013.
Mr Weyl, who collaborates with Mr Lanier and is writing a book about renewing liberal economics with Eric Posner of the University of Chicago, advances another version of this argument: ultimately, AI services are not provided by algorithms but by the people who generate the raw material. “Data is labour,” says Mr Weyl, who is working on a system to measure the value of individual data contributions to create a basis for a fairer exchange.
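Mr Weyl’s system is not described, but a common illustrative approach to valuing individual contributions is leave-one-out: measure how much a model’s accuracy drops when one person’s data are withheld. In the sketch below, the threshold “model”, the contributors’ names and all the data points are hypothetical, chosen only to make the arithmetic visible.

```python
def train_threshold(points):
    """Toy 'model': classify x as 1 if it exceeds the midpoint of the
    two class means. Assumes both labels appear in the training data."""
    zeros = [x for x, y in points if y == 0]
    ones = [x for x, y in points if y == 1]
    return (sum(zeros) / len(zeros) + sum(ones) / len(ones)) / 2

def accuracy(threshold, points):
    return sum((x > threshold) == bool(y) for x, y in points) / len(points)

# (value, label) pairs contributed by named users -- all invented.
contributions = {
    "alice": [(1.0, 0), (2.0, 0)],
    "bob":   [(8.0, 1), (9.0, 1)],
    "carol": [(5.2, 1), (4.8, 0)],   # borderline points
}
test_set = [(1.5, 0), (4.9, 0), (5.1, 1), (8.5, 1)]

all_points = [p for ps in contributions.values() for p in ps]
full = accuracy(train_threshold(all_points), test_set)
for user in contributions:
    rest = [p for u, ps in contributions.items() if u != user for p in ps]
    loo = accuracy(train_threshold(rest), test_set)
    print(f"{user}: marginal contribution = {full - loo:+.2f} accuracy")
```

On this toy data, withholding alice’s or bob’s points costs the model 0.25 in accuracy while withholding carol’s costs nothing, which illustrates the harder point: individual contributions are worth very different amounts, and any “data as labour” payout scheme would have to price that.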
Data workers of the world, unite!
The problem, says Mr Weyl, is getting people to understand that their data have value and that they are due some compensation. “We need some sort of digital labour movement,” he says. It will take even more convincing to get the “siren servers”, as Mr Lanier calls the data giants, to change their ways, as they benefit handsomely from the status quo.
A more equal geographic distribution of the value extracted from data may be even more difficult to achieve. Currently, most big data refineries are based in America or are controlled by American firms. As the data economy progresses, this also hardly seems sustainable. Past skirmishes between America and Europe over privacy give a taste of things to come. In China draft regulations require firms to store all “critical data” they collect on servers based in the country. Conflicts over control of oil have scarred the world for decades. No one yet worries that wars will be fought over data. But the data economy has the same potential for confrontation.