
How Australian Medical Practices Can Use AI Without Breaking the Law

Published: 28 March 2026 | Reading time: 18 minutes | Author: AyeTech AI & Compliance Team

Key Takeaways

  • ChatGPT, Microsoft Copilot, Google Gemini, and most AI tools process data in the United States. This makes them non-compliant for Australian patient health information.
  • Three layers of Australian law prohibit sending patient data offshore: The Privacy Act 1988 (APP 8), the My Health Records Act 2012 (Section 77), and the NSW Health Records and Information Privacy Act 2002 (HPP 14).
  • Staff policies alone cannot be trusted. A single employee pasting patient data into ChatGPT constitutes a potential notifiable data breach, and that risk is too high to manage with policy documents alone.
  • There are two compliant solutions available right now: Local on-premises AI (NVIDIA DGX Spark / GB10) and sovereign cloud AI (Anthropic Claude on AWS Bedrock in Sydney).
  • Medical practices that act now gain a competitive advantage — those that wait risk both compliance penalties and falling behind on productivity.

The Problem: Every Major AI Tool Sends Data Offshore

AI is transforming every industry. Medical practices see it every day — staff summarising notes faster, letters drafted in seconds, administrative backlogs cleared in minutes. The productivity gains are real and they are enormous. Every practice owner in NSW is asking the same question: when can we start using AI?

The answer, for most AI tools available today, is blunt: you cannot. Not with patient data. Not legally.

  • 0 major AI chatbots offer Australian-only data processing
  • 3 separate laws prohibit offshore health data transfers
  • $50M is the maximum penalty for serious privacy breaches
  • 78% of employees bring their own AI tools to work without approval

The reason is straightforward. ChatGPT processes data in the United States. Google Gemini processes data on Google's global infrastructure, primarily in the US. Anthropic's direct Claude API processes data in the US. Microsoft recently announced in-country processing for Copilot in Australia, but it is opt-in, qualified with “under normal operations,” and not yet independently verified for healthcare compliance. The reality is that the vast majority of AI tools your staff want to use either definitely send data offshore, or cannot yet provide the ironclad guarantees that Australian healthcare law demands.

For a retail business or marketing agency, this might be an acceptable risk with proper governance. For a medical practice handling patient health information, it is not. Australian healthcare operates under some of the strictest data protection laws in the world, and those laws were written specifically to prevent patient data from leaving Australian borders.

The Core Issue Is Simple

Patient health information is the most heavily regulated category of personal data in Australia. Three separate layers of legislation — federal, federal health-specific, and NSW state — all restrict or prohibit sending this data overseas. No major consumer or enterprise AI tool currently satisfies all three. Until the AI industry builds Australian data centres or you deploy AI locally, the law is clear: patient data stays in Australia.

The Three Laws That Block AI in Australian Healthcare

Medical practices in NSW operate under a triple layer of data protection that makes offshore AI processing a legal minefield. Understanding these laws is critical because they do not just suggest you keep data in Australia — they create enforceable obligations with serious penalties when you do not.

Law 1: The Privacy Act 1988 (Commonwealth)

The Privacy Act 1988 is the foundational federal privacy legislation. It applies to all private sector organisations with an annual turnover of more than $3 million, as well as all health service providers regardless of turnover. For medical practices, there is no turnover threshold — every GP clinic, specialist practice, pathology lab, and allied health provider is covered.

Three Australian Privacy Principles (APPs) are directly relevant to AI:

  • APP 6 — Use or disclosure of personal information: Personal information can only be used or disclosed for the purpose it was collected for, or a directly related secondary purpose the individual would reasonably expect. Feeding patient data into an AI chatbot for processing on overseas servers is not a purpose any patient consented to or would reasonably expect.
  • APP 8 — Cross-border disclosure of personal information: Before disclosing personal information to an overseas recipient, an organisation must take reasonable steps to ensure the recipient does not breach the APPs, or ensure one of the limited exceptions applies. AI tool providers based in the US do not operate under the APPs, and their terms of service do not provide equivalent protections.
  • APP 11 — Security of personal information: An organisation must take reasonable steps to protect personal information from misuse, interference, loss, and unauthorised access, modification, or disclosure. Allowing patient data to be processed on servers outside your control, in a foreign jurisdiction, with no guaranteed data handling protections, does not meet this standard.

Enhanced Penalties Since 2022

The Privacy Legislation Amendment (Enforcement and Other Measures) Act 2022 dramatically increased penalties. Serious or repeated privacy breaches can now attract penalties of up to $50 million, three times the benefit obtained from the breach, or 30% of adjusted turnover — whichever is greatest. For a medical practice, even a single incident involving patient data in an offshore AI tool could trigger the OAIC's mandatory data breach notification scheme.

Law 2: The My Health Records Act 2012 (Commonwealth)

The My Health Records Act 2012 governs the national digital health records system. Section 77 of the Act is explicit and unambiguous: the System Operator, registered repository operators, registered portal operators, and registered contracted service providers must not hold, take, process, or handle My Health Record information — or records relating to it — outside Australia.

This is one of the clearest and strongest data residency requirements in any Australian legislation. Medical practices that are registered with the My Health Record system — which is the vast majority of practices in Australia — must ensure that no AI tool they use could result in My Health Records data being processed outside of Australia. Any AI tool that sends data to US servers for processing is a direct violation of this requirement.

The penalty for contravening Section 77 is 1,500 penalty units — approximately $495,000 at the current Commonwealth penalty unit rate of $330. The Act also creates additional offences for unauthorised collection, use, or disclosure of health information under Sections 59–76, with penalties of up to 120 penalty units for individuals and 600 penalty units for bodies corporate.

Law 3: Health Records and Information Privacy Act 2002 (NSW)

NSW medical practices face an additional layer of protection under the Health Records and Information Privacy Act 2002. This Act establishes 15 Health Privacy Principles (HPPs) that are stricter than the federal Privacy Act in several areas.

The key principles for AI are:

  • HPP 9 — Limits on use of health information: Health information must only be used for the purpose for which it was collected. Using patient data as input for an offshore AI system is a use that was never contemplated when the information was collected.
  • HPP 10 — Limits on disclosure of health information: Disclosure to a third party (including an AI provider) is restricted to specific circumstances, none of which cover feeding data into consumer or enterprise AI tools.
  • HPP 14 — Transborder data flows: This principle specifically addresses the transfer of health information outside NSW. It requires that the recipient is subject to a law or binding scheme substantially similar to the HPPs, or the individual has consented. US-based AI providers are not subject to laws substantially similar to the NSW HPPs.

What This Means in Practice

For a medical practice in NSW, the combined effect of these three laws is simple: patient health information cannot be processed by AI tools that send data overseas. This is not a grey area. It is not a matter of interpretation. Three separate pieces of legislation all point in the same direction. Until AI tools can guarantee that all data processing occurs within Australian borders, medical practices are legally blocked from using them with patient data.

Why Staff Policies Are Not Enough

Every practice manager reading this is thinking: "We will just create a policy that says staff cannot put patient data into AI tools."

That is not sufficient. Here is why.

Research consistently shows that the vast majority of AI usage in workplaces is completely unsanctioned. Staff use AI tools because they work. A receptionist who discovers that ChatGPT can draft a patient recall letter in 10 seconds instead of 10 minutes is not going to stop using it because of a policy document she signed six months ago. A nurse who finds that Gemini can summarise a complex patient history in moments will use it when the waiting room is full and the doctor needs the summary now.

This is not a question of bad employees. It is human nature. The productivity benefit of AI is so immediate and so significant that expecting every staff member, in every moment of pressure and convenience, to remember and comply with an AI policy is unrealistic.

One Moment of Convenience = One Notifiable Breach

A single instance of a staff member pasting a patient's name, Medicare number, diagnosis, or treatment plan into ChatGPT is a potential notifiable data breach. Under the Notifiable Data Breaches (NDB) scheme, if there are reasonable grounds to believe the breach would result in serious harm, you must complete your assessment within 30 days and notify the OAIC and every affected individual as soon as practicable after confirming the breach. One employee. One moment. One patient record. That is all it takes.

What Policies Cannot Prevent

  • Personal devices: Staff can access ChatGPT on their personal phones while standing at the reception desk. Your network controls do not reach their mobile data connection.
  • Free AI tools everywhere: ChatGPT, Gemini, and dozens of other AI tools are free, require no installation, and work in any browser tab. There is no software to block because there is nothing to install — staff simply visit a website.
  • Copy and paste: Patient data can be copied from the practice management system and pasted into an AI tool in two keystrokes. There is no technical control that prevents this with consumer AI tools.
  • New tools constantly emerging: Staff discover new AI tools every week. Your policy would need to be updated continuously to name every prohibited tool — an impossible task.
  • Pressure and time constraints: When a GP is running 30 minutes behind, the temptation to use any tool that speeds things up is enormous. Policies break down under operational pressure.

The only way to eliminate this risk is to ensure that the AI tools available to your staff physically cannot send data offshore. That means either running AI locally on hardware in your practice, or using a cloud AI service that is contractually and technically bound to process data within Australia. Or, if your practice is not ready for AI at all, blocking it entirely at the network level.

Policies are necessary. But they are not sufficient. The technology layer must enforce what the policy requires.

Option Zero: Not Ready for AI? AyeTech Can Block It Completely

If your practice has decided that AI is not appropriate right now, a written policy alone will not stop staff from using it. AyeTech can enforce that decision with multiple layers of technical controls:

  • Firewall-level blocking: We block access to ChatGPT, Gemini, Copilot, Claude, and every other consumer AI platform at the firewall. Staff cannot reach these sites from any device on your practice network — full stop.
  • Deep Packet Inspection (DPI): DPI goes beyond simple URL blocking. It inspects the actual traffic content to catch AI services that try to use non-standard domains, API endpoints, or embedded AI features within other applications. If data is trying to reach an AI service, DPI catches it.
  • DNS filtering: AI tool domains are blocked at the DNS layer, providing a fast first line of defence that works across every device on the network without installing anything on individual machines.
  • Endpoint protection: For managed workstations, we deploy endpoint controls that prevent AI browser extensions from being installed, block AI desktop applications, and can flag or prevent clipboard activity involving sensitive data patterns like Medicare numbers or patient identifiers.
  • Web content filtering: Category-based web filtering blocks the entire “Artificial Intelligence” category, catching new AI tools as they appear — not just the ones you have heard of today.
  • Network monitoring and alerts: Even with all blocking in place, AyeTech monitors for any attempts to access AI tools. If a staff member tries to reach ChatGPT and gets blocked, your practice manager gets an alert. You know exactly who tried, when, and how often.

This is not a single switch. It is layered defence — firewall, DPI, DNS, endpoint, and monitoring working together. A staff member would need to bypass every layer simultaneously, which is effectively impossible on a properly managed network.
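To illustrate two of these layers, here is a minimal Python sketch of the kind of logic a DNS filter and an endpoint DLP rule apply. The domain list is a hand-picked placeholder (a real deployment uses a managed, continuously updated category feed), and the Medicare-number check uses the commonly published card checksum; treat both as illustrative assumptions, not AyeTech's actual rule set.

```python
import re

# Illustrative blocklist -- a production filter subscribes to a managed
# "Artificial Intelligence" category feed rather than a static list.
BLOCKED_DOMAINS = {
    "chatgpt.com", "openai.com", "gemini.google.com",
    "claude.ai", "copilot.microsoft.com",
}

def is_blocked(hostname: str) -> bool:
    """DNS-layer check: block a listed domain and all of its subdomains."""
    hostname = hostname.lower().rstrip(".")
    return any(hostname == d or hostname.endswith("." + d)
               for d in BLOCKED_DOMAINS)

# Endpoint DLP: flag clipboard text that looks like a Medicare card number
# (10 digits, usually displayed 4-5-1, first digit 2-6).
MEDICARE_RE = re.compile(r"\b([2-6]\d{3})\s?(\d{5})\s?(\d)\b")

def looks_like_medicare_number(text: str) -> bool:
    """Validate candidates with the commonly published checksum: the first
    eight digits weighted (1,3,7,9,1,3,7,9), summed mod 10, must equal the
    ninth (check) digit. The tenth digit is the card issue number."""
    for m in MEDICARE_RE.finditer(text):
        digits = "".join(m.groups())
        weights = (1, 3, 7, 9, 1, 3, 7, 9)
        checksum = sum(int(d) * w for d, w in zip(digits[:8], weights)) % 10
        if checksum == int(digits[8]):
            return True
    return False
```

In this sketch, `is_blocked("chat.openai.com")` returns True while an ordinary health domain passes through, and only digit strings that satisfy the checksum are flagged, which keeps false positives on invoices and phone numbers down.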

When your practice is ready to explore compliant AI options, AyeTech can selectively open access to approved tools like local DGX Spark or Claude on AWS Bedrock Sydney — while keeping everything else locked down.

Not Sure If Your Practice Has an AI Exposure Problem?

Most practice managers we speak to discover their staff are already using consumer AI tools with patient data. Whether you want to block AI entirely or deploy it safely, AyeTech gives you the technical controls to enforce your decision. We will audit your network, show you exactly what is happening, and lock it down or set it up — your call.

Book Your AI Exposure Audit: call 02 9188 8000

Where Your Data Actually Goes: AI Tool by AI Tool

This is the information every medical practice needs but no AI vendor makes easy to find. Here is where your data actually goes when you use each major AI tool:

| AI Tool | Where Data Is Processed | Australian Data Centres? | Safe for Patient Data? |
| --- | --- | --- | --- |
| ChatGPT (Free & Plus) | United States (OpenAI servers) | No | No |
| ChatGPT Enterprise | United States (OpenAI/Azure servers) | No | No |
| Microsoft Copilot (M365) | M365 data in AU; in-country AI processing announced late 2025 but requires opt-in and verification | Emerging — in-country option announced, but not yet independently verified for healthcare compliance | Caution — verify before use with patient data |
| Google Gemini | Google global infrastructure (primarily US) | No | No |
| Claude (direct API) | United States (Anthropic/GCP servers) | No | No |
| Claude on AWS Bedrock (Sydney) | AWS Sydney (ap-southeast-2) | Yes — IRAP PROTECTED assessed | Yes |
| NVIDIA DGX Spark (local) | On-premises — your practice | Yes — data never leaves the building | Yes |

The pattern is clear. Every consumer and most enterprise AI tools process data in the United States. Only two options keep data within Australia: local AI hardware and sovereign cloud services in Australian data centres.

A Critical Note on Microsoft Copilot

In November 2025, Microsoft announced in-country data processing for Copilot in Australia. However, Microsoft's own technical documentation (updated March 2026) tells a more complicated story:

  • Data can still leave Australia: Microsoft's privacy documentation states that Copilot calls “are routed to the closest data centers in the region, but also can call into other regions where capacity is available during high utilization periods.” For non-EU customers (including Australia), queries “may be processed in the US, EU, or other regions.”
  • Anthropic is a Copilot subprocessor: Since January 2026, Anthropic has been listed as a subprocessor for Microsoft 365 Copilot. Microsoft's documentation explicitly states that Anthropic models are “out of scope for the EU Data Boundary and when available, in-country LLM processing commitments.” This means some Copilot features that use Anthropic models will send data outside Australia regardless of your settings.
  • In-country processing has caveats: The opt-in in-country processing is qualified with “under normal operations,” meaning exceptions exist during high-demand periods.
  • No healthcare-specific guarantee: There is no contractual commitment from Microsoft that all Copilot AI processing for Australian healthcare tenants will remain within Australian borders at all times, under all conditions.

For a retail business, these caveats may be acceptable risks. For a medical practice handling patient data subject to Section 77 of the My Health Records Act — which prohibits processing health information outside Australia with a penalty of up to $495,000 — they are not. Until Microsoft provides an unconditional, contractual guarantee of Australian-only processing for all Copilot features, medical practices should not use Copilot with patient data.

Solution 1: Local AI with NVIDIA DGX Spark

The first compliant path forward is the most straightforward: run AI locally, on hardware physically located inside your medical practice. No internet connection required for inference. No data leaves the building. No cross-border transfer. No compliance risk.

The technology that makes this possible in 2026 is the NVIDIA DGX Spark, powered by the GB10 Grace Blackwell superchip.

What It Is

The DGX Spark is a desktop AI computer that sits on a shelf in your practice. It is roughly the size of a Mac Mini — 15 cm × 15 cm × 5 cm, weighing just 1.2 kg. It is completely fanless (silent), draws about 170 watts under load (less than a desktop PC), and plugs into a standard power outlet. It is purpose-built AI infrastructure from NVIDIA, the same company that powers the AI systems at Google, Microsoft, and every major cloud provider.

  • 128GB unified LPDDR5x memory — enough to run AI models up to 200 billion parameters
  • 1 petaFLOP of AI compute — more than enough for medical administrative tasks
  • Silent, fanless design — no noise, suitable for a consultation room or reception area
  • From $6,249 AUD (ASUS Ascent GX10) to $7,999 AUD (MSI EdgeXpert 4TB), with Dell Pro Max configurations from ~$6,800 AUD — all available from Australian retailers now
  • Shipping in Australia now from Dell, ASUS, and MSI, with HP, Lenovo, and others coming

What It Can Run for a Medical Practice

A DGX Spark can run open-source AI models like Meta Llama 3.1 (up to 405B parameters with two clustered units), Mistral, and other production-quality language models. These are not toy models — they are capable, production-ready systems that can handle:

  • Clinical note summarisation and structuring
  • Referral letter and specialist correspondence drafting
  • Patient discharge summary generation
  • Pathology and radiology report summarisation
  • Medicare billing description assistance
  • Patient recall letter generation
  • Practice management report generation
  • Translation of medical documents for non-English-speaking patients
  • Insurance and WorkCover correspondence drafting
  • Training material and patient education content creation
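The workflows above all reduce to sending text to a model server on the practice LAN. Here is a minimal sketch, assuming the DGX Spark serves its model behind an OpenAI-compatible endpoint (as common servers such as vLLM and Ollama provide); the hostname, port, and model name are placeholders, not a specific AyeTech deployment.

```python
import json
from urllib import request

# Placeholder endpoint: a model server running on the DGX Spark, reachable
# only on the practice LAN -- never the public internet.
LOCAL_ENDPOINT = "http://dgx-spark.practice.local:8000/v1/chat/completions"

def build_summary_request(consult_notes: str,
                          model: str = "llama-3.1-70b") -> bytes:
    """Build an OpenAI-compatible chat request asking the local model to
    turn raw consultation notes into structured clinical notes."""
    body = {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Summarise the consultation into structured clinical notes."},
            {"role": "user", "content": consult_notes},
        ],
        "temperature": 0.2,  # low temperature for factual, repeatable output
    }
    return json.dumps(body).encode("utf-8")

# To send (on the practice LAN only):
# req = request.Request(LOCAL_ENDPOINT, data=build_summary_request(notes),
#                       headers={"Content-Type": "application/json"})
# reply = json.loads(request.urlopen(req).read())
# summary = reply["choices"][0]["message"]["content"]
```

The compliance property comes from the network path, not the code: the request never resolves to anything outside the building.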

The Key Advantage: Total Data Isolation

When a staff member uses AI on the DGX Spark, the patient data they enter never leaves the practice's local network. The AI model runs entirely on the local hardware. There is no API call to an overseas server. There is no data upload. The information stays exactly where Australian law requires it to stay — under the practice's direct physical control, within Australian borders.

This Is Not Theoretical — It Is Already Happening

Healthcare organisations are already deploying DGX Spark for self-hosted medical AI, including clinical note summarisation, radiology and pathology report processing, and clinical triage workflows. NVIDIA itself showcased medical imaging research running on DGX Spark at MICCAI 2025. The technology is proven, the hardware is shipping, and AyeTech can have your practice running local AI within days of delivery.

Scaling Up: Clustering and Beyond

A single DGX Spark handles most medical practice workloads comfortably. For larger multi-site clinics or healthcare groups that need to serve dozens of concurrent users, up to four DGX Spark units can be clustered together, pooling memory to 512 GB and enabling models up to 405 billion parameters. This “desktop data centre” approach keeps everything on-premises at a fraction of traditional server infrastructure costs.

For most medical practices — whether you are a solo GP, a 5-doctor clinic, or a small specialist group — a single DGX Spark is more than enough. The point is simple: the hardware exists today, it fits on a desk, it costs less than a year of cloud AI subscriptions for a medium practice, and it keeps every byte of patient data exactly where the law requires it.

Solution 2: Sovereign Cloud AI with Claude on AWS Bedrock Sydney

Not every medical practice wants to buy and manage hardware. The second compliant option is to use Anthropic's Claude via AWS Bedrock in the Sydney (ap-southeast-2) region.

This is fundamentally different from using Claude's direct API or website. Here is why:

How It Works

AWS Bedrock is Amazon Web Services' managed AI service. When you deploy Claude through Bedrock and select the Sydney region, every piece of data — your prompts, the patient information, and the AI responses — is processed within AWS data centres physically located in Sydney, Australia. The data does not leave the country. It does not transit through US servers. It does not go to Anthropic's own infrastructure.

Why This Is Compliant

  • Australian data residency: All processing occurs in AWS Sydney data centres. Data stays in Australia.
  • IRAP PROTECTED assessed: AWS Sydney is assessed under the Australian Government's Information Security Registered Assessors Program (IRAP) at the PROTECTED level. This is the same standard required for Australian government systems handling sensitive information, including health data.
  • No training on your data: Anthropic has explicitly confirmed that data processed through AWS Bedrock is not used to train Claude models. Your patient data is processed and discarded — it does not become part of the model.
  • Contractual guarantees: AWS provides contractual commitments through its Customer Agreement and Data Processing Agreement that data in a selected region stays in that region.
  • Proper security controls: AWS provides user access management, encryption, network isolation, full audit logging (so you can prove who accessed what and when), and compliance certifications that consumer AI tools cannot match. Your MSP configures all of this for you.
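In code, the compliance-critical setting is a single line: pinning the client to the Sydney region. The sketch below builds the Anthropic-on-Bedrock request body with the standard library and shows (commented out) how it would be sent via boto3's bedrock-runtime client; the model ID is a placeholder to be checked against the models enabled in your own Bedrock console.

```python
import json

# ap-southeast-2 is AWS Sydney -- this region pin is what keeps processing
# onshore. MODEL_ID is a placeholder; confirm the exact ID in your account.
BEDROCK_REGION = "ap-southeast-2"
MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"  # placeholder

def build_bedrock_body(prompt: str, max_tokens: int = 1024) -> str:
    """Request body in the Anthropic-on-Bedrock messages format."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

# To invoke (requires boto3 and AWS credentials scoped to your practice):
# import boto3
# client = boto3.client("bedrock-runtime", region_name=BEDROCK_REGION)
# resp = client.invoke_model(modelId=MODEL_ID,
#                            body=build_bedrock_body("Draft a recall letter."))
# text = json.loads(resp["body"].read())["content"][0]["text"]
```

Your MSP would pair this with IAM policies that deny bedrock-runtime calls to any region other than ap-southeast-2, so the pin is enforced, not merely configured.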

Claude Is One of the Most Capable AI Models Available

Unlike local AI, which runs open-source models, AWS Bedrock gives you access to Anthropic's frontier Claude models — the same models that consistently rank among the most capable AI systems in the world. For medical practices, this means more accurate summarisation, better letter drafting, and more nuanced understanding of medical terminology. You get frontier-class AI performance with Australian data sovereignty.

Cost Model

AWS Bedrock operates on a pay-per-use model. You pay per token processed (a token is roughly three-quarters of a word). There is no upfront hardware cost. For a typical medical practice, monthly costs range from $200 to $2,000 depending on the volume of AI usage. This makes it accessible to smaller practices that cannot justify the upfront cost of a DGX Spark.
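To make the pay-per-use model concrete, here is a toy estimator. The per-1,000-token rates below are illustrative assumptions only, not AWS's published Bedrock pricing; substitute current rates before budgeting. The token-per-word ratio follows the rough rule above (a token is about three-quarters of a word).

```python
# PLACEHOLDER rates in AUD per 1,000 tokens -- replace with current
# AWS Bedrock pricing for your chosen Claude model.
INPUT_RATE_PER_1K = 0.005
OUTPUT_RATE_PER_1K = 0.025
TOKENS_PER_WORD = 4 / 3  # one token is roughly three-quarters of a word

def monthly_cost(docs_per_day: int, words_in: int, words_out: int,
                 working_days: int = 22) -> float:
    """Estimate monthly AUD spend for a document-drafting workload."""
    tokens_in = docs_per_day * words_in * TOKENS_PER_WORD * working_days
    tokens_out = docs_per_day * words_out * TOKENS_PER_WORD * working_days
    return ((tokens_in / 1000) * INPUT_RATE_PER_1K
            + (tokens_out / 1000) * OUTPUT_RATE_PER_1K)

# e.g. monthly_cost(40, 600, 300) for 40 letters a day,
# ~600 words of input and ~300 words of output each.
```

Because the cost is linear in volume, doubling the number of documents simply doubles the estimate, which makes the model easy to sanity-check against your practice's actual usage.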

Comparing the Two Solutions

| Factor | Local AI (DGX Spark) | Sovereign Cloud (Claude on AWS Bedrock Sydney) |
| --- | --- | --- |
| Data location | Never leaves your practice | Stays in AWS Sydney data centres |
| Upfront cost | ~$6,250–$8,000 AUD for hardware | No upfront cost |
| Ongoing cost | Power, maintenance, MSP management | $200–$2,000/month pay-per-use |
| AI model quality | Open-source models (Llama, Mistral) — very capable | Frontier Claude models — among the best available |
| Internet dependency | None — runs fully offline | Requires internet connection to AWS Sydney |
| Compliance level | Maximum — data never leaves premises | Very high — IRAP PROTECTED, contractual guarantees |
| Scalability | Limited by hardware (cluster up to 4 units) | Virtually unlimited |
| Best for | Practices with highest security needs, consistent AI usage | Practices wanting frontier AI with lower upfront cost |
| Management | Requires MSP for deployment and maintenance | Requires MSP for setup, security configuration, and integration |

Many medical practices will benefit from a hybrid approach: using local AI on DGX Spark for the most sensitive clinical workflows, and Claude on AWS Bedrock Sydney for higher-volume administrative tasks where frontier model quality provides the most value. AyeTech can design and deploy both.

What Medical Practices Can Do with Compliant AI

Once you have a compliant AI solution in place, the productivity gains for a medical practice are transformative. Here are real-world use cases that work today:

GP Clinics

  • Consultation note summarisation: AI summarises a 20-minute consultation into structured clinical notes in seconds
  • Referral letter generation: Draft specialist referral letters from clinical notes and patient history
  • Patient recall management: Generate personalised recall letters for preventive health checks, immunisations, and chronic disease management reviews
  • Medicare item number assistance: AI suggests appropriate MBS item numbers based on consultation descriptions
  • Chronic disease management plans: Draft GP Management Plans (item 721) and Team Care Arrangements (item 723) from patient records

Specialist Practices

  • Detailed procedural reports: Generate structured operative reports and procedure summaries
  • Correspondence to referring GPs: Draft comprehensive post-consultation letters with findings, management plans, and follow-up recommendations
  • Research and literature review: Summarise relevant medical literature for complex cases (using de-identified queries)
  • Insurance and WorkCover reports: Draft detailed medico-legal reports from clinical records

Allied Health

  • Treatment plan documentation: Generate structured treatment plans from assessment notes
  • Progress notes: Summarise session outcomes and patient progress
  • NDIS reports: Draft NDIS-compliant progress reports and funding review documentation
  • Patient education materials: Create personalised exercise sheets, dietary guides, and self-management resources

Practice Administration

  • Policy and procedure documentation: Draft and update practice policies, infection control protocols, and workplace health and safety documents
  • Staff training materials: Generate onboarding documents, compliance training content, and procedural guides
  • Patient communication: Draft SMS reminders, email newsletters, and health promotion content
  • Complaint response: Draft professional responses to patient complaints and feedback

What to Do Right Now

If you are a medical practice owner, practice manager, or healthcare IT decision-maker in NSW, here is your immediate action plan:

  1. Audit your current AI exposure today
    Find out what AI tools your staff are already using. Ask directly. Send a survey. Check your web filtering logs. The results will almost certainly reveal that consumer AI tools are already being used with patient data — and you need to know the extent of the problem before you can fix it.
  2. Block consumer AI tools on your practice network immediately
    Use your firewall or web filtering to block access to ChatGPT, Google Gemini, Claude (direct), and other consumer AI services on your practice network. This is a band-aid, not a solution — staff can still use personal devices — but it stops the most obvious data leakage vector while you implement a proper solution.
  3. Draft and distribute an AI acceptable use policy
    Create a clear policy that explicitly prohibits entering patient data into any AI tool that has not been approved by the practice. Make every staff member sign it. Include it in your induction process. Make consequences clear.
  4. Engage an MSP with healthcare AI expertise
    Deploying compliant AI for a medical practice is not a DIY project. You need an IT partner who understands the compliance landscape (Privacy Act, My Health Records Act, NSW Health Records Act), the technology options (DGX Spark, AWS Bedrock), and the practical realities of integrating AI into medical workflows.
  5. Deploy compliant AI and give your staff a safe alternative
    The reason staff use consumer AI tools is because they work. The only way to stop them is to provide an approved alternative that is equally effective. A properly deployed DGX Spark or Claude on AWS Bedrock Sydney gives your staff the AI productivity they want — without the compliance risk you cannot afford.

Every Day You Wait, the Risk Grows

Your staff are almost certainly already using consumer AI tools with patient data. Every day without a compliant alternative is another day of potential notifiable data breaches, OAIC exposure, and patient trust at risk. The practices that act now will be the ones that gain the productivity advantage of AI without the regulatory consequences. The practices that wait will eventually be forced to act — probably by an incident that could have been prevented.

How AyeTech Deploys Compliant AI for Medical Practices

AyeTech is a Sydney-based managed IT services provider that specialises in deploying compliant AI infrastructure for Australian healthcare. We understand the regulatory landscape, we have the technical expertise, and we have been helping medical practices across NSW navigate the intersection of AI, compliance, and productivity.

Our healthcare AI deployment follows a structured methodology:

  • AI Readiness Assessment: We audit your current AI exposure, identify compliance gaps, review your practice management system integration points, and assess your infrastructure. This is free and comes with a detailed report.
  • Solution Design: Based on your practice size, budget, and usage patterns, we recommend local AI (DGX Spark), sovereign cloud AI (Claude on AWS Bedrock Sydney), or a hybrid approach.
  • Deployment and Integration: We deploy the chosen solution, configure security controls, integrate with your practice management system, set up user access, and implement audit logging.
  • Staff Training: We train every staff member on how to use the approved AI tools effectively and safely. We cover what is allowed, what is not, and why — so your team understands the compliance requirements, not just the rules.
  • Ongoing Management: 24/7 monitoring, regular compliance reviews, quarterly policy updates, system maintenance, and continuous optimisation. This is a managed service, not a one-off project.
  • Shadow AI Monitoring: We implement network-level monitoring to detect any use of unapproved AI tools on your practice network, giving you visibility and control.

We work with GP clinics, specialist practices, allied health providers, pathology laboratories, dental practices, and multi-site healthcare groups across Sydney and NSW.

Your Patients' Data Cannot Wait

Every day without compliant AI governance is a day of risk. Every day without a safe AI alternative is a day your staff may be using ChatGPT with patient data. AyeTech deploys compliant AI for medical practices — local, sovereign, secure.

Book Your Free Healthcare AI Assessment Call 02 9188 8000

Or email [email protected] — we respond within 2 hours during business hours.

Frequently Asked Questions

Can Australian medical practices use ChatGPT?

No, not safely or legally for anything involving patient data. ChatGPT processes all data on servers in the United States. Under the Privacy Act 1988 (APP 8), the My Health Records Act 2012, and the NSW Health Records and Information Privacy Act 2002, Australian medical practices cannot send patient health information to overseas servers without meeting strict cross-border disclosure requirements that consumer AI tools do not satisfy. Even with staff policies in place, the risk of a single employee pasting patient information into ChatGPT creates an unacceptable compliance exposure.

Can doctors use Microsoft Copilot in Australia?

Microsoft announced in-country data processing for Copilot in Australia in November 2025. However, Microsoft's own technical documentation (updated March 2026) reveals critical caveats: non-EU customers' queries “may be processed in the US, EU, or other regions” during high utilisation periods, and Anthropic (a Copilot subprocessor since January 2026) is explicitly “out of scope for in-country LLM processing commitments.” This means some Copilot features will send data outside Australia regardless of settings. For medical practices handling patient data under the My Health Records Act (which prohibits offshore processing with penalties up to $495,000), Copilot cannot currently provide the unconditional data residency guarantee required. Local AI (DGX Spark) or Claude on AWS Bedrock Sydney remain the verifiably compliant options.

What AI can Australian medical practices legally use?

Australian medical practices have two compliant options for AI: (1) Local on-premises AI using hardware like the NVIDIA DGX Spark, which runs AI models entirely within the practice with no data leaving the building; and (2) Sovereign cloud AI such as Anthropic Claude via AWS Bedrock in the Sydney (ap-southeast-2) region, where data is processed and stored entirely within Australian data centres. Both options keep patient data within Australian borders and under Australian jurisdiction, satisfying the Privacy Act, My Health Records Act, and NSW Health Records Act requirements.

What happens if a medical practice employee puts patient data into ChatGPT?

This constitutes a potential notifiable data breach under the Privacy Act 1988 and may breach the My Health Records Act 2012. The practice could face: mandatory notification to the OAIC and affected patients, penalties up to $50 million or 30% of adjusted turnover for serious breaches, investigation by the OAIC and relevant state health privacy commissioners, professional conduct complaints to AHPRA or the relevant medical board, loss of patient trust and reputational damage, and potential civil liability. The practice is liable regardless of whether the employee acted maliciously or innocently.

Is AWS Bedrock in Sydney IRAP assessed?

Yes. AWS Sydney (ap-southeast-2) is IRAP assessed at the PROTECTED level, which is the Australian government's standard for sensitive information including health data. AWS Bedrock services in the Sydney region inherit this assessment. This means data processed through Claude on AWS Bedrock Sydney meets the same infrastructure security standards required by Australian government agencies. For medical practices, this provides a level of assurance that consumer AI tools cannot match.

What is the NVIDIA DGX Spark and can it run AI for a medical practice?

The NVIDIA DGX Spark is a desktop AI computer powered by the GB10 superchip, starting at approximately $6,250–$8,000 AUD depending on configuration. It can run AI models up to 200 billion parameters locally with no internet connection required for inference. For a medical practice, it can handle: clinical note summarisation, patient letter drafting, medical record analysis, appointment triage, referral letter generation, and administrative automation. All data stays physically within the practice. No patient information ever leaves the building.

Does the Privacy Act 1988 apply to AI tools used by medical practices?

Yes, absolutely. The Privacy Act 1988 applies to all handling of personal information by organisations, including when that information is entered into AI tools. Australian Privacy Principle 6 restricts how personal information can be used and disclosed. APP 8 imposes strict requirements before personal information can be sent overseas. APP 11 requires reasonable steps to protect personal information. When a medical practice employee enters patient data into an AI tool that processes data offshore, the practice has potentially breached all three of these principles. Critically, health service providers are covered regardless of their annual turnover.

What does the My Health Records Act say about AI and data residency?

The My Health Records Act 2012 contains some of the strictest data residency requirements in Australian law. Section 77 explicitly prohibits the System Operator, registered repository operators, portal operators, and contracted service providers from holding, taking, processing, or handling My Health Record information outside Australia. The language is unambiguous — records must not leave Australia, and information must not be processed outside Australia. The penalty for contravening Section 77 is 1,500 penalty units (approximately $495,000). Medical practices registered with the My Health Record system must ensure no AI tool they use could result in that data being processed overseas.

What is the NSW Health Records Act and how does it affect AI use?

The Health Records and Information Privacy Act 2002 (NSW) governs the handling of health information by NSW health service providers, including private medical practices. It contains 15 Health Privacy Principles (HPPs) that are stricter than the federal Privacy Act in several areas. HPP 9 limits how health information can be used. HPP 10 restricts disclosure. HPP 14 requires transborder data flow protections. For NSW medical practices, this Act creates additional compliance obligations beyond the federal Privacy Act when using any AI tool that processes patient health information.

Can you trust staff not to put patient data into AI tools?

No. Research consistently shows that the majority of AI usage in workplaces is unsanctioned. Staff use AI tools because they genuinely improve productivity, not out of malice. In a medical practice, a receptionist summarising a patient complaint, a nurse drafting a care plan, or a practice manager writing a referral letter will naturally reach for the most effective tool available. Even with training, policies, and monitoring, a single moment of convenience can result in a notifiable data breach. The only way to eliminate this risk entirely is to ensure that AI tools available to staff physically cannot send data offshore.

How does Claude on AWS Bedrock keep patient data in Australia?

When you access Claude through AWS Bedrock in the Sydney (ap-southeast-2) region, all data processing occurs within AWS data centres physically located in Sydney, Australia. Your prompts, patient data, and AI responses never leave Australian soil. AWS provides contractual guarantees that data remains in your selected region. Additionally, Anthropic has confirmed that data processed through AWS Bedrock is not used to train Claude models. This combination of Australian data residency, IRAP PROTECTED assessment, and no-training guarantees makes it one of the most compliant cloud AI options for Australian healthcare.
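Region pinning is enforceable in code as well as by contract. As an illustrative sketch (the function name and guard logic are our own, not part of any AWS SDK), a practice's integration layer can refuse to issue any request whose Bedrock endpoint is not in the Sydney region:

```python
# Hypothetical pre-flight guard: refuse any Bedrock call whose endpoint
# is not pinned to the Sydney (ap-southeast-2) region. Illustrative only;
# the function name is not part of any AWS SDK.
from urllib.parse import urlparse

APPROVED_REGION = "ap-southeast-2"  # AWS Sydney

def is_sydney_endpoint(endpoint_url: str) -> bool:
    """Return True only if the endpoint hostname is in ap-southeast-2."""
    host = urlparse(endpoint_url).hostname or ""
    # Bedrock runtime endpoints embed the region in the hostname,
    # e.g. bedrock-runtime.ap-southeast-2.amazonaws.com
    return host.endswith(".amazonaws.com") and f".{APPROVED_REGION}." in host

print(is_sydney_endpoint("https://bedrock-runtime.ap-southeast-2.amazonaws.com"))  # True
print(is_sydney_endpoint("https://bedrock-runtime.us-east-1.amazonaws.com"))       # False
```

A guard like this turns the data-residency policy into something the application checks on every request, rather than a setting someone can quietly change.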

What are the penalties for a medical practice that breaches patient data privacy with AI?

Penalties are severe and multi-layered. Under the Privacy Act 1988, serious or repeated breaches attract penalties of up to $50 million, three times the benefit obtained, or 30% of adjusted turnover. Under the My Health Records Act 2012, unauthorised disclosure of health information carries penalties of up to 120 penalty units for individuals and 600 penalty units for bodies corporate. Additionally, practitioners face potential AHPRA complaints, medical board investigations, professional indemnity insurance implications, and civil liability claims from affected patients.

Is Google Gemini safe for Australian medical practices?

No. Google Gemini processes data through Google's global infrastructure, which is primarily based in the United States. Google does not currently offer a Gemini deployment option that guarantees all AI processing occurs within Australian data centres. For any task involving patient health information, Google Gemini presents the same cross-border data transfer compliance risks as ChatGPT and other US-hosted AI tools. Medical practices should not use Gemini for any workflow involving patient data.

What AI tasks can a medical practice do locally with DGX Spark?

A DGX Spark running open-source models like Meta Llama 3 or Mistral can handle: summarising clinical notes and consultation records, drafting referral letters and specialist correspondence, generating patient discharge summaries, automating appointment reminders, analysing pathology results and flagging abnormalities, creating practice management reports, drafting Medicare billing descriptions, translating medical documents for non-English-speaking patients, and generating patient education materials. All of this runs locally with zero internet dependency for inference.
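To make the "zero internet dependency" point concrete: local inference servers such as vLLM or llama.cpp commonly expose an OpenAI-compatible HTTP API on the machine itself. In this sketch the host, port, and model name are assumptions for illustration, and no network call is made; the point is that the target is the loopback address, so the request can never leave the practice:

```python
# Sketch of a summarisation request aimed at a local OpenAI-compatible
# inference endpoint on the DGX Spark. Host, port, and model name are
# assumptions; the request is built but deliberately not sent here.
from urllib.parse import urlparse

LOCAL_ENDPOINT = "http://127.0.0.1:8000/v1/chat/completions"

def build_summary_request(note_text: str) -> dict:
    """Assemble a chat-completion payload for local note summarisation."""
    return {
        "model": "llama-3-8b-instruct",
        "messages": [
            {"role": "system", "content": "Summarise this clinical note."},
            {"role": "user", "content": note_text},
        ],
    }

payload = build_summary_request("Pt presents with 3/7 cough, afebrile.")
host = urlparse(LOCAL_ENDPOINT).hostname
print(host)  # 127.0.0.1 — loopback only; nothing leaves the building
```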

How much does compliant AI cost for a medical practice in Sydney?

There are two main options. Option 1: Local AI with NVIDIA DGX Spark starts from approximately $6,250 AUD for the hardware, plus MSP deployment, configuration, and ongoing management costs. Option 2: Sovereign cloud AI with Claude on AWS Bedrock Sydney operates on a pay-per-use model with no upfront hardware cost, typically costing between $200 and $2,000 per month depending on usage volume. Both options require an MSP to deploy properly with security controls, access management, and compliance frameworks. AyeTech provides turnkey deployment of both solutions for medical practices across Sydney and NSW.

Does RACGP have standards about AI use in general practice?

The RACGP Computer and Information Security Standards (CISS) require general practices to implement appropriate security measures for patient data. While the CISS does not yet contain AI-specific provisions, its requirements for data storage, access controls, encryption, and third-party data handling directly apply to any AI tool a practice uses. Practices accredited under the RACGP Standards must ensure any AI deployment meets these existing requirements, including data sovereignty and appropriate security controls.

Where does Microsoft Copilot actually process data?

Microsoft 365 stores tenant data in Australian data centres (Sydney and Melbourne). In November 2025, Microsoft announced an in-country processing option for Copilot AI interactions in Australia, meaning prompts and responses would be processed in Australian data centres “under normal operations.” However, this is an opt-in feature (not default), the “under normal operations” qualifier implies exceptions may exist, and as of March 2026, independent verification for healthcare compliance is limited. Medical practices should obtain written confirmation from Microsoft before using Copilot with patient data.

What is the difference between local AI and sovereign cloud AI for healthcare?

Local AI (like NVIDIA DGX Spark) runs entirely on hardware physically located in your practice. Data never leaves the building. You own the hardware, control the models, and have zero internet dependency. Sovereign cloud AI (like Claude on AWS Bedrock Sydney) runs in data centres located in Australia, operated by a provider with Australian compliance certifications (IRAP). Data stays in Australia but does leave your premises. Both are compliant options. Local AI offers maximum data isolation. Sovereign cloud AI offers lower upfront cost, access to frontier models, and no hardware management burden.

Can a medical practice use AI for Medicare billing and claims?

Yes, but only with compliant AI that keeps data within Australia. Medicare billing involves patient names, Medicare numbers, dates of birth, and clinical descriptions, all classified as sensitive personal and health information under Australian law. Using a cloud AI tool that processes this data overseas would breach the Privacy Act. A local AI system (DGX Spark) or sovereign cloud AI (Claude on AWS Bedrock Sydney) can safely assist with Medicare billing descriptions, item number selection, and claims review with appropriate access controls and audit trails in place.
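Even with a compliant backend, a defence-in-depth step is to redact obvious identifiers before text reaches any AI tool. The sketch below is a naive illustration, not a complete de-identification solution: it masks anything shaped like a 10-digit Medicare card number and would need to be extended for names, dates of birth, and other identifiers:

```python
# Illustrative pre-flight redaction sketch (NOT a complete de-identification
# tool): mask anything that looks like a 10-digit Medicare card number
# before the text is passed to an AI tool.
import re

# Naive pattern: 10 digits, optionally spaced as NNNN NNNNN N.
MEDICARE_PATTERN = re.compile(r"\b\d{4}[ ]?\d{5}[ ]?\d\b")

def redact_medicare_numbers(text: str) -> str:
    """Replace candidate Medicare numbers with a fixed placeholder."""
    return MEDICARE_PATTERN.sub("[MEDICARE REDACTED]", text)

example = "Claim for item 23, Medicare no. 2123 45670 1, DOB 01/02/1980."
print(redact_medicare_numbers(example))
```

In practice this kind of filter sits in the integration layer between the practice management system and the AI endpoint, with every redaction written to the audit log.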

Is AI-generated medical advice legal in Australia?

AI-generated medical advice raises complex legal and regulatory issues. The Therapeutic Goods Administration (TGA) regulates software that functions as a medical device, including AI tools that provide diagnostic or treatment recommendations. Any AI tool used for clinical decision support may require TGA approval. For administrative AI tasks like note summarisation, letter drafting, and billing assistance, the regulatory burden is lower but data handling obligations remain. Medical practitioners remain professionally and legally responsible for all clinical decisions regardless of whether AI was used to assist.

What should a medical practice do right now about AI?

Medical practices should take five immediate steps: (1) Audit current AI usage to find out if staff are already using ChatGPT, Gemini, or other consumer AI tools with patient data; (2) Block consumer AI tools on practice networks immediately; (3) Draft an AI acceptable use policy that explicitly prohibits entering patient data into any non-approved AI tool; (4) Contact an MSP like AyeTech that specialises in compliant AI deployment for healthcare; (5) Evaluate local AI (DGX Spark) or sovereign cloud AI (Claude on AWS Bedrock Sydney) as your compliant AI pathway.

Can my medical practice just block AI tools instead of using them?

Yes — and AyeTech can enforce a complete AI block using multiple layers of technical controls. This includes firewall-level blocking of all known AI platforms, Deep Packet Inspection (DPI) to catch AI traffic using non-standard domains or API endpoints, DNS filtering across all network devices, endpoint protection to prevent AI browser extensions and desktop apps, category-based web content filtering that automatically catches new AI tools as they appear, and network monitoring with alerts when staff attempt to access blocked services. A written policy alone will not stop determined staff — AyeTech's layered approach makes unapproved AI use effectively impossible on your practice network. When you are ready to explore compliant AI in the future, we can selectively open access to approved tools while keeping everything else locked down.
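The core logic of DNS-level blocking is simple to illustrate. This is a minimal sketch only — real enforcement lives in a firewall or DNS filtering appliance, and the domain list here is a small assumed sample, not a complete blocklist:

```python
# Minimal sketch of DNS blocklist matching (illustrative only; real
# enforcement would run in a firewall or DNS filtering appliance).
# The domain list is a small assumed sample, not a complete blocklist.
BLOCKED_AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def is_blocked(hostname: str) -> bool:
    """Block a lookup that matches, or is a subdomain of, a listed domain."""
    hostname = hostname.lower().rstrip(".")
    return any(
        hostname == d or hostname.endswith("." + d)
        for d in BLOCKED_AI_DOMAINS
    )

print(is_blocked("chatgpt.com"))      # True
print(is_blocked("api.chatgpt.com"))  # True (subdomain match)
print(is_blocked("hotdoc.com.au"))    # False
```

Category-based filtering works the same way at scale: instead of a static set, the appliance consults a continuously updated "AI tools" category feed, which is why new tools are caught as they appear.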

How does AyeTech deploy AI for medical practices?

AyeTech deploys compliant AI for medical practices in NSW through a structured methodology: (1) AI Readiness Assessment to evaluate current AI exposure and compliance gaps; (2) Solution Design recommending local AI, sovereign cloud AI, or a hybrid approach; (3) Infrastructure Deployment including hardware installation, cloud configuration, and security controls; (4) Integration with practice management systems and workflow setup; (5) Staff Training on approved AI tools and data handling obligations; (6) Ongoing Management including 24/7 monitoring, compliance reviews, and continuous optimisation. Contact AyeTech on 02 9188 8000 or [email protected].

About AyeTech

AyeTech is a Sydney-based managed IT services provider specialising in compliant AI deployment, cyber security, and IT support for Australian healthcare providers. We help medical practices deploy AI safely with full data sovereignty, navigate the Privacy Act, My Health Records Act, and NSW Health Records Act requirements, and maintain enterprise-grade protection.

Contact Information:

  • Phone: 02 9188 8000
  • Email: [email protected]
  • Address: Suite 203, Level 8, 99 Walker St, North Sydney, NSW 2060
  • Service Areas: Sydney, North Sydney, Parramatta, Chatswood, and all NSW

Related Resources: