Products
Key Summary
- Australians use Claude a lot: about four times more per person than you'd expect from population size.
- Most use happens in New South Wales and Victoria, and that pattern matches where office, finance, and tech jobs are.
- Compared to the world, Australians do fewer coding tasks with Claude and more management, office, sales, and personal-life tasks.
- Australian prompts are written at a higher schooling level but describe tasks that would take less time without AI, meaning people bring sharper questions about shorter jobs.
- Australians tend to collaborate with Claude instead of fully delegating work, showing a lower AI autonomy style.
- Work and personal use dominate (46% work, 47% personal), while coursework is relatively small (7%).
- Task diversity is wider than the global average, meaning Australians spread Claude across more kinds of jobs.
- State-level adoption does not track income; it tracks who works in AI-friendly roles, like office and professional services.
- Findings come from a February 2026 sample of Claude.ai conversations and comparisons to Anglosphere peers and global patterns.
Why This Research Matters
This study shows where AI is already helping people in everyday work and life, beyond just coding. By fairly adjusting for population and looking closely at task types, it helps leaders see which jobs benefit most and where training or tools can do the most good. It also uncovers that collaboration with AI, not full delegation, is common, guiding safer, more reliable workflows. Inside Australia, it points to workforce composition as the main driver of adoption, suggesting that upskilling office, management, and professional roles could have big payoffs. For families, students, and workers, it means AI can be a practical co-pilot for planning, writing, organizing, and decision-making. For policymakers, it offers a roadmap to spread benefits more evenly across regions.
Detailed Explanation
01 Background & Problem Definition
🍞 Hook: Imagine your classroom got a brand-new helpful robot assistant. You'd want to know who uses it, for what, and whether it helps with math, reading, or organizing your backpack. Countries are like big classrooms, and Australia just got very excited about using an AI helper named Claude.
🥬 The Concept (Claude):
- What it is: Claude is an AI assistant that helps people think, write, code, plan, and learn using text.
- How it works: 1) You type a question or task; 2) Claude reads it; 3) Claude reasons about what you need; 4) Claude replies with ideas, drafts, code, or steps; 5) You keep the conversation going to refine the result.
- Why it matters: Without a helper like Claude, people spend more time searching, drafting, or debugging, and get stuck more often on tricky steps. 🍞 Anchor: A student asks, "Help me outline a science report on coral reefs." Claude suggests headings, key facts, and a plan in minutes.
🍞 Hook: You know how a weather report helps you plan your day? Researchers made a kind of "AI weather report" to understand who uses AI and how.
🥬 The Concept (Anthropic Economic Index, AEI):
- What it is: AEI is a set of measures that track how people use Claude across places, jobs, and task types.
- How it works: 1) Collect a safe, private sample of conversations; 2) Sort each into buckets like work, school, or personal; 3) Estimate task traits (complexity, time without AI, and whether people delegate or collaborate); 4) Compare patterns across regions and countries.
- Why it matters: Without a shared scoreboard, we'd only have guesses about who benefits from AI and where the opportunities and gaps are. 🍞 Anchor: Like checking sports stats to see which team is best at defense, AEI lets us see, for example, which places use AI more for office work vs. coding.
🍞 Hook: Imagine grading how much each kid uses the classroom helper, not just which kid used it once.
🥬 The Concept (Anthropic AI Usage Index, AUI):
- What it is: AUI compares how much a place uses Claude to how much youād expect based on its working-age population.
- How it works: 1) Count a placeās share of Claude use; 2) Compare it to that placeās share of working-age people; 3) Take the ratio to see whether use is higher or lower than expected.
- Why it matters: Without adjusting for population, big places look active just because they're big, not because they use AI unusually often. 🍞 Anchor: Australia has an AUI of about 4.1, meaning it uses Claude around four times more than population size alone would predict.
The world before: Lots of people knew AI was getting popular, but we didn't have a clear, fair way to compare AI use across places and kinds of tasks. It was like knowing the whole school liked the library but not knowing which grades checked out which books, or whether they used them for homework or fun.
The problem: Policymakers, educators, and businesses in Australia wanted to understand four things: 1) How much is Claude used per person? 2) Which states and territories lead? 3) What kinds of tasks does Claude help with? 4) Do people mostly collaborate with Claude or hand tasks off to it entirely?
Failed attempts: Before, people often looked at raw traffic or total users. That's like saying "Class A borrowed 100 books and Class B borrowed 50," without noticing Class A has three times more students. Others tried to link usage to income only. But income doesn't always predict whether a task is a good fit for AI; job mix matters too.
The gap: We needed a population-aware measure (AUI), task-aware categories (like work vs. school vs. personal), and task traits (complexity, time without AI, and autonomy) to tell a true, apples-to-apples story.
Real stakes: This matters for:
- Workers: to learn where AI can save time in their daily tasks;
- Students and teachers: to decide when AI is a good study partner vs. not;
- Businesses: to target training and tools for high-impact roles;
- Governments: to design smart, safe AI policies and workforce programs;
- Everyone: to spot equity gaps so benefits don't cluster in just a few places.
What this study reveals: Australians are among the most active users of Claude per person, but that energy isn't spread evenly. New South Wales and Victoria lead, likely because they have more people in roles that benefit from AI (like office, finance, and professional services). Australians also use Claude more broadly than average, leaning less on coding and more on management, office work, and personal life tasks. Their prompts are written at a higher schooling level, but the tasks themselves would have taken less time without AI, suggesting people bring sharper questions about shorter jobs. Finally, Australians usually keep a hand on the wheel (lower AI autonomy), collaborating rather than fully delegating work to Claude.
02 Core Idea
🍞 Hook: Think of a citywide music festival. Some neighborhoods dance more, some like jazz over rock, and some prefer to jam with the band instead of just listening. That's Australia with Claude.
🥬 The Aha! Moment (one sentence): When you fairly adjust for population and task types, Australians use Claude far more than expected, spread that use across a wider mix of non-coding tasks, and mostly collaborate with Claude, while state-to-state differences line up with who works in AI-friendly jobs, not with who's richest.
Three analogies:
- Map heat: Like a temperature map where certain suburbs glow warmer, Australia's New South Wales and Victoria light up for Claude use because that's where many office and professional jobs live.
- Buffet plate: Globally, coding is the biggest dish, but Australia's plate mixes more management, office, and personal-life sides, with a smaller scoop of coding.
- Group project: Instead of telling the helper, "Do it all," Australians sit beside Claude, steering and editing as they go: more co-pilot than autopilot.
Before vs. After:
- Before: People guessed usage followed income or assumed coding dominated everywhere.
- After: We see high per-person adoption in Australia, a broader task mix with less coding, and collaboration over full delegation; inside Australia, adoption follows job mix more than income.
Why it works (intuition, not equations): You can't compare AI usage fairly without scaling by population, just like you can't compare two classrooms without knowing class size. You also need to know what people are doing (work, school, or personal) and how tricky or time-consuming those tasks are. Finally, understanding whether users delegate or collaborate tells you how much trust and control they keep. Put together, these lenses show a cleaner, truer picture.
Building blocks (each explained the sandwich way):
🍞 Hook: You know how sometimes you want a friend to take the lead, and other times you just want advice? 🥬 The Concept (AI autonomy):
- What it is: AI autonomy is how much freedom you give the AI to act or decide on its own.
- How it works: 1) Look at the prompt; 2) Ask if the user is requesting a draft to edit (low autonomy) or instructing the AI to finalize decisions (high autonomy); 3) Score the level from low to high.
- Why it matters: Without this, we'd think all AI use is the same, missing the difference between co-pilot and autopilot styles. 🍞 Anchor: "Brainstorm email ideas I can choose from" is low autonomy; "Send this email to all clients now" is high autonomy.
🍞 Hook: Some puzzles are 10-piece jigsaws, others are 1,000-piece monsters. 🥬 The Concept (task complexity):
- What it is: Task complexity is how hard a job is to understand and solve.
- How it works: 1) Read the prompt; 2) Estimate the schooling needed to follow it; 3) Consider how many steps and concepts are involved.
- Why it matters: Without it, we might compare quick chores to deep research as if they were equal. 🍞 Anchor: "List three healthy snacks for kids" is simpler than "Compare two health studies and evaluate their methods."
🍞 Hook: A lunch tray with only pizza looks different from one with pizza, salad, fruit, and milk. 🥬 The Concept (use-case mix):
- What it is: Use-case mix is the variety of things people ask AI to doāwork, school, and personal life.
- How it works: 1) Classify each conversation; 2) Tally the shares; 3) See which buckets get most attention.
- Why it matters: Without it, we wouldn't know if AI is a workhorse, a study buddy, a life coach, or all three. 🍞 Anchor: Australia's mix is roughly half work, half personal, with less coursework than many countries.
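If you'd like to see the classify-and-tally step as code, here is a tiny Python sketch. The bucket labels match the study's three categories, but the conversation counts are invented purely for illustration:

```python
from collections import Counter

# Hypothetical classifier output: one bucket label per conversation.
labels = ["work", "personal", "work", "coursework",
          "personal", "personal", "work", "personal"]

# Tally each bucket and convert counts into shares of the total.
counts = Counter(labels)
shares = {bucket: round(n / len(labels), 2) for bucket, n in counts.items()}
print(shares)
```

With real data, the same tally over millions of conversations produces splits like Australia's 46% work / 7% coursework / 47% personal.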
🍞 Hook: When you finish your chores well, your family says, "Nice job!" 🥬 The Concept (task success):
- What it is: Task success is whether the AI produced a good, useful result.
- How it works: 1) Check if the answer fits the request; 2) See if the user accepts or iterates; 3) Rate the outcome quality.
- Why it matters: Without it, we'd count attempts but not whether they helped. 🍞 Anchor: If Claude's meeting agenda gets used in the actual meeting, that's task success.
Put together, these pieces reveal: Australia's strong per-person adoption, its broader-than-usual task buffet with less coding, a hands-on collaboration style, and state differences that trace job mix more than income.
03 Methodology
At a high level: Input (a sample of Claude.ai conversations from February 2026) → Clean and safely anonymize → Classify by use-case (work, coursework, personal) → Map tasks to job-related categories → Estimate task traits (complexity, time without AI, autonomy) → Compute usage indexes (like AUI) → Compare Australia's patterns to peers (Anglosphere, high-adoption, global).
Step-by-step recipe:
- Gather and protect data
- What happens: Researchers sampled a large set of Claude.ai conversations from February 2026 and applied strong privacy practices so no personal identities are revealed.
- Why this step exists: Without careful sampling and privacy, we could not analyze responsibly or compare places fairly.
- Example: Pull a random slice of 1M conversations globally, remove any direct identifiers, and keep only whatās needed to study patterns (like country or state, task type, and anonymized text features).
- Place conversations on the map
- What happens: Each conversation is attributed to a country and, within Australia, to a state or territory, using reliable, privacy-protecting signals.
- Why this step exists: Without location, we can't compare Australia to other countries or NSW to Victoria.
- Example: A conversation is tagged as coming from Australia, New South Wales.
- Classify the use-case mix (work, coursework, personal)
- What happens: A classifier sorts each conversation into broad buckets: work tasks (like drafting a business email), coursework (like outlining a history essay), or personal (like planning a fitness routine).
- Why this step exists: Without these buckets, we'd miss big differences across how people rely on AI.
- Example: "Summarize notes for a quarterly sales meeting" → work; "Explain photosynthesis at a 9th-grade level" → coursework; "Create a weekly meal plan" → personal.
- Map tasks to job categories 🍞 Hook: Think of a big filing cabinet where every task card goes into the right drawer. 🥬 The Concept (O*NET task taxonomy and SOC groups):
- What it is: O*NET is a job and task dictionary; SOC groups are broad job families (like Management or Computer & Mathematical).
- How it works: 1) Identify the kind of work a prompt describes; 2) Match it to an O*NET task; 3) Roll tasks up into SOC groups for easier comparisons.
- Why it matters: Without a shared dictionary, "write a project plan" and "draft a team roadmap" might get counted separately when they're really the same kind of task. 🍞 Anchor: "Draft a product requirements document" gets mapped to tasks in Management or Business Operations, not to Education or Healthcare.
- Estimate task traits (complexity, time without AI, autonomy)
- What happens: For each conversation, models estimate: a) schooling years needed to understand the prompt (complexity), b) how long a skilled person would take without AI (duration), and c) how much freedom the user gives Claude (autonomy, from collaborative to delegated).
- Why this step exists: Without task traits, we can't tell whether people bring Claude short errands or deep research, or whether they're co-piloting or handing over the wheel.
- Example: "Summarize a 2-page article and suggest 3 action items" → moderate complexity, low duration; autonomy low-to-medium because the user will review and choose.
- Compute population-aware adoption (AUI) 🍞 Hook: Comparing ten cookies eaten by a family of four vs. a family of forty isn't fair unless you adjust for family size. 🥬 The Concept (AUI formula):
- What it is: AUI is the ratio of a placeās share of Claude use to its share of working-age population.
- How it works: Compute usage share and population share, then divide.
- Why it matters: Without this, big places look overactive just because they're big. 🍞 Anchor: Australia's AUI is about 4.1, meaning roughly four times the expected use per person.
Formula (with example): AUI = (share of global Claude usage) ÷ (share of global working-age population). Concrete example: if usage share is 1.6% and working-age population share is 0.39%, then AUI = 1.6 / 0.39 ≈ 4.1.
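That ratio is simple enough to compute directly. Here is a minimal Python sketch; the function name is ours, and the input shares are the report's rounded figures for Australia (1.6% of conversations, implying roughly 0.39% of working-age population at AUI ≈ 4.1), so the result is approximate:

```python
def aui(usage_share: float, working_age_pop_share: float) -> float:
    """Anthropic AI Usage Index: a place's share of Claude usage divided
    by its share of working-age population. Values above 1.0 mean more
    use than population size alone would predict."""
    if working_age_pop_share <= 0:
        raise ValueError("population share must be positive")
    return usage_share / working_age_pop_share

# Australia: ~1.6% of conversations vs. ~0.39% of working-age population.
print(round(aui(0.016, 0.0039), 1))  # prints 4.1
```

The same function works one level down: plug in a state's usage share and its working-age population share to get state-level AUI.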
- Measure task diversity and concentration
- What happens: Researchers check how much of Australia's usage is covered by its top 100 tasks. Lower coverage means people spread AI across more kinds of tasks.
- Why this step exists: Without a diversity check, a few popular tasks could hide a lot of smaller but important uses.
- Example: If the top 100 tasks cover about half of conversations, that's broader than a place where the top 100 cover most conversations.
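This diversity check amounts to a top-k coverage calculation. A Python sketch, with a toy four-task list and k=2 standing in for the study's top 100:

```python
from collections import Counter

def top_k_coverage(task_counts: Counter, k: int) -> float:
    """Fraction of all conversations covered by the k most common tasks.
    Lower coverage means usage is spread across a wider variety of tasks."""
    total = sum(task_counts.values())
    top = sum(n for _, n in task_counts.most_common(k))
    return top / total

# Toy data: two popular tasks plus a smaller tail of rarer ones.
counts = Counter({"draft email": 40, "meal plan": 30,
                  "debug script": 20, "trip itinerary": 10})
print(top_k_coverage(counts, k=2))  # prints 0.7
```

A place where the top tasks return a coverage near 1.0 concentrates its usage; Australia's lower top-100 coverage is what the study reads as a broader task mix.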
- Compare to peers (Anglosphere, high-adoption economies, global)
- What happens: Australia's numbers are placed alongside the USA, UK, Canada, New Zealand, Ireland, other high-AUI countries, and the global sample.
- Why this step exists: Without context, a number like "47% personal use" is hard to interpret.
- Example: Australia's coding-related share is lower than the global average and similar to Anglosphere peers, but Australia fills that gap more with Management and Office tasks than with Education.
- Interpret state-by-state patterns
- What happens: The study compares state AUI values to income and workforce composition.
- Why this step exists: Without this, we couldn't see that job mix, not income, best explains adoption across Australia's states and territories.
- Example: Western Australia has high income but a mining-heavy workforce; it shows lower AI use per person. NSW and Victoria have more roles where AI fits daily tasks (like finance and professional services), so they use more.
The secret sauce:
- Fair scaling: Using AUI keeps comparisons honest.
- Task lenses: Complexity, time, and autonomy let us see not just "how much," but "how" and "why."
- Common dictionary: O*NET and SOC groups make task comparisons consistent.
- Peer context: Side-by-side comparisons prevent misreading isolated numbers.
What breaks without each step:
- No privacy: unethical and unusable data.
- No location: no geography insights.
- No use-case mix: can't tell work from school from personal.
- No task mapping: apples-to-oranges comparisons.
- No task traits: we'd confuse quick chores with deep research.
- No AUI: big places would seem overactive just for being big.
- No peer context: numbers lose meaning.
- No state analysis: we'd miss the job-mix story.
A tiny walkthrough example:
- Input: "Help me draft a friendly reminder email to clients about invoice due dates."
- Use-case: Work.
- Task mapping: Office & Administrative Support → Workplace correspondence.
- Traits: Moderate complexity (clear professional tone), short duration without AI, low autonomy (user reviews and sends).
- Inclusion in stats: Adds to Australia's work share, office tasks, low autonomy bucket, and nudges task diversity higher if that task type is less common.
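Putting the walkthrough together, each conversation ends up as one tagged record that feeds the statistics. A minimal sketch; the field names and the region value are our invention for illustration, not the study's published schema:

```python
from dataclasses import dataclass

@dataclass
class TaggedConversation:
    """One conversation after the geography, use-case, task-mapping,
    and trait-estimation steps. Fields are illustrative."""
    country: str
    region: str
    use_case: str    # "work", "coursework", or "personal"
    soc_group: str   # broad O*NET/SOC job family
    complexity: str  # rough schooling level needed to follow the prompt
    duration: str    # estimated time the task would take without AI
    autonomy: str    # "collaborative" through "delegated"

# The invoice-reminder email from the walkthrough, assuming a NSW user:
record = TaggedConversation(
    country="Australia",
    region="New South Wales",
    use_case="work",
    soc_group="Office & Administrative Support",
    complexity="moderate",
    duration="short",
    autonomy="collaborative",
)
print(record.use_case, record.soc_group)
```

Aggregating millions of records like this one is what yields the shares, diversity measures, and autonomy profiles reported in the results.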
That's the recipe the researchers used to uncover Australia's distinctive, collaborative, and broad AI usage profile.
04 Experiments & Results
The test: The researchers measured how much Australians use Claude compared to expectation (AUI), what they use it for (use-case mix and task mapping), how they use it (autonomy), and what the tasks are like (complexity and estimated time without AI). They also checked how usage spreads across many tasks (diversity) and how patterns vary by state and territory.
The competition: Australia's results were compared to three groups: 1) other Anglosphere countries (USA, UK, Canada, New Zealand, Ireland), 2) high-adoption economies (countries with high AUI), and 3) all countries in the sample. Inside Australia, states and territories were compared against each other.
The scoreboard (with context):
- Adoption level: Australia accounts for 1.6% of global Claude.ai conversations and ranks eleventh by share, but its AUI is about 4.1, like scoring four goals when the average team scores one. That's top-tier per-person adoption, behind only a small group led by Singapore, Israel, Luxembourg, Switzerland, the United States, and Canada.
- State distribution: New South Wales (about 37% of Australian conversations) and Victoria (about 31%) lead both by size and by higher-than-expected per-person use. Other states/territories sit below expected levels, with Western Australia, Tasmania, and the Northern Territory lowest. That pattern lines up with workforce composition: where there are more office, finance, professional services, and tech roles, usage is higher.
- Use-case mix: Australia's split is about 46% work, 7% coursework, and 47% personal. That's typical of high-income, high-adoption places: less coursework, more personal use. Imagine a school where older students borrow more how-to guides and life-planning books than textbooks.
- Task diversity: Australia's usage is more spread out across many tasks than the global average. Its top 100 tasks cover a smaller share of use, signaling a broader buffet of jobs people bring to Claude.
- Task composition: Computer & Mathematical (coding-related) tasks are around 8 percentage points below the global baseline. The space is filled by Management, Office & Administrative Support, and some Life/Physical/Social Science work, so Australia leans more into planning, communication, and organization.
- Request clusters: Underrepresented: general coding help and document translation (Australia is mostly English-speaking). Overrepresented: personal life management, health and well-being support, workplace correspondence, business documents, and financial guidance.
- Task traits: Prompts require higher schooling years to understand than the global median (showing sophistication), yet the estimated time those tasks would have taken without AI is shorter than average. Picture people asking sharp, specific questions about jobs that aren't huge time sinks.
- Autonomy: Australia sits on the lower end of AI autonomy (around the 3.38 mark on a 1–5 scale), similar to Anglosphere peers: more co-pilot than autopilot.
Surprising findings:
- Income doesn't explain state differences: Unlike the cross-country pattern (where richer countries often adopt more), within Australia, income per person doesn't line up neatly with adoption. Instead, job mix does.
- Less coding, more everything else: Even with a smaller coding slice, Australia's overall adoption is very high, showing that AI's value goes far beyond software tasks.
- Sophisticated but short: Australians bring well-formed prompts for jobs that, even without AI, wouldn't take ages. That's a recipe for quick, high-quality wins.
- Collaborative culture: The lower autonomy score hints at a user base that keeps control, reviews drafts, and tunes outputs instead of handing over the whole task.
Put simply: Australia is an enthusiastic, hands-on AI user community, spreading Claude across many everyday professional and personal tasks rather than concentrating mainly on code.
05 Discussion & Limitations
Limitations:
- Time window: The snapshot is from February 2026. Usage patterns can shift with new features, marketing, school calendars, or economic changes. A single month can't tell the whole story.
- Classification errors: Automated tagging of use-case mix, task categories, complexity, duration, and autonomy can make mistakes, especially on unusual or very short prompts.
- Location uncertainty: State/territory attribution uses privacy-preserving signals that may be imperfect at fine geographic levels.
- Correlation vs. causation: Seeing job mix line up with usage doesn't prove job mix causes usage; other hidden factors (like local tech communities or procurement rules) could matter.
- Coverage scope: This analysis focuses on Claude.ai consumer conversations. It may undercount enterprise or API usage and won't capture other AI tools people might use alongside Claude.
- Generalization: Findings about Australia may not transfer to countries with different languages, education systems, or workforce structures.
Required resources to use this approach:
- Anonymized conversation samples at scale, with safe handling and governance;
- Reliable classifiers for use-case mix and task mapping to O*NET/SOC;
- Models or rules to estimate complexity (schooling years), duration (no-AI hours), and autonomy (collaboration vs. delegation);
- Population data (working-age shares) to compute AUI;
- Benchmarks for peer comparisons (Anglosphere, high-adoption, global).
When not to use or trust this approach:
- Very small samples (few conversations) where random noise dominates;
- Highly confidential domains where even anonymized text is off-limits;
- Periods with unusual spikes (like a viral trend) that distort typical usage;
- Settings where O*NET/SOC doesn't map well (e.g., novel tasks in emerging fields).
Open questions:
- How do autonomy and task complexity evolve as people gain AI fluency? Do users gradually delegate more, or do co-pilot habits persist?
- Which training programs best help non-coding roles (management, office, sales) unlock more value from AI?
- How does enterprise adoption (beyond consumer Claude.ai) change the map across states and sectors?
- What fairness or access gaps appear across regions, and what policies or tools close them?
- Can better measurement of task success (beyond proxies) sharpen our understanding of real-world impact?
06 Conclusion & Future Work
Three-sentence summary: Australians use Claude far more per person than population size would predict, with usage concentrated in New South Wales and Victoria and spread across a wide variety of non-coding tasks. Their prompts are sophisticated but often describe shorter tasks, and they tend to collaborate with Claude rather than fully delegate, mirroring patterns seen in other Anglosphere countries. Inside Australia, adoption aligns more with workforce composition than with income, highlighting the importance of job mix in AI uptake.
Main achievement: The study builds a fair, task-aware picture of AI adoption in Australia, using population-adjusted metrics (AUI), consistent task mapping (O*NET/SOC), and task traits (complexity, time, autonomy) to reveal a broad, collaborative usage style beyond coding.
Future directions: Track trends over longer periods; link consumer and enterprise usage; refine autonomy and success metrics; explore regional training and policy pilots that help underrepresented sectors and states; and deepen comparisons across languages and education systems.
Why remember this: It shows that AI's biggest wins aren't only in code; they're also in everyday planning, writing, organizing, and decision support. And it proves that who benefits most depends less on how rich a place is and more on whether people's daily jobs naturally fit an AI co-pilot. That insight helps educators, businesses, and governments aim resources where they make the most difference.
Practical Applications
- Design targeted AI training for office, management, and professional services roles in NSW and Victoria to accelerate existing momentum.
- Create outreach and upskilling programs in lower-AUI states focused on everyday tasks (correspondence, planning, documentation) rather than coding.
- Encourage collaborative AI workflows (draft → review → edit) in businesses to match Australia's lower-autonomy style and improve safety.
- Integrate AI co-pilots into financial guidance, workplace correspondence, and business document preparation to capture high-use clusters.
- Develop school policies that emphasize AI as a study partner for understanding and outlining rather than doing assignments end-to-end.
- Adopt task libraries mapped to O*NET/SOC so teams can discover where AI fits their daily work and measure impact.
- Pilot government service templates (forms, notices, FAQs) with AI-assisted drafting, keeping human review for final decisions.
- Use AUI-style metrics in organizations to spot departments that could benefit from AI but currently underuse it.
- Build quick-win playbooks for short, sophisticated tasks (summaries, brief analyses) that match Australia's prompt style.
- Support health and well-being use with vetted resources and clear disclaimers, aligning with the overrepresented personal clusters.