How First-Year Students Actually Use AI in Permitted Assessments
Interactive Dashboard: Empirical Evidence from 167 Students
Research Context
The Challenge: Whilst institutional policies on generative AI proliferate across higher education, empirical evidence of how students actually navigate these tools in authentic assessment contexts remains limited. Assumptions about student behaviour, ranging from fears of passive "copy-paste" practices to concerns about uncritical AI acceptance, dominate discourse without validation through direct observation.
What We Studied: This research examined first-year ICT students at Central Queensland University during a supervised in-class assessment where AI use was explicitly permitted under institutional "AI Collaborate" guidelines. Students analysed a peer-reviewed research article with permission to use generative AI tools, then completed an embedded 12-item reflection instrument capturing both quantitative behavioural patterns and qualitative reflections.
How We Collected Data: During Week 12 of Term 3, 2025, 167 students completed behavioural questions documenting their AI interaction patterns (prompt frequency, verification strategies, revision approaches) and 163 provided qualitative reflections on challenges, support needs, and institutional frameworks. The assessment was designed using SAGE (Structured AI-Guided Education) verification protocols, requiring systematic cross-referencing with source materials.
Why This Matters: This dashboard presents empirical evidence challenging prevailing assumptions about student AI adoption. Rather than passive acceptance or minimal-effort shortcuts, the data reveals sophisticated verification behaviours, strategic interaction patterns, and a critical competency-confidence inversion wherein students demonstrate higher competence than confidence. These findings position students as partners in developing AI literacy frameworks rather than subjects requiring policing.
Unit: COIT11239 (Professional Communication for ICT)
Participants: 167 valid responses
Context: First-year, first-term students
Framework: SAGE (Structured AI-Guided Education)
✓ Data verified
Key Findings Summary
The dashboard below presents twelve dimensions of student AI engagement, revealing four behavioural typologies (Strategic Optimisers 32%, Dialogic Learners 28%, Cautious Adopters 23%, Experimental Users 17%), a "Goldilocks Zone" of optimal interaction (4-8 prompts, 55.1%), and critical gaps between demonstrated competency and expressed confidence. Students prioritise technical guidance (85%) over integrity frameworks (19.2%), seeking partnership rather than policing in AI literacy development.
73% Systematic Verifiers: debunking the "lazy cheater" myth
55.1% Goldilocks Zone: optimal interaction at 4-8 prompts
95% Strategic Pivot: growth mindset evident
#1 Accuracy Anxiety: top technical challenge
1. Initial Approach Strategies (Q1)
What we asked: How did you begin interacting with the AI tool: conversation style, upload everything at once, or something different for each section?
Meaning Behind the Data: How do students begin their AI interaction? The data reveals that 39.5% adopt conversational paradigms, treating AI as a dialogic partner rather than a search engine. Meanwhile, 36.5% uploaded the article first before addressing the questions, showing systematic sequential processing. Only 7.2% attempted monolithic "upload and extract" approaches, suggesting students intuitively understand AI limitations and prefer structured engagement over one-shot queries.
Initial Approach Distribution (Q1)
How students began their AI interaction
2. Behavioural Typologies (Cluster Analysis)
What we analysed: By examining patterns across prompt frequency, verification intensity, and revision strategies, we identified four distinct student types.
Meaning Behind the Data: Cluster analysis of student behaviours reveals that students are not a monolith. They fall into four distinct archetypes. While 32% are "Strategic Optimisers" who balance speed and quality, nearly 28% are "Dialogic Learners" who engage in deep, conversation-style learning. This suggests support strategies must be tailored: Optimisers need templates, while Dialogic Learners need help with synthesis and closure. A minimal sketch of one possible clustering approach follows the typology list below.
Type A: Strategic Optimisers (32%). Balance speed and quality.
Type B: Dialogic Learners (28%). Deep, conversation-style engagement.
Type C: Cautious Adopters (23%). Integrity-focused. 1-3 prompts. High verification.
Type D: Experimental Users (17%). Risk-takers. Variable prompts. Fluid boundaries.
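The dashboard does not specify the clustering procedure used; the sketch below illustrates one plausible approach, assuming k-means with k=4 over standardised behavioural features. The feature names and synthetic data are illustrative placeholders, not the study's dataset.

```python
# Illustrative sketch only: k-means (k=4) over standardised behavioural features,
# mirroring the four typologies reported above. Feature names and the synthetic
# data are placeholders, not the study's actual responses.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Hypothetical per-student features: [prompt_count, verification_checks, revision_depth]
features = rng.integers(low=[1, 0, 0], high=[16, 10, 4], size=(167, 3)).astype(float)

# Standardise so no single feature dominates the distance metric
scaled = StandardScaler().fit_transform(features)

# Partition the cohort into four behavioural clusters
kmeans = KMeans(n_clusters=4, n_init=10, random_state=42)
labels = kmeans.fit_predict(scaled)

# Report cluster sizes as percentages of the cohort
for cluster_id, count in zip(*np.unique(labels, return_counts=True)):
    print(f"Cluster {cluster_id}: {count / len(labels):.1%} of students")
```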
3. Learning Trajectory: Prompt Frequency (Q2)
What we asked: How many separate times did you interact with the AI during this assessment—just once, a few times, or many iterations?
Meaning Behind the Data: How do students learn to prompt? The data reveals a "Goldilocks Zone" of 4-8 prompts (55.1%) where students feel most productive. Those who used 15+ prompts (4.8%) often reported "confusion spirals," where more AI interaction led to worse outputs. The convergence towards moderate engagement suggests students intuitively discover optimal interaction intensity through experiential learning.
Prompt Frequency: The "Goldilocks Zone" (Q2)
Efficiency vs. Confusion Spirals
4. Depth of Engagement: Revision Strategies (Q3)
What we asked: What did you do with AI outputs—use them as-is with minor tweaks, rewrite completely in your own words, or ask follow-up questions for refinement?
Meaning Behind the Data: A common fear is that AI leads to passivity. The data refutes this: 81% of students (n=135) engaged in "Deep" behaviours, either completely rewriting the output (37.7%) or asking follow-up questions (43.1%) to refine it. Only 14% (n=24) engaged in "Surface" behaviours such as minor formatting (8.4%) or response combination (6.0%). This indicates that, for most students, AI functions as a drafting partner that stimulates revision rather than a ghostwriter that replaces it.
Revision Strategy Breakdown (Q3)
Are students "Copy-Pasting"?
Detailed Breakdown
Four distinct revision approaches
Follow-up Questions (43.1%, n=72)
Iterative refinement through continued dialogue
Verification-Rewrite (37.7%, n=63)
Checked article then rewrote AI output
Minor Formatting (8.4%, n=14)
Surface-level adjustments only
Combined Responses (6.0%, n=10)
Merged multiple AI outputs
5. The "SAGE" Verification Core (Q4)
What we asked: How often did you check AI responses against the original article—regularly, occasionally, or rarely?
Meaning Behind the Data: The study used the SAGE framework to require a "Verification" step. The results are striking: 73% of students complied systematically (47.9% verified several sections + 25.1% regularly compared), debunking the idea that students will always take the easy route. Only 5.4% demonstrated minimal verification ("AI seemed accurate, checked few times"), showing that with proper scaffolding, students develop sophisticated verification practices.
Verification Behaviour (Q4)
Did they check the article?
6. The Application Bottleneck (Q5)
What we asked: Which part of the assessment required the most rewriting after receiving AI help—summary, analysis, recommendations, or something else?
Meaning Behind the Data: This question tests where AI helps versus where it struggles. The data reveals a stark difference: only 24.6% needed major rewriting for "Article Summary" (AI excels at extraction), but 31.1% needed extensive rewriting for "Specific Recommendations" (AI struggles with context-specific application). This defines the "Application Bottleneck": students add the most value when applying AI-synthesized information to specific contexts, rather than in extracting information.
The "Application Bottleneck" (Q5)
Where AI failed & humans worked
7. Drivers Beyond Time: Equity and Inclusion (Q7)
What we asked: What factors influenced your decision to use AI—time pressure, English confidence, maintaining your voice, or something else? (Select up to 2)
Meaning Behind the Data: The popular assumption is that students use AI for efficiency ("I was running out of time"). While 73.1% cited time pressure, the second-highest driver was English confidence (46.7%). For international students (~60% of the cohort), AI acts as an equalizer, helping them express complex ideas without linguistic barriers. This shifts AI from "cheating shortcut" to "accessibility tool", a reframing crucial for equity in higher education.
8. Overall Engagement Patterns (Q8)
What we asked: Which best describes your overall interaction pattern—provided all info at once, built responses step-by-step, or engaged multiple times with verification checks?
Meaning Behind the Data: This question captures the broader workflow. 58.1% engaged in iterative verification workflows (multiple AI interactions interspersed with source checks), validating the SAGE emphasis on checkpoint-based engagement. Only 3.0% used "comprehensive single-input" approaches, showing that students naturally develop multi-turn strategies when assessment structures encourage verification.
Engagement Pattern Distribution (Q8)
Interaction strategies employed during the assessment
9. The Friction Points: Technical Challenges (Q9)
What we asked: What was most difficult about using AI effectively—creating useful prompts, verifying accuracy, maintaining your authentic style, or something else? (Select up to 3)
Meaning Behind the Data: This reveals where students struggle. Contrary to expectations, the #1 challenge was "Verifying AI Accuracy" (77.8%), not prompt creation. The second challenge was "Maintaining Authentic Style" (64.7%). This pattern suggests students need support for epistemic practices (how to verify) and identity concerns (preserving voice), not just technical operation. It's about confidence and competency, not just knowing how to click buttons.
Challenge Hierarchy: The Friction Points (Q9)
What was hardest for students?
10. Retrospective Learning: Strategic Pivot (Q10)
What we asked: If you could redo the assessment, would you change your AI approach—engage more iteratively, verify more thoroughly, or keep the same strategy?
Meaning Behind the Data: This question tests metacognitive awareness (the ability to reflect on one's own learning). The result: 95.2% said they would modify their approach, with 46.7% wanting to "engage more iteratively". This is extraordinary—it shows students are not blindly using AI, but actively learning and refining their strategies. The high modification rate demonstrates a growth mindset, where students view AI interaction as a learnable skill requiring improvement through practice.
Retrospective Pivot (Q10)
"If you could do it again, what would you change?"
11. The Support Wishlist: Student Priorities (Q11)
What we asked: What kind of institutional support would most help you use AI correctly and ethically—verification guidance, prompt examples, practice opportunities, or integrity frameworks?
Meaning Behind the Data: Instead of assuming what students need, we asked them. The result: 51.5% requested "Verification Guidance" (how to check AI outputs), and 33.5% requested "Prompt Examples" (how to interact effectively). Together, that's 85% requesting technical skills. Only 19.2% requested "Integrity Frameworks" (rules). This shows students want competency development, not mere compliance. They're asking: "Teach me how to do this right," not "Tell me what's against the rules."
Prioritized Support Needs (Q11)
Specific interventions requested by students
12. Thematic Analysis of Concerns and Needs (Q12)
What we asked: In your own words (2-3 bullet points), what more could CQUniversity do to help you use AI correctly and ethically?
Meaning Behind the Data: The open-ended responses (n=163) were computationally analysed using semantic clustering. The result: 45% expressed "Policy Clarity" concerns (e.g., "crystal clear guidelines," "what's allowed vs. prohibited"), far exceeding practical training requests (35%). This reveals widespread anxiety about accidental misconduct through AI misuse. Students aren't asking "How do I cheat?"—they're asking "How do I avoid accidentally cheating?" This is a legitimation crisis, not a compliance crisis.
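The dashboard notes only that the open-ended responses were computationally analysed via semantic clustering, without naming the tooling; below is a minimal sketch of one common approach, assuming sentence embeddings (via the sentence-transformers library) followed by k-means. The model choice, cluster count, and example responses are assumptions for illustration, not the study's actual pipeline or data.

```python
# Minimal sketch of semantic clustering of open-ended responses (illustrative only).
# Assumes sentence-transformers and scikit-learn; the model name, number of themes,
# and the example responses are placeholders, not the study's setup or data.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

responses = [
    "We need crystal clear guidelines on what is allowed versus prohibited",
    "Clarify how AI use should be declared in submissions",
    "More workshops on writing effective prompts",
    "Show worked examples of acceptable AI use in assignments",
]

# Embed each response as a dense semantic vector
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(responses)

# Group semantically similar responses into candidate themes
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(embeddings)

for theme, text in sorted(zip(labels, responses)):
    print(f"Theme {theme}: {text}")
```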
13. Institutional Roadmap
What this section provides: Based on the empirical evidence from 167 students, this roadmap outlines specific, evidence-based actions institutions can take.
Meaning Behind the Data: Based on the "Support Wishlist" and behavioral gaps, this roadmap outlines a phased institutional response. It moves from the immediate need for clarity (stopping the anxiety), to the mid-term need for competency (building skills), to the long-term need for integration (embedding AI into the fabric of the degree).
Immediate Term (0-3 months)
Policy Clarification & Legitimation
Publish unified AI policy: Define acceptable use with concrete examples differentiating "idea generation" vs "drafting" vs "editing assistance"
Address three ambiguity layers: Definitional (what counts as AI assistance?), procedural (disclosure requirements), evaluative (how will it be assessed?)
Establish transparency protocols: Create AI declaration templates for assessments specifying usage levels
Communicate institutional stance: Move from "detection and punishment" to "partnership and development" messaging
Mid Term (3-12 months)
Competency Development via Exemplars
Develop verification frameworks: Create discipline-specific heuristics for fact-checking AI outputs (51.5% requested this)
Build prompt engineering library: Annotated case studies showing effective vs ineffective prompts across assessment types (33.5% demand)
Establish practice infrastructure: Consequence-free "sandbox" environments with formative feedback on AI orchestration quality (29.3% requested)
Create comparison exemplars: Side-by-side demonstrations of legitimate AI collaboration vs problematic over-reliance
Integrate into curriculum: Embed AI literacy explicitly within core units rather than treating as supplementary skill
Long Term (12-24 months)
Assessment Transformation & Infrastructure
Transition to process-based assessment: Evaluate AI interaction quality through prompt sequences, verification protocols, and revision strategies rather than output alone
Legitimate hybrid authorship models: Develop contribution frameworks explicitly articulating human value-addition in AI-mediated work
Emphasize application over extraction: Design assessments requiring context-specific reasoning where AI struggles (recommendations, contextual analysis) rather than information retrieval where AI excels (summarisation)
Position students as co-creators: Involve student representatives in AI policy development and revision based on evolving practices
Establish AI Support Hub: Centralized resource providing workshops, drop-in consultations, and discipline-specific guidance
14. Key Theoretical Implications
From Data to Theory: The empirical patterns reveal three critical insights that challenge existing assumptions about student AI adoption and inform institutional responses:
Competency-Confidence Inversion
Students demonstrate higher competency than confidence: 73% verify systematically yet 77.8% identify verification as their primary challenge. This paradox suggests:
Students possess greater AI orchestration capabilities than they recognize
Regulatory anxiety stems from lacking institutional validation, not from incompetence
The primary educational challenge involves validating emergent competencies rather than teaching fundamentally new skills
Institutional role shifts from instructor to certifier of organically developed capabilities
Temporal Reallocation, Not Reduction
Despite 73.1% citing time pressure as the primary driver, behavioral data reveals extensive engagement through iterative workflows (58.1%) and deep revision (81%). This demonstrates:
AI adoption transforms rather than eliminates intellectual labour requirements
Students invest substantial cognitive effort in verification and revision activities
Time gains from AI-assisted information gathering are reinvested in critical evaluation and contextual application
The "efficiency" narrative obscures the reality of shifted rather than reduced cognitive load
Equity Imperative
English language confidence emerges as the second-highest AI adoption driver (46.7%), reframing AI tools from efficiency mechanisms to accessibility technologies:
For linguistically diverse students (~60% of cohort), AI functions as an equalizer enabling intellectual contribution without linguistic barriers
Blanket AI prohibitions may inadvertently exacerbate existing inequities by removing scaffolding that levels linguistic playing fields
Institutional policies must carefully calibrate restrictions to preserve accessibility benefits whilst maintaining academic standards
AI integration represents an equity imperative, not merely an efficiency consideration