Lab Policy on the Use of Generative and Agentic AI

Neurocode / Cognitive Neuroscience of Learning Lab at Uni Hamburg
Version 3 May 2026

Preamble

This policy documents our current thinking on how members of the lab should use generative and agentic AI tools in their scientific work. It offers guidelines more than hard rules, reflecting that there are often grey zones rather than clean lines. We also recognize that both the technology and the surrounding culture are changing fast, so this document will need to be revised as tools, norms, and regulations evolve.

The policy is not meant to make our lives unnecessarily hard or to discourage the use of AI. Its goal is to encourage mindful use that separates good uses from those that harm the scientific system and our role in it.

While the policy does not name specific AI tools, concrete tools and workflows are an integral part of how AI shapes science in the lab. We maintain a section on the lab wiki (https://schucklab.gitlab.io/wiki/science/ai/) with practical tips, prompt patterns, tool setups, and lessons learned. Sharing what works (and what doesn't) is part of the policy; please contribute to the ongoing discussions and documentation.

The policy was inspired by several sources, including Bridgeford et al. on AI-assisted coding in science [1], the Gureckis Lab AI policy [2], and blog posts by Stefano Palminteri [3] and Michael Frank [4], among others. The ideas were entirely human-generated. The policy itself was written by Nico Schuck with assistance from Claude Opus 4.7.

Principles

1. Don't outsource your scientific training and development.

The lab's mission is to train scientists and to keep developing our skills. Producing papers is part of that, but not the whole point. That means we need to be deliberate about which thinking tasks we want to offload, and which we want to do ourselves. The core intellectual work, such as developing research questions and theories, testing them with experiments, interpreting results, and writing arguments, in most cases stays with you. Other tasks, like wordsmithing, reformatting, simple coding, or scaling models, might be fine to offload. Concretely:

  • Bring your ideas to paper or code in your own words first, then use AI to critique, compress, polish or scale. A useful default: let the AI challenge your writing rather than generate it.
  • Basic programming is a skill you should have; it teaches problem decomposition and algorithmic thinking. Implementing a model yourself is often where the thinking happens.
  • There are legitimate non-thinking tasks — wordsmithing, reformatting, boilerplate, SLURM jobs, online experiment plumbing, etc. — where leaning on AI is fine and often smart.
  • The line between desirable intellectual tasks (writing an argument) and burdensome ones (wordsmithing) is often fuzzy. Being unsure which side you're on can itself be a signal to do that part yourself.

2. Verify and validate.

AI outputs will sound confident and smart, even when they are incorrect. Be skeptical. Check your work as you go, and check it especially carefully before anything goes public.

  • Know how you would verify or falsify what the model produces. Articulate edge cases, expected inputs and outputs, and likely failure modes. AI itself can be good at suggesting cases you would have missed, but beware of sycophancy.
  • Prefer code over trust. Where possible, write a script that checks what the AI produced rather than asking the AI to check itself. A script that resolves every DOI beats "do these citations look right?" (see the sketch after this list).
  • Notice and remember the kinds of mistakes models make (but stay aware that these shift as models change).
  • Before "shipping," test the final output thoroughly. Code: read it line by line when it matters; shared code should be documented and cleaned of sloppy and unused bits. Text: read every sentence carefully and ask whether it has the intended meaning. Figures: proof them just as carefully. Citations: verify that every DOI resolves, that every URL is live, and that a human has read at least the abstract of every cited paper to confirm it is accurately characterized.
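
To make "code over trust" concrete, here is a minimal sketch (in Python, using only the standard library) of such a checker: it pulls everything that looks like a DOI out of a bibliography file and tests whether it resolves via https://doi.org. The regex and the HEAD-request approach are illustrative assumptions rather than lab-mandated tooling; some publishers reject HEAD requests or unfamiliar user agents, so treat a FAILED line as a prompt to check by hand, not as proof of a fabricated citation.

    # doi_check.py -- sketch of a "code over trust" citation check.
    # Assumptions: DOIs that resolve via https://doi.org exist, and the
    # regex below is a rough approximation of DOI syntax.
    import re
    import sys
    import urllib.request

    DOI_PATTERN = re.compile(r"10\.\d{4,9}/\S+")

    def doi_resolves(doi, timeout=10.0):
        """Return True if https://doi.org/<doi> leads to a live page."""
        req = urllib.request.Request(
            "https://doi.org/" + doi,
            method="HEAD",
            headers={"User-Agent": "doi-check/0.1"},
        )
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return resp.status < 400
        except Exception:
            return False

    if __name__ == "__main__":
        # Usage: python doi_check.py references.bib
        text = open(sys.argv[1], encoding="utf-8").read()
        for doi in sorted(set(DOI_PATTERN.findall(text))):
            doi = doi.rstrip(".,;}\"'")  # strip BibTeX braces and punctuation
            print("ok    " if doi_resolves(doi) else "FAILED", doi)

Note that this only verifies that a DOI exists, not that the paper says what your text claims; the human-reads-the-abstract rule above still applies.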

3. Avoid risks.

Be deliberate about what access AI tools get to your data, files, and accounts.

  • Do not give AI tools direct access to participant data or identifiable information; grant access only to fully anonymized data, and only when clearly necessary. Prefer tools that give you control over data processing and comply with current data protection laws.
  • Give tools only the access they actually need. Restrict agentic tools to specific project folders rather than your whole machine.
  • Keep personal and professional contexts separate (e.g., different accounts, different tool instances). Avoid mixing them to reduce context bleed and privacy risks.
  • Be aware of prompt injection: content from emails, PDFs, web pages, or issue trackers can contain instructions aimed at the model. Treat agent actions on untrusted content with extra care.
  • Never share passwords, API keys, SSH keys, or unpublished data you don't control with external AI services (a quick pre-flight check is sketched below).
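
As one way to follow the last point in practice, here is a minimal sketch (again in Python, standard library only) of a pre-flight scan for obvious secrets in a project folder before pointing an agentic tool at it. The patterns are illustrative assumptions and will miss plenty, so a clean scan is no guarantee of safety; but any hit is a clear signal to stop and clean up first.

    # secret_scan.py -- sketch of a pre-flight check before granting an
    # agent access to a folder. The patterns are examples, not an
    # exhaustive secret scanner.
    import re
    import sys
    from pathlib import Path

    SECRET_PATTERNS = {
        "private key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
        "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
        "generic credential": re.compile(
            r"(?i)(api[_-]?key|token|secret)\s*[=:]\s*\S{8,}"
        ),
    }

    def scan(folder):
        """Print every file and line that matches a secret pattern."""
        hits = 0
        for path in Path(folder).rglob("*"):
            if not path.is_file():
                continue
            try:
                text = path.read_text(encoding="utf-8")
            except (UnicodeDecodeError, OSError):
                continue  # skip binaries and unreadable files
            for lineno, line in enumerate(text.splitlines(), start=1):
                for label, pattern in SECRET_PATTERNS.items():
                    if pattern.search(line):
                        print(f"{path}:{lineno}: possible {label}")
                        hits += 1
        return hits

    if __name__ == "__main__":
        # Usage: python secret_scan.py ./my_project
        sys.exit(1 if scan(sys.argv[1]) else 0)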

4. Invest in using AI tools well.

The same model can produce a great or a useless answer depending on how you set up the interaction, and practices that worked six months ago may already be outdated. Treat learning to use these tools well as part of your training as a scientist, and share what you learn with the lab; the wiki is where we collect current best practices.

  • The craft has several components worth investing in: managing context (what you put in front of the model and what you keep out), prompting clearly, scoping tasks tightly enough that the model can actually succeed, and iterating when the first attempt isn't good enough.
  • See the wiki for some guidance to get you started: https://schucklab.gitlab.io/wiki/science/ai/

5. Authorship, transparency, and responsibility.

You are the author of everything you make public, such as code, figures, text, or apps, and you should be open about how it was made. Using AI tools does not change or dilute your full responsibility for everything you author.

  • AI is a tool, not a co-author. No AI system will be listed as an author on work from this lab, regardless of how much it contributed — this aligns with the policies of all major journals and AI/ML conferences.
  • "The model said so" is not a defence for errors, sloppy analyses, or problematic claims. If you wouldn't be comfortable defending a sentence, a figure, or a block of code on your own, don't make it public.
  • Be open about how you're using AI at every project stage. You can use AI in any way that advances your intellectual work, including fast prototyping with limited understanding, which can be valuable for initial ideas. The condition is that you are transparent with your collaborators and PI about what you know and what you don't, and that you follow up with solid work.
  • Disclose substantive AI use in manuscripts, theses, and code following journal, funder, and university guidelines.
  • For reviewing, follow the confidentiality rules of the journal or funder; manuscripts and proposals under review are confidential and should not be shared with external AI services unless those rules explicitly permit it.
  • If you discover an error in AI-assisted work after submission — a hallucinated citation, a bug in generated code — report it immediately.

References

  1. Bridgeford et al., “Ten Simple Rules for AI-Assisted Coding in Science,” arXiv:2510.22254 (2025).
  2. T. Gureckis, “Lab AI Policy,” todd.gureckislab.org/2026/03/06/genaipolicy (2026).
  3. S. Palminteri, “Cognitive Modelling Research in the Era of Agentic Large Language Models,” Medium (2026).
  4. M. C. Frank, “Using AI to improve (not automate away) academic research,” Babies Learning Language blog (2026).
