We’re running a longitudinal study at MIT on how AI tools are changing research workflows in the social sciences, and how social scientists are adapting to these new tools.
As part of the study, half of participants will be offered access to Claude Code for two months, via a Claude Max subscription. Claude Code is a command-line AI assistant that can write and debug your analysis code, manage your data pipeline, scrape and clean datasets, and automate tedious research tasks. These accounts have been donated by Anthropic. The other participants will be compensated over the course of the study.
To be eligible for the study, you need to be a researcher (3rd-year PhD student to full professor) who works with quantitative data and has an empirical paper from the last six months, and you need to complete an initial survey. The survey (~8 minutes) covers your current research, workflows, and time use. You can take it here:
https://mit.co1.qualtrics.com/jfe/form/SV_3k5gra5DNrs0o4u
For completing the initial survey, you’ll receive a $10 gift card. Your responses are confidential. We use participant IDs rather than names, and report only aggregate results.
Thank you for considering this. Please let us know if you have any questions.
Thomas Lyttelton and Nathan Wilmers
On behalf of the MIT research team
Institute for Work and Employment Research
MIT Sloan
wflstudy@mit.edu
Informed consent information
Overview: The purpose of this study is to examine how generative artificial intelligence is changing social science.
What You Will Be Asked to Do: You will complete this baseline survey (approximately 8 minutes) about your research workflow and recent research output. As part of this survey, you will be asked to share an empirical paper (published paper, working paper, or draft) from the last six months. You will also be contacted for brief follow-up surveys on similar topics at 3 and 6 months. You may be randomly assigned access to Claude Code and asked to share your experiences of using these tools. If you are offered access to generative AI tools, you will be expected to follow their terms of service, which may include retention of your data for AI training. You should also ensure that you are following the AI policy at your institution.
Voluntary Participation: Your participation is entirely voluntary. You may decline to answer questions and withdraw from the study at any time without adverse consequences.
Benefits: Your participation will contribute to research that helps the social science community understand how research practices and productivity are evolving in response to new technologies, including AI tools. The findings could inform how researchers adapt to these changes. You will receive a $10 Amazon gift card for completing this first survey. You will be offered access to Claude Code or offered additional gift cards with the two future surveys.
Risks: There are minimal risks to participants. The questions about workflow require careful reflection but should involve minimal discomfort. There is a risk to your privacy and to the confidentiality of your responses if you are identifiable in publicly shared data, or in the event of a data breach. There is also a risk to your intellectual property if your research paper is released as part of a data breach. We mitigate the risk of identifiability in public data by using participant and institution IDs instead of names. We will not release research papers uploaded by participants. We mitigate the risk of a data breach by password-protecting and encrypting data, and storing the participant ID key separately from the rest of the data. COUHES will be contacted in the event of an emergency or adverse event.
Confidentiality and Privacy: Your responses will be kept strictly confidential and used only for research purposes. We will separate identifying information from materials used for analysis, use participant IDs rather than names, and report only aggregated, de-identified results. The text of the research paper that you upload will be coded and then separated entirely from the survey responses. We will not share the research paper that you provide, and we will delete it once the final paper produced by the research team using the data is accepted for publication or three years after the final survey, whichever is earlier. We also will not report any identifying information from your research outputs, such as titles or abstracts. Your research paper will not be used for AI or LLM training, and we will not run it through any AI writing checker. We are coding the paper for length, target journal, number of results, and related features.
Future Data Sharing: Your non-identifiable data, including survey results, collected as part of the research will be stored, used for future research studies, and shared with other researchers for future research studies without additional informed consent from you or your legally authorized representative. Your data may be shared with the Open Science Framework, other public repositories, academic research institutions, non-profit entities, and/or for-profit entities.
Funding: This is an MIT research project, with funding provided by the Gates Foundation and Anthropic.
Contact Information: If you have questions about this study, please contact Nathan Wilmers and Tom Lyttelton at wflstudy@mit.edu. You are not waiving any legal claims, rights, or remedies because of your participation in this research study. If you feel you have been treated unfairly, or you have questions regarding your rights as a study participant, you may contact the Chairman of the Committee on the Use of Humans as Experimental Subjects, M.I.T., Room E25-143B, 77 Massachusetts Ave, Cambridge, MA 02139, phone 1-617-253-6787.