Make sure you know each professor's AI policy! Depending on the class and the assignment, using AI could be considered plagiarism or academic misconduct.
Generative Artificial Intelligence (AI) tools and Large Language Models (LLMs) like the ones listed below can help you think differently about your research, but they will not do the research for you. A few things to keep in mind EVERY TIME you use tools like these:
Always Cross-Check Translations: While these tools can translate historical texts, it's important to double-check any translation against other resources, especially for complex or nuanced phrases in older languages or dialects. Machines can miss cultural or context-specific meanings.
Context is Key: These tools can provide good general context, but you should still read widely around the topic. They may not capture subtle historical debates or the most recent research trends, which is why consulting historians and secondary sources is critical.
Be Aware of Interpretations: These tools can analyze primary sources, but every interpretation reflects choices about what's important or relevant. Use your own judgment and think critically about the conclusions you draw.
Historical Uncertainty: Some historical documents are ambiguous or incomplete. These tools will give their best guess, but when there is uncertainty (especially with paleography), note that uncertainty and seek additional expert input.
Bias in Modern Databases: These tools can link primary sources to modern databases and research, but those sources can carry biases of their own. Always think critically about the perspectives modern sources represent, especially if they focus on dominant narratives.
Using these tools effectively requires keeping a critical eye and balancing them with traditional research methods.
Questions? Contact reference@carleton.edu