AI tools for research
In this guide, you will find artificial intelligence (AI) tools based on large language models (LLMs). The guide summarizes selected tools ranging from general chatbots to specific tools for literature search, mapping and subsequent analysis, as well as tools for data analysis. The aim is not to provide an exhaustive list for each category, but to offer a curated selection based on each tool’s quality and accessibility. Keep in mind that LLMs can sometimes hallucinate and that they generally work best in English. It is therefore advisable to approach the tools and their outputs critically. If you are interested in using AI for academic writing, visit our guide Tools to support writing.
Need help with something? Have a recommendation for an AI tool? Contact us by email or schedule a consultation.
Artificial intelligence is transforming the way we approach scientific knowledge. It opens up new possibilities in data analysis, resource discovery, and personalized support. At the same time, however, it brings challenges related to data protection, transparency, reproducibility of outputs, issues of authorship, and ethical and academic standards.
Ethical standards for working with AI
As a general rule, it is important to use artificial intelligence transparently and to acknowledge its use or cite it properly. This is especially true when working with tools that have an integrated chatbot. The European Commission’s “Living guidelines on the responsible use of generative AI in research” set out basic recommendations for the use of AI:
- The user (student, researcher) is ultimately responsible for the AI outputs used.
- When working with AI tools, it is important to be aware of their limitations (inaccuracy, bias, hallucinations, sycophancy).
- AI should be used transparently, especially in cases where AI has significantly influenced your output.
- Data security must be ensured and sensitive data safeguarded.
How to acknowledge the use of AI?
Acknowledging the use of AI may be required in cases where you have relied on an AI tool for language corrections, stylistic editing, or brainstorming. However, this is not a general rule: it is always necessary to check your university or publisher’s AI policy or guidelines. If your institution does not provide a template for such a statement, you could use the following template provided by the University of Newcastle instead. This is just one example of what AI acknowledgement may look like:
"I acknowledge the use of ChatGPT (https://chat.openai.com/) to provide a background summary of the essay topic that I used to inform my basic level of understanding. I also generated a list of synonyms to help me expand my search and suggest some key articles on the topic, which were searched for in Library Search.“
When and how to cite AI outputs?
If you include any AI outputs (images, graphs, code, text excerpts) in your text, you must clearly acknowledge each AI output by citing it.
The citation should include the following basic information:
- Name of the AI tool
- Model/version
- Prompt
- Date of tool use
- Link to tool/chat
Below you will find citation templates for commonly used citation styles.
MLA
“How to cite AI?” prompt. ChatGPT, GPT-4o, OpenAI, 31. 7. 2025, https://chat.openai.com/chat.
APA
OpenAI. (2025). ChatGPT (March 14 version) [Large language model]. https://chat.openai.com/chat
Chicago
In this style, it is recommended to include only a footnote without a bibliographic entry, for example:
OpenAI. Response to “Tell me how to fix a flat bicycle tire.” ChatGPT-4, September 30, 2024. https://chatgpt.com/share/66fb0ff3-7280-8009-93a9-d956f412390b.
ČSN ISO 690
OpenAI. ChatGPT. Online, generative AI. Version GPT-4o mini. 15. 5. 2025. Available from: https://chatgpt.com/ [cit. 2025-05-16].
Policies of selected institutions
Before using AI for academic work, you should always check the policy of your university/faculty or the publisher of the academic journal to ensure that you are maintaining academic integrity and not violating any applicable rules.
AI chatbots generate text based on your input and on internal parameters learned from training on large amounts of data, predicting the most probable continuation according to statistical patterns. In addition to their information search function, which is still far from perfect, they can help with writing, language editing and text analysis, programming, and other tasks.
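To illustrate the idea of statistical text prediction, here is a deliberately simplified toy sketch in Python. Real chatbots use neural networks with billions of parameters, so treat this only as an illustration of the principle of predicting a likely continuation:

```python
from collections import Counter, defaultdict

# Toy illustration only: real chatbots use neural networks with billions
# of parameters, but the principle is similar – predict a likely continuation.
sample_text = (
    "ai tools can help with writing ai tools can help with coding "
    "ai tools can sometimes hallucinate"
)

# Count how often each word follows another one (a simple bigram model).
words = sample_text.split()
next_word_counts = defaultdict(Counter)
for current_word, next_word in zip(words, words[1:]):
    next_word_counts[current_word][next_word] += 1

# The statistically most probable continuation of "can" in this tiny sample:
print(next_word_counts["can"].most_common(1))  # [('help', 2)]
```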
We do not recommend relying entirely on the accuracy of information generated by any chatbot – ideally, check the sources cited and then draw directly from them. In addition to random hallucinations, these tools may also have difficulty understanding more complex queries, have limited knowledge of the latest events, or provide irrelevant sources. Due to their training data, chatbots exhibit varying quality of output for different fields.
If you decide to use text generated by a chatbot, make sure that you are not violating the rules of your institution, especially with regard to plagiarism and citation.
For more information, please feel free to arrange a consultation or contact us by email.
When to use generative AI?
- Preparing to search for academic sources: creating keywords and search queries using logical (Boolean) operators, e.g., ("machine learning" OR "deep learning") AND healthcare
- Text summarization: quick summarization of extensive scholarly articles or other documents
- This is not a complete replacement for critical reading. For more information on using artificial intelligence for text summarization, see our guide Reading for writing.
- Language support: grammar correction, stylistic suggestions, or text translation
- For tools specifically designed for working with languages, visit our guide Tools to support writing.
- Structuring and ideas: suggestions for outlines of articles, presentations, and projects, or for brainstorming
- Individual support: quiz creation, pre-trained agents/educational features (Tutor me, Study & Learn), study plan creation
- Coding and programming: editing and structuring code; explaining chunks of code
When not to use generative AI?
- If the rules of your university, faculty, or publisher prohibit it.
- For activities that require high accuracy and verifiability (generating bibliographies and citations, statistical data).
- Whenever the process itself is the actual purpose of your work, which cannot be replaced by chatbot output (writing your own code, completing a school assignment, formulating your own research goals).
- If you do not have sufficient knowledge to verify the correctness of the output. Keep in mind that you are the person ultimately responsible for all AI-generated outputs.
What to watch out for
- Paywall and access to the model: Many chatbots are free only to a limited extent, often with daily limits. Make sure that the quality of the output is adequate and that the daily limit is sufficient for your needs.
- Differences between models: Generative AI tools such as ChatGPT offer several underlying models that excel at different tasks (writing code, logical operations, writing coherent text). We recommend checking which model is the most suitable for your particular task. Differences in output quality also exist between chatbots from different companies (Gemini, ChatGPT, Claude). Comparisons of tools are provided by performance benchmarks such as Humanity’s Last Exam and Epoch.ai.
- Data security: Before you start working with a chatbot, check how your data is handled (this includes your chats, but also uploaded documents). For example, Claude from Anthropic does not use user data to train models, while ChatGPT requires you to disable model training on your data in the settings. Even so, be extra careful about what data you enter into the chatbot.
- Bias and hallucinations: Since generative AI tools are trained on huge amounts of data, data that is of poorer quality or contains social stereotypes (racial, gender, or minority bias) also becomes part of the training and is then reflected in the AI’s outputs. Due to their predictive nature, these tools can “hallucinate”, that is, provide misleading or inaccurate information. The tools may also have censorship filters set up to avoid certain topics or to respond in a “desirable” manner, however incorrect or misleading. In such cases, the responses may not reflect reality, but rather the values of the company operating the model.
What to consider when choosing
- Prepaid institutional access: Check if your institution subscribes to any AI tools. For example, the Czech Technical University, Masaryk University, Charles University, and the VSB – Technical University of Ostrava offer access to Microsoft 365 Copilot. Institutional access usually guarantees greater data security compared to the standard individual version.
- Specific use: When choosing a chatbot, the purpose for which you want to use it plays an important role. Currently, commonly used chatbots offer similar features, but their quality varies. Therefore, it is always necessary to compare and verify the quality of the output of individual chatbots relative to your task.
- Functions, integration with other tools, and working with documents: Test whether the chatbot works correctly in your language and the outputs are of sufficient quality. Test and compare the chatbot’s individual features: internet access, writing code in the desired language (Python, R, Java), working with uploaded documents, integration with cloud storage (OneDrive, Google Drive, etc.).
The website lmarena.ai can help you decide between chatbots by comparing the responses of any two models to a selected prompt. It also provides user rankings of models in various categories, such as text generation, web development, image creation, and more.
Commercial models
- ChatGPT
- Claude
- Gemini
- Microsoft Copilot
Open-source/local models
- DeepSeek
- gpt-oss-120b/20b
- Llama
- Mistral
AI detectors are tools that attempt to determine whether a text has been generated by artificial intelligence, particularly by chatbots based on large language models. It is important to emphasize that the results of these tools are indicative only. Detection capabilities can vary significantly between detectors and are often more reliable for English than for other languages. The quality of detection can also change over time, depending not just on how the detectors evolve, but also on the AI models they try to detect.
We recommend using AI text detectors as a supporting tool, not as proof.
For more information, please feel free to arrange a consultation or contact us by email.
How AI detectors work
AI detectors combine several methods to analyze a text and determine the likelihood that it was generated by AI:
The first method is statistical analysis, which focuses on the words used and their frequency of occurrence as well as on sentence structure and other textual patterns. These variables are then compared to texts written by humans. In addition to statistics, natural language processing is also used to measure perplexity (predictability of text) and burstiness (irregularity and alternation of short and long sentences). Lower values of both are often found in AI-generated texts.
The second method is detection using machine learning. Here, a dataset combining human-written and AI-generated texts is used to detect patterns and estimate the probability that a given text was generated by AI. These tools also search for watermarks or other traces left behind by AI (such as metadata or non-printable characters).
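As a rough illustration of the statistical signals mentioned above, the following sketch computes a simple burstiness proxy (variation in sentence length) and a crude repetitiveness proxy for a piece of text. Real detectors rely on trained models and proper language-model perplexity, so this is only a simplified demonstration of the principle:

```python
import re
import statistics
from collections import Counter

def simple_text_signals(text: str) -> dict:
    """Rough proxies for two signals AI detectors analyze (illustration only)."""
    # "Burstiness": variation in sentence length. Human writing tends to
    # alternate short and long sentences more than AI-generated text.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    burstiness = statistics.pstdev(lengths) if len(lengths) > 1 else 0.0

    # Repetitiveness: share of the text taken up by its ten most frequent
    # words – a crude stand-in for high predictability (low perplexity).
    words = re.findall(r"\w+", text.lower())
    top_ten = sum(count for _, count in Counter(words).most_common(10))
    repetitiveness = top_ten / len(words) if words else 0.0

    return {"burstiness": burstiness, "repetitiveness": repetitiveness}

print(simple_text_signals("Short sentence. This one is noticeably longer than the first one. Tiny."))
```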
Shortcomings of AI detectors
AI detection tools struggle to achieve accurate and reliable results. The accuracy of the results is affected both by the detector and by the language used. This leads to two problems.
- False negatives: a situation where an AI detector fails to detect text that has been generated by artificial intelligence.
- In the case of Turnitin, a tool widely used by universities, this can be as much as 15% of texts.
- The inaccuracy is not all due to the tools themselves. Accuracy can be deliberately reduced by AI humanizers, i.e., other tools designed to rewrite text so that it passes AI checks.
- False positives: a situation where an AI detector incorrectly identifies human-written text as generated by artificial intelligence.
- False positives often have worse consequences than false negatives, especially for individuals who may face accusations of fraud and damage to their reputation.
- False positives are also more common among neurodivergent individuals or non-native English speakers. This is because they often use different sentence structures than texts used to train the detector.
- Texts created before the advent of artificial intelligence are also often mislabeled.
Some texts produced by artificial intelligence may be easily spotted by humans. Given the way AI generates texts, it is possible to identify popular words and other linguistic patterns that artificial intelligence overuses significantly. However, moderate use of such expressions does not necessarily mean that artificial intelligence has been involved.
Tools in this category allow you to ask questions in natural language and get answers directly from the content of academic articles. Their goal is to quickly identify relevant publications and provide answers to your questions. Unlike literature mapping tools, they focus on searching for information from full texts and abstracts and providing specific answers or summaries. The tools draw mainly from freely accessible citation and bibliographic databases (OpenAlex, Semantic Scholar, etc.).
We recommend that you always verify AI-generated results against the original sources and supplement AI searches with traditional search tools (Google Scholar, Web of Science, Scopus, and the like) to obtain a reliable and complete overview of the relevant literature.
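For illustration, the open scholarly databases these tools draw on can also be queried directly. A minimal sketch using the free OpenAlex API; the search term is only an example:

```python
import requests

# Query the free OpenAlex API directly – the same kind of open metadata
# many AI literature-search tools build on. The search term is only an example.
response = requests.get(
    "https://api.openalex.org/works",
    params={"search": "large language models systematic review", "per-page": 5},
    timeout=30,
)
response.raise_for_status()

for work in response.json()["results"]:
    print(work["publication_year"], "-", work["display_name"])
```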
For more information, please feel free to arrange a consultation or contact us by email.
Scite_
Scite_ primarily focuses on how a given source has been cited. For a selected publication, it classifies citations according to whether they represent a neutral mention, support, or questioning of some part of it, or whether they are self-citations. It also alerts you to any corrections or retractions issued after publication. In addition, based on a query, it searches for relevant literature to answer it. For greater transparency, it displays the specific paraphrased passage from the source used.
Registered NTK users can access the tool for free.
You are not required to create an account or log in to search Scite_. However, an account is needed to access many Scite_ features, such as notifications, assistant history, and dashboards. The first registration (or the first login with an existing account) using any email address must be done from the NTK network (NTK-simple): create an account via Sign Up and log in. After that, the account will also work for remote login via Log In on the Scite_ website.
- Access: Registration recommended for chat history (login with ORCID)
- Pricing: Freemium – access to the paid version with registration in the NTK network
- LLM used: GPT-4o mini, o3-mini, Claude 3.5 Haiku
- Citation databases: Agreements with publishers, PubMed, open access articles
- Key features: User customization of search strategy using filters, citation analysis of articles, AI assistant, table with extracted data (methods, results, etc.), browser and Zotero extension
- Limitations: Varying article quality, limited access to articles
Elicit
A tool for automated review of scholarly literature. After you enter a query, it searches for the most relevant sources and creates a structured report with a detailed description of the search strategy. In addition, similar to SciSpace, it summarizes the articles most relevant to your query and provides an extended list of sources through the Find papers function.
- Access: Registration required
- Pricing: Freemium
- LLM used: Not specified
- Citation databases: Semantic Scholar, OpenAlex
- Key features: Search and summary of relevant articles based on a query, report creation, query quality assessment
- Limitations: Varying quality of articles, ability to export records only in the paid version, monthly/weekly limits in the free version
Perplexity
A search and conversation tool that uses artificial intelligence to quickly find and summarize relevant scientific articles. It combines results from the regular web with academic databases and provides citations directly in the response. It is suitable for a quick overview of a topic or for creating a basic report.
- Access: Limited access without registration, registration recommended for chat history
- Pricing: Freemium
- LLM used: Sonar (proprietary model), Claude, ChatGPT, Gemini
- Citation databases: Internet and academic databases (not specified)
- Key features: Wide selection of models, source citation, detailed search settings
- Limitations: Daily quotas for advanced features, sources of literature are not transparent, more complex verification of output accuracy
SciSpace - Literature Review
Based on a natural-language question or a query using keywords and operators, SciSpace retrieves a list of sources. It generates a summary for each source and compiles a short text on the topic based on the most relevant ones. It also provides an extended overview of the sources with basic information (sample, methods, results).
- Access: Limited access without registration, registration recommended for chat history
- Pricing: Freemium
- LLM used: GPT-3.5 in the free version, GPT-4o in the paid version
- Citation databases: OpenAlex, Semantic Scholar, Google Scholar
- Key features: SciSpace Agent, structured table with sources, citation generation, Deep search for report creation
- Limitations: Limited access to articles, varying article quality, more complex verification of output accuracy
Do you have any recommendations for other tools? Send us an e-mail.
Mapping tools use metadata from academic literature (citations, co-authorship, keywords, references, or abstracts) to create citation graphs and visualize relationships between publications. Unlike literature search tools, they do not primarily work with full text, but with bibliographic data.
Due to limited access to key databases, these tools cannot as yet replace the traditional literature search. We therefore recommend that you first create your own collection of literature via searches conducted in Google Scholar, Web of Science, or Scopus, and only then use mapping tools to navigate the literature and discover additional sources.
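To give an idea of the data these tools visualize, here is a minimal sketch that fetches one publication from the OpenAlex API and lists the works it cites, which is exactly the kind of link a citation map is built from (the search term is only an example):

```python
import requests

BASE = "https://api.openalex.org"

# Fetch one seed paper from OpenAlex; the search term is only an example.
seed = requests.get(
    f"{BASE}/works",
    params={"search": "citation network analysis", "per-page": 1},
    timeout=30,
).json()["results"][0]
print("Seed paper:", seed["display_name"])

# "referenced_works" lists the publications the seed paper cites. Citation
# mapping tools draw edges between such records to build the map.
for ref_url in seed.get("referenced_works", [])[:5]:
    ref_id = ref_url.rsplit("/", 1)[-1]          # e.g. "W2741809807"
    ref = requests.get(f"{BASE}/works/{ref_id}", timeout=30).json()
    print("  cites:", ref["display_name"])
```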
For more information, please feel free to arrange a consultation or contact us by email.
Inciteful
Inciteful is a free tool that does not require registration and maps the connections between selected articles. It allows you to create a collection and suggests similar sources, or links any two selected publications based on their citations. Unlike other tools, Inciteful provides detailed information about the created literature collection, such as the most represented authors, journals, or institutions.
- Access: No registration required
- Pricing: Completely free
- Method used: Citation network analysis (machine learning), bibliometrics
- Citation databases: OpenAlex, Open Citations, CrossRef, Semantic Scholar
- Key features: Paper Discovery, Literature Connector, record export (.ris, .bib)
- Limitations: Limited functionality and depth of analysis compared to other tools
Litmaps
Litmaps uses databases and freely accessible metadata to create citation maps. Unlike other tools, visualizations can be annotated, edited, exported, and shared. Based on a collection of literature, it offers additional recommended sources. Features that distinguish this tool from similar tools are only available in the paid version of Litmaps Pro.
- Access: Registration required (login with ORCID)
- Pricing: Freemium, discount for students
- Method used: Citation network analysis (machine learning), bibliometrics
- Citation databases: Semantic Scholar, Crossref, OpenAlex
- Key features: Citation mapping, relationship visualization, resource recommendations
- Limitations: Advanced features only in the paid version, limited number of recommended articles in the free version, risk of information bubbles
Open Knowledge Maps
Open Knowledge Maps is a free tool for exploring and mapping literature based on shared citations, references, or keywords. Unlike other tools, it also filters literature into thematic clusters.
- Access: No registration required
- Pricing: Completely free
- Method used: Citation network analysis (machine learning), bibliometrics
- Citation databases: PubMed, BASE
- Key features: Categorization of articles into topics
- Limitations: Limited depth of analysis, maps show only the 100 most relevant articles, does not recommend further reading
Research Rabbit
A completely free tool that offers resources based on articles you have added to your collection. Links between literature are formed by relationships (co-authorship, shared citations and references, keywords), and the tool recommends other similar research results in the field. Research Rabbit is similar in function to the free version of Litmaps.
- Access: Registration required
- Pricing: Completely free
- Method used: Citation network analysis (machine learning), bibliometrics
- Citation databases: OpenAlex, Semantic Scholar, PubMed
- Key features: Mapping of co-authorship, shared citations and keywords, annotation of records, export and import to citation managers (two-way synchronization with Zotero)
- Limitations: Less intuitive user interface, risk of information bubbles
Do you have recommendations for other tools? Send us an e-mail.
Tools in this category help process and interpret text data using artificial intelligence. They allow you to quickly identify key topics, summarize large documents, compare different texts, or explain terms and reveal relationships between them. Some also search for additional sources.
Using text analyzers can save time, but the content must always be checked. For more information on the usefulness of AI in text analysis, see our guide Reading for writing.
We also recommend checking whether you are allowed to upload the document to the tool. In this regard, you must comply with the Copyright Act (§ 39c, § 39d) and the rules of your institution.
For more information, please feel free to schedule a consultation or contact us by email.
ChatPDF
ChatPDF provides a summary of any text you upload as a PDF file, or of a freely accessible article located via its URL.
- Access: Registration recommended, limited access without registration
- Pricing: Freemium
- LLM used: GPT-4o
- Knowledge base: Content of the uploaded document
- Key features: PDF summarization, answers to questions about the content, creation of PowerPoint presentations or flashcards based on the document
- Limitations: Daily limit for the free version – 20 messages per day
NotebookLM
AI research assistant from Google. Through an integrated chatbot, it allows you to summarize and analyze uploaded documents. Uploaded documents create a knowledge database for the model, significantly reducing the risk of hallucinations. In addition to text analysis, the tool can be used as a knowledge database for creating and organizing notes.
- Access: Google account required
- Pricing: Freemium – can be used free of charge, paid version increases limits
- LLM used: Gemini 2.5 Flash
- Knowledge base: User-uploaded documents
- Key features: Summaries, notes, mind maps, interactive podcasts
- Limitations: Less user-friendly document reading (PDFs are converted to plain text), complicated export of personal notes
SciSpace - Chat with PDF
In addition to literature search, SciSpace offers summaries of entire articles or sections, for which it either finds additional sources or paraphrases them in various ways. It draws on freely available metadata and publications available on the internet (published in open access). SciSpace also analyzes files that you upload yourself.
- Access: Limited access without registration, registration recommended to preserve chat and search history
- Pricing: Freemium
- LLM used: GPT-3.5 in the free version, GPT-4o in the paid version
- Knowledge base: Open access articles and metadata (OpenAlex, Semantic Scholar, Google Scholar), user-uploaded articles
- Key features: Explanation or summary of selected sections of text, tables, images, recommended reading for selected sections of text, conversion of articles into podcasts, conversation with a collection of articles
- Limitations: Varying quality of articles, restrictions in the free version, advanced features only in the paid version
Do you have any recommendations for other tools? Send us an e-mail.
Here you will find tools that, unlike general chatbots, are developed specifically for data analysis. Tools in this category help to process large amounts of data efficiently, find connections within datasets, and also facilitate their interpretation. The advantage is that they do not require knowledge of a programming language (Python, R) or statistical software, which is nevertheless necessary to verify the outputs.
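Even a few lines of your own code can serve as such a verification step for the numbers an AI analysis tool reports. A minimal sketch using pandas; the file name and the column name are placeholders to be replaced with your own data:

```python
import pandas as pd

# Sanity-check the statistics an AI analysis tool reported for your dataset.
# "results.csv" and the column "score" are placeholders – use your own data.
df = pd.read_csv("results.csv")

print(df["score"].describe())            # count, mean, std, min, quartiles, max
print(df["score"].isna().sum(), "missing values")
print(df.corr(numeric_only=True))        # correlations between numeric columns
```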
When using AI tools for data analysis, it is first necessary to verify how the tool processes the data, how the data is secured, and what happens to it. Be especially careful if you intend to upload original or sensitive data to the tool.
For more information, please feel free to arrange a consultation or contact us by email.
Julius.ai
The Julius.ai tool allows you to perform data analysis and statistical evaluation in natural language, without the need for programming. Users upload their own data files and can then generate descriptive statistics, visualizations, and regression models through text queries. A distinct advantage of this tool is the option to create and save steps for recurring analytical tasks. According to the developers, data uploaded by users is not used to train the model (more information on data security here).
- Access: Registration required
- Pricing: Freemium
- LLM used: OpenAI and Anthropic models
- Knowledge base: User-uploaded documents (.xls, .csv, .json, SPSS file, .sql)
- Key features: Descriptive statistics, visualization and regression models, automation using analytical blocks, option to connect to cloud storage
- Limitations: Advanced features behind paywall, data security concerns
Powerdrill
PowerDrill.ai allows you to upload data files (CSV, Excel, TSV, PDF, Word) or connect to an SQL database and ask questions in natural language, which the tool automatically converts into analytical operations (e.g., filtering, comparison, trends). Unlike Julius.ai, it can generate reports and presentations based on the data. The tool protects user data in accordance with GDPR and ISO 27001.
- Access: Registration required
- Pricing: Freemium
- LLM used: Not specified
- Knowledge base: User-uploaded documents (.xls, .csv, .tsv, .sql)
- Key features: Data cleaning and analysis, creation of reports or presentations based on data, prompt templates
- Limitations: Advanced features behind paywall (daily and monthly quotas), data security concerns
Do you have any recommendations for other tools? Send us an e-mail.
Books available at NTK
- Phoenix, J. (with Taylor, M.). (2024). Prompt engineering for generative AI: Future-proof inputs for reliable AI outputs at scale. O’Reilly.
- Diamond, S., & Allan, J. (2024). Writing AI prompts for dummies. John Wiley & Sons.
- Musiol, M. (2024). Generative AI: Navigating the course to the artificial general intelligence future (1st ed.). John Wiley & Sons.
University guides
- Tilburg.ai – A portal managed by Tilburg University. It offers an overview of recommended tools and a large number of guides on how to use artificial intelligence for academic work. It also includes examples of AI use in research and teaching, and recommendations for the ethical and effective use of generative AI.
- Umělá inteligence UPOL – The portal of Palacký University Olomouc provides a comprehensive overview of artificial intelligence, its use in education and research, ethical recommendations, and potential risks.
- Gen AI Guidebook – The University of Geneva's guide contains detailed information on generative AI, its ethical and security risks, principles for responsible use, and guides for creating effective prompts.
- Reaktor AI – The portal of Jan Evangelista Purkyně University in Ústí nad Labem provides an overview of news, guides, and trends in the field of AI.
- AI CUNI – A Charles University guide offering up-to-date information on AI, a list of training courses, webinars, and educational activities.
Blogs and video tutorials
- One Useful Thing (Ethan Mollick) – A blog with commentary on the progress and limitations of artificial intelligence.
- The Effortless Academic (Ilya Shabanov) – A blog focusing on searching and processing literature using AI tools.
- Andy Stapleton – A YouTube channel mapping new AI tools for academic work.
Online courses
- Become a prompt engineer – A series of articles on effective prompting and various prompting techniques.
- Learn Prompting – A comprehensive guide to prompting, from basic to advanced techniques. It also offers an introduction to generative AI and its possible uses.
- Ethics of AI – A course focusing on the ethics associated with the development and use of AI.
- Elements of AI – An introductory course providing basic information about AI, its practical applications, and social impacts. The course is available in many languages including Czech.
Do you have any recommendations for further educational materials? Send us an e-mail.
Your contact

Adam Urban
- adam.urban
- 232 002 456