
By now, most people have probably heard of ChatGPT, a powerful artificial intelligence tool that can answer questions, generate text, create images, and more, all based on information available on the internet. While many of us have probably used ChatGPT for fun (e.g., see this Forbes article on some of the funny and strange ways people have interacted with it), it holds a great deal of potential when it comes to reshaping how we access evidence and information.
AI, or artificial intelligence, has changed the way we seek and interact with information. While there are many definitions and conceptualizations of AI, it can generally be thought of as a computer’s or robot’s ability to complete a task the way a human being would, using logic and intelligence. In other words, computers that appear to think. AI is already part of many of our daily lives, in our smartphones, search engines, digital personal assistants (e.g., Siri), and so on. By learning what we need to know and how to help us, these systems make it easier to find information, help us remember what we need to do, and suggest other things we might want to consider.
There is a whole range of possibilities when it comes to the potential of AI tools like ChatGPT, but one area of particular interest is the way we share scientific information. AI holds a great deal of promise not only for those seeking information that is helpful and easy to understand (think of a non-expert wanting to grasp a complicated scientific topic, for example), but also for those who generate knowledge (a researcher who wants to share their evidence more effectively).
When it comes to mobilizing knowledge, AI tools hold potential in two ways: sharing evidence and improving how it is written. This potential is, of course, accompanied by questions.
First, can AI really make evidence and scientific information more accessible? The answer is increasingly becoming “yes.” Consider the opportunity for finding evidence or scientific information through a tool like ChatGPT. Imagine you are interested in learning more about a health diagnosis you just received, or how you can manage symptoms like pain. AI technology can screen the available evidence, select what is relevant to your question, and explain it in the simplest of terms: specific, plain-language information available as quickly as you can type out your request. As the use of such tools for this purpose continues to grow (despite some caveats discussed below), there is great promise in a tool that can share evidence with those seeking it in a way that is convenient, digestible, and easily accessible.
What about opportunities to improve how researchers share science? The power of AI tools like ChatGPT is also expanding in this space. They can help researchers develop written materials for sharing evidence (e.g., patient-facing materials like handouts and video scripts), policy briefs, lay summaries, and other dissemination products. Tools like ChatGPT can make recommendations about clarity and conciseness, identify jargon, and suggest alternative phrasing with examples. Overall, they can help researchers and other experts improve their writing with specific strategies and recommendations.
As with any new technology, nothing is ever perfect. While AI tools can provide a starting place for people to ask questions and learn from evidence, they may be unreliable in the quality of the information they provide (e.g., whether results are evidence-based, up to date, unbiased, or relevant to the individual). People using AI tools to seek evidence and information related to their health should always consult a health professional. Using these tools to support writing in plain language can also present challenges related to plagiarism. Representing your ideas with reference to original sources remains critical to ethical scholarship, and following emerging reference styles and practices for crediting AI contributions is one way to acknowledge when your writing has been supported by AI. All in all, human oversight, to ensure the context is appropriate and sources are acknowledged appropriately, remains best practice at this stage.
AI tools are an exciting advancement in how we communicate and gain access to scientific evidence, and they can powerfully support researchers and other professionals in communicating science. While there are key ethical considerations and other shortcomings to stay mindful of, the opportunity to change the way we share information remains an exciting one to watch.
Photo by Tara Winstead from Pexels