
AI Squared in Research

Exploring AI2 in Research and Scholarship

Generative AI tools may be useful at different stages of the research process, as summarized in "ChatGPT and AI in higher education" by the United Nations Educational, Scientific and Cultural Organization (UNESCO). (A text-only version of the image is available here.)

[Image: Use of GPT in research, from the UNESCO guide]

In the following, we collect some information and resources relevant to AI2 in research and scholarship.


Policies and Guidance from Scholarly Publishers and Federal Agencies

Discipline-Specific Information & Resources

Some uses of ChatGPT in bioinformatics and biomedical research can be found here.

However, ChatGPT is a double-edged sword for biomedical engineering: alongside the opportunities, there are several concerns.

It is especially critical to know that the citations suggested by ChatGPT are not always accurate. A detailed analysis can be found in the reference below:

The efficacy of ChatGPT-4 in citing references across disciplines:

For these reasons, and due to the critically sensitive nature of biomedical information, Microsoft announced BioGPT, a generative language model trained specifically on biomedical research literature. However, as of this writing, only a demo version of BioGPT is available.

BioGPT: Generative Pre-trained Transformer for Biomedical Text Generation and Mining 
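As a simple illustration, the released BioGPT checkpoint can be tried through the Hugging Face transformers library. The sketch below is a minimal example, assuming the transformers and torch packages are installed and the "microsoft/biogpt" checkpoint is available from the Hugging Face Hub; the prompt and generation settings are arbitrary choices for demonstration.

from transformers import BioGptForCausalLM, BioGptTokenizer, pipeline, set_seed

# Load the publicly released BioGPT checkpoint and its tokenizer.
tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")

# Wrap the model in a text-generation pipeline and fix the random seed
# so that the sampled continuations are reproducible.
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
set_seed(42)

# Prompt the model with a biomedical sentence stem and sample a few continuations.
outputs = generator("COVID-19 is", max_length=30, num_return_sequences=3, do_sample=True)
for out in outputs:
    print(out["generated_text"])

As with any generative model, the output is plausible text rather than verified fact and should be checked against the biomedical literature before use.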

One of the less well-known uses of ChatGPT is its potential to help model physical phenomena such as biomechanics, fluid dynamics, and electromagnetic propagation. An example video can be found here.

In the simplest case, Python code generated by ChatGPT can run on Google Colab (https://colab.research.google.com/) without requiring any additional simulation packages; a sketch of this kind of self-contained code is shown below. ChatGPT can also help with existing simulation tools; a good example of biomechanical simulations in which ChatGPT assists with OpenSim is described here.
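The sketch below is a minimal example of such self-contained code, assuming only NumPy and Matplotlib (both preinstalled on Colab): it simulates projectile motion with quadratic air drag. The physical setup, parameter values, and variable names are arbitrary choices for demonstration, not taken from the example linked above.

import numpy as np
import matplotlib.pyplot as plt

g = 9.81   # gravitational acceleration (m/s^2)
k = 0.05   # drag coefficient per unit mass (1/m)
dt = 0.01  # integration time step (s)

# Initial state: launch from the origin at 30 m/s and 45 degrees.
x, y = 0.0, 0.0
vx, vy = 30.0 * np.cos(np.pi / 4), 30.0 * np.sin(np.pi / 4)

xs, ys = [x], [y]
while y >= 0.0:
    speed = np.hypot(vx, vy)
    # Explicit Euler update; drag acts opposite to the velocity vector.
    ax = -k * speed * vx
    ay = -g - k * speed * vy
    vx += ax * dt
    vy += ay * dt
    x += vx * dt
    y += vy * dt
    xs.append(x)
    ys.append(y)

plt.plot(xs, ys)
plt.xlabel("horizontal distance (m)")
plt.ylabel("height (m)")
plt.title("Projectile motion with air drag")
plt.show()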

Policies of major publication venues in civil engineering:

Environmental Science & Technology: Ethical standards

The Association for Computing Machinery (ACM) has issued "Principles for the Development, Deployment, and Use of Generative AI Technologies."

 

ACM’s updated policy on authorship regarding AI:

Generative AI tools and technologies, such as ChatGPT, may not be listed as authors of an ACM published Work. The use of generative AI tools and technologies to create content is permitted but must be fully disclosed in the Work. For example, the authors could include the following statement in the Acknowledgements section of the Work: "ChatGPT was utilized to generate sections of this Work, including text, tables, graphs, code, data, citations, etc." If you are uncertain about the need to disclose the use of a particular tool, err on the side of caution, and include a disclosure in the Acknowledgements section of the Work.

 

ChatGPT Cheat Sheet, 100+ Prompts to Unlock All The Power of ChatGPT (related to writing, programming, data science, spreadsheets, creativity, academic life, etc.)

 

Terence Tao on using GPT-4 to help with math

 

Harvard’s CS 50: Introduction to Computer Science

  • Leverages an AI assistant to respond to frequently asked student questions
  • AI-generated answers are reviewed by human course staff
  • “Our own hope is that, through AI, we can eventually approximate a 1:1 teacher:student ratio for every student in CS50, as by providing them with software-based tools that, 24/7, can support their learning at a pace and in a style that works best for them individually,” - course instructor

 

ICML Conference

Top AI conference bans use of ChatGPT and AI language tools to write academic papers.

“……The International Conference on Machine Learning (ICML) announced the policy earlier this week, stating, “Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis.” The news sparked widespread discussion on social media, with AI academics and researchers both defending and criticizing the policy. The conference’s organizers responded by publishing a longer statement explaining their thinking. (The ICML responded to requests from The Verge for comment by directing us to this same statement.)……” See more here.


IEEE Computational Intelligence Society (CIS)

"……Artificial intelligence (AI)-Generated Text (e.g., chatGPT)

The use of AI-generated text in an article shall be disclosed in the acknowledgements section of any paper submitted to an IEEE Conference or Periodical. The sections of the paper that use AI-generated text shall have a citation to the AI system used to generate the text.……" See more here.


ChatGPT and Generative AI Guidelines for Addressing Academic Integrity and Augmenting Pre-Existing Chatbots

See here.


An example of an IEEE conference (IEEE CDC 2023) and its policy:

“……Large Language Model (LLM) Policy

Please note that manuscripts generated by large-scale language models (LLMs) such as ChatGPT are prohibited as submissions to CDC 2023. We will allow papers for which LLMs are used for light-editing of the authors' original text, such as for spelling and grammar corrections……” See more here.


arXiv policy

arXiv announces new policy on ChatGPT and similar tools 

“……The official policy is:

  1. continue to require authors to report in their work any significant use of sophisticated tools, such as instruments and software; we now include in particular text-to-text generative AI among those that should be reported consistent with subject standards for methodology.
  2. remind all colleagues that by signing their name as an author of a paper, they each individually take full responsibility for all its contents, irrespective of how the contents were generated. If generative AI language tools generate inappropriate language, plagiarized content, errors, mistakes, incorrect references, or misleading content, and that output is included in scientific works, it is the responsibility of the author(s).
  3. generative AI language tools should not be listed as an author; instead authors should refer to (1).

……" See more here.


Opinion paper: "So what if ChatGPT wrote it?" Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. See more here.


A Critical Look at AI-Generated Software: Coding with the New AI Tools is Both Irresistible and Dangerous

See here.

Conferences by the American Society of Mechanical Engineers (ASME): 

Generative Artificial Intelligence Use Prohibited
E.g., International Mechanical Engineering Congress & Exposition (IMECE)


Journals by the American Society of Mechanical Engineers (ASME): 

Authorship and AI Tools

"ASME is a member of the Committee on Publication Ethics (COPE) and has implemented COPE recommendations for authorship and AI Tools.

AI tools cannot meet the requirements for authorship as they cannot take responsibility for the submitted work. As non-legal entities, they cannot assert the presence or absence of conflicts of interest nor manage copyright and license agreements.

Authors who use AI tools in the writing of a manuscript, production of images or graphical elements of the paper, or in the collection and analysis of data, must be transparent in disclosing in the Materials and Methods (or similar section) of the paper how the AI tool was used and which tool was used. Authors are fully responsible for the content of their manuscript, even those parts produced by an AI tool, and are thus liable for any breach of publication ethics.

Please visit the COPE website for more information about artificial intelligence and authorship."

 

Contact Us

For any queries, please contact
the AI Squared Task Force.