AI

AI prompts to generate web-based quizzes

Using a set of questions on the equations of rectilinear motion as an example, the following quiz was generated in JavaScript by ChatGPT. A quiz of this type is not linked to a database unless such integration is added, so scores are not stored. Nevertheless, it makes a good self-assessment tool for students at the end of each topic.

The quizzes are easily generated using the set of prompts shown below, and can be hosted on any static page such as GitHub Pages or uploaded as a zip file into the Student Learning Space.

The URL for this sample quiz is https://physicstjc.github.io/sls/kinematics-quiz. In fact, this quiz can serve as a template for quantitative quizzes on other topics as well. All the user has to do is edit the five fields in each question (the question, correct answer, unit, equation and explanation) within the index.html file (right-click to save, then edit with a plain-text editor).

Each question is written in the following format:

const problems = [
  {
    question: "A car starts from rest and accelerates uniformly at 2.0 m/s² for 10 seconds. What is the final velocity of the car?",
    correctAnswer: 20,
    unit: "m/s",
    equation: "v = u + at",
    explanation: "Using v = u + at, where u = 0 (starts from rest), a = 2.0 m/s², and t = 10 s, we get v = 0 + 2.0*10 = 20 m/s."
  },
  {
    question: "A car traveling at 15 m/s accelerates uniformly at 3.0 m/s² for 5.0 seconds. What is the final velocity of the car?",
    correctAnswer: 30,
    unit: "m/s",
    equation: "v = u + at",
    explanation: "Using v = u + at, where u = 15 m/s, a = 3.0 m/s², and t = 5.0 s, we get v = 15 + 3.0*5.0 = 30 m/s."
  }
];
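
Under the hood, the generated app only needs to compare the user's typed answer against the correctAnswer field. The following is a minimal sketch of such a check; the function name checkAnswer and the numerical tolerance are my own assumptions, not necessarily what ChatGPT produced.

```javascript
// One problem in the format shown above.
const problems = [
  {
    question: "A car starts from rest and accelerates uniformly at 2.0 m/s² for 10 seconds. What is the final velocity of the car?",
    correctAnswer: 20,
    unit: "m/s",
    equation: "v = u + at",
    explanation: "Using v = u + at with u = 0, a = 2.0 m/s² and t = 10 s, v = 20 m/s."
  }
];

// Treat the answer as correct if it matches within a small tolerance,
// so inputs like "20" and "20.001" are both accepted while "19" is not.
function checkAnswer(problem, userInput) {
  const value = parseFloat(userInput);
  if (Number.isNaN(value)) return false; // non-numeric input is wrong
  return Math.abs(value - problem.correctAnswer) < 0.01;
}
```

A stricter quiz might scale the tolerance with the magnitude of the answer rather than use a fixed threshold.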

The initial prompts given to ChatGPT were:

  • Create a JavaScript app to explain problems involving the equations of motion: v = u + at, v^2 = u^2 + 2as, s = (u + v)*t/2 and s = ut + 0.5at^2.
  • Give a word problem based on any one of these equations.
  • Use an input box for the user to key in their answer.
  • Indicate the required unit next to the input box.
  • If the answer is wrong, explain to the user why it is wrong.
  • If the user is correct, add 1 to the score and proceed to another problem.
  • Indicate the scores over the total number of questions attempted.

The follow-up or refinement prompts were:

  • Add 10 different problems to the list.
  • Use “Previous” and “Next” buttons to navigate.
  • When the user returns to a question that was attempted previously, they should not be able to attempt it again.
  • Add some css styling to make it appealing.
  • Randomise the appearance of the questions and do not repeat questions.
  • Represent the values in 2 significant figures.
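
Two of these refinements translate into short, self-contained helpers. The sketch below uses my own naming: the 2-significant-figure rounding relies on Number.prototype.toPrecision, and the no-repeat randomisation is a Fisher-Yates shuffle.

```javascript
// Round a value to 2 significant figures for display,
// e.g. 19.62 -> 20 and 0.12345 -> 0.12.
function toTwoSigFig(x) {
  if (x === 0) return 0; // toPrecision is unnecessary for zero
  return Number(x.toPrecision(2));
}

// Fisher-Yates shuffle: returns a new array with the questions in a
// random order, so each question appears exactly once per run.
function shuffle(array) {
  const a = array.slice(); // copy so the original list is untouched
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a;
}
```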

The workflow for converting the generated code into a web app can be found here.

Relative Velocity Simulation

In yet another demonstration of ChatGPT-4o’s capabilities, I generated the simulation app above without writing a single line of code myself: https://physicstjc.github.io/sls/relative-velocity/index.html

The process took 4 prompts.

Prompt 1: Create a javascript simulation in a single html file that demonstrates the addition of two velocity vectors to give a resultant velocity. The velocity of a boat travelling in still water moving across a river is shown as vertically upward. A slider will change the speed and direction of the water flow. The velocity of the river water is also shown. Finally, show the addition of the two vectors to give a third vector: the resultant velocity. Create a button for animating the boat’s movement.

Response: Here’s an HTML file with embedded JavaScript to create a simulation of the addition of two velocity vectors. This simulation includes a slider to adjust the speed and direction of the river flow, and a button to animate the boat’s movement. (Code is not included to avoid making this post too long.)
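
The physics the prompt describes is component-wise vector addition: the boat's still-water velocity points straight across the river while the river's velocity lies along the bank. A quick sketch of that calculation (my own helper, separate from the generated HTML file):

```javascript
// Add the boat's still-water velocity (vertical) to the river's
// velocity (horizontal) to get the resultant velocity.
function resultantVelocity(boatSpeed, riverSpeed) {
  const vx = riverSpeed; // horizontal component from the river flow
  const vy = boatSpeed;  // vertical component across the river
  const magnitude = Math.hypot(vx, vy);
  // Angle measured from the bank (the horizontal), in degrees.
  const angleDeg = Math.atan2(vy, vx) * 180 / Math.PI;
  return { vx, vy, magnitude, angleDeg };
}
```

For a boat speed of 4 m/s and a river speed of 3 m/s this gives the familiar 3-4-5 triangle: a resultant of 5 m/s at about 53° to the bank, which is the direction the animated boat travels in.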

Prompt 2: Use two horizontal lines to show the shores of the river, also the starting and ending point of the boat’s journey. The boat should be represented with a pencil-like shape, moving upward from one horizontal line to another. The boat should be moving in the direction of the resultant velocity. The vectors should be represented with arrows, not just lines.

Response: Here’s the updated HTML file with the specified features:

Two horizontal lines representing the shores of the river.
A pencil-like shape representing the boat, moving from one shore to the other in the direction of the resultant velocity.
Arrows to represent the vectors.

Prompt 3: Very good! Now invert the shape of the boat vertically. Also, draw some light blue wavy curves between the two horizontal lines that are stationary when the river speed is zero but move at the river velocity otherwise.

Response: Here’s the updated HTML file with the specified changes:

The boat shape is inverted vertically.
Light blue wavy curves are drawn between the two horizontal lines to represent the river water. These waves are stationary when the river speed is zero but move at the river velocity otherwise.

Prompt 4: Keep the canvas to 640 pixels wide and 480 pixels tall. Resize everything else accordingly.

And what you see at the beginning of this post was the result.

A little experiment with an AI chatbot

I built a customised AI chatbot using OpenAI’s API, which could run on GPT-3.5 or GPT-4, and used it to facilitate Socratic questioning that guides students toward the correct answer to a commonly misunderstood physics concept. The process was not too complicated: one needs an OpenAI developer account to obtain the API key, and a Streamlit account to deploy the app. To supply the API key, update the settings of the Streamlit app with this line: OPENAI_API_KEY="{key}". The code can be forked from here.

This video shows a simulation of the interaction that I had with it.

I also stored the interactions that my students had with the bot in csv format, saved in an AWS S3 bucket.

Here are some samples, in which you can see the system prompt given: Speak like a teacher who assesses the response of the student based on clarity, precision, accuracy, logic, relevance and significance. Help the user get to the answer by asking guiding questions to scaffold the learning. The question is: Two balls are placed at the back of a truck that is moving at constant velocity. The blue ball is twice the mass of the red ball. The floor of the truck is perfectly smooth. Compare the movement of the two balls when the truck comes to an abrupt stop. The success criteria for the user is to be able to explain that both balls will move at the same speed once the truck comes to an abrupt stop, according to Newton’s first law, since there will be no net force acting on them since the floor of the truck is smooth and there is no friction.
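
Behind the scenes, a bot like this resends the fixed system prompt together with the conversation history on each request. The sketch below shows how such a Chat Completions request body could be assembled; the helper name and the shortened system prompt are illustrative, and the actual bot was a Python/Streamlit app rather than JavaScript.

```javascript
// Illustrative only: assemble a Chat Completions request body that pairs
// a fixed system prompt with the running conversation.
const SYSTEM_PROMPT =
  "Speak like a teacher who assesses the response of the student based on " +
  "clarity, precision, accuracy, logic, relevance and significance. " +
  "Help the user get to the answer by asking guiding questions.";

function buildChatRequest(history, studentMessage) {
  return {
    model: "gpt-4", // or "gpt-3.5-turbo"
    messages: [
      { role: "system", content: SYSTEM_PROMPT }, // sent with every request
      ...history,                                 // earlier turns, if any
      { role: "user", content: studentMessage }   // the student's latest turn
    ]
  };
}
```

Because the system prompt travels with every request, the bot stays in its Socratic role even when students try to steer it off course.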

It was rather entertaining for the students, as some of them tried to trick the bot into giving them the answers. In a pre-test on this question, only 3-5% of the students in both classes got the answer correct, but almost all of them (ascertained by an informal poll) knew the correct answer after going through the discussion with the bot.

I hope to continue experimenting with variations of the system prompts when I have time. However, it’s back to marking and preparing for staff PD now!

Creating a Python App with Glowscript using ChatGPT

Fleming’s Left-Hand Rule is a visual way to remember the direction of force on a current-carrying conductor in a magnetic field. In this rule, the thumb, forefinger, and middle finger of your left hand are held at right angles to each other. The thumb represents the direction of the force, the forefinger the direction of the magnetic field, and the middle finger the direction of the current.

To visualise this with GlowScript’s VPython, I used ChatGPT to generate a 3D scene with arrows representing each of these directions.

The prompts used were:

Generate a glowscript python code for visualising fleming’s left-hand rule.

Use mouse or finger interactions to rotate the scene.

Include a toggle to switch from the left-hand rule to the right-hand rule.
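
The rule itself encodes the direction of the cross product in F = IL × B: with the middle finger along the current and the forefinger along the field, the thumb gives the force. A quick numerical check of that geometry (my own helper, separate from the GlowScript code):

```javascript
// Cross product of two 3D vectors, each given as [x, y, z].
function crossProduct(a, b) {
  return [
    a[1] * b[2] - a[2] * b[1],
    a[2] * b[0] - a[0] * b[2],
    a[0] * b[1] - a[1] * b[0]
  ];
}

// Current along +x (middle finger) and field along +y (forefinger)
// give a force along +z (thumb), matching the left-hand rule.
const current = [1, 0, 0];
const field = [0, 1, 0];
const force = crossProduct(current, field); // [0, 0, 1]
```

Reversing either the current or the field flips the force, which is exactly what the 3D arrows in the scene show as you rotate it.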

Link

Training a GPT model to use the 5C approach

First of all, let me introduce the 5C approach to unpacking a science question. This was developed in collaboration with a few other Science senior teachers in Singapore, namely Dr Bernard Ng (ST Chem, Edgefield Sec), Mr Shah Dhaval Prahhaker (ST Physics, Hai Sing Catholic High), Mrs Kim Loh-Teo (ST Physics, Guangyang Sec) and Ms Jacqueline Ng (ST Biology, Nan Hua High School). This question-answering approach is meant to serve as a scaffold for students to draw connections between new contexts and prior knowledge in order to answer higher order thinking questions.

While exploring customised GPTs, I attempted to integrate the AI’s capabilities with the 5C thinking process in order to flesh out its five components. I also uploaded the syllabus document for the O-level sciences for retrieval-augmented generation, so that the responses can make specific mention of topic titles and key ideas. The prompts given to the GPT were also quite detailed. Here’s how it works:

1. Context: Simplification and Summary

When a question is presented, either in text or image form, the AI first simplifies the question. This involves summarising the main ideas or concepts present in the question. For example, if the question is about the principles of thermodynamics, the AI might summarise it as “a problem relating to heat transfer and energy conservation.”

2. Chapter or Topic Identification

The AI, using its understanding of the syllabus and educational material, will then identify the most likely chapter or topic that the question is based on. This helps in narrowing down the scope of the content and focusing on relevant material.

3. Concepts: Keyword Integration

Next, the AI lists out the key concepts or keywords relevant to the topic, as found in the syllabus document. For instance, in a question about cellular respiration, keywords like “mitochondria,” “ATP,” and “glycolysis” might be identified.

4. Connect Concepts to Context

The AI will then demonstrate how these keywords and concepts can be applied to answer the question. This involves showing the connection between theoretical knowledge and practical application. For example, linking the concept of “ATP” to the energy requirements in cellular processes.

5. Construct Solution: Guided Problem Solving

Rather than providing a direct answer, the AI guides the user in constructing their solution. This might involve suggesting steps to solve the problem, such as breaking down the question into smaller parts, applying formulas, or suggesting methods for analysis.

As an example, the image of a question on Dynamics was given. The AI was able to correctly simplify the context, identify the relevant topic and concepts and suggest an approach to solving the problem.

Providing Solutions Only Upon Request

The AI will only provide a direct solution if explicitly prompted by the user. In such cases, if the solution involves mathematical work, it will be presented using LaTeX-generated working to ensure clarity and precision. After providing the solution, the AI encourages the user to try a different problem for practice and consolidation of learning.

Generation of Similar Questions

Finally, if the user desires, the AI can generate similar questions. This allows for further practice and helps in reinforcing the concepts learned. The limitation for this last task is that it cannot produce similar images if the original question involves graphs or diagrams. Therefore, it will only work for word problems such as the following.

This approach not only makes the learning process interactive and engaging but also ensures that the understanding of concepts is deep and thorough. It empowers students to be active learners, enhancing their problem-solving skills and conceptual understanding.

Higher Level Problems

I then “trained” the AI with the ‘A’ level Physics syllabus and tested it out with questions of the same standard.

It was able to answer a question on how stationary waves are formed, but over-delivered unnecessarily by elaborating on concepts such as the wavelength being twice the distance between adjacent nodes, when the question did not ask for it.

The next question, on the path of an object entering a uniform field and moving parabolically, required the AI to learn that from the diagram. It did so impressively.

The next attempt did not go so well. It involved a two-part question about an object travelling down a rough slope. This required the AI to infer from the stem of the question that the object is held in place by friction opposing the component of weight parallel to the slope. The label for the rough slope could have hinted at this, but the AI failed to see it. Therefore, it was not able to answer part (ii) of the question accurately.

Despite the occasional gaffes, customised GPTs hold significant potential in helping students unpack science questions, making the learning experience more interactive and engaging. These AI models, tailored for educational purposes, can offer teacher-provided scaffolds and assistance personalised to each student’s learning preferences and pace. Other advantages include:

  1. Students might also be less fearful of making mistakes in front of an AI.
  2. The AI can offer immediate feedback, so students can understand and rectify their errors more quickly.
  3. The conversations can be limited to discussions about the content so that students do not abuse the system.

Deployment

These AI models, with the ability to process questions in the form of images, can be a game-changer. One platform for building and deploying such an AI is the paid version of Poe.com, which allows teachers to customise a GPT-3.5 bot and deploy it to students. To deploy a GPT-4 bot, students will have to pay as well.

If one is not willing to pay, Poe.com’s Gemini-Pro bot can do something similar for a limited number of responses, though it might be less accurate. Do try it out if you are an educator who is curious about how education can be radically transformed with AI.

Customised GPT to answer primary school science questions using CER approach

I recently subscribed to OpenAI’s GPT-4 and started customising GPTs to help break down answers to questions using strategies such as the Claim-Evidence-Reasoning (CER) approach.

The prompt given to GPT-4 was this:

Break down the response to a user's question using the following framework:

Claim:
A claim is a statement that answers the question. It will usually only be one sentence in length. The claim does not include any explanation, reasoning, or evidence so it should not include any transition words such as “because.”

Evidence:
The evidence is the data used to support the claim. It can be either quantitative or qualitative depending on the question and/or lab. The evidence could even be a data table the student creates. Students should only use data within their evidence that directly supports the claim.

Reasoning:
The reasoning is the explanation of “why and how” the evidence supports the claim. It should include an explanation of the underlying science concept that produced the evidence or data.

Pitch the answer at the 12th grade level using British English. In each stage of C-E-R, give a very short response in bold and give a brief elaboration if necessary in normal font. Do not use concepts that are outside of the syllabus.

Do not discuss anything beyond the question asked so as to protect the safety of the young user. Reject the question by asking the user to stay within the syllabus. While doing this, there is no need to stick to the C-E-R framework.

Express each main point within 15 words to make it succinct.

I also uploaded the PSLE Primary Science syllabus document for the GPT to retrieve, augmenting its responses.

This is the answer given to a PSLE-related question. The reasoning is sound and should be awarded the full mark.

The question itself had a typographical error, but that did not deter the AI from giving the right response. Part (b) of this problem was answered clearly and should be awarded the mark.

This next one was a little problematic. While the claim given is correct, the evidence stated suggests that the blood travelled directly from the lungs to A, without mention of the heart. However, it does have some merit: it was already pretty impressive that the AI could infer from the flow chart whether oxygen is present in the blood.

This GPT-4 model is deployable via Poe.com, but only with a paid subscription account. I also tried creating a similar bot for free using Schoolai to see if it could do the same, but it could not process images. The same goes for the free versions of Claude and ChatGPT in Poe.com.

Customising Google Gemini

I managed to find one customisable GPT that can process images, namely Gemini in Poe.com. I fed it the same prompt and document for reference. There is a limit to the number of questions you can ask for free, by the way, but it is good for experimentation.

There are some inaccuracies that give cause for concern. For example, this response generated using Gemini contradicts the one given by ChatGPT and is wrong.

Question 2 was not too bad though. This time, it did not bother to express the answers in the CER structure.

Question 3 does seem challenging due to the need to interpret a vague flowchart. Just as it did GPT-4, it caused Gemini to stumble: the bot wrongly stated that A comes before the lungs.

Non-customisable GPTs

Microsoft’s Copilot can process questions in image form for free and gives comparable answers as well. However, we are not able to customise the GPT beforehand to give age-appropriate responses in accordance with the local syllabus, or to limit the scope of the student’s interaction with the bot so as to prevent distraction.