Seng Kwang Tan

U-tube Manometer – a ChatGPT-Generated Simulation

I wanted a simple manometer simulation for the Sec 3 topic of Pressure and decided to try generating one with ChatGPT 3.5. The first attempt, using text prompts alone, produced many errors, such as single lines instead of 2D shapes being used to represent the tubes, and coloured bars that moved in the wrong direction.

However, after I switched to ChatGPT 4 and uploaded an image for reference, it was able to produce a proper design consisting of glass tubes and coloured columns that move up and down with pressure changes in a flask. After a few further prompts to refine the UI, this was the final product.


The following screenshot shows the image that was uploaded, as well as the initial prompt.

A little experiment with an AI chatbot

I built a customised AI chatbot using OpenAI’s API, which could run on GPT-3.5 or GPT-4, and used it to facilitate Socratic questioning, guiding students towards the correct answer to a commonly misunderstood Physics concept. The process was not too complicated. One would need an OpenAI developer account to obtain the API key, and a Streamlit account to deploy the app. To input the API key, update the settings of the Streamlit app with this line: OPENAI_API_KEY="{key}". The code can be forked from here.
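As a sketch of how such a bot could be wired up: the helper names below (build_messages, ask) are my own, and the system prompt is abridged, so treat this as an illustration rather than the actual forked code.

```python
# Illustrative sketch of the chatbot's core loop, not the repository code.
SYSTEM_PROMPT = (
    "Speak like a teacher who assesses the student's response. "
    "Help the user reach the answer by asking guiding questions."
)

def build_messages(history, user_input):
    """Assemble the chat payload: system prompt, running history, new turn."""
    return ([{"role": "system", "content": SYSTEM_PROMPT}]
            + history
            + [{"role": "user", "content": user_input}])

def ask(history, user_input):
    # Import inside the function so the module loads even without the SDK.
    from openai import OpenAI
    client = OpenAI()  # picks up OPENAI_API_KEY from the environment
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=build_messages(history, user_input),
    )
    return reply.choices[0].message.content
```

In a Streamlit app, the running history could live in st.session_state, with the key read from the app’s secrets.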

This video shows a simulation of the interaction that I had with it.

I also stored the interactions that my students had with the bot in CSV format, saved in an AWS S3 bucket.
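A sketch of how such logging might look follows; the bucket, key and column names here are placeholders, not the actual ones used.

```python
import csv
import io

def interactions_to_csv(rows):
    """Serialise (timestamp, student_id, role, message) tuples to CSV text."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["timestamp", "student_id", "role", "message"])
    writer.writerows(rows)
    return buf.getvalue()

def upload_transcript(csv_text, bucket="my-chat-logs", key="transcript.csv"):
    # Requires boto3 and AWS credentials configured in the environment.
    import boto3
    boto3.client("s3").put_object(
        Bucket=bucket, Key=key, Body=csv_text.encode("utf-8")
    )
```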

Here are some samples, in which you can see the system prompt given: Speak like a teacher who assesses the response of the student based on clarity, precision, accuracy, logic, relevance and significance. Help the user get to the answer by asking guiding questions to scaffold the learning. The question is: Two balls are placed at the back of a truck that is moving at constant velocity. The blue ball is twice the mass of the red ball. The floor of the truck is perfectly smooth. Compare the movement of the two balls when the truck comes to an abrupt stop. The success criteria for the user is to be able to explain that both balls will move at the same speed once the truck comes to an abrupt stop, according to Newton’s first law, since there will be no net force acting on them since the floor of the truck is smooth and there is no friction.

It was rather entertaining for the students, as some of them tried to trick the bot into giving them the answers. In a pre-test on this question, only 3–5% of the students in both classes got the answer correct, but almost all of them (ascertained by an informal poll) knew the correct answer after going through the discussion with the bot.

I hope to continue experimenting with variations of the system prompts when I have time. However, it’s back to marking and preparing for staff PD now!

Analogue Meter Template

This GeoGebra applet can serve as a template for an analogue meter.

I added a check for the text input so that users have to key in the correct number of decimal places according to the precision of the instrument. For instance, a reading of 1 V should be recorded as 1.00 V, and 1.5 V as 1.50 V. Users need to read to half the smallest division; for example, if the needle is between 2.4 and 2.5, they should input 2.45 V.
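The rule can be expressed compactly; here is a Python rendering of the same check (the applet itself uses GeoGebra scripting, so this is only an illustration with a function name of my own choosing):

```python
def has_required_dp(text, dp=2):
    """True if `text` is a number written with exactly `dp` decimal places,
    e.g. "1.00" or "2.45" for an instrument read to a precision of 0.01."""
    whole, dot, frac = text.strip().partition(".")
    if not dot:
        return dp == 0  # no decimal point at all
    return frac.isdigit() and len(frac) == dp and whole.lstrip("-").isdigit()
```

An input of "1.5" would fail the check, prompting the user to record 1.50 V instead.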

Creating a Python App with Glowscript using ChatGPT

Fleming’s Left-Hand Rule is a visual way to remember the direction of force on a current-carrying conductor in a magnetic field. In this rule, the thumb, forefinger, and middle finger of your left hand are held at right angles to each other. The thumb represents the direction of the force, the forefinger the direction of the magnetic field, and the middle finger the direction of the current.

To visualise this with GlowScript’s VPython, I used ChatGPT to generate a 3D scene with arrows representing each of these directions.

The prompts used were:

Generate a glowscript python code for visualising fleming’s left-hand rule.

Use mouse or finger interactions to rotate the scene

Include a toggle to switch from the left-hand rule to the right-hand rule.
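The geometry behind the rule can also be checked numerically, separately from the generated GlowScript scene: for a conventional current, the force on the conductor is F = IL × B, so the force direction (thumb) is the cross product of the current direction (middle finger) and the field direction (forefinger). A small self-contained sketch:

```python
def cross(a, b):
    """Cross product of two 3-vectors given as (x, y, z) tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

current = (1, 0, 0)            # middle finger: current along +x
field = (0, 1, 0)              # forefinger: magnetic field along +y
force = cross(current, field)  # thumb: force along +z, i.e. (0, 0, 1)
```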


Training a GPT model to use the 5C approach

First of all, let me introduce the 5C approach to unpacking a science question. This was developed in collaboration with a few other Science senior teachers in Singapore, namely Dr Bernard Ng (ST Chem, Edgefield Sec), Mr Shah Dhaval Prahhaker (ST Physics, Hai Sing Catholic High), Mrs Kim Loh-Teo (ST Physics, Guangyang Sec) and Ms Jacqueline Ng (ST Biology, Nan Hua High School). This question-answering approach is meant to serve as a scaffold for students to draw connections between new contexts and prior knowledge in order to answer higher-order thinking questions.

While exploring customised GPTs, I attempted to integrate the AI’s capabilities with the 5C thinking process in order to flesh out its five components. I also uploaded the syllabus document for the O-Level sciences for retrieval-augmented generation, so that the responses can make specific mention of topic titles and the key ideas of each concept. The prompts given to the GPT were also quite detailed. Here’s how it works:

1. Context: Simplification and Summary

When a question is presented, either in text or image form, the AI first simplifies it, summarising the main ideas or concepts in the question. For example, if the question is about the principles of thermodynamics, the AI might summarise it as “a problem relating to heat transfer and energy conservation.”

2. Chapter or Topic Identification

The AI, using its understanding of the syllabus and educational material, will then identify the most likely chapter or topic that the question is based on. This helps in narrowing down the scope of the content and focusing on relevant material.

3. Concepts: Keyword Integration

Next, the AI lists out the key concepts or keywords relevant to the topic, as found in the syllabus document. For instance, in a question about cellular respiration, keywords like “mitochondria,” “ATP,” and “glycolysis” might be identified.

4. Connect Concepts to Context

The AI will then demonstrate how these keywords and concepts can be applied to answer the question. This involves showing the connection between theoretical knowledge and practical application. For example, linking the concept of “ATP” to the energy requirements in cellular processes.

5. Construct Solution: Guided Problem Solving

Rather than providing a direct answer, the AI guides the user in constructing their solution. This might involve suggesting steps to solve the problem, such as breaking down the question into smaller parts, applying formulas, or suggesting methods for analysis.
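To illustrate, the five steps above might be condensed into a system prompt along the following lines. This is my own paraphrase; the actual prompt given to the GPT was more detailed.

```python
# Hypothetical condensation of the 5C instructions into a system prompt.
FIVE_C_PROMPT = """\
For every science question the student submits:
1. Context: simplify and summarise the question in one or two sentences.
2. Chapter: identify the syllabus topic the question most likely comes from.
3. Concepts: list the relevant keywords found in the syllabus document.
4. Connect: show how those concepts apply to this particular context.
5. Construct: guide the student towards a solution step by step;
   give the full worked answer only if explicitly asked.
"""
```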

As an example, the image of a question on Dynamics was given. The AI was able to correctly simplify the context, identify the relevant topic and concepts and suggest an approach to solving the problem.

Providing Solutions Only Upon Request

The AI will only provide a direct solution if explicitly prompted by the user. In such cases, if the solution involves mathematical work, it will be presented using LaTeX-generated working to ensure clarity and precision. After providing the solution, the AI encourages the user to try a different problem for practice and consolidation of learning.

Generation of Similar Questions

Finally, if the user desires, the AI can generate similar questions. This allows for further practice and helps in reinforcing the concepts learned. The limitation for this last task is that it cannot produce similar images if the original question involves graphs or diagrams. Therefore, it will only work for word problems such as the following.

This approach not only makes the learning process interactive and engaging but also ensures that the understanding of concepts is deep and thorough. It empowers students to be active learners, enhancing their problem-solving skills and conceptual understanding.

Higher Level Problems

I then “trained” the AI with the ‘A’ level Physics syllabus and tested it out with questions of the same standard.

It was able to answer a question on how stationary waves are formed, but over-delivered unnecessarily by elaborating on concepts, such as the wavelength being twice the distance between adjacent nodes, that the question did not ask for.

The next question, on the path of an object entering a uniform field and following a parabolic path, required the AI to learn that from the diagram. It did so impressively.

The next attempt did not go so well. It involved an object travelling down a rough slope in a two-part question. The AI had to infer from the stem of the question that the object was initially held in place by friction opposing the component of its weight parallel to the slope. The label indicating a rough slope could have hinted at this, but the AI failed to see it, and was therefore unable to answer part (ii) of the question accurately.

Despite the occasional gaffes, customised GPTs hold significant potential in helping students unpack science questions, making the learning experience more interactive and engaging. These AI models, tailored for educational purposes, can deliver scaffolds customised by the teacher and offer assistance personalised to each student’s learning preference and pace. Other advantages include:

  1. Students might be less fearful of making mistakes in front of an AI.
  2. The AI can offer immediate feedback, so students understand and rectify their errors more quickly.
  3. Conversations can be limited to discussions about the content so that students do not abuse the system.


These AI models, with their ability to process questions in the form of images, can be a game-changer. One platform for teachers to build and deploy such an AI requires a paid subscription: it allows teachers to customise a GPT-3.5 bot and deploy it to students, while deploying a GPT-4 bot requires the students to pay as well.

If one is not willing to pay, a Gemini-Pro-based GPT can do something similar for a limited number of responses, though it might be less accurate. Do try it out if you are an educator who is curious about how education can be radically transformed with AI.

Customised GPT to answer primary school science questions using CER approach

I recently subscribed to OpenAI’s GPT-4 and started customising GPTs to help break down the answer to a question using strategies such as the Claim-Evidence-Reasoning (CER) approach.

The prompt given to GPT-4 was this:

Break down the response to a user's question using the following framework:

A claim is a statement that answers the question. It will usually only be one sentence in length. The claim does not include any explanation, reasoning, or evidence so it should not include any transition words such as “because.”

The evidence is the data used to support the claim. It can be either quantitative or qualitative depending on the question and/or lab. The evidence could even be a data table the student creates. Students should only use data within their evidence that directly supports the claim.

The reasoning is the explanation of “why and how” the evidence supports the claim. It should include an explanation of the underlying science concept that produced the evidence or data.

Pitch the answer at the 12th grade level using British English. In each stage of C-E-R, give a very short response in bold and give a brief elaboration if necessary in normal font. Do not use concepts that are outside of the syllabus.

Do not discuss anything beyond the question asked, so as to protect the safety of the young user. Reject out-of-syllabus questions by asking the user to stay within the syllabus; when doing this, there is no need to stick to the C-E-R framework.

Express each main point within 15 words to make it succinct.

I also uploaded the PSLE Primary Science syllabus document for the GPT to retrieve from in order to augment its responses.

This is the answer given to a PSLE-related question. The reasoning is sound and should be awarded full marks.

The question itself had a typographical error but that did not deter the AI from giving the right responses. Part (b) of this problem was answered clearly and should be awarded the mark.

This next one was a little problematic. While the claim given is correct, the evidence suggested that the blood travelled directly from the lungs to A, without mention of the heart. It does have some merit, however: it was already pretty amazing that the AI could infer from the flow chart whether the blood contains oxygen or not.

This GPT-4 model is deployable, but only via a paid subscription account. I also tried creating a similar bot for free using Schoolai to see if it could do the same, but it could not process images. The same goes for the free versions of Claude and ChatGPT.

Customising Google Gemini

I managed to find one customisable GPT that can process images, namely Gemini. I fed it the same prompt and document for its reference. There is a limit to the number of questions you can ask for free, by the way, but it is good enough for experimentation.

There are some inaccuracies that are cause for concern. For example, this response generated using Gemini contradicts the response given by ChatGPT and is wrong.

Question 2 was not too bad though. This time, it did not bother to express the answers in the CER structure.

It seems Question 3 is indeed challenging, owing to the need to interpret a vague flowchart. Just as it tripped up GPT-4, it caused Gemini to stumble too, wrongly stating that A comes before the lungs.

Non-customisable GPTs

Microsoft’s Copilot can process questions in image form for free and gives comparable answers as well. However, we are not able to customise the GPT beforehand to give age-appropriate responses in accordance with the local syllabus, or to limit the scope of the student’s interaction with the bot so as to prevent distraction.