Depth Studies are the newest section of the HSC Science syllabus that you’ll face, either as a 45-60 minute exam or a hand-in report/presentation. Not to fear! Hero’s here, and we’ve got all the gritty details to get you full marks!
Secondary sourced Analysis Research Task
A depth study differs from a practical report in that it can also be a secondary-source analysis research task, where you, as the student, find information online or in books to consolidate your research. The main difference between a primary source and a secondary source is that a primary source in science is something you personally carry out (a practical experiment is a primary source), while a secondary source is any information you gathered but did not personally produce (scientific journals, any data from online). Examples of secondary sources include scientific journals, website articles and textbooks.
Hand-in secondary-sourced depth studies will vary between schools, but they are generally presentations or reports. With hand-in tasks, be sure to include references and/or a bibliography in your research task, as that is a key focus for the markers in that section, and there are many referencing sites online that can help you generate these.
In your exams, there can be questions related to secondary-source analysis, specifically about how you obtain your sources and evaluate their credibility.
“How do you determine the reliability/credibility of secondary sources?”
In any depth studies exam, this question will appear. It is meant to test you on your knowledge of secondary sources and how you best determine their credibility, which is a major part of the syllabus. Here are the key answers:
- One of the best ways to determine the reliability or credibility of a secondary source is cross-referencing: using other sources to check the reliability of the original source (if you do History, this will be familiar!). If I were to check a source made by someone in Australia against a source made in America, and they had the same results, this would be successful cross-referencing. Cross-referencing allows for greater reliability/credibility, as the evidence and information can be compared and corroborated to make a stronger analysis.
- Checking the publication date is important, as it gives a sense of relevancy to your analysis. If your sources are outdated, they may have been rewritten or disproven. What counts as outdated will come down to personal judgement, but you’ll be able to determine it logically. If the information you are gathering is likely to have been improved or updated, 5 to 10 years is a good range. However, if the information is unlikely to have been updated (for example, the boiling point of an alcohol isn’t going to change very often!), then you can be more lenient with the date.
- A safe time frame would be within 20 years.
- The source itself can be examined to determine its reliability. The objectivity of the source is crucial in determining whether it has been influenced by bias and is a subjective piece. Common red flags, such as emotive language and skewed/unbalanced supporting evidence, lower the reliability/credibility of a source.
- The author of the source should be from a respected academic or scientific institution, generally a university or trusted scientific journal (.edu and .gov domains are restricted to accredited educational and government institutions, so they are generally trustworthy). If the author is not a high-level/accredited researcher, this impacts the reliability of the source by adding the possibility of bias and incorrect evidence.
- A little side tip: you’ll often be told not to use Wikipedia. However, you can list in your sources the citations/references from the Wikipedia page, which can be found by clicking the hyperlinked reference numbers throughout the article.
Primary-Sourced Depth Study Report
These types of reports are more in line with what you already know: writing up a practical report. This is the standard format of a primary-sourced depth study:
- Enquiry Question
- A question into what is being investigated, related to a specific syllabus/module question
- Background Research / Introduction (this is more general)
- Rough background and context into what your experiment requires you to do, includes additional research into the importance of your experiment in the greater context of science
- Brief, only about 300 words long
- Aim
- A statement describing what will be investigated, with the key steps needed to test the hypothesis included
- Hypothesis
- A prediction based on the evidence/prior knowledge established
- Contains two or more variables in a cause-and-effect relationship statement
- “If x is true and this is tested, then y will occur”
- Uses the independent and dependent variables while being both measurable and falsifiable
- Materials / Equipment (must be detailed)
- Lists capacity of containers (E.g. 250mL Beaker)
- Lists quantity of substances that will be used in the experiment
- In chemistry, it is also important to add the concentration of the substance if possible.
- Method / Diagram(s) (must be detailed)
- See below section for checklist
- Risk assessment (in a table)
- Risk(What is the risk?)
- Effect of Risk (What damage/harm does the risk bring?)
- Treatment/Prevention strategy (What is done to prevent the risk?)
- Results (Tabulated) – Qualitative AND Quantitative
- Must show ALL working out
- Ensure units and significant figures are listed where necessary
- Discussion (most of your report should be based on this, so about 400 words)
- Here you should also attempt to address any ways that the method could have been changed to improve the validity/reliability/accuracy/precision
- Address Aim / Hypothesis
- Address calculated results
Exams about Primary reports
There are 3 main things you can be asked about regarding primary-sourced reports. You will most likely be asked to write about a data sheet or an experiment that is provided. These experiments will all be ones you have performed before, so you should know them relatively well heading into your exam.
In preparation for these questions in your exams, you should know:
- What are the independent/dependent variables in the experiment?
- What is the procedure?
- What instruments are used? What are the specific measurements/capacities of these instruments?
- What measures are taken to ensure validity/reliability/accuracy?
- What variables need to be controlled?
- What assumptions need to be addressed?
Graphing and Tabulating Data
These questions consist of a given data set/table, which students must either graph or tabulate. Marks are often deducted for sloppy work or mislabelling, so take your time and double-check everything.
What Graph is best?
You’ve got the data in a table; now you’ve got to put it into a graph. But which one do you use? The way to decide which graph to use is whether the data is continuous or discrete.
Think of discrete data like a medal podium. There are positions for 1st, 2nd and 3rd, but never for 1.5. Discrete data works in that fashion, where there can only be a specific value. Other examples include people in a class (you can’t have half a person!). Discrete data works best for bar graphs, frequency histograms and pie charts, where your data is easily segmented.
Continuous data is the opposite of this and can take any value within a range. For example, temperature can be 30°C, 30.2°C, 30.5°C or 31°C. These are your line graphs or scatter plots.
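As a quick sanity check, the discrete/continuous rule above can be written as a tiny helper. This is a toy of my own (the function name and lists are purely illustrative, not anything from the syllabus):

```python
# Toy helper: map the discrete/continuous distinction described above
# to suitable graph types. Illustrative only.
def choose_graph(data_is_discrete):
    if data_is_discrete:
        # Discrete data: only specific values exist
        # (medal positions, people in a class)
        return ["bar graph", "frequency histogram", "pie chart"]
    # Continuous data: any value within a range (e.g. temperature)
    return ["line graph", "scatter plot"]
```

So `choose_graph(True)` suggests a bar graph, histogram or pie chart, while `choose_graph(False)` points you towards a line graph or scatter plot.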
What do I put on the X and Y axis?
The way to figure out what you put on the x and y axis is to look at what you’re changing and what you’re measuring. The X axis always has the independent variable, or what YOU are changing. The Y axis has the dependent variable, or what you are measuring BECAUSE you changed the independent variable.
- If I was boiling water, then the amount of time I have the water boiling is the independent variable, and the temperature is the dependent.
- If I was to dilute a solution, the amount of water added to dilute would be the x axis and the concentration would be the y axis.
Drawing It Out
The last thing you want to worry about in your exam is whether to use crosses or circles when plotting data. Crosses are the safer option, as they show the exact positioning of the data. When plotting data, you don’t want to use a circle that could be misinterpreted. If you need to have multiple lines plotted on the same graph, make sure to include a key that separates them.
Line or Curve of Best Fit?
So now you’ve plotted all your data, and you must draw the trend. The dreaded question looms: do I use a line, a curve, or do I connect the dots? Remember our example of discrete and continuous data? It works the same way here! Connect the dots with straight lines when you have discrete data; after all, no value exists between the data points. When deciding whether to draw a line or curve of best fit, look at the data itself. Would a straight line pass through or close to most of the points? Or would I need to curve it a bit? When drawing a line or curve, try to have values that pass both above and below the line/curve, so that markers don’t think you’re trying to connect the dots.
Sometimes, you just have one stubborn point that is so far off from the rest of the graph. This is called an outlier (think outsider). These don’t need to be part of your line/curve of best fit. However, they should be included if the data you are working with is discrete. You should circle outliers on your graph.
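To see how a line of best fit plays out numerically, here is a rough sketch in Python (pure standard library; the data points are invented for illustration) that fits a least-squares line while leaving out a circled outlier. In the exam you draw this by eye, but the idea is the same:

```python
# Least-squares line of best fit, skipping an outlier. Illustrative only.
def best_fit(points):
    n = len(points)
    mx = sum(x for x, _ in points) / n          # mean of x values
    my = sum(y for _, y in points) / n          # mean of y values
    sxy = sum((x - mx) * (y - my) for x, y in points)
    sxx = sum((x - mx) ** 2 for x, _ in points)
    slope = sxy / sxx
    return slope, my - slope * mx               # gradient and y-intercept

# Invented data: time (min) vs temperature (°C); (3, 40.0) is the outlier
data = [(0, 20.0), (1, 22.1), (2, 24.0), (3, 40.0), (4, 28.1), (5, 30.0)]
outliers = {(3, 40.0)}          # circled on the graph, excluded from the fit
slope, intercept = best_fit([p for p in data if p not in outliers])
# slope comes out close to 2, intercept close to 20
```

With the outlier excluded, the fitted line tracks the underlying trend; include (3, 40.0) and the gradient gets dragged upwards, which is exactly why you circle outliers rather than fit through them.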
A Quick Checklist/Where are marks allocated?
- Title underlined?
- X and Y axes labelled (WITH UNITS)?
- Have you used crosses and labelled outliers?
- Does the data take up a majority (at least 70%) of the space provided?
- Is there a key if there are multiple lines?
- Line/Curve of Best Fit?
Significant Figures
Significant figures are the bane of any science student doing calculations. They’re often worth only 1 or 2 marks each, but those marks pile up.
- The first significant figure is the first digit that is not zero
- 0.00123 (3 s.f)
- 45230 (4 s.f)
- 3.023 (4 s.f)
- Every digit that is not zero is always significant
- Trailing/Leading Zeroes
- Zeroes before the first significant figure in a decimal are not significant
- 0.00062543→0.00063 (2 s.f)
- Zeroes after the last significant figure in a whole number are not significant
- 56,439,000→56,000,000 (2 s.f)
- Zeroes between significant figures are significant
- 230,535,743→230,536,000 (6 s.f)
- Note how the 0 is counted as a significant figure
- The last zero in a decimal is significant
- 31.0 (3 s.f)
- 0.50 (2 s.f)
0.50 is an example of a common value deliberately used to trick students (0.50 has 2 significant figures, not 1). Additionally, remember to only round to significant figures at the END of your calculations. Rounding mid-calculation leads to inaccuracies and mistakes further down the question. Even if you need to use the figure in the next question or in further calculations, always carry the exact value to keep consistency.
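If you want to double-check your counting, the rules above can be sketched as a small Python function. This is a toy of my own, not an official tool; it follows this guide’s convention that trailing zeroes in a whole number are not significant:

```python
def sig_figs(number_string):
    """Count significant figures using the rules above."""
    s = number_string.replace(",", "").lstrip("+-")
    if "." in s:
        # Decimal: leading zeroes are not significant, trailing zeroes ARE
        return len(s.replace(".", "").lstrip("0"))
    # Whole number: leading and trailing zeroes are not significant,
    # but zeroes between significant figures still count
    return len(s.lstrip("0").rstrip("0"))
```

Running it against the examples above: `sig_figs("0.00123")` gives 3, `sig_figs("0.50")` gives 2 (the trick value!), and `sig_figs("230,536,000")` gives 6 because the zero between the 3 and 5 is sandwiched between significant figures.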
Prac Report recreation
A major part of depth study exams is either recreating sections of the report (method/risk assessment) or analysing and making a judgement on a given experiment’s procedure/results.
These questions are the ones in which you are asked to make a judgement on the validity, reliability or accuracy of an experiment. Be sure to justify your judgement with evidence and a solution if you think it is invalid, unreliable or inaccurate.
The questions are mostly framed as: [Outline/Assess/Evaluate] the [validity/reliability/accuracy] of the experiment. They can range from 2-mark questions to hefty 6-mark responses. Marks are normally divided into 3 sections:
- Identifying and defining the specific category of discussion (validity/ reliability/ accuracy). This is generally 1-2 marks, and only requires you to define and state the category.
- Making a judgement. Here the marker wants you to determine if the category of discussion is high or low (High/Low validity, reliability, accuracy). These can range from 1 to 3 marks because they are often looking for a judgement as WELL as evidence that supports your claim.
- Suggesting an improvement. Some questions will ask you to suggest an improvement that can be made. Marks here are allocated for stating an improvement, as well as how it improves the experiment.
Validity is defined in the HSC Science Dictionary as: “the measurement of how well an experiment measures what it claims, or purports to be measuring.” You’ll often be asked to outline or assess the validity of an experiment, which is asking for: “does this experiment do what it’s supposed to?” There are two sections to this question:
The first is the simplest: “Does this fit the aim?” Do the results give the data you were looking for? If yes, it is valid; if not, it is invalid. The second section, while not tested in the HSC, is still important and will get you the extra marks to excel over other students. It asks: “Is my independent variable the only reason for the results appearing? Or is it because of some uncontrolled factor?”
This is known as internal validity, and is specifically the measurement of: “does the research method follow the scientific method?” High internal validity is achieved through:
- Controlling all variables that aren’t being tested to ensure a fair test
- Adding control groups to have a standard result to compare the experiment against
- Using more precise equipment to gain accurate results
Students should also account for, or attempt to resolve, any assumptions made during the experiment. Doing so improves validity by reducing the chance of errors appearing, which could harm the outcome and results of the experiment. For example: “Have you assumed that your equipment/sample is not contaminated?”
How to answer a question on validity
When given a question asking for an assessment of an experiment’s validity, students should:
- Define validity
- Identify the aim
- Make a judgement based on whether the experiment fulfils the aim
- Identify whether the method follows the scientific method
- If it doesn’t, what can be done to make it so?
Example: How is the validity of an experiment measured? (3 Marks)
Validity is defined by how well the experiment completes its aim. (1 mark for defining validity) If an experiment is consistent with the scientific method and produces results that align with the aim, then it is a valid experiment. (1 mark for identifying the criteria for validity) If the experiment fails to meet the aim, then it is not valid. If an experiment is invalid, it is primarily due to its inability to address core issues with the experiment’s structure and methodology itself. If the experiment fails to generate results related to its aim, then the internal validity of the experiment is low, and the experiment should be revised to better suit the scientific method. (1 mark for identifying how validity is measured)
Recently you chose a question to research on an aspect of drivers of chemical reactions. You submitted a scientific report as part of the depth study. Outline how the validity of this scientific report could be assessed. (2 Marks)
Validity is the measurement of whether the experiment meets the requirements of the aim. (1 mark for defining validity) It can be assessed by looking at the experiment’s results and methodology, and comparing them against the hypothesis and aim. (1 mark for identifying a method of assessing validity)
Reliability is the measurement of the consistency between the results you obtain, i.e. how close the measurements are to each other. It can be determined by finding the average and comparing each result against it. The greater the difference between your results and the average, the lower the reliability.
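As a rough sketch of that idea (invented numbers, plain Python), you can compare each trial to the average and use the largest deviation as a crude consistency check:

```python
def spread_from_mean(results):
    # Consistency check described above: how far does each
    # trial stray from the average?
    mean = sum(results) / len(results)
    worst = max(abs(r - mean) for r in results)
    return mean, worst

consistent = [24.9, 25.0, 25.1, 25.0]  # trials hugging the mean -> high reliability
scattered = [20.0, 30.0, 25.0, 25.0]   # big deviations -> low reliability

_, d1 = spread_from_mean(consistent)
_, d2 = spread_from_mean(scattered)
# d1 is far smaller than d2, so the first set of trials is the more reliable one
```

Both data sets have the same average (25.0), which is exactly the point: an average alone tells you nothing about reliability; the spread around it does.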
How to improve Reliability
Think about an archery target. If you were to shoot 5 arrows at the bullseye, shooting another 5 arrows doesn’t mean you’ll hit the bullseye again. The same goes for your experiment. A common misconception is that reliability is always improved by repeating the experiment multiple times. This gives more results, but it doesn’t necessarily mean they are more reliable. Students should be aware that increasing the number of repetitions decreases the chance of an outlier being accepted as a normal value, but does not always lead to significant reliability improvements without a change in the method.
A lot like internal validity, to improve reliability you should change the method to try and control more variables. By controlling more variables, you reduce the risk of what is known as random error. Random errors are errors that are unpredictable, such as reading your instruments wrong, or miscalculating your values.
How to answer it
When observing any method given, look at its steps and consider this question: “Is everything controlled in this method? Or are there spaces where results could be made inconsistent?”
- Make a judgement about the reliability
- Justify with evidence
- Which step is wrong, and how does it affect the reliability?
- Provide an improvement
- How does this improvement reduce random error?
How is the reliability of an experiment increased? (4 Marks)
Reliability is the measurement of how consistent results are with each other. (1 mark for defining reliability) High reliability is achieved by controlling as many variables as possible in order to reduce random error. Random errors are unpredictable, constantly changing factors that can affect the results of an experiment to varying magnitudes. (1 mark for identifying the factors that impact reliability) By controlling as many variables as possible, you reduce the appearance of random errors. This increases reliability, as the results will now be more consistent because there are fewer factors causing variation. In addition, repeating the experiment multiple times gives you more data sets with which to assess reliability. This increases reliability by creating a more precise average to compare against, and indirectly improves your results, as individual readings affected by random error carry less weight.
A student attempts to find the percentage composition of a magnesium ribbon. The student takes a piece of magnesium without cleaning it and places it into a crucible. The crucible is heated with the lid off. After the reaction is complete, the mass of the crucible with the magnesium oxide inside is measured.
- Assess the reliability of this experiment, using examples (3 marks)
- How would the student improve the reliability of the experiment? (2 marks)
- The reliability of the experiment is fairly low, given that the student has not controlled specific variables that induce random error (1 mark for making a judgement on the reliability). The student does not clean the piece of magnesium before using it, meaning that possible impurities were not removed. This reduces reliability, as the magnesium piece that is weighed cannot be assumed to be 100% magnesium, and reactions can occur with the impurities. In addition, by leaving the lid off, the student further reduces the reliability of the experiment, as the magnesium inside the crucible over-oxidises, making the weighed amount less than the actual composition of magnesium within the magnesium oxide. (1 mark for identifying examples, 1 mark for identifying the examples’ effect on reliability)
- The reliability of the experiment can be improved by improving the method. Cleaning the magnesium piece with emery paper before placing it in the crucible increases reliability, as impurities such as dust and contaminants are removed, creating more consistent results. In addition, reliability can be improved by leaving the lid askew on the crucible, allowing enough air in for the magnesium to oxidise, but not so much that it over-oxidises. (1 mark for giving examples, 1 mark for linking the example to the increase in reliability)
Accuracy is the measurement of how close your results are to the theoretical value expected. Thinking about the archery target again, accuracy can be described as how close your arrows are to the bullseye.
Accuracy is measured by |theoretical value − measured value| / theoretical value × 100%, otherwise known as percentage error. If your percentage error is within 5%, your results are considered very accurate.
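That formula is simple enough to check by hand, but here it is as a one-line Python function (the boiling-point numbers below are invented for illustration):

```python
def percentage_error(theoretical, measured):
    # Percentage error = |theoretical - measured| / theoretical x 100%
    return abs(theoretical - measured) / theoretical * 100

# e.g. theoretical boiling point 100 °C, measured 97 °C
err = percentage_error(100.0, 97.0)  # about 3% -> within 5%, so very accurate
```

Note the absolute value: it doesn’t matter whether you measured above or below the theoretical value, only how far off you were.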
The easiest way to write about and assess accuracy is to talk about precision. Precision is a subset of accuracy, describing how accurately your equipment works. Often on your equipment you’ll see a ± number next to the measurement of how much volume it can hold. This is the uncertainty value, and it describes how much possible deviation there is from the stated value. For example, if a beaker is listed as 100 mL ± 0.15 mL, the true volume could range from 99.85 mL to 100.15 mL.
Another additional way to measure precision is by taking the maximum uncertainty, found by taking half the limit of reading. The limit of reading is the smallest increment in a measurement device, so for a ruler measuring in mm, the limit of reading would be 1 mm and the maximum uncertainty ±0.5 mm. While precision is not specifically listed in the HSC syllabus, it is an important factor in an experiment’s accuracy: if your equipment has a maximum uncertainty that is too large, your results will be inaccurate. For example, if you were to measure millimetres with a ruler that only measures in metres, you would be very inaccurate, as your maximum uncertainty would be ± 50cm.
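The “half the limit of reading” rule is just a division. A trivial sketch (the mm/metre ruler examples mirror the paragraph above):

```python
def max_uncertainty(limit_of_reading):
    # Maximum uncertainty = half the smallest increment the device shows
    return limit_of_reading / 2

mm_ruler = max_uncertainty(1.0)        # 1 mm increments -> ±0.5 mm
metre_ruler = max_uncertainty(1000.0)  # 1 m = 1000 mm increments -> ±500 mm (±50 cm)
```

The same numbers as the paragraph: a millimetre ruler is uncertain to ±0.5 mm, while a ruler marked only in metres is uncertain to ±50 cm, making it useless for millimetre-scale measurements.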
Precision is improved by using higher grade equipment, with a more accurate limit of reading. Imagine if you were to record the amount of time that passes between when a ball is thrown and when it lands, would it be more accurate to record it by manually counting, using a stopwatch or a light gate pair? The light gate pair would be the most precise, and as a result, would give the most accurate results.
How to answer it
When made to assess accuracy, consider this question: “Is there any factor/misjudgement that would change all the values I have used/been given? Is there a way I can be more precise?”
- Make a judgement about the accuracy
- Justify with evidence
- Which step is wrong, and how does it affect the accuracy?
- Provide an improvement
- How does this improvement reduce systematic error?
How is the accuracy of the experiment determined? (4 Marks)
The accuracy of results can be determined by comparing them to the theoretical value and calculating the percentage error. (1 mark for identifying what accuracy is) The lower the percentage error, the greater the accuracy; generally, high-accuracy results have a percentage error of less than 5%. Another metric for accuracy is the precision of the experiment and its equipment. The accuracy of the instruments used, otherwise known as precision, can be measured in multiple ways. One method is to calculate the maximum uncertainty, by halving the limit of reading, and compare it to the scale of your results. The second is to look at the uncertainty value printed on the instrument itself and compare that to the scale of your results. A beaker with a ±15 mL uncertainty is less precise than a beaker with an uncertainty of ±0.15 mL. More precise instruments give greater accuracy because they reduce the scale of the percentage error. (1 mark for identifying ways in which accuracy is determined, 2 marks for explaining how they determine accuracy)
How is the accuracy of an experiment increased? (3 Marks)
Improving accuracy comes from reducing systematic errors (1 mark for identifying the factors that impact accuracy). Systematic errors are errors that are consistently repeatable and occur through the misuse of equipment, the usage of faulty imprecise equipment, and/or flawed experimental design. For example, if you don’t zero your scales before use, this affects the results of everything you weigh afterwards, which reduces your accuracy. If you used a metre ruler to measure the size of an atom, you would have wildly inaccurate results. (1 mark for identifying a way of improving accuracy, 1 mark for explaining how it improves accuracy)
Systematic errors differ from random errors because the amount you are wrong by is constant: the misuse of equipment or flawed experimental design appears through ALL your results. As a result, a line/curve of best fit lets you effectively account for systematic error, as every point is shifted in the same direction by the same magnitude. Your line of best fit will have the same gradient regardless, so the trend can still be analysed.
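You can convince yourself of the “same gradient” point with a quick sketch (invented readings; the un-zeroed scales example matches the one above):

```python
def gradient(points):
    # Least-squares gradient of a line of best fit
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points)
    sxx = sum((x - mx) ** 2 for x, _ in points)
    return sxy / sxx

true_data = [(0, 20.0), (1, 22.0), (2, 24.0), (3, 26.0)]
# Same readings taken on scales that were never zeroed:
# every value comes out 1.5 units too high
offset_data = [(x, y + 1.5) for x, y in true_data]
# gradient(true_data) == gradient(offset_data): only the intercept shifts
```

Because every reading is shifted by the same constant, the slope is untouched; the systematic error only moves the y-intercept, which is why the trend survives.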
By accounting for this shift, you are able to gain more accurate results. Accuracy is further improved by refining your method (taring your equipment) and ensuring that your equipment is properly used/maintained. In addition, students should consider if there is better equipment that can be used in their experiments, whether it be in their precision, or general maintenance.
You can often be asked to recreate a method in the exam, with given conditions and variables. These will always be experiments you have performed in class, so it’s good to understand what happens in each experiment you perform, especially what the independent/dependent variables are and how they are changed/affected. This will allow you to recognise the same experiments in your questions. The question you should ask yourself whenever you write a method is: “Could I do this experiment step by step without knowing anything about the subject?”
Checklist/Where are marks allocated? (Examples of both are below)
- Distinct, sequential separation of each step
- Each step should be a unique, independent step; you should never have to write “Step 5: then,” or “Step 4: After step 3,”
- Lists the equipment and their measurements (e.g: 250mL beaker)
- Past Tense and Passive Voice
- 3rd Person
- Have you repeated the experiment for multiple data sets?
- In pencil
- Labelling things should be done with a straight line, NOT an arrowed line
- Make sure you label diagrams with the units
- Set up the apparatus as shown in the diagram.
- Light the first spirit burner and adjust the height of the can so that the tip of the flame just touches the can.
- Replace the cap on the spirit burner to extinguish the flame.
- Using a measuring cylinder, add 200mL of cold water to the can.
- Place a thermometer in the water and record its initial temperature.
- Weigh the spirit burner with its liquid contents and record the mass.
- Light the wick and stir the water gently with the glass rod.
- When the temperature has risen by about 10°C, record the temperature and extinguish the flame by replacing the cap.
- Reweigh the burner immediately and record its final mass.
- Remove any soot from the bottom of the can.
- Repeat all steps for the different alcohols.