RoboCupJunior 2021 Rescue
Dear RoboCuppers,
RoboCupJunior 2021 was an amazing event. Despite all the restrictions, many teams like yours managed to overcome the challenges and take part in the competition with the same energy as if we were all in the same place. We had 20 teams competing in Rescue Line, 12 teams in Rescue Maze, 20 teams in Simulation, and another 12 teams in New Simulation (Demonstration). This year, Rescue Line and Rescue Maze evaluated not only the robots' performance but also the teams' ability to present and explain their hardware and strategies, through description papers, engineering journals, live presentations, and (virtual) face-to-face interviews. And it was a success! The general public and RoboCuppers were astonished by what you can design, develop, and solve.
The following is our overall commentary on each submission item and activity of the event.
Most teams followed the items on the provided documentation template, which meant most of the required information was present. However, most teams could improve the presentation of their work. Teams who scored better may have had similar content but presented it better, mostly through annotated diagrams. On the other hand, some teams took the concept of having a diagram to the extreme and included multiple pages of detailed images/diagrams of their software and/or hardware. It would be more appropriate to pick a few notable examples to illustrate the point and leave the rest in an appendix (to use the 10 pages most efficiently). Another common issue was that many teams did not substantiate their good work, stating that they had done something without explaining what was actually done. An example would be: “To solve this problem, we improved the software”. How did you manage to do that? What exactly did you change? This level of detail would make the claims much more grounded. Finally, teams should be aware of the rubric. Many teams (whether by accident or by choice) omitted some criteria on the rubrics, which resulted in low scores for those specific criteria.
For the engineering journal, there were many cases where teams prepared a different document than what was required. This journal can be thought of as a logbook marking progress whenever the team works on the robot. It should also not be paragraphs and paragraphs of text at every entry; the idea is for each entry to be quick to write and easy to refer back to later. An entry can consist of a few bullet points and illustrations (e.g., screenshots of CAD models, photos of hardware development, the field you tested on, etc.).
Most teams provided a “valid” recorded video that followed the intended instructions. For Line, some teams interpreted the evacuation zone diagrams literally: since the diagrams did not visually show victims, they completed the zone without victims. This, however, did not have a significant effect on the teams' overall performance. For Maze, a few teams used an invalid victim identification method, mainly not blinking the indicator. Overall, we found the pre-recorded video a good way to gauge the general field performance of the robots.
As mentioned in the introduction to this summary, both the organizers and viewers loved the presentations given by teams. We would especially like to congratulate teams that do not have a strong English-speaking background. Similar to the documentation, teams who presented information corresponding to the rubrics scored highly. Some general tips: a) do not include too much text on each slide (do not read the slide; have a few bullet points to talk about), b) do not make too many slides (30 seconds to 1 minute or more per slide is a good rule of thumb), and c) include diagrams and visuals (provided they are concise and straightforward for someone without prior knowledge of your robot).
We believe this section of the event went particularly well. In the interviews, apart from some internet hiccups (which were of course not taken into account in the scoring), every team gave in-depth answers. Many times we wished for a longer session, but time was limited. The field challenge was successful too. We observed that robots performed worse in the field challenge than in the pre-recorded videos, which was expected; we thought this was the best simulation of an in-person competition scenario for this online event.
The final field challenge focused on the field performance of the robot, so all teams were given a much more challenging field design than in the first field challenge. For both Line and Maze, teams that scored highly on that field received a high final score. For Line, we found the field was too difficult. For Maze, the field difficulty was ideal, with a few teams achieving near-perfect or perfect runs.
We hope to see you in 2022 in Bangkok, Thailand.
RoboCupJunior Rescue TC and OC 2021 members:
(OC Chair / TC) Tatiana Pazelli
(OC Co-Chair / TC) Ryo Unemoto
(TC Chair) Kai Junge
(OC) Alexander Jeddeloh
(OC) Bill Chuang
(OC) Jiayao Shen
(TC) Elizabeth Mabrey
(TC) Tom Linnemann
(TC) Naomi Chikuma
For the results of Rescue New Simulation (Demonstration), please refer to the separate page linked below.