It’s been a good six months since the last LEGO robotics club at school – I should blog about what we did in that session.
This term it’s time to start up LEGO robotics again; we’ve limited the pre-school class to 4th and 5th grade – so we should have a pretty reasonable level of logic and construction skills.
I’m writing up the rules and the playbook for this session. We’re going to focus on three areas – similar plan to previous sessions:
– construction: gears, gear ratios and torque
– software: planning, prototyping, iterative troubleshooting
– project: communication, team work, documentation
The requirement is going to be:
Build a robot that can pull the largest mass on the sledge provided. A successful ‘pull’ will be over 50cm (20 inches)
Using the same robot chassis (you can change wheels and gears, but not rebuild the robot), cover a long, straight race course (~5 m / ~15 feet) in the shortest time.
Produce a display board for your project showing your design, thoughts, diagrams, photos and program.
I spent a lot of time over the Thanksgiving week sorting and rebuilding the LEGO Mindstorms kits. Here and here.
Today all of the groups at LEGO club had a working robot; complete with two motors and the light sensor.
The challenge today was to make the robot roll forward in the ‘Shark Tank’ (the light coloured floor), detect the dark edge of the carpet, stop and then reverse up.
Initially all of the teams had a go at programming this; the usual combination of loops and guesswork. I then moved on to ‘human prototyping’. Two of the LEGO group – Ben and Jonathon – became my motors and sensors. I had the entire group verbally express what should be happening:
Ben walks forward
Jonathon is on the carpet
Ben walks backward
I introduced the concept of both the LEGO ‘move’ and ‘wait’ blocks.
The next iteration was much better; most of the teams got the move – wait – move concept. The difficulty was working out the sensor values. Showing the on-brick sensor readings (reflected light on the floor = 54; reflected light on the dark carpet = 32) really helped. Back to the human prototype: Jonathon held his arms in the air for the high value and dropped them for the low value.
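The threshold step can be sketched numerically: pick a value halfway between the two on-brick readings and classify each new reading against it. This little helper is my own illustration, not part of the NXT software:

```python
FLOOR = 54    # reflected light on the light floor (measured on-brick)
CARPET = 32   # reflected light on the dark carpet (measured on-brick)

# Put the threshold halfway between the two measured values.
THRESHOLD = (FLOOR + CARPET) // 2   # 43

def is_dark(reading, threshold=THRESHOLD):
    """True when the sensor sees the dark carpet ('arms down')."""
    return reading < threshold

print(is_dark(54))  # floor  -> False
print(is_dark(32))  # carpet -> True
```

Anything comfortably above 43 is “arms up” (floor); anything below is “arms down” (carpet).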
Finally, some debugging (greater-than/less-than confusion; tweaking sensor values) and everyone had a working ‘edge sensor robot’.
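The finished logic is a chain of NXT-G move and wait blocks, but the behaviour can be sketched as ordinary code. This is only a simulation with a made-up stream of sensor readings, not the actual program:

```python
DARK_THRESHOLD = 43   # between floor (~54) and carpet (~32) readings

def shark_pool_edge(readings):
    """Move forward until a dark reading appears, then stop and
    reverse. Returns the sequence of actions for a stream of
    simulated light-sensor readings."""
    actions = ["move forward"]
    for value in readings:
        if value < DARK_THRESHOLD:   # 'less than' means dark carpet
            actions += ["stop", "move backward"]
            break
    return actions

print(shark_pool_edge([54, 53, 55, 32]))
# ['move forward', 'stop', 'move backward']
```

Getting the comparison direction right (less-than for dark, not greater-than) was exactly the bug several teams hit.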
Here’s the visual:
Here is a link to the tiny program shark-pool-edge.rbt from this morning.