The Final Exhibition

Exhibiting my work was very interesting, as it allowed me to talk to other people who seemed genuinely enthusiastic about my project.

In addition to the rover and the control station, I made some museum-style information boards that I thought might make things more interesting.

The information boards covered the Mars landers and missions: Viking, Sojourner, Spirit, Opportunity, Phoenix and Curiosity.

As it turned out, people had difficulty working out that the rover was a real robot they were controlling until they saw it for themselves. If this were to be exhibited in a museum, I would therefore definitely let people see the rover driving, though perhaps not at the same time as they are controlling it.

Another suggestion was the inclusion of goals for the driver to achieve. I initially wanted this too, with the goals serving to deliver the information, but as discussed in a previous post, that was a whole new project in itself. I had also thought about including fun goals, such as finding little aliens, that would appeal to younger users, but I decided that this was unscientific.

Here is a video of the final exhibit in action.

Finalising the Project

My project is currently in prototype form; however, I am feeling confident about the final outcome. Even though it is not as smooth as I had initially planned, especially in terms of controlling the rover and delivering the information, I still feel that it works correctly and achieves most of what I wanted it to do in the beginning.

To complete the project I plan to recreate the rover body in a better-looking, sturdier material and add detail so that the rover more closely resembles those that have already been sent to Mars. I also plan to complete the redesign of the user interface and add more facts to the information delivery. Finally, once the terrain is in place I will add a thin layer of unglued sand and rocks to add a level of realism.

The prototype rover on the terrain.


Creating the User Interface

The user interface and controls are both web based due to the ease of customisation and wide range of functions that can be achieved via the various web languages such as HTML, JavaScript and PHP.

Main Screen

For the main viewing screen I decided that the central focus should of course be the camera’s video feed. I centred this, then overlaid a transparent .png image of the same size in a separate ‘div’ on the page to create the ‘artificial horizon’ overlay on the camera. I found, however, that when the screen resolution changed, the .png moved out of position. This is mainly an issue when moving from the computer I am developing on to the projector or another monitor of a different resolution. I solved this by leaving one page with the video feed and .png overlay, and creating a new page that loads it in an iframe in the centre.

Similarly, to make the main page as simple and flexible as possible, I decided that everything should be in iframes, so the code for the main page consists of just three iframes, each aligned to a separate part of the page.

For the telemetry and satellite displays I decided that they should occasionally change to reflect new data coming in. I looked into JavaScript’s random function for this, but decided it would require too high a refresh rate to achieve the effect I wanted. I therefore opted to create looping .gif images instead. For the telemetry, I used two separate .gifs on two separate pages, with a JavaScript function that occasionally refreshes the page to the other one.

Control Interface


My initial draft of the control interface.

For the first draft of the controls I have stuck to the triangular design for the buttons, and I have redesigned the .gif on the ‘waiting’ page that is called when using some of the controls so that it incorporates the font and feel used on the main viewer page.

Additionally, it only runs through once before returning to the previous page, as opposed to the old .gif, which ran on a loop.

Note that the wheel-direction triangles change colour depending on the current direction.

Original .gif


 

For the final design I want to smarten up the interface and make it clearer what each button does. I want to keep the triangle theme, but I would also like it to feel more like a professional user interface, whilst retaining some of the brightness to engage younger users.

Design: The User Interface

I would like the user’s interaction with the project to simulate the job of driving the rover for NASA or another space agency. I therefore want the environment to simulate a control room, with a large projection or screens showing the video feed and other data, and another monitor where the main user will sit to control the rover. This will allow onlookers to see what is going on, while only one user operates the controls.

I would like there to be the possibility that users could see the rover in action on the terrain, so I would allow them to go past the curtain once they had finished with the ‘control room’ aspect of the piece.

 

 

For the main viewing screen I want to include several things. Over the video feed I would like an artificial horizon to show the orientation of the camera and rover. I would also like to include various graphic displays, perhaps including satellite location, as well as telemetry data from the rover itself.

The design of the control interface has had to change because actions now occur directly after a button is clicked, rather than on the click of an execute button. The three different directions that the wheels can turn to therefore also require buttons, but I still want the interface to stay simple. The design I have chosen for the buttons is triangular, as this suggests direction without having to use arrows; even without the text it would not be difficult to guess what the buttons do.

 

 

 

Working With Electronics: Controlling the Servo Over the Web

Python Scripts

Using the script I had created in Python, which turns the servo to both extremes (left and right) and the neutral position, I thought about possible ways for the user to select the direction they want the rover to travel in, and how I would create a Python script to do this.

  1. Allow the user to specify the exact angle that they wish the wheels to turn to (from 180 degrees as full right, to -180 degrees as full left). This is the most preferable option as it gives the user maximum control of the vehicle, and is probably closest to how Martian rovers actually operate, since the drivers would need a high level of precision to navigate tricky terrain. I therefore calculated the following formula: duty cycle (%) = ((a / 170) + 1.5) / 20.00 * 100.00, where a is the angle that you wish to turn to (see the sketch after this list). However, when using this in Python it did not seem to calculate correctly. It would also require me to have Python edit the script based upon what the user inputted on the webpage.
  2. Have three options for wheel turn angle – Left, Centre and Right. This would allow me to use the values that I already know, and I could very easily create a script where only one value would need to be changed for this to work. I could do this with one script that takes feedback from the web, or with multiple scripts that activate when necessary.
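Written out as a Python function with the brackets made explicit, the calculation looks like the sketch below. The pulse-width values are the ones implied by the formula (a 1.5 ms neutral pulse within a 20 ms frame); one possible reason the calculation misbehaved is integer division in Python 2, which the float() call here avoids. This is just an illustration, not the script I ended up using.

def angle_to_duty(angle):
    # Convert a steering angle in degrees (negative = left, positive = right)
    # into a PWM duty-cycle percentage for a 20 ms (50 Hz) pulse frame.
    pulse_ms = (float(angle) / 170) + 1.5   # float() avoids Python 2 integer division
    return pulse_ms / 20.00 * 100.00        # the pulse as a percentage of the 20 ms frame

print(angle_to_duty(-170))   # 2.5  -> full left
print(angle_to_duty(0))      # 7.5  -> centre
print(angle_to_duty(170))    # 12.5 -> full right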

Setting Up The Web Server
Using this Instructables tutorial, I managed to install Apache and PHP5, as well as clients to allow me to upload files to the Pi remotely. Once I figured out that the Raspberry Pi’s web server path, /home/pi/www/, works the same as any other web directory, it was easy for me to create folders and pages within it that could easily be referenced through the HTML. I decided that all my images would go in /images and all my Python scripts in /python.

The /www/ folder.

Creating a Basic Control Interface

I discovered that in order to run Python scripts from the web, you need to call .php pages which have "<?php exec('sudo python /full/directory/here/scriptname.py');?>" in their body. I therefore figured out that I could link from the index.php page to a similar or identical page, except that it calls the Python script on opening. This causes the script to run and the servo to move.

I therefore created some basic images that said left, centre and right, added some basic text telling the user which direction the wheels would now be facing, and created two similar .php files – one with and one without the ‘exec’ function.

<!DOCTYPE html>
<html>
<head>
</head>
<body>
<!-- Runs the Python script that turns the wheels to the left as soon as this page loads -->
<?php exec('sudo python /home/pi/www/python/servoleft.py');?>
<a href="left.php"><img src="images/left.png" alt="Turn Left"></a>
<a href="centre.php"><img src="images/centre.png" alt="Centre Wheels"></a>
<a href="right.php"><img src="images/right.png" alt="Turn Right"></a>
<br>
Current Wheel Direction: Left
</body>
</html>

Once I was sure that this worked, I duplicated and edited the Python scripts appropriately, put them in the /python folder, and created the missing .php files using the above template, before ensuring everything linked up correctly.

This allowed me to remotely change the direction of the servos.

Working with the Electronics

I decided to use a stepper motor to control the wheels of the rover. A stepper motor, unlike a regular DC motor, can be told to move in specific increments and stop at various angles. I thought this would be useful, as the operator could tell the rover to move its wheels forward a certain number of steps and then stop.

I researched and purchased a driver board and motor that I had found instructions for using together.

The motor had two screw fixings on it, so it would also be easy to mount on the body of the rover.

I soldered pins onto the driver board and connected it up to the Pi, power supply and motor, and it showed me via an LED that it was receiving power.

I then sent ‘on’ and ‘off’ signals to the appropriate GPIO pin on the Raspberry Pi, but this failed to activate the motor at all.
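For reference, driver boards of this kind are usually driven by pulsing several inputs in a repeating sequence rather than by switching a single pin on and off. The sketch below shows the general idea, assuming a four-input ULN2003-style board connected to four GPIO pins; the pin numbers and timing are placeholders rather than values from my actual wiring.

import time
import RPi.GPIO as GPIO

COIL_PINS = [17, 22, 23, 24]   # assumption: driver inputs IN1-IN4 on these BCM pins

# Half-step sequence for a typical small unipolar stepper
SEQUENCE = [
    [1, 0, 0, 0], [1, 1, 0, 0], [0, 1, 0, 0], [0, 1, 1, 0],
    [0, 0, 1, 0], [0, 0, 1, 1], [0, 0, 0, 1], [1, 0, 0, 1],
]

GPIO.setmode(GPIO.BCM)
for pin in COIL_PINS:
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

try:
    for _ in range(512):                          # move a fixed number of steps, then stop
        for pattern in SEQUENCE:
            for pin, value in zip(COIL_PINS, pattern):
                GPIO.output(pin, value)
            time.sleep(0.002)                     # too short a delay and the motor stalls
finally:
    GPIO.cleanup()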

Meanwhile, I decided to see if I could work the servo that would tilt the camera. Whereas the motor requires a separate driver board, the Pi can drive servos directly from its GPIO pins.

My research found that the length of the pulses sent to the GPIO pin tells the servo what position to move to:

The servo expects a new pulse every 20 ms, so code needed to be written in Python that changes the pulse length, which means setting the pin to use PWM (Pulse Width Modulation). Doing this, you can change the duty cycle (the length of each pulse within that 20 ms frame) to whatever is needed to move the servo to a specific angle.

This gives me several options for letting the user control the camera tilt:

  1. Give them three positions to move to – up, down and central
  2. Calculate some in-between values and let them select from a wider range of angles
  3. Create a script that, when they choose any angle themselves, calculates the correct duty cycle and moves the servo to it.

Here is a video of a script that moves the servo to each of the three extreme positions. (Note that the servo only actually moves to 160 degrees.)
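For anyone curious, a script along those lines might look something like the sketch below. The pin number and the three duty-cycle values are assumptions for illustration and would need tuning for the actual servo.

import time
import RPi.GPIO as GPIO

SERVO_PIN = 18                  # assumption: servo signal wire on BCM pin 18

GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)

pwm = GPIO.PWM(SERVO_PIN, 50)   # 50 Hz, i.e. one pulse every 20 ms
pwm.start(7.5)                  # roughly the neutral position

try:
    for duty in (2.5, 7.5, 12.5):   # one extreme, neutral, the other extreme
        pwm.ChangeDutyCycle(duty)
        time.sleep(1)               # give the servo time to travel
finally:
    pwm.stop()
    GPIO.cleanup()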

Contingency Plan: Providing Information

If none of the software that I try manages to achieve what I would like it to, I need another method of providing the user with information about various aspects of the exhibit. I would therefore have to overlay the video stream with another layer that gives the information, rather than having it integrated into the video itself.


Pressure Pads

Pressure pads work by sending a signal to the computer when two conductive surfaces touch. They could be placed under the terrain and activate specific information when the rover is in a specific area. This, however, requires the terrain to be thin enough and the rover heavy enough to trigger the pads.

Conductive Plates

The conductive plates work by sending a signal to the computer when a wire underneath the rover makes contact with them. The biggest problem with this is that they would have to be on top of the terrain and thus visible.

Electromagnetic Detectors

These devices are often used in burglar alarms to detect whether a door is open, and could be modified to send a signal to the computer when a magnet underneath the rover passes above them. They could sit above or below the terrain depending on the magnet’s strength; however, the activation area would be quite small and would require the rover to be driven precisely over it to activate.
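Whichever of the switch-style options above I use, they all come down to detecting a contact closure on an input. A rough sketch of how that could work, assuming the sensors are wired back to the Raspberry Pi’s GPIO pins, is below; the pin numbers and messages are made up for illustration.

import time
import RPi.GPIO as GPIO

# assumption: each sensor closes one of these pins to ground when the rover is over it
SENSOR_PINS = {
    5: "Entering area one",
    6: "Entering area two",
}

GPIO.setmode(GPIO.BCM)
for pin in SENSOR_PINS:
    # pull-up means the pin reads HIGH until the pad, plate or reed switch closes the circuit
    GPIO.setup(pin, GPIO.IN, pull_up_down=GPIO.PUD_UP)

while True:
    for pin, message in SENSOR_PINS.items():
        if GPIO.input(pin) == GPIO.LOW:   # contact closed -> rover is over this sensor
            print(message)                # in the real exhibit this would trigger the info overlay
    time.sleep(0.1)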

Aerial Camera

This would be a camera attached to the computer and placed high above the exhibit, which would detect when the rover (a contrasting colour to the terrain) drives into a certain area of the exhibit. This is quite a complicated solution and requires being able to place the camera high up. It would also require re-exploring Pure Data.