Fonts

For the web elements of my project, Google Fonts is a useful resource: the fonts can be downloaded so that I can use them in images, and they can also be linked from the stylesheet so that any written text on the web pages appears in a matching font.

 

Looking at NASA and ESA telemetry displays, they tend to prefer monospaced fonts such as Courier, where each letter and number is distinct so that there is no confusion.

Based on this, I looked at the fonts in the monospace category on Google Fonts. I wanted something that was reasonably clear to read, but I also wanted it to be interesting from a design point of view (so Courier was not an option).

I decided upon the font ‘VT323’ because it looks slightly pixelated, which I thought was interesting (especially at large font sizes) and makes it seem as though it is part of a text-based computing system.

 


Design: The User Interface

I would like the user’s interaction with the project to simulate a job working with NASA or another space agency driving the rover. Therefore I want the environment to simulate a control room, with a large projection or screens showing the video feed and other data, and another monitor where the main user will sit to control the rover. This will allow watchers to view what is going on, but only allow one user to use the controls.

I would like users to have the possibility of seeing the rover in action on the terrain, so I would allow them to go past the curtain once they had finished in the ‘control room’ part of the piece.

 

 

For the main viewing screen I want to include several things. Over the video feed I would like an artificial horizon showing the orientation of the camera/rover, and I would also like to include various graphic displays (perhaps including satellite location) as well as telemetry data from the rover itself.

The design of the control interface has had to change because actions now occur as soon as a button is clicked, rather than on the click of an execute button. The three directions that the wheels can turn to therefore also need buttons, but I still want the layout to stay simple. I have chosen triangular buttons, as the shape suggests direction without having to use arrows; even without the text it would not be difficult to guess what each button does.
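As a sketch of how the three steering buttons could translate into commands on the Pi, the mapping below assigns each button a servo angle. The button names and angle values are my own illustrative assumptions, not final values.

```python
# Hypothetical mapping from the three triangular steering buttons to
# wheel servo angles. Names and angles are assumptions for illustration.
STEER_ANGLES = {
    "left": 45,     # wheels turned left
    "centre": 90,   # wheels pointing straight ahead
    "right": 135,   # wheels turned right
}

def command_for_button(button):
    """Translate a clicked button name into a servo angle."""
    if button not in STEER_ANGLES:
        raise ValueError(f"unknown button: {button}")
    return STEER_ANGLES[button]
```

Keeping the mapping in one dictionary means the button layout on the web page can change without touching the motor code.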

 

 

 

Creation of the Rover Body

Although the main purpose of the rover is that the user views everything from the on-board camera, I have decided that it would make it visually more appealing if it could be seen in its terrain environment afterwards and if it could resemble an actual Martian rover.

Having to use the RC car chassis does limit what I am able to achieve, although it provides a solid base from which to start.

Possible Materials:

ABS Plastic (struts and sheet material)

  • Strong
  • Easy to drill, saw and sand
  • Paintable
  • Easy to glue with poly cement

Balsa Wood

  • Light
  • Cheap
  • Easy to glue

Cardboard

  • Free
  • Widely available
  • Easy to cut
  • Lightweight

Foamboard

  • Lightweight
  • Sturdy
  • Easy to glue and paint
  • Cheap

I decided that my initial design should be in cardboard due to the availability and economic benefits, and then could be more easily measured up for creation in balsa wood, plastic or foamboard.


Final Prototype Design

 

Design – Control Interface

I have several ideas for the control interface. I have decided that it should be accessible on a tablet device, and thus the easiest way to do this is to create a web page and use JavaScript to send commands to the Pi.

According to this article, the Mars rover drivers plan out a course using a combination of data, run a simulation on it and adjust. Once the course is ready, the driver gives the signal and the rover follows the pre-assigned instructions. It makes sense that the rover is not under manual control, since signals take anywhere from about four minutes to over twenty to reach the red planet, depending on the planets’ positions.

To keep it interesting without making it too complicated, however, I have decided not to simulate this entirely. I do think the user should have to input instructions and then execute them, which will be a more interesting and simulative experience than simply driving the rover around as though it were an RC car.

In its simplest form, this would involve the user inputting a direction together with a distance or a number of degrees of rotation, and then sending the command. The tilt of the camera could probably be activated as soon as the button is pressed.
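The ‘queue instructions, then execute’ interaction described above can be sketched in a few lines. The class and method names here are assumptions for illustration, not the final implementation:

```python
# Sketch of the plan-then-execute interaction: the user queues
# instructions (an action plus a distance or number of degrees) and
# nothing moves until execute is called. Names are illustrative.
class CommandQueue:
    def __init__(self):
        self.pending = []

    def add(self, action, amount):
        # e.g. add("forward", 30) for 30 cm, or add("rotate", 90) for 90 degrees
        self.pending.append((action, amount))

    def execute(self, send):
        """Send each queued command in order via the supplied callable,
        then clear the queue, mirroring how the real rover follows its
        pre-assigned instructions."""
        for action, amount in self.pending:
            send(action, amount)
        self.pending.clear()
```

The camera tilt would bypass the queue entirely and call `send` directly, since it activates as soon as the button is pressed.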

On the website homofaciens.de (home of the RoboSpatium, as mentioned in a previous post), the control system for the robots is fairly simple and easy to use. Additionally, he provides the source code and instructions on setting up Apache2 on the Pi so that it can communicate with another computer and send and receive instructions.
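The homofaciens.de robots use Apache2 on the Pi; as a lighter-weight illustration of the same idea, Python’s built-in `http.server` can accept simple command requests from the web page. The URL scheme (`?cmd=forward&amount=20`) is my own assumption, not his:

```python
# Minimal sketch of a Pi-side command endpoint using only the standard
# library. The query format is an illustrative assumption.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def parse_command(path):
    """Extract (cmd, amount) from a path like /drive?cmd=forward&amount=20."""
    query = parse_qs(urlparse(path).query)
    cmd = query.get("cmd", ["stop"])[0]
    amount = int(query.get("amount", ["0"])[0])
    return cmd, amount

class RoverHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        cmd, amount = parse_command(self.path)
        # the real code would drive the motors via GPIO here
        self.send_response(200)
        self.end_headers()
        self.wfile.write(f"ok: {cmd} {amount}".encode())

# To run on the Pi:
# HTTPServer(("", 8080), RoverHandler).serve_forever()
```

The JavaScript on the tablet would then only need to request URLs in that format when buttons are pressed.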

Designing the Rover

After ordering in the parts that I need for the electronics to control the rover, I measured their dimensions and used this to plan the size of the rover. Aside from fitting all the components, I need to be able to access the Pi and the battery pack to recharge and reboot the rover.

I intend to use plastics for the casing, and have identified 2 mm thick PVC as a good option, as it has high mechanical and tensile strength for the price. The structure will likely be held together with 3 mm threaded steel rods, with nuts supporting the plastic platforms.

The structure will then be hidden underneath the ‘solar’ panels, which are for aesthetic purposes only.

Contingency Plan: Providing Information

If none of the software that I try manages to achieve what I would like, I need another method of providing information to the user about the various aspects of the exhibit. I would therefore overlay the video stream with a separate layer that gives the info, rather than have it integrated into the video itself.


Pressure Pads

The pressure pads work by sending a signal to the computer when two conductive surfaces touch. They could be placed under the terrain to activate specific info when the rover is in a specific area. This, however, requires the terrain to be thin enough and the rover heavy enough to trigger the pads.
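On the computer side, each pad would just be a digital input; the logic is a lookup from triggered pins to the info for that terrain area. The pin numbers and area names below are illustrative assumptions:

```python
# Sketch: each pressure pad closes a circuit on one input pin; when a
# pad reads pressed, the exhibit shows the info for that terrain area.
# Pin numbers and area names are assumptions for illustration.
PAD_AREAS = {
    17: "crater rim",
    27: "dune field",
    22: "rock outcrop",
}

def triggered_info(pin_states):
    """Given {pin: is_pressed}, return the info labels to display."""
    return [PAD_AREAS[pin]
            for pin, pressed in pin_states.items()
            if pressed and pin in PAD_AREAS]
```

The same lookup would work unchanged for the conductive plates or the electromagnetic detectors below, since all three ultimately present as a switch closing on a pin.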

Conductive Plates

The conductive plates work by sending a signal to the computer when a wire underneath the rover makes contact with them. The biggest problem with this is that they would have to be on top of the terrain and thus visible.

Electromagnetic Detectors

These devices are often used in burglar alarms to detect whether a door is open, and could be modified to send a signal to the computer when a magnet underneath the rover passes above them. They could sit above or below the terrain depending on the magnet’s strength; however, the activation area would be quite small and would require the rover to be driven precisely over it.

Aerial Camera

This would be a camera attached to the computer and placed high above the exhibit, which would detect when the rover (a contrasting colour to the terrain) drives into a certain area. This is quite a complicated solution and requires being able to mount the camera high up. It would also mean re-exploring Pure Data.
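The detection itself reduces to finding the centroid of the pixels matching the rover’s contrasting colour and checking which zone it falls in. A minimal sketch, using plain nested lists of pixels rather than real camera frames (the zone names and layout are assumptions):

```python
# Find the rover as the centroid of pixels matching its contrasting
# colour, then look up which exhibit zone that point falls in.
def rover_position(frame, is_rover_colour):
    """Return the (x, y) centroid of pixels for which the predicate holds."""
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, pixel in enumerate(row):
            if is_rover_colour(pixel):
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) // len(xs), sum(ys) // len(ys))

def zone_for(position, zones):
    """zones: {name: (x0, y0, x1, y1)}; return the zone containing position."""
    if position is None:
        return None
    x, y = position
    for name, (x0, y0, x1, y1) in zones.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None
```

A real implementation would pull frames from the camera and tolerate colour noise, which is where Pure Data (or similar) would come back in.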

Testing Software

The Raspberry Pi can output video over HTTP as an MJPEG stream, and it can also stream via RTSP using the H.264 codec. So far I have successfully managed to get both of these formats working.
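One reason MJPEG is convenient is that the stream is just a multipart HTTP response in which each part is a complete JPEG. A minimal way to pull frames out of the raw bytes is to scan for the JPEG start (`FF D8`) and end (`FF D9`) markers, ignoring the multipart headers entirely; this is a sketch of the principle, not production parsing:

```python
# Extract complete JPEG images from a raw MJPEG byte buffer by
# scanning for the JPEG start/end markers.
def extract_jpeg_frames(buffer):
    """Return a list of complete JPEG byte strings found in the buffer."""
    frames = []
    start = buffer.find(b"\xff\xd8")
    while start != -1:
        end = buffer.find(b"\xff\xd9", start + 2)
        if end == -1:
            break  # last frame not yet fully received
        frames.append(buffer[start:end + 2])
        start = buffer.find(b"\xff\xd8", end + 2)
    return frames
```

Each extracted frame can then be decoded as an ordinary JPEG by whatever program ends up doing the manipulation.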

The difficulty then is to pull them into a program that can manipulate them, using shape or colour recognition and chroma-keying. The programs that I previously highlighted as potential solutions have failed to work.
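Chroma-keying itself is simple to express: replace every pixel close to the key colour with the corresponding background pixel. The sketch below uses plain nested lists of (r, g, b) tuples purely for illustration; Pure Data, Quartz Composer and Processing would all do this with built-in video objects rather than per-pixel Python:

```python
# Toy chroma-key: pixels within `tolerance` of the key colour on every
# channel are replaced by the background pixel at the same position.
def chroma_key(frame, background, key=(0, 255, 0), tolerance=60):
    def close(pixel):
        return all(abs(a - b) <= tolerance for a, b in zip(pixel, key))
    return [
        [bg if close(px) else px
         for px, bg in zip(frame_row, bg_row)]
        for frame_row, bg_row in zip(frame, background)
    ]
```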


Manipulating video in Pure Data

Both Pure Data and Quartz Composer are effective for manipulating video from a webcam attached to the computer, but are less capable of receiving a stream over a network. Pure Data’s pix_video and pix_film objects can access webcams and files, but not network streams. Quartz Composer has a community-made patch that can work with certain IP cameras, but only selected Axis camera models are compatible.

I am now testing new software that may be capable of what I want it to do: Processing. Processing is a programming language and environment built on Java, and it is capable of handling video, audio and other visuals.