The Multi-Modal Watch Station (MMWS) was an effort funded by the Office of Naval Research (ONR): an AEGIS cruiser air-defense simulation of a Combat Information Center team performing air defense for a U.S. Navy battle group. Our teams of developers and designers were investigating how various human-computer interfaces could reduce workload and improve performance. I was hired to lead the rapid prototyping effort, while a team of Java programmers worked on the ‘real’ version. Our first system was a beast! We had six Wacom 800×600 touch screens wired together, plus a seventh display in the form of a head-mounted display (HMD) fitted with a head tracker. The first prototypes were strictly ‘golden path’ designs. Demos were a lot of fun: since we only had two projectors, it was my job to toggle the various AV switches while we ran through the script. For the second version, the screens were greatly improved. The new console had three 1280×1024 touch-enabled LCDs and a fourth 1024×768 display, and a custom console had been built to house them.
After we had redesigned the interfaces for the larger displays, we began rebuilding the prototype to allow the user more freedom. The effort also changed focus, as we needed to validate the designs against the actual AEGIS user interface. So an actual scenario was chosen, and the team began building out the screens and user tasks needed for it.
One of the driving concepts behind this effort was task management. Current systems require sailors to ‘hunt and peck’ on tracks (airplanes, ships, etc.) and then figure out which tasks they need to perform. Our system automated much of this effort. Track status reports were auto-generated and logged; the user still reviewed the data gathered by the system, but it became much easier to manage. We even introduced a text-to-speech engine, so sailors no longer had to issue the verbal reports themselves. Reports were also prioritized, so the most important ones were issued first. I kept banging away, and the lines of Lingo kept piling up. The whole system was a huge set of Movies In A Window.
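The report prioritization described above amounts to a priority queue. The original was written in Lingo; as an illustrative sketch only (the class and field names are mine, not from the actual system), here is the idea in Python, with priority 0 as most urgent and FIFO ordering within a priority level:

```python
import heapq
import itertools
from dataclasses import dataclass, field

# Hypothetical sketch of a prioritized report queue. A min-heap keyed on
# (priority, sequence) pops the most urgent report first, and preserves
# submission order among reports that share a priority.

@dataclass(order=True)
class Report:
    priority: int                      # 0 = most urgent
    seq: int                           # tie-breaker for FIFO ordering
    text: str = field(compare=False)   # the report itself; not compared

class ReportQueue:
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()

    def submit(self, text, priority):
        heapq.heappush(self._heap, Report(priority, next(self._seq), text))

    def next_report(self):
        # Return the highest-priority pending report, or None if empty.
        return heapq.heappop(self._heap).text if self._heap else None

q = ReportQueue()
q.submit("Track 7231 new air contact", priority=2)
q.submit("Track 7198 hostile, inbound", priority=0)
q.submit("Track 7205 ID updated to friendly", priority=1)
print(q.next_report())  # prints "Track 7198 hostile, inbound"
```

In a real system the consumer would hand each popped report to the text-to-speech engine; the queue simply guarantees that urgent reports are spoken before routine ones.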
Each of the large displays had the same basic setup, but various components would swap out depending on the type of task the operator was performing. One universal truth about dealing with sailors: they all want MORE MAP! The other modules on the right were various Close Control Read Outs (CCROs), which informed the sailor of various pieces of data about the task or track. Along the bottom was a list of outstanding tasks.

Throughout development, we conducted a variety of usability tests on the UI, and a data logger was added to the code. I suspect the data may still be used by some of the researchers at SPAWAR Systems Center, San Diego.

The Java team had been tasked with developing two versions, a basic MMWS and an advanced MMWS, while I stayed out in front validating designs before they rebuilt them. The idea was that by building two versions, we could measure which enhancements offered the most improvement relative to their development cost. However, the Java team was running into a variety of issues (connecting to the actual training simulators, etc.). Since the prototype was in fact the advanced UI version, we committed to completing that UI for the usability studies. So now I had to scale the prototype from a single-user system to a multi-user, real-time one. Leveraging the now-defunct Multi User System (see the article I wrote on Multi User System at Director-Online), and with help from a great intern, we were able to encode the entire two-hour scenario. Previously we had only run the first 30 minutes, in case we had to use the participant again. The intern also served as my QA.

Finally, we had a fully networked MMWS. There were five primary consoles and six stations for various role players. In the end, about 80,000 lines of Lingo ran the system. Tests went fine, and both systems did really well in the study. Sadly, various issues ended the program, but it was a lot of fun to develop. The project evolved into the Tomahawk Cruise Missile Project.