Common Playground

Marika Kajo, Michael Johansson
Marika.kajo@interactiveinstitute.se
Michael.johansson@interactiveinstitute.se
cast01 // Living in Mixed Realities
Conference on artistic, cultural and scientific aspects of experimental media spaces.
September 21-22, 2001
Sankt Augustin (Bonn, Germany)
Organiser:
 MARS-Exploratory Media Lab

In this paper we present our experiences in using a computer game platform as a development tool in the areas of art, architecture and drama. Our approach has been to find ways to do quick prototyping and testing of ideas in a virtual space. We strive to build general applications independent of the 3D platform used for the specific case, rather than getting stuck in a certain game engine or technological generation. The focus has been on the constant dialogue between the concept and its form. Today's computer games seem well suited to this purpose.

The projects presented were run at the Space and Narrativity Studios respectively, at the Interactive Institute in Malmö.

Keywords:

multi user, art, architecture, drama, computer games, virtual space, avatar, body movement, collaborative storytelling

1. Half-Life – a creative environment?

During the last couple of years we have looked at the game industry in comparison with the more military/industrial oriented VR market. We have also followed the development of the VRML standard for communicating 3D models on the Internet. Both these tracks have in more than one way led us to the insight that there may be other ways to work with 3D as a communication tool in a multi-user environment.

The criteria I was looking for were that the platform should support the following features:

  • A multi-user environment with at least 16 users simultaneously online
  • Hardware graphics support such as textures, dynamic lighting, bump mapping, displacement mapping and other real-time effects
  • Support for both first and third person view
  • Communication tools – voice, or at least chat
  • Server management – the ability to set up your own server and manage it as you want, giving users different access and privileges
  • A graphics editor available both for building worlds and for managing content, and if possible access to source code or even an SDK

At Malmö University K3, in one of the interaction design master classes, we started our first Half-Life projects. In one of the so-called research themes in digital art, the students were introduced to a series of paintings from the late 1800s, which all communicated a certain mood. Artists often have a close relationship with both the tools and the material they are working with. In the connection, or confrontation, with the material, new ideas and decisions take place constantly. In this situation of give and take you soon learn what limitations the material and tools (in this case the software) set up for you as a designer. But it also reveals possibilities and directions you couldn't imagine beforehand. So instead of letting the students observe and evaluate a 3D game, a level designer or a graphical artist's work, they were forced to go into a new domain, learn new software and try to communicate something as hard as a certain mood in space. The goal was to build a room or a world that communicated a certain mood.

The students were introduced to a game level design program called Worldcraft. They had two days of training and then spent about 8 to 12 days building their worlds. They were also introduced to different websites dedicated to Half-Life and Half-Life editing, and they were encouraged to take part in the several discussion lists about Half-Life editing and to download scripts and models to use in their work as part of the learning process. The overall impression among the students was that the editor wasn't hard to learn, but the editor (Worldcraft 2.2) made the computers go mad at first. As the students gained experience the crashes became more seldom, and in the end they actually felt that they could work around most of the shortcomings of the editor. They found that once they got a grip on the editor it was fun and engaging to work with.

At the presentation the students were allowed to play each world and make notes of what feelings they experienced playing the other students' worlds. Surprisingly, about 80% of the students managed to communicate their intentions.

Seeing students with no experience of architectural design so easily take control of designing these virtual spaces in such a rich and interactive way led us to choose Half-Life as our development tool for the projects to come.

2.1 Half-Life – a rapid modeling tool?

One of the ForeSite project's goals is to integrate our experiences from collaborative spatial and architectural design using Virtual Reality into a digital modeling and visualization tool based on Half-Life's game engine. We have therefore developed a couple of prototypes for rapidly designing 3D environments in 2D space. Johan Torstensson, a student at Malmö University K3, developed the first version of the prototype, called Hardhat Designer. In its first appearance it allowed the user to access a small database of 2D elements, which could then be distributed on a 2D surface. On that surface the user could divide the space with walls of different lengths, insert windows and doors, and furnish it with different items such as chairs, sofas and tables. A set of eight prefabricated "placeholders" was also introduced to mark different objects or events. The detailing and visual quality of the 3D/VR worlds seems to be adequate for the chosen tasks. The 2D layout was then compiled into a lit 3D/Virtual Reality world in Half-Life, where the user could navigate through their newly constructed space.
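The compile step from 2D plan to 3D world can be sketched roughly as follows. This is a hypothetical Python sketch of the idea, not the actual Hardhat Designer code, and the function names are ours: each wall segment drawn on the 2D surface is extruded vertically into a 3D quad.

```python
def extrude_wall(x1, y1, x2, y2, height=2.5):
    """Extrude a 2D wall segment into a 3D quad: two floor corners
    and two ceiling corners (z is the vertical axis)."""
    return [(x1, y1, 0.0), (x2, y2, 0.0),
            (x2, y2, height), (x1, y1, height)]

def compile_layout(walls, height=2.5):
    """Compile a 2D plan (a list of wall segments) into 3D wall quads,
    ready to be written out as level geometry."""
    return [extrude_wall(*wall, height=height) for wall in walls]

# A tiny L-shaped plan: two walls meeting at the origin.
plan = [(0, 0, 4, 0), (0, 0, 0, 3)]
world = compile_layout(plan)
```

In the real tool the resulting geometry would then be passed to the Half-Life map compiler for lighting.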

2.2 ForeSite Designer & Playgrounds

The next step was to develop the Hardhat concept further into something called ForeSite Designer, where you could create different kinds of Playgrounds. The basic idea was the same, but some new tools for rotating, multi-selecting and deleting objects were introduced. The size of the playground could also be varied according to the task, and the possibility to use a background image was introduced. But the new and main focus was to investigate different kinds of representations in both 2D and 3D space. We knew from the very first tests of the Half-Life game engine that light made the spatial representations "come alive" without having to overload them with irrelevant detailing [1].

2.3 Workshops with Half-Life and ForeSite Designer

Half-Life and ForeSite Designer have since been tested in workshops together with external users. The 3D/VR possibility was used for evaluating what was built in 2D, but it also immediately generated a lot of new ideas, which were then executed in 2D. The presentations were made in front of a large projection of the virtual spaces the participants had just modeled. Here they could immediately interact with a Virtual Reality/Half-Life world, in scale 1:1, of the scenario they had just designed. After the workshops the participants and others also had access to the worlds on a Half-Life server over the Internet. There the workshop participants, their colleagues who had not attended the workshop, the research team and others could meet, look around and discuss (in Half-Life there is a text chat feature available for simple messages online) the outcome of the design exercise in a multi-user environment [1].

2.4 ForeSite Designer for innovation and understanding

In our test cases we found that game-based VR is a usable tool in architectural design processes. ForeSite Designer proved to be effective for expanding ideas and gaining a better understanding of the design task. Totally untrained persons were able to build rather complex, furnished and lit workspaces within short time limits. It was fun and stimulating to use, promoted innovative thinking and in that way activated the design process. Our conclusion is that this is because the actual design of the virtual spaces forced the participants to combine different ideas, negotiate and prioritize. In this way the design tool deepened the understanding of the complexity of space [2].

3.1 Communicating Moods in Space

This project, defined in the Narrativity Studio, aims at developing innovative metaphors for spatial expression and communication: to design a virtual narrative space which, combined with event scenarios, inspires collaborative creative processes and establishes a meaningful relationship between virtual space and reality. The mode of communication is physical rather than textual.

For these purposes I was looking for a virtual environment supporting a multi-user environment, an open SDK for application development, and openness for character animation and behaviour control. In collaboration with the Space studio I chose Half-Life as my first test platform. For the first half-year prototype I have looked closer at two areas:

Pace in Space

How do we physically communicate and experience stories together in virtual space? How do we create a virtual space that, in a haptic sense, inspires the pace in space?

Trace of Pace

How do we define dynamic drama structures that allow the space itself to become an actor? In the role of actor the space may embody memories and let events and stories told integrate physically in space as trace of pace: the story and memories of the actors' pace as well as of the space itself.

3.2 The Mixed Reality Platform

With the long-term goal of building a flexible and open application, and not putting too much effort into Half-Life-specific solutions, a Mixed Reality Platform was formulated. The aim of this platform is to create a flexible meeting point between physical and virtual space, an open environment and interface to different 3D engines, as well as flexibility in relation to user interfaces. The platform consists of a protocol and a software package (API) based on the same protocol.

The protocol handles positioning and movement in virtual and physical environments and supports several input devices and systems, in the range from microphones to tracking systems.
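As an illustration, a positioning message in such a protocol might look like the following. This is a hypothetical sketch; the actual protocol's fields and wire format are not specified in this paper, so the names below are our own.

```python
from dataclasses import dataclass

@dataclass
class PositionUpdate:
    actor_id: str   # which Role/actor moved
    source: str     # input system, e.g. "tracker" or "game-engine"
    x: float
    y: float
    z: float

def encode(msg: PositionUpdate) -> str:
    """Serialize to a simple semicolon-separated line."""
    return f"{msg.actor_id};{msg.source};{msg.x};{msg.y};{msg.z}"

def decode(line: str) -> PositionUpdate:
    """Parse a line produced by encode() back into a message."""
    actor_id, source, x, y, z = line.split(";")
    return PositionUpdate(actor_id, source, float(x), float(y), float(z))
```

The point of such a neutral message format is that a tracking system and a game engine can exchange positions without either side knowing about the other.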

The software package is meant as support for others who are interested in developing applications for the platform. It is developed by Johan Torstensson as a Master's project in Interaction Technology at Malmö University K3.

3.3 Creating Drama

The choice of drama structure strives to find ways of "fighting" for collaboration. This is the point of concentration in theatre improvisation and many traditional games. To implement the method of creating drama by establishing a conflict, I have chosen an old tagging game called "Stone, Wolf and Lamb" for the first prototype. The plot in short: Wolf hunts Lamb. If Wolf catches Lamb, they change roles. Lamb may jump over a Stone (also a Role, a character playing a stone); then Lamb becomes Stone, Stone becomes Wolf and Wolf becomes Lamb. This game is implemented as a multi-player computer game in Half-Life, where actors interact through their Roles in first person view.
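The role-change rules can be summarized in a few lines of code. This is an illustrative Python sketch of the rules described above, not the actual Half-Life implementation:

```python
def tag(roles, wolf, lamb):
    """Wolf catches Lamb: the two players swap roles."""
    assert roles[wolf] == "Wolf" and roles[lamb] == "Lamb"
    roles[wolf], roles[lamb] = "Lamb", "Wolf"

def jump_over_stone(roles, lamb, stone, wolf):
    """Lamb jumps over Stone: Lamb -> Stone, Stone -> Wolf, Wolf -> Lamb."""
    assert roles[lamb] == "Lamb" and roles[stone] == "Stone"
    roles[lamb], roles[stone], roles[wolf] = "Stone", "Wolf", "Lamb"

players = {"anna": "Wolf", "ben": "Lamb", "cleo": "Stone"}
jump_over_stone(players, "ben", "cleo", "anna")
# ben is now Stone, cleo is Wolf, anna is Lamb: every player keeps
# cycling through all three roles, so the dramaturgy is circular.
```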

Specific to the chosen game is the constant change of Role. This gives the actor/player different relations and points of view in the game played. This changing of roles also creates a circular dramaturgy where there is no winner.

The theoretical model used for structuring the dramatic scenarios is Sémantique Structurale [3], a structural interpretation of narrative where a subject (S), with a driving force (destinateur, D1), quests for an object (O) with the goal of satisfying a receiver (D2). Added to this main story there is a helper (H), who supports the main story, and an antagonist (A), counteracting the main story: in all six actants.
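For the tagging game, the six actants might be assigned as follows. This is our own illustration of the model as a data structure; the concrete assignments are examples, not taken from the prototype:

```python
# The six actants of Greimas' model, as a simple mapping.
actants = {
    "subject":    "Wolf",    # S: driven to act
    "sender":     "Hunger",  # D1: the driving force behind the quest
    "object":     "Lamb",    # O: what the subject quests for
    "receiver":   "Wolf",    # D2: satisfied when the object is obtained
    "helper":     "Sound",   # H: supports the main story
    "antagonist": "Stone",   # A: counteracts the main story
}

def is_complete(model):
    """A dramatic scenario needs all six actants filled in."""
    required = {"subject", "sender", "object", "receiver",
                "helper", "antagonist"}
    return required <= set(model) and all(model[k] for k in required)
```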

Applying this structure to the tagging game in a virtual playground opens up for a dynamic environment, introducing Fools (autonomous Roles), Space, a "God-Role", Sound, or groups of Roles acting as helper or antagonist to the main story with a common dramatic function.

A feature in the Mixed Reality Platform is a 2D overview of the playground and the currently active players on a LAN. In relation to the dramatic structure this may be interpreted as a "God-Role" acting as director, or as Fate acting as different actants, but this is not yet explored as part of the game.

This drama structure is also interesting when looking at the definition of the actor's viewpoint. In which Role do you experience the story told? In the prototype I have chosen the actor to be one of the three main characters: Stone, Wolf or Lamb.

3.4 The MoodBody

The ways to communicate emotions such as sorrow, fear and happiness are many. We laugh and hug each other using our bodies, but how do you communicate all this at a distance, across time and space, in virtual space? The design of the MoodBody is inspired by physical theatre and focuses on body expression and communication rather than facial expressions. Striving for a non-realistic environment and scenario, the movement design is in the field of extra-daily behavior.

My goal was to find a non-realistic, "clean" character body, optimal for body expressions and with a neutral mask. These criteria also open up for the Role change transformations. The 3D character models were developed together with Michael Johansson in the Space studio. My inspiration for the character model is simple line drawings by Carlo Blasis [4].

Character movement is designed in the context of the game. In the process of defining motion for expressing moods, I find it important to put the moods in context. As in improvisation, relations are as important for the development of the story as individual basic moods and emotions. Our mode of motion capture for this first prototype was to shoot live workshops on video, which became the raw material for the character animation. The workshops were arranged in collaboration with Kent Sjöström, teacher at the Theatre Academy.

3.5 Pace in space, Trace of pace and Space

The playground is a white rectangular space, narrow but with a high roof, staged by light and sound as ending in "heaven" and "hell" respectively.

Pace in Space implements the tagging game and the MoodBody as the basic game rules, controlling the drama structure, the multi-user environment and the overall behavior and timing.

When Communicating Moods in Space was defined, the Trace of Pace was defined as a separate structure due to restrictions in Half-Life's real-time features. When using the Mixed Reality Platform, the trace function is a real-time implementation and becomes an integrated part of the general game rules.

The Trace of Pace concerns the "physical" embodiment of dramatic events, spatial events and Role behaviour. This is Space taking part in the drama as an actant. In the prototype, traces are implemented for Role change, traffic in space and Wolf behaviour (good or bad in relation to the Role's defined goal). As implemented examples: a Role change will plant a seed that grows, creating a jungle over time; movement in space, as traffic, will generate markers of different intensity and colour; and Wolf gets tired in relation to the tempo and performance of his Lamb hunt.
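The trace rules above could be implemented along these lines. This is a hypothetical Python sketch; the class name, constants and event hooks are ours, not the prototype's:

```python
class TraceSpace:
    """Space as an actant: events leave persistent traces in the playground."""

    def __init__(self):
        self.plants = []       # planted at each Role change; grow into a jungle
        self.traffic = {}      # grid cell -> visit count (marker intensity)
        self.wolf_fatigue = 0.0

    def on_role_change(self, position):
        """A Role change plants a seed at the position where it happened."""
        self.plants.append({"pos": position, "size": 0.0})

    def on_move(self, cell):
        """Movement through a cell intensifies its traffic marker."""
        self.traffic[cell] = self.traffic.get(cell, 0) + 1

    def on_wolf_hunt(self, tempo):
        """The faster the hunt, the more tired Wolf becomes (capped at 1.0)."""
        self.wolf_fatigue = min(1.0, self.wolf_fatigue + 0.1 * tempo)

    def tick(self, seconds):
        """Called every frame: planted seeds grow over time."""
        for plant in self.plants:
            plant["size"] += 0.01 * seconds
```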

Experiences in the ForeSite project in the Space studio showed an inertia in making people move around when visiting virtual workplaces. Using the method of conflict in creating drama, we wanted to experiment with and inspire actors/users to experience space in an open-minded way. At an internal institute workshop on rearranging our own studio workspace, we used the Stone, Wolf and Lamb game as a "design game", as an introduction to using and experiencing space as well as a motivation to learn the Half-Life interface in a playful way.

In this workshop I also wanted to experiment with the space design in relation to the dramatic structure: Space as helper/antagonist. In contrast to the original rectangular open space, a lot of effort was put into finding the object of desire when playing in a more complex and layered space, which needs to be integrated and considered in relation to the play structure.

The game rules are implemented in Java as the first customer of the Mixed Reality Platform. The Game Rules were developed by Per Larsson and Jens Henriksson as a Master's project at Lund Institute of Technology/Multimedia.

3.6 Experiences of virtual play

Using VR as augmented imagination is stimulating when playing games. Playing in virtual space gives a unique opportunity to design exciting and dynamic play spaces that are non-realistic, interactive and participatory.

With actors playing in first person, the aspect of design for presence is important – to communicate the Role – “How do I know I am a virtual wolf?”

The neutral character bodies are designed to be open to the actor's fantasy and interpretation. In this prototype the body language and character behaviour are not fully developed, which gives the disadvantage of some confusion of identity. This is partly due to the game itself (it is really confusing in material space too, which is part of the game…).

For the final prototype state, the Role identity is designed with character sounds (Stone, Wolf and Lamb), different viewpoints and sound markers for Role change, as well as Wolf using a paw for hunting. Finally, we worked with Role heads to identify actors in Role. As long as actors are not playing, they keep their neutral bodies.

When focusing on the body as expressive medium and mood communicator, the question of alternative user interfaces is of great importance in finding solutions to integrate body and technology. The next step in the project is to look at our relationship to technology and how technology integrates itself into the creative process, shapes content and potentially can assist in taking the work to a deeper level of consciousness where both form and content are interwoven.

4. Conclusions on Half-Life as playground

Half-Life has proven to be a good environment when it comes to building simple spaces. The deliberate use of textures and lighting is a good way of staging the virtual setting. When the architectural models get complex, the compilation times rise dramatically, which makes tools like ForeSite Designer less attractive: the time span for the shift between 2D sketch (editing) and 3D environment (visualisation) becomes too long, making the tool less interactive. The lack of possibilities to import 3D models from other 3D packages is a shortcoming of the Worldcraft editor. Animation is another area that is very limited and restricted, both in the 3D resolution of the characters and in the way you handle and script animation. The Half-Life SDK is available for free, but as you start to dive deeper into it, the support in both documentation and on the HL forums tends to be missing. So you have to weigh the time spent in a free editor with open source but little support against the commercially available game development platforms. Half-Life has however made it possible for us to try out and test our different scenarios in a short timeframe. The strategy of building separate modules, ForeSite Designer in the Space Studio project and the Mixed Reality Platform, Game Rules and trace function in the Narrativity Studio project, has made it possible to separate our development from the actual 3D application or engine. This lets us use the 3D environment suitable for our projects without starting from scratch each time.

References

  1. Fröst, Peter; Johansson, Michael; Warrén, Peter (ForeSite): Computer Games in Architectural Design, Proceedings of HCI 2001, New Orleans.
  2. Greimas, A.J.: Sémantique structurale, 1966.
  3. Barba, Eugenio; Savarese, Nicola: A Dictionary of Theatre Anthropology – The Secret Art of the Performer, Routledge, 1991.
  4. Spolin, Viola: Improvisation for the Theater, Northwestern University Press, 1963.

A case study of modifying a character's parameters with a mixer in Softimage|XSI

Animating a human being has always been very difficult because of the complexity of the human body. Manipulating one parameter in the body that affects many other parameters is only one of the problems to solve when trying to animate a complex character. Softimage|XSI gives the user the opportunity to solve these problems in a lot of ways, and some solutions are given in this case study. What we first of all wanted to do was to use a hardware mixer, a Peavey PC1600x, to drive the character's parameters. The choice of using a mixer was made primarily for two reasons. First of all, Softimage|XSI supports the hardware, even if it is quite difficult to set up parameters in the mixer. Second, it is very useful to drive more than one parameter at a time. In Softimage|XSI there are plenty of ways to modify an object's parameters. The easiest way is to use chains of objects, like skeleton parts, with constraints to hold the chain. Another useful and easy-to-use instrument is the sliders that Softimage|XSI provides to translate objects. The sliders can even be combined in custom parameter sets and in that way control more than one object at a time. Other ways that are more complicated but very efficient are to use expressions or to modify the function curves. XSI even provides an animation mixer where you can modify predefined animation made out of shapes or just usual keyframing. Finally, another hard but useful way is to use programming to modify an object's expressions, movement and animated parameters.

Peavey PC1600x

The PC1600x is a hardware mixer from Peavey with 16 channels, programmable up to 96 channels. As said before, Softimage|XSI supports this mixer, and it works very well together with a USB adapter. By far the biggest advantage of using a mixer is the flexibility. Changing more than one parameter at a time makes 2D parameters suddenly appear as real 3D movements, and the opportunities to calibrate the character's movements become much wider and more precise. And if you take advantage of the other ways in Softimage|XSI to build 3D sliders and use expressions, you have the ultimate tool for steering and manipulating the character's parameters.
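The core of driving parameters from the mixer is a mapping from channel values to object parameters. A minimal sketch in Python (hypothetical; the class and names are ours, and the offset/scale idea corresponds to the sliders described later in this article). The PC1600x's faders send MIDI controller values, here assumed to be 7-bit (0–127):

```python
class MixerBinding:
    """Bind one mixer channel to one object parameter, with offset and scale."""

    def __init__(self, params, key, offset=0.0, scale=1.0):
        self.params, self.key = params, key
        self.offset, self.scale = offset, scale

    def on_midi(self, value):
        # Map the 7-bit controller value (0-127) into the parameter's range.
        self.params[self.key] = self.offset + self.scale * (value / 127.0)

# One physical fader can drive several parameters at once:
head = {"rot_y": 0.0}
arm = {"lift": 0.0}
channel_1 = [MixerBinding(head, "rot_y", offset=-90.0, scale=180.0),
             MixerBinding(arm, "lift", scale=0.5)]

for binding in channel_1:
    binding.on_midi(127)   # fader pushed to the top
```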

The PC1600x only has some small disadvantages; one is the graphical user interface in Softimage|XSI, which we are actually trying to improve. Later on in this article we hopefully give the reader a useful tool and some suggestions to avoid common problems with the mixer. Even if it seems to be the perfect tool for animating and modelling 3D objects, it still demands some skill. To create an object with a natural look, a deeper understanding of different parameters is needed, along with some artistic knowledge of the object's natural behaviors and fundamental theory of perspective.

Finally, and maybe the most important thing about 3D and art, something that is often missed: remember to have a good time during the work. This is unfortunately something that very often is put in second place, particularly when the technical pieces are in focus. So don't forget to put all of your imagination, creativity and fantasy into the work.

I think 3D is all too often, though of course not always, used to imitate reality down to the very last detail, strictly following the laws of nature. Unfortunately, 3D is all too rarely used to explore the limits beyond the 'real' world. I often find it hard to see the meaning of creating something that could just as well be done with a simple camera. If nobody can see the difference between a photo and a 3D-modelled picture, what's the point?

Modifying parameters 

This case study discusses modelling and animating an object by changing its parameters. Every object has some parameters, and a 3D object has at least three directions in the room. The object's parameters in the room are defined as x, y and z. If we want to, we can call them left-right, up-down and back-forward. But before we proceed, we must also think over one important thing: the parameters of an object can exist as both local and global parameters, and we must first know in what reference frame we are changing our parameters.

You can think about it like this: if I bend one of my fingers so that it moves from two inches from my open hand to just one inch, and I want to change this parameter in Softimage, it is most likely not a good idea to change the y-parameter from 2 to 1. Because how do we know it has not moved from 51 to 50 inches? Maybe we should measure the distance between the foot and the finger, and not the distance between the hand and the finger? Of course we can pick what point we want to have as a reference point, but even so, we must be able to understand the difference between global parameters and the more useful and flexible local parameters. We can think of the global parameters as describing the distance of all objects to a point in the room, like every planet in our solar system having a different distance to the sun. But if we, for example, want to draw Jupiter and Earth to scale on a piece of paper, we are just interested in the distance from the middle of each planet (0) to the planet's surface, and we couldn't care less if it takes ten or maybe twenty minutes for the light from the sun to reach the two planets' surfaces.

There is also one more parameter to consider when we want to animate an object, and that is time. Of course the effect of time is very important to the object's behavior, especially when more than one object at a time is involved in the animation. But right now we are just going to look at the parameters in the room.
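The finger example can be made concrete with a translation-only sketch (hypothetical Python; real Softimage|XSI transforms also involve rotation and scale):

```python
def to_global(parent_global, local):
    """Global position = parent's global position + local offset."""
    return tuple(p + l for p, l in zip(parent_global, local))

# The hand sits 50 inches up in the room; the fingertip is 2 inches
# above the hand in the hand's own (local) frame.
hand = (0.0, 50.0, 0.0)
finger_local = (0.0, 2.0, 0.0)
assert to_global(hand, finger_local) == (0.0, 52.0, 0.0)

# Bending the finger changes only the LOCAL y from 2 to 1 ...
finger_local = (0.0, 1.0, 0.0)
# ... and the global position follows automatically:
assert to_global(hand, finger_local) == (0.0, 51.0, 0.0)
```

This is why the local parameter is the natural one to animate: the same bend works no matter where the hand happens to be in the room.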

Custom parameters

As mentioned before, there are different kinds of parameters in Softimage|XSI; among other things, they differ in functionality and in how they are arranged. But every object has, by default, some global and local parameters. Broadly, the global parameters set up where the object appears in Softimage|XSI's different windows, and the most important local parameters set the size, scale and position of the specific object. There are a few ways to change the values of these parameters: for example, we can write them in a window or modify their values through the sliders (see picture). Even if it is interesting to have the possibility to change an object with a slider, we really want to do more with an object than just modify its size or length. Something that would be very useful is the possibility to change more than one object at a time. To attain this goal, we have to build what Softimage|XSI calls a custom parameter set.

The animatable custom parameter set, or the proxy parameters, is just two or more sliders that have been combined. When the new proxy slider is moved in any direction, it moves all the parameters on the objects to which the original parameters belonged before the new custom parameter was built. It is quite easy to create these parameter sets, and it is a convenient way of moving multiple objects at one time. It is also very useful when you have something that moves linearly. An easy example is a face where you can set up sliders for a happy or a sad face. And as you can easily see, it is not a problem to build very complex slider setups.
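A proxy slider of this kind boils down to fanning one value out to several parameters, each with its own weight. This is an illustrative sketch of the idea; the weights and names are our own, not Softimage|XSI's API:

```python
class ProxySlider:
    """One slider driving several underlying parameters at once."""

    def __init__(self, bindings):
        # bindings: list of (parameter_dict, key, weight)
        self.bindings = bindings

    def set(self, value):
        for params, key, weight in self.bindings:
            params[key] = value * weight

# The happy/sad face example: one slider moves mouth corners and brows.
mouth = {"corner_y": 0.0}
brows = {"height": 0.0}
happy = ProxySlider([(mouth, "corner_y", 1.0), (brows, "height", 0.5)])

happy.set(1.0)    # fully happy: corners up, brows half raised
happy.set(-1.0)   # fully sad: everything mirrored
```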

Expressions

To explain expressions in an easy way, we can compare them to the custom parameter set. Expressions can also, like the proxy parameters, be tied to a specific slider, and both are animatable. The two slider types are quite alike, but there are some significant differences. Expressions take the movement of an object one step further, and you can modify a single object's form in many ways that are impossible with proxy parameters. A big difference between these two approaches to modifying an object is the way we create them. Expressions are almost exclusively made out of mathematical formulas. And because an object's properties are made of pure math, and the object wouldn't even exist without math, it is easy to understand the power of expressions. It is really not very hard to learn, but it requires that you at least understand some basic formulas in algebra and have a vision of how mathematical functions can move in space. Without any advanced mathematical formulas, and just to give a picture of how expressions can be used, I will take two very easy examples.

The first example is one of the easiest expressions we can make. Say we have two objects and we want them to move in just the same way; they can for example be two very complex custom parameter sliders, but it makes no difference. The only thing we have to do is make the two objects equal to each other, which is done by just writing an '=' sign between those objects. Softimage|XSI provides us with an expression editor where these short statements can be evaluated and added to the object. The editor also brings us the most usual formulas to use.

The second example is more of an explanation of what an expression can accomplish than a mathematical explanation. Say we have an object and want to change its form, maybe an object that behaves like waves in a non-linear mode, i.e. not just accelerating in its movement and form. What approach will we take? One way is of course to change every point on the object and modulate it into the desired form. But that is quite tedious if we also want to animate it somewhere between the movements, have it come back to the original form and then one more time come back to the first modulated form. As you probably already have guessed, the answer must be an expression. I will not get entangled in advanced formulas, but a not too wild guess would be an expression where we give the object a sine formula and use one or more variables to adjust the frequency and amplitude.
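That wave expression could look like this sketch, written here in Python for readability; in Softimage|XSI it would be entered in the expression editor's own syntax, and the default values are our own:

```python
import math

def wave(frame, frequency=0.1, amplitude=2.0):
    """An expression driving a point's height with a sine of the current
    frame; frequency and amplitude are the adjustable variables."""
    return amplitude * math.sin(frequency * frame)
```

Tying such an expression to a point's y-parameter makes the surface oscillate over time without keyframing a single point by hand.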

Animation mixer

Softimage|XSI provides a slightly different approach to changing animated parameters in real time. You can store your animated clips and get an opportunity to combine them in a user-defined way through the animation mixer. There are also two different sorts of clips to store: shapes and animation clips. A shape clip can be explained as a surface animation stored in different clips. These clips can contain all information about the specific object, or, depending on how the user has stored them, only some of it; in other words it is possible to store only some of the parameters. One of the animation clips' primary pieces of information is the function curves. Softimage|XSI provides a graphical animation editor to inspect and change the object's values. The animation editor primarily shows the object's movement through time and gives good possibilities to change and adjust values. Despite Softimage|XSI's great capabilities, it has a little weakness: it is not possible to edit the function curves in real-time animation. But we actually found one way to solve this problem: if an object is created just to 'hold' the object we want to change the function curves on, it actually will work. The way to do this is to create an expression between the objects, where the new object can affect the old one. Sadly, the animation mixer is still a rather unexplored area for us, because of the lack of time.
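At its core, mixing two stored clips is a weighted blend of their per-frame values. The following is an illustrative sketch of that idea, not the animation mixer's actual internals:

```python
def mix_clips(clip_a, clip_b, weight):
    """Blend two animation clips (equal-length lists of per-frame values);
    weight 0.0 plays only clip_a, 1.0 only clip_b."""
    return [(1.0 - weight) * a + weight * b for a, b in zip(clip_a, clip_b)]

# Two hypothetical clips for the same parameter, frame by frame:
walk = [0.0, 1.0, 2.0, 3.0]
run = [0.0, 2.0, 4.0, 6.0]
half = mix_clips(walk, run, 0.5)   # halfway between walking and running
```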

Developing in Softimage|XSI

What we wanted to do was a new, more userfriendly interface to communicate with the Peavey PC 1600x hardware mixer. Even if the functions already exist in XSI to setup the mixer, it’s not very easy and takes a lot of time. The first approach we were thinking of was to develop a kind of script were all data was given and at the end of the script take all parameters and write them to the appropiate places. But with 16 channels it seemed a bit messy and even if it was much easier to set up the device, there were no good controllers to adjust values, or a view to see which channels that was already bind to objects. Finally we decided to make some sort of property window for the mixer instead of trying to develop an application. We started out with version 3.5.1, but after some weeks, v.4.0 of Softimage|XSI turned up. And it actually got some new implementation possibilities through the script window. And one of the big advantages was the possibility to directly create listboxes, radiobuttons and other useful graphical components. In earlier version, the only possibility to create that kind of interface was to create them direcly in XSI:s SPDL ( Softimage Property Definition Language ) files. The reason why they really have developed this new possibility is because they wants to skip the SPDL files and only wants to use dynamic created propertyfiles. Even if it seems to be a good idea, I decided to implement the SPDL file with the logics, primarly because of the backward compability which disappear if all programming is done dynamically.On the right hand is the final result of the property that’s making the job much easier when working with the mixer. All the logics is programmed in the SPDL file so it must be installed during a add-on, that will be shipped in a near future. The biggest improvements due to the device manager is the easy way to pick the parameters to the channels. Just by picking the parameter it will be combined to the selected channel. 
It is also easy to navigate between the channels and check which channel is connected to a specific parameter. But one of the simplest features turned out to be the most useful: the handling of scale and offset. When working with parameter sets on complex characters, we found it important to have a scale that limits how far the character's objects can move, so the most useful tools are the sliders for setting scale and offset. The property page also ships with three buttons: one starts the device and loads the property window, and the other two load and save presets for the mixer. Minor bugs will appear if the device is not properly mounted, i.e. not active and added in the device manager; the device and objects must also be set up correctly. When loading a preset, it is important that the objects in the loaded scene match the target objects. One more caveat concerns XSI's original graphical interface: because of the limitations in programming the window interface, and because the property window is updated dynamically, we recommend NOT using the built-in SAVE and LOAD buttons; in the worst case they will cripple the properties.
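The scale-and-offset handling amounts to a simple linear mapping from the mixer's raw channel value to a parameter value. The sketch below assumes a 0–127 raw range (as for a MIDI controller value); the function name and exact range are assumptions for illustration.

```python
def channel_to_parameter(raw, scale, offset, raw_max=127):
    """Map a raw mixer value (assumed 0..raw_max) to a parameter
    value via scale and offset.  A small scale limits how far the
    parameter can move, which is what keeps complex characters
    from making too-big movements."""
    return offset + scale * (raw / raw_max)

# Full fader travel moves the parameter only across [offset, offset + scale]:
lo = channel_to_parameter(0,   scale=2.0, offset=-1.0)  # bottom of travel
hi = channel_to_parameter(127, scale=2.0, offset=-1.0)  # top of travel
```

With scale 2.0 and offset -1.0, the fader sweeps the parameter across [-1.0, 1.0]; halving the scale halves the range of motion without touching the mixer itself.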

Softimage|XSI Netview 

Softimage|XSI provides a web interface, Netview, for interacting with the application. It offers some small advantages over the 'real' software, primarily the possibility of using plain Java and the VBScript engine in web pages, even though we are still tied to the XSI script engine when interacting with the application itself. One idea would be to use ActiveX components, or perhaps a Java application in Netview, to talk to XSI. The main reason for this discussion is the idea of creating an independent application (see Future Development) that takes advantage of real programming instead of the limited scripting language. In any case, we have created a single-threaded demo in Netview that shows the overall functions of the hardware mixer. It contains two buttons: one updates the sliders and one launches the property page we built; obviously, the property button requires the add-on to be installed in order to work. A nice feature of the demo is how the sliders are handled: they take the initial values of the objects and give a graphical picture of those values.

Future Development

Because of the great advantages a mixer provides when handling more than one object at a time, it seems like a good idea to develop an application that is easy to use and handles objects in the same way as the hardware mixer. What first comes to mind is a software mixer that works like the real one but is driven from the keyboard. By assigning different keys to each channel, it would allow at least a couple of objects to be steered at the same time. Obviously the software must be multithreaded, but the biggest problem I can think of is how to communicate with Softimage|XSI. One solution might be to emulate the hardware and take advantage of the mixer support already implemented in XSI. We will see what the future provides.
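A first sketch of such a keyboard-driven software mixer could look like the following. This is purely illustrative: it does not talk to XSI, the key map and class names are assumptions, and in a real tool the key events would come from a multithreaded keyboard listener with the resulting values pushed to Softimage|XSI.

```python
# Illustrative sketch: pairs of keys act as up/down controls for
# mixer channels, mimicking the faders of the hardware mixer.
# Nothing here touches the real XSI API.

KEY_MAP = {
    "q": (1, +1), "a": (1, -1),  # channel 1 up / down
    "w": (2, +1), "s": (2, -1),  # channel 2 up / down
}

class SoftwareMixer:
    def __init__(self, n_channels=16, step=1, lo=0, hi=127):
        self.values = {ch: lo for ch in range(1, n_channels + 1)}
        self.step, self.lo, self.hi = step, lo, hi

    def key_pressed(self, key):
        """Adjust the channel mapped to this key, clamped to range."""
        if key not in KEY_MAP:
            return
        ch, direction = KEY_MAP[key]
        value = self.values[ch] + direction * self.step
        self.values[ch] = max(self.lo, min(self.hi, value))

mixer = SoftwareMixer()
mixer.key_pressed("q")  # channel 1 up
mixer.key_pressed("q")  # channel 1 up again
mixer.key_pressed("a")  # channel 1 down
```

Since several keys can be held at once, this layout would indeed let a user steer a couple of objects simultaneously, which is the point of the proposed tool.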

Jens Pettersson for Michael Johansson 2002

Netview Demo

Drag and drop into a Netview window in Softimage|XSI

Channel_properties add-on

Drag and drop into Softimage|XSI, or right-click and Save as…

Channel_properties package

Zip archive: preset/netview/add-on/help

Swedish

Short description in Swedish