Copyright © 1999 - 2002 the authors
Released under the terms of the GNU General Public License, version 2.0 or greater. This document has been prepared for printing and the web using XML & DocBook. It is available online at http://www.fresco.org in HTML and PDF. Comments, additions, and discussion are welcome, and best directed towards the mailing list.
Table of Contents
The first thing to do when starting a client is to get a reference to the server object. The display server advertises its service with the naming service, so that clients can look it up there. The name server is a separate process whose address is configured into the CORBA runtime environment of all programs, so you can look it up as an 'initial reference'. The name server returns an 'Interoperable Object Reference' (IOR), which the client then uses for direct requests.
Here is the relevant code in Python:
orb = CORBA.ORB_init(sys.argv, CORBA.ORB_ID)
object = orb.resolve_initial_references("NameService")
ncontext = object._narrow(CosNaming.NamingContext)
name = [CosNaming.NameComponent("IDL:Warsaw/Server:1.0", "Object")]
object = ncontext.resolve(name)
server = object._narrow(Warsaw.Server)
and the same in C++:
CORBA::ORB_var orb = CORBA::ORB_init(argc, argv);
CosNaming::NamingContext_var ncontext = resolve_init<CosNaming::NamingContext>(orb, "NameService");
Server_var server = resolve_name<Server>(ncontext, "IDL:Warsaw/Server:1.0");
One of the most frequent errors is a misconfigured naming service, i.e. the ORB is unable to retrieve the NameService reference. Make sure your naming server is running, and the ORB is able to communicate with it. (Various ORBs provide applets to inspect the naming context graph. You can use these tools as a debugging help.)
The next step is to register with the server, i.e. to request that it hands you a ServerContext. To authenticate yourself, you have to provide a ClientContext object which is able to provide a set of credentials to the server. You therefore create a ClientContextImpl object (a 'servant' in CORBA terminology), then you bind it to an abstract 'CORBA Object' by means of a 'Portable Object Adapter' (POA). The POA manages the demultiplexing of incoming requests, i.e. it makes sure that other parties can talk to the servants that are activated with it. Here is the Python version:
poa = orb.resolve_initial_references("RootPOA")
poaManager = poa._get_the_POAManager()
poaManager.activate()
ci = ClientContextImpl(map(lambda x:ord(x), "hello world applet"))
client = ci._this()
context = server.create_server_context(client)
and the same in C++:
PortableServer::POA_var poa = resolve_init<PortableServer::POA>(orb, "RootPOA");
PortableServer::POAManager_var pman = poa->the_POAManager();
pman->activate();
ClientContextImpl *ci = new ClientContextImpl(Unicode::to_CORBA(Babylon::String("hello world applet")));
ClientContext_var client = ci->_this();
ServerContext_var context = server->create_server_context(client);
Now that you have a ServerContext, you can ask for some resources that are allocated in the display server process for you. These resources are generally 'Kits', which are abstract factories for various purposes. Most Kits create Graphic nodes that you can insert into the scene graph. Here we allocate a DesktopKit, which is responsible for top level windows; a TextKit, responsible for text related objects such as simple text labels or complex text flow objects; a WidgetKit, which generates all the common widgets such as buttons, scrollbars, or choices; and a ToolKit, which generates all those little helpers such as color setters that don't fit anywhere else. In Python that looks like this:
properties = []
object = context.resolve("IDL:Warsaw/DesktopKit:1.0", properties)
desktop = object._narrow(Warsaw.DesktopKit)
object = context.resolve("IDL:Warsaw/TextKit:1.0", properties)
text = object._narrow(Warsaw.TextKit)
object = context.resolve("IDL:Warsaw/WidgetKit:1.0", properties)
widgets = object._narrow(Warsaw.WidgetKit)
object = context.resolve("IDL:Warsaw/ToolKit:1.0", properties)
tools = object._narrow(Warsaw.ToolKit)
and the same in C++:
DesktopKit_var desktop = resolve_kit<DesktopKit>(context, "IDL:Warsaw/DesktopKit:1.0");
TextKit_var text = resolve_kit<TextKit>(context, "IDL:Warsaw/TextKit:1.0");
WidgetKit_var widgets = resolve_kit<WidgetKit>(context, "IDL:Warsaw/WidgetKit:1.0");
ToolKit_var tools = resolve_kit<ToolKit>(context, "IDL:Warsaw/ToolKit:1.0");
Now let's build a scene graph for a tiny applet. We'll start with a little text label, wrap it into a button, and put that button into a window. In Python you write:
label = text.chunk(map(lambda x:ord(x), "hello world"))
black = tools.rgb(label, 0., 0., 0.)
button = widgets.button(black, Command._nil)
window = desktop.shell(button, client)
and in C++:
Graphic_var label = text->chunk(Unicode::to_CORBA(Babylon::String("hello world")));
Graphic_var black = tools->rgb(label, 0., 0., 0.);
Controller_var button = widgets->button(black, Command::_nil());
Window_var window = desktop->shell(button, client);
To run, we'll just start an endless loop. This works since the ORB is multi-threaded: other hidden threads will watch for server callbacks, timers, etc. Of course, you might want to do something more exciting in the main thread. In Python you may write:
while 1:
    line = sys.stdin.readline()
    if len(line) == 1: break
and in C++:
while (true) Thread::delay(1000);
General ideas concerning fresco's architecture...
A central difference between the fresco server and other display servers you might have encountered is that fresco maintains a detailed, abstract graph of the current scene in its own process memory. This means that, rather than asking client applications to repaint every window whenever any change occurs, a lot of the redrawing machinery is contained within the server.
The scene graph abstractly resembles a tree. That is, there are a number of nodes (called "Graphics"), which are connected together in a transitive parent/child relation. The meaning of this relation is a hybrid of a number of intuitively related concepts.
The basic idea of the parent/child relation is that it expresses the logical containment of the child within the parent. This means both that the child only occupies a subset of the space in the scene that the parent occupies, and that the parent is in a way "responsible" for the child. A parent sequences layout and drawing of its children, and may play a "containing" role in memory management, event distribution, etc.
Since the scene graph is actually stored in double-precision floating point values, and since any graphic may be subject to arbitrary linear transformations within the scene, the parent/child relationship between graphics naturally extends to composition of linear transformations. That is, any child is assumed to be subject not only to its own linear transformation, but also the cumulative product of all its parents' transformations. This means, in practice, that if you happen to scale, shear, invert or rotate a window, all of its "contents" will scale, shear, invert or rotate along with it. This behaviour can be overridden in cases where it is undesirable, but it fits with one's physical intuitions so it is included as part of the general semantics of the parent/child relation.
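The composition of transformations down a parent chain can be sketched in a few lines of Python. This is only an illustration of the idea, not fresco's actual Transform interface; the affine representation (a 2x2 matrix plus an offset) and the function names are assumptions for the example.

```python
# Illustrative sketch (not fresco's API): cumulative transforms as
# ((a, b, c, d), (tx, ty)) pairs composed from root to leaf.

def compose(parent, child):
    """Compose two affine transforms; the result applies child first."""
    (pa, pb, pc, pd), (ptx, pty) = parent
    (ca, cb, cc, cd), (ctx, cty) = child
    # matrix product: parent * child
    m = (pa*ca + pb*cc, pa*cb + pb*cd,
         pc*ca + pd*cc, pc*cb + pd*cd)
    # child's offset mapped through the parent, plus the parent's offset
    t = (pa*ctx + pb*cty + ptx, pc*ctx + pd*cty + pty)
    return (m, t)

def cumulative(chain):
    """Fold a root-to-leaf chain of transforms into a single one."""
    acc = ((1.0, 0.0, 0.0, 1.0), (0.0, 0.0))  # identity
    for t in chain:
        acc = compose(acc, t)
    return acc
```

A window scaled by two makes a child translated by (1, 0) land at (2, 0): the child's local offset is stretched along with everything else inside the window.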
There are instances in which it is desirable to share children between multiple parents in the scene graph. This is typically referred to as "flyweighting", and doing so means that the scene graph is actually not stored structurally as a real tree; though it frequently represents a tree and is only flyweighted to improve efficiency. This is somewhat of an "under the hood" issue, but it is central to our design and must be understood at least in passing in order to understand the remaining section, on the visitation pattern.
Storing a large number of small objects can cost too much in terms of memory, particularly for relatively static objects like icons or textual glyphs. However, we would like to avoid special cases, since they complicate our code. So, in order to improve efficiency, especially in the very important case of text, we store only one copy of each object and allow it to appear in multiple logical places in the scene graph.
While each graphic may occur in multiple places, if we enforce the DAG (directed, acyclic graph) structure on our scene, we can maintain a useful fact: that no matter how many paths through the graph there actually are, it remains possible to construct a true tree with exactly the same logical meaning / appearance. That is, if there is a parent P with 1 child C connected along 2 distinct edges to P, we could in principle construct the tree with the same parent P having 2 children C1 and C2, each of which is just an identical copy of C. This process can be repeated anywhere in the scene graph where there are 2 different "trails" leading from the root of the graph to a given child. While we do not explicitly construct this tree, it is easy now to see that we can "imagine" ourselves to be traversing such a "flattened" tree any time we traverse the scene graph, by simply maintaining a stack of which edges were followed from the root of the graph to the current graphic.
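The edge-stack idea above can be sketched concretely. The following is a minimal illustration, not fresco's Graphic or Traversal classes; the class and function names are invented for the example. A single flyweighted child is visited once per distinct trail, exactly as if the graph had been flattened into a tree.

```python
# Illustrative sketch: traversing a DAG while keeping a stack of edges
# makes each shared child appear once per trail.

class Graphic:
    def __init__(self, name):
        self.name = name
        self.children = []  # list of (edge_tag, child) pairs

def traverse(node, trail, visit):
    visit(node, tuple(trail))       # the trail identifies this occurrence
    for tag, child in node.children:
        trail.append(tag)           # push the edge we follow
        traverse(child, trail, visit)
        trail.pop()                 # pop on the way back up

# One glyph flyweighted under a parent along two distinct edges:
glyph = Graphic("glyph")
root = Graphic("root")
root.children = [(0, glyph), (1, glyph)]

occurrences = []
traverse(root, [], lambda n, t: occurrences.append((n.name, t)))
# the single glyph object is visited twice, under two different trails
```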
In fact this is what our traversals do, so the fact of multiple parents for each child is largely hidden from the programmer. The only reason we mention the fact here is to reassure you that the multiple-parent condition is not a serious problem, and to explain why traversals do all the things they do.
In addition to understanding the concept of a trail, it is important to realize that a graphic cannot reliably store a copy of its cumulative transformation or layout information, since it may be laid out at multiple places on the screen, many of which may have different cumulative transformations. Thus it can only store "relative" information about its layout requirements, and have its true state computed on the fly. This is known as "externalizing" its state; in fresco we attempt to externalize as much state from each graphic as possible. Partly this is done to facilitate the memory savings mentioned previously, but it also simplifies the task of maintaining the proper values for layout and cumulative transformation, which are highly dynamic to begin with. Since we compute them on the fly, such values are never "out of sync" with one another.
The scene graph is subject to a few "bulk" operations, such as delivering events and drawing. These operations are encapsulated in stateful objects, called Traversals.
The traversal algorithm is a generic "walk" over the scene graph. It may be either depth first or breadth first at each node, and may apply one of a number of operations to each node. The common feature of traversals is that they compute the externalized state of each node as they visit, so that the node can "read off" its state as it is traversed through.
Obviously a static scene graph is not interesting; the point of a windowing system is to allow users to interact with the computer. We will defer discussion of exactly how a user's events are transformed into application changes for the moment, and focus on how changes to the scene graph are propagated to the screen. The basic redrawing facility described here is invoked from within any graphic node by calling this->need_redraw(), which calculates the graphic's damaged region and requests redrawing from the draw thread.
A graphic may occur in any number of places in the scene graph, so the first thing a graphic must do when updating its appearance on the screen is to work out exactly where it appears. The answer to this question is not a single position -- rather it is a list of regions, each of which represents a separate position on the screen where the graphic appears. This list is called the Allocations of a graphic, and is of central importance to the redrawing system.
Calculating allocations is a recursive call which "reaches up" through each of the graphic's parents and their transformations. Each parent computes the child's allocation by reaching up through each of its own parents, and so on. Eventually the roots of the graph are reached (yes, roots: there can be more than one screen watching the same graph) and regions of the screen are returned. Though it sounds like many steps, it all happens in-process, following pointers, so computing allocations is usually very quick.
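The "reaching up" recursion can be sketched as follows. This is a toy model, not the real Allocation machinery: regions are collapsed to single numeric offsets and the class names are invented, but the shape of the recursion - one result per root-to-graphic trail - is the point being illustrated.

```python
# Illustrative sketch (hypothetical names): computing a graphic's
# allocations by recursing up through every parent until the roots
# (screens) are reached.

class Node:
    def __init__(self, offset=0, parents=None):
        self.offset = offset          # stand-in for a local transformation
        self.parents = parents or []  # shared children have several parents

    def allocations(self):
        if not self.parents:          # a root: a single region at the origin
            return [0]
        regions = []
        for p in self.parents:
            # each parent's allocations, shifted by our local offset
            regions += [r + self.offset for r in p.allocations()]
        return regions

screen_a = Node()
screen_b = Node()
window = Node(offset=10, parents=[screen_a, screen_b])
label = Node(offset=5, parents=[window])
# the label appears once per root, at the composed offsets
```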
In the case of need_redraw(), the graphic's allocations are all merged together into a Region object which the graphic constructs, called the "damage region". The Allocation list contains a reference to a Screen object, which the graphic will then insert the damage region into. Doing so queues up a redraw which the drawing thread will dispatch with a draw traversal.
DrawTraversals are objects which the drawing thread passes over the scene graph, rendering objects which intersect the damage region. Each draw traversal thus carries a damage region inside it, and tests the intersection of each node as it passes by. If the node passes the intersection test, the node will be traversed and be given access to the traversal's DrawingKit, which subsequently can be used to draw to the device in charge of the traversal.
The technical details of the drawing kit (lines, paths, paint, textures) are not deeply relevant at this point; we defer discussion of them to a later chapter. However one thing which is important to know is that layout (coordinates, size, alignment) of graphics is, like their position and transformation, calculated dynamically with each traversal. The details of the layout algorithm are somewhat involved, so likewise they will be deferred. It suffices to say that the abstraction of layout is suitable to local constraint-based systems such as box-packing, typesetting concepts such as "springs" and "glue", absolute positioning, and more.
Users interact with the scene graph by feeding the display server with events, from a wide variety of physical input devices. Unlike many GUIs, we take an extensible and "low impact" approach to dispatching events to their respective destinations.
It is a difficult task to design a user interface which will work not only with all kinds of existing input devices but also with devices not yet conceived. For this reason, and because the concrete environment may be very different for two users, fresco maps physical devices and their input data to logical devices and sets of elementary data, categorizing input data in terms of certain attributes (types) like the following:
Table 4.1. device attributes
name | type |
---|---|
telltale | Bitset |
key | Toggle |
button | Toggle |
positional | Position |
valuation | Float |
Each logical device now possesses any number of these attributes, which are the only means fresco has to describe them. For example, mice and keyboards would be described as follows:
Table 4.2. logical devices
device | name | type | description |
---|---|---|---|
Keyboard | |||
0 | key | Toggle | the keysym (as unicode ?) |
0 | telltale | Bitset | set of current modifiers |
Mouse | |||
1 | positional | Position | the current location |
1 | button | Toggle | the actuated button |
1 | telltale | Bitset | pressed buttons |
Berlin's EventManager will use such a description to create Event types suitable to carry the data associated with each attribute. An Event is therefore nothing but a list of device/attribute pairs, where an attribute has a discriminator (type) and a value. This composition-based principle also allows devices to be coupled. For example, traditionally mouse events trigger different commands depending on whether modifier keys are pressed. This can be achieved simply by synthesizing the appropriate events with the following data:
Input::Position position = ...;
Input::Toggle button = ...;
Input::Bitset keymodifiers = ...;
Input::Event event;
event.length(3);
event[0].device = 1; event[0].attr.location(position);
event[1].device = 1; event[1].attr.selection(button);
event[2].device = 0; event[2].attr.state(keymodifiers);
...
In order for events to have any effect on an application, they must be "dispatched" from the event queue they originate in, and be "received" by some appropriate object in the scene graph. Such objects are called Controllers.
Controllers are implemented in terms of invisible "decorator" graphics. They are parents of the graphics which you naturally assume to be receiving the events. So in the case of a button, for instance, the "image" of a beveled rectangle with some label in it is a child of the invisible controller which really receives and processes mouse clicks. The button's bevel merely reflects the state of the controller. This has the advantage that any graphic can become an "active" recipient of events merely by being wrapped in a suitable controller.
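The decorator idea can be sketched in a few lines. This is an illustration of the structure, not the real Controller interface; the class names and the string events are invented for the example. The controller is a transparent parent: drawing delegates straight to the child, while events and state stay in the wrapper.

```python
# Illustrative sketch: a controller as an invisible parent graphic
# that receives events on behalf of its child.

class Graphic:
    def draw(self):
        return "label"

class Controller:
    def __init__(self, child):
        self.child = child
        self.pressed = False

    def draw(self):
        return self.child.draw()   # invisible: just delegates drawing

    def handle(self, event):
        if event == "press":
            self.pressed = True    # the state lives in the controller
            return True
        return False               # unhandled events are refused

button = Controller(Graphic())
button.handle("press")
```

Any graphic becomes an "active" recipient of events just by being wrapped this way; the wrapped label itself never sees an event.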
The Controller receiving the event is said to hold the focus for the device the event originated from. There are two fundamentally different ways to change the focus for a given device. Positional events are - unless a device grab is active - dispatched to the topmost controller intersecting the event's position. This controller is determined with a pick traversal. For non-positional devices, a controller can request the focus explicitly.
If you step back from the scene graph and just concentrate on the controllers, you will see that they form a sort of "subgraph" within the scene. This is referred to as the logical control graph, since it is the set of nodes into which most applications will hook their logic. The necessary methods to construct the control graph are:
interface Controller
{
  void append_controller(in Controller c);
  void prepend_controller(in Controller c);
  void remove_controller(in Controller c);
  Iterator first_child_controller();
  Iterator last_child_controller();
};

Note that the control graph isn't necessarily isomorphic to the scene graph, though that's what one would intuitively expect. Since the control graph (mostly) defines the traversal order for the navigation of non-positional focus, it is the desired behavior which should ultimately drive the topology of this graph.
For positional events the target controller must be determined - at least if no device grab is active - by comparing the event's position with the graphics' screen real estate. This lookup algorithm is called picking and is done by means of a PickTraversal which gets passed through the scene graph. As it does this, it maintains a growing and shrinking stack of information representing the current state of the traversal. This stack represents a "trail" or "path" to the currently traversed graphic. We need to create a "snapshot" of this trail at the hit graphic. This is done by calling hit on the PickTraversal, resulting in a memento being created.
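The stack discipline and the memento can be sketched as follows. This is a simplified model, not the real PickTraversal: nodes are plain dictionaries, regions are reduced to exact positions, and the names are invented. The key behaviour shown is that the live trail empties as the recursion unwinds, so hit() must copy it.

```python
# Illustrative sketch: a PickTraversal-like walk that keeps a trail
# stack and snapshots it ("memento") when a node reports a hit.

class PickTraversal:
    def __init__(self, position):
        self.position = position
        self.trail = []       # grows and shrinks during the walk
        self.memento = None   # snapshot taken at hit()

    def hit(self):
        self.memento = list(self.trail)  # copy: the stack empties later

def pick(node, traversal):
    traversal.trail.append(node["name"])
    if node.get("region") == traversal.position:
        traversal.hit()
    for child in node.get("children", []):
        pick(child, traversal)
    traversal.trail.pop()

scene = {"name": "root", "children": [
    {"name": "editor", "children": [
        {"name": "polygon", "region": (3, 4)}]}]}
t = PickTraversal((3, 4))
pick(scene, t)
# after the walk the live trail is empty, but the memento survives
```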
Imagine the mouse clicking on the red polygon. This results in the TransformFigure's pick method being called, which looks like:
void TransformFigure::pick(PickTraversal_ptr traversal)
{
  if (ext->valid && traversal->intersects_region(ext))
    traversal->hit();
}

The traversal's trail, at the moment the hit occurs, contains the following entries:
For each position, the trail contains the following information:
Table 4.3. stack information at each position in a traversal
Graphic | the actual node in the scene graph |
Tag | an identifier for the edge to the parent |
Region | allocation, in local coordinates |
Transform | the transformation from global to local coordinate system |
Since the stack will be empty after the traversal is over, the PickTraversal needs to create a memento of itself, which can be used later on to deliver the event. The controller stack, extracted from the trail, is:
It is used to update the focus, i.e. to call (in appropriate order) all the controllers' receive_focus methods. From within this method, the controllers can manipulate global data such as pointer images, context sensitive menus etc. In particular, a Controller may want to install an Event::Filter through which the event has to be passed before the top controller (the Editor in this case) can handle it.
CORBA::Boolean ControllerImpl::receive_focus(Focus_ptr f)
{
  set(Telltale::active);
  f->set_pointer(myPointer);
  return true;
}

Finally, if all filters let the event through, the Editor's handle method is called. It sees the trail like:
A special iterator allows it to access the graphics inside the editor that intersected with the device.
You can navigate the focus through this control graph via the following methods:
interface Controller
{
  boolean request_focus(in Controller c, in Event::Device d);
  boolean receive_focus(in Focus f);
  void lose_focus(in Focus f);
  boolean first_focus(in Event::Device d);
  boolean last_focus(in Event::Device d);
  boolean next_focus(in Event::Device d);
  boolean prev_focus(in Event::Device d);
};
MVC is short for "Model, View, Controller", and is a design technique frequently adopted by object-based programs to assist in modularity, flexibility and reuse. As you can guess, it involves separating the objects in a particular interaction sequence into 3 categories, each of which supports a general-purpose interface through which it is known to the other 2.
Many programs pay a certain amount of lip service to MVC, but Berlin adopts it as a central technique throughout all its systems, internally as well as when communicating with applications in separate processes. It is very important to understand how and why we use MVC.
Separating a program into Model, View and Controller has a number of important advantages over attacking all three concepts at once. First and foremost, it provides a natural set of encapsulation boundaries, which helps reduce program interdependencies and interactions, and thus reduce bugs and enhance program comprehension. Secondly, the separation encourages many-to-many relationships along the component boundaries, which (as it turns out) is implicit in many program requirements from the onset. For instance, having a model separated from the controller makes it very easy to adapt your model to simultaneous manipulation by multiple parties, such as remote users or script engines, or by manipulation through previously unknown event types. Likewise having separate view components makes it easy to produce multiple views of the same model (for simultaneous interaction through different representations) and to adapt to novel representations. In our case, the MVC separation is also an ideal set of boundaries along which to make a switch between programming languages or process address spaces (as allowed by CORBA). We make it a common practice to store some or all of a data model in a client application, and most of the controller and view components in the display server where they have high-bandwidth access to display and input hardware.
The aforementioned separation between process address spaces is, in general, referred to as the client/server separation. In many windowing systems, the client stores the majority of data structures, and the display server stores the minimum data required to represent its drawing state. In fresco, we have much more flexibility over storage locations, for two reasons: the client/server communication protocol is generated automatically by the CORBA stub compiler, so it is very easy to add semantically rich concepts to its "vocabulary"; and the display server has no special operating-system level privilege, so can be much more promiscuous about the sort of dynamic extensions it loads. The resulting flexibility means that we can load most of the representation code of a user interface metaphor into the display server, and just "attach" application models, running in separate processes, to the UI at appropriate places. This separation between "representation space" and "application space" gives us concrete advantages: applications written in simple scripting languages have access to powerful UI components, accessibility mechanisms and user preferences (like "themes") have a more universal effect on applications, network traffic is greatly reduced, multiple representations can be attached to the same application relatively painlessly, and application writers do not need to know as much about the device they are drawing to.
Models support a common interface, which we have named Subject in order to be familiar to Java programmers. It includes operations for adding and removing Observers (such as Views), as well as a common notification method which a client (or the model itself) should call when observers should be notified of a change to the model. In addition, most models subclass the Subject interface a little, to provide accessors for their concrete data-type.
A good example which helps illustrate the purpose of the MVC paradigm is the separation of data and presentation within the Controller. As we have seen in the previous chapter, the controller's job is to process input events: it interprets each event and maps it to (observable) state changes. Events as such are considered a private means of notification between the server and the Controller. Therefore, focus changes and event reception aren't visible to the outside world. However, the Controller is tightly coupled to a model which represents its state. In fact, this coupling is so tight that we chose to implement it within the same object: a Telltale inside the controller serves this purpose. A typical controller implementation will set appropriate flags in this telltale, which are observable. In other words, you never ask the controller whether it has keyboard focus or whether it holds a grab for a positional device. You ask whether it is active, pressed, chosen, etc. For buttons, for example, you typically use frames or highlights to reflect these state flags. This decoupling has the advantage that you can customize the behaviour - i.e. the mapping from events to these semantic flags - and therefore have greater freedom to adapt the interface to your own needs. Here are the predefined flags declared in the Telltale interface:
Table 5.1. predefined telltale flags
enabled | the controller is enabled if it can receive events |
active | a button click or the Enter key would press it |
pressed | controller is being pressed |
chosen | a flag used in toggleable widgets |
running | indicates that an associated command is being executed |
... | ... |
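The telltale-based decoupling described above can be sketched as a small observable flag set. This is an illustration only, not the real Telltale interface: flags are plain strings and observers are callables, both assumptions made for the example. The essential behaviour is that observers are notified only on actual state changes.

```python
# Illustrative sketch: a Telltale as an observable set of flags.

class Telltale:
    def __init__(self):
        self.flags = set()
        self.observers = []

    def attach(self, observer):
        self.observers.append(observer)

    def set(self, flag):
        if flag not in self.flags:        # notify only on real changes
            self.flags.add(flag)
            for o in self.observers:
                o(self.flags)

    def test(self, flag):
        return flag in self.flags

states = []
t = Telltale()
t.attach(lambda flags: states.append(sorted(flags)))
t.set("active")
t.set("pressed")
t.set("active")   # no change, so no notification
```

A button's bevel would be one such observer: it reacts to "pressed" appearing in the set, without ever asking the controller about raw events.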
Here we give some examples of Models which fresco has pre-made interfaces for observing and modifying. They should help convey the idea of Model, if it's not yet clear.
A BoundedValue is a double-precision floating point value with some built-in named increments. The increments are important because they allow general-purpose controllers to be constructed which "step through" the numeric range without needing to care exactly how large the range is or what the increments of stepping are. When the value is changed, the BoundedValue inspects the change to make sure it represents an actual numeric difference (this step is important to avoid notification loops) and then notifies all observers.
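A minimal sketch of this model, with invented names and a single "step" increment rather than fresco's full set of named increments, might look like this. The no-op check before notification is the loop-avoidance step described above.

```python
# Illustrative sketch (hypothetical names): a BoundedValue that clamps
# to its range, steps by an increment, and suppresses notification
# loops by ignoring no-op changes.

class BoundedValue:
    def __init__(self, lower, upper, step):
        self.lower, self.upper, self.step = lower, upper, step
        self.value = lower
        self.observers = []

    def set_value(self, v):
        v = max(self.lower, min(self.upper, v))
        if v != self.value:            # an actual numeric difference?
            self.value = v
            for o in self.observers:
                o(v)

    def forward(self):
        self.set_value(self.value + self.step)

seen = []
bv = BoundedValue(0.0, 1.0, 0.25)
bv.observers.append(seen.append)
bv.forward()          # steps to 0.25, observers notified
bv.set_value(5.0)     # clamped to 1.0, observers notified
bv.set_value(1.0)     # no change, no notification
```

A scrollbar thumb is a typical observer: it only needs forward(), backward() and the notifications, never the concrete range.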
Telltales represent sets of flags, each of which can be independently tested, set or unset. The flags are named to correspond to common "switchable" states that UI controls can assume, such as enabled, visible, active, chosen, running, stepping, choosable, toggle. These names are chosen in order to allow telltales to, amongst other things, be used to model the state of a controller itself.
Strings, the most common example that we all know and love, are slightly more complex in fresco since we use the Unicode text encoding internally. Specifically, every time text is changed in a modifiable string buffer, we must re-chunk the text into indivisible units (not the same as characters), and then sequentially process any unit which was changed by re-rendering it into glyphs. This feature alone precludes making a 1:1 correspondence between the text "model" and any view or control of it.
As discussed previously, views of models (a.k.a. "representation space") reside primarily though not exclusively in the display server process. While it is possible to attach remote "view" nodes to the scene graph, 2 reasons prevent this from occurring in the common case: CORBA itself is somewhat slow when making a large number of inter-process calls, and doing so would also eliminate any user preferences which might affect the view's concrete appearance. Since part of our goal is to allow users to centrally enforce their preferences across all UI elements, the second issue is considered quite serious. It is recommended that you let the server construct views for you in as many cases as possible.
Here we give some specific views of abstract models. These should help make the concept of a View concrete.
Radio boxes are a specific view of a set of mutually exclusive telltales. Selecting one will unselect any other. They are usually presented with a set of labelled, bevelled discs or rectangles, possibly with check marks drawn on top of them.
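The mutual exclusion behind a radio box can be reduced to a couple of lines. This sketch is not fresco's radio box implementation; it simply models a group of "chosen" flags, invented for the example, where choosing one clears whichever was chosen before.

```python
# Illustrative sketch: a radio group as mutually exclusive flags.

class RadioGroup:
    def __init__(self, n):
        self.chosen = [False] * n

    def choose(self, i):
        # choosing one entry unselects every other entry
        self.chosen = [j == i for j in range(len(self.chosen))]

group = RadioGroup(3)
group.choose(0)
group.choose(2)   # unselects entry 0
```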
Berlin's underlying architecture is extremely modular. All non-basic functionality is encapsulated within domain specific modules, which may or may not be loaded by the server, depending on the platform, and on the client requesting it. These modules are called 'kits'. Since only clients will access the most derived interface of a kit, it is not necessary for the server to know all kit interfaces, resulting in the high level of extensibility we have.
Berlin's principal paradigm is composition. All complex objects are compositions of small, lightweight building blocks, each from a specific domain, such as 'text', 'layout', or 'figure'. Let's have a look at a simple button widget.
As simple as this button looks, it is already an amazing composition of smaller parts coming from a variety of kits. Since each domain is supported by its own kit, only those kits that are actually used will be loaded. For example, there is nothing intrinsic to a button that requires text. A button just contains a graphic. That can be a text label, an image, a figure, or even a 3D scene. The method that generates the button takes a graphic and wraps it by a Controller that implements the specifics of a button, i.e. some 'clickable' behavior together with some form of visual feedback:
interface WidgetKit : Kit
{
  Trigger button(in Graphic g, in Command c);
  /*...*/
};
The high abstraction level used in Berlin is provided through the extensive use of factories. Location transparency is achieved through the use of the proxy pattern, i.e. some reference factory returns either a local reference (possibly typedefed to a real pointer in C++) or a proxy to a remote object, depending on the location of the servant. In much the same way, Berlin becomes flexible enough to use various styles through the use of exchangeable implementations.
Clients don't instantiate objects by themselves. Instead, they ask factory objects to create objects for them.
The exact type of the returned object isn't known, which means the server is free to choose whatever implementation is most appropriate. What that means can depend on the specific hardware the server is running on, the UI styles the server is configured for, etc.
When the server is started, it doesn't actually load any kits. Rather, it looks up all available kits and waits for clients to request them. But how does a client express which kits it needs?
First of all, there is the specific type of the kit, which is encoded within a 'repoId'. RepoIds are the CORBA way of expressing type information. Given such a repoId, an arbitrary object can be asked whether it supports that type. This is similar to the RTTI facilities provided by C++. Additionally, Kits provide a PropertySeq which can be inspected. That is simply a set of name/value pairs (strings).
interface Kit
{
  struct Property
  {
    string name;
    string value;
  };
  typedef sequence<Property> PropertySeq;

  readonly attribute PropertySeq properties;
  boolean supports(in PropertySeq p);
  /*...*/
};
The 'properties' attribute returns the full list of properties, while the 'supports' operation checks whether all the specified properties are supported.
With this information, a query method can be provided that asks the server for a specific kit type that supports a given set of properties:
interface ServerContext
{
  Kit resolve(in string type, in Kit::PropertySeq attr)
    raises(SecurityException, CreationFailureException);
};
Since such a request may fail, either because the desired kit isn't available or because the client isn't authorized to use it, this method can throw specific exceptions to indicate the failure.
On the server side, all this is provided by means of the prototype pattern. On startup, one object per kit type is loaded into a lookup table (no worries, kits are almost stateless, so this isn't as expensive as it sounds), so the lookup can be implemented using CORBA's own type system (objects have an '_is_a(in string repoId)' operation). If an appropriate object is found, it is simply cloned, and the new copy is assigned to the client's ServerContext.
The full sequence of events thus looks like this:
template <class T>
typename T::_ptr_type resolve_kit(Warsaw::ServerContext_ptr context,
                                  const char *name,
                                  const Warsaw::Kit::PropertySeq &props)
{
  typename T::_var_type reference;
  try
  {
    CORBA::Object_var object = context->resolve(name, props);
    reference = T::_narrow(object);
  }
  catch (...)
  {
    /* provide some meaningful error diagnostics */
  }
  return reference._retn();
}
ServerContext_var context;
/*...*/
Kit::PropertySeq properties;
properties.length(1);
properties[0].name = CORBA::string_dup("style");
properties[0].value = CORBA::string_dup("Motif");
WidgetKit_var widgets = resolve_kit<WidgetKit>(context,
                                               "IDL:Warsaw/WidgetKit:1.0",
                                               properties);
#include <Prague/Sys/Thread.hh>
#include <Warsaw/config.hh>
#include <Warsaw/Trigger.hh>
#include <Warsaw/DesktopKit.hh>
#include <Warsaw/TextKit.hh>
#include <Warsaw/WidgetKit.hh>
#include <Warsaw/ToolKit.hh>
#include <Warsaw/resolve.hh>
#include <Warsaw/Unicode.hh>
#include <Warsaw/ClientContextImpl.hh>

using namespace Prague;
using namespace Warsaw;

int main(int argc, char **argv)
{
  CORBA::ORB_var orb = CORBA::ORB_init(argc, argv);
  CosNaming::NamingContext_var ncontext =
    resolve_init<CosNaming::NamingContext>(orb, "NameService");
  Server_var server = resolve_name<Server>(ncontext, "IDL:Warsaw/Server:1.0");
  PortableServer::POA_var poa = resolve_init<PortableServer::POA>(orb, "RootPOA");
  PortableServer::POAManager_var pman = poa->the_POAManager();
  pman->activate();
  ClientContextImpl *ci =
    new ClientContextImpl(Unicode::to_CORBA(Babylon::String("hello world applet")));
  ClientContext_var client = ci->_this();
  ServerContext_var context = server->create_server_context(client);
  DesktopKit_var desktop = resolve_kit<DesktopKit>(context, "IDL:Warsaw/DesktopKit:1.0");
  TextKit_var text = resolve_kit<TextKit>(context, "IDL:Warsaw/TextKit:1.0");
  WidgetKit_var widgets = resolve_kit<WidgetKit>(context, "IDL:Warsaw/WidgetKit:1.0");
  ToolKit_var tools = resolve_kit<ToolKit>(context, "IDL:Warsaw/ToolKit:1.0");
  Graphic_var label = text->chunk(Unicode::to_CORBA(Babylon::String("hello world")));
  Graphic_var black = tools->rgb(label, 0., 0., 0.);
  Controller_var button = widgets->button(black, Command::_nil());
  Window_var window = desktop->shell(button, client);
  while (true) Thread::delay(1000);
}
For the sake of scalability, CORBA provides a variety of coupling strategies between objects and servants. While the fastest and easiest way is a one-to-one mapping between object and servant that lasts over the whole lifetime of both, in some contexts it might be better to let the server incarnate objects with servants on demand, and evict servants whenever it is short of memory. In Fresco, we use a one-to-one mapping, where objects are activated with newly created servants and the servant's lifetime is bound to the object's.
In a distributed environment we can no longer use explicit construction to create new instances. Instead, we rely on factory methods to create objects for us. That has two important consequences:
the exact object type isn't known
the caller isn't necessarily the owner of the returned reference
Therefore, we need some other means to inform the system that we no longer need the reference we hold.
For the reasons discussed before, it is the POA which needs to delete the servant once the last pending request has been processed. All servants therefore need to be derived from PortableServer::RefCountBase. The counter is initialized to one, and incremented by one during activation, i.e. as soon as the servant is registered in the Active Object Map of the POA. Therefore, if the servant should be deleted upon deactivation, we must decrement the counter immediately after activation:
Example B.1. servant life cycle for servants derived from PortableServer::RefCountBase
{
  MyServant *servant = new MyServant();                             // (1)
  PortableServer::POA_var poa = ...;
  PortableServer::ObjectId_var oid = poa->activate_object(servant); // (2)
  servant->_remove_ref();                                           // (3)
  // do some work here
  poa->deactivate_object(oid);                                      // (4)
}
(1) The constructor initializes the reference counter to 1.
(2) The POA inserts the servant into its Active Object Map, incrementing the counter by 1.
(3) The decrement resets the counter to 1.
(4) The deactivation removes the servant from the POA's AOM. As soon as the last pending request on this servant has been processed, the counter is decremented and the servant is deleted.
Example B.2. a local temporary object
{
  RegionImpl *region = new RegionImpl;
  PortableServer::POA_var poa = ...;
  PortableServer::ObjectId_var oid = poa->activate_object(region);
  // do some work here
  poa->deactivate_object(oid);
}
Example B.3. a local temporary object, using the Impl_var template
{
  Impl_var<RegionImpl> region(new RegionImpl);
  // do some work here
}
Example B.4. a single owned object is created and destroyed
{
  Graphic::Iterator_var iterator = graphic->first_child();
  // do some work here
  iterator->destroy();
}
Example B.5. a multiowned object is referenced temporarily
{
  Graphic_var child = graphic->body();
  // do some work here
  child->decrement();
}
Example B.6. a multiowned object is referenced temporarily using the RefCount_var template
{
  RefCount_var<Graphic> child = graphic->body();
  // do some work here
}