The PIM to Relational Transformation
The transformation rules for generating relational database models typically take care of a consistent object-relational mapping. Although most of these rules are rather straightforward and well-known (Blaha 1998), it can be a hard job to execute them manually. Small changes in the PIM can have a large effect on the relational model. For instance, changing the type of an attribute in the PIM from a simple data type to a class means introducing a foreign key in the corresponding table. The simple data type can be mapped directly to a column in a table. But if the data type is a class, this class will be mapped to a table itself. The column will now have to hold a reference (foreign key) to a key value in that other table.
What rules should be used to generate a relational model? Note that we want to formulate rules that will apply to any UML model, not only to Rosa's PIM. First, we must decide how the basic data types are mapped. This is a fairly simple task. All we need to do is find the right corresponding data type in the relational model. Data types can be mapped according to the following rules. Note that where the relational data type requires a length, we choose an arbitrary one.
A UML string will be mapped onto a SQL VARCHAR(40).
A UML integer will be mapped onto an INTEGER.
A UML date will be mapped onto a DATE.
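These three rules amount to a simple lookup. A minimal sketch, assuming a string-based representation of UML type names (the class and method names are illustrative, not part of any generated code):

```java
import java.util.Map;

// Maps UML primitive type names to SQL column types, following the
// three rules above. The VARCHAR length is arbitrary, as noted.
public class TypeMapper {
    private static final Map<String, String> RULES = Map.of(
        "String", "VARCHAR(40)",
        "Integer", "INTEGER",
        "Date", "DATE"
    );

    public static String toSqlType(String umlType) {
        String sql = RULES.get(umlType);
        if (sql == null) {
            throw new IllegalArgumentException("No mapping rule for " + umlType);
        }
        return sql;
    }
}
```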
But what do we do with the Address? In the PIM the address is not a class but a data type: a struct containing only attributes and no operations. We have two options: either make a separate table for every data type, or inline the data type into the table that holds the attribute. Here we choose the latter option, because it will simplify the alignment with the EJB model. So for struct data types we have the following rule:
A UML data type that has no operations will be mapped onto a number of columns, each representing a field in the data type.
Second, every class should be transformed into a table, where all attributes are fields in the table (rules ClassToTable and AttrToCol). When the type of the attribute is not a data type but a class, the field in the table should hold a foreign key to the table representing that class (rule AttrToFrkey). Note that we do not yet take into account the possibility that the multiplicity of the attribute is more than one.
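The ClassToTable, AttrToCol, and AttrToFrkey rules can be sketched as follows. This is a sketch under assumptions: the Attribute record and the generated SQL shapes are invented for illustration, and the key columns are taken to be INTEGERs.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the ClassToTable, AttrToCol, and AttrToFrkey rules.
// An attribute typed by a data type becomes a plain column (AttrToCol);
// an attribute typed by a class becomes a foreign key column (AttrToFrkey).
public class ClassToTable {
    record Attribute(String name, String typeName, boolean typeIsClass) {}

    static String toColumn(Attribute attr) {
        if (attr.typeIsClass()) {
            // AttrToFrkey: reference the key of the table for that class
            return attr.name() + "_key INTEGER REFERENCES " + attr.typeName();
        }
        // AttrToCol: map the data type directly
        return attr.name() + " " + toSqlType(attr.typeName());
    }

    static String toSqlType(String umlType) {
        switch (umlType) {
            case "String": return "VARCHAR(40)";
            case "Integer": return "INTEGER";
            case "Date": return "DATE";
            default: throw new IllegalArgumentException(umlType);
        }
    }

    // ClassToTable: one table per class, one column per attribute.
    static String toTable(String className, List<Attribute> attrs) {
        List<String> cols = new ArrayList<>();
        for (Attribute a : attrs) cols.add(toColumn(a));
        return "CREATE TABLE " + className + " (" + String.join(", ", cols) + ")";
    }
}
```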
The third step is more complicated. Associations in the UML model need to be transformed into a foreign key relation in the database model, possibly introducing a new table. Note that we have several possibilities for the multiplicities of an association from class A to class B in a UML model:
The multiplicity at A is zero-or-one,
The multiplicity at A is one, or
The multiplicity at A is more than one.
The same holds for the multiplicity at B. This leaves us with nine different combinations of multiplicities at both ends. Furthermore, we have to take into account that an association can be adorned with an association class. The rule can best be expressed in pseudocode:
if the association A to B is adorned by an association class
   or the multiplicity at both ends is more-than-one
then
    create a table representing the association class or the association
    and create foreign keys in both the table representing A and the
    table representing B, referring to this new table
else
    if the multiplicity at one end is zero-or-one
    then
        create a foreign key in the table representing the class at
        that end, referencing the other end
    else /* the multiplicity of the association is one-to-one */
        create a foreign key in one of the tables, referencing the
        other end
    endif
endif
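The pseudocode above can be expressed directly in code. This is a sketch that mirrors the rule as stated: the Multiplicity enum and the returned descriptions of the relational elements to create are invented for illustration.

```java
// Sketch of the association-mapping rule from the pseudocode above.
// Returns a description of the relational elements to create.
public class AssociationRule {
    enum Multiplicity { ZERO_OR_ONE, ONE, MANY }

    static String map(String a, Multiplicity atA,
                      String b, Multiplicity atB,
                      boolean hasAssociationClass) {
        if (hasAssociationClass
                || (atA == Multiplicity.MANY && atB == Multiplicity.MANY)) {
            // A new table for the association (class); both sides refer to it.
            return "table " + a + "_" + b
                 + "; foreign keys in " + a + " and " + b;
        } else if (atA == Multiplicity.ZERO_OR_ONE) {
            // Foreign key in the table at the zero-or-one end.
            return "foreign key in " + a + " referencing " + b;
        } else if (atB == Multiplicity.ZERO_OR_ONE) {
            return "foreign key in " + b + " referencing " + a;
        } else {
            // One-to-one: a foreign key in one of the tables, chosen arbitrarily.
            return "foreign key in " + a + " referencing " + b;
        }
    }
}
```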
Note that in this example we do not take into account the navigability of the association. We assume, for the sake of simplicity, that all associations are navigable.
Each column in a relational model may or may not be allowed to contain the NULL value. In this transformation, only the columns generated from the attributes of the PIM may contain NULL. The other columns, generated from the association ends, constitute the foreign keys and may not contain NULL. The following rules correspond with the above:
A UML attribute will be mapped to a column that may contain the NULL value.
A UML association end will be mapped to a number of columns that may not contain the NULL value.
Figure 5-1 depicts the resulting database model in the form of an Entity-Relationship diagram. You can see that there is one table for each class in the PIM. Columns that are part of the key are repeated as foreign key columns in tables representing the "many" side of the association. The struct called Address is inlined into the Customer and BreakfastOrder tables, and for each field there is one column in the table. The fact that a column may have the NULL value is not shown in the diagram, but assumed to be part of the generated relational model.
Figure 5-1. Relational PSM of Rosa's Breakfast Service
The PIM to EJB Transformation
To complete the system for Rosa's Breakfast Service we need to generate an EJB PSM. We will make a number of architectural choices in the way that we use the EJB framework. These choices are specific for Rosa's Breakfast Service. Depending on the project requirements and the possibilities offered by the available tools, you will need to make your own choices in your own situation. We start out by explaining some aspects of the choices we have made regarding the EJB PSM.
5.2.1 A Coarse Grained EJB Model
The EJB model for Rosa's Breakfast Service is structured in a rather different manner than the PIM. We could have built a component model for Rosa's Breakfast Service by simply generating a component for each class. However, it is crucial for the performance of EJB components in a distributed environment with remote access that the communication between components remains minimal.
The attributes of an object could be exchanged one by one, where each get- or set-operation on an attribute value causes a remote method invocation. This is a straightforward, but rather naive approach. To minimize the frequency of interaction between components, it is better to exchange all attributes of an object in one remote call wherever possible. This results in messages that are more complex and carry a relatively large amount of data, and in component interfaces that have relatively few operations, relieving the burden on the communication network.
Furthermore, it is better to keep the number of components relatively small, because intracomponent communication will not burden the network, whereas intercomponent communication will. To minimize the number of components, we can combine closely related classes into one component, instead of having a component for each of them separately. We call a component model that adheres to these principles a coarse grained component model.
A coarse grained component model is a model of components where the components are large and have infrequent interaction with a relatively high amount of data in each interaction.
In contrast to the coarse grained component model there is the fine grained component model.
A fine grained component model is a model of components where the components are small and have frequent communication with a low amount of data in each interaction.
For Rosa's Breakfast Service we have chosen to use a coarse grained EJB component model. There are a number of books where you can find out more about component models (for example, Cheeseman 2001, Szyperski 1998, and Allen 1998). This book does not address that subject.
The interfaces of the coarse grained components have methods to exchange complete sets of associated objects, and they do not have methods for accessing the individual attributes and associations between these objects. Therefore, a number of classes in the source model must be clustered into so-called EJB data schemas.
An EJB data schema is a set of classes, attributes, and associations that is served by an EJB component as a whole.
An EJB data class is a class that is part of an EJB data schema.
To find out which EJB data classes should be part of a data schema, we use the composite aggregation property of the associations in the PIM. Every class that is part of a whole is clustered into the data schema that is generated from the whole. For example, the class Breakfast is defined to be part of the class BreakfastOrder, therefore Breakfast will be clustered in the data schema for BreakfastOrder and there will be no separate data schema or component for Breakfast.
Likewise, association classes are clustered in the data schema that is generated from the associated class that is able to navigate to the other associated class. For example, the class Change would be clustered into the data schema for Breakfast, but because the class Breakfast itself is clustered into the data schema for BreakfastOrder, the class Change becomes part of the data schema for BreakfastOrder as well.
A client of the EJB component can access the details of these clustered classes by getting a so-called EJB data object.
An EJB data object is an instance of an EJB data class.
Each data class has its own local get- and set-operations for its attributes and associations to other data classes. When the client of the EJB component has finished, all changes that need to be made on the data objects are sent back to the EJB component with the request to process all changes.
Besides data classes that are holding the state of the exchanged objects, we need so-called key classes.
An EJB key class is a class that holds the data that is needed to identify EJB data objects.
If a data class has an association to a class that does not reside in the same data schema, it will refer to a key class instead of a data class. In this manner, the amount of exchanged data in one data object is limited to the instances of the classes within one data schema.
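As a sketch of how a data class might refer across schema boundaries via a key class, consider the Breakfast data class, which lives in the BreakfastOrder data schema but refers to StandardBreakfast in another schema. All names and the key's contents are illustrative, not actual generated code:

```java
// Sketch: a data class in the BreakfastOrder data schema that refers to
// a StandardBreakfast data object by key class only, so that one data
// object never drags in instances from another data schema.
public class BreakfastData {

    // Key class: holds just the data needed to identify a
    // StandardBreakfast data object in its own data schema.
    public static class StandardBreakfastKey {
        public final String name;
        public StandardBreakfastKey(String name) { this.name = name; }
    }

    private String style;
    private StandardBreakfastKey standardBreakfast; // cross-schema reference

    // Local get- and set-operations, as described above; no remote calls.
    public String getStyle() { return style; }
    public void setStyle(String style) { this.style = style; }

    public StandardBreakfastKey getStandardBreakfast() { return standardBreakfast; }
    public void setStandardBreakfast(StandardBreakfastKey key) {
        this.standardBreakfast = key;
    }
}
```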
5.2.2 The Transformation Rules
Now that we have established that we are going to generate a coarse grained EJB model, we are able to define the rules for the transformation from the PIM to the EJB PSM. Figure 5-2 shows the four EJB components that result from this transformation.
Figure 5-2. Top-level EJB component model for Rosa's system
As explained in the previous section, the granularity of the EJB components is based on the composition of the classes in the domain model. In the transformation rules, we will use the term outermost composition of x to refer to the class that is not a part (via a composite association) of another class and is equal to or the (direct or indirect) container of the class x. To make the following transformation rules readable, we will leave out the words UML and EJB whenever the source and target model are clear.
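The notion "outermost composition of x" amounts to a walk up the composite links until a class that is not a part of anything is reached. A minimal sketch, assuming a simple map from each class name to its container (the representation is an assumption, not part of the transformation tool):

```java
import java.util.Map;

// Sketch: find the outermost composition of a class by following
// "part of" (composite aggregation) links upward until a class with
// no container is reached.
public class OutermostComposition {

    // partOf maps a class name to the class it is a composite part of.
    static String outermost(String className, Map<String, String> partOf) {
        String current = className;
        while (partOf.containsKey(current)) {
            current = partOf.get(current);
        }
        return current; // equals className when it is not a part of anything
    }
}
```

For the PIM of Rosa's system this yields, for example, BreakfastOrder as the outermost composition of Change, via the chain Change, Breakfast, BreakfastOrder.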
For each PIM class, an EJB key class is generated.
Each PIM class that is not a composite part of another PIM class is transformed into an EJB component and an EJB data schema.
Each PIM class is transformed into an EJB data class residing in an EJB data schema that is generated from the PIM class that is the outermost composition of the transformed PIM class.
Each PIM association is transformed into an EJB association within an EJB data schema that is generated from the PIM class that is the outermost composition of the transformed PIM association.
Each PIM association class is transformed into two EJB associations and an EJB data class. The EJB associations and the EJB data class are generated within the EJB data schema that is generated from the PIM class that is the outermost composition of the PIM class that can navigate across the transformed PIM association class.
Each PIM attribute of a class is transformed into an EJB attribute of the mapped EJB data class.
Each PIM operation is transformed into an EJB operation of the generated EJB component that is generated from the PIM class that is the outermost composition of the PIM class that owns the transformed PIM operation.
Figure 5-3 depicts the EJB data schemas of the Comestible and StandardBreakfast components. As indicated by the composition relations between the classes in the PIM, the Comestible data schema includes only the Comestible EJB data class, while the StandardBreakfast data schema includes the EJB data classes StandardBreakfast and Part.
Figure 5-3. EJB component model including the data schemas
The PIM to Web Transformation
The Web model specifies the definitions of the Web components. The Web components serve HTML content to the user. Each component serves a subset of the classes and associations of the system. Extra details are added to the Web model to define the layout and the user interactions of the HTML pages. In this example, the Web components serve the same classes as in the data schemas of the entity beans.
Web components are defined similarly to EJB components. The served classes and associations are defined in Web data schemas similar to the EJB data schemas. The most important differences are:
The data types in the Web model define user presentation and interaction details.
In the Web application model there are no key classes; instead, the key classes of the EJB model are referenced.
Web actions are added that define actions that can be triggered by the end-user.
The Web data schemas define which information is shown and which information can be altered by the user. One Web data schema is typically presented to the user in more than one HTML page. A user may create, query, alter, and delete objects from the domain. Which changes the user may execute is defined by properties of the elements of the Web data schemas.
5.3.1 The Transformation Rules
The rules for generating the Web application model from the UML model are almost equal to the ones for generating the EJB application model. Again, we will leave out the words UML and Web whenever the source and target model are clear.
Each class that is not part of another class is transformed into a component and a data schema. The component is set to serve the data schema.
Each class is transformed into a data class residing in a data schema that is generated from the class that is the outermost composition of the transformed class.
Each association is transformed into an association within a data schema that is generated from the class that is the outermost composition of the transformed association.
Each association class is transformed into two associations and a data class. The associations and the data class are generated within the data schema that is generated from the class that is the outermost composition of the class that can navigate across the transformed association class.
Each attribute of a class is transformed into an attribute of the mapped data class.
Each operation is transformed into an operation of the generated Web component that is generated from the class that is the outermost composition of the class that owns the transformed operation.
Figure 5-4 depicts the Web model generated from the same PIM classes as the EJB model shown in Figure 5-3.
Figure 5-4. Web component model of Rosa's Breakfast Service
The Communication Bridges
The description of the MDA process would not be complete without the generation of the communication bridges: one between the relational model and the EJB model, and one between the Web model and the EJB model.
In Figure 4-1 on page 46, the communication bridges are shown by arrows. This means that there is a direction in the relation between the two models. The Web model uses (and knows) the EJB model, and the EJB model uses (and knows) the relational model.
Both bridges are simple and easily explained. The data storage for the EJB components is provided by a relational database, structured according to the generated relational model in Figure 5-1. The EJB-Relational bridge is constituted by the relation between the table generated for a UML class and the EJB data class that is generated from the same UML class. The relationship between the generated relational model and the generated EJB component is shown in Figure 5-5. The figure depicts the EJB component model for Rosa's Breakfast Service without showing the EJB data schemas, but with all dependencies on the tables of the relational model of Figure 5-1.
Figure 5-5. Communication bridge between EJB and relational models
The bridge between the Web model and the EJB model is constituted by the links to the EJB key classes and EJB components as shown in Figure 5-4. Note that both communication bridges are relationships between the PSMs. This relationship will need to be preserved when the PSMs are transformed into code.
Rosa's Application of MDA
Rosa's Breakfast Service
The example we will be exploring in this book is the ordering system for Rosa's Breakfast Service. The example system described in this and subsequent chapters is implemented using the OptimalJ tool. You can download OptimalJ and the example at the Web site http://www.klasse.nl/mdaexplained.
4.1.1 The Business
Rosa has founded a company that supplies a complete breakfast delivered to the homes of its customers. Customers can order from one of the breakfast menus on Rosa's Web site, indicate the hour and place of delivery, give their credit card number, and the ordered breakfast will be delivered. Rosa's slogan is: "Surprise your husband, wife, lover, valentine, mother, father, or friend on their special day while still enjoying the comfort of your bed."
Rosa has composed a series of breakfasts, each of which comes with special decorations. For instance, the Valentine breakfast is served on plates decorated with small hearts and cupids, with matching napkins. The items served are set by the choice of breakfast. For instance, if you choose the French breakfast, you will get one cup of coffee, one glass of orange juice, two croissants and one roll, butter, and jam. But if you choose the English breakfast, you will get two fried eggs and bacon, three pieces of toast, marmalade, and a pot of tea. The Champagne Feast breakfast, which always serves two persons, includes a bottle of champagne, four baguettes, a choice of french cheese and pâtés, and a thermos bottle filled with coffee (to sober up afterwards). Orders can be filled for any number of people, where any in the party may order a different breakfast.
The items served are set, but customers can indicate the style in which the breakfast is served. They can choose between simple, grand, and deluxe. A simple breakfast is served on a plastic tray with carton plates and paper napkins. The orange juice glass, when included, is made of plastic, too. A grand breakfast is served on a wooden tray with pottery plates and cups, and simple white cotton napkins, and the glass is made of real glass. A deluxe breakfast is served on a silver tray with a small vase with some flowers. The plates are fine pottery plates and the napkins are decorated, made from the finest linen. Obviously, the price of the breakfast is higher when the serving style is better. Some breakfast types, like the Champagne Feast, can be ordered in the grand or deluxe style only.
Rosa has ten employees who come into her kitchen at half past five in the morning and work until noon. Five of them take care of deliveries, and five do the cooking and preparation of the breakfasts. Rosa's kitchen is located next to a bakery. The first thing Rosa does in the morning is get fresh bread from the bakery. All other ingredients are kept in supply; twice a week the inventory is resupplied.
Rosa wants to give her customers a bit of flexibility. Customers may, after choosing a standard breakfast as basis, decide to put in extra comestibles, alter the amount of certain parts, and even remove parts from the breakfast. So, if you like the Champagne Feast breakfast, you may order two bottles of champagne instead of one, add another baguette, and leave out the smelly cheese and the coffee (coffee won't help after two whole bottles of champagne anyhow).
4.1.2 The Software System
In this example we are not very interested in the delicious breakfasts Rosa makes; instead, we look at the system needed to support Rosa's business. The ordering system is a standard Web-based, three-tier application. There will be two different Web interfaces, one for the customers, and one for Rosa's employees to indicate which breakfasts they need to make and deliver. If the customer agrees, his or her name and address will be kept in the system. This will enable Rosa to give a discount to regular customers.
The customer Web interface must respond to the customer information. When a known customer logs in, a list of previous orders must be shown with the option to repeat an order. The database should hold the customer information; the name, price, and contents of all breakfast types offered; and the ordering information. The middle tier will primarily add orders and customers to the database.
We have decided to use a three-tier architecture for Rosa's system. Of course, other choices could be made, and the decision on what architecture to use must be made carefully, but that is not the subject of this book. The three-tier application will consist of a database, a middle tier using Enterprise Java Beans (EJB), and a user interface built with Java Server Pages (JSP).
Applying the MDA Framework
Rosa will be interested only in the final system. But, because this is an example of how the MDA framework can be applied, we are interested in the process of building Rosa's Breakfast System. We will dissect this process into parts that have meaning within the MDA framework. We must identify which PSMs and code models should be delivered and which transformation definitions should be used to generate the PSMs and code models. All elements of the MDA framework used in this example are shown in Figure 4-1. The models are shown as rectangles and the transformations are shown as arrows.
Figure 4-1. Three levels of models for Rosa's Breakfast Service
4.2.1 The PIM and PSMs
To start the MDA process we need to build a platform-independent model that comprises the whole of Rosa's business. For the sake of simplicity, our PIM will be written in plain UML. This is the only model that the developer will create completely "by hand." The other models are mostly generated.
Because each tier is implemented using a different technology, we need three PSMs, one for each tier. The first PSM specifies the database, and is described by a relational model depicted in an Entity-Relationship diagram.
The PSM for the middle tier, which we call the EJB model, is written in a language that is a UML variant. It uses classes, associations, and so on, as in UML, but there are a number of stereotypes defined explicitly for the EJB platform.
The PSM for the Web interface is also written in a UML variant. This language uses different stereotypes than the UML variant used for the EJB model. Neither UML variant is standardized as a profile. They are small and simple, so we will not give an explanation of these UML variants.
4.2.2 The PIM to PSM Transformations
Because three PSMs need to be generated, we need three PIM to PSM transformations:
A PIM to Relational model transformation: a transformation that takes as input a model written in UML and produces a model written in terms of Entity-Relationship diagrams.
A PIM to EJB model transformation: a transformation that takes as input a model written in UML and produces a model written in a UML variant using special EJB stereotypes.
A PIM to Web model transformation: a transformation that takes as input a model written in UML and produces a model written in a UML variant using special stereotypes for the Web interface.
4.2.3 The PSM to Code Model Transformations
For each PSM, we need to generate code. Note that in Chapter 2, The MDA Framework, code was explicitly included to fit in our definition of model. Therefore, we can speak of code models written in some programming language. The code model defines the application in code. For Rosa's business we will have three code models in SQL, Java, and JSP, respectively. Therefore, we need three PSM to code transformations:
A relational model to SQL transformation: a transformation that takes as input a model written as an Entity-Relationship model and produces a model written in SQL.
An EJB model to Java transformation: a transformation that takes as input a model written in the UML EJB variant and produces a model written in Java.
A Web model to JSP and HTML transformation: a transformation that takes as input a model written in the UML Web variant and produces a model written in JSP and HTML.
4.2.4 Three Levels of Abstraction
All models in this example describe or specify the same system, although at a different level of abstraction.
At the highest level of abstraction we define the PIM. This model defines the concepts without any specific technology detail.
At the next level there are the PSMs. These models abstract away from coding patterns in the technologies, but they are still platform specific.
At the lowest level there are the three code models. These models are, of course, pure platform specific models.
Figure 4-1 shows the different models at the three levels of abstraction and the transformations between them. Note that the three tiers and the three levels of abstraction are orthogonal. The levels of abstraction are depicted from top to bottom; the tiers are depicted from right to left.
The following two chapters address the transformations and technologies needed to generate the PSMs and code models. Chapter 5 describes the transformation to the three PSMs and Chapter 6 explains portions of the code models of Rosa's Breakfast Service.
The PIM in Detail
The PIM for Rosa's Breakfast System is depicted in Figure 4-2. The PIM is the only model that must be created by humans in a creative process. To find out how to build such a model you can read a large number of books on UML and modeling. Here we assume that the creative process has been successfully completed with the PIM in Figure 4-2 as the result.
Figure 4-2. PIM of Rosa's Breakfast Service
In the PIM every standard breakfast contains a number of parts; each part indicates the amount in which a certain comestible is present in the breakfast. Every order consists of a number of breakfasts. The price of each breakfast is determined based on the chosen style and the price of the chosen standard breakfast. The price of the order is simply the addition of the prices of all breakfasts plus a small delivery fee.
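The pricing rule just described can be sketched as follows. The delivery fee amount, the style surcharge representation, and the method names are illustrative assumptions, not taken from the PIM:

```java
import java.util.List;

// Sketch of the pricing rule: each breakfast's price is determined by
// the price of the chosen standard breakfast and the chosen style;
// the order price is the sum of all breakfast prices plus a small
// delivery fee.
public class OrderPricing {
    static final double DELIVERY_FEE = 2.50; // illustrative amount

    static double breakfastPrice(double standardPrice, double styleSurcharge) {
        return standardPrice + styleSurcharge;
    }

    static double orderPrice(List<Double> breakfastPrices) {
        double total = DELIVERY_FEE;
        for (double p : breakfastPrices) total += p;
        return total;
    }
}
```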
The model in Figure 4-2 defines the breakfast services independently from any specific technology, so indeed, it is a PIM. But Rosa does not want a model, she wants a running system. Therefore, we need to transform the PIM into a number of PSMs, taking into account the relationships between these PSMs.
The example we will be exploring in this book is the ordering system for Rosa's Breakfast Service. The example system described in this and subsequent chapters is implemented using the OptimalJ tool. You can download OptimalJ and the example at the Web site http://www.klasse.nl/mdaexplained.
4.1.1 The Business
Rosa has founded a company that supplies a complete breakfast delivered to the homes of its customers. Customers can order from one of the breakfast menus on Rosa's Web site, indicate the hour and place of delivery, give their credit card number, and the ordered breakfast will be delivered. Rosa's slogan is: "Surprise your husband, wife, lover, valentine, mother, father, or friend on their special day while still enjoying the comfort of your bed."
Rosa has composed a series of breakfasts, each of which comes with special decorations. For instance, the Valentine breakfast is served on plates decorated with small hearts and cupids, with matching napkins. The items served are set by the choice of breakfast. For instance, if you choose the French breakfast, you will get one cup of coffee, one glass of orange juice, two croissants and one roll, butter, and jam. But if you choose the English breakfast, you will get two fried eggs and bacon, three pieces of toast, marmalade, and a pot of tea. The Champagne Feast breakfast, which always serves two persons, includes a bottle of champagne, four baguettes, a choice of french cheese and pâtés, and a thermos bottle filled with coffee (to sober up afterwards). Orders can be filled for any number of people, where any in the party may order a different breakfast.
The items served are set, but customers can indicate the style in which the breakfast is served. They can choose between simple, grand, and deluxe. A simple breakfast is served on a plastic tray with carton plates and paper napkins. The orange juice glass, when included, is made of plastic, too. A grand breakfast is served on a wooden tray with pottery plates and cups, and simple white cotton napkins, and the glass is made of real glass. A deluxe breakfast is served on a silver tray with a small vase with some flowers. The plates are fine pottery plates and the napkins are decorated, made from the finest linen. Obviously, the price of the breakfast is higher when the serving style is better. Some breakfast types, like the Champagne Feast, can be ordered in the grand or deluxe style only.
Rosa has ten employees that come in her kitchen at half past five in the morning and work until noon. Five of them take care of deliveries, and five do the cooking and preparation of the breakfasts. Rosa's kitchen is located next to a bakery. The first thing Rosa does in the morning is get fresh bread from the bakery. All other ingredients are kept in supply. Twice a week their inventory is resupplied.
Rosa wants to give her customers a bit of flexibility. Customers may, after choosing a standard breakfast as basis, decide to put in extra comestibles, alter the amount of certain parts, and even remove parts from the breakfast. So, if you like the Champagne Feast breakfast, you may order two bottles of champagne instead of one, add another baguette, and leave out the smelly cheese and the coffee (coffee won't help after two whole bottles of champagne anyhow).
4.1.2 The Software System
In this example we are not very interested in the delicious breakfasts Rosa makes; instead, we look at the system needed to support Rosa's business. The ordering system is a standard Web-based, three-tier application. There will be two different Web interfaces, one for the customers, and one for Rosa's employees to indicate which breakfasts they need to make and deliver. If the customer agrees, his or her name and address will be kept in the system. This will enable Rosa to give a discount to regular customers.
The customer Web interface must respond to the customer information. When a known customer logs in, a list of previous orders must be shown with the option to repeat that order. The database should hold customer information: the name, price, and contents of all breakfast types offered, and the ordering information. The middle-tier will primarily add orders and customers to the database.
We have decided to use a three-tier architecture for Rosa's system. Of course, other choices could be made, and the decision on what architecture to use must be made carefully, but that is not the subject of this book. The three-tier application will consist of a database, a middle tier using Enterprise Java Beans (EJB), and a user interface built with Java Server Pages (JSP).
Applying the MDA Framework
Rosa will be interested only in the final system. But, because this is an example of how the MDA framework can be applied, we are interested in the process of building Rosa's Breakfast System. We will dissect this process into parts that have meaning within the MDA framework. We must identify which PSMs and code models should be delivered and which transformation definitions should be used to generate the PSMs and code models. All elements of the MDA framework used in this example are shown in Figure 4-1. The models are shown as rectangles and the transformations are shown as arrows.
Figure 4-1. Three levels of models for Rosa's Breakfast Service
4.2.1 The PIM and PSMs
To start the MDA process we need to build a platform-independent model that comprises the whole of Rosa's business. For the sake of simplicity, our PIM will be written in plain UML. This is the only model that the developer will create completely "by hand." The other models are mostly generated.
Because each tier is implemented using a different technology, we need three PSMs, one for each tier. The first PSM specifies the database, and is described by a relational model depicted in an Entity-Relationship diagram.
The PSM for the middle tier, which we call the EJB model, is written in a language that is a UML variant. It uses classes, associations, and so on, as in UML, but there are a number of stereotypes defined explicitly for the EJB platform.
The PSM for the Web interface is also written in a UML variant. This language uses different stereotypes than the UML variant used for the EJB model. Neither UML variant is standardized as a profile. They are small and simple, so we will not give an explanation of these UML variants.
4.2.2 The PIM to PSM Transformations
Because three PSMs need to be generated, we need three PIM to PSM transformations:
A PIM to Relational model transformation: a transformation that takes as input a model written in UML and produces a model written in terms of Entity-Relationship diagrams.
A PIM to EJB model transformation: a transformation that takes as input a model written in UML and produces a model written in a UML variant using special EJB stereotypes.
A PIM to Web model transformation: a transformation that takes as input a model written in UML and produces a model written in a UML variant using special stereotypes for the Web interface.
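To give an impression of what such a transformation definition might look like, here is a minimal sketch of a PIM to Relational rule in the style of ClassToTable and AttrToCol: each UML class becomes a table, and each attribute becomes a column. The sketch is written in Python purely for illustration; the model classes (`Klass`, `Attribute`, `Table`, `Column`) are our own inventions, not part of any standard, and a real MDA tool would operate on a MOF-based model repository instead.

```python
# Illustrative sketch of a ClassToTable-style PIM-to-Relational rule.
# The model classes below are hypothetical stand-ins for PIM and PSM elements.
from dataclasses import dataclass, field

@dataclass
class Attribute:
    name: str
    type: str  # a PIM data type such as "String", "Integer", or "Date"

@dataclass
class Klass:
    name: str
    attributes: list = field(default_factory=list)

@dataclass
class Column:
    name: str
    sql_type: str

@dataclass
class Table:
    name: str
    columns: list = field(default_factory=list)

# Mapping of PIM data types onto relational data types.
TYPE_MAP = {"String": "VARCHAR(40)", "Integer": "INTEGER", "Date": "DATE"}

def class_to_table(klass: Klass) -> Table:
    """Transform one UML class into one table; each attribute becomes a column."""
    columns = [Column(attr.name, TYPE_MAP[attr.type]) for attr in klass.attributes]
    return Table(klass.name, columns)

customer = Klass("Customer", [Attribute("name", "String"),
                              Attribute("regularSince", "Date")])
table = class_to_table(customer)
print(table.name, [(c.name, c.sql_type) for c in table.columns])
# prints: Customer [('name', 'VARCHAR(40)'), ('regularSince', 'DATE')]
```

The point of the sketch is that the rule is defined once, over the PIM language, and then applies to any UML model, not only to Rosa's PIM.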
4.2.3 The PSM to Code Model Transformations
For each PSM, we need to generate code. Note that in Chapter 2, The MDA Framework, code was explicitly included in our definition of model. Therefore, we can speak of code models written in some programming language. The code model defines the application in code. For Rosa's business we will have three code models, in SQL, Java, and JSP, respectively. Therefore, we need three PSM to code transformations:
A relational model to SQL transformation: a transformation that takes as input a model written as an Entity-Relationship model and produces a model written in SQL.
An EJB model to Java transformation: a transformation that takes as input a model written in the UML EJB variant and produces a model written in Java.
A Web model to JSP and HTML transformation: a transformation that takes as input a model written in the UML Web variant and produces a model written in JSP and HTML.
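The first of these, the relational model to SQL transformation, is simple enough to sketch. The following fragment (illustrative Python; the `Table` and `Column` structures are hypothetical stand-ins for elements of the relational PSM) turns a table model into the text of a CREATE TABLE statement:

```python
# Illustrative sketch of a relational-model-to-SQL transformation:
# a PSM element (a table with typed columns) becomes SQL DDL text.
from dataclasses import dataclass

@dataclass
class Column:
    name: str
    sql_type: str

@dataclass
class Table:
    name: str
    columns: list

def table_to_sql(table: Table) -> str:
    cols = ",\n".join(f"  {c.name} {c.sql_type}" for c in table.columns)
    return f"CREATE TABLE {table.name} (\n{cols}\n);"

order = Table("CustomerOrder", [Column("orderDate", "DATE"),
                                Column("price", "INTEGER")])
print(table_to_sql(order))
```

Note how little distance there is between the PSM and the code model here; this is typical of PSM to code transformations, which bridge a much smaller abstraction gap than PIM to PSM transformations.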
4.2.4 Three Levels of Abstraction
All models in this example describe or specify the same system, although at a different level of abstraction.
At the highest level of abstraction we define the PIM. This model defines the concepts without any specific technology detail.
At the next level there are the PSMs. These models abstract away from coding patterns in the technologies, but still they are platform specific.
At the lowest level there are the three code models. These models are, of course, pure platform specific models.
Figure 4-1 shows the different models at the three levels of abstraction and the transformations between them. Note that the three tiers and the three levels of abstraction are orthogonal. The levels of abstraction are depicted from top to bottom; the tiers are depicted from right to left.
The following two chapters address the transformations and technologies needed to generate the PSMs and code models. Chapter 5 describes the transformation to the three PSMs and Chapter 6 explains portions of the code models of Rosa's Breakfast Service.
The PIM in Detail
The PIM for Rosa's Breakfast System is depicted in Figure 4-2. The PIM is the only model that must be created by humans in a creative process. To find out how to build such a model you can read a large number of books on UML and modeling. Here we assume that the creative process has been successfully completed with the PIM in Figure 4-2 as the result.
Figure 4-2. PIM of Rosa's Breakfast Service
In the PIM every standard breakfast contains a number of parts; each part indicates the amount in which a certain comestible is present in the breakfast. Every order consists of a number of breakfasts. The price of each breakfast is determined based on the chosen style and the price of the chosen standard breakfast. The price of the order is simply the sum of the prices of all breakfasts plus a small delivery fee.
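The pricing rule described above is simple enough to sketch directly (illustrative Python; the function names and the fee amount are our own, the PIM does not fix them):

```python
# Illustrative sketch of the pricing rule: the price of an order is
# the sum of the prices of its breakfasts plus a small delivery fee.
DELIVERY_FEE = 2  # hypothetical amount; the PIM does not specify it

def breakfast_price(standard_price, style_surcharge):
    # Price of one breakfast: the chosen standard breakfast plus the chosen style.
    return standard_price + style_surcharge

def order_price(breakfast_prices):
    return sum(breakfast_prices) + DELIVERY_FEE

prices = [breakfast_price(20, 5), breakfast_price(12, 0)]
print(order_price(prices))  # prints: 39
```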
The model in Figure 4-2 defines the breakfast services independently from any specific technology, so indeed, it is a PIM. But Rosa does not want a model, she wants a running system. Therefore, we need to transform the PIM into a number of PSMs, taking into account the relationships between these PSMs.
MDA Today
OMG Standards
The MDA is defined and trademarked by the OMG. We therefore first take a look at the OMG standards that play a role within the MDA framework.
3.1.1 OMG Languages
The OMG defines a number of modeling languages that are suitable to write either PIMs or PSMs. The best known of these is UML, which is also the most widely used modeling language.
The Object Constraint Language (OCL) is a query and expression language for UML, which is an integral part of the UML standard. The term "Constraint" in the name is an unfortunate leftover from the time when OCL was used only to specify constraints on UML models. Currently, OCL is a full query language, comparable to SQL in its expressive power.
The Action Semantics (AS) for UML defines the semantics of behavioral models in UML. Unfortunately, it defines behavior at a very low level of abstraction, so it is not directly suitable for writing PIMs, which need a higher level of abstraction. The AS is also not a language that can be used directly by a modeler, because it does not define a concrete syntax: there is no standardized way to write anything down.
UML includes a profile mechanism that enables us to define languages derived from the UML language. The language defined in the profile is a subset of UML with additional constraints and suitable for a specific use. It uses the UML diagrammatic notation and OCL textual queries, and looks like UML. Many such profiles are standardized by the OMG; others are not standardized, but publicly available. Official OMG profiles include the CORBA Profile, the Enterprise Distributed Object Computing (EDOC) Profile, the Enterprise Application Integration (EAI) Profile, and the Scheduling, Performance, and Time Profile. More profiles are being developed and will be standardized in the coming years. Profiles are usually suitable for writing PSMs.
The UML/EJB Mapping Specification (EJB01) is an example of a profile that is standardized through the Java Community Process. Several profiles for other programming languages, like Java, C#, and so on, are defined by individual organizations and tool vendors.
Another language that is defined by the OMG is the Common Warehouse Metamodel (CWM). This is a language specifically designed to model data warehousing and related systems.
Chapter 11 describes the various OMG languages and their role in MDA in more detail.
3.1.2 OMG Language and Transformation Definitions
Languages used within the MDA need to have formal definitions so that tools will be able to automatically transform the models written in those languages. All of the languages standardized by the OMG have such a formal definition. The OMG has a special language called the Meta Object Facility (MOF), which is used to define all other languages. This ensures that tools are able to read and write all languages standardized by the OMG.
The transformation definitions used in the MDA framework are currently defined in a completely nonstandardized way. To allow standardization of these transformation definitions, the OMG is currently working on a standard language to write transformation definitions. This standard is called QVT, which stands for Query, Views, and Transformations. At the time of writing, the Request for Proposals (RfP) for QVT has been published. QVT is still being worked on by OMG members so we don't yet know exactly how the finished specification will look.
UML as PIM Language
As seen in section 1.3.1, the level of completeness, consistency, and unambiguity of a PIM must be very high. Otherwise, it is not possible to generate a PSM from a PIM. Let's investigate to what extent UML is a good language for building PIMs.
3.2.1 Plain UML
The strongest point in UML is the modeling of the structural aspects of a system. This is mainly done through the use of class models, which enables us to generate a PSM with all structural features in place. The example in Chapter 4 shows how this is done.
UML has some weak points that stop us from generating a complete PSM from a PIM. The weak area in UML is in the behavioral or dynamic part. UML includes many different diagrams to model dynamics, but their definition is not formal and complete enough to enable the generation of a PSM. For example, what code (for any platform) would you generate from an interaction diagram, or from a use case?
Plain UML is suitable to build PIMs in which the structural aspects are important. When a PSM is generated, a lot of the work still remains to be done on the resulting model, to define the dynamic aspects of the system.
3.2.2 Executable UML
Executable UML (Mellor 2002) is defined as plain UML combined with the dynamic behavior of the Action Semantics (AS). The concrete syntax used in Executable UML has not been standardized.
The strength of plain UML, modeling the structural aspect, is present in Executable UML as well. Executable UML to some extent mends the weak point in plain UML, the modeling of behavior. In Executable UML the state machine becomes the anchor point for defining behavior. Each state is enhanced with a procedure written in the AS.
In principle, Executable UML is capable of specifying a PIM and generating a complete PSM, but there are a few problems:
Relying on state machines to specify complete behavior is only useful in specific domains, especially embedded software development. In other, more administrative, domains the use of state machines to define all behavior is too cumbersome to be used in practice.
The AS language is not a very high-level language. In fact, the concepts used are at the same abstraction level as a PSM. Therefore, using Executable UML has little advantage over writing the dynamics of the system in the PSM directly. You will have to write the same amount of code, at the same level of abstraction.
The AS language does not have a standardized concrete syntax or notation; therefore, you cannot write anything in a standard way.
Executable UML is suitable within specialized domains, but even there the benefits might be less than you would expect, because of the low abstraction level of the action language.
3.2.3 UML–OCL Combination
Using the combination of UML with OCL to build a PIM allows for PIMs that have a high quality; that is, they are consistent, full of information, and precise. The strong structural aspect of UML can be utilized and made fully complete and consistent. Query operations can be defined completely by writing the body of the operation as an OCL expression. Business rules can be specified using OCL, including dynamic triggers.
The dynamics of the system can be expressed by pre- and post-conditions on operations. For relatively simple operations the body of the corresponding operation might be generated from the post-condition, but most of the time the body of the operation must be written in the PSM. In that case, generating code for the pre- and post-condition ensures that the code written in the PSM conforms to the required specification in the PIM.
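A sketch of what "generating code for the pre- and post-condition" could mean in practice follows. It is written in Python purely for illustration; in a real MDA tool the checks would be generated from OCL expressions in the PIM, and the operation and condition names here are hypothetical.

```python
# Illustrative sketch: a generated wrapper checks a pre- and post-condition
# around a hand-written operation body, so that the code written in the PSM
# is forced to conform to the specification given in the PIM.
def checked(pre, post):
    def decorate(body):
        def wrapper(*args, **kwargs):
            assert pre(*args, **kwargs), "precondition violated"
            result = body(*args, **kwargs)
            assert post(result, *args, **kwargs), "postcondition violated"
            return result
        return wrapper
    return decorate

# Hypothetical operation: adding a breakfast price to an order total.
@checked(pre=lambda total, price: price >= 0,
         post=lambda result, total, price: result == total + price)
def add_price(total, price):
    # Hand-written body: the part that still must be written in the PSM.
    return total + price

print(add_price(30, 5))  # prints: 35
```

If the hand-written body ever violates the generated checks, the violation is detected at run time, which is exactly the conformance guarantee described above.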
Although the dynamics of the system still cannot be fully specified in the UML–OCL combination, UML class models combined with OCL allow for a much more complete generation of PSMs and code than plain UML does. At the time of writing, the combination of UML and OCL is probably the best way to develop a high-quality, high-level PIM, because it results in precise, unambiguous, and consistent models that contain much information about the system to be implemented.
Tools
Ever since MDA became a popular buzzword, vendors have claimed that their tools support MDA. Tools that were on the market for many years, even before the name MDA was invented, make these claims. Most of these claims are true, in the sense that they support some aspect of MDA. We will use the MDA framework as shown in Figure 2-7 to analyze what level of support a tool really offers.
3.3.1 Support for Transformations
Support for MDA comes in many different varieties. Simple code generation from a model has been done for more than a decade, and lies well within the boundaries of MDA. The demands that MDA places on models and transformations of models in the ideal situation, however, are very high. In this section we will focus on the support for transformations that tools can offer.
PIM to PSM Transformation Tools
This type of tool transforms a high-level PIM into one or more PSMs. Such tools are barely available at the time of writing, although some tools offer minimal functionality in this area.
PSM to Code Transformation Tools
The most well-known support for MDA is given by tools that act as black-box PSM to code transformation tools. They have a built-in transformation definition and take one predefined type of model as source and produce another predefined type as target. The source model is a PSM, while the target is the code model. In fact, code generation from traditional CASE tools follows this pattern.
Several tools persist the relationship between the PSM and the code, and enable you to see changes in either model reflected in the other immediately after the change. This is possible because the PSM and the code are relatively close to each other and have almost the same level of abstraction.
PIM to Code Transformation Tools
Another type of tool supports both the PIM to PSM and the PSM to code transformation. Sometimes the user will only see a direct PIM to code transformation and the PSM is left implicit. With this type of tool, the source and target language and the transformation definition are built into the tool that acts as a black box.
UML is usually used as the PIM language. Dynamic functionality that cannot be expressed in UML needs to be added manually in the generated code.
Tunable Transformation Tools
Ideally, tools should allow for some tuning or parameterization of a transformation. However, access to the transformation definition to adjust it to your own requirements is usually not available. The best one can get today is a transformation definition written in an internal, tool-specific scripting language, and making changes to such a script is a time-consuming task. Because there is no standard language to write transformation definitions yet (see QVT in section 3.1.2), transformation definitions are by definition tool-specific.
Most tools only work for a predefined PIM language, which is often required to be a restricted variant of UML. Although UML diagrams are used to model a PIM, internally the tools do not use the UML language definition, but their own tool-specific definition of UML. Because this does not always follow the UML language definition, even UML experts will have to learn the tool-specific UML definition first and will have difficulty writing transformation definitions.
Transformation Definition Tools
Transformation definition tools support the creation and modification of transformation definitions. This type of tool is needed when you cannot use a transformation definition off the shelf and need to create your own. The only type of transformation definition tool that we have encountered is the tool-specific scripting language described in the previous paragraph. The heavy dependency of MDA on complex transformation definitions drives a need for a transformation definition language (QVT) and tools that are better suited to this task. More flexible tools should allow a new language definition to be plugged in and used in a transformation. Such tools are not on the market yet.
Because the tools that we have at our disposal today are not ideal, you might conclude that you cannot use MDA successfully today. The situation is far better than that. Although the full potential of MDA might not yet be achievable, the tools we have today can give us enough of the MDA benefits to be useful.
3.3.2 Categorizing Tools
Although transformation tools are at the very heart of an MDA development environment, they are not the only tools that are needed. Next to the functionality that transformation tools bring, other functionality is relevant. For instance, one needs a tool in which models can be made and changed. In Figure 3-1 the functionality that we need in the complete environment is shown. We will explain each item in more detail.
Code Editor (IDE): The common functions provided by an Integrated Development Environment (IDE), for example debugging, compilation, and code editing, are indispensable.
Code Files: Although we can consider code to be a model, it is usually kept in the form of text-based files, which are not a format that other tools can readily process. Therefore, we need the following two items:
Code File Parser: A parser that reads a text-based code file and stores the code in the model repository in a model-based form that other tools can use.
Code File Generator: A generator that reads the code in the model repository and produces a text-based code file.
Model Repository: The "database" for models, where models are stored and can be queried using XMI, JMI, or IDL (see section 11.2.1).
Model Editor (CASE tool): An editor for models, in which models can be constructed and modified.
Model Validator: Models used for generation of other models must be extremely well-defined. Validators can check models against a set of (predefined or user-defined) rules to ensure that the model is suitable for use in a transformation.
Transformation Definition Editor: An editor for transformation definitions, in which transformation definitions can be constructed and modified.
Transformation Definition Repository: The storage for transformation definitions.
Figure 3-1. Functionality in an MDA development environment
Most of today's tools combine a number of functions in a more or less open fashion. The traditional CASE tools provide a model editor and a model repository. A code generator based on a scripting language and plugged into a CASE tool provides the transformation tool and transformation definition editor. In that case, the transformation repository is simply text files.
All functions may come in two forms: language specific or generic. A language-specific tool may, for example, provide a model editor for UML and a code generator from UML to C# only. A generic model editor would enable the user to edit any model, as long as a language definition is available.
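The Code File Parser and Code File Generator items above can be sketched as a round trip: a text-based code file is parsed into a model-based form (here a plain dictionary standing in for the model repository) and generated back to text. The sketch is illustrative Python; the one-field-per-line "code" format is invented for the example.

```python
# Illustrative sketch of the Code File Parser / Code File Generator pair.
# The "code" format (one "name: type" field per line) is made up for the example.
def parse_code_file(text):
    """Parser: read a text-based code file into a model-based form."""
    model = {}
    for line in text.strip().splitlines():
        name, type_ = line.split(":")
        model[name.strip()] = type_.strip()
    return model

def generate_code_file(model):
    """Generator: produce a text-based code file from the model."""
    return "\n".join(f"{name}: {type_}" for name, type_ in model.items())

source = "name: String\naddress: Address"
model = parse_code_file(source)
assert generate_code_file(model) == source  # the round trip preserves the code
```

Once the code is available in model-based form, the other tools in Figure 3-1 (validators, transformation tools) can work on it without having to understand the text format themselves.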
Selecting Tools
If you are selecting tools to set up your MDA development environment, the above features can help to find your way among the myriad of tools available. First of all you need to find out your own requirements. Are you happy to be language specific, do you want to be able to combine different tools using standardized interfaces, or are you happy with one monolithic tool that incorporates all the functionality you need?
From the tool perspective, you can investigate what functions are supported by a tool and how language specific they are. You should also check whether the functions, the complete tool, or both, can work on models that are interchanged using standard mechanisms (XMI, JMI, IDL). When you go through all of the potential functions, you are able to get a good characterization of the tool in question. There are no tools that provide all functionality in a fully generic way; therefore, you should be prepared to choose several tools that need to be combined.
Although transformations are at the core of MDA, many tools that claim to support MDA do not perform transformations. Instead, they provide some of the other functionality that is required in an MDA development environment. For instance, a tool may only implement the model repository functionality. This is perfectly all right, because you will need to combine multiple tools anyway. The main issue is to find out what features a specific tool supports.
We have not included a tool comparison in this book because the tool market around MDA is still in flux. As the MDA further evolves, tools will start to support pieces of the MDA process. Any characterization of existing tools will be outdated by the time you are reading this. You are much better off applying the above categorization to tools that you encounter yourself. References to tools that claim MDA support can be found at the OMG website: http://www.omg.org/mda.
Development Processes
The MDA does not require a specific process to be used for software development. Processes are not standardized by the OMG, so you are free to choose your own. Of course, the central position of models and transformations in MDA should be reflected in such a process. We will take a short look at some of the more popular processes and show how they can be used for MDA development.
3.4.1 Agile Software Development
A current trend in software development processes is to minimize the amount of effort and time spent on building models that serve only as documentation, even though they do capture some of the interesting aspects of the software to be built. The purpose is to ensure that the software delivered works for its users. Since requirements change continuously, the software being developed must change accordingly. The ability of a software project to accommodate changes in a flexible and immediate way is the core of Agile Software Development. According to Cockburn (2002), "Working software is valued over comprehensive documentation." Well, we couldn't agree more. At the Web site http://agilemanifesto.org you can find the Agile Manifesto describing the agile principles. On this website you may post a quote explaining why you like this approach. The quote from one of the authors of this book is:
As co-author of the UML standard, people usually think I love large and detailed models. The contrary is true, a model is only worth building if it directly helps to achieve the final goal: building a working system. With the emergence of MDA tools, it becomes possible to directly move from model to code. This "promotes" models from being merely documentation to becoming part of the delivered software, just like the source code.
Because changing a model means changing the software, the MDA approach helps support agile software development.
3.4.2 Extreme Programming
The Extreme Programming (XP) approach is a very popular way of working in which the focus lies on writing code in small increments, so that you have a working system at all times. Each new requirement must be accompanied by an explicit test case, which is used to test the software. When adding new functionality, all previous tests are run in addition to the new tests, to ensure that existing functionality is not broken.
As we explained in section 1.1.1, the focus on code only is too limited. Code must be augmented with so-called "markers" that document the code at a higher level. In extreme programming this is often seen as overhead. When we realize that these markers may take the form of MDA models that directly transform into code, creating these markers is not overhead anymore. On the contrary, the high-level models help to develop the software faster.
This means that we can bring XP to a higher abstraction level, and we might want to talk about "Extreme Modeling."
3.4.3 Rational Unified Process (RUP)
The RUP is a process that is much more elaborate, or much heavier, than the agile or extreme processes. Project managers often like these larger and more structured processes because they give a better feeling of control over the project. Especially for large projects, it is clear that a more elaborate process than extreme programming is needed. On the other hand, many people consider the RUP process as being too large and unwieldy, favoring bureaucratic development of large stacks of paper over "real" software development, i.e., writing code.
UML plays an important role within RUP. Many of the artifacts in RUP take the form of some UML model. If we are able to use these models in an MDA fashion, they can be used to generate PSMs and code. When we configure RUP for an MDA project, we need to make sure that the models that we produce fulfill the requirements that MDA puts on them.
When we do this, the models used in RUP are no longer bureaucratic overhead; they become "real" software development. The balance between writing paper documents and developing code moves more into the direction of developing code. At the same time, the artifacts that are produced will satisfy project managers in their quest for keeping control.
The MDA is defined and trademarked by the OMG. We therefore first take a look at the OMG standards that play a role within the MDA framework.
3.1.1 OMG Languages
The OMG defines a number of modeling languages that are suitable to write either PIMs or PSMs. The most well-known language is UML. This is the most widely used modeling language.
The Object Constraint Language (OCL) is a query and expression language for UML, which is an integral part of the UML standard. The term "Constraint" in the name is an unfortunate leftover from the time when OCL was used to specify only constraints to UML models. Currently, OCL is a full query language, comparable to SQL in its expressive power.
The Action Semantics (AS) for UML defines the semantics of behavioral models in UML. Unfortunately, it defines the behavior at a low-level foundation. Therefore, it is not directly suitable for writing PIMs. It lacks the higher level of abstraction that is necessary. The AS is not a language that can be used directly by a modeler, because it does not define a concrete syntax; you cannot write down anything at all in a standardized way.
UML includes a profile mechanism that enables us to define languages derived from the UML language. The language defined in the profile is a subset of UML with additional constraints and suitable for a specific use. It uses the UML diagrammatic notation and OCL textual queries, and looks like UML. Many such profiles are standardized by the OMG; others are not standardized, but publicly available. Official OMG profiles include the CORBA Profile, the Enterprise Distributed Object Computing (EDOC) Profile, the Enterprise Application Integration (EAI) Profile, and the Scheduling, Performance, and Time Profile. More profiles are being developed and will be standardized in the coming years. Profiles are usually suitable for writing PSMs.
The UML/EJB Mapping Specification (EJB01) is an example of a profile that is standardized through the Java Community Process. Several profiles for other programming languages, like Java, C#, and so on, are defined by individual organizations and tool vendors.
Another language that is defined by the OMG is the Common Warehouse Metamodel (CWM). This is a language specifically designed to model data mining and related systems.
Chapter 11 describes the various OMG languages and their role in MDA in more detail.
3.1.2 OMG Language and Transformation Definitions
Languages used within the MDA need to have formal definitions so that tools will be able to automatically transform the models written in those languages. All of the languages standardized by the OMG have such a formal definition. The OMG has a special language called the Meta Object Facility (MOF), which is used to define all other languages. This ensures that tools are able to read and write all languages standardized by the OMG.
The transformation definitions used in the MDA framework are currently defined in a completely nonstandardized way. To allow standardization of these transformation definitions, the OMG is currently working on a standard language to write transformation definitions. This standard is called QVT, which stands for Query, Views, and Transformations. At the time of writing, the Request for Proposals (RfP) for QVT has been published. QVT is still being worked on by OMG members so we don't yet know exactly how the finished specification will look.
UML as PIM Language
As seen in section 1.3.1 the level of completeness, consistency, and unambiguity of a PIM must be very high. Otherwise, it is not possible to generate a PSM from a PIM. Let's investigate to what extent UML is a good language for building PIMs.
3.2.1 Plain UML
The strongest point in UML is the modeling of the structural aspects of a system. This is mainly done through the use of class models, which enables us to generate a PSM with all structural features in place. The example in Chapter 4 shows how this is done.
UML has some weak points that stop us from generating a complete PSM from a PIM. The weak area in UML is in the behavioral or dynamic part. UML includes many different diagrams to model dynamics, but their definition is not formal and complete enough to enable the generation of a PSM. For example, what code (for any platform) would you generate from an interaction diagram, or from a use case?
Plain UML is suitable to build PIMs in which the structural aspects are important. When a PSM is generated, a lot of the work still remains to be done on the resulting model, to define the dynamic aspects of the system.
3.2.2 Executable UML
Executable UML (Mellor 2002) is defined as plain UML combined with the dynamic behavior of the Action Semantics (AS). The concrete syntax used in Executable UML has not been standardized.
The strength of plain UML, modeling the structural aspect, is present in Executable UML as well. Executable UML to some extent mends the weak point in plain UML, the modeling of behavior. In Executable UML the state machine becomes the anchor point for defining behavior. Each state is enhanced with a procedure written in the AS.
In principle, Executable UML is capable of specifying a PIM and generating a complete PSM, but there are a few problems:
Relying on state machines to specify complete behavior is only useful in specific domains, especially embedded software development. In other, more administrative, domains the use of state machines to define all behavior is too cumbersome to be used in practice.
The AS language is not a very high-level language. In fact, the concepts used are at the same abstraction level as a PSM. Therefore, using Executable UML has little advantage over writing the dynamics of the system in the PSM directly. You will have to write the same amount of code, at the same level of abstraction.
The AS language does not have a standardized concrete syntax or notation; therefore, you cannot write anything in a standard way.
Executable UML is suitable within specialized domains, but even there the benefits might be less than you would expect, because of the low abstraction level of the action language.
3.2.3 UML–OCL Combination
Using the combination of UML with OCL to build a PIM allows for PIMs that have a high quality; that is, they are consistent, full of information, and precise. The strong structural aspect of UML can be utilized and made fully complete and consistent. Query operations can be defined completely by writing the body of the operation as an OCL expression. Business rules can be specified using OCL, including dynamic triggers.
The dynamics of the system can be expressed by pre- and post-conditions on operations. For relatively simple operations the body of the corresponding operation might be generated from the post-condition, but most of the time the body of the operation must be written in the PSM. In that case, generating code for the pre- and post-condition ensures that the code written in the PSM conforms to the required specification in the PIM.
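For example, a generator might turn OCL pre- and post-conditions into runtime checks wrapped around the hand-written operation body in the PSM. The following Python sketch is our own illustration of that idea, not the output of any actual tool; the Account example and the OCL fragment shown in the comment are assumptions:

```python
# Sketch: pre- and post-conditions from the PIM generated as runtime
# checks around a hand-written operation body. The decorator and the
# Account class are hypothetical illustrations.

def contract(pre, post):
    def wrap(method):
        def checked(self, *args):
            assert pre(self, *args), "pre-condition violated"
            old = self.balance            # snapshot for the post-condition
            result = method(self, *args)
            assert post(self, old, *args), "post-condition violated"
            return result
        return checked
    return wrap

class Account:
    def __init__(self, balance):
        self.balance = balance

    # OCL in the PIM: pre: amount > 0
    #                 post: balance = balance@pre + amount
    @contract(pre=lambda self, amount: amount > 0,
              post=lambda self, old, amount: self.balance == old + amount)
    def deposit(self, amount):
        # hand-written body in the PSM; the checks above verify that it
        # conforms to the specification in the PIM
        self.balance += amount
```

If the hand-written body drifted from the specification, the generated post-condition check would fail at runtime, which is precisely how the PIM constrains the PSM code.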
Although the dynamics of a system still cannot be fully specified in the UML–OCL combination, the combination of UML class models with OCL allows for a much more complete generation of PSMs and code than does plain UML. Using UML and OCL together is, at the time of this writing, probably the best way to develop a high-quality and high-level PIM, because it results in precise, unambiguous, and consistent models that contain much information about the system to be implemented.
3.3 Tools
Ever since MDA became a popular buzzword, vendors have claimed that their tools support MDA. Tools that were on the market for many years, even before the name MDA was invented, make these claims. Most of these claims are true, in the sense that they support some aspect of MDA. We will use the MDA framework as shown in Figure 2-7 to analyze what level of support a tool really offers.
3.3.1 Support for Transformations
Support for MDA comes in many different varieties. Simple code generation from a model has been done for more than a decade, and lies well within the boundaries of MDA. The demands that MDA places on models and transformations of models in the ideal situation, however, are very high. In this section we will focus on the support for transformations that tools can offer.
PIM to PSM Transformation Tools
This type of tool transforms a high-level PIM into one or more PSMs. Such tools are barely available at the time of writing, although some offer minimal functionality in this area.
PSM to Code Transformation Tools
The best-known support for MDA is given by tools that act as black-box PSM to code transformation tools. They have a built-in transformation definition, take one predefined type of model as source, and produce another predefined type as target. The source model is a PSM, while the target is the code model. In fact, code generation from traditional CASE tools follows this pattern.
Several tools persist the relationship between the PSM and the code, and enable you to see changes in either model reflected in the other immediately after the change. This is possible because the PSM and the code are relatively close to each other and have almost the same level of abstraction.
PIM to Code Transformation Tools
Another type of tool supports both the PIM to PSM and the PSM to code transformation. Sometimes the user will only see a direct PIM to code transformation and the PSM is left implicit. With this type of tool, the source and target language and the transformation definition are built into the tool that acts as a black box.
UML is usually used as the PIM language. Dynamic functionality that cannot be expressed in UML needs to be added manually in the generated code.
Tunable Transformation Tools
Tools should allow for some tuning or parameterization of a transformation, but access to the transformation definition to adjust it to your own requirements is usually not available. The best one can get today is a transformation definition written in an internal, tool-specific scripting language. Making changes to such a script is a time-consuming task. Because there is no standard language for writing transformation definitions yet (see QVT in section 3.1.2), transformation definitions are by definition tool-specific.
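As a hypothetical illustration of what such tuning could look like if the transformation definition were accessible, the following sketch parameterizes a simple class-to-table rule with user-settable options. The option names and the rule itself are our own invention, not any tool's actual scripting language:

```python
# Sketch of a "tunable" transformation rule: the same class-to-table
# mapping, parameterized by options instead of being hard-coded.
# Rule and option names are hypothetical.

def class_to_table(class_name, attributes, options):
    prefix = options.get("table_prefix", "")
    naming = options.get("column_case", str.lower)
    columns = [naming(attr) for attr in attributes]
    if options.get("add_surrogate_key", False):
        columns.insert(0, "id")   # prepend a surrogate key column
    return {"table": prefix + class_name.upper(), "columns": columns}

# The same rule, tuned with project-specific conventions:
table = class_to_table("Customer", ["Name", "BirthDate"],
                       {"table_prefix": "T_", "add_surrogate_key": True})
```

Being able to pass such parameters, or to replace the rule entirely, is what distinguishes a tunable transformation tool from a black box.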
Most tools only work for a predefined PIM language, which is often required to be a restricted variant of UML. Although UML diagrams are used to model a PIM, internally the tools do not use the UML language definition, but their own tool-specific definition of UML. Because this does not always follow the UML language definition, even UML experts will have to learn the tool-specific UML definition first and will have difficulty writing transformation definitions.
Transformation Definition Tools
Transformation definition tools support the creation and modification of transformation definitions. This type of tool is needed when you cannot use a transformation definition off the shelf and need to create your own. The only type of transformation definition tool that we have encountered is the tool-specific scripting language described in the previous paragraph. The heavy dependency of MDA on complex transformation definitions drives a need for a transformation definition language (QVT) and tools that are better suited to this task. More flexible tools should allow a new language definition to be plugged in and used in a transformation. Such tools are not on the market yet.
Because the tools that we have at our disposal today are not ideal, you might conclude that you cannot use MDA successfully today. The situation is far better than that. Although the full potential of MDA might not yet be achievable, the tools we have today can give us enough of the MDA benefits to be useful.
3.3.2 Categorizing Tools
Although transformation tools are at the very heart of an MDA development environment, they are not the only tools that are needed. In addition to the functionality that transformation tools provide, other functionality is relevant. For instance, one needs a tool in which models can be created and changed. Figure 3-1 shows the functionality that we need in the complete environment. We will explain each item in more detail.
Code Editor (IDE): The common functions that are provided by an Integrated Development Environment (IDE), for example, debugging, compilation, and code editing, are indispensable.
Code Files: Although we can consider code to be a model, it is usually kept in the form of text-based files. Text-based files are not in a format that other tools are able to understand. Therefore, we need the following two items:
Code File Parser: A parser that reads a text-based code file and stores the code in the model repository in a model-based form that other tools can use.
Code File Generator: A generator that reads the code in the model repository and produces a text-based code file.
Model Repository: The "database" for models, where models are stored and can be queried using XMI, JMI, or IDL (see section 11.2.1).
Model Editor (CASE tool): An editor for models, in which models can be constructed and modified.
Model Validator: Models used for generation of other models must be extremely well-defined. Validators can check models against a set of (predefined or user-defined) rules to ensure that the model is suitable for use in a transformation.
Transformation Definition Editor: An editor for transformation definitions, in which transformation definitions can be constructed and modified.
Transformation Definition Repository: The storage for transformation definitions.
Figure 3-1. Functionality in an MDA development environment
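The Code File Parser and Code File Generator items above can be sketched as a round-trip pair. The one-declaration-per-line "code" format and the dictionary-based repository form used here are toy assumptions, purely for illustration:

```python
# Sketch of the parser/generator pair: a toy parser that turns a
# text-based code file into a model-based form for the repository, and
# a generator that writes it back out. The line-oriented "language"
# parsed here is our own invention.

def parse(code_text):
    """Read 'class Name' / 'field name' lines into a model structure."""
    model = {"classes": []}
    for line in code_text.splitlines():
        words = line.split()
        if words[:1] == ["class"]:
            model["classes"].append({"name": words[1], "fields": []})
        elif words[:1] == ["field"]:
            model["classes"][-1]["fields"].append(words[1])
    return model

def generate(model):
    """Produce the text-based code file from the model again."""
    lines = []
    for cls in model["classes"]:
        lines.append("class " + cls["name"])
        lines.extend("field " + f for f in cls["fields"])
    return "\n".join(lines)

source = "class Customer\nfield name\nfield birthDate"
assert generate(parse(source)) == source  # round-trip is lossless
```

Once code exists in the repository in this model-based form, the other tools in Figure 3-1 (editors, validators, transformation tools) can operate on it without ever touching the text files directly.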
Most of today's tools combine a number of functions in a more or less open fashion. The traditional CASE tools provide a model editor and a model repository. A code generator based on a scripting language and plugged into a CASE tool provides the transformation tool and transformation definition editor. In that case, the transformation repository is simply a set of text files.
All functions may come in two forms: language specific or generic. A language-specific tool may, for example, provide a model editor for UML and a code generator from UML to C# only. A generic model editor would enable the user to edit any model, as long as a language definition is available.
Selecting Tools
If you are selecting tools to set up your MDA development environment, the above features can help you find your way among the myriad of tools available. First of all, you need to determine your own requirements. Are you content with a language-specific tool? Do you want to be able to combine different tools using standardized interfaces, or do you prefer one monolithic tool that incorporates all the functionality you need?
From the tool perspective, you can investigate which functions a tool supports and how language specific they are. You should also check whether the functions, the complete tool, or both can work on models that are interchanged using standard mechanisms (XMI, JMI, IDL). When you go through all of the potential functions, you can build a good characterization of the tool in question. No tool provides all functionality in a fully generic way; therefore, you should be prepared to choose several tools that need to be combined.
Although transformations are at the core of MDA, many tools that claim to support MDA do not perform transformations. Instead, they provide some of the other functionality that is required in an MDA development environment. For instance, a tool may only implement the model repository functionality. This is perfectly all right, because you will need to combine multiple tools anyway. The main issue is to find out what features a specific tool supports.
We have not included a tool comparison in this book because the tool market around MDA is still in flux. As the MDA further evolves, tools will start to support pieces of the MDA process. Any characterization of existing tools will be outdated by the time you are reading this. You are much better off applying the above categorization to tools that you encounter yourself. References to tools that claim MDA support can be found at the OMG website: http://www.omg.org/mda.
3.4 Development Processes
The MDA does not require a specific process to be used for software development. Processes are not standardized by the OMG, so you are free to choose your own. Of course, the central position of models and transformations in MDA should be reflected in such a process. We will take a short look at some of the more popular processes and show how they can be used for MDA development.
3.4.1 Agile Software Development
A current trend in the software development process is to minimize the amount of effort and time spent building models that serve only as documentation, even though they do capture some interesting aspects of the software to be built. The purpose is to ensure that software is delivered that works for the users. Since requirements change continuously, the software being developed must change accordingly. The ability of a software project to accommodate changes in a flexible and immediate way is the core aspect of Agile Software Development. According to Cockburn (2002), "Working software is valued over comprehensive documentation." Well, we couldn't agree more. At the Web site http://agilemanifesto.org you can find the Agile Manifesto describing the agile principles. On this website you may post a quote explaining why you like this approach. The quote from one of the authors of this book is:
As co-author of the UML standard, people usually think I love large and detailed models. The contrary is true, a model is only worth building if it directly helps to achieve the final goal: building a working system. With the emergence of MDA tools, it becomes possible to directly move from model to code. This "promotes" models from being merely documentation to becoming part of the delivered software, just like the source code.
Because changing a model means changing the software, the MDA approach helps support agile software development.
3.4.2 Extreme Programming
The XP approach is a very popular way of working, where the focus lies on writing code in small increments, such that you have a working system all the time. Each new requirement must be accompanied by an explicit test case, which is used to test the software. When adding new functionality, all previous tests are run in addition to the new tests, to ensure that existing functionality is not broken.
As we explained in section 1.1.1, the focus on code only is too limited. Code must be augmented with so-called "markers" that document the code at a higher level. In extreme programming this is often seen as overhead. When we realize that these markers may take the form of MDA models that directly transform into code, creating these markers is not overhead anymore. On the contrary, the high-level models help to develop the software faster.
This means that we can bring XP to a higher abstraction level, and we might want to talk about "Extreme Modeling."
3.4.3 Rational Unified Process (RUP)
The RUP is a process that is much more elaborate, or much heavier, than the agile or extreme processes. Project managers often like these larger and more structured processes because they give a better feeling of control over the project. Especially for large projects, it is clear that a more elaborate process than extreme programming is needed. On the other hand, many people consider RUP too large and unwieldy, favoring bureaucratic production of large stacks of paper over "real" software development, i.e., writing code.
UML plays an important role within RUP. Many of the artifacts in RUP take the form of some UML model. If we are able to use these models in an MDA fashion, they can be used to generate PSMs and code. When we configure RUP for an MDA project, we need to make sure that the models that we produce fulfill the requirements that MDA puts on them.
When we do this, the models used in RUP are no longer bureaucratic overhead; they become "real" software development. The balance between writing paper documents and developing code moves more into the direction of developing code. At the same time, the artifacts that are produced will satisfy project managers in their quest for keeping control.
The MDA Framework
2.1 What Is a Model?
The name MDA stresses the fact that models are the focal point of MDA. The models we take into account are models that are relevant to developing software. Note that this includes more than just models of software. When a piece of software is meant to support a business, the business model is relevant as well.
But what exactly do we mean when we use the word model? It is difficult to come up with a definition that is general enough to fit many different types of models, yet specific enough to help us specify automatic transformation of one model into another. In the English dictionary we can find various meanings of model:
The type of an appliance or of a commodity
The example used by an artist
A person posing for an artist
A replica of an item built on a smaller scale, i.e., a miniature
An example of a method of performing work
An ideal used as an example
The form of a piece of clothing or of a hairdo, and so on
What all of the above definitions have in common is that:
A model is always an abstraction of something that exists in reality.
A model is different from the thing it models, e.g., details are left out or its size is different.
A model can be used as an example to produce something that exists in reality.
From these observations, it is apparent that we need a word to indicate "something that exists in reality."[1] Because all of our models should be relevant in the context of software development, we use the word system. Most of the time the word system refers to a software system, but in the case of a business model, the business itself is the system.
[1] Models themselves are also "things that exist in reality;" therefore, there are models of models. For sake of simplicity, this possibility is not addressed in this chapter. For more on this subject, see Chapter 11, "OMG Standards and Additional Technologies."
Another observation we can make is that a model describes a system in such a way that it can be used to produce a similar system. The new system is not equal to the old one, because a model abstracts away from the details of the system; therefore, details in both old and new system might differ. Yet, because only the details are omitted in the model and the main characteristics remain, the newly produced system has the same main characteristics as the original, i.e., it is similar to it. The more detailed the model is, the more similar the systems it describes will be.
As we set out to find a definition to help us specify the automatic transformation of one model into another, it is clear that not all of the meanings in the dictionary are suitable for use within MDA. Obviously, we will not transform "a person posing for an artist" to another model.
A model is always written in a language. This might be UML, plain English, a programming language, or any other language we can think of. To enable automatic transformation of a model, we need to restrict the models suitable for MDA to models that are written in a well-defined language. A well-defined language has a well-defined form and meaning that can be interpreted automatically by a computer. We consider natural languages as not being well-defined, because they cannot be interpreted by computers. Therefore, they are not suitable for automatic transformations within the MDA framework. From this point onward we use the following definitions:
A model is a description of (part of) a system written in a well-defined language.
A well-defined language is a language with well-defined form (syntax), and meaning (semantics), which is suitable for automated interpretation by a computer.
Note that although many of the example models in this book are written in UML, MDA is certainly not restricted to UML. The only restriction is that the models must be written in a language that is well-defined.
Our definition of model is a very general one. Although most people have a mental picture of a model as a set of diagrams (as in UML), we do not put any restrictions on the way a model looks (the syntax), as long as it is well-defined. Our definition intentionally includes source code as a model of the software. Source code is written in a well-defined language (the programming language can be interpreted by a compiler), and it describes a system. It is, of course, a highly platform-specific model, but a model nevertheless. Figure 2-1 shows the relationship between a model, the system it describes, and the language in which it is written. We use the symbols from Figure 2-1 in the remainder of this book to distinguish between models and languages.
Figure 2-1. Models and languages
2.1.1 Relationships between Models
For a given system there can be many different models that vary in the details that are, or are not, described. It is obvious that two models of the same system have a certain relationship, and there are many types of relationships between models. For instance, one model may describe only part of the complete system, while another model describes another, possibly overlapping, part. One model may describe the system in more detail than another, or from a completely different angle.
Note that MDA focuses on one specific type of relationship between models: automatic generation of one model from another. This does not mean that the other types of relationships are less important. It only says that these relationships cannot (yet) be automated. For instance, adding attributes to a class and deciding what should be the types of these attributes is not a task that can be automated. It needs human intelligence.
2.2 Types of Models
The definition of model given in section 2.1, What Is a Model?, is a very broad one that includes many different kinds of models, so we will take a closer look at models. There are many ways to distinguish between types of models, each based on the answer to a question about the model:
In what part of the software development process is the model used? Is it an analysis or design model?
Does the model contain much detail? Is it abstract or detailed?
What is the system that the model describes? Is it a business model or software model?
What aspect of the system does the model describe? Is it a structural or dynamic model?
Is the model targeted at a specific technology? Is it platform independent or platform specific?
At which platform is the model targeted? Is it an EJB, ER, C++, XML, or other model?
What we need to establish is whether these distinctions are relevant in the context of model transformations. The answer to some of the above questions varies according to the circumstances. The distinction made is not a feature of the model itself. Whether a model is considered to be an analysis or design model depends not on the model itself, but on the interpretation of the analysis and design phases in a certain project. Whether a model is considered to be abstract or detailed depends on what is considered to be detail.
When the distinguishing feature is not a feature of the model itself, this feature is not a good indication for characterizing different types of models. So, answering the first two questions in the list above does not clearly distinguish different types of models. The answers to the other questions in the above list do indicate different types of models. We further investigate these distinctions in the following sections.
2.2.1 Business and Software Models
The system described by a business model is a business or a company (or part thereof). Languages that are used for business modeling contain a vocabulary that allows the modeler to specify business processes, stakeholders, departments, dependencies between processes, and so on.
A business model does not necessarily say anything about the software systems used within a company; therefore, it is also called a Computation Independent Model (CIM). Whenever a part of the business is supported by a software system, a specific software model for that system is written. This software model is a description of the software system. Business models and software models describe quite different categories of systems in the real world.
Still, the requirements of the software system are derived from the (part of the) business model that the software needs to support. For most business models there are multiple software systems with different software models. Each software system is used to support a different piece of one business model. So there is a relationship between a business model and the software models that describe the software supporting the business, as shown in Figure 2-2.
Figure 2-2. Business and software models
We have seen that the type of the system described by a model is relevant for model transformations. A CIM is a software independent model used to describe a business system. Certain parts of a CIM may be supported by software systems, but the CIM itself remains software independent. Automatic derivation of PIMs from a CIM is not possible, because the choice of which pieces of a CIM are to be supported by a software system is always made by humans. For each system supporting part of a CIM, a PIM needs to be developed first.
2.2.2 Structural and Dynamic Models
Many people talk about structural models versus dynamic models. In UML, for example, the class diagram is called a structural model and the state diagram a dynamic model, while in reality the class diagram and state diagram are so dependent on each other that they must be regarded as part of the same model.
The fact that we start software modeling by drawing classes in a class diagram doesn't mean we are developing a class model. We are developing a software model by defining structural aspects through a structural view. If we start our development by drawing a dynamic diagram, like a state or sequence diagram, we are developing a software model by defining dynamic aspects through a dynamic view. Later, when we add a state diagram to our class diagram, or a class diagram to our state diagram, we are merely adding dynamic or structural aspects, through the corresponding view, to the same model. Therefore, the common terminology is a bit sloppy. The class and state diagrams could better be called structural and dynamic views. Figure 2-3 shows how different diagrams in UML are all views on the same model. They are all written in the same language: UML.
Figure 2-3. Different views of one system in one model
In UML the relationship between dynamic and static views is direct, because they show different visualizations of the same thing in the same model. For example, a class in a UML model is shown as a rectangle with the class name in a class view, while it is shown as the type of an instance in a sequence diagram. The same class can be the anchoring point for a complete state diagram. All the diagrams are views on the same class.
If a system has both structural and dynamic aspects and the language used is able to express both structural and dynamic aspects, the model of the system contains both aspects. Therefore, a UML model of a system includes both the structural and the dynamic aspects, shown in different diagrams.
If structural and dynamic aspects cannot be described in one model because the language used is not able to express certain aspects, there are indeed two models. Note that both models are related; they describe the same system. In such a case, the type of the model is more clearly described by naming the language in which it is written than by the connotation "structural" or "dynamic," e.g., an ER model or a Petri net model. Figure 2-4 shows a situation where two different models describing the same system are written in two different languages.
Figure 2-4. Different models of one system written in different languages
We can conclude with the observation that the aspect that is described in a diagram or model (i.e., structural, dynamic) is not relevant for the type of a model. The essential characteristic of a model is the language in which the model is written. Some languages are more expressive than others and more suitable for representing certain aspects of a system.
2.2.3 Platform Independent and Platform Specific Models
The MDA standard defines the terms PIM and PSM. The OMG documentation describes this distinction as if this is a clear black-and-white issue. A model is always either a PIM or a PSM. In reality it is difficult to draw the line between platform independent and platform specific. Is a model written in UML specific for the Java platform because one of the class diagrams defines one or more interfaces? Is a model that describes the interactions of components specific for a certain platform only because some of the components are "legacy" components, which may be written in, let's say, COBOL? It is hard to tell.
The only thing one can say about different models is that one model is more (or less) platform specific than another. Within an MDA transformation, we transform a more platform independent model to a model that is more platform specific. Thus, the terms PIM and PSM are relative terms.
2.2.4 The Target Platforms of a Model
The last issue we need to analyze is whether the target platform is a relevant distinction between models in the context of model transformations. Is a design model in UML targeted at Smalltalk distinctively different from a design model in UML targeted at EJB? Yes, most people would say it is. But why? What is the difference?
The difference lies in the use of constructs (in UML) that can be easily mapped to one specific platform, but not to another. A model targeted at EJB has a different structure than a model targeted at Smalltalk. To generate these models from the same PIM we need different transformation rules.
Furthermore, the extent to which a model is platform specific is very important. Using UML profiles (see section 11.8, UML Profiles) a UML model can be made very specific for a certain platform. Such a model should be used as a PSM, not as a PIM. The transformation rules that take such a model as source are quite different from the rules that take a general UML/PIM model as the source.
We conclude that it is very important to know the target platform of a model and the degree to which the model is platform specific. For instance, a relational model targeted at SQL might be specific for a certain database vendor.
2.3 What Is a Transformation?
The MDA process, as described in section 1.2.1, shows the role that the various models, PIM, PSM, and code play within the MDA framework. A transformation tool takes a PIM and transforms it into a PSM. A second (or the same) transformation tool transforms the PSM to code. These transformations are essential in the MDA development process. In Figure 1-3 we have shown the transformation tool as a black box. It takes one model as input and produces a second model as its output.
When we open up the transformation tool and take a look inside, we can see what elements are involved in performing the transformation. Somewhere inside the tool there is a definition that describes how a model should be transformed. We call this definition the transformation definition. Figure 2-5 shows the structure of the opened up transformation tool.
Figure 2-5. Transformation definitions inside transformation tools
Note that there is a distinction between the transformation itself, which is the process of generating a new model from another model, and the transformation definition. The transformation tool uses the same transformation definition for each transformation of any input model.
In order for the transformation definition to be applied over and over again, independent of the source model it is applied to, the transformation definition relates constructs from the source language to constructs in the target language. We can, for example, define a transformation definition from UML to C#, which describes which C# code should be generated for a (or any!) UML model. This situation is depicted in Figure 2-6.
Figure 2-6. Transformation definitions are defined between languages
In general, we can say that a transformation definition consists of a collection of transformation rules, which are unambiguous specifications of the way that (a part of) one model can be used to create (a part of) another model. Based on these observations, we can now define transformation, transformation rule, and transformation definition.
A transformation is the automatic generation of a target model from a source model, according to a transformation definition.
A transformation definition is a set of transformation rules that together describe how a model in the source language can be transformed into a model in the target language.
A transformation rule is a description of how one or more constructs in the source language can be transformed into one or more constructs in the target language.
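The three definitions above can be illustrated with a small sketch. Here a transformation definition is simply a list of rules, each mapping a construct of a toy UML-like source model to a line of C#-like target code; the model format and the rules are our own illustration, not a real MDA tool:

```python
# Sketch: a transformation definition as a set of transformation rules.
# Each rule maps one source-language construct (a toy UML-like element)
# to a target-language construct (a line of C#-like code).

def class_rule(element):
    if element["kind"] == "class":
        return "public class %s" % element["name"]

def attribute_rule(element):
    if element["kind"] == "attribute":
        return "    public %s %s;" % (element["type"], element["name"])

# The transformation definition: the collection of rules.
transformation_definition = [class_rule, attribute_rule]

def transform(source_model, definition):
    """The transformation: apply every applicable rule to every element."""
    target = []
    for element in source_model:
        for rule in definition:
            line = rule(element)
            if line is not None:
                target.append(line)
    return target

pim = [{"kind": "class", "name": "Customer"},
       {"kind": "attribute", "name": "name", "type": "string"}]
code = transform(pim, transformation_definition)
```

Note that the definition is written once, against the languages, while the transformation runs the definition against any particular source model.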
To be useful at all, a transformation must have specific characteristics. The most important characteristic is that a transformation should preserve meaning between the source and the target model. Of course, the meaning of a model can only be preserved insofar as it can be expressed in both the source and the target model. For example, specification of behavior may be part of a UML model, but not of an Entity-Relationship (ER) model. Even so, the UML model may be transformed into an ER model, preserving the structural characteristics of the system only.
2.3.1 Transformations between Identical Languages
The definition above does not put any limitations on the source and target languages. This means that the source and target model may be written in either the same or in a different language. We can define transformations from a UML model to a UML model or from Java to Java.
There are several situations where this may occur. The technique of refactoring a model or a piece of code (remember code is also a model) can be described by a transformation definition between models in the same language. Another well-known example of a transformation definition is the normalization of an ER model. There are well-defined normalization rules that may be applied over and over again on different ER models with a determined outcome. For instance, the normalization rule that produces a model in the second normal form is:
Shift all attributes in an entity that are not dependent on the complete key of that entity to a separate entity, holding a relationship between the original entity and the newly created one.
This rule may be applied to any ER model. It relates one entity and its attributes in the source model to two entities, their attributes, and a relationship in the target model, where both source and target model are written in the ER-modeling language.
In the case of transformations between UML models, we need to be very careful. Very often the purpose of the source and target models although both in UML, is completely different. In the examples in section 2.5, later in this chapter, we define a transformation from a PIM in UML to a PSM in UML. The trick is that the PSM is restricted to use only constructs from UML that can be mapped one-to-one onto constructs in the Java language. Conceptually, the target language is not plain UML, but a specific subset of UML, which we could call UML-for-Java. This kind of use of UML occurs often, and it is hard to recognize.
The UML-for-Java subset can be formalized by defining a UML profile for Java. Any UML profile in effect defines a completely new language which happens to be derived from the UML language. In Chapter 11 we elaborate on the role of UML profiles in MDA.
The Basic MDA Framework
In the previous sections, we have seen the major elements that participate in the MDA framework: models, PIMs, PSMs, languages, transformations, transformation definitions, and tools that perform transformations. All of these elements fit together in the basic MDA framework, as depicted in Figure 2-7. Although most of the terms have been defined in the previous sections, we summarize of the elements and their role below:
A model is a description of a system.
A PIM is a Platform Independent Model, which describes a system without any knowledge of the final implementation platform.
A PSM is a Platform Specific Model, which describes a system with full knowledge of the final implementation platform.
A model is written in a well-defined language.
A transformation definition describes how a model in a source language can be transformed into a model in a target language.
A transformation tool performs a transformation for a specific source model according to a transformation definition.
Figure 2-7. Overview of the basic MDA framework
From the developer's point of view, the PSM and PIM are the most important elements. A developer puts his focus on developing a PIM, describing the software system at a high level of abstraction. In the next stage, he chooses one or more tools that are able to perform the transformation on the PIM that has been developed according to a certain transformation definition. This results in a PSM, which can then be transformed into code.
Note that the figure only shows one PSM, but that from one PIM often multiple PSMs and potential bridges between them are generated. The figure only shows one transformation between a PIM and a PSM, but another transformation to code is also necessary.
In sections 8.3.1 and 9.4 we complete the basic MDA framework with additional elements at the metalevel. These metalevels are explained in Chapter 8 and Chapter 9. Until Chapter 7, the framework as described in Figure 2-7 is sufficient.
What Is a Model?
The name MDA stresses the fact that models are the focal point of MDA. The models we take into account are models that are relevant to developing software. Note that this includes more than just models of software. When a piece of software is meant to support a business, the business model is relevant as well.
But what exactly do we mean when we use the word model? It is difficult to come up with a definition that is general enough to fit many different types of models, yet specific enough to help us specify the automatic transformation of one model into another. In the English dictionary we can find various meanings of model:
The type of an appliance or of a commodity
The example used by an artist
A person posing for an artist
A replica of an item built on a smaller scale, i.e., a miniature
An example of a method of performing work
An ideal used as an example
The form of a piece of clothing or of a hairdo, and so on
What all of the above definitions have in common is that:
A model is always an abstraction of something that exists in reality.
A model is different from the thing it models, e.g., details are left out or its size is different.
A model can be used as an example to produce something that exists in reality.
From these observations, it is apparent that we need a word to indicate "something that exists in reality."[1] Because all of our models should be relevant in the context of software development, we use the word system. Most of the time the word system refers to a software system, but in the case of a business model, the business itself is the system.
[1] Models themselves are also "things that exist in reality"; therefore, there are models of models. For the sake of simplicity, this possibility is not addressed in this chapter. For more on this subject, see Chapter 11, "OMG Standards and Additional Technologies."
Another observation we can make is that a model describes a system in such a way that it can be used to produce a similar system. The new system is not equal to the old one, because a model abstracts away from the details of the system; therefore, details in both old and new system might differ. Yet, because only the details are omitted in the model and the main characteristics remain, the newly produced system has the same main characteristics as the original, i.e., it is similar to it. The more detailed the model is, the more similar the systems it describes will be.
As we set out to find a definition to help us specify the automatic transformation of one model into another, it is clear that not all of the meanings in the dictionary are suitable for use within MDA. Obviously, we will not transform "a person posing for an artist" to another model.
A model is always written in a language. This might be UML, plain English, a programming language, or any other language we can think of. To enable automatic transformation of a model, we need to restrict the models suitable for MDA to models that are written in a well-defined language. A well-defined language has a well-defined form and meaning that can be interpreted automatically by a computer. We consider natural languages as not being well-defined, because they cannot be interpreted by computers. Therefore, they are not suitable for automatic transformations within the MDA framework. From this point onward we use the following definitions:
A model is a description of (part of) a system written in a well-defined language.
A well-defined language is a language with well-defined form (syntax), and meaning (semantics), which is suitable for automated interpretation by a computer.
Note that although many of the example models in this book are written in UML, MDA is certainly not restricted to UML. The only restriction is that the models must be written in a language that is well-defined.
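To make the notion concrete, a well-defined language can be sketched as a handful of data structures whose form a program can check mechanically. The sketch below is purely illustrative (the class names and the `is_well_formed` check are our own invention, not part of any MDA standard):

```python
from dataclasses import dataclass, field

# A minimal, well-defined modeling language: its "syntax" is the set of
# structures a computer can check; its "semantics" is whatever a tool
# does with them (e.g., transform them into another model).

@dataclass
class Attribute:
    name: str
    type: str          # e.g. "String", "Integer", "Date"

@dataclass
class ModelClass:
    name: str
    attributes: list = field(default_factory=list)

@dataclass
class Model:
    classes: list = field(default_factory=list)

def is_well_formed(model: Model) -> bool:
    """A computer-checkable syntax rule: every class and attribute is named."""
    return all(
        cls.name and all(attr.name and attr.type for attr in cls.attributes)
        for cls in model.classes
    )

pim = Model(classes=[ModelClass("Customer", [Attribute("name", "String")])])
print(is_well_formed(pim))
```

Because the form is machine-checkable, a tool can interpret such a model automatically, which is exactly what natural-language descriptions do not allow.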
Our definition of model is a very general one. Although most people have a mental picture of a model as being a set of diagrams (as in UML), we do not put any restrictions on the way a model looks (the syntax) as long as it is well-defined. Our definition intentionally includes source code as a model of the software. Source code is written in a well-defined language, the programming language, which can be understood by a compiler, and it describes a system. It is, of course, a highly platform-specific model, but a model nevertheless. Figure 2-1 shows the relationship between a model, the system it describes, and the language in which it is written. We use the symbols from Figure 2-1 in the remainder of this book to distinguish between models and languages.
Figure 2-1. Models and languages
2.1.1 Relationships between Models
For a given system there can be many different models that vary in the details they do or do not describe. It is obvious that two models of the same system have a certain relationship. There are many types of relationships between models. For instance, one model may describe only part of the complete system, while another model describes another, possibly overlapping, part. One model may describe the system with more detail than another. One model may describe the system from a completely different angle than another.
Note that MDA focuses on one specific type of relationship between models: automatic generation of one model from another. This does not mean that the other types of relationships are less important. It only says that these relationships cannot (yet) be automated. For instance, adding attributes to a class and deciding what should be the types of these attributes is not a task that can be automated. It needs human intelligence.
Types of Models
The definition of model given in section 2.1, What Is a Model?, is a very broad one that includes many different kinds of models, so we will take a closer look at them. There are many ways to distinguish between types of models, each based on the answer to a question about the model:
In what part of the software development process is the model used? Is it an analysis or design model?
Does the model contain much detail? Is it abstract or detailed?
What is the system that the model describes? Is it a business model or software model?
What aspect of the system does the model describe? Is it a structural or dynamic model?
Is the model targeted at a specific technology? Is it platform independent or platform specific?
At which platform is the model targeted? Is it an EJB, ER, C++, XML, or other model?
What we need to establish is whether these distinctions are relevant in the context of model transformations. The answer to some of the above questions varies according to the circumstances. The distinction made is not a feature of the model itself. Whether a model is considered to be an analysis or design model depends not on the model itself, but on the interpretation of the analysis and design phases in a certain project. Whether a model is considered to be abstract or detailed depends on what is considered to be detail.
When the distinguishing feature is not a feature of the model itself, this feature is not a good indication for characterizing different types of models. So, answering the first two questions in the list above does not clearly distinguish different types of models. The answers to the other questions in the above list do indicate different types of models. We further investigate these distinctions in the following sections.
2.2.1 Business and Software Models
The system described by a business model is a business or a company (or part thereof). Languages that are used for business modeling contain a vocabulary that allows the modeler to specify business processes, stakeholders, departments, dependencies between processes, and so on.
A business model does not necessarily say anything about the software systems used within a company; therefore, it is also called a Computation Independent Model (CIM). Whenever a part of the business is supported by a software system, a specific software model for that system is written. This software model is a description of the software system. Business models and software models thus describe quite different categories of systems in the real world.
Still, the requirements of the software system are derived from the (part of the) business model that the software needs to support. For most business models there are multiple software systems with different software models. Each software system is used to support a different piece of one business model. So there is a relationship between a business model and the software models that describe the software supporting the business, as shown in Figure 2-2.
Figure 2-2. Business and software models
We have seen that the type of the system described by a model is relevant for model transformations. A CIM is a software independent model used to describe a business system. Certain parts of a CIM may be supported by software systems, but the CIM itself remains software independent. Automatic derivation of PIMs from a CIM is not possible, because the choice of which pieces of a CIM are to be supported by a software system is always made by humans. For each system supporting part of a CIM, a PIM needs to be developed first.
2.2.2 Structural and Dynamic Models
Many people talk about structural models versus dynamic models. In UML, for example, the class diagram is called a structural model and the state diagram a dynamic model, while in reality the class diagram and state diagram are so dependent on each other that they must be regarded as part of the same model.
The fact that we start software modeling by drawing classes in a class diagram doesn't mean we are developing a class model. We are developing a software model by defining static aspects through a static view. If we start our development by drawing a dynamic diagram, like the state or sequence diagram, we are developing a software model by defining dynamic aspects through a dynamic view. Later, when we add a state diagram to our class diagram, or a class diagram to our state diagram, we are merely adding dynamic aspects through a dynamic view to the same model, or vice versa. Therefore, the common terminology is a bit sloppy. The class and state diagrams could better be called structural and dynamic views. Figure 2-3 shows how different diagrams in UML are all views on the same model. They are all written in the same language: UML.
Figure 2-3. Different views of one system in one model
In UML the relationship between dynamic and static views is direct, because they show different visualizations of the same thing in the same model. For example, a class in a UML model is shown as a rectangle with the class name in a class view, while it is shown as the type of an instance in a sequence diagram. The same class can be the anchoring point for a complete state diagram. All the diagrams are views on the same class.
If a system has both structural and dynamic aspects and the language used is able to express both structural and dynamic aspects, the model of the system contains both aspects. Therefore, a UML model of a system includes both the structural and the dynamic aspects, shown in different diagrams.
If structural and dynamic aspects cannot be described in one model because the language used is not able to express certain aspects, there are indeed two models. Note that both models are related; they describe the same system. The type of the model is in such a case more clearly described by naming the language in which it is written than by the use of the connotation "structural" or "dynamic," e.g., an ER model or a Petri-net model. Figure 2-4 shows a situation where two different models describing the same system are written in two different languages.
Figure 2-4. Different models of one system written in different languages
We can conclude with the observation that the aspect that is described in a diagram or model (i.e., structural, dynamic) is not relevant for the type of a model. The essential characteristic of a model is the language in which the model is written. Some languages are more expressive than others and more suitable for representing certain aspects of a system.
2.2.3 Platform Independent and Platform Specific Models
The MDA standard defines the terms PIM and PSM. The OMG documentation describes this distinction as if it were a clear black-and-white issue: a model is always either a PIM or a PSM. In reality it is difficult to draw the line between platform independent and platform specific. Is a model written in UML specific to the Java platform because one of the class diagrams defines one or more interfaces? Is a model that describes the interactions of components specific to a certain platform only because some of the components are "legacy" components, which may be written in, let's say, COBOL? It is hard to tell.
The only thing one can say about different models is that one model is more (or less) platform specific than another. Within an MDA transformation, we transform a more platform independent model to a model that is more platform specific. Thus, the terms PIM and PSM are relative terms.
2.2.4 The Target Platforms of a Model
The last issue we need to analyze is whether the target platform is a relevant distinction between models in the context of model transformations. Is a design model in UML targeted at Smalltalk distinctively different from a design model in UML targeted at EJB? Yes, most people would say it is. But why? What is the difference?
The difference lies in the use of constructs (in UML) that can be easily mapped to one specific platform, but not to another. A model targeted at EJB has a different structure than a model targeted at Smalltalk. To generate these models from the same PIM we need different transformation rules.
Furthermore, the extent to which a model is platform specific is very important. Using UML profiles (see section 11.8, UML Profiles) a UML model can be made very specific for a certain platform. Such a model should be used as a PSM, not as a PIM. The transformation rules that take such a model as source are quite different from the rules that take a general UML/PIM model as the source.
We conclude that it is very important to know the target platform of a model and the degree to which the model is platform specific. For instance, a relational model targeted at SQL might be specific for a certain database vendor.
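As an illustration of how platform specificity enters a transformation definition, the data-type mapping rules used for the PIM-to-relational transformation (a UML string maps to VARCHAR(40), an integer to INTEGER, a date to DATE) can be sketched as a simple lookup. The function name and error handling below are our own; the lengths chosen for the SQL types are arbitrary, and a vendor-specific variant would simply swap in a different table:

```python
# PIM-to-relational data-type rules: each UML data type maps to a SQL type.
# The VARCHAR length is an arbitrary choice, as in the running example.
UML_TO_SQL = {
    "String": "VARCHAR(40)",
    "Integer": "INTEGER",
    "Date": "DATE",
}

def map_type(uml_type: str) -> str:
    """Apply the data-type mapping rule, failing loudly for unmapped types."""
    try:
        return UML_TO_SQL[uml_type]
    except KeyError:
        raise ValueError(f"no relational mapping defined for UML type {uml_type!r}")

print(map_type("String"))   # VARCHAR(40)
```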
What Is a Transformation?
The MDA process, as described in section 1.2.1, shows the role that the various models, PIM, PSM, and code play within the MDA framework. A transformation tool takes a PIM and transforms it into a PSM. A second (or the same) transformation tool transforms the PSM to code. These transformations are essential in the MDA development process. In Figure 1-3 we have shown the transformation tool as a black box. It takes one model as input and produces a second model as its output.
When we open up the transformation tool and take a look inside, we can see what elements are involved in performing the transformation. Somewhere inside the tool there is a definition that describes how a model should be transformed. We call this definition the transformation definition. Figure 2-5 shows the structure of the opened up transformation tool.
Figure 2-5. Transformation definitions inside transformation tools
Note that there is a distinction between the transformation itself, which is the process of generating a new model from another model, and the transformation definition. The transformation tool uses the same transformation definition for each transformation of any input model.
In order for the transformation definition to be applied over and over again, independent of the source model it is applied to, it relates constructs from the source language to constructs in the target language. We can, for example, define a transformation definition from UML to C#, which describes which C# code should be generated for a given (or indeed any!) UML model. This situation is depicted in Figure 2-6.
Figure 2-6. Transformation definitions are defined between languages
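A single rule of such a hypothetical UML-to-C# transformation definition might look like the sketch below. All names are illustrative and not taken from any real tool; the point is that the rule is written against the construct "UML class" and therefore applies to any UML model:

```python
def class_to_csharp(name: str, attributes: dict) -> str:
    """Transformation rule sketch: a UML class maps to a C# class,
    and each UML attribute maps to a public property of a corresponding C# type."""
    uml_to_cs = {"String": "string", "Integer": "int", "Boolean": "bool"}
    props = "\n".join(
        f"    public {uml_to_cs[t]} {a} {{ get; set; }}"
        for a, t in attributes.items()
    )
    return f"public class {name}\n{{\n{props}\n}}"

print(class_to_csharp("Customer", {"Name": "String", "Age": "Integer"}))
```

Because the rule mentions only language constructs, never a particular model, the same definition can be reused for every transformation the tool performs.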
In general, we can say that a transformation definition consists of a collection of transformation rules, which are unambiguous specifications of the way that (a part of) one model can be used to create (a part of) another model. Based on these observations, we can now define transformation, transformation rule, and transformation definition.
A transformation is the automatic generation of a target model from a source model, according to a transformation definition.
A transformation definition is a set of transformation rules that together describe how a model in the source language can be transformed into a model in the target language.
A transformation rule is a description of how one or more constructs in the source language can be transformed into one or more constructs in the target language.
To be useful at all, a transformation must have specific characteristics. The most important characteristic is that a transformation should preserve meaning between the source and the target model. Of course, the meaning of a model can only be preserved insofar as it can be expressed in both the source and the target model. For example, specification of behavior may be part of a UML model, but not of an Entity-Relationship (ER) model. Even so, the UML model may be transformed into an ER model, preserving the structural characteristics of the system only.
2.3.1 Transformations between Identical Languages
The definition above does not put any limitations on the source and target languages. This means that the source and target model may be written in either the same or in a different language. We can define transformations from a UML model to a UML model or from Java to Java.
There are several situations where this may occur. The technique of refactoring a model or a piece of code (remember code is also a model) can be described by a transformation definition between models in the same language. Another well-known example of a transformation definition is the normalization of an ER model. There are well-defined normalization rules that may be applied over and over again on different ER models with a determined outcome. For instance, the normalization rule that produces a model in the second normal form is:
Shift all attributes in an entity that are not dependent on the complete key of that entity to a separate entity, holding a relationship between the original entity and the newly created one.
This rule may be applied to any ER model. It relates one entity and its attributes in the source model to two entities, their attributes, and a relationship in the target model, where both source and target model are written in the ER-modeling language.
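The rule is mechanical enough to sketch in code. In this toy representation (the entity encoding and all names are our own), each attribute records which part of the key it depends on, and attributes that do not depend on the complete key are shifted to a newly created entity:

```python
def to_second_normal_form(entity):
    """Apply the 2NF rule: move attributes that depend on only part of the
    key into a separate entity, related to the original one via that partial key."""
    key = set(entity["key"])
    kept, shifted = {}, {}
    for attr, depends_on in entity["attributes"].items():
        (kept if set(depends_on) == key else shifted)[attr] = depends_on
    if not shifted:
        return [entity]                       # already in second normal form
    partial_key = sorted({k for deps in shifted.values() for k in deps})
    new_entity = {
        "name": entity["name"] + "Detail",    # the name choice is arbitrary
        "key": partial_key,
        "attributes": shifted,
    }
    original = {**entity, "attributes": kept}
    # The shared partial key carries the relationship between the two entities.
    return [original, new_entity]

order_line = {
    "name": "OrderLine",
    "key": ["orderId", "productId"],
    "attributes": {
        "quantity": ["orderId", "productId"],   # depends on the complete key
        "productName": ["productId"],           # depends on part of the key
    },
}
for e in to_second_normal_form(order_line):
    print(e["name"], e["key"], list(e["attributes"]))
```

Run against any entity encoded this way, the function produces a determined outcome, which is exactly what makes the normalization rule a transformation rule in the sense defined above.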
In the case of transformations between UML models, we need to be very careful. Very often the purpose of the source and target models, although both are in UML, is completely different. In the examples in section 2.5, later in this chapter, we define a transformation from a PIM in UML to a PSM in UML. The trick is that the PSM is restricted to use only constructs from UML that can be mapped one-to-one onto constructs in the Java language. Conceptually, the target language is not plain UML, but a specific subset of UML, which we could call UML-for-Java. This kind of use of UML occurs often, and it is hard to recognize.
The UML-for-Java subset can be formalized by defining a UML profile for Java. Any UML profile in effect defines a completely new language which happens to be derived from the UML language. In Chapter 11 we elaborate on the role of UML profiles in MDA.
The Basic MDA Framework
In the previous sections, we have seen the major elements that participate in the MDA framework: models, PIMs, PSMs, languages, transformations, transformation definitions, and tools that perform transformations. All of these elements fit together in the basic MDA framework, as depicted in Figure 2-7. Although most of the terms have been defined in the previous sections, we summarize the elements and their roles below:
A model is a description of a system.
A PIM is a Platform Independent Model, which describes a system without any knowledge of the final implementation platform.
A PSM is a Platform Specific Model, which describes a system with full knowledge of the final implementation platform.
A model is written in a well-defined language.
A transformation definition describes how a model in a source language can be transformed into a model in a target language.
A transformation tool performs a transformation for a specific source model according to a transformation definition.
Figure 2-7. Overview of the basic MDA framework
From the developer's point of view, the PSM and PIM are the most important elements. A developer puts his focus on developing a PIM, describing the software system at a high level of abstraction. In the next stage, he chooses one or more tools that are able to perform the transformation on the PIM that has been developed according to a certain transformation definition. This results in a PSM, which can then be transformed into code.
Note that the figure shows only one PSM, but from one PIM multiple PSMs, and potentially bridges between them, are often generated. Likewise, the figure shows only one transformation, from PIM to PSM, but another transformation from the PSM to code is also necessary.
In sections 8.3.1 and 9.4 we complete the basic MDA framework with additional elements at the metalevel. These metalevels are explained in Chapter 8 and Chapter 9. Until Chapter 7, the framework as described in Figure 2-7 is sufficient.
The MDA Development Process
Traditional Software Development
Software development is often compared with hardware development in terms of maturity. While in hardware development there has been much progress, e.g., processor speed has grown exponentially in twenty years, the progress made in software development seems to be minimal. To some extent this is a matter of appearances. The progress made in software development cannot be measured in terms of development speed or costs.
Progress in software development is evident from the fact that it is feasible to build much more complex and larger systems. Just think how quickly and efficiently we would be able to build a monolithic mainframe application that has no graphical user interface and no connections to other systems. We never do this anymore, and that is why we do not have solid figures to support the idea that progress has been made.
Still, software development is an area in which we are struggling with a number of major problems. Writing software is labor intensive. With each new technology, much work needs to be done again and again. Systems are never built using only one technology and systems always need to communicate with other systems. There is also the problem of continuously changing requirements.
To show how MDA addresses these problems, we will analyze some of the most important problems with software development and discover the cause of these problems.
1.1.1 The Productivity Problem
The software development process as we know it today is often driven by low-level design and coding. A typical process, as illustrated in Figure 1-1, includes a number of phases:
Conceptualization and requirements gathering
Analysis and functional description
Design
Coding
Testing
Deployment
Figure 1-1. Traditional software development life cycle
Whether we use an incremental and iterative version of this process, or the traditional waterfall process, documents and diagrams are produced during phases 1 through 3. These include requirements descriptions in text and pictures, and often many Unified Modeling Language (UML) diagrams like use cases, class diagrams, interaction diagrams, activity diagrams, and so on. The stack of paper produced is sometimes impressive. However, most of the artifacts from these phases are just paper and nothing more.
The documents and corresponding diagrams created in the first three phases rapidly lose their value as soon as the coding starts. The connection between the diagrams and the code fades away as the coding phase progresses. Instead of being an exact specification of the code, the diagrams usually become more or less unrelated pictures.
When a system is changed over time, the distance between the code and the text and diagrams produced in the first three phases becomes larger. Changes are often done at the code level only, because the time to update the diagrams and other high-level documents is not available. Also, the added value of updated diagrams and documents is questionable, because any new change starts in the code anyway. So why do we use so much precious time building high-level specifications?
The idea of Extreme Programming (XP) (Beck 2000) has rapidly become popular. One of the reasons for this is that it acknowledges the fact that the code is the driving force of software development. The only phases in the development process that are really productive are coding and testing.
As Alistair Cockburn states in Agile Software Development (Cockburn 2002), the XP approach solves only part of the problem. As long as the same team works on the software, there is enough high-level knowledge in their heads to understand the system. During initial development this is often the case. The problems start when the team is dismantled, which usually happens after delivery of the first release of the software. Other people need to maintain (fix bugs, enhance functionality, and so on) the software. Having just code and tests makes maintenance of a software system very difficult. Given five hundred thousand lines of code (or even much more), where do you start trying to understand how the system works?
Alistair Cockburn talks about "markers" that need to be left behind for new people who will work on the software. These markers often take the form of text and higher-level diagrams. Without them you are literally lost. It is like being dropped in a foreign city without a map or road signs that show the directions.
So, either we spend our time in the first phases of software development building high-level documentation and diagrams, or we spend our time in the maintenance phase finding out what the software is actually doing. Either way, we are not directly productive in the sense of producing code. Developers often consider these tasks to be overhead: writing code is being productive, writing models or documentation is not. Still, in a mature software project these tasks need to be done.
1.1.2 The Portability Problem
The software industry has a special characteristic that makes it stand apart from most other industries. Each year, and sometimes even faster, new technologies are being invented and becoming popular (for example Java, Linux, XML, HTML, SOAP, UML, J2EE, .NET, JSP, ASP, Flash, Web Services, and so on). Many companies need to follow these new technologies for good reasons:
The technology is demanded by the customers (e.g., Web interfaces).
It solves some real problems (e.g., XML for interchange or Java for portability).
Tool vendors stop supporting old technologies and focus on new ones (e.g., UML support replaces OMT).
The new technologies offer tangible benefits for companies and many of them cannot afford to lag behind. Therefore, people have to jump on these new technologies quite fast. As a consequence, the investments in previous technologies lose value and they may even become worthless.
The situation is even more complex because the technologies themselves change as well. They come in different versions, without a guarantee that they are backwards compatible. Tool vendors usually only support the two or three most recent versions.
As a consequence, existing software is either ported to the new technology or to a newer version of an existing technology, or it remains unchanged on the older technology, in which case the existing, now legacy, software needs to interoperate with new systems that will be built using the new technology.
1.1.3 The Interoperability Problem
Software systems rarely live in isolation. Most systems need to communicate with other, often already existing, systems. As a typical example we have seen that over the past years many companies have been building new Web-based systems. The new end-user application runs in a Web browser (using various technologies like HTML, ASP, JSP, and so on) and it needs to get its information from existing back-end systems.
Even when systems are completely built from scratch, they often span multiple technologies, sometimes both old and new. For example, when a system uses Enterprise Java Beans (EJB), it also needs to use relational databases as a storage mechanism.
Over the years we have learned not to build huge monolithic systems. Instead we try to build components that do the same job by interacting with each other. This makes it easier (or at all possible) to make changes to a system. The different components are all built using the best technology for the job, but need to interact with each other. This has created a need for interoperability.
1.1.4 The Maintenance and Documentation Problem
In the previous sections, we touched upon the problem of maintenance. Documentation has always been a weak link in the software development process. It is often done as an afterthought. Most developers feel their main task is to produce code. Writing documentation during development costs time and slows down the process; it does not support the developer's main task, but the task of those who come later. So, writing documentation feels like doing something for the sake of posterity, not for your own sake. There is no incentive to write documentation other than your manager, who tells you that you must.
This is one of the main reasons why documentation is often not of very good quality. The only people who can check its quality are fellow developers, who hate the job of writing documentation just as much. This is also the reason that documentation is often not kept up to date: with every change in the code, the documentation needs to be changed as well, by hand!
The developers are wrong, of course. Their task is to develop systems that can be changed and maintained afterwards. Despite the feelings of many developers, writing documentation is one of their essential tasks.
A solution to this problem at the code level is the facility to generate the documentation directly from the source code, ensuring that it is always up to date. The documentation is in effect part of the code and not a separate entity. This is supported in several programming languages, like Eiffel and Java. This solution, however, only solves the low-level documentation problem. The higher-level documentation (text and diagrams) still needs to be maintained by hand. Given the complexity of the systems that are built, documentation at a higher level of abstraction is an absolute must.
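In Java, this facility takes the form of doc comments that the standard javadoc tool extracts into browsable HTML. A minimal sketch, using a hypothetical Account class that is not from this book's running example:

```java
/**
 * A bank account. Running the standard {@code javadoc} tool over this
 * source file generates HTML documentation from these comments, so the
 * low-level documentation lives inside the code it describes.
 */
public class Account {

    private int balance = 0;

    /**
     * Deposits the given amount into this account.
     *
     * @param amount the amount to deposit; must be positive
     * @throws IllegalArgumentException if {@code amount} is not positive
     */
    public void deposit(int amount) {
        if (amount <= 0) {
            throw new IllegalArgumentException("amount must be positive");
        }
        balance += amount;
    }

    /** @return the current balance of this account */
    public int getBalance() {
        return balance;
    }
}
```

Because the comment sits next to the method it documents, a change to deposit that is not reflected in its comment is at least visible in the same edit, which is exactly the property the higher-level documentation lacks.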
The Model Driven Architecture
The Model Driven Architecture (MDA) is a framework for software development defined by the Object Management Group (OMG). Key to MDA is the importance of models in the software development process. Within MDA the software development process is driven by the activity of modeling your software system.
In this section we first explain the basic MDA development life cycle, and next illustrate how MDA can help to solve (at least part of) the problems mentioned in the previous sections.
1.2.1 The MDA Development Life Cycle
The MDA development life cycle, which is shown in Figure 1-2, does not look very different from the traditional life cycle. The same phases are identified. One of the major differences lies in the nature of the artifacts that are created during the development process. The artifacts are formal models, i.e., models that can be understood by computers. The following three models are at the core of the MDA.
Figure 1-2. MDA software development life cycle
Platform Independent Model
The first model that MDA defines is a model with a high level of abstraction that is independent of any implementation technology. This is called a Platform Independent Model (PIM).
A PIM describes a software system that supports some business. Within a PIM, the system is modeled from the viewpoint of how it best supports the business. Whether a system will be implemented on a mainframe with a relational database or on an EJB application server plays no role in a PIM.
Platform Specific Model
In the next step, the PIM is transformed into one or more Platform Specific Models (PSMs). A PSM is tailored to specify your system in terms of the implementation constructs that are available in one specific implementation technology. For example, an EJB PSM is a model of the system in terms of EJB structures. It typically contains EJB-specific terms like "home interface," "entity bean," "session bean," and so on. A relational database PSM includes terms like "table," "column," "foreign key," and so on. It is clear that a PSM will only make sense to a developer who has knowledge about the specific platform.
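To make the contrast concrete, the fragment below sketches the shape of elements in an EJB PSM. The interfaces are strongly simplified stand-ins of our own, not the real javax.ejb API:

```java
import java.util.HashMap;
import java.util.Map;

// A simplified sketch of the elements an EJB PSM talks about: an
// "entity bean" for Customer and its "home interface" for creating and
// finding beans. A relational PSM would instead describe the same
// Customer as a table with columns and keys.
public class EjbPsmSketch {

    /** The entity bean: platform-specific counterpart of the PIM's Customer. */
    public static class CustomerBean {
        public final int id;
        public final String name;
        public CustomerBean(int id, String name) { this.id = id; this.name = name; }
    }

    /** The home interface: the EJB-style factory and finder for the bean. */
    public interface CustomerHome {
        CustomerBean create(int id, String name);
        CustomerBean findByPrimaryKey(int id);
    }

    /** A trivial in-memory home, standing in for the container's implementation. */
    public static class InMemoryCustomerHome implements CustomerHome {
        private final Map<Integer, CustomerBean> beans = new HashMap<>();
        public CustomerBean create(int id, String name) {
            CustomerBean bean = new CustomerBean(id, name);
            beans.put(id, bean);
            return bean;
        }
        public CustomerBean findByPrimaryKey(int id) { return beans.get(id); }
    }
}
```

Nothing in this fragment says anything about the business; it is all platform vocabulary, which is why a PSM only makes sense to a developer who knows the platform.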
A PIM is transformed into one or more PSMs; for each specific technology platform a separate PSM is generated. Because most systems today span several technologies, it is common to have several PSMs derived from one PIM.
Code
The final step in the development is the transformation of each PSM to code. Because a PSM fits its technology rather closely, this transformation is relatively straightforward.
The MDA defines the PIM, PSM, and code, and also defines how these relate to each other. A PIM should be created, then transformed into one or more PSMs, which then are transformed into code. The most complex step in the MDA development process is the one in which a PIM is transformed into one or more PSMs.
Raising the Level of Abstraction
The PIM, PSM, and code are shown as artifacts of different steps in the development life cycle. More importantly, they represent different abstraction levels in the system specification. The ability to transform a high level PIM into a PSM raises the level of abstraction at which a developer can work. This allows a developer to cope with more complex systems with less effort.
1.2.2 Automation of the Transformation Steps
The MDA process may look suspiciously like traditional development. However, there is a crucial difference. Traditionally, the transformations from model to model, or from model to code, are done mainly by hand. Many tools can generate some code from a model, but that usually goes no further than the generation of template code, where most of the work still has to be filled in by hand.
In contrast, MDA transformations are always executed by tools, as shown in Figure 1-3. Many tools are able to transform a PSM into code; there is nothing new in that. Given that the PSM is already very close to the code, this transformation isn't that exciting. What's new in MDA is that the transformation from PIM to PSM is automated as well. This is where the obvious benefits of MDA come in. How much effort has been spent in your projects on the painstaking task of building a database model from a high-level design? How much precious time was spent building a COM component model, or an EJB component model, from that same design? It is about time that the burden on IT workers was eased by automating this part of their job.
Figure 1-3. The three major steps in the MDA development process
At the time of writing, the MDA approach is very new, and current tools are not yet sophisticated enough to perform the transformations from PIM to PSM and from PSM to code completely automatically. The developer needs to manually enhance the transformed PSM and/or code models. However, current tools are able to generate a running application from a PIM that provides basic functionality, like creating and changing objects in the system. This allows a developer to get immediate feedback on the PIM under development, because a basic prototype of the resulting system can be generated on the fly.
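As a small illustration of what an automated PIM-to-PSM step does, the sketch below turns a platform-independent class description into a relational PSM, rendered as DDL text. The PIM representation (a class name plus typed attributes) and the type-mapping choices are illustrative assumptions of ours, not a standard MDA API:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringJoiner;

// A toy PIM-to-PSM transformer: it maps a platform-independent class
// description onto a relational PSM, shown here as a CREATE TABLE
// statement. Real MDA tools operate on full UML models; this only
// illustrates the principle of an automated transformation.
public class PimToRelational {

    /** Maps PIM-level data types to SQL column types (illustrative choices). */
    static final Map<String, String> TYPE_MAP = Map.of(
            "String", "VARCHAR(40)",
            "Integer", "INTEGER",
            "Date", "DATE");

    /** Transforms one PIM class (name plus attributes) into a table definition. */
    static String classToTable(String className, Map<String, String> attributes) {
        StringJoiner columns = new StringJoiner(", ");
        attributes.forEach((attrName, pimType) ->
                columns.add(attrName + " " + TYPE_MAP.get(pimType)));
        return "CREATE TABLE " + className + " (" + columns + ")";
    }

    public static void main(String[] args) {
        Map<String, String> attrs = new LinkedHashMap<>();
        attrs.put("name", "String");
        attrs.put("birthDate", "Date");
        System.out.println(classToTable("Customer", attrs));
        // prints: CREATE TABLE Customer (name VARCHAR(40), birthDate DATE)
    }
}
```

The point is that the platform-specific detail (column types, DDL syntax) lives in the transformation, not in the PIM, so the same PIM could be fed to a different transformer targeting another platform.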
MDA Benefits
Let us now take a closer look at what application of MDA brings us in terms of improvement of the software development process.
1.3.1 Productivity
In MDA the focus for a developer shifts to the development of a PIM. The PSMs that are needed are generated by a transformation from PIM to PSM. Of course, someone still needs to define the exact transformation, which is a difficult and specialized task. But such a transformation only needs to be defined once and can then be applied in the development of many systems. The payback for the effort to define a transformation is large, but it can only be done by highly skilled people.
The majority of developers will focus on the development of PIMs. Since they can work independently of details and specifics of the target platforms, there is a lot of technical detail that they do not need to bother with. These technical details will be automatically added by the PIM to PSM transformation. This improves the productivity in two ways.
In the first place, the PIM developers have less work to do because platform-specific details need not be designed and written down; they are already addressed in the transformation definition. At the PSM and code level, there is much less code to be written, because a large amount of the code is already generated from the PIM.
The second improvement comes from the fact that the developers can shift focus from code to PIM, thus paying more attention to solving the business problem at hand. This results in a system that fits much better with the needs of the end users. The end users get better functionality in less time.
Such a productivity gain can only be reached by the use of tools that fully automate the generation of a PSM from a PIM. Note that this implies that much of the information about the application must be incorporated in the PIM and/or the generation tool. Because the high-level model is no longer "just paper," but directly related to the generated code, the demands on the completeness and consistency of the high-level model (PIM) are higher than in traditional development. A human reading a paper model may be forgiving—an automated transformation tool is not.
1.3.2 Portability
Within the MDA, portability is achieved by focusing on the development of PIMs, which are by definition platform independent. The same PIM can be automatically transformed into multiple PSMs for different platforms. Everything you specify at the PIM level is therefore completely portable.
The extent to which portability can be achieved depends on the automated transformation tools that are available. For popular platforms, a large number of tools will undoubtedly be (or become) available. For less popular platforms, you may have to use a tool that supports plug-in transformation definitions, and write the transformation definition yourself.
For new technologies and platforms that will arrive in the future, the software industry needs to deliver the corresponding transformations in time. This enables us to quickly deploy new systems with the new technology, based on our old and existing PIMs.
1.3.3 Interoperability
So far, the picture we have painted of MDA has been incomplete. As shown in Figure 1-4, multiple PSMs generated from one PIM may have relationships; in MDA these are called bridges. When PSMs are targeted at different platforms, they cannot directly talk with each other. One way or another, we need to transform concepts from one platform into concepts used in another platform. This is what interoperability is all about. MDA addresses this problem by generating not only the PSMs, but also the necessary bridges between them.
Figure 1-4. MDA interoperability using bridges
If we are able to transform one PIM into two PSMs targeted at two platforms, all of the information we need to bridge the gap between the two PSMs is available. For each element in one PSM we know from which element in the PIM it has been transformed. From the PIM element we know what the corresponding element is in the second PSM. We can therefore deduce how elements from one PSM relate to elements in the second PSM. Since we also know all the platform-specific technical details of both PSMs (otherwise we couldn't have performed the PIM-to-PSM transformations), we have all the information we need to generate a bridge between the two PSMs.
Take, for example, one PSM to be a Java (code) model and the other PSM to be a relational database model. For an element Customer in the PIM, we know to which Java class(es) this is transformed. We also know to which table(s) this Customer element is transformed. Building a bridge between a Java object in the Java-PSM and a table in the Relational-PSM is easy. To retrieve an object from the database, we query the table(s) transformed from Customer, and instantiate the class(es) in the other PSM with the data. To store an object, we find the data in the Java object and store it in the "Customer" tables.
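The retrieve-and-store logic of such a bridge can be sketched as follows. The Customer class is a hypothetical stand-in, and the relational side is simulated with an in-memory map; a generated bridge would instead speak JDBC or a similar API against a real database:

```java
import java.util.HashMap;
import java.util.Map;

// A hand-written sketch of the bridge an MDA tool could generate
// between a Java PSM (the Customer class) and a relational PSM (the
// Customer table, simulated here as rows keyed by primary key).
public class CustomerBridge {

    /** The Java-PSM side: a plain Customer object. */
    public static class Customer {
        public final int id;
        public final String name;
        public Customer(int id, String name) { this.id = id; this.name = name; }
    }

    /** The Relational-PSM side: rows of the Customer table, keyed by id. */
    private final Map<Integer, Map<String, Object>> customerTable = new HashMap<>();

    /** Store: copy the object's fields into the corresponding columns. */
    public void store(Customer c) {
        Map<String, Object> row = new HashMap<>();
        row.put("id", c.id);
        row.put("name", c.name);
        customerTable.put(c.id, row);
    }

    /** Retrieve: query the table by key and instantiate a Customer. */
    public Customer retrieve(int id) {
        Map<String, Object> row = customerTable.get(id);
        return row == null ? null : new Customer(id, (String) row.get("name"));
    }
}
```

The field-to-column correspondence encoded in store and retrieve is exactly the information both PIM-to-PSM transformations already possessed, which is why the bridge can be generated rather than written by hand.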
Cross-platform interoperability can be realized by tools that not only generate PSMs, but the bridges between them, and possibly to other platforms, as well. You can "survive" technology changes while preserving your investment in the PIM.
1.3.4 Maintenance and Documentation
Working with the MDA life cycle, developers can focus on the PIM, which is at a higher level of abstraction than code. The PIM is used to generate the PSM, which in turn is used to generate the code. The model is an exact representation of the code. Thus, the PIM fulfills the function of high-level documentation that is needed for any software system.
The big difference is that the PIM is not abandoned after writing. Changes made to the system will eventually be made by changing the PIM and regenerating the PSM and the code. In practice today, many of the changes are made to the PSM and code is regenerated from there. Good tools, however, will be able to maintain the relationship between PIM and PSM, even when changes to the PSM are made. Changes in the PSM will thus be reflected in the PIM, and high-level documentation will remain consistent with the actual code.
In the MDA approach the documentation at a high level of abstraction will naturally be available. Even at that level, the need to write down additional information, which cannot be captured in a PIM, will remain. This includes, for example, argumentation for choices that have been made while developing the PIM.
MDA Building Blocks
Now what do we need to implement the MDA process? The following are the building blocks of the MDA framework:
High-level models, written in a standard, well-defined language, that are consistent and precise, and contain enough information about the system.
One or more standard, well-defined languages to write high-level models.
Definitions of how a PIM is transformed to a specific PSM that can be automatically executed. Some of these definitions will be "home-made," that is, made by the project that works according to the MDA process itself. Preferably, transformation definitions would be in the public domain, perhaps even standardized, and tunable to the individual needs of its users.
A language in which to write these definitions. Because this language must be interpreted by the transformation tools, it must be a formal language.
Tools that implement the execution of the transformation definitions. Preferably these tools offer the users the flexibility to tune the transformation step to their specific needs.
Tools that implement the execution of the transformation of a PSM to code.
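The separation between a transformation definition and the tool that executes it, including the tunability mentioned above, can be pictured as follows. Here the "definition" is no more than a type-mapping table handed to a generic engine; real transformation-definition languages (such as OMG's QVT standard) are far richer, so the rule format below is purely an illustrative assumption:

```java
import java.util.Map;

// Sketch of the separation between a transformation *definition*
// (data that users can tune) and a generic transformation *engine*
// that executes it.
public class TransformationEngine {

    /** A transformation definition: here simply PIM-type to PSM-type rules. */
    private final Map<String, String> rules;

    public TransformationEngine(Map<String, String> rules) {
        this.rules = rules;
    }

    /** Executes the definition on one PIM-level attribute declaration. */
    public String transformAttribute(String name, String pimType) {
        String psmType = rules.get(pimType);
        if (psmType == null) {
            throw new IllegalArgumentException("no rule for type " + pimType);
        }
        return name + " " + psmType;
    }

    public static void main(String[] args) {
        // The default definition...
        TransformationEngine standard = new TransformationEngine(
                Map.of("String", "VARCHAR(40)", "Integer", "INTEGER"));
        System.out.println(standard.transformAttribute("name", "String"));
        // ...and a tuned one, overriding only the String mapping.
        TransformationEngine tuned = new TransformationEngine(
                Map.of("String", "VARCHAR(255)", "Integer", "INTEGER"));
        System.out.println(tuned.transformAttribute("name", "String"));
    }
}
```

Swapping in a different rule table changes the generated PSM without touching the engine, which is the kind of flexibility the building blocks above call for.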
At the time of writing, many of the above building blocks are still under development. Chapter 3 provides an overview of where we stand today.
In the following chapters each of the building blocks is further examined and we show how it fits into the overall MDA framework.
Software development is often compared with hardware development in terms of maturity. While in hardware development there has been much progress, e.g., processor speed has grown exponentially in twenty years, the progress made in software development seems to be minimal. To some extent this is a matter of appearances. The progress made in software development cannot be measured in terms of development speed or costs.
Progress in software development is evident from the fact that it is feasible to build much more complex and larger systems. Just think how quickly and efficiently we would be able to build a monolithic mainframe application that has no graphical user interface and no connections to other systems. We never do this anymore, and that is why we do not have solid figures to support the idea that progress has been made.
Still, software development is an area in which we are struggling with a number of major problems. Writing software is labor intensive. With each new technology, much work needs to be done again and again. Systems are never built using only one technology and systems always need to communicate with other systems. There is also the problem of continuously changing requirements.
To show how MDA addresses these problems, we will analyze some of the most important problems with software development and discover the cause of these problems.
1.1.1 The Productivity Problem
The software development process as we know it today is often driven by low-level design and coding. A typical process, as illustrated in Figure 1-1, includes a number of phases:
1. Conceptualization and requirements gathering
2. Analysis and functional description
3. Design
4. Coding
5. Testing
6. Deployment
Figure 1-1. Traditional software development life cycle
Whether we use an incremental and iterative version of this process, or the traditional waterfall process, documents and diagrams are produced during phases 1 through 3. These include requirements descriptions in text and pictures, and often many Unified Modeling Language (UML) diagrams like use cases, class diagrams, interaction diagrams, activity diagrams, and so on. The stack of paper produced is sometimes impressive. However, most of the artifacts from these phases are just paper and nothing more.
The documents and corresponding diagrams created in the first three phases rapidly lose their value as soon as the coding starts. The connection between the diagrams and the code fades away as the coding phase progresses. Instead of being an exact specification of the code, the diagrams usually become more or less unrelated pictures.
When a system is changed over time, the distance between the code and the text and diagrams produced in the first three phases becomes larger. Changes are often done at the code level only, because the time to update the diagrams and other high-level documents is not available. Also, the added value of updated diagrams and documents is questionable, because any new change starts in the code anyway. So why do we use so much precious time building high-level specifications?
The idea of Extreme Programming (XP) (Beck 2000) has become popular in a rapid fashion. One of the reasons for this is that it acknowledges the fact that the code is the driving force of software development. The only phases in the development process that are really productive are coding and testing.
As Alistair Cockburn states in Agile Software Development (Cockburn 2002), the XP approach solves only part of the problem. As long as the same team works on the software, there is enough high-level knowledge in their heads to understand the system. During initial development this is often the case. The problems start when the team is dismantled, which usually happens after delivery of the first release of the software. Other people need to maintain (fix bugs, enhance functionality, and so on) the software. Having just code and tests makes maintenance of a software system very difficult. Given five hundred thousand lines of code (or even much more), where do you start to try and understand how a system works?
Alistair Cockburn talks about "markers" that need to be left behind for new people who will work on the software. These markers often take the form of text and higher-level diagrams. Without them you are literally lost. It is like being dropped in a foreign city without a map or road signs that show the directions.
So, either we spend our time in the first phases of software development building high-level documentation and diagrams, or we spend it in the maintenance phase finding out what the software actually does. Either way, we are not directly productive in the sense of producing code. Developers often consider these tasks to be overhead: writing code is being productive, writing models or documentation is not. Still, in a mature software project these tasks need to be done.
1.1.2 The Portability Problem
The software industry has a special characteristic that makes it stand apart from most other industries. Each year, and sometimes even faster, new technologies are being invented and becoming popular (for example Java, Linux, XML, HTML, SOAP, UML, J2EE, .NET, JSP, ASP, Flash, Web Services, and so on). Many companies need to follow these new technologies for good reasons:
The technology is demanded by the customers (e.g., Web interfaces).
It solves some real problems (e.g., XML for interchange or Java for portability).
Tool vendors stop supporting old technologies and focus on the new one (e.g., UML support replaces OMT).
The new technologies offer tangible benefits for companies and many of them cannot afford to lag behind. Therefore, people have to jump on these new technologies quite fast. As a consequence, the investments in previous technologies lose value and they may even become worthless.
The situation is even more complex because the technologies themselves change as well. They come in different versions, without a guarantee that they are backwards compatible. Tool vendors usually only support the two or three most recent versions.
As a consequence, existing software is either ported to the new technology, or to a newer version of an existing technology. The software may remain unchanged utilizing the older technology, in which case the existing, now legacy, software needs to interoperate with new systems that will be built using new technology.
1.1.3 The Interoperability Problem
Software systems rarely live in isolation. Most systems need to communicate with other, often already existing, systems. As a typical example we have seen that over the past years many companies have been building new Web-based systems. The new end-user application runs in a Web browser (using various technologies like HTML, ASP, JSP, and so on) and it needs to get its information from existing back-end systems.
Even when systems are completely built from scratch, they often span multiple technologies, sometimes both old and new. For example, when a system uses Enterprise Java Beans (EJB), it also needs to use relational databases as a storage mechanism.
Over the years we have learned not to build huge monolithic systems. Instead we try to build components that do the same job by interacting with each other. This makes it easier (or at all possible) to make changes to a system. The different components are all built using the best technology for the job, but need to interact with each other. This has created a need for interoperability.
1.1.4 The Maintenance and Documentation Problem
In the previous sections, we touched upon the problem of maintenance. Documentation has always been a weak link in the software development process. It is often done as an afterthought. Most developers feel their main task is to produce code. Writing documentation during development costs time and slows down the process. It does not support the developer's main task. The availability of documentation supports the task of those that come later. So, writing documentation feels like doing something for the sake of prosperity, not for your own sake. There is no incentive to writing documentation other than your manager, who tells you that you must.
This is one of the main reasons why documentation is often not of very good quality. The only persons that can check the quality are fellow developers who hate the job of writing documentation just as much. This also is the reason that documentation is often not kept up to date. With every change in the code the documentation needs to be changed as well—by hand!
The developers are wrong, of course. Their task is to develop systems that can be changed and maintained afterwards. Despite the feelings of many developers, writing documentation is one of their essential tasks.
A solution to this problem at the code level is the facility to generate the documentation directly from the source code, ensuring that it is always up to date. The documentation is in effect part of the code and not a separate entity. This is supported in several programming languages, like Eiffel and Java. This solution, however, only solves the low-level documentation problem. The higher-level documentation (text and diagrams) still needs to be maintained by hand. Given the complexity of the systems that are built, documentation at a higher level of abstraction is an absolute must.
The Model Driven Architecture
The Model Driven Architecture (MDA) is a framework for software development defined by the Object Management Group (OMG). Key to MDA is the importance of models in the software development process. Within MDA the software development process is driven by the activity of modeling your software system.
In this section we first explain the basic MDA development life cycle, and next illustrate how MDA can help to solve (at least part of) the problems mentioned in the previous sections.
1.2.1 The MDA Development Life Cycle
The MDA development life cycle, which is shown in Figure 1-2, does not look very different from the traditional life cycle. The same phases are identified. One of the major differences lies in the nature of the artifacts that are created during the development process. The artifacts are formal models, i.e., models that can be understood by computers. The following three models are at the core of the MDA.
Figure 1-2. MDA software development life cycle
Platform Independent Model
The first model that MDA defines is a model with a high level of abstraction that is independent of any implementation technology. This is called a Platform Independent Model (PIM).
A PIM describes a software system that supports some business. Within a PIM, the system is modeled from the viewpoint of how it best supports the business. Whether a system will be implemented on a mainframe with a relational database or on an EJB application server plays no role in a PIM.
Platform Specific Model
In the next step, the PIM is transformed into one or more Platform Specific Models (PSMs). A PSM is tailored to specify your system in terms of the implementation constructs that are available in one specific implementation technology. For example, an EJB PSM is a model of the system in terms of EJB structures. It typically contains EJB-specific terms like "home interface," "entity bean," "session bean," and so on. A relational database PSM includes terms like "table," "column," "foreign key," and so on. It is clear that a PSM will only make sense to a developer who has knowledge about the specific platform.
A PIM is transformed into one or more PSMs. For each specific technology platform a separate PSM is generated. Most of the systems today span several technologies, therefore it is common to have many PSMs with one PIM.
Code
The final step in the development is the transformation of each PSM to code. Because a PSM fits its technology rather closely, this transformation is relatively straightforward.
The MDA defines the PIM, PSM, and code, and also defines how these relate to each other. A PIM should be created, then transformed into one or more PSMs, which then are transformed into code. The most complex step in the MDA development process is the one in which a PIM is transformed into one or more PSMs.
Raising the Level of Abstraction
The PIM, PSM, and code are shown as artifacts of different steps in the development life cycle. More importantly, they represent different abstraction levels in the system specification. The ability to transform a high level PIM into a PSM raises the level of abstraction at which a developer can work. This allows a developer to cope with more complex systems with less effort.
1.2.2 Automation of the Transformation Steps
The MDA process may look suspiciously much like traditional development. However, there is a crucial difference. Traditionally, the transformations from model to model, or from model to code, are done mainly by hand. Many tools can generate some code from a model, but that usually goes no further than the generation of some template code, where most of the work still has to be filled in by hand.
In contrast, MDA transformations are always executed by tools as shown in Figure 1-3. Many tools are able to transform a PSM into code; there is nothing new to that. Given the fact that the PSM is already very close to the code, this transformation isn't that exciting. What's new in MDA is that the transformation from PIM to PSM is automated as well. This is where the obvious benefits of MDA come in. How much effort has been spent in your projects with the painstaking task of building a database model from a high-level design? How much (precious) time was used by building a COM component model, or an EJB component model from that same design? It is indeed about time that the burden of IT-workers is eased by automating this part of their job.
Figure 1-3. The three major steps in the MDA development process
At the time of writing, the MDA approach is very new. As a result of this, current tools are not sophisticated enough to provide the transformations from PIM to PSM and from PSM to code for one hundred percent. The developer needs to manually enhance the transformed PSM and/or code models. However, current tools are able to generate a running application from a PIM that provides basic functionality, like creating and changing objects in the system. This does allow a developer to have immediate feedback on the PIM that is under development, because a basic prototype of the resulting system can be generated on the fly.
MDA Benefits
Let us now take a closer look at what application of MDA brings us in terms of improvement of the software development process.
1.3.1 Productivity
In MDA the focus for a developer shifts to the development of a PIM. The PSMs that are needed are generated by a transformation from PIM to PSM. Of course, someone still needs to define the exact transformation, which is a difficult and specialized task. But such a transformation only needs to be defined once and can then be applied in the development of many systems. The payback for the effort to define a transformation is large, but it can only be done by highly skilled people.
The majority of developers will focus on the development of PIMs. Since they can work independently of details and specifics of the target platforms, there is a lot of technical detail that they do not need to bother with. These technical details will be automatically added by the PIM to PSM transformation. This improves the productivity in two ways.
In the first place, the PIM developers have less work to do because platform-specific details need not be designed and written down; they are already addressed in the transformation definition. At the PSM and code level, there is much less code to be written, because a large amount of the code is already generated from the PIM.
The second improvement comes from the fact that the developers can shift focus from code to PIM, thus paying more attention to solving the business problem at hand. This results in a system that fits much better with the needs of the end users. The end users get better functionality in less time.
Such a productivity gain can only be reached by the use of tools that fully automate the generation of a PSM from a PIM. Note that this implies that much of the information about the application must be incorporated in the PIM and/or the generation tool. Because the high-level model is no longer "just paper," but directly related to the generated code, the demands on the completeness and consistency of the high-level model (PIM) are higher than in traditional development. A human reading a paper model may be forgiving—an automated transformation tool is not.
1.3.2 Portability
Within the MDA, portability is achieved by focusing on the development of PIMs, which are by definition platform independent. The same PIM can be automatically transformed into multiple PSMs for different platforms. Everything you specify at the PIM level is therefore completely portable.
The extent to which portability can be achieved depends on the automated transformation tools that are available. For popular platforms, a large number of tools will undoubtedly be (or become) available. For less popular platforms, you may have to use a tool that supports plug-in transformation definitions, and write the transformation definition yourself.
For new technologies and platforms that will arrive in the future, the software industry needs to deliver the corresponding transformations in time. This enables us to quickly deploy new systems with the new technology, based on our old and existing PIMs.
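The idea of one PIM feeding several automated transformations can be illustrated with a small sketch. All names here are hypothetical and the "transformations" are deliberately trivial: a single platform-independent class description is turned into a Java class skeleton by one transformation definition and into a SQL table definition by another.

```python
# A minimal sketch (hypothetical names throughout): one platform-independent
# class description is fed to two transformation functions, yielding
# artifacts for two different platforms from the same PIM.

# PIM element: a class with attributes typed by platform-independent types
pim_customer = {"name": "Customer",
                "attributes": {"name": "String", "birthdate": "Date"}}

# Transformation 1: PIM -> Java PSM (here reduced to a source-code skeleton)
JAVA_TYPES = {"String": "String", "Date": "java.util.Date", "Integer": "int"}

def pim_to_java(cls):
    fields = "\n".join(
        f"    private {JAVA_TYPES[t]} {a};" for a, t in cls["attributes"].items()
    )
    return f"public class {cls['name']} {{\n{fields}\n}}"

# Transformation 2: PIM -> relational PSM (a CREATE TABLE statement)
SQL_TYPES = {"String": "VARCHAR(40)", "Date": "DATE", "Integer": "INTEGER"}

def pim_to_sql(cls):
    cols = ", ".join(f"{a} {SQL_TYPES[t]}" for a, t in cls["attributes"].items())
    return f"CREATE TABLE {cls['name']} ({cols});"
```

Porting to a third platform would mean writing one more transformation function; the PIM itself stays untouched, which is exactly the portability claim made above.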
1.3.3 Interoperability
So far we have presented an incomplete picture of MDA. As shown in Figure 1-4, multiple PSMs generated from one PIM may have relationships. In MDA these are called bridges. When PSMs are targeted at different platforms, they cannot communicate with each other directly. One way or another, we need to transform concepts from one platform into concepts used in another platform. This is what interoperability is all about. MDA addresses this problem by generating not only the PSMs, but the necessary bridges between them as well.
Figure 1-4. MDA interoperability using bridges
If we are able to transform one PIM into two PSMs targeted at two platforms, all of the information we need to bridge the gap between the two PSMs is available. For each element in one PSM we know from which element in the PIM it has been transformed. From the PIM element we know what the corresponding element is in the second PSM. We can therefore deduce how elements from one PSM relate to elements in the second PSM. Since we also know all the platform-specific technical details of both PSMs (otherwise we couldn't have performed the PIM-to-PSM transformations), we have all the information we need to generate a bridge between the two PSMs.
Take, for example, one PSM to be a Java (code) model and the other PSM to be a relational database model. For an element Customer in the PIM, we know to which Java class(es) this is transformed. We also know to which table(s) this Customer element is transformed. Building a bridge between a Java object in the Java-PSM and a table in the Relational-PSM is easy. To retrieve an object from the database, we query the table(s) transformed from Customer, and instantiate the class(es) in the other PSM with the data. To store an object, we find the data in the Java object and store it in the "Customer" tables.
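The Customer example can be sketched in code. This is a simplified illustration, not a real generated bridge, and every name in it is hypothetical: the trace information that the two transformations recorded (which class and which table each PIM element became) is what lets the bridge move data between the Java-PSM side and the Relational-PSM side.

```python
# A sketch of a generated bridge (hypothetical names throughout). During the
# two PIM-to-PSM transformations we recorded, per PIM element, which class
# and which table it was transformed to; the bridge uses that trace
# information to move data between the two PSMs.

# Trace information recorded by the transformations for PIM element "Customer"
trace = {"Customer": {"java_class": "Customer", "table": "CUSTOMER",
                      "columns": {"name": "NAME", "birthdate": "BIRTHDATE"}}}

# The relational PSM side, simulated as an in-memory table keyed by table name
database = {"CUSTOMER": [{"NAME": "Rosa", "BIRTHDATE": "1970-01-01"}]}

def load_objects(pim_element):
    """Query the table the PIM element was transformed to and
    instantiate one object per row."""
    info = trace[pim_element]
    return [
        {attr: row[col] for attr, col in info["columns"].items()}
        for row in database[info["table"]]
    ]

def store_object(pim_element, obj):
    """Take the data from an object and store it in the corresponding table."""
    info = trace[pim_element]
    database[info["table"]].append(
        {col: obj[attr] for attr, col in info["columns"].items()}
    )
```

Note that neither function contains anything specific to Customer: given the trace information, the same bridge code works for every PIM element, which is why a tool can generate it.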
Cross-platform interoperability can be realized by tools that generate not only the PSMs, but also the bridges between them, and possibly bridges to other platforms. You can "survive" technology changes while preserving your investment in the PIM.
1.3.4 Maintenance and Documentation
Working with the MDA life cycle, developers can focus on the PIM, which is at a higher level of abstraction than code. The PIM is used to generate the PSM, which in turn is used to generate the code. The model is an exact representation of the code. Thus, the PIM fulfills the function of high-level documentation that is needed for any software system.
The big difference is that the PIM is not abandoned after writing. Changes made to the system will eventually be made by changing the PIM and regenerating the PSM and the code. In practice today, many of the changes are made to the PSM and code is regenerated from there. Good tools, however, will be able to maintain the relationship between PIM and PSM, even when changes to the PSM are made. Changes in the PSM will thus be reflected in the PIM, and high-level documentation will remain consistent with the actual code.
In the MDA approach the documentation at a high level of abstraction will naturally be available. Even at that level, the need to write down additional information, which cannot be captured in a PIM, will remain. This includes, for example, argumentation for choices that have been made while developing the PIM.
MDA Building Blocks
Now what do we need to implement the MDA process? The following are the building blocks of the MDA framework:
High-level models, written in a standard, well-defined language, that are consistent, precise, and contain enough information on the system.
One or more standard, well-defined languages to write high-level models.
Definitions of how a PIM is transformed to a specific PSM that can be automatically executed. Some of these definitions will be "home-made," that is, made by the project that works according to the MDA process itself. Preferably, transformation definitions would be in the public domain, perhaps even standardized, and tunable to the individual needs of its users.
A language in which to write these definitions. Because this language must be interpreted by the transformation tools, it must be a formal language.
Tools that implement the execution of the transformation definitions. Preferably these tools offer the users the flexibility to tune the transformation step to their specific needs.
Tools that implement the execution of the transformation of a PSM to code.
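How the building blocks fit together can be sketched in a few lines. In this deliberately tiny illustration (all names hypothetical), a transformation definition is written as data in a miniature formal "language" that maps a source metaclass to a target template, and a generic tool executes any such definition against any model.

```python
# A minimal sketch of the building blocks working together (hypothetical
# names throughout): a transformation definition written in a tiny formal
# "language" (source metaclass -> target template), plus a generic tool
# that executes any such definition against a model.

# A transformation definition: one rule per source-model metaclass
pim_to_sql_definition = {
    "Class": lambda e: f"CREATE TABLE {e['name']} ();",
}

# The transformation tool: applies whichever definition it is given to
# every model element whose metaclass has a rule
def transform(model, definition):
    return [definition[element["kind"]](element)
            for element in model if element["kind"] in definition]

# A tiny PIM: two classes
pim = [{"kind": "Class", "name": "Customer"},
       {"kind": "Class", "name": "Order"}]
```

The point of the separation is that the tool is written once, while "home-made" or standardized transformation definitions can be swapped in and tuned by their users.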
At the time of writing, many of the above building blocks are still under development. Chapter 3 provides an overview of where we stand today.
In the following chapters each of the building blocks is further examined and we show how it fits into the overall MDA framework.