Comments, discussions, and notes on trends in the development of computing technology, and on the importance of quality in software construction.
Tuesday, December 27, 2005
One more discovers SilkTide
A good diagnosis for anyone who wants to build a page. For further observations and recommendations, you have two options: follow the Google link, or visit the site. For my part, I will try to follow its advice, which in some cases matches weak points I already knew about, and at some point I will get around to fixing them...
Saturday, December 24, 2005
Thursday, December 08, 2005
Change control explained by Jim Johnston
Domain Specific Modeling explained in three words
In some of the earlier articles on DSM here, a link points to these ideas. In any case, here is the original.
Wednesday, November 23, 2005
ITToolbox: a blog of interest on EAI
Wednesday, November 16, 2005
On the education debate in Spain
These days Spain is in the middle of an intense debate over a proposal by the governing party to modify the current education law. While I do not know all the terms of the discussion (nobody could claim to after a month's stay), the set of elements is of interest for any Latin American country.
I take the following from www.quealicante.com (15-11-05):
Without going into other details of the modifications, some of them similar to ones we went through in Argentina, what is questioned in particular is the instability of education legislation: Spain changed its education system four times in 25 years. Going by memory, since the 1960s in Argentina the number of changes may have been one per political cycle, perhaps five or six in all. The result of this inconstancy has been progressive deterioration, with students who begin an educational cycle under one system and finish it under another, and teachers who no longer know which one they are applying, falling back in the end on the mechanics of bureaucratic curricular requirements, empty of educational content.
What the countries with successful educational outcomes do:
"Finland, Japan and Korea are the countries with the best educational levels in the world, according to the latest international reports. While the Japanese have not changed their law since 1984, the Finns hold the record for permanence: their education law has been in force for 35 years."
"It took the Finnish parties ten years to reach an agreement, but in the end they succeeded and passed the law in 1970. They carried out an educational reform that successive governments have respected, and no authority doubts that it is one of the most efficient in the world."
"The two Asian countries have placed themselves at the head of educational development in recent years. Both have laws more than twenty years old. Theirs are consolidated and highly efficient systems, above all in the technical and scientific branches."
"The United Kingdom's last educational reform was carried out in 1987 and was described at the time as 'politically and ideologically aggressive'. After more than fifteen years, they are now considering a new law."
"Canada has laws in force that it has not changed in more than a hundred years."
Other noteworthy aspects:
How Finland makes sure its teaching is understood:
"The Finnish educational system is very demanding, and they know they 'cannot afford the luxury of losing a single student along the way'. Weak students repeat the year, and control exams in different grades help them improve. They speak two or three languages and can read perfectly from the age of seven."
Canada and the universality of education:
"There are ten different educational systems across the country and they do not even have a Ministry of Education. Some education laws are more than a century old and nobody considers changing them. Education is completely free and compulsory from age 5. There are no entrance exams for degree programs: to get into university, applicants do not have to pass a selectividad-style exam. The universities set their own exams and select their students on their own criteria."
The same in New Zealand:
"There are a few religious or philosophical institutions devoted to education, but most children attend public schools. In fact, they consider it a matter of State. It is free from age 5 to 18. Parents watch over their children closely: in New Zealand's educational system, 'parents' committees' are created so that parents themselves keep an eye on their own children. In this way they take part in their education."
Budget devoted to education:
"The share of Gross Domestic Product that Spain spends on education is 4.7%. We are still very far from the average of the OECD countries, which spend 6%. All the Nordic countries exceed that average."
Percentage devoted to scholarships:
"Spain spends roughly 8% of its education budget on scholarships and grants. The average across the OECD countries is 17%. Students in those countries have more opportunities."
Recognition and training of educators:
"En España, los profesores están pagados como la media de los países de nuestro entorno según los últimos datos de la UNESCO. Cobran en su año de ingreso 24.460 euros cantidad similar que en Australia, donde cobran 25.565 euros o en Holanda, donde se les paga 25.864. La media europea está en 25.700 euros, mientras que en el conjunto de la OCDE, que agrupa a los países más desarrollados la media baja hasta los 22.358. En todo caso, los sindicatos de enseñanza han puesto siempre énfasis en el desprestigio de la profesión."
"(en Finlandia) ...La figura del profesor, por tradición, es respetada mucho más que otras profesiones liberales en el país escandinavo. Al fin y al cabo les confían la formación de sus hijos. Quizá por eso el ciclo formativo de los profesores, dependiendo a las especialidades llega a durar siete años, los mismos que en España tiene que dedicar un futuro médico. Forman un auténtico equipo con los padres y se llegan a comunicar con ellos a través de SMS."
¿Es realmente imposible proponerse este tipo de objetivos? O se trata de un esquema de prioridades irracional y egoísta el que sufrimos en nuestros países?
Tuesday, November 15, 2005
Serena Software changes hands
"Our decision to partner with Silver Lake to take the company private represents the culmination of a thorough review of our standalone plan and strategic alternatives and we believe this is the best value proposition for our shareholders", says its chief executive, Mark Woodward.
It would seem to be a renunciation of continuing as an independent project. Does the field get simpler?
Thursday, November 10, 2005
How to prototype one game per week
Interested? Here's a crazy game idea: Drag trash-talkin' gobs of goo to build a giant tower higher and higher. They squirm and giggle and climb upward over the backs of their brothers, but be careful! In a constant battle against gravity, if you build a tower that's too unstable, it will all fall down.
"Tower of Goo" was downloaded over 100,000 times within months of hitting the net, it was dubbed “Internet Game of the Month” in one magazine, it was demoed on G4 and at the Experimental Gameplay Workshop at GDC, and it was one of over fifty games we made as a part of the Experimental Gameplay Project at Carnegie Mellon's Entertainment Technology Center.
And like the rest of them, it was made in under a week, by one person.
The project started in Spring 2005 with the goal of discovering and rapidly prototyping as many new forms of gameplay as possible. A team of four grad students, we locked ourselves in a room for a semester with three rules:
1. Each game must be made in less than seven days,
2. Each game must be made by exactly one person,
3. Each game must be based around a common theme, e.g. "gravity", "vegetation", "swarms", etc.
As the project progressed, we were amazed and thrilled with the onslaught of web traffic, with the attention from gaming magazines, and with industry professionals and academics all asking the same questions: "How are you making these games so quickly?" and "How can we do it too?"
We lay it all out here. Through the following tips, tricks, and examples, we will discuss the methods that worked and those that didn't. We will show you how to slip into a rapid prototyping state of mind, how to set up an effective team, and where to start if you've thought about making something new, but weren't sure how. We hope these well-tested guidelines come in useful for you and your next project, big or small!
See the recommendations by Gabler, Kucic, Gray, and Shodhan, and the experiment itself...
Wednesday, September 28, 2005
Last days in La Serena...
Friday, September 09, 2005
ECMDA: Consistency in model-driven software development
The following issues fall within the scope of the workshop:
- Understanding consistency in the context of MDA and UML in particular: informal consistency from different points of view, static and dynamic consistency
- Definition of consistency in MDA and UML: sets of properties or other techniques
- Checking consistency
- Ensuring consistency
- Model transformations preserving consistency: practical realization
- Tool support for checking and ensuring consistency
- Relationship between consistency and formal techniques and languages
- Consistency-driven development process
A number of artefacts are produced during the software development process, and in MDA in particular. Those artefacts are usually expressed in UML and should be related in different ways to each other. One of the most important relationships between artefacts is that they should be consistent. Putting this question up for discussion and trying to find an answer to it is the main topic to be addressed during the workshop. (...)
All the material is of interest not only for tackling the problem the workshop convenes around, but also for understanding the use of OCL, model transformation operations, and metamodels in UML, and, more generally still, the use of artifacts capable of generating code.
A particularly stimulating topic is raised by Robert Wagner, Holger Giese, and Ulrich Nickel, of the University of Paderborn, Germany: flexibility in the consistency rules applied:
In most CASE tools, the consistency checks being performed are rather static and predefined as they are hard coded into the tool. Thus, new consistency rules neither can be added nor can existing consistency rules be adapted to special user, enterprise, project, target language, or domain specific demands. However, during large projects you will never obtain a complete set of rules covering all relevant inconsistencies. In fact, the set of consistency rules will be expanded and refined through the whole lifecycle of a project. Thus, for a tool developer it becomes infeasible to identify all consistency rules in advance.
More on ECMDA.
In this paper we present a plug-in for a flexible and incremental consistency management realized within the FUJABA TOOL SUITE. FUJABA itself is an Open Source UML CASE tool project. It was started by the software engineering group at the University of Paderborn in fall 1997 and has a special focus on code generation from UML diagrams resulting in a visual programming language. Hence, consistency management was an important issue from the beginning, since consistent specifications are a required prerequisite for an error-free implementation.
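As a rough illustration of the extensibility Wagner, Giese, and Nickel argue for, a consistency rule can be modeled as a plug-in behind a small interface, so that project- or domain-specific rules are registered without touching the tool itself. A minimal sketch in Java; the interface and class names are hypothetical, not FUJABA's actual API.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical model element; a real tool would expose its UML metamodel here.
interface ModelElement {
    String name();
    String kind(); // e.g. "Class", "Operation"
}

// A consistency rule is a pluggable check over the model.
interface ConsistencyRule {
    String id();
    List<String> check(List<ModelElement> model); // returns violation messages
}

// Example user-defined rule: class names must start with an upper-case letter.
class ClassNamingRule implements ConsistencyRule {
    public String id() { return "naming.class.uppercase"; }
    public List<String> check(List<ModelElement> model) {
        List<String> violations = new ArrayList<>();
        for (ModelElement e : model) {
            if (e.kind().equals("Class") && !Character.isUpperCase(e.name().charAt(0))) {
                violations.add(id() + ": class '" + e.name() + "' should start upper-case");
            }
        }
        return violations;
    }
}

// The tool only knows the interface; rules can be registered per project.
class ConsistencyManager {
    private final List<ConsistencyRule> rules = new ArrayList<>();
    void register(ConsistencyRule rule) { rules.add(rule); }
    List<String> checkAll(List<ModelElement> model) {
        List<String> all = new ArrayList<>();
        for (ConsistencyRule r : rules) all.addAll(r.check(model));
        return all;
    }

    public static void main(String[] args) {
        ConsistencyManager mgr = new ConsistencyManager();
        mgr.register(new ClassNamingRule());
        ModelElement bad = new ModelElement() {
            public String name() { return "customer"; }
            public String kind() { return "Class"; }
        };
        System.out.println(mgr.checkAll(List.of(bad))); // one violation reported
    }
}
```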
Participating commercial and open-source tools. FUJABA.
The association's history of MDA conferences.
Saturday, September 03, 2005
A comparison of the security environments of Java and .NET
Java and .NET have similar security goals and mechanisms. .NET's design benefited from past experience with Java. Examples of this cleaner design include the MSIL instruction set, code access security evidences, and the policy configuration. .NET has been able to shield the developer from some of the underlying complexity through their new architecture.
Nevertheless, it should be remembered that the similarity lies in the use of a virtual machine, but that .NET runs multiple languages only on Windows, while Java runs a single language on multiple platforms. As an architectural decision, the alternatives are for now broader on Java, to say nothing of the reliability of the host operating system.
Where Java evolved from an initial platform with limited security capabilities, .NET incorporated more security capability into its original design. With age and new features, much of the legacy code of Java still remains for backwards compatibility including the possibility of a null SecurityManager, and the absolute trust of classes on the bootclasspath. Hence, in several areas .NET has security advantages over Java because of its simpler and cleaner design.
Most of the lessons to learn from Java's vulnerabilities echo Saltzer and Schroeder's classic principles, especially economy of mechanism, least privilege and fail-safe defaults. Of course, Java's designers were aware of these principles, even though in hindsight it seems clear there were occasions where they could (and should) have been followed more closely than they were. Some areas of design present conflicts between security and other design goals, including fail-safe defaults vs. usability, and least privilege vs. usability and complexity. For example, the initial stack walk introduced in Java has evolved to a more complex stack walk in both architectures to enable developers to limit privileges. In addition, both platforms' default policies could be more restrictive to improve security, but restrictive policies hinder the execution of programs. .NET's use of multi-level policies with multiple principals provides another example of the principles of least privilege and fail-safe defaults in contention with usability and complexity. Several of the specific complexities that proved to be problematic in Java have been avoided in the .NET design, although .NET introduced new complexities of its own. Despite .NET's design certainly not being perfect, it does provide encouraging evidence that system designers can learn from past security vulnerabilities and develop more secure systems. We have no doubts, however, that system designers will continue to relearn these principles for many years to come.
Friday, September 02, 2005
Notes on Java performance
Monday, August 29, 2005
Do standards and compatibility matter to everyone?
While it would seem that getting your browser to display the grinning face is simple, getting it to pass the Acid2 test is anything but. Currently there are 90 recommendations within the W3C, ranging from Cascading Style Sheets 2 (CSS 2) and HTML 4.01 to XML-binary Optimized Packaging.
WaSP's test doesn't try to incorporate all 90 recommendations, just the ones the group believes should be in all browsers based on Web developer input. It looks for basic support of HTML 4, CSS 1, data URLs and Portable Network Graphics (PNG), as well as particular implementations from these and other specifications.
The goal of the test is to get browsers to implement the Web standards tested in a consistent fashion, so what the Web surfers see using IE is the same as what another views in Safari, for example.
I tried it with Mozilla Firefox 1.06 and with IE 6.02. The result is frustrating. IE 7.0 will not support it, according to the article, and, in the end, only three browsers manage it: Safari, iCab, and Konqueror.
UML is not OOD: how they relate
Friday, August 26, 2005
MDA in the enterprise: seen from multiple perspectives
For managers:
MDA allows you to get more work done with the same people, or to do the same amount of work with fewer people. The time to completion on new projects will diminish radically, as will the time to make changes on existing systems.
The main objection:
In terms of cost, management will need to consider more than the price of tools. The biggest cost will be education, because not everyone in the organization will be familiar with MDA. Others may need time to become comfortable with the new approach. Therefore, the cost of the transition must be measured in time as well as money, but the end benefits should add up favorably.
For analysts and designers:
Analysts capture business requirements into a formal model. Because the work of capturing requirements is people-driven, most of it is the same regardless of approach. In MDA the formal model is Unified Modeling Language (UML), so your analysts will need to have a working knowledge of UML to implement MDA.
For the most part, the designer's job isn't affected by the transition to MDA. Designers turn analyst's models into more evolved ones, encompassing classes, state diagrams, detailed method actions, and so on. Because the designer's models are used to generate code, they must be detailed and precise. If your organization's UML tool uses executable UML, the designer will be able to more easily demonstrate the behavior of the model to customers.
For architects:
Company architects will likely be excited by your proposal to transition to MDA. MDA makes it much easier for architects to ensure that a system is implemented the way they designed it, from start to finish. As the system evolves, it will be easy to add new technologies, re-use existing code, or plug in third-party modules. Performance tuning will be a snap using existing mappings, and documentation will always be up to date. As a result of these factors the overall quality of the system will improve, and fine-tuning it will be easier than ever. Most architects like to design and specify things before they are implemented, so for them MDA is a welcome change.
In any case, I would not subscribe one hundred percent to the claim that the documentation will always be up to date. I know of no MDA-style tools (anyone is welcome to correct me) that can maintain all the associated documentation as one integrated collection. That is why Jack Greenfield's Software Factory concept is interesting, if it should turn out that SF can achieve this integration.
For programmers:
Every programmer knows how stupid it is to write the Customer class and its functions over and over. MDA cuts out much of that redundant work so that programmers can focus on more productive tasks like writing mappings to different platforms and fine-tuning existing mappings for specific reasons. Programmers put their platform expertise to better use this way, and they also get to concentrate on tough coding jobs that even the best MDA tools can't handle. With all the busy work cut out, programming can become more fun and also more challenging: programmers need to know their tools and languages well in order to work efficiently in an MDA environment.
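As a toy illustration of the redundancy argument, the boilerplate of an entity such as Customer can be emitted mechanically, which is the kind of work MDA generators take over. A minimal sketch with an invented, one-off template; a real MDA tool would derive this from the model instead:

```java
import java.util.List;

// Generates a trivial Java "entity" class from a name and a field list:
// the repetitive work the article says programmers should not redo by hand.
public class EntityGenerator {
    static String generate(String className, List<String> fields) {
        StringBuilder src = new StringBuilder("public class " + className + " {\n");
        for (String f : fields) {
            src.append("    private String ").append(f).append(";\n");
            String cap = Character.toUpperCase(f.charAt(0)) + f.substring(1);
            src.append("    public String get").append(cap)
               .append("() { return ").append(f).append("; }\n");
            src.append("    public void set").append(cap)
               .append("(String v) { this.").append(f).append(" = v; }\n");
        }
        return src.append("}\n").toString();
    }

    public static void main(String[] args) {
        // Prints a complete Customer class with getters and setters.
        System.out.println(generate("Customer", List.of("name", "address", "email")));
    }
}
```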
For testers:
Testing and maintaining are the grunt work of the development cycle, and MDA cuts out much of the worst of both. With the right tools, testers can generate test scripts directly from a system's UML model. This obviates the necessity of writing endless nearly identical test scripts and code fragments, allowing testers to focus on more crucial tasks.
For maintainers:
When not hunting down missing documentation, maintainers can usually be found frowning over out-of-date, poorly written documents that fail to yield the exact information they require. Not so with MDA, where the model is always current and applicable to solving real problems. MDA documentation is built in, which cuts down significantly on both frustration and high maintenance costs.
I would add not only that the documentation is integrated (understood here in the strict sense of the model's design), but also that introducing changes is simple, far more so than one might expect of a system not driven by a model: since the parts are intrinsically integrated and articulated, a change propagates automatically, reducing the problem to following the path of the impact, verifying conflicts (which can be located in every case), and planning the transformations to be implemented when executing the changes. (Speaking in general terms; there can always be exceptions.)
Finally, I endorse Mikko's recommendations on the transition to adopting the architecture:
Convincing your organization to make the transition to MDA is only half the game: implementing the new approach is the second half. After you've adopted the MDA approach and settled on the tools you need to implement it, you'll want to embark on your first-ever MDA project together. Before you even launch the project, you should clearly spell out your goals. Admit that there is hype around MDA and explain its benefits, using the proposed project as an example. Make sure that everyone on the development team understands what you're aiming for. For example, if you want to generate a complete application, provide one or two examples so that people see it can be done. If you have a different intention, then clearly state it and provide examples to back up your goal.
Start with a good project that is small enough to process quickly. It's always best to introduce a new technology or approach with a project that is quick and relatively simple. After you've done a few smaller projects, everyone will have learned the basics and executing larger ones will be easier.
Make sure that everyone who needs support gets it. Project managers might be concerned about the amount of work involved in the new approach, the duration of certain tasks, and so on. Developers may be challenged by the unfamiliar techniques and tools. Having a knowledgeable expert or consultant on the job could be helpful in this phase of the transition.
Follow-up is important. After the project is finished, make a final report detailing the successes and failures of the project. Be sure to record both -- plenty of praise for what worked as well as a clear discussion of problems and how they may be resolved in the future.
Monday, August 22, 2005
"Overwhelming Korea"
Bär writes:
Such vigor, which to Argentine eyes seems simply "miraculous", is due, for Deok, to the virtues of its solid National Innovation System. To get an idea of the dimensions of this organization, it is enough to mention that in 2003 there were 297,060 people in Korea engaged in research and development activities (198,171 of them scientists, about seven per thousand economically active people). To keep it running, Korea invests some 19.4 billion dollars a year in science, 2.64% of its GDP, a figure that grows year after year.
Our numbers, by contrast, are unmentionable. We cannot explain this by natural wealth, because Korea has little; nor by a difference in historical tradition, because other "young" countries post good numbers. Nor because they let foreign capital into their country, because we do too. Not even by recent political history, because Korea's past bears some resemblance to ours. Would it not be better to fix our eyes on every important matter that is stagnant, and bluntly put an end to the fallacies with which we cover them up?
Thursday, August 18, 2005
IBM: Orthogonal Defect Classification
This is how the problem is introduced at IBM's Center for Software Engineering:
Traditionally, defects represent the undesirable aspects of a software's quality. Root Cause Analysis (RCA) and Statistical Growth Modeling (e.g. S-curves) have played useful roles in the analysis of software defects. Effective RCA, while yielding exhaustive details on each defect, takes substantial investment of resources for completion and points to too many actions as a result. Growth modeling, on the other hand, provides an easy way to monitor trends, but is not capable of suggesting corrective actions due to the inadequate capture of the semantics behind the defects.
ODC is a scheme to capture the semantics of each software defect quickly. It is the definition and capture of defect attributes that make mathematical analysis and modeling possible. Analysis of ODC data provides a valuable diagnostics method for evaluating the various phases of the software life cycle (design, development, test and service) and the maturity of the product. This is much like the diagnostics done in medicine using the blood sample from a patient to understand the existing health conditions and arrive at corrective actions. ODC makes it possible to push the understanding and use of defects well beyond quality.
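To picture what "capturing the semantics of each defect" as attributes looks like, a defect record can carry a small set of classification fields over which the statistical analysis is run. A minimal sketch; the attribute names follow ODC's published scheme (defect type, trigger, impact), but the value lists are abbreviated, not the full taxonomy:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Representative (abbreviated) ODC-style attributes; the real scheme defines
// more attributes and more values per attribute.
enum DefectType { FUNCTION, ASSIGNMENT, INTERFACE, CHECKING, TIMING, ALGORITHM }
enum Trigger { DESIGN_REVIEW, UNIT_TEST, FUNCTION_TEST, SYSTEM_TEST, FIELD }
enum Impact { CAPABILITY, USABILITY, PERFORMANCE, RELIABILITY, SECURITY }

record Defect(String id, DefectType type, Trigger trigger, Impact impact) {}

public class OdcDemo {
    public static void main(String[] args) {
        List<Defect> defects = List.of(
            new Defect("D-101", DefectType.ASSIGNMENT, Trigger.UNIT_TEST, Impact.RELIABILITY),
            new Defect("D-102", DefectType.INTERFACE, Trigger.SYSTEM_TEST, Impact.CAPABILITY),
            new Defect("D-103", DefectType.INTERFACE, Trigger.FIELD, Impact.CAPABILITY));

        // The point of the classification: cheap aggregations over attributes
        // (here, defects per type) feed the statistical diagnostics ODC describes.
        Map<DefectType, Long> byType = defects.stream()
            .collect(Collectors.groupingBy(Defect::type, Collectors.counting()));
        System.out.println(byType);
    }
}
```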
Saturday, August 06, 2005
Simply Buenos Aires
Wednesday, July 20, 2005
What is the working framework of the DSL Tools in VS 2005?
JavaOne 2005 materials available
Saturday, July 16, 2005
EAI-ERP: Status report
Bernd Fischer: Certifiable code generation
Other works: Patterns in the reengineering of OO code, by Oscar Nierstrasz.
ABSTRACT: Code generators based on template expansion techniques are easier to build than purely deductive systems but do not guarantee the same level of assurance: instead of providing correctness-by-construction, the correctness of the generated code depends on the correctness of the generator itself. We present an alternative assurance approach, in which the generator is extended to enable Hoare-style safety proofs for each individual generated program. The proofs ensure that the generated code does not go wrong, i.e., does not violate certain conditions during its execution.
The crucial step in this approach is to extend the generator in such a way that it produces all required annotations (i.e., pre-/postconditions and loop invariants) without compromising the assurance provided by the subsequent verification phase. This is achieved by embedding annotation templates into the code templates, which are then instantiated in parallel by the generator. This is feasible because the structure of the generated code and the possible safety properties are known when the generator is developed. It does not compromise the provided assurance because the annotations only serve as auxiliary lemmas and errors in the annotation templates ultimately lead to unprovable safety obligations.
We have implemented this approach and integrated it into the AutoBayes and AutoFilter program generators. We have then used it to fully automatically prove that code generated by the two systems satisfies both language-specific properties such as array-bounds safety or proper variable initialization-before-use and domain-specific properties such as vector normalization, matrix symmetry, or correct sensor input usage.
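The key idea, annotation templates instantiated in parallel with code templates, can be pictured as generator output that carries its own pre-/postconditions and loop invariants for a later verifier. A sketch in Java using JML-style comment annotations purely as illustration; this is not actual AutoBayes or AutoFilter output:

```java
// Illustration of generator output where annotation templates were expanded
// together with the code template (JML-style syntax used informally).
public class GeneratedNorm {

    //@ requires v != null && v.length > 0;
    //@ ensures \result >= 0.0;
    static double squaredNorm(double[] v) {
        double acc = 0.0;
        //@ loop_invariant 0 <= i && i <= v.length && acc >= 0.0;
        for (int i = 0; i < v.length; i++) {
            acc += v[i] * v[i]; // each term is non-negative, preserving the invariant
        }
        return acc;
    }

    public static void main(String[] args) {
        // The annotations are auxiliary lemmas for a verifier; the code itself
        // runs as ordinary Java.
        System.out.println(squaredNorm(new double[] {3.0, 4.0})); // 25.0
    }
}
```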
Monday, July 11, 2005
What is the real scope of domain-specific languages (DSLs)?
Language Workbenches: the killer-app for Domain Specific Languages?
Language Workbenches and Model Driven Architecture
Among them, a reference to LABRI (Laboratoire Bordelais de Recherche en Informatique) offers an overview of the topic, introduces the distinctive characteristics of these languages, and gives a list of cases.
A fundamental issue under discussion is the feasibility of building a DSL, and who is in a position to do so. I quote:
(...) However, they also raise a key issue which, if not addressed, could obstruct the use of DSLs (...): how does one design and implement a DSL? Resolving this issue is critical to make the approach profitable since there is no point in reducing the complexity of program development by shifting all the complexity into the construction and maintenance of a DSL programming environment. Another related question is: who will develop DSLs? Even in the programming language community, only a few people have actually designed a language. A fortiori, we cannot expect software engineers to have the full expertise to build up new languages. Thus, it is crucial that a methodology and tools are provided to make the DSL approach widely accessible.
The laboratory developed a solution (a framework) called SPRINT. Looking at the method they created for building a domain-specific language gives an idea of how far it can really serve as a generic tool to assist in the process of assembling a system. (Specifically, I want to know how feasible it is to create a DSL for a specific project, with a specific problem, in the sense of Microsoft's presentation: "Think of a DSL as a small, highly focused language for solving some clearly identifiable problem that an analyst, architect, developer, tester, or system administrator must wrestle with"; the passage is quoted in full in the June 17 entry below.)
Next comes the description of the methodological framework to follow in specifying a DSL according to SPRINT:
Our methodology is based on a framework outlined in an earlier paper by Thibault and Consel and put into practice in several DSL prototypes (GAL, PLAN-P) and publications. It can be summarized as follows. (For the sake of clarity, the phases of our methodology are presented sequentially. In practice, the whole process needs to be iterated.)
These are the steps to follow (a toy interpreter sketch in Java appears after the list):
- Language analysis
- Assuming a problem family has been identified, the first step is to analyze the commonalities and the variations in the corresponding program family. This analysis is fueled by domain knowledge. The result of this analysis includes a description of objects and operations that are needed to express solutions to the family of problems, as well as language requirements (e.g., analyzability) and elements of design (e.g., notations).
- Interface definitions
- The next phase is to refine the design elements of the DSL. To do so, the syntax of the DSL is defined and its informal semantics is developed. The informal semantics relates the syntactic constructs to the objects and operations (i.e., the building blocks) identified previously. Additionally, the domain of objects and the type of operations are formalized, thus forming the signature of semantic algebras.
- Staged semantics
- The semantics of a GPL is typically split between the compile-time and the run-time actions. These two parts are also referred to as the static and the dynamic semantics of a language. We propose to perform the same separation in the semantics of a DSL. With respect to software architecture concerns, this separation makes stages of configuration explicit.
- Formal definition
- Once the static and dynamic components of the language have been determined, the DSL is formally defined. Valuation functions define the semantics of the syntactic constructs. They specify how the operations of the semantic algebras (i.e., the building blocks) are combined.
- Abstract machine
- Then, the dynamic semantic algebras are grouped to form a dedicated abstract machine which models the dynamic semantics of the DSL. From the denotational semantics, the DSL is given an interpretation in terms of this abstract machine. The state of the semantics is globalized and mapped into abstract machine entities (e.g., registers) dedicated to the program family.
- Implementation
- The abstract machine is then given an implementation (typically, a library), or possibly many, to account for different operational contexts. The valuation function can be implemented as an interpreter based on an abstract machine implementation, or as a compiler to abstract machine instructions.
- Partial evaluation
- While interpreting is more flexible, compiling is more efficient. To get the best of both worlds, we use a program transformation technique, namely, partial evaluation, to automatically transform a DSL program into a compiled program, given only an interpreter. Writing the language definition as an interpreter allows fast prototyping, and easier maintenance and extension. However, it does not compromise efficiency as partial evaluation is able to remove the interpretation layer overhead, yielding efficient DSL implementations.
As can be seen, building a DSL, even with the help of a tool that provides methodological guidance and shortens the steps, is no trivial task. Not to mention that there is a prior methodological step left uncovered ("Assuming a problem family has been identified...").
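Here is the toy sketch promised above: a deliberately tiny command language whose interpreter is its definition, in the spirit of the implementation and partial-evaluation steps. The language and names are invented for the example; SPRINT itself targets far richer languages:

```java
import java.util.HashMap;
import java.util.Map;

// A deliberately tiny DSL: programs are lines of "set x 5", "add x y", "print x".
// The interpreter is the language definition; partially evaluating it with
// respect to a fixed program is what removes the interpretation overhead.
public class TinyDslInterpreter {
    public static void run(String program) {
        Map<String, Integer> env = new HashMap<>(); // dynamic-semantics state
        for (String line : program.split("\n")) {
            String[] tok = line.trim().split("\\s+"); // static part: syntax analysis
            switch (tok[0]) {
                case "set"   -> env.put(tok[1], Integer.parseInt(tok[2]));
                case "add"   -> env.merge(tok[1], env.getOrDefault(tok[2], 0), Integer::sum);
                case "print" -> System.out.println(tok[1] + " = " + env.get(tok[1]));
                default      -> throw new IllegalArgumentException("unknown op: " + tok[0]);
            }
        }
    }

    public static void main(String[] args) {
        run("""
            set x 3
            set y 4
            add x y
            print x
            """); // prints: x = 7
    }
}
```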
I am not questioning DSLs, much less the development of this tool to widen the field covered by specific DSLs. I only strongly doubt that a collection of DSLs can be the alternative to an MDA-style tool model.
Stated baldly, the claims of Greenfield and Microsoft are nothing but a promise or a piece of commercial speculation, and the plain truth is that we are talking about Visual Studio using C# or another .NET language to interface with whatever DSLs exist, whenever the problem arises... and the rest is C#, that is, the same as before.
Monday, July 04, 2005
The hard work of leaving the procedural approach behind
But the most interesting and valuable of the several threads on the subject is the one called "OOP/OOD Philosophy". I invite you to follow the discussion and put your own criteria to the test.
Monday, June 20, 2005
Another example of the European commitment to MDA
Fraunhofer-Gesellschaft of Munich, IKV++ Technologies AG of Germany, and Thales ATM, based in several countries, are working together on a project for the early validation of MDA concepts:
See their MDA implementation scheme here.
The objective of the MASTER project (IST-2001-34600) is to perform an early validation of the concept of MDA (Model Driven Architecture) which is currently being promoted by the OMG (Object Management Group). The validation consists of an experimental approach: MDA concepts are to be applied in the Air Traffic Management (ATM) domain, focused on achieving the following more detailed objectives:
Develop a variability specification language and apply it to the ATM domain to improve customising requirements according to the final user needs, and map variability (open decisions) on to the architecture.
Provide the mechanisms to derive and analyse the development process activities and project management parameters (such as cost, effort, time, etc.) since the very early stages in contract negotiation and project definition, by using the architecture as the basis for reasoning about this.
Implement the MDA concepts of separation of concerns to achieve independence from specific platform technologies and therefore enable the specialisation of resources (business knowledge separate from technological knowledge)
Additionally, the project aims at providing tool support for these activities. Tool support will allow for the modelling of variability among ATM systems using a variability specification language, automate the project management based analysis on the architecture and support the transformation mappings among different levels of abstraction of an MDA approach.
Finally, one very important aim of the project is to contribute to the development of the MDA standard and the consolidation of ATM domain. The project plans to achieve this through active participation in the standardisation groups of OMG.
Friday, June 17, 2005
Microsoft and MDA-MDD - Domain Specific Languages
The idea of using specific languages for specific problems is set against the idea of platform-specific models (PSM), on the assumption that a single platform model corresponds to each platform. Whether or not that is currently so, it hardly seems a reason to oppose them. It is in any case true that a DSL can serve as the generation substrate for a given model.
Microsoft has learned from past industry experiences, and plans to avoid the pitfalls of CASE by adopting an approach to model-driven development based on the following ideas:
- A model should be a first-class artifact in a project—not just a piece of documentation waiting to become outdated. Models have a precise syntax, are often best edited and viewed using a graphical tool, and embody semantics that determine how domain-specific concepts in models map to other implementation artifacts, such as code, project structures, and configuration files. In this way, a model is a lot like a source code file, and the mechanisms that synchronize it with other implementation artifacts are a lot like compilers.
- A model represents a set of abstractions that support a developer in a well-defined aspect of development. (...)
- Since models can abstract and aggregate information from a number of artifacts, they can more readily support consistency checks and other forms of analysis. For example, an application connectivity model might support contract protocol validation, security analysis, or performance analysis.
- Models can be implemented by a process similar to compilation, where the code, configuration files, and other implementation artifacts generated by the compiler are never edited by hand. More often, however, they are implemented by a combination of generated and hand-edited artifacts. In such cases, it is critically important to carefully manage the way the generated and hand-edited artifacts fit together. As mentioned above, the failure to do this effectively was one of the primary shortcomings of CASE products. We have been using several techniques to ensure that generated and hand-edited artifacts are kept separate, and that code added by a developer is never disturbed when boilerplate code required by a tool is generated. These techniques include the use of class delegation and inheritance, and particularly the use of "partial classes," a new feature of the .NET languages in Visual Studio 2005, which has been specifically defined with this task in mind.
We call modeling languages defined in these ways Domain Specific Languages or DSLs. Think of a DSL as a small, highly focused language for solving some clearly identifiable problem that an analyst, architect, developer, tester, or system administrator must wrestle with. Developers are already familiar with examples of DSLs; SQL for data manipulation, XSD for XML document structure definition, and so on. Another example from Visual Studio Team Edition for Software Architects is a DSL for modeling the logical structure of datacenter hardware and host software configurations. This DSL and its related graphical designer can be used to validate during design time that applications are configured to match their intended deployment targets, alerting the developer to problems when they can be fixed more cheaply.
Good ways to find candidate DSLs are to identify the patterns used by developers, and then to encapsulate them into a modeling language, or to surface the concepts in a software framework as abstractions in a modeling language that can then generate small amounts of code that extend the framework. These techniques allow us to control the amount and complexity of generated code, offering real value to the developers, without the hassles that characterized CASE products.
Microsoft recently announced the DSL Tools, which will enable customers and partners to build DSLs using the same technology in Visual Studio 2005 that we used to build the modeling tools that will ship with Visual Studio Team Edition for Software Architects. This technology, which can be thought of as "tools to build tools," simplifies the task of defining DSLs and reduces the effort and skills required to build graphical editors and compilers for them.
The idea of creating code by combining automatic generation with manual work keeps the product inside the very frame MS criticizes in models developed from UML. There is no difference: (Models can be implemented by a process similar to compilation, where the code, configuration files, and other implementation artifacts generated by the compiler are never edited by hand. More often, however, they are implemented by a combination of generated and hand-edited artifacts. In such cases, it is critically important to carefully manage the way the generated and hand-edited artifacts fit together)
Hand-written code will always be opaque to the model, so it will easily fall out of sync, and it demands that the developer keep track of the coordination between the two worlds.
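The separation techniques the Microsoft text mentions (class delegation, inheritance, partial classes) can be pictured, in Java terms, as a generated base class that is regenerated at will plus a hand-written subclass that the generator never touches. A minimal sketch with invented names:

```java
// ---- GENERATED FILE: regenerated on every model change, never edited ----
abstract class CustomerBase {
    private String name;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    // Extension point the generator leaves open for hand-written logic.
    public abstract boolean isValid();
}

// ---- HAND-EDITED FILE: survives regeneration untouched ----
class Customer extends CustomerBase {
    @Override
    public boolean isValid() {
        return getName() != null && !getName().isBlank();
    }
}
```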
DSLs are products aimed at solving particular problems, each well defined, well tested, and given the solution best suited to the case. So says the article, and so we know from other definitions. The novelty MS offers is a tool for building DSLs. This is consistent with offering DSLs in general as the counterpart to a PSM (Platform Specific Model): that is, if DSLs are to be proposed as the solution to the mapping that occurs between a PIM and a PSM, it must be possible to build a DSL for every platform-related problem that comes up. Is that possible? No doubt a generator or mapper can be built for a problem whose rules were already being applied manually. But is it viable? Surely, depending on the size of the project, or on how regularly the same problem must be solved. Given a team that can devote the necessary man-hours, and the corresponding knowledge, to creating all the algorithms that interpret a domain and to testing every possible scenario, a DSL meeting all the stated requirements can surely be built. Just as surely, devoting oneself to that goal lies outside the productivity and cost reduction that model- and generator-based development aims at. Unless we are talking about world-class companies. In the rest of the prosaic world, the pressure to solve problems at the lowest cost means that the search for rapid development tools tries to obtain exactly that: rapid development.
Thus the toolbox for creating DSLs is most likely a tool for software companies devoted to software niches, working with .NET and Visual Studio.
A small, narrow scope for the level of the criticism that Microsoft's vision levels at the OMG.
Monday, June 13, 2005
Microsoft and MDA-MDD - A second look
I should make clear that I hold a prejudice: that MS's insistence on discarding, devaluing, or discrediting the two related OMG standards (MDA and UML) has a simple commercial motive. The best outcome of unraveling its initiative would be to determine the positive value that MS's modeling strategy can contribute.
I should also say that I myself, from my personal experience with Plex, can agree that a product need not be built through transformations of a model generated from a UML schema. Although I believe that even among the builders of the standard it is accepted that there can be other paths from a model to an application.
With that said, I would like to focus on the most valuable point of the MS strategy:
In our opinion, a single PIM and a single PSM per target platform, all developed using a general purpose modeling language, as prescribed by MDA, are insufficient to support the significantly greater levels of automation promised by model-driven development.
Agreed. Trying to raise model-driven design to a scheme that broadly supports an application's complete life cycle is a natural consequence of the evolutionary construction of a model.
As exemplified in the MS document:
Rich automation of the software life cycle requires many additional types of models, such as models that:
- Capture, analyze, and manage requirements; identifying trace relationships between requirements, architectural design and implementation constructs, enabling validation that requirements have been implemented, and supporting impact analysis when requirements change.
- Define software architecture in a way that supports security, performance and reliability analysis, and other forms of evaluation; and in a way that enables predictable assembly of systems from components, and efficient and reversible step-by-step transformations from requirements and deployment,
- Define how the executable components of a system are packaged, identifying the types of resources in the deployment environment required by each component, and binding the components to specific instances of those resource types,
- Define test cases, test data sets, test harnesses, and other artifacts, making it easier to evaluate the quality of software developed using models, and to manage and display the test results
- Identify traceability relationships between models and other artifacts, making it easier to support business impact analysis when systems go down, configure systems to satisfy requirements, and enforce constraints during system configuration,
- Define configurations of source artifacts used to build executables, making it easier to version those configurations, and to associate defect reports and feature change requests with specific versions.
Leaving aside that some of these cases ought to be covered today by a UML schema (surely the third item), here is a set of scenarios that would require orchestration, traceability, interconnection or integration, and automation, and that go one step beyond what a PIM => PSM transformation can reach today. Many of them could be approached by channeling the use of a model inside a configuration management tool, which today covers almost all of the items, at least in a sequential, historical fashion. But it would be far better for the model itself to articulate all these aspects.
MS speaks of "many additional types of models" to describe how these aspects would be expressed, framing the problem as a collection of models. I am more comfortable with the OMG idea of views addressed by different diagrams. The idea of a collection of models makes me think their integration would be achieved through the IDE in which they are described and gathered, which leaves a degree of subjectivity in how they will be interpreted or applied. All the more so when the IDE that hosts them is a part of Visual Studio itself, where code construction is automatic only insofar as it can be derived from a wizard, an inherited class, a header, or an applied pattern. I will return to this characteristic another time.
In short, the idea of extending, and the proposed areas of extension, are of real interest, and I believe they are valuable generators of ideas. How much of this the initiative actually achieves is something to be judged by coming down closer to each of the concrete tools with which it is attempted. On the other hand, handling these resources as a collection suggests that the idea is not finished, and that in these terms it may even be dangerous, unless good discipline governs its use. Model-based development set out precisely to attack the complexity of design: facilitating the architectural view, moving toward abstraction, and bringing the complexity of detailed decisions under control, in such a way that the cascade of consequences flowing from definitions in the abstract model stays under control. I do not see broad facilities of this kind in the scheme MS proposes. It could turn into the opposite.
We will return to this as we go further along...
Wednesday, June 08, 2005
Microsoft explains its position on MDA-MDD
Customers and partners are keen to understand Microsoft's strategy for model-driven development and its support in Visual Studio Team System. When we explain our strategy to them, they frequently express interest in some of the same topics, and raise some of the same concerns. In this document we set out our strategy for model-driven development as a series of questions and responses that address these topics and concerns. The first five questions deal with the main pillars of our strategy, which we have described with detailed answers and explanations.
Over the next few days I will try to draw the best out of the document...
Monday, May 30, 2005
If programmers were bricklayers...
January 1st
Today they took me to the building site for the first time. The location is perfect: the Metro is two steps away, and there is a café across the street that serves a set lunch. The old apartment block that our new construction is to replace has spent a year on the verge of collapse. My own company has installed several props which, for the moment, have kept the decrepit building from bursting through its many cracks. Construction of this brick megalith began five years ago, and although the upper floors never received water, electricity, or plaster on the walls, within ten months the foundations had already shifted dangerously and the beams showed dangerous fissures. The tired residential tower has served its purpose, and now we will lead it to a gentle death. Of course, the old building will not be demolished until after the new one has been built and tested, which leaves us little room to maneuver; but we are not going to put all those families out on the street during construction. Even so, the residents of the old, decaying structure eye us with suspicion. They know the new building will have more comfortable apartments, but some of the residents will not be able to afford them. I have no idea what will become of these people, nor is it my business. The first truckloads of bricks are arriving.
January 2nd
I have been introduced to Alberto, the person to whom I am going to "report". They did not tell me whether he is the foreman, the site manager, the surveyor, or the architect; they only told me that whatever I have to "report", I should "report" it to him. So wherever he points, off I go, chug-chug, like a locomotive. That is the definition I was given of our methodology. I looked up "report" in the dictionary, and it is not there.
February 6th
In a little over a month, we have dug half a meter of foundations. Yesterday Alberto told us to start laying bricks, because the time allotted for the foundations had run out two weeks ago. He did not accept our excuses that the promised excavators still had not arrived and that we had been forced to dig with plastering trowels. A colleague brought in a digging shovel he had kept from a previous job, and they nearly fired him on deontological grounds. According to Alberto, the real problem is that we spend too much time at the café. The matter was settled with "right then, up with the walls, and after that every man for himself". Working without plans is laborious. The foundations have a rather picturesque shape. I asked for a plumb line so the walls would come out vertical, and received insults casting doubt on my masculinity. I now know that Alberto is not the architect, because the architect is a certain Ignacio. He came by to inspect the work the other day, although there was nothing to see yet. Rumors have reached me, though not very credible ones, that photocopies of plans exist.
May 12th
Last night we were up until seven in the morning laying boards over and carpeting the space that will someday be the sixth-floor office, although the building is still nothing but a tangle of beams of every size and a few walls that will have to be torn down later because they are in the wrong place. We brought in batteries for the fluorescent lights and some gorgeous mahogany furniture. Luckily, everything was ready in time for the demo. We hoisted the client up to his future office with the crane, and he was able to take in the view to be enjoyed from the site. At the worst possible moment, the wind brought the west wall, which two of my colleagues were propping up with their backs, crashing down onto the mahogany table with a great roar. Thank God, the client was understanding: this always happens in demos, and he is past being shocked, he said, as we brushed the dust off his suit. He says that next Monday he will come to test the sanitary installations. We will make up for the nonexistent plumbing with buckets.
February 23rd
Almost fourteen months have gone by. We are now seven months behind schedule, and the building never quite gets past the state of "almost finished". I am one of the few bricklayers who has not changed sites in all this time. Alberto is consumed by anxiety and spends the day in the café knocking back Soberano brandy. The architect has not come by again. Rumor has it that plans did once exist, but they were not for an apartment block: they were for a sports center. Apparently, in the construction committee meetings it was said that the philosophy was the same and that only minimal modifications would be needed. Now I understand why they had us install basketball hoops in the elevator shaft. I always said we would end up having to take them out, or else that was no elevator shaft; it was a matter of simple logic. Alberto always answered that I shouldn't come to him with technicalities. I am losing my vocation as a bricklayer. I have decided to sign up for an evening computing course, to see if I can change my life. This trade of mine is not serious.
* Published in the "Cartas del iluso" section of the magazine Solo Programadores, issue 19, March 1996.
Copyright © 1994-1998, ATI, Asociación de Técnicos de Informática.
Friday, May 27, 2005
Embrace and Devour III? - A comment possibly very much to the point...
First, the tone of the piece, which seems written by an adolescent Microsoft fan (I would hardly go and read a book of trend analysis introduced by an adolescent fan). His caricature of the "geeky Linuxer in opposition" could, I fear, fit him rather well. Someone who thinks that the battles over commercial competition fought out in the period he cites could be reduced to a war of fans probably does not know the impact those commercial wars left behind. Or else the document is meant to capture the enthusiasm of his own side's nerds...
Second, one of his two central claims: "Microsoft is the only company on the list (of large software companies) that has never made a colossal, senseless mistake." It may be of interest to study the mistakes of its fallen competitors, but to think that this point explains Microsoft's dominance is to ignore or conceal another fundamental element of that dominance: the relentless exploitation of its dominant position, expressed in an endless chain of monopoly lawsuits (Windows Media Player and Netscape, to mention only two well-known cases, do not reflect strategic errors by the competitor but abuse of a monopoly position, and it is revolting to hear the prologue's author explain Netscape's fall by its code-rewrite plan while ignoring the lawsuit, sustained over several years, about the forced inclusion of IE in Windows). Better yet, we will be privileged spectators of history, because now the target is Google, and there we will see how the doctrine of "making no colossal mistakes" is applied: MSN Search will also be included in everything that is Windows, and, if you are a Messenger user, you may have noticed that you are forced to upgrade to the new version 7.0 unconditionally, and all but forced to accept the MSN Search bundled into the upgrade; to me, at least, it is outrageous that I cannot open my 6.x version unless I accept 7.0 (something similar happens with XP). The discussions in the European Union and in China about the giant's rules of conduct require, I think, no further comment.
Spolsky assumes a second argument to justify MS's dominance: that a programmer stands at its head. I think that gives Gates very little credit...
Embrace and Devour II - Description of the invention
What follows is the list of claims in the patent (a rough interface sketch of claim 1 follows the list):
What is claimed is:
1. A system that facilitates mapping arbitrary data models, comprising a mapping component that receives respective metadata from at least two arbitrary data models, and maps expressions between the data models.
2. The system of claim 1, the data models are query languages.
3. The system of claim 1, the data models are data access languages.
4. The system of claim 1, the data models are data manipulation languages.
5. The system of claim 1, the data models are data definition languages.
6. The system of claim 1, the data models include at least an object model and at least one relational model where the object model is mapped to at least one of the relational models.
7. The system of claim 1, the data models include object models where one of the object models is mapped to at least one of the other object models.
8. The system of claim 1, the data models include an XML model and at least one relational model where the XML model is mapped to at least one of the relational models.
9. The system of claim 1, the data models are XML models where one of the XML models is mapped to at least one of the other XML models.
10. The system of claim 1, the data models include an XML model and at least one object model where the XML model is mapped to at least one of the object models.
11. The system of claim 1, the data models are relational models where one relational model is mapped to at least one of the other relational models.
12. The system of claim 1, the data models include an XML model and an object model where the object model is mapped to at least one of the XML models.
13. The system of claim 1, the data models are of the same structure.
14. The system of claim 1, the mapping component receives topology data that is derived from the metadata.
15. The system of claim 1, the data models are read-only.
16. The system of claim 1, the data models are mapped without modifying the metadata or structure of the data models themselves.
17. The system of claim 1, the expressions comprise at least one of a structure, field, and relationship.
18. The system of claim 17, the expressions mapped between the data models are at least one of the same, different, and a combination of the same and different.
19. The system of claim 1, the mapping component creates structural transformations on the data of a data model by at least one of creating or collapsing hierarchies, moving attributes from one element to another, and introducing new relationships.
20. The system of claim 1, the mapping component relates and connects the same mappable concepts between the data models.
21. The system of claim 1, the mapping between the data models is directional.
22. The system of claim 1, the data models include a source domain and a target domain, such that a structure and field of the target domain can be mapped at least once.
23. The system of claim 1, the data models include a source domain and a target domain, such that a structure and field of the source domain can be mapped multiple times.
24. The system of claim 1, the data models include a source domain and a target domain such that the mapping component allows a user to operate on the data models through a query language of the target domain.
25. The system of claim 1, the data models include a source domain and a target domain such that the source domain is the persistent location of the data.
26. The system of claim 1, the data models include a source domain and a target domain such that mapping translates a query written in a query language of the target domain into a query language of the source domain.
27. The system of claim 1, the mapping component facilitates automatically synchronizing updates made in a target data model to a source data model.
28. The system of claim 1, the mapping component includes a mapping file that maps like concepts of the respective metadata.
29. A computer executing the system of claim 1.
30. A system that facilitates mapping arbitrary data models, comprising: source metadata that represents source concepts of a source data source; target metadata that represents target concepts of at least one target data source; and a mapping component that receives the source metadata and the target metadata and maps the concepts from the source data source to the target metadata associated with one or more of the target data sources.
31. The system of claim 30, the mapped concepts are at least one of the same, different, and a combination of the same and different.
32. The system of claim 30, the data sources are of the same structure.
33. The system of claim 30, the source concepts and the target concepts are the same in both of the data sources.
34. The system of claim 30, the mapping component relates and maps the same concepts between the data sources.
35. The system of claim 30, the source and target concepts include a relationship element that is a link and association between two structures in the same data source.
36. The system of claim 35, the relationship element defines how a first structure relates to a second structure in the same data source.
37. The system of claim 30, the source data source and the target data source are disposed on a network remote from the mapping component.
38. The system of claim 30, the mapping component is local to at least one of the source data source and the target data source.
39. A method of mapping data between data models, comprising: receiving respective metadata from at least two arbitrary data models; and mapping expressions between at least two of the data models based upon the metadata.
40. The method of claim 39, the expressions mapped between the two data models are the same expressions.
41. The method of claim 39, further comprising defining a source data schema and a target data schema and information missing in the schemas.
42. The method of claim 39, further comprising transforming data during a mapping of a source data model to a target data model using a function.
43. The method of claim 39, further comprising synchronizing changes made in a target data model with a source data model.
44. A method for mapping arbitrary data models, comprising: receiving source metadata that represents source concepts of a source data source and target metadata that represents target concepts of a target data source; and mapping the concepts between the source and target data sources based upon the source metadata and the target metadata.
45. The method of claim 44, the data sources are of the same structure.
46. The method of claim 44, further comprising, creating a variable in a source domain; restricting the variable with conditions; and mapping the variable to the target concept.
47. The method of claim 46, the variable is created at least one of implicitly and explicitly.
48. The method of claim 46, the variable represents an empty result set.
49. The method of claim 44, the mapping is stackable between the source and target data via one or more intermediate mapping stages.
50. The method of claim 44, further comprising selecting an optimal path for mapping between the source and the target.
51. The method of claim 50, the optimal path is selected with a central control entity based on at least one of available bandwidth and interruptions in the path.
52. The method of claim 44, further comprising accessing a mapping algorithm in response to selecting an optimal path between a plurality of the data sources and plurality of the data targets.
53. The method of claim 52, the mapping algorithm is associated with a structure of the source data and the target data.
54. A system that facilitates mapping data between arbitrary data models, comprising: means for receiving source metadata that represents source concepts of a source data source and target metadata that represents target concepts of a target data source; and means for mapping the concepts between the source and target data sources based upon the source metadata and the target metadata.
55. The system of claim 54, the means for mapping includes a mapping means that relates and maps the same concepts between the data sources.
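To get a sense of how broad claim 1 is, here is a minimal Java sketch (entirely hypothetical, my own invention) of "a mapping component that receives respective metadata from at least two arbitrary data models, and maps expressions between the data models". Every name is made up; the point is how little it takes for a mapping layer to fall under this wording.

// Hypothetical sketch of claim 1. None of these names come from the
// patent or from any real product; they only illustrate the shape of
// a "mapping component" as the claim describes it.
import java.util.HashMap;
import java.util.Map;

interface DataModelMetadata {
    String modelKind();              // e.g. "relational", "object", "xml"
    Iterable<String> expressions();  // structures, fields, relationships (claim 17)
}

class MappingComponent {
    private final Map<String, String> mapping = new HashMap<String, String>();

    // Claim 1: receive respective metadata from at least two arbitrary
    // data models and map expressions between them.
    void map(DataModelMetadata source, DataModelMetadata target) {
        for (String expression : source.expressions()) {
            mapping.put(source.modelKind() + ":" + expression,
                        target.modelKind() + ":" + expression);
        }
    }

    String resolve(String sourceExpression) {
        return mapping.get(sourceExpression);
    }
}

Read this way, almost any OR mapper, XML binding layer or EAI tool of recent years contains a component of roughly this shape, which is exactly why the claim worries so many people.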
Thursday, May 26, 2005
Mel Brooks: "Abarca y Devora"
The patent reads: "A data mapping architecture for mapping between two or more data sources without modifying the metadata or structure of the data sources themselves. Data mapping also supports updates. The architecture also supports at least the case where data sources that are being mapped, are given, their schemas predefined, and cannot be changed."

The list of owners of the patent includes a significant number of people linked to Microsoft. The common reading is that it is Microsoft itself moving in on an idea that has been widely applied for many years by many companies, one that can cover different kinds of problems, but which is singled out above all as a threat to existing designs in OR mapping (relational-to-object mapping and vice versa).
First contact with the news: an early warning by Jack Herrington on CGN-Talk, from where the complete contents of the approved patent, and the names of its owners, can be read. Then the note in The Server Side pointed to in the title of this article, attributing the move, and above all its consequences, to Microsoft. As the discussion in The Server Side says, a patent is not final, but contestable; yet it represents a definite step towards a restrictive goal: placing an idea that in several respects belongs to the public and academic domain in the hands of a group of beneficiaries of future intellectual-property claims.
Although it is early to assign the move a father, the "strong circumstantial evidence" points to one, and to one accustomed to operating by these values.
The following is the descriptive introduction of the invention:
[0004] The present invention disclosed and claimed herein, in one aspect thereof, comprises a mapping format designed to support a scenario where two (or more) data sources need to map to each other, without modifying the metadata or structure of the data sources themselves. Mapping is provided, for example, between an Object space and a relational database, Object space and XML data model, an XML data model and a Relational data model, or mapping could be provided between any other possible data model to XML, relational data model, or any other data model. The mapping format supports updates, and also supports the case where both data sources being mapped are given, their schemas are predefined, and cannot be changed (i.e., read-only). An approach that was previously used to map, for example, XML data to a relational database required making changes to the XML schema definition (XSD) file in order to add annotations. The mapping format of the present invention works as if the XSD file is owned by an external entity and cannot be changed. It also allows reuse of the same XSD file, without editing, for multiple mappings to different data sources (databases, etc.).

I would like to highlight the remaining paragraphs of the introduction:
[0005] Each data model exposes at least one of three concepts (or expressions) to mapping: structure, field, and relationship. All of these concepts can be mapped between the data models. It is possible that one data model may have only one or two of the expressions to be mapped into another data model that has three expressions. The mapping structure is the base component of the mapping schema and serves as a container for related mapping fields. A field is a data model concept that holds typed data. Relationship is the link and association between two structures in the same data model, and describes how structures in the same domain relate to each other. The relationship is established through common fields of the two structures and/or a containment/reference where a structure contains another structure. These are just examples of relationships, since other relationships can be established (e.g., siblings, functions, . . . ). The present invention allows establishing arbitrary relationships. A member of a data model can be exposed as a different mapping concept depending on the mapping context.
[0006] Semantically, mapping is equivalent to a view (and a view is actually a query) with additional metadata, including reversibility hints and additional information about the two mapped domains. When one data source is mapped to another, what is really being requested is that it is desired that the Target schema is to be a view of the Source schema. Mapping is a view represented in one data domain on top of another data domain, and defines the view transformation itself. Mapping can create complex views with structural transformations on the data, which transformations create or collapse hierarchies, move attributes from one element to another, and introduce new relationships.
[0007] Mapping relates and connects the same mappable concepts between two or more mapped models. Mapping is also directional. Thus, one domain is classified as a Source and the other is classified as a Target. The directionality of mapping is important for mapping implementation and semantics, in that, a model that is mapped as a Source has different characteristics then a model that is mapped as a Target. The Target holds the view of the source model, where the mapping is materialized using the query language of the target domain. The Source is the persistent location of the data, and mapping translates the query written in the target domain query language to the source domain query language. One difference between Source and Target is that a structure or field from the Source or Target model has some restrictions regarding the number of mappings that can apply for structures and fields. In the target domain, a structure and a field can only be mapped once, whereas in the Source domain, a structure and a field can be mapped multiple times. For example, a Customers table can be mapped to a BuyingCustomer element and a ReferringCustomer element in the Target domain. However, a local element or local class can be mapped only once. Another difference that stems for the directional attribute of mapping is that mapping allows users to operate on the mapped models through the query language of the target domain (e.g., using XQuery for mapping a Relational model to an XML model, and OPath, for mapping a Relational model to an Object model).
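A small invented example may make the Source/Target asymmetry of [0007] easier to see. The Customers case is the patent's own; the API is mine and purely illustrative:

// Hypothetical illustration of the directionality rules in [0007]:
// a Source structure may be mapped many times, a Target structure
// only once. Only the Customers example comes from the patent text.
import java.util.HashMap;
import java.util.Map;

class DirectionalMapping {
    private final Map<String, String> targetToSource =
            new HashMap<String, String>();

    void map(String sourceStructure, String targetStructure) {
        // Target-side restriction: a target structure is mapped only once.
        if (targetToSource.containsKey(targetStructure)) {
            throw new IllegalStateException(
                "target structure already mapped: " + targetStructure);
        }
        // No restriction on the source side: Customers may appear many times.
        targetToSource.put(targetStructure, sourceStructure);
    }

    public static void main(String[] args) {
        DirectionalMapping m = new DirectionalMapping();
        m.map("Customers", "BuyingCustomer");    // allowed
        m.map("Customers", "ReferringCustomer"); // allowed: source reused
        m.map("Orders", "BuyingCustomer");       // throws: target reused
    }
}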
[0008] Another important attribute of the mapping architecture is that of being updateable. In the past, developers had to write code to propagate and synchronize changes between the domains. However, in accordance with the present invention, the mapping engine now performs these tasks. That is, when the user creates, deletes, or modifies a structure in the target domain, these changes are automatically synchronized (persisted) to the source domain by the target API and mapping engine.
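The update path of [0008] amounts to the familiar listener pattern: the target API notifies a mapping engine, which persists the change in the source domain. A rough sketch, again with invented names:

// Rough sketch of [0008]: changes made through the target API are
// synchronized back to the source domain by the mapping engine,
// replacing the hand-written propagation code developers used to write.
interface SourceDomain {
    void persist(String structure, String field, Object value);
}

class MappingEngine {
    private final SourceDomain source;

    MappingEngine(SourceDomain source) {
        this.source = source;
    }

    // Invoked by the target API when the user modifies a target structure.
    void onTargetChange(String targetStructure, String field, Object value) {
        String sourceStructure = resolveToSource(targetStructure);
        source.persist(sourceStructure, field, value); // automatic sync
    }

    private String resolveToSource(String targetStructure) {
        // Lookup in the mapping metadata; identity here for simplicity.
        return targetStructure;
    }
}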
[0009] In another aspect thereof, the mapping architecture is stackable where multiple stages of mappings may occur from a source to a target.
[0010] To the accomplishment of the foregoing and related ends, certain illustrative aspects of the invention are described herein in connection with the following description and the annexed drawings. These aspects are indicative, however, of but a few of the various ways in which the principles of the invention may be employed and the present invention is intended to include all such aspects and their equivalents. Other advantages and novel features of the invention may become apparent from the following detailed description of the invention when considered in conjunction with the drawings....
Tuesday, May 17, 2005
The European commitment to MDA
Among other notable collaborators, such as France Telecom, Germany's Fraunhofer Gesellschaft, and IBM UK, I would like to single out the Universidad Politécnica de Madrid, and Jean Bézivin, for France's Institut National de Recherche en Informatique et en Automatique.

Modelware, presenting itself in a nutshell:

There is a growing gap between the demands of the end-users and the solutions the currently used methods of software development achieve. A paradigm to close this gap consists of the use of models for the construction of software. Model Driven Development (MDD) puts this concept on the critical path of the software development. The ultimate goal of the project MODELWARE is the large-scale deployment of Model Driven Development.
MODELWARE has three major objectives:
- Objective A: To develop a solution to enable a 15-20% productivity increase of software systems development. This solution will be based on MDD.
- Objective B: To lead the industrialisation of the solution.
- Objective C: To ensure the successful adoption of that solution by industry.
Based on the exploitation of higher levels of abstractions in the specification and design of software systems, MODELWARE provides a rigorous and coherent framework bringing major improvements in engineering processes. The framework automates the production of most software artefacts (tests, documentation, code, etc) and allows better capitalisation of know-how.
MODELWARE establishes the missing link between advanced formal approaches proposed by the academic communities and more traditional industry driven engineering solutions.
MODELWARE defines and develops the complete infrastructure required for large scale deployment of Model Driven Development strategies and validates it in several business areas. MODELWARE combines innovations in modelling technologies, tool development (both generic and domain specific), standardization, experimentations, change management, and users-suppliers relationships.
UMT-QVT, in its terse self-presentation:
UMT-QVT is a tool for model transformation and code generation of UML/XMI models. UMT-QVT provides an environment in which new generators can be plugged in. The tool environment is implemented in Java. Generators are implemented in either XSLT or Java.
UMT-QVT is the 'Sourceforge' name for the UMT tool, which is short for UML Model Transformation Tool. It is not, as you may or may not assume, an implementation of the forthcoming OMG QVT (Query, View, Transformation) standard. However, it might migrate towards that eventually.....
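To make the idea of a pluggable generator concrete: something that walks a (here drastically simplified) UML model and emits source code. This is not the real UMT-QVT API, which I have not examined in detail; the sketch only shows the kind of work such a generator does:

// Invented sketch of what a code generator plugged into a tool of this
// kind does: traverse a model and emit source text. Not the UMT-QVT API.
import java.util.Arrays;
import java.util.List;

class UmlClass {
    final String name;
    final List<String> attributes;

    UmlClass(String name, List<String> attributes) {
        this.name = name;
        this.attributes = attributes;
    }
}

class SimpleJavaGenerator {
    String generate(UmlClass model) {
        StringBuilder out = new StringBuilder();
        out.append("public class ").append(model.name).append(" {\n");
        for (String attribute : model.attributes) {
            out.append("    private String ").append(attribute).append(";\n");
        }
        out.append("}\n");
        return out.toString();
    }

    public static void main(String[] args) {
        UmlClass customer = new UmlClass("Customer",
                Arrays.asList(new String[] { "name", "address" }));
        System.out.print(new SimpleJavaGenerator().generate(customer));
    }
}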
Jon Oldevik, of Norway, offers more information on his site ModelBased.
Wednesday, May 04, 2005
Extreme Programming revisited
Timothy's "pros":
Testing
Unit testing has certainly been around for many, many years, but it's certainly not unreasonable to say that we owe a great deal of today's increased emphasis on unit testing to XP. Unit testing was promoted as one of XP's core practices. Along with the recognition of unit testing as a powerful aid to developers came the methodology of test-driven development. This is a development style in which unit tests are always written prior to the code which they are testing.
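For anyone who has not tried it, a minimal illustration of test-driven development in the JUnit 3.x style of the period (assuming junit.jar on the classpath), along the lines of Kent Beck's well-known Money example. The test is written first; the Money class below it is the minimal implementation written afterwards to make it pass:

// A test written before the code it tests. In test-driven development,
// Money does not exist when this test is first written; making it
// compile and pass IS the development step. Details here are my own.
import junit.framework.TestCase;

public class MoneyTest extends TestCase {
    public void testSimpleAddition() {
        Money two = new Money(2, "USD");
        Money three = new Money(3, "USD");
        // assertEquals relies on Money implementing equals().
        assertEquals(new Money(5, "USD"), two.add(three));
    }
}

// The first Money implementation: just enough to make the test pass.
class Money {
    private final int amount;
    private final String currency;

    Money(int amount, String currency) {
        this.amount = amount;
        this.currency = currency;
    }

    Money add(Money other) {
        return new Money(amount + other.amount, currency);
    }

    public boolean equals(Object o) {
        if (!(o instanceof Money)) return false;
        Money m = (Money) o;
        return amount == m.amount && currency.equals(m.currency);
    }

    public int hashCode() {
        return amount * 31 + currency.hashCode();
    }
}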
Work plan

What XP refers to as the "planning game", is another good practice that can be carried over into other methodologies. XP does a very good job in maintaining schedules, plans, and keeping the plan very current. The iteration and build strategies advocated by XP are very good strategies for any project. Amongst these strategies, XP advocates frequent releases, short and well defined development iterations, frequent builds, and emphasis on build automation. Whether you choose to practice XP or not, these are strategies that add value to any methodology.
In this case, the plan refers to the short term. On this ground XP is very interesting, and I would apply as much of it as I could, especially the goal of short releases: a development cycle as short as possible...
Other "pro" aspects are not discussed, only briefly grouped into the previous point. I would like to add a "pro" barely hinted at the beginning of his article: common, or perhaps better, communal ownership of the code. Certainly not taken to the extreme of daily rotation in writing code, but rotation in building the code is a strong lever for achieving a better command of the product being worked on. Quite the opposite of the usual practice, where each member of a project tends to specialize in one activity. In fact, Timothy criticizes this concept as one of XP's "cons":
XP also does not believe in the need for "experts". Their philosophy is that all programmers should be well balanced on all technologies employed by the application. While this sounds admirable as a goal, the reality of this is that you kill the enthusiasm of an expert, and encourage a mediocre understand of all technologies in all your programmers. It is simply not realistic for all programmers to become experts in all of a project's technologies. Individuals who want to gain expertise in a specific technology, such as security, GUI design, databases, etc. are discouraged.

I think two aspects coexist here. One, which I also see, is the little emphasis placed on global or long-term analysis, with primary importance given to writing code. The other is rotation in carrying out the work: it is true that a specialist should not be replaced, and that his work has a quality of its own; but, in the first place, rotation among peers is possible and advisable, to improve global knowledge of the product under construction, at least enough that no area depends on a single person. Besides, the global view builds cohesion and even makes improvement possible through observing one's peer.
Los "cons" de Timothy (the not so good):
El cliente en el equipo
Another core practice of XP asks you to keep a customer representative onsite with your development team so that you are able to quickly work out requirements confusion, get answers to questions, and get feedback on your progress. This sounds great, but it’s the practice I usually refer to as the fantasy practice. We'd all love to have James Gosling onsite also in case we run into any Java problems, but it's simply not realistic, just as having an onsite customer is not realistic in most cases. When you are able to achieve this, absolutely go for it. It's a grand idea, but not one that a software development methodology should rely on as a core practice.
Making things simple

XP also tends to be full of short catch phrases summarizing its core principles. One of those is "Do the Simplest thing Possible." Here they are advocating that you should always implement the simplest, least complicated solution to the task at hand. They specifically warn against coding for anticipated future extensibility and add-ons that may or may not actually happen. While in general, this may not be bad advice, I think making this a core principle of XP is going too far with this attitude. Often, it does make a great deal of sense to spend the time to implement something that, while not necessarily the simplest solution, is a smarter and better solution. Remember this; the simplest solution is not always the best solution. This tenet seems to be born out of programmers who are adverse to strategic design, analysis, and modeling and simply want to code.

(Emphasis mine.) Here I agree insofar as this "simplicity" amounts to opposing development guided by a strategic vision. Apart from that, simplicity is advisable.
Pair programming:
On this point I subscribe to his view completely, and refer the reader to it. I will only add one "local" qualification: I would like to know how many development teams in Latin America could afford to put pairs of programmers writing code at a single machine...
The code is all the documentation:
While not an official XP practice, it is a common belief in the XP community that source code alone can serve as the developer documentation for a project. This may be fine for the developers currently working on the project, but have you ever tried joining a project in progress with a large code base and being told that the only documentation for what they've created so far is the source code? Believe it or not, many XPers would say that is perfectly fine, and this is how they document their projects. Again here, we have a failure to consider the future and what is strategically best for the organization as a whole.
Summing up...
The best way to view XP is as a collection of principles which may be applied to your software development project individually or in sets. You should not be dogmatic in your views about XP and believe that it is an all or nothing methodology. I'd go a step further even and refer to XP as a methodology toolbox, rather than a methodology.

Agreed, Timothy. The best attitude would be to keep an XP primer at hand and reread its principles when rethinking next week's work, but as an aid to one's own predefined method of working...