Comments, discussions, and notes on trends in the development of computing technology, and on the importance of quality in software construction.
I systematically look at how readers of this blog come to it, because it is a feedback channel, an opening to new ideas. And sometimes also a surprise, considering where from, or why, they read it... Today it's Globant's turn. Following a search about Mariano Englebienne -one of the company's partners- in which this page turned up, I picked up two or three pieces of news that broaden my references on the company and on Argentina:
First, I find that I probably share a bit of history with two of its founders, Umaran and Englebienne: like them, I once attended the Mariano Moreno school in Mar del Plata, and an Umaran and I were classmates of the same graduating class... I come from a generation in diaspora, and it is not bad to get positive news from one's origins.
Second, the discovery of two blogs I have already added to my Google Reader, of the kind I would like to see more often in Argentina, although I admit that, being far away, my view may be as limited as that of Plato's cave... the two blogs are Multitag, by Salevsky and Calviño, and Espíritu Emprendedor, by Lucas. Open-minded, positive, concerned with putting Argentina on new tracks. It was through Ezequiel Calviño that I reached one of the references to Globant and its partners. Calviño posts a reference to an article in La Nación published in the Employment section. It would never have occurred to me to follow the newspaper's employment news, and otherwise I would have missed it.
Third, Englebienne's references to his company are tied to references to the Parque Científico Tecnológico in Tandil. I believe I have only touched on the park tangentially in this blog, although I have given it more space on my page about Plex. At some point it will deserve time of its own, and perhaps permanent references to all the centers of its kind in Argentina and Latin America.
Finally, the article itself, as posted by Calviño, has two or three elements I would like to reproduce here. They follow below...
So then, from Calviño's article. Why they chose Tandil:
-Why did you choose Tandil to open a new office? Tandil has the longest-standing Systems Engineering program in the country; I am a graduate of the Universidad Nacional del Centro in Tandil. There are no fewer than 30 companies operating there. That is where we recently opened our innovation lab, where we work and play with technologies. And our next development center will have a music room. If someone likes playing an instrument to clear their head, they will be able to do so.
On the potential of its human resources:
-Is there a good level of English in Argentina? It is not that hard for us to find people who speak English. I think we are fairly well positioned compared with other countries in the region. -How do you see the training of engineers and Systems graduates in Argentina? Very good. I was educated in Argentina, and the knowledge I was equipped with has never put me at a disadvantage. More and more, we will see high technology being developed in non-central countries. South Korea is a hub for plasma technology development, Israel is one for wireless networks, and I think this shows that the future is bright for Argentina if it bets on this kind of industry, which carries high added value. -What is the most important thing in a CV? I am not very demanding about CVs. When I read them I go through the work experience, and I very much like seeing the hobbies; I think they say a lot about what kind of individual is in front of you.
This May 17, Dominion called a meeting to present version 6.0 of Plex. The meeting also served to convene a user gathering, together with representatives of CA (Paulo Colaço Dias) and of Websydian (Mikkel Schnack and Søren Madsen), and to discuss aspects of the future evolution of Plex and 2E. Four presentations can be downloaded from the site: two specifically on 2E application techniques, and two on Plex and Websydian, respectively. The four most notable points of the Plex presentation: the generator's support for C# and the .NET architecture, ANT support, SOA support, and the backing of Websydian. The meeting is part of the European schedule. The program, along with other presentations, can be consulted on the Plex Wiki.
Justin Fielding, a TechRepublic blogger, analyzes Windows Vista sales over its first hundred days. Beyond remarks on its by-now traditional weak points, Fielding concludes that Vista is turning out to be a second Windows ME, considering that Vista's replacement has already been announced for 2009.
I think ultimately it’s irrelevant whether or not Windows Vista becomes the huge success Microsoft are trying to tell people it is. They have already announced that the successor to Vista is expected to launch in 2009. With such a short shelf life, Windows Vista simply looks prettier than its predecessor and fills a sales gap much in the same way Windows ME did before the launch of Windows XP.
Accustomed to policies like that of the iSeries, of growth that respects the customer's existing investment, I find a philosophy of constantly launching novelties, with no backing or support for the investment buyers have accumulated, more and more disappointing. Have you ever tried consulting the MSDN problem database? I often turn to it when working with C++. But it happens that, for compatibility reasons, my builds use VC 6.0, not the .NET version. Out of every ten searches, five can end at a nonexistent page. It is so exasperating that it is far more efficient to run the same search on Google or any other search engine, which will find the page, or whatever is left of it. Evidently, I am being invited to throw away C++ 6.0 and move to .NET.
The possible asset swap between France Telecom (Orange) and Deutsche Telekom is benefiting the whole sector in Europe. Vodafone stands out, despite being left without Ya.com: it closed up 3.67 percent, while France Telecom gained 2.08 percent and Deutsche Telekom 2.7 percent. In Spain, Telefónica rose 1.56 percent, while Jazztel, which could now become prey for Vodafone, closed with gains of 3.67 percent. For its part, Ahorro has raised the operator KPN to "buy" in view of the possible Orange-Deutsche Telekom deal. "We raise our recommendation on KPN from hold to buy, with a target price of 12.8 euros per share," comment the analysts at the savings banks' broker. The point is that Orange could sell Deutsche Telekom its Dutch mobile subsidiary in exchange for Ya.com, which would give the German telco the chance to add 2.1 million customers to the 2.6 million it already has, making it the country's second-largest operator. Ahorro Corporación believes the deal would reinforce KPN's leadership, because it would remove a player from the market and "would imply an improvement in operating margins derived from lower competitive pressure." Ahorro notes that "if the scenario of a consolidated domestic mobile market were confirmed, the target price would rise to 13.5 euros per share."
Orange wanted the 400,000 ADSL customers Ya.com has, and that's it. It has no use for the rest and, in fact, will more than likely end up shedding almost all of it. The network is duplicated, because Orange has its own equipment in 80 percent of the Telefónica exchanges to which Ya.com's customers are connected. Much of the staff will be surplus: if with Vodafone a good share of the positions were not duplicated, since Vodafone had no fixed-line business, now everything is a duplicate. ADSL television is duplicated as well; Ya.com has just launched it and, in the case of the Deutsche Telekom subsidiary, it uses the Microsoft platform, which made it possible to watch Ya.com's television using the Xbox console as a set-top box. Almost everything is now surplus, which, according to the cited sources, left Ya.com's staff desolate yesterday: they now find themselves in the hands of a company facing a restructuring after the departure of its CEO, Belarmino García, and the arrival as his replacement of Jean-Marc Vignolles, who, no, is not Spanish, and is coming to put Orange España, France Télécom's subsidiary, in order. On top of that, the deal comes cheap, being a swap, so it adds nothing to the more than 10,000 million euros that buying Amena cost.
IT Jungle publishes an interview with David McGovern, of Marlin Equity, in which he answers questions about the firm (Marlin) and about the plans for Aldon. If Aldon is a good business and pays dividends, it will prosper...
Marlin is interested in acquiring mature businesses that we think have strong management teams and are in consolidating industries. We look at businesses where revenues are driven either by having a mission-critical product or system, or brand power, or another defensible position. Despite the technology background that I have and the deals that we have done to date, we are focusing on three verticals. Technology will be the broadest and probably the most frequent area, but we are about to sign up a consumer deal and we are also focusing on healthcare businesses. Generally speaking, we are industry agnostic, so long as the company meets a certain profile for us. (...) We expect to build out with Aldon in a very similar way that we have built out in ERP software. We feel very strongly that the iSeries portion of the market is not going anywhere any time soon[1], and more particularly around the product set, Aldon has a very dominant or strong position within the iSeries, and they also have products that surround that, which positions the company well for growth and which are not just geared for iSeries customers only. There's a lot of opportunity to sell into Unix, Linux, and Windows environments where the customer might be iSeries-centric, but also operates these other environments. There is also a large installed base of iSeries customers where you can upsell additional products.
[1] He later corrects this; what he meant to say:
[Timothy Prickett Morgan, el autor del reportaje]: That has been my experience too at IT Jungle. But can I give Marlin some advice? Can we not say that the System i or iSeries market is not going anywhere? There are obviously two different connotations to that phrase. . . . and this is how my kids are going to get to college.
David McGovern: We certainly agree that the iSeries market is not going away. We think that the iSeries market is going somewhere.
While I keep moving my notes to a more accessible place, forgotten materials of interest keep turning up. The quote recalled here refers to an article on The Server Side by Griffin Caprio which, in the context of the "popularization" of Microsoft's software-factory-oriented tools, offers a more impartial and more general view of the problem than others profess. The most interesting part of the article:
Moving from Craftsmanship to Industrialization

All too often, highly skilled application developers and architects have to use their time for low-level, implementation-level tasks. Usually, junior developers are not able to complete such tasks because of lack of appropriate domain knowledge, requiring the senior developer to mentor the junior developer. This fosters not only knowledge transfer, but also an introduction to the complexities of the current development environment. Since developers are always involved at some stage of development, very little time is spent in making development more efficient, especially low-level implementation details.

This method of development resembles early goods-based industries, where single workers create custom solutions, tailored to each individual requirement. Think early tailors or shoe cobblers. This craftsmanship approach to software development does not scale very well. The lack of quality senior developers creates a mentoring environment, where specialized knowledge must be transferred, similar to an apprenticeship. Since there is such a hands-on approach required, each part of the project needs to be created, most of the time, by hand. This often leads to higher quality, but also leads to performance and efficiency issues.

Migrating from a craftsmanship-based industry to a more industrial-based industry has been the path of progression for many more mature industries. If this is the end result for so many other industries, why is software development still based on small groups of highly specialized craftsmen? Most people within the IT industry will agree that a form of standardization and modularization is the key to enabling the kind of reuse required for efficient industrialization of software development. What they don't agree on is the means by which this standardization and modularization is achieved. Software Factories aim to address this effort by prescribing a process by which software can be modularized into reusable assets.
Caprio covers the three components of Software Factories (in Greenfield's SF sense): Models and Patterns, Domain-Specific Languages (DSL), and Software Product Lines (SPL). Being an early article (March 2005), it did not yet dwell on what would later become a commercial rivalry between the OMG's initiative (Model Driven Architecture, MDA) and Microsoft's (Software Factories, SF; Domain-Specific Languages, DSL). On the role of models, Caprio says:
“using models to capture high level information, usually expressed informally, and to automate its implementation, either by compiling models to produce executables, or by using them to facilitate the manual development of executables” The importance of models comes from their ability to keep a consistent representation of concepts within a project. For example, while it’s easy to draw and manipulate a person object in a model, it’s much more difficult to manipulate the same person object at a lower level, because the person object could be represented by class files, tables, and columns in a database.
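To make the quote concrete, a minimal sketch (my own, not Caprio's) of how one model element can drive more than one lower-level representation; the ModelEntity class and its Person example are invented for illustration:

// Hypothetical illustration: one model element drives both the Java
// class and the SQL table, so the two representations cannot drift apart.
import java.util.LinkedHashMap;
import java.util.Map;

class ModelEntity {
    final String name;                    // e.g. "Person"
    final Map<String, String> attributes; // attribute name -> type

    ModelEntity(String name, Map<String, String> attributes) {
        this.name = name;
        this.attributes = attributes;
    }

    // Render the model element as a Java class skeleton.
    String toJavaClass() {
        StringBuilder sb = new StringBuilder("class " + name + " {\n");
        attributes.forEach((attr, type) ->
            sb.append("    private ").append(type).append(' ').append(attr).append(";\n"));
        return sb.append("}\n").toString();
    }

    // Render the same element as a SQL table definition.
    String toSqlTable() {
        StringBuilder sb = new StringBuilder("CREATE TABLE " + name.toLowerCase() + " (\n");
        attributes.forEach((attr, type) ->
            sb.append("    ").append(attr).append(' ')
              .append(type.equals("String") ? "VARCHAR(100)" : "INTEGER").append(",\n"));
        sb.setLength(sb.length() - 2);    // drop the trailing comma
        return sb.append("\n);\n").toString();
    }

    public static void main(String[] args) {
        Map<String, String> attrs = new LinkedHashMap<>();
        attrs.put("name", "String");
        attrs.put("age", "int");
        ModelEntity person = new ModelEntity("Person", attrs);
        System.out.println(person.toJavaClass());
        System.out.println(person.toSqlTable());
    }
}

Running it prints a Person class and a matching person table, both generated from the same definition; that consistency is exactly what the quote attributes to models.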
I leave aside his words on DSLs, which in 2005 were still just wishful thinking. As for software product lines, he notes:
Software Product Lines are entire subsets of components that can be configured, assembled, and packaged to provide a fairly complete product. The resulting product should only require customization from a developer for the highly specialized aspects of the project. Perhaps the largest component of a Software Factory, Software Product Lines not only provide the greatest value, but also require the greatest investment. Software must be carefully partitioned into distinct and reusable components and those components must readily fit together in a coherent manner. Configuration is the key to Software Product Lines, as projects must be able to pick and choose which components they want to utilize, and then generate an application off of that configuration.
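As a rough sketch of the configuration idea, with feature names invented for the example (nothing here comes from Greenfield's or Caprio's material): products of the same line differ only in the set of components their configuration selects.

// Hypothetical product-line assembly: a product is described by the
// features it selects; the "factory" wires in only those components.
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

interface Component { String describe(); }

class PaymentModule implements Component {
    public String describe() { return "payment processing"; }
}

class ReportingModule implements Component {
    public String describe() { return "reporting"; }
}

class ProductLine {
    // The reusable assets of the line, keyed by feature name.
    private static final Map<String, Component> CATALOG = Map.of(
        "payments", new PaymentModule(),
        "reports", new ReportingModule());

    // Assemble a concrete product from a feature configuration.
    static List<Component> assemble(Set<String> features) {
        List<Component> product = new ArrayList<>();
        for (String feature : features) {
            Component c = CATALOG.get(feature);
            if (c == null) throw new IllegalArgumentException("unknown feature: " + feature);
            product.add(c);
        }
        return product;
    }

    public static void main(String[] args) {
        // Two products of the same line, differing only in configuration.
        for (Component c : assemble(Set.of("payments")))
            System.out.println("basic product includes " + c.describe());
        for (Component c : assemble(Set.of("payments", "reports")))
            System.out.println("full product includes " + c.describe());
    }
}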
But the most interesting thing about this early article is that it covers not only Microsoft's initiatives but also Sun's. There is a bit more to say about those, and so the ideas around the problem open up a little further. This is how Caprio presents it:
While all of the promises of Software Factories sound appealing, many companies have tried to provide the tools and components, only to fail under the load of inflexible tools or proprietary formats. All of this is about to change. Big name companies like Microsoft and Sun are getting ready to release many of the components necessary for building and assembling a Software Factory within an organization. With the release of Visual Studio 2005, Microsoft will unveil several add-ins and plug-ins that enable the creation of not only Domain Specific Languages, but also the integration of those languages with the IDE itself. This will allow developers to manipulate and use the language from within the Visual Studio.NET IDE. Not to be outdone, Sun Microsystems is working on its own implementation of Software Factory technology, simply named 'Project Ace'. Although very few details of 'Project Ace' are available, developers shouldn't expect Sun to let Microsoft provide .NET tools without answering with a comparable set of tools for Java.
The Ace Project at Sun Labs has developed a way to fundamentally simplify the process of creating modern, web-enabled business applications.
Applications are specified in the Ace language, DASL, which is a higher level architectural programming language. This language is the gem at the heart of Ace.
The DASL language abstracts out complexities, such as persistence and distribution. Using DASL, the application writer concentrates on domain details specific to the application's purpose, instead of worrying about the application's deployment architecture, middleware APIs, remote object invocations, and other details of the implementation "stack". The DASL language defines precise application and business logic and semantics, including application-level transactions.
DASL is based on published specifications and therefore can be considered open.
Ace is not a tool, it is a language which has a reference implementation that includes an ease-of-use tool.
Although the DASL language is compiled in the reference implementation, it can also be interpreted.
The generators which come with Ace can be customized.
The next version of the reference implementation will have a pluggable Ace generator API.
DASL can call Java code.
Ace doesn't compete with Java, it works at a higher level and generates pure Java code.
The distribution details are implemented automatically at the time the application is compiled and deployed, based on the well defined semantics of the DASL language.
There is no Ace runtime.
The Ace approach results in tremendous cost and time savings when creating a J2EE Application.
As new architectures beyond J2EE evolve, new code generators can be written to support them without modifying the DASL language, and without rewriting applications written in it.
The preliminary results of this new way of building business applications have been remarkable. We have found that in typical web applications as they are written today, roughly 5 to 10% of the code has to do with domain logic, and roughly 90 to 95% has to do with distribution. Using our approach, there is a huge reduction in the number of lines of code required to specify these applications, and a corresponding huge reduction in the time it takes to write and debug the applications.
For example, the Java J2EE Pet Store application can be specified using about 1000 lines of Ace code, as compared to 14,000 lines of code in the native J2EE implementation. In fact, of those 1000 lines of Ace code, 750 of them can be created graphically via UML diagrams that are part of our GUI development tool, so the entire Pet Store is actually specified in Ace by writing only 250 lines of traditional "code". (...) The Ace project team believes that application programming languages must evolve to handle the distribution automatically. Just as early programmers who wrote in assembly language had to worry about register allocation, stack protocols for invoking library functions, and other details of the machine on which the applications were run, today's programmers worry about passing data and objects back and forth between various computers, services, languages, and data sources. We have defined DASL as the first of a new breed of higher level languages that abstract out the distribution details and handle them efficiently by capturing the global semantics of the application. (...) The benefits of our approach to application writing include:
Huge reduction in the time it takes to write and debug applications (many fewer lines of code).
Ability to retarget existing applications to new deployment architectures without having to change the application specification. Retargeting is crucial for several reasons:
Applications initially written on a small scale (smaller number of simultaneous users) often must be redeployed with additional tiers if they become popular, e.g., the customer base grows.
As hardware and technology advance, existing applications must often be rewritten for the new technology.
Automatic verification of correctness of application-level transaction semantics.
Global optimization of communication between the browser, web server, proxy server, application server, and database server tiers. While such optimization is possible today, it is so tedious and time-consuming that it is often not done.
Even though the ACE project no longer seems to be active, studying its goals still helps form a broader idea of the problem to be solved. On its DASL language, see Wikipedia.
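To visualize the separation DASL promised, here is a hypothetical Java analogy; the Order class is my invention, not Ace code, and the generated pieces are only named in comments:

import java.math.BigDecimal;

// The only thing the application writer states is domain logic:
// no JDBC, no remote interfaces, no deployment detail.
class Order {
    private BigDecimal total = BigDecimal.ZERO;

    void addItem(BigDecimal unitPrice, int quantity) {
        if (quantity <= 0) throw new IllegalArgumentException("quantity must be positive");
        total = total.add(unitPrice.multiply(BigDecimal.valueOf(quantity)));
    }

    BigDecimal total() { return total; }
}

// In the DASL model, the equivalents of the following would not be
// written by hand; a generator targeting a chosen architecture would
// emit them from the same specification:
//   - persistence mapping (tables, object/relational glue)
//   - remote interfaces and stubs between tiers
//   - transaction demarcation around application-level operations

The claimed 5-to-10 percent figure for domain logic refers precisely to the part above the line; everything below it is what the Ace team proposed to generate.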
Aldon, one of the three or four leaders in change management, configuration, and versioning tools in the iSeries (AKA AS/400) market, announced these days that it has been acquired by an investment firm (Marlin Equity Partners), as part of a package of purchases more or less aimed at strengthening its position in the ERP market: CMS and XKO. There is a fitting comment on the deal in System iNetwork; Chris Maxcer says about this kind of acquisition:
There are two common ways that private equity firms look to make a profit when they purchase software companies:
They buy a company, remove much of the staff, reduce operational costs by slashing marketing and sales, and end up with a product or maintenance revenue stream that is suddenly highly profitable — for a short period of time, at least, before the company's dwindling assets get split, sold, or fall off the face of the earth. Sometimes, when this happens, the company puts on a joyous face of rapture over the acquisition as a way to hide the impending destruction realignment.
They buy a company, sometimes beleaguered and sometimes not, and start growing it by pumping in investment dollars in the form of product enhancements, additional acquisitions to supplement existing products, or go-to-market teams. Sometimes they, too, make layoffs and eliminate inefficiencies, but most definitely it's with an eye toward a bigger future that will eventually take the new company public or lead it to a new sale that returns the invested amounts, along with a tidy profit, in a 3-to-5-year time frame.
Aldon, a leading software change-management System i-focused vendor, is most definitely the second.
But How Do You Know?
Initial acquisitions, under both common formats, often appear the same to the outside world, but in the first method noted above, the company simply can't keep up the charade for more than a few months. They say they are interested in growth, but there's no evidence of growth — no acquisitions, no real or compelling product enhancements, and certainly no new products.
In the second, the company buys other companies, enhances products, pours development into new products, works out better go-to-market strategies, and hires new talent. In addition, the company stays in touch with the relevant media outlets in its market and is eager to talk about its new efforts, the results, and to share its excitement for both the market and what the company is up to.
What are the prospects? Judging only by the projects that predate the acquisition:
"We've been the largest provider of change-management systems in the System i marketplace, and what we want to do is provide our System i customers with more things as they are moving into new technology areas," Magid says[Dan Magid, ex CEO, que se mantiene como consultor dentro de la nueva sociedad]. "The marketplace for the traditional System i management is pretty mature, but the marketplace for the things they are doing around their System i applications, building web interfaces and web applications around their iSeries code or building Windows interfaces or putting services that talk to their traditional iSeries applications . . . that is changing and moving forward very rapidly. We want to be able to provide our customers with solutions in that kind of arena." Specifically, Aldon is looking into potential testing tools, business process automation tools that help companies work through the business process changes that coincide with new software application rollouts, build process tools for the open source environment, and tools that manage non-DB2 database changes. "Our strategy is to be like an ERP system for the IT application development and management organization," Magid says.
In years past, MKS and Softlanding also went through acquisition processes, which in the first case undoubtedly strengthened the company (from its origin in Silvon) and in the second sustained it. The same will probably happen here.
Also from my old notes, IBM's page on design pattern research: good materials and a good bibliography on a subject that has become common property of software development. Design patterns have become a basic component of the model-based design standard, and both will probably turn into a pillar of software product line (SPL) development, in the sense the SEI gives it. Among other notable materials, Automatic code generation (Vlissides et al.):
Automatic code generation adds a dimension of utility to design patterns. Users can see how domain concepts map into code that implements the pattern, and they can see how different trade-offs change the code. Once generated, the user can put the code to work immediately, if not quite noninvasively. Much remains to be explored. The concept of design patterns is in its infancy--the tools that support the concept, even more so. Our tool is just a start. It exploits only a fraction of the intellectual leverage that design patterns provide. For example, the tool is limited to system design and implementation; it does not support domain analysis, requirements specification, documentation, or debugging. All of these areas stand to benefit from design patterns, though at this point it might not be clear exactly how. Then again, the principles underpinning our tool were not clear until we had experience using patterns for design. Application is the first and necessary step; only then can we hope to automate profitably.
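The quote does not show the tool's output, so as a toy illustration of "domain concepts mapped into code that implements the pattern", here is an invented miniature generator that instantiates an Observer skeleton from two domain names; it has nothing to do with the actual IBM tool:

// Toy pattern generator: given two domain names, emit the skeleton of
// an Observer pattern. The real tool is far richer; this only shows the
// kind of concept-to-code mapping the quote describes.
class PatternGenerator {
    static String observerPattern(String subject, String observer) {
        return "interface " + observer + "Listener {\n"
             + "    void on" + subject + "Changed(" + subject + " source);\n"
             + "}\n\n"
             + "class " + subject + " {\n"
             + "    private final java.util.List<" + observer + "Listener> listeners =\n"
             + "        new java.util.ArrayList<>();\n"
             + "    void subscribe(" + observer + "Listener l) { listeners.add(l); }\n"
             + "    void notifyChanged() {\n"
             + "        for (" + observer + "Listener l : listeners) l.on" + subject + "Changed(this);\n"
             + "    }\n"
             + "}\n";
    }

    public static void main(String[] args) {
        // Map the domain concepts Account and Auditor onto the pattern.
        System.out.println(observerPattern("Account", "Auditor"));
    }
}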
All the associated material will have to be systematized...
I keep sorting old notes: this article was also published in Methods & Tools, in its Summer 1999 issue. Written by Robert Bamford and William J. Deibler II, of SSQC, it compares the views on software Configuration Management (CM) held by the IEEE (Institute of Electrical and Electronics Engineers), ISO (International Organisation for Standardisation), and the SEI (Software Engineering Institute). Parts 1, 2 and 3. A good article and an excellent bibliography.
Cleaning up old notes, I keep finding information of interest that I will move here (and to del.icio.us) so as not to lose it. In Methods & Tools, I highlight an article on GUI testing (1), (2) and (3) (from 2000), as well as its author, Barry Dorgan, who makes other articles available on his page. Dorgan is part of the Software.Testing group (1) and (2). On the side, from the Software.Testing material, two or three software testing sites worth bookmarking, especially TestingFaqs, and Dave Eaton's Software Configuration Management FAQ.
Mark Dalgarno, on his blog, has been commenting for a few days now on the sessions of Code Generation 2007, which shows the growth of techniques aimed at building software in a more rigorous and reliable way. As the meeting's page says, "Our session leaders are recognised experts in Software Factories, Domain-Specific Languages, MDA and Generative Programming"; that is, the various approaches that exist today to the "industrial" generation of software converge there. Two interviews give a sense of the content of the discussions: with Fabrizio Pugnetti, of Artisan, and with Tony Clark, of Xactium. On Dalgarno, more another time.
I reached the article through the labyrinths of del.icio.us, following a note from the same association about Google. In October 2006, Elisabeth Grant wrote about the value of Wikipedia (Wikipedia: Valuable Resource or Abyss of Misinformation?), without settling on a position, but pinpointing several of the problems the project entails. Her observations predate the decisions since taken by Wikipedia's leadership, but they remain fundamental:
The Internet is often the first place many students go when gathering research for a paper, project, or other class assignment. And while there are many excellent and invaluable resources available online, the quality of one site is still under debate: Wikipedia. Wikipedia is the marriage of the wiki software, which allows the public access to edit and update pages of a site, with the structure of the encyclopedia. As mentioned in yesterday’s blog post on a related resource, Wikimapia, allowing anyone to update and edit is both productive (allowing much more information to be contributed), and problematic (who checks to make sure that new contents and edits are correct?). As more and more people are turning to Wikipedia for answers, particularly students who are using Wikipedia as a source, it becomes more important to ask: Can we trust Wikipedia?
Several related articles branch out from this initial invitation to discuss the project. The most interesting is the one written by Roy Rosenzweig.
As always, the comp.object group hosts good discussions on object-oriented design, frequently interwoven with procedural programming or relational databases, particularly through the participation (or interference) of some advocate of the procedural style. In the case at hand, how to represent third normal form using classes, in the words of H.S. Lahman:
I really hope this was not a homework question... B-)
> How object-oriented design can be normalized? Normalization, anyway
> related to OOD?
The Class Model in UML is underlain by the same relational data model branch of set theory that underlies DBMSes. As a result the Class Model needs to be normalized to Third Normal Form just like an RDB schema. [Normal Forms above third are rarely relevant because they mostly deal with identifier conventions and we rarely use explicit identifiers for objects.]
Most OOA/D authors don't talk explicitly about normalization. However, every OOA/D author will provide a suite of rules for constructing Class Models that essentially ensure Third Normal Form under the guise of things like one-fact-one-place. For example, the most comprehensive book available on Class Modeling is Leon Starr's "Executable UML: How to Build Class Models". Leon doesn't even mention Normal Form as far as I recall, yet he provides the most comprehensive set of guidelines for normalization that I have seen.
In a nutshell we have:
1NF: all responsibilities must be a simple domain. For knowledge attributes this means that the attribute must be described in terms of an abstract data type (ADT) that can be manipulated as if it were a scalar. For attributes that can be expressed in terms of fundamental values, the domain of data values must have a single semantics. So a domain of {UNSPECIFIED, 5, 6, 7} is invalid because it captures two separate semantics: valid data values of 5, 6, 7 and whether or not the data is specified at all.
For behaviors this means that the behavior responsibility must be cohesive and self-contained. Self-contained means it can depend on knowledge attributes but it can't depend upon other object's behavior responsibilities. (Note that this comes for free if one follows the methodology's dictums about encapsulation.)
2NF: all responsibilities are fully dependent on the object identity. Typically objects do not have explicit identity attributes but they do have an unambiguous mapping to some uniquely identifiable problem space entity. This means the "value" of the property depends solely on what problem space entity is abstracted in the object.
As a practical matter 2NF is not very relevant to OO development because it is really about compound identifiers (i.e., multiple attributes combine to define the object identity). What 2NF is saying is that if there are multiple explicit identity attributes, then the "value" of a non-identity attribute must be dependent on /all/ of the identity attributes, not just some of them. A classic example of this is:
The style attribute is clearly dependent on the particular House identity, which must be fully specified. The same thing seems true for the 'builder' attribute since each House is built by one builder. But suppose construction policy is that a builder builds all the houses in a particular subdivision. Now the 'builder' value is fully specified if one only knows {developmentID, subdivisionID}. So the 'builder' attribute really belongs in the [Subdivision] class. [Note that if the development is seriously homogenized, all Houses in the same subdivision might have the same style. In that case, style also belongs in [Subdivision].]
3NF: all responsibilities depend upon nothing but the object identity. Essentially this means that the "value" of a responsibility cannot depend upon knowledge attributes that are not explicit identity attributes. A classic example of this problem is:
The problem here is that it is highly unlikely that 'cost' is only dependent on the House identity. In fact, it is probably dependent on the style or on the combination of {builder, style}. IOW, only the /combination/ of {builder, style, and cost} is dependent solely on the identity of House, not the individual values. So, assuming cost is solely dependent on style, we need:
where the unique combination of values is captured indirectly through the relationship to [Style].
---
One must be careful not to confuse coincidental values or data domains with dependency. Consider Washing Machine and Refrigerator objects that both have a 'color' attribute. If the colors are designed to be color coordinated from the same manufacturer, they will have identical data domains for 'color'. It is quite possible that an object from both sets may be colored chartreuse. Nonetheless they are quite different things. How is it that the 'color' attribute doesn't violate 1NF (same data domain semantics) and 3NF (both have the same color)?
The trick is to think of such generic qualities in terms of 'color of'. IOW, the color of a Washing Machine is chartreuse and the color of a Refrigerator is chartreuse. Thus the color of a Washing Machine is not semantically the same as the color of a Refrigerator even though the value is the same. Similarly, we have:
         [Appliance]
          + color
              A
              |
     +--------+--------+
     |                 |
[Refrigerator]  [Washing Machine]
The notion of the color of an Appliance is something shared; it raises the level of abstraction of color of a Refrigerator and color of a Washing Machine to a common ground for both. Since a Refrigerator is an Appliance, it has the color attribute.
This example, though, underscores an important difference between normalization applied to OO Class Models and the Data Models used for RDB Schemas. In the OO case Refrigerator can implement a different data domain than a Washing Machine for the 'color' attribute (i.e., they aren't required to be color coordinated). That is not true in Data Models where there will be exactly one data domain for 'color'.
The reason is that in Data Modeling the [Appliance] table is instantiated separately from the subclass tables and the subclass tables do not have a 'color' attribute. So there is only one attribute in one table with one data domain.
In contrast, in the OO Class Model the superclasses cannot be instantiated separately so an object resolves the entire tree. Thus the object identity /includes/ the [Appliance] properties. In addition, the Class Model only identifies the responsibility (i.e., What it is); it does not define its implementation (i.e., How it does it).
So Refrigerator and Washing Machine can provide different implementations of the 'color' attribute, such as different data domains. IOW, we resolve object identity at the leaf level of the tree through inheritance. Thus unique data domains can be associated with subclasses even though the responsibility is identified in a superclass.
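To pin down Lahman's House example, a minimal Java sketch of the 3NF refactoring he describes; the class and attribute names follow his prose, but the code itself is my own:

import java.math.BigDecimal;

// Before the fix, 'cost' would sit on House although it actually depends
// on 'style', not on the House's own identity -- the 3NF violation above.
// After the fix, cost lives on Style and House reaches it through the
// relationship, mirroring the [Style] class in the discussion.
class Style {
    final String name;
    final BigDecimal cost;   // depends only on the Style identity

    Style(String name, BigDecimal cost) {
        this.name = name;
        this.cost = cost;
    }
}

class House {
    final String developmentId;   // identity in the problem space
    final String subdivisionId;
    final int houseNumber;
    final Style style;            // cost is reached through [Style]

    House(String developmentId, String subdivisionId, int houseNumber, Style style) {
        this.developmentId = developmentId;
        this.subdivisionId = subdivisionId;
        this.houseNumber = houseNumber;
        this.style = style;
    }

    BigDecimal cost() { return style.cost; }   // delegation, not duplication
}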
A few days ago, on a blog I follow regularly, a couple of words were exchanged about China's present and future in information technology: the official Chinese organ spoke of a million people working in the country's software industry, and there the real value of that figure was discussed, setting it against the state of the industry in India. Well, Dr. Dobb's devotes a brief note to the state of high technology in China, which puts the matter in better perspective:
The high-tech trade association AeA, formerly known as the "American Electronics Association," has released the 14th edition of its competitiveness series. The report outlines the "15 Year Science and Technology Plan" announced by Chinese leaders in January 2006 and analyzes China’s capacity to implement the plan.
China’s 15-Year Plan is designed to boost science, technology, and innovation with the long-term goal of becoming a preeminent global economic and technological power. It calls for China to raise R&D spending from the current 1.4 percent of its economic output to 2 percent by 2010 and 2.5 percent by 2020. With China’s GDP growing by over 7 percent per year, these commitments would put Chinese R&D investments above $100 billion annually, placing it in the same league as the U.S. and Japan.
Among the highlights, the report points out that:
In early 2006, China announced a 15-year plan to boost science, technology, and innovation with the long-term goal of becoming a preeminent global economic and technological power.
Though every detail of the plan has not been made public, it calls for China to raise R&D investment from the current 1.4 percent of its economic output to 2.0 percent by 2010 and 2.5 percent by 2020.
China is pouring investment into its universities to create world-class education and research centers. Since 1998, state financing for higher education has more than doubled, reaching $10.4 billion in 2003.
China had 926,000 researchers in 2004, second only to the United States -- 77 percent more than it had in 1995.
China’s 15-year plan intends to move the country beyond its current reliance on foreign technology to spawn "indigenous innovation." China faces challenges in enacting its 15-year plan: protecting intellectual property; reforming capital markets; encouraging risk taking; and embracing the free flow of ideas required for innovation.
China's growing technology industry is largely dominated by multinational companies and much of this is low value-added, labor intensive manufacturing. The 15-Year Plan intends to change that by investing heavily in such cutting-edge areas as nanotechnology and biotechnology to spawn "indigenous innovation."
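A back-of-the-envelope check of the "$100 billion" figure above; the 2006 base GDP of roughly 2.7 trillion dollars is my assumption, not a number taken from the report:

// Rough check of the report's R&D projection under an assumed base GDP.
class RdProjection {
    public static void main(String[] args) {
        double gdp2006 = 2.7e12;   // ASSUMED 2006 GDP in USD, not from the report
        double growth = 1.07;      // "growing by over 7 percent per year"
        double gdp2020 = gdp2006 * Math.pow(growth, 14);   // 2006 -> 2020
        double rd2020 = gdp2020 * 0.025;                   // 2.5 percent target
        System.out.printf("GDP 2020: ~%.1f trillion USD%n", gdp2020 / 1e12);
        System.out.printf("R&D 2020: ~%.0f billion USD%n", rd2020 / 1e9);
        // Prints roughly 7.0 trillion and 174 billion: comfortably above
        // the 100 billion dollar threshold the report mentions.
    }
}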
The report in question, in case it needs repeating, can be downloaded here.
FTPOnline publishes an introductory piece on MDA (Model Driven Architecture, for those reading here for the first time), with the particularity that it discusses applying MDA to service-oriented architectures (SOA). Given the architecture of any MDA-based generator, there is no doubt this is possible. This is how Steve Andrews, the author of the piece, presents the subject:
Despite the advances in SOA technologies and techniques, a composite application is only as good as its underlying individual application services. In this article you'll explore how raising the level of abstraction and using model-driven techniques can address the most common issues associated with building applications.
After outlining the main elements of a model developed under MDA (a conceptual model (CIM), a platform-independent model (PIM), a model tied to each implementation platform (PSM), and a transformation model (TM)), Andrews turns to SOA:
Model-driven development (MDD) enables a more rapid, complete and direct path from a business problem to an executable business solution. MDD transforms the business knowledge in a PIM into a PSM based on knowledge about a technical platform, and then transforms the PSM into a PSI that can be deployed to a run-time environment. The fidelity of the transformation model determines how rapidly and completely this process can be completed. A PSI can be developed manually based on a PIM and PSM, but it will take longer and may introduce human error (see Figure 1).

You can employ several MDD strategies. For example, architected model-driven development (AMDD) combines application frameworks (which handle common application infrastructure concerns) with MDD generative techniques. AMDD further decreases the amount of manual development that must be done by raising the level of technical abstraction. Ultimately, all MDD variants are focused on delivering business results without letting the technology get in the way.

MDD is not a silver bullet that obviates the need for solid software engineering fundamentals. Good requirements that unambiguously define the problem are still important, as their fidelity in the CIM determines the potential fidelity of the PIM. Luckily, since MDD compresses the actual coding time, it frees up more time to devote to understanding user needs. Over time, as transformation- and platform-specific models evolve, the need for higher-priced development resources will decline.

Since the application architecture is "baked into" the generated PSM, you can focus on writing business logic almost exclusively. For example, if part of the application architecture exposes behavior using Web services, you can encapsulate the rules for generating Web service interfaces in the transformation model one time and apply it many times. Your alternative is manually re-implementing the Web service technical details for each service, which takes away from time spent focusing on the services' business logic.
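As a toy rendering of "encapsulate the rules for generating Web service interfaces one time and apply them many times"; the ServiceModel and the transformation below are invented for illustration and are not taken from Andrews's article:

import java.util.List;

// Miniature MDD transformation: a platform-independent service model is
// turned into a platform-specific Java interface by one reusable rule,
// instead of hand-writing each service binding.
class ServiceModel {                  // stands in for the PIM
    final String name;
    final List<String> operations;

    ServiceModel(String name, List<String> operations) {
        this.name = name;
        this.operations = operations;
    }
}

class WebServiceTransformation {      // stands in for the transformation model
    static String toJavaInterface(ServiceModel model) {   // emits the PSI
        StringBuilder sb = new StringBuilder(
            "@javax.jws.WebService\npublic interface " + model.name + "Service {\n");
        for (String op : model.operations)
            sb.append("    String ").append(op).append("(String request);\n");
        return sb.append("}\n").toString();
    }

    public static void main(String[] args) {
        // The same rule applies, unchanged, to every service in the portfolio.
        ServiceModel orders = new ServiceModel("Order", List.of("place", "cancel"));
        System.out.println(toJavaInterface(orders));
    }
}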
But the most interesting part is that Andrews is the author of an open source MDA project: Atlas. There are currently many open source efforts of this kind, and they represent a unique opportunity to raise a team's productivity and to learn a superior approach to software development. Many of them can be found at Code Generation.