Monday, September 24, 2007

MSXML enters its farewell phase...

The news at InfoQ:

Back in March we reported that Microsoft was going to "killbit" MSXML 4. Due to its wide use and a lack of a suitable replacement, they have rescinded that decision.
Non-critical support for MSXML 4 ended a long time ago. However, the replacement, MSXML 6, lacks CAB support, which in turn makes switching to it more difficult than necessary.
CAB files are compressed files similar to ZIP, but specifically designed for Windows OS and Software components. They can be signed like ActiveX components, making them a primary means for deploying extensions to Internet Explorer.
Early responses to this are not favorable. Complaints are centralized around Microsoft waiting until just before the cut-off date to reverse their decision and complaints that lack of CAB support was a well-known issue that should have been addressed months ago.
For the time being, security issues will be addressed in MSXML 4 while new features and performance enhancements will only be given to MSXML 6. Microsoft's XML team has also promised to better integrate MSXML 6 into the OS deployments, presumably via Windows Updates and future service packs. There is no word on how long this reprieve will last or when MSXML 6 will finally be fully supported.

It is worth reading the original post InfoQ draws on.
We will have to plan for supporting additional parsers where applicable (a task still pending). As usually happens, backward compatibility will probably be arduous. At some point we will discuss update policies...
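The "support more parsers" task noted above is essentially a fallback chain: try the preferred parser (MSXML 6) and fall back to the older one (MSXML 4) when it is absent. A minimal sketch in Python; the ProgID strings are the real MSXML ones, but the selection logic and the stub factories are purely illustrative, not Microsoft's API:

```python
# Hypothetical parser-selection sketch: probe a list of candidate
# factories in preference order and keep the first one that works.
# On Windows the factories would be COM ProgIDs such as
# "Msxml2.DOMDocument.6.0" and "Msxml2.DOMDocument.4.0"; here we
# simulate them with plain callables so the pattern is portable.

def make_parser(candidates):
    """Return (name, parser) for the first factory that succeeds."""
    errors = {}
    for name, factory in candidates:
        try:
            return name, factory()
        except Exception as exc:  # a missing COM class would raise here
            errors[name] = exc
    raise RuntimeError(f"no XML parser available: {errors}")

def msxml6():
    raise OSError("MSXML 6 not installed")  # simulate absence

def msxml4():
    return object()  # stand-in for a real DOM document object

name, parser = make_parser([("Msxml2.DOMDocument.6.0", msxml6),
                            ("Msxml2.DOMDocument.4.0", msxml4)])
print(name)  # falls back to the 4.0 ProgID in this simulation
```

The same shape works for any deployment where two parser generations must coexist during a transition.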

Sunday, September 23, 2007

Software Factories: Horizontal and Vertical? Two or three more notes

Nothing more on Jezz's article, apart from two or three final notes:
  • Product Lines cannot ignore platform variations. Jezz's statement on this subject amounts to an important limitation of SF, if it is indeed so (and it very probably must be):
    Now, I don’t want to set the expectation that this mapping can or should be platform independent (such that MDA promotes) since, platform independence is a much larger challenge than technology or implementation independence. And one that practical software factories in general avoid. Platform independence requires platform independent architectures, and effective specific software factories will define successful platform specific architectures. Remember, software factories are very solution domain specific. Platform independent models are likely to be too abstract to become practically useful for product line development.
  • The idea of concentrating on "horizontal factories" is a market decision, a legitimate one, but it does not represent every way of understanding "core assets".
  • The issue raised about intellectual property (three layers of participants) leads to a topic that is important, unavoidable, and hard to resolve under these platform and base-framework constraints: the integration of "legacy assets".
From here on we will return to these topics, but no longer around Jezz's article.

Saturday, September 22, 2007

A subtle variation on SFs...

Following up on the footnote to the first entry on this subject, I am looking back at the aggressive Software Factories versus MDA/UML campaign. Revisiting Jack Greenfield's blog, last updated on November 24, 2006, I find that the paper mentioned there on factories in Visual Studio has disappeared. The discussion 1 and paper 2, the one most used for quite some time, are still there. Meanwhile, Keith Short published the last entry on his blog in January 2006. Perhaps Microsoft has preferred to develop its ideas without the ironic jabs at MDA and UML that can still be seen on Greenfield's and Short's sites.

Software Factories: Horizontal and Vertical? III

Returning once more to Jezz Santos's post, I want to take up one of the critical points he observes in the Software Factories model: variability, which in his post is ultimately the fundamental point that allows a "vertical factory" to be built on top of a "horizontal" one. Jezz says on the subject:
In order for a horizontal software factory itself to be reusable in multiple vertical domains (which is where it is ultimately destined for) there has to be some level of customization that allows it to be specialized towards any particular vertical domain. If we don’t allow this, then we simply can’t adapt a factory to create specific enough product lines, which means we are limited to the productivity we can achieve with it as a reusable asset. That’s not to say we can’t. We can do the product lines, but the overhead of doing so is much larger than it needs to be, because some things in the particular domain will either be common, or specific across the whole product line for that domain. Remember, the power of factories is in how specific they can be.
And what he observes about the variability available today (in his judgment):

Verticalizing a factory today can seem impossible given the tool-sets we have at our disposal.
For most customizations of today’s factories you need to ‘crack open’ the factory and mess about with the source code internals of the recipes, DSL’s and the like. This is not a guided experience, and requires very deep technical knowledge of the factory, and deep technical VS extensibility skills. Not an easy ramp-up for those new to factories.
Extending a factory today is a function of what external interfaces the factory may expose today (if any at all!). For example the patterns & practices Web Service Software Factory does a good job of providing a number of extensibility points and interfaces, both via a programming model and configuration. But anything beyond what it supports at the programming API level, and you'll have to crack it wide open, effectively voiding the warranty and support policy of the factory. It’s wild-west country from there onwards.

Further on, again on what is possible today:
(...) we would need a way of formally declaring the points of variability of the product architecture of a base (horizontal) factory. These declared points of variability could then be provided with default values/views/assets by the base factory. Or the factory may declare them ‘abstract’ requiring vertical factories to be built to subclass them and extend these points through customization interfaces.
We would then need a ‘customization authoring tool’ that we use to load the base factory into, and extend these pre-defined customization points with standard tools, that creates for us a set of customized assets output into a vertical factory in some form.
In this way we would not have to crack open a software factory any alter recipes or DSL’s just to predefine say, the data contracts of a web service product. Instead we can provide a simple subclass that hooks into a predefined point of variability customization interface and either provide fixed factory configuration, or extend the architecture of the product.
However, I feel this customization capability may be too far out for us right now. Instead we need to look at practical means today to achieve the same net effect (albeit a less guided experience).
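Jezz's proposal, declared variation points that a base (horizontal) factory exposes with defaults or marks "abstract" so that a vertical factory must subclass them, maps naturally onto ordinary subclassing. A speculative sketch; all class and method names here are invented for illustration, and no such API exists in the factory tooling he describes:

```python
from abc import ABC, abstractmethod

class HorizontalFactory(ABC):
    """Base (horizontal) factory: declares its variation points."""

    # A variation point with a default that a vertical factory MAY override.
    def transport(self):
        return "http"

    # An "abstract" variation point: vertical factories MUST provide it.
    @abstractmethod
    def data_contracts(self):
        """Return the domain-specific data contracts of the product."""

    def build_product(self):
        # Fixed product architecture, parameterized by the variation points.
        return {"transport": self.transport(),
                "contracts": self.data_contracts()}

class BankingFactory(HorizontalFactory):
    """Vertical factory: fills in the declared variation points."""

    def data_contracts(self):
        return ["Account", "Transfer"]

product = BankingFactory().build_product()
print(product)  # {'transport': 'http', 'contracts': ['Account', 'Transfer']}
```

The point of the sketch is only that predefined customization interfaces let the vertical factory hook in without "cracking open" the base factory's internals.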
Jezz insists on this state of the problem at several points while examining ways to "verticalize", but that is the framing, without going into further detail... His words convey a sense of immaturity in the scheme as a whole, with solutions not yet hardened for basic aspects of what it must achieve (somewhat unlike the categorical claims found in some papers).
To take some distance from this view of the problem, it is worth rereading how the SEI defines variability:

All architectures are abstractions that admit a plurality of instances; a great source of their conceptual value is, after all, the fact that they allow us to concentrate on design while admitting a number of implementations. But a product line architecture goes beyond this simple dichotomy between design and code–it is concerned with identifying and providing mechanisms to achieve a set of explicitly allowed variations (because when exercised, these variations become products). Choosing appropriate variation mechanisms may be among the product line architect's most important tasks. The variation mechanisms chosen must support

  • the variations reflected in the products. The product constraints (see Core Asset Development) and the result of a scoping exercise (see the "Scoping" practice area) provides information about envisioned variations in the products of the product line–variations that will need to be supported by the architecture. These variations often manifest as different quality attributes. For example, a product line may include both a high-performance product with enhanced security features and a low-end version of the same product.

  • the production strategy and production constraints (as described in Core Asset Development). The variation mechanisms provided by the architecture should be chosen carefully, so they support the way the organization plans to build products.

  • efficient integration. Integration may assume a greater role for software product lines than for one-off systems simply because of the number of times it's performed. A product line with a large number of products and upgrades requires a smooth and easy process for each product. Therefore, it pays to select variation mechanisms that allow for reliable and efficient integration when new products are turned out. This need for reliability and efficiency means some degree of automation. For example, if the variation mechanism chosen for the architecture is component selection and deselection, you will want an integration tool that carries out your wishes by selecting the right components and feeding them to the compiler or code generator. If the variation mechanism is parameterization or conditional compilation, you will want an integration tool that checks the parameter values for consistency and compatibility before it feeds those values to the compilation step. Hence, the variation mechanism chosen for the architecture will go hand in hand with the integration approach (see the "Software System Integration" practice area).

Support for variation can take many forms (and be exercised many times [Clements 2002c, p. 64]). Mechanisms to achieve variation in the architecture are discussed under "Example Practices."

Products in a software product line exist simultaneously and may vary from each other in terms of their behavior, quality attributes, platform, network, physical configuration, middleware, and scale factors and in a multitude of other ways. Each product may well have its own architecture, which is an instance of the product line architecture achieved by exercising the variation mechanisms. Hence, unlike an organization engaged in single-system development, a product line organization will have to manage many related architectures simultaneously.

There must be documentation for the product line architecture as it resides in the core asset base and for each product's architecture (to the extent that it varies from the product line architecture). For the product line architecture, the views need to show the variations that are possible and must describe the variation mechanisms chosen with the rationale for the variation. Furthermore, a description–the attached process–is required that explains how to exercise the mechanisms to create a specific product. The views of the product architectures, on the other hand, have to show how those variation mechanisms have been used to create this product's architecture. As with all core assets, the attached process becomes the part of the production plan that deals with the architecture.

Regarding variation mechanisms, we transcribe their definition:

Variation mechanisms (1): Jacobson, Griss, and Jonsson discuss the mechanisms for supporting variability in components (see the following table) [Jacobson 1997a]. Each mechanism provides a different type of variability. The variation of functionality happens at different times depending on the type. Some of these variation types are included in the specification implicitly. For example, when a parameter is used, the specification includes the specific type of component mentioned in the contract or any component that is a specialization of that component. In the template instantiation example below, the parameter to the template is Container, which permits variation implicitly via the Inheritance pattern. The Container parameter can be replaced by any of its subclasses, such as Set or Bag.

One aspect of variability that is important in a product line effort is whether the variants must be identified at the time of product line architecture definition or can be discovered during the individual product's architectural phase. Inheritance allows for a variant to be created without the existing component having knowledge of the new variant. Likewise, template instantiation allows for the discovery of new parameter values after the template is designed; however, the new parameter must satisfy the assumptions of the template, which may not be stated explicitly in the interface of the formal parameter. In most cases, configuration further constrains the variation to a fixed set of attributes and a fixed set of values for each attribute.
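The Container example in the quoted passage can be made concrete. Inheritance lets a new variant appear without the existing component knowing about it, and a parameterized (template-like) component accepts any subtype that satisfies its assumptions. A small Python sketch of both points; the class names follow the quote, but the code itself is illustrative:

```python
class Container:
    """Existing component: knows nothing about future variants."""
    def __init__(self):
        self.items = []
    def add(self, item):
        self.items.append(item)

class Set(Container):          # variant: rejects duplicates
    def add(self, item):
        if item not in self.items:
            self.items.append(item)

class Bag(Container):          # variant: plain multiset behaviour
    pass

def exception_handler(container_cls):
    """'Template instantiation': the parameter is any Container subtype.
    The template only assumes the add() contract, nothing more."""
    log = container_cls()
    log.add("timeout")
    log.add("timeout")
    return log.items

print(exception_handler(Bag))  # ['timeout', 'timeout']
print(exception_handler(Set))  # ['timeout']
```

As the SEI text notes, the template's assumptions (here, that `add` exists) may never be stated explicitly in the formal parameter, which is exactly what makes late-discovered variants fragile.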

Types of Variation [Jacobson 1997a]

  • Inheritance (at class definition time): Specialization is done by modifying or adding to existing definitions. Example: LongDistanceCall inherits from PhoneCall.
  • Extension (at requirements time): One use of a system can be defined by adding to the definition of another use. Example: WithdrawalTransaction extends BasicTransaction.
  • Uses (at requirements time): One use of a system can be defined by including the functionality of another use. Example: WithdrawalTransaction uses the Authentication use.
  • Configuration (previous to runtime): A separate resource, such as a file, is used to specialize the component. Example: JavaBeans properties file.
  • Parameters (at component implementation time): A functional definition is written in terms of unbound elements that are supplied when actual use is made of the definition. Example: calculatePriority(Rule).
  • Template instantiation (at component implementation time): A type specification is written in terms of unbound elements that are supplied when actual use is made of the specification. Example: ExceptionHandler.
  • Generation (before or during runtime): A tool that produces definitions from user input. Example: Configuration wizard.
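The Configuration mechanism, a separate resource that specializes the component before runtime, is what a JavaBeans properties file does. A minimal Python analogue using `configparser`; the section, keys, and values are invented for illustration:

```python
import configparser
import io

# A properties-style resource, shipped alongside the component.
PROPS = """
[billing]
currency = EUR
retries = 3
"""

class BillingComponent:
    """Component specialized by an external resource, not by code changes."""
    def __init__(self, config):
        self.currency = config.get("billing", "currency")
        self.retries = config.getint("billing", "retries")

config = configparser.ConfigParser()
config.read_file(io.StringIO(PROPS))  # in practice: config.read("billing.ini")
component = BillingComponent(config)
print(component.currency, component.retries)  # EUR 3
```

Swapping the file swaps the variant; the component's code is untouched, which is the whole point of this mechanism.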

Variation mechanisms (2): Anastasopoulos and Gacek expound on a somewhat different set of variation options that includes [Anastasopoulos 2000a]

  • aggregation/delegation: an object-oriented technique in which the functionality of an object is extended by delegating the work it cannot normally perform to an object that can. The delegating object must have a repertoire of candidates (and their methods) and assumes a role resembling that of a service broker.
  • inheritance: which assigns base functionality to a superclass and extended or specialized functionality to a subclass. Complex forms include dynamic and multiple inheritance, in addition to the more standard varieties.
  • parameterization: as described above
  • overloading: which means reusing a named functionality to operate on different data types. Overloading promotes code reuse but at the cost of understandability and code complexity.
  • properties in the Delphi language: which are attributes of an object. Variability is achieved by modifying the attribute values or the actual set of attributes.
  • dynamic class loading in Java: where classes are loaded into memory when needed. A product can query its context and that of its user to decide which classes to load at runtime.
  • static libraries: which contain external functions that are linked to after compilation time. By changing the libraries, you can change the implementations of functions that have known names and signatures.
  • dynamic link libraries: which give the flexibility of static libraries but defer the decision until runtime based on context and execution conditions
  • conditional compilation: which puts multiple implementations of a module in the same file. One is chosen at compile time according to the appropriate preprocessor directives.
  • frame technology: Frames are source files equipped with preprocessor-like directives that allow parent frames to copy and adapt child frames and form hierarchies. On top of each hierarchical assembly of frames lies a corresponding specification frame that collects code from the lower frames and provides the resulting ready-to-compile module.
  • reflection: the ability of a program to manipulate data that represents information about itself or its execution environment or state. Reflective programs can adjust their behavior based on their context.
  • aspect-oriented programming: which is described in the "Architecture Definition" practice area
  • design patterns: which are extensible, object-oriented solution templates catalogued in various handbooks (for example, the work of Gamma and colleagues [Gamma 1995a]). We mentioned the Adapter Design pattern specifically as a variation mechanism earlier in this practice area.
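Two of the runtime mechanisms in the list, dynamic class loading and reflection, can be shown together: a product inspects its context and resolves an implementation by name only when needed. A Python sketch using `importlib`; the module and class names are standard-library ones, and the context flag is illustrative:

```python
import importlib

def load_variant(module_name, class_name):
    """Dynamic loading: resolve an implementation by name at runtime."""
    module = importlib.import_module(module_name)
    return getattr(module, class_name)  # reflection: look up by string

# The product's "context" decides which variant to load; here a simple flag.
ordered = True
cls = load_variant("collections", "OrderedDict" if ordered else "Counter")
print(cls.__name__)  # OrderedDict
```

The same two calls (`import_module` plus `getattr`) underlie most plugin systems, which is why these mechanisms appear together in variability catalogs.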
Variability is a central aspect of Product Line development, treated at length in SPL, with precise definitions of how to sustain it. More exact than those Jezz seems to be working with. I am not questioning him: SF will keep maturing, will build DSLs, and the inconsistencies will settle onto a stable course. What does not add up is publicity so categorical toward other ways of approaching the problem.

Tuesday, September 18, 2007

Software Factories: Horizontal and Vertical? II

Following on from the previous post...
Microsoft's Software Factories proposal contains one element that creates confusion: it names the initiative with the same words long used by various development efforts pursuing industrial rigor. "Software Factory" implies a concept of wider scope than the set of recommendations and products Microsoft associates with the idea. But there is a second element of the definition that is also controversial: development as product lines (Software Product Lines, SPL, in the concepts of the SEI, the Software Engineering Institute). Here the concept is central, and Jack Greenfield explicitly points to it as the basis of Software Factories. However, there are differences in the practical development of the factories, if we go by Jezz and by what can be seen on the MS site. At the beginning of his post, Jezz states once again the relationship between SF and SPL:
In order for a horizontal software factory itself to be reusable in multiple vertical domains (which is where it is ultimately destined for) there has to be some level of customization that allows it to be specialized towards any particular vertical domain. If we don’t allow this, then we simply can’t adapt a factory to create specific enough product lines, which means we are limited to the productivity we can achieve with it as a reusable asset. That’s not to say we can’t. We can do the product lines, but the overhead of doing so is much larger than it needs to be, because some things in the particular domain will either be common, or specific across the whole product line for that domain. Remember, the power of factories is in how specific they can be.
The link is central in Greenfield's book, the foundation of this initiative:
Recognizing and planning around these assumptions leads to defining families of similar solutions, and separating their common features, which make them similar, from their variable features, which make one member of the family different from the next. The common features are used to develop a custom process for building the members of the family, and of a set of reusable assets supporting the process. The variable features are used to drive parameterization, configuration, assembly and other techniques for accommodating the differences between the family members.
This is the thinking underlying the development of the components that are truly reusable, such as RDBMSs and GUI frameworks, although it may not have been recognized at the time by all of the people involved in building those components.
Work at the Software Engineering Institute (SEI) formalized this thinking in the concept of Software Product Lines. Software Product Line practices have been well established through many books, case studies and experience reports, and are widely accepted in the software engineering community.
Software Factories is a methodology for using domain specific languages and other technologies to automate Software Product Lines. They were developed by leveraging the work of David Garlan, as described in the book, in consultation with the authors of Software Product Lines at the SEI, and with other experts in related disciplines, such as pattern languages, generative programming and feature based development.[Comentado en su blog]
It is therefore reasonable to ask how much SF helps on that terrain, and to separate the value of the Software Product Lines idea from its incarnation in Microsoft's project. The same goes for Software Factories. That is, it is important to see that neither concept is necessarily what Microsoft postulates, and that they can diverge: the concept is of the greatest interest, and it need not share whatever outcome Microsoft obtains, just as the company's ideas can contribute to refining the initiative. So can MDA (Model Driven Architecture), even if Jezz prefers not to enter that discussion.
Moreover, it is entirely legitimate to suppose that Greenfield's book, which serves as the foundation of SF (each references the other, mutually), could be applied with a toolset other than Visual Studio Team System (VSTS) and its extensions. For this very reason the concept is interesting and worth analyzing, bracketing off the concrete resources with which Jezz builds. Is the kind of problem Jezz runs into generalizable? Or does it stem from his implementation of the SPL idea?
We will try to write about this, with the limited time at our disposal. If possible, it would also be good to present other existing points of view on how to realize SPL.

Saturday, September 15, 2007

Software Factories: Horizontal and Vertical?

On his blog, Jezz Santos opens up a dimension of software factories, Microsoft's version, that may be one of the best ways to weigh the differences against the MDA standard and other variants of model-driven development. Unfortunately I have almost no time, which limits how thoroughly I can pick his comments apart, but I will try to reread them, analyze them, and then note some differences.
To begin, the problem in Jezz's words:

The title of this post would not mean anything if you didn't know what 'horizontal' and 'vertical' software factories were. In that case then for simplification, ‘Factory Verticalization’ merely means ‘Factory Specialization’. Ironically enough though, most of the factories being build today are ‘Horizontal Factories’, so specialization in these cases means ‘Factory Verticalization’.
In almost all my interactions with customers about factories I am constantly reminded that ‘verticalization’ is the one key principal aspects of software factories today which is largely going unaddressed (and poorly understood). I am convinced that there is a clear and immediate requirement that we need to deal with more effectively in the present. I go so far to say that without catering for Verticalization of any factory, would seriously limit the adoption of factories in the future. It is such an important aspect to provide for if we are to realize the full vision of product lines and asset reuse.

Inmediatamente, Jezz carga un poco más la definición:

Before I get into this lengthy discussion about different aspects of verticalization, I think I need to explain what we mean by 'Horizontal' and 'Vertical' software factories in order to set the right context for the discussion.
In software factories we use these terms (vertical and horizontal) in the respect as they use them to describe markets. In software development these terms are generally well understood to differentiate broad skill-sets (or more general capabilities) typically focused at particular technologies and platforms, from the skill-sets/capabilities applied to specific industry domains (e.g. finance, manufacturing, retail, etc). The intersection of these axes in software engineering, in theory at least, applies the expert technical solution people to build instances of solutions for particular vertical domains, under the guidance and knowledge provided by the industry domain experts. The assumption is that the horizontal assets (people and artifacts) can be reused (with some specialization) across many vertical domains - it’s really a synergy of the two axes.

There are two aspects of this framing to analyze:
  1. Does Software Factories represent a "horizontal market" for Microsoft?
  2. What distance is there between this concept of product lines and the SEI's Software Product Lines (to mention the center that probably works the most on this point)?
On the first point, the repository visible on CodePlex (Microsoft patterns and practices, Software Factories), with four cases developed, would seem to show as much. Likewise, Jezz's remarks in his post:
From a marketing perspective, if you can call it that (which is really isn’t), one of the primary reasons Microsoft currently releases only horizontal factories is that: at present, software factories are a new concept to the industry, and Microsoft is leading the charge here. It makes perfect sense that the first factories from Microsoft must have some large significance and impact and benefit to the existing marketplace. So they chose horizontal factories to tackle first
On the second, the SEI definitions say something different:
A software product line (SPL) is a set of software-intensive systems that share a common, managed set of features satisfying the specific needs of a particular market segment or mission and that are developed from a common set of core assets in a prescribed way.
(...) How is production made more economical? Each product is formed by taking applicable components from the base of common assets, tailoring them as necessary through preplanned variation mechanisms such as parameterization or inheritance, adding any new components that may be necessary, and assembling the collection according to the rules of a common, product-line-wide architecture. Building a new product (system) becomes more a matter of assembly or generation than one of creation; the predominant activity is integration rather than programming. For each software product line, there is a predefined guide or plan that specifies the exact product-building approach.
(...) The common set of assets and the plan for how they are used to build products don't just materialize without planning, and they certainly don't come free. They require organizational foresight, investment, planning, and direction. They require strategic thinking that looks beyond a single product. The disciplined use of the common assets to build products doesn't just happen either. Management must direct, track, and enforce the use of the assets. Software product lines are as much about business practices as they are about technical practices.
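The SEI description of production, take components from the common asset base, tailor them through preplanned variation mechanisms, and assemble them under the product-line-wide architecture, can be sketched as data plus a small assembly step. The asset and product names below are invented for illustration:

```python
# Core asset base: common components, each with preplanned parameters
# (the "variation mechanisms such as parameterization" in the SEI text).
CORE_ASSETS = {
    "auth":    lambda strength="basic": f"auth({strength})",
    "storage": lambda engine="files":  f"storage({engine})",
    "ui":      lambda theme="plain":   f"ui({theme})",
}

def assemble(product_spec):
    """Build a product by tailoring and assembling core assets.
    The spec's order stands in for the product-line architecture."""
    return [CORE_ASSETS[name](**params)
            for name, params in product_spec]

# Two products of the same line, differing only in exercised variations.
high_end = assemble([("auth", {"strength": "strong"}),
                     ("storage", {"engine": "rdbms"}),
                     ("ui", {})])
low_end = assemble([("auth", {}), ("ui", {})])
print(high_end)  # ['auth(strong)', 'storage(rdbms)', 'ui(plain)']
print(low_end)   # ['auth(basic)', 'ui(plain)']
```

Building a product is then "more a matter of assembly than creation", exactly the economy the SEI passage describes.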
From here on, we will keep talking. Starting this week...

Note: This is very curious... The SF item under Architecture has disappeared from the Microsoft site, and the link to the explanation circulated for months on the subject (http://msdn.microsoft.com/architecture/overview/softwarefactories/) can no longer be found. I will have to keep looking (?)

Monday, September 10, 2007

OOXML, part two

Mary Jo Foley ("An unblinking eye on Microsoft") comments at ZDNet on the ISO vote on the standard proposed by Microsoft:

As readers of this blog know, I believe that the world is big enough for multiple file-format specifications. I don’t think the Open Document Format (ODF) deserves to be the only format sanctioned as an “open standard.” That said, I also believe Microsoft deserved to lose this vote. Why?

1. Lobbying is legal. But certain lobbying tactics are not. Microsoft officials admitted that one of the company’s employees behaved inappropriately in Sweden, attempting to influence partners to vote for OOXML approval. It’s good Microsoft admitted that this was wrong. But it still makes me wonder whether company officials did the same in other countries and were just not caught. And if anyone thinks Microsoft was the only company engaging in lobbying around this standard battle, you need to stop drinking the IBM Koolaid.

2. Microsoft has a history of changing specs at will and leaving developers in the lurch. It’s true you can teach an old dog new tricks (especially when the U.S. Department of Justice, state attorneys general and your competitors are all watching to make sure the dog is behaving properly). But when a specification is created and maintained by a single company or entity, it’s more prone to being manipulated and abused.

3. Openness is in the eye of the beholder. Microsoft considers OOXML open, yet so far, it hasn’t been able to get its own Mac Office product to interoperate with the new OOXML formats in Office 2007. Microsoft has enlisted a number of its new friends to build OOXML-ODF converters, but it has done so only in an attempt to “prove” to standards makers that OOXML isn’t the island that it is.

Microsoft isn’t throwing in the towel: It is predicting it can overcome objections by the time the final tally is taken for ISO standardization. Between now and then, both Microsoft and IBM and other ODF backers will, no doubt, continue to lobby as to why OOXML should/shouldn’t become an ISO standard.

In spite of the rhetoric on both sides, Microsoft wants OOXML to gain ISO standardization so that it won’t lose out on government contracts that require “open,” standards-based products. Microsoft’s competitors don’t want Microsoft to obtain ISO standardization because they see this loss as a chance for them to finally lessen Microsoft’s 90-plus-percent market share in the desktop-productivity suite business.

This battle’s not about interoperability, motherhood and apple pie: It’s about Microsoft wanting to keep its desktop-suite monopoly and its competitors seeking ways to break Redmond’s stranglehold on this part of Microsoft’s business.

That much is clear... Perhaps the strongest objection is the one about continuity of support for the standard. A simple look at Microsoft's own Windows versions confirms it.
Another comment, by Ryan Paul, at Ars Technica.

Friday, September 7, 2007

Database security (apropos of Monster)

Peter Schooff, at ebizQ, mentions, in the context of some recent security problems (1 and 2), five points he considers "key" to securing exposed databases, drawing on recommendations from Forrester Research: monitoring, risk assessment, data masking (this point is new to me), encryption, and auditing.

Security is not just some product that you can buy off the shelf and that’s it, you’re secure. According to Noel Yuhanna, a principal analyst at Forrester Research, a secure database is a matter of process, not technology, which is why a plan is so important. Forrester recommends the following five steps, which was taken from eWeek:

1) Monitoring -- Automated monitoring tools are important because of the sheer volume of activity going on with many databases, making it impossible for someone to do it without automation.
2) Vulnerability Assessment -- Data needs to be rated according to its significance, and the more important it is, the better it needs to be protected. According to Forrester’s Yuhanna, “Once you classify the data, then you build a policy around it.”
3) Data Masking -- Many think you need to keep the data off the database from the very beginning. Smart users actually hide important data by overlaying false values, so the applications can continue to work normally and the important data is never exposed.
4) Encryption -- We all know how important data encryption is, but encryption still comes with what seems to be more challenges than solutions. But while encryption can be difficult to implement, encrypting key data is critical to good security practices, and also an essential element of HIPAA and PCI regulations.
5) Auditing -- Very simply, security is an ongoing process, and frequent auditing will keep a company on its security toes and hopefully keep it out of the data-breach news.
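The data-masking idea in step 3 can be sketched in a few lines. This is an illustrative, hypothetical routine (`mask_value` and its `secret` parameter are my own invention, not any product's actual algorithm) that overlays deterministic false values while preserving format, so downstream applications keep working against masked data:

```python
import hashlib

def mask_value(value: str, secret: str = "demo-secret") -> str:
    """Replace each alphanumeric character with a deterministic
    substitute derived from a keyed hash. Digits stay digits,
    letters stay letters, and separators are kept, so the masked
    value has the same shape as the original."""
    digest = hashlib.sha256((secret + value).encode()).hexdigest()
    out = []
    for i, ch in enumerate(value):
        h = int(digest[i % len(digest)], 16)  # pseudo-random nibble per position
        if ch.isdigit():
            out.append(str(h % 10))
        elif ch.isalpha():
            base = "a" if ch.islower() else "A"
            out.append(chr(ord(base) + h % 26))
        else:
            out.append(ch)  # keep separators such as '-' or '@'
    return "".join(out)

# A masked phone number keeps its shape but hides the real digits:
print(mask_value("555-867-5309"))
```

Because the substitution is keyed and deterministic, the same real value always masks to the same false value, which keeps joins and tests consistent without ever exposing the original data.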

The report underscores something well known, which Monster has evidently just been reminded of: According to Forrester Research, 80 percent of companies lack even the most basic database security plan.

Wednesday, September 05, 2007

ISO rejects Microsoft's OOXML standard

Microsoft has been trying for some time to get its Office Open XML document format established as an ISO standard. Barrapunto has been following this effort for weeks (with considerable disapproval of Microsoft's intense lobbying), and had informally anticipated the result reported today by the New York Times and InfoQ, among many others: the rejection of its format as an ISO standard. The NYTimes highlights the commercial side of the dispute over the standard:
The fight over the standard, while technically arcane, is commercially important because more governments are demanding interchangeable open document formats for their vast amounts of records, instead of proprietary formats tied to one company’s software. The only standardized format now available to government buyers is OpenDocument Format, developed by a consortium led by I.B.M., which the I.S.O. approved in May 2006.
(...) More than 90 percent of all digital text documents in the world are in Microsoft formats, according to the consulting firm Gartner. Many national and local governments in Europe and some in the United States are requiring open formats to reduce their reliance on Microsoft. In an open format, the computer code is public, which allows developers to create new products that use it without paying royalties.
About the vote, from the NYTimes:
Of the 87 countries that participated, 26 percent opposed Microsoft’s bid. Under the rules for approval, no more than 25 percent of the countries could oppose the bid. Microsoft also failed to win the vote of 66 percent of 41 countries on another panel of I.S.O. and I.E.C. members.
InfoQ explains the meaning of these votes:
According to the official news approval for Microsoft's OOXML Format would have required "at least 2/3 (i.e. 66.66 %) of the votes cast by national bodies participating in ISO/IEC JTC 1 to be positive; and no more than 1/4 (i.e. 25 %) of the total number of national body votes cast negative. Neither of these criteria were achieved, with 53 % of votes cast by national bodies participating in ISO/IEC JTC 1 being positive and 26 % of national votes cast being negative". Microsoft on the other hand speaks of a "strong global support": We are extremely delighted to see that 51 ISO members, representing 74 percent of the qualified votes, have already voiced their support for ISO ratification of Open XML, and that many others have indicated they will support ratification once their comments are resolved in the next phase of the ISO process.
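The two thresholds can be checked mechanically. A minimal sketch of the fast-track approval rule (`fast_track_approved` is my own illustrative name), applied to the reported shares of 53 % positive among JTC 1 participating members and 26 % negative among all national body votes cast:

```python
from fractions import Fraction

def fast_track_approved(p_positive_share: Fraction,
                        total_negative_share: Fraction) -> bool:
    """JTC 1 fast-track rule: at least 2/3 of the votes cast by
    participating (P) members must be positive, AND no more than
    1/4 of all national body votes cast may be negative.
    Both conditions must hold for approval."""
    return (p_positive_share >= Fraction(2, 3)
            and total_negative_share <= Fraction(1, 4))

# The reported OOXML result fails on both counts:
# 53/100 < 2/3 and 26/100 > 1/4.
print(fast_track_approved(Fraction(53, 100), Fraction(26, 100)))  # → False
```

Using exact rationals avoids the floating-point edge case at the 66.66 % boundary; either failed condition alone would have been enough to block approval.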
On Microsoft's lobbying activity, InfoQ cites comments by Andy Updegrove:
Many voices have criticized the process. Andy Updegrove, a standards expert, is one of them. He stated his concerns about the process in the US and other countries as well as the system in general. One day before the official news he predicted the outcome to the point: With the polls now closed and the early results in (some public, some not), think it's time to predict with assurance that ISO will announce tomorrow that ISO/IEC DIS 29500, the draft specification based upon Microsoft’s Office Open XML formats, has failed to achieve enough yes votes to gain approval at this time. This, with all due respect to the contrary prediction of The Old Gray Lady and US Paper of Record, the New York Times.
The final vote has been a moving target for some time, and for a variety of reasons. In most cases, the dynamism in the vote has been as a result of various types of behavior by Microsoft, both alleged as well as, in some cases, admitted. In one case, that behavior led to the Swedish national vote being thrown out and replaced with an abstention, after it became apparent that one company voted more than once (Microsoft admitted that an employee had sent a memo urging business partners to join the National Body and vote to approve, and assuring them that their related fees would be offset by Microsoft marketing incentives).

ISO issued a press release about the vote and the approval process:

A ballot on whether to publish the draft standard ISO/IEC DIS 29500, Information technology – Office Open XML file formats, as an International Standard by ISO (International Organization for Standardization) and IEC (International Electrotechnical Commission) has not achieved the required number of votes for approval.
The five-month ballot process ended on 2 September and was open to the IEC and ISO national member bodies from 104 countries, including 41 that are participating members of the joint ISO/IEC technical committee, JTC 1, Information technology.
Approval requires at least 2/3 (i.e. 66.66 %) of the votes cast by national bodies participating in ISO/IEC JTC 1 to be positive; and no more than 1/4 (i.e. 25 %) of the total number of national body votes cast negative. Neither of these criteria were achieved, with 53 % of votes cast by national bodies participating in ISO/IEC JTC 1 being positive and 26 % of national votes cast being negative.
Comments that accompanied the votes will be discussed at a ballot resolution meeting (BRM) to be organized by the relevant subcommittee of ISO/IEC JTC 1 (SC 34, Document description and processing languages) in February 2008 in Geneva, Switzerland.
The objective of the meeting will be to review and seek consensus on possible modifications to the document in light of the comments received along with the votes. If the proposed modifications are such that national bodies then wish to withdraw their negative votes, and the above acceptance criteria are then met, the standard may proceed to publication.
Otherwise, the proposal will have failed and this fast-track procedure will be terminated. This would not preclude subsequent re-submission under the normal ISO/IEC standards development rules.
ISO/IEC DIS 29500 is a proposed standard for word-processing documents, presentations and spreadsheets that is intended to be implemented by multiple applications on multiple platforms. According to the submitters, one of its objectives is to ensure the long-term preservation of documents created over the last two decades using programmes that are becoming incompatible with continuing advances in the IT field.
ISO/IEC DIS 29500 was originally developed as the Office Open XML Specification by Microsoft Corporation which submitted it to Ecma International for transposing into an ECMA standard. Following a process in which other IT industry players participated, Ecma International subsequently published the document as ECMA standard 376.
Ecma International then submitted the standard in December 2006 to ISO/IEC JTC 1, with whom it has category A liaison status, for adoption as an International Standard under the JTC 1 "fast track" procedure. This allows a standard developed within the IT industry to be presented to JTC 1 as a Draft International Standard (DIS) that can be adopted after a process consisting of a one-month review by the national bodies of JTC 1 and then a five-month ballot open to all voting national bodies of ISO and IEC.

Barrapunto users point out that, in any case, the process is not over; only the fast-track route has failed, as ISO's press release indicates.

Tuesday, September 04, 2007

Lessons from the attack on Monster

Returning to the attack suffered by Monster, Chris DeVoney, at TechMentor, points to failures in precautions that should typically be taken in a corporate environment, as well as to failures in the problem-analysis rules used for control; DeVoney assumes there were clear deficiencies in analyzing the behavior of those accessing the data:
Monster.com missed an obvious piece of the puzzle by not applying behavioral monitoring of the searches of its resume database. It is difficult to pull 1.6 million records of personal information out of a database without having some of that activity appear atypical, such as the time of day of the search or the geographic locations searched relative to the recruiter’s own sphere. That’s the purpose of database monitoring and compliance tools.
DeVoney assumes the problem originated at the weakest link, the external user with access to sensitive data (in this case, the human-resources agents who use Monster), combined with lax access and tracking policies. Although his claims do not come from an internal analysis of the case, he is probably not wrong.
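The kind of behavioral baseline DeVoney has in mind can be sketched crudely. This is a hypothetical example (`flag_anomalous_accounts` and the log shape are my own invention; a real monitoring product would use far richer signals such as time of day and geography) that flags accounts whose download volume deviates sharply from their peers:

```python
from collections import Counter
from statistics import mean, stdev

def flag_anomalous_accounts(download_log, z_threshold=3.0):
    """Given (account, records_downloaded) events, flag accounts whose
    total download volume sits more than z_threshold standard deviations
    above the mean across all accounts -- a crude behavioral baseline."""
    totals = Counter()
    for account, n in download_log:
        totals[account] += n
    volumes = list(totals.values())
    if len(volumes) < 2:
        return []  # no peer group to compare against
    mu, sigma = mean(volumes), stdev(volumes)
    if sigma == 0:
        return []  # everyone behaves identically
    return [a for a, v in totals.items() if (v - mu) / sigma > z_threshold]

# A compromised recruiter account pulling vastly more resumes than its peers:
log = [("recruiter_%d" % i, 40) for i in range(50)] + [("stolen_creds", 250_000)]
print(flag_anomalous_accounts(log))  # → ['stolen_creds']
```

Even this naive z-score test would have made 1.6 million records leaving through a handful of recruiter accounts stand out immediately.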

Monday, September 03, 2007

Chile and Uruguay, looking for business

Continuing down the path they have followed for a long time, business organizations and authorities from Chile and Uruguay held a meeting to explore business opportunities. Reproduced below is a news item published by América Economía:
Under the Cooperation Agreement signed earlier this year between Chile's Corfo and Uruguay's National Development Corporation (CND), the Chilean-Uruguayan Software Industry Meeting was held in Santiago.
The event, chaired by Carlos Álvarez, executive vice president of Corfo, and Carlos Pita, Uruguay's ambassador to Chile, drew numerous guests who held more than 160 bilateral business meetings with their Chilean counterparts. The goal was to analyze joint ventures and prospect for investment in Chile.
The meeting seeks to forge technology alliances that improve opportunities for development and business complementarity between the two countries, and also to foster domestic and foreign investment in Chile tied to the Chilean and Uruguayan IT industries.
In this respect, Uruguay has achieved a high standard of exportable products and has highly qualified human capital, while Chile offers the advantage of numerous free trade agreements and high-level technological infrastructure; very attractive attributes for the growing worldwide offshoring industry, in which Chile wants to position itself as a platform for international technology services.
Foreign companies attending included Zonamerica, a major business and technology park that began as a free trade zone but today hosts important offshoring companies in fields such as call centers, financial services, software development and biotechnology; CES, Centro de Ensayos de Software, a company that tests software quality and detects failures before programs reach the market; and De Larrobla & Asociados / Bantotal, which offers a comprehensive operational solution for banks and financial institutions, deployed at more than 35 banks across Latin America.
Carlos Álvarez, executive vice president of Corfo, said: "We are promoting Chile as a 'near shore' destination for North American technology investments. The closeness being invoked, previously applicable only to Canada and Mexico, arises from comparison with providers of these services located on other continents, which, precisely because of their rapid growth, are finding it harder to secure qualified human resources. Chile, together with Uruguay, can make up for this shortfall, and in time zones similar to those of the United States, the country that accounts for 70% of worldwide demand for these services."
For his part, Uruguay's ambassador, Carlos Pita, explained: "Bilateral relations between our countries are going through an exceptional moment. This is shown by President Vázquez's recent visit, the 8 strategic agreements signed with Chile, many of them on cooperation, as well as the creation of a permanent bilateral trade and investment commission."

Internet security, the latest news

For years I have had a Monster account, which I once used to start conversations with a company in Texas, and which has always helped me keep track of the state of the IT market in the United States, Canada and, to a lesser extent, Spain. As a user, in recent days I received a notice from the company acknowledging that it had been the target of an attack that harvested data through employer accounts, with uncertainty about the attack's real scope. I already knew about the incident, as Symantec had also raised the alarm. The most striking point of the notice is its admission that the extent of the intrusion into its data cannot be determined. This is how Monster put it:
Protecting the job seekers who use our website is a top priority, and we value the trust you place in Monster. Regrettably, opportunistic criminals are increasingly using the Internet for illegitimate purposes. As is the case with many companies that maintain large databases of information, Monster is from time to time subject to attempts to illegally extract information from its database.
As you may be aware, the Monster resume database was recently the target of malicious activity that involved the illegal downloading of information such as names, addresses, phone numbers, and email addresses for some of our job seekers with resumes posted on Monster sites. Monster responded to this specific incident by conducting a comprehensive review of internal processes and procedures, notified those job seekers that their contact records had been downloaded illegally, and shut down a rogue server that was hosting these records.
The Company has determined that this incident is not the first time Monster's database has been the target of criminal activity. Due to the significant amount of uncertainty in determining which individual job seekers may have been impacted, Monster felt that it was in your best interest to take the precautionary steps of reaching out to you and all Monster job seekers regarding this issue. Monster believes illegally downloaded contact information may be used to lure job seekers into opening a "phishing" email that attempts to acquire financial information or lure job seekers into fraudulent financial transactions. This has been the case in similar attacks on other websites.
As part of its effort to salvage what it can from the disaster of this intrusion, Monster offers in the same notice some guidance on the typical fraud scenarios that could be mounted with the information the attackers obtained (that is, what is still to come).
What Symantec says about the attack:
Yesterday, we analyzed a sample of a new Trojan, called Infostealer.Monstres, which was attempting to access the online recruitment Web site, Monster.com. It was also uploading data to a remote server. When we accessed this remote server, we found over 1.6 million entries with personal information belonging to several hundred thousand people. We were very surprised that this low profile Trojan could have attacked so many people, so we decided to investigate how the data could have been obtained.
Interestingly, only connections to the hiring.monster.com and recruiter.monster.com subdomains were being made. These subdomains belong to the “Monster for employers” only site, the section used by recruiters and human resources personnel to search for potential candidates, post jobs to Monster, et cetera. This site requires recruiters to log in to view information on candidates.
Upon further investigation, the Trojan appears to be using the (probably stolen) credentials of a number of recruiters to login to the Web site and perform searches for resumes of candidates located in certain countries or working in certain fields. The Trojan sends HTTP commands to the Monster.com Web site to navigate to the Managed Folders section. It then parses the output from a pop-up window containing the profiles of the candidates that match this recruiter’s saved searches.
The personal details of those candidates, such as name, surname, email address, country, home address, work/mobile/home phone numbers and resume ID, are then uploaded to a remote server under the control of the attackers.
This remote server held over 1.6 million entries with personal information belonging to several hundred thousand candidates, mainly based in the US, who had posted their resumes to the Monster.com Web site.
Such a large database of highly personal information is a spammer’s dream. In fact, we found the Trojan can be instructed to send spam email using a mail template downloadable from the command & control server.
The main file used by Infostealer.Monstres, ntos.exe, is also commonly used by Trojan.Gpcoder.E, and both also have a similar icon for the executable file that reproduces the Monster.com company logo—hardly a coincidence.
Furthermore, Trojan.Gpcoder.E has reportedly been spammed in Monster.com phishing emails. These emails were very realistic, containing personal information of the victims. They requested that the recipient download a Monster Job Seeker Tool, which in fact was a copy of Trojan.Gpcoder.E. This Trojan will encrypt files in the affected computer and leaves a text file requesting money to be paid to the attackers in order to decrypt the files. The code for Gpcoder is rather similar to that of Monstres, which may indicate the same hacker group is behind both Trojans.
To the extent that information does not live in a closed environment, and everything suggests this is unlikely to change, the exposure of any company, organization or individual to fraud risk is high. The usual rules of IT auditing have changed forever. Rather than asserting that our environment is secure, we should first assume that we simply have not yet been of interest to an attacker, or have not yet learned how far an attack has reached. Anyone who dismisses this problem is making a profound mistake.