Wednesday, January 25, 2006

Which design pattern to use for versioning?

An interesting discussion thread has just opened on comp.object, the OOD discussion group on Google:
I need to implement a history/revision system, such that there will be an object and, as its values change, the revision changes and there is a history of the old values. What is the correct design pattern to use to implement something like this?
H.S. Lahman and others contribute ideas on how to treat versioning as objects with history. Worth following and, if you feel inclined, contributing to.
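The thread does not converge on one answer, but a common shape for "objects with history" is to keep the live object next to an append-only list of immutable revisions, in the spirit of the Memento pattern. A minimal Java sketch; the names (Document, Revision) are hypothetical and only illustrate the idea, not anything proposed in the thread:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Immutable snapshot of the object's state at one revision.
final class Revision {
    final int number;
    final String content;

    Revision(int number, String content) {
        this.number = number;
        this.content = content;
    }
}

// The current object records its previous states as it changes.
final class Document {
    private String content = "";
    private final List<Revision> history = new ArrayList<Revision>();
    private int nextRevision = 1;

    // Each change stores the old value before applying the new one.
    void setContent(String newContent) {
        history.add(new Revision(nextRevision++, content));
        content = newContent;
    }

    String getContent() {
        return content;
    }

    // Read-only view of all older values.
    List<Revision> getHistory() {
        return Collections.unmodifiableList(history);
    }
}

A Memento-style variant would move the history out of Document into a separate caretaker object; which fits better depends on whether revisions need their own identity and lifecycle.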

CSS for beginners

On TechRepublic, among the hundreds of available resources, a good tutorial for learning CSS and XHTML.

More on Google Talk and Open Federation

Sean Michael Kerner devotes a brief article to Open Federation, highlighting the implementation of the Jingle protocol, the basis for a leap toward VoIP:
The Jingle specification extends the XMPP protocol for VoIP and other P2P communications and uses.
Link to the article in the title of this post.

Saturday, January 21, 2006

An ironic view of the C++/Java controversy

Among devotees of C++ and Java I usually find a heated opposition that tends to praise its own side uncritically and dismiss everything about the other language or architecture. Billy Hollis offers a sarcastic summary of the evolution from C to C++ and C#, which (not without reason) includes Java in the family. At the very least, it can help put the weak points of "the family" on the table...
Although few would be surprised by now, BASIC can be the target of the same irony.

Thursday, January 19, 2006

CORBA and Web Services: closely/loosely/tightly coupled?

On Middleware Matters, the blog of Steve Vinoski (IONA), there is a discussion about CORBA and web services, centered on a point they share: the attempt to articulate applications in heterogeneous environments as components offering services that can be consumed without knowing internal implementation details. Why did CORBA fail? (there is no doubt that it did). Do Web services run the same risk? Is loose coupling really achieved?
See Michi Henning's comments:

I agree with Mark that CORBA has failed on the Internet. We simply don't see public integration using CORBA among companies. Instead, CORBA is typically used for communication among application components that are developed by the same team, but is not used by companies to offer a public remote API that anyone could call. Sure, I can send CORBA messages over the Internet. But that's not the crux of the question. It's much more a question of whether unrelated parties use CORBA (or WS) to communicate: one party provides the server, and an unrelated party then uses the server, much like a person using a web browser accesses a web server. And I don't see either CORBA or WS being used that way (other than for trivial toy applications).

I also agree with Mark that WS is no more loosely coupled than CORBA. WS proponents claim that loose coupling is achieved by using XML, because XML can be parsed without a priori knowledge of the contents of a message. This is the famous "the receiver can ignore what it doesn't understand" argument. I see many problems with this argument:

- This idea of loose coupling passes the buck to the application. Basically, the sender sends a message that can have all sorts of data in it, and then the application, at run time, has to make sense of it somehow. This is a recipe for bugs because type checking is not enforced by a compiler, and not enforced by the distribution infrastructure.

- We have WSDL. But WSDL ends up creating type definitions that are just as tightly-coupled as IDL ones. (And everyone seems to agree that WSDL is important.) But, where does that leave loose coupling? We have XML at the protocol level, which is loose, and we have WSDL at the application level, which is not loose. There seem to be contradictory messages and intents here.

- Yes, I can define WSDL that makes things optional and types them "loosely", in some sense (just as I can define IDL that does that). But if I do this, the value of having a type system in the first place diminishes, and I'm back to passing the buck to the application, which then again has to figure out at run time (instead of at compile time) whether a particular received message actually makes sense. And note that it is the *application code* that is responsible for this, not the distribution technology, so I get to write type checking code in my applications over and over and over again...

- Even if I do define WSDL that is "loose" and makes lots of things optional, that typically doesn't help me. Loose coupling isn't of interest just for its own sake, but is of interest because people are looking for a way to solve the versioning problem: how can I evolve a distributed application over time without breaking everything that is deployed already, and without having to recompile and redeploy the universe? If I define WSDL that is "loose" to start with, so I get the loose coupling I so much need, by implication, I know in advance how the application will evolve: I put the "loose" bits in the WSDL definitions where I expect future variation in the data. But real life doesn't work that way. None of us is prescient and, as a rule, what makes the versioning problem so hard is that we *don't* know how an application will evolve in the future. In other words, people who say that I can solve the problem by writing "loose" WSDL are kidding themselves: the real world is not cooperative enough for this to work.

- The old argument of "the receiver can ignore what it doesn't understand" is fallacious. For one, versioning and loose coupling are not about just being able to send additional data, but also about changing existing data, operations, parameters, and exceptions. Moreover, real-world versioning is sometimes not about changing interfaces or data types but about changing *behavior*: it is common for someone to want to change the behavior of an operation for a new version of the system, but without changing any data types or operation signatures. Second, the assumption that things will work just because the receiver can "ignore what it does not understand" is very naive. What I don't know can hurt me as much as what I do know. (Would you sign a contract that I put in front of you when several paragraphs are written in a language you cannot understand?)

- Trying to achieve loose coupling at the protocol level is simply the wrong place in the abstraction hierarchy: the protocol is about moving bits back and forth, and about doing this reliably and efficiently. Loose coupling is about dealing with application-specific data types and interfaces and whether it's possible to gracefully evolve these over time. I don't see why I have to have a "loose" protocol in order to enable loose coupling.

If we are interested in loose coupling, multiple interfaces are a far better approach. Instead of trying to define one interface that is loose enough to accommodate all the possible variations, I define several interfaces, each of which accommodates exactly one variation. That way, each interface is strongly typed, but, collectively, all the interfaces together are loosely typed because they offer several alternatives for sending a message. Given that, what I need is a mechanism to select the interface that best suits my job, and a mechanism that lets the receiver of a message know which version is being addressed by the sender. Put those mechanisms into the distribution platform, so applications don't have to reinvent them all the time, and you have a workable answer to the "loose coupling" dilemma that doesn't require me to sacrifice static type safety, and that doesn't throw on-the-wire bandwidth and CPU cycles around as if they were growing on trees.

This idea works very well, and isn't new either: COM supports multiple interfaces, where a single object with a single object identity can present different personas to the world, and Ice has a mechanism called "facets" that allows versioning of distributed applications.
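Henning's "multiple interfaces" idea can be sketched even in plain Java, leaving aside the distribution machinery (COM's QueryInterface, Ice facets). The names below (GreeterV1, GreeterV2, GreeterServant) are invented for illustration: each version is a separate, strongly typed contract, and one servant implements all of them, so old and new clients each bind to the version they understand.

// Version 1 of the contract: strongly typed and frozen once published.
interface GreeterV1 {
    String greet(String name);
}

// Version 2 changes the contract without disturbing V1 clients.
interface GreeterV2 {
    String greet(String name, String language);
}

// One servant presents both personas; a distribution platform would
// add the mechanism that lets a client select the version it needs.
class GreeterServant implements GreeterV1, GreeterV2 {
    public String greet(String name) {
        return "Hello, " + name;
    }

    public String greet(String name, String language) {
        return ("es".equals(language) ? "Hola, " : "Hello, ") + name;
    }
}

public class MultipleInterfacesDemo {
    public static void main(String[] args) {
        GreeterServant servant = new GreeterServant();
        GreeterV1 oldClient = servant;  // existing clients keep working unchanged
        GreeterV2 newClient = servant;  // new clients use the newer contract
        System.out.println(oldClient.greet("Ada"));
        System.out.println(newClient.greet("Ada", "es"));
    }
}

Each interface stays statically type-checked, while the set of interfaces, taken together, offers the sender a choice of versions; the missing piece, as Henning notes, is the platform mechanism that tells the receiver which version a given message addresses.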

The discussion also offers a review of the problems CORBA went through, which were not only technical but also commercial, the same conflict that still accompanies standards work today (see Tom Welsh's comments).
A comment from erwin cools the enthusiasm:

I did a limited number of CORBA-based solution architectures and implementations a while ago. My feeling is that CORBA definitely is not a failure. I even see 2 big success areas: technological and analytical.

Technologically speaking, CORBA offers a robust and complete stack with a clear and well-documented approach on designing and implementing distributed solutions.
Only disadvantage : tool/server providers were not really interested in providing true interoperability...

The firewall issue is a fake argument.

One of the great values of OMG/CORBA that hasn't been mentioned yet, is the extensive analysis of what's needed for distributed applications. I have the feeling that lots of "new" paradigms arriving later can be seen as re-implementations of the CORBA standards/services etc.

Of course, with some specific variations, e.g. messages with some arbitrary text format that happens to be parseable by an XML parser (and where lots of other acronyms can be applied to flabbergast any interested reader) instead of a binary format.

In the web services domain, people are only starting to discover what's needed for a complete architecture stack.

There's a big chance that the result will indeed be a system that has similar services as CORBA, but this time with human-readable messages and the ability to flow through port 80.

Humans can imagine valid semantics for these messages. But is this really valuable? I agree with Michi Henning that in applications, semantics cannot be derived/invented at run time anyway; they have to be known (and coded) a priori, whether the message is binary or text-based (with or without standard tools to parse the text).

The fact that people now seem to be willing to wait a number of years for this web services platform to stabilize and become more complete, and for tool/server providers to build more-or-less interoperable systems (e.g. MS and rest-of-world), seems to be more a new mindset in the software world than something based on technological reasons.

I'm sure it would have been much easier/faster to provide interoperability on IIOP, but at the time MS could not do that as they were still positioning (D)COM as the holy grail.

But now, XML has such a huge mindshare that everyone is willing to invest in technological interoperability...

Byte-streams didn't have the same sex-appeal... ;-)

Loose coupling will never happen, at least not before we have application components with reasoning capabilities.

Ken Horn extends the discussion in the comments on his blog.
The debate takes place among active participants in building the means to support heterogeneous architectures: Michi Henning proposes Ice, Mark Baker proposes REST, Harold Carr proposes PEPt, and others...
The discussion is at the link in the title of this article.

Google Talk takes flight

Google Talk becomes a federated system, and is on its way to becoming multiplatform as well: a system capable of accepting users from other instant messaging systems, as long as they share a common protocol, in this case XMPP, used by "Earthlink, Gizmo Project, Tiscali, Netease, Chikka, MediaRing, and thousands of other ISPs, universities, corporations and individual users". Down this road, the walled gardens of instant messaging (Messenger, Yahoo) come to an end, with no requirement other than supporting the protocol. In the future it will also support SIP (VoIP, video conferencing). On top of that, Google announces messaging clients for Linux and Mac OS X, in addition to accepting clients from other providers: "The Google Talk service is built to support industry standards. You can connect to the Google Talk service using Google's own client, as well as many other IM clients developed by third parties".
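Since the service speaks standard XMPP, in principle any client library for the protocol can connect to it. A minimal sketch using the open-source Smack library (assuming a Smack 3.x-style API; the talk.google.com host, port 5222, and the account and recipient addresses are placeholders for illustration, not official instructions):

import org.jivesoftware.smack.ConnectionConfiguration;
import org.jivesoftware.smack.XMPPConnection;
import org.jivesoftware.smack.XMPPException;
import org.jivesoftware.smack.packet.Message;

public class GoogleTalkSketch {
    public static void main(String[] args) throws XMPPException {
        // Ordinary XMPP client connection: host, port, service (domain) name.
        ConnectionConfiguration config =
                new ConnectionConfiguration("talk.google.com", 5222, "gmail.com");
        XMPPConnection connection = new XMPPConnection(config);

        connection.connect();
        connection.login("someone@gmail.com", "password");

        // A plain XMPP chat message; with federation the recipient can live
        // on any other server that speaks the same protocol.
        Message msg = new Message("friend@example.org", Message.Type.chat);
        msg.setBody("Hello over federated XMPP");
        connection.sendPacket(msg);

        connection.disconnect();
    }
}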