Wednesday, December 30, 2009

Microsoft, IBM: the company is the center... (but the world is outside)

Posted today by Joel Spolsky:

Microsoft Careers: “If you’re looking for a new role where you’ll focus on one of the biggest issues that is top of mind for KT and Steve B in ‘Compete’, build a complete left to right understanding of the subsidiary, have a large amount of executive exposure, build and manage the activities of a v-team of 13 district Linux& Open Office Compete Leads, and develop a broad set of marketing skills and report to a management team committed to development and recognized for high WHI this is the position for you!”

This is ironic, to use the Alanis Morissette meaning of the word [NSFW video].

The whole reason Microsoft even needs a v-team of 13, um, “V DASHES” to compete against Open Office is that they’ve become so insular that their job postings are full of incomprehensible jargon and acronyms which nobody outside the company can understand. With 93,000 employees, nobody ever talks to anyone outside the company, so it's no surprise they've become a bizarre borg of "KT", "Steve B", "v-team", "high WHI," CSI, GM, BG, BMO (bowel movements?) and whatnot.

When I worked at Microsoft almost two decades ago we made fun of IBM for having a different word for everything. Everybody said, "Hard Drive," IBM said "Fixed Disk." Everybody said, "PC," IBM said "Workstation." IBM must have had whole departments of people just to FACT CHECK the pages in their manuals which said, "This page intentionally left blank."

Now when you talk to anyone who has been at Microsoft for more than a week you can’t understand a word they’re saying. Which is OK, you can never understand geeks. But at Microsoft you can’t even understand the marketing people, and, what’s worse, they don’t seem to know that they’re speaking in their own special language, understood only to them.

Toward autism?

Saturday, December 26, 2009

Service exports in Argentina

Published by Daniel Sticco in Infobae, December 26:

The contraction of the world economy did not affect the foreign-currency earnings of one of the country's main "smokeless industries." Information technology, together with the work of professionals in architecture, accounting, law and technical fields, brought in US$3,615 million in nine months, while imports of the same kind of services amounted to only US$1,438 million.

Although exports of computer and information services, together with professional, technical and personal services linked to culture and recreation, rose by barely 2.4% between January and September, they stood out against the 25% drop recorded over the same period in the country's exports of goods as a whole, compared with the same period a year earlier.

Thus, the additional US$84 million this sector contributed to the balance of payments with the rest of the world added to the positive balance the industry traditionally provides, US$2,176 million in the first nine months of 2009, which contrasts with the US$505 million deficit posted by services as a whole over that period.

Exchange rate, and something more
Beyond the recognized skills and potential of the professionals working in the country, the main driver of exports of these services, not only to Latin America, as was usual, but also to countries in Europe and the East, has been the high-exchange-rate policy Argentina adopted after abandoning peso convertibility.

And although over the past year wage adjustments came to exceed the nominal adjustment of the exchange rate tracked by the Central Bank, the balance-of-payments data released by INDEC show that this branch of services ranked second in 2009 among the main generators of foreign currency.

In the first nine months of the year, only sales of soybean meal and pellets, at US$6,287 million, exceeded the US$3,615 million export mark. Behind came the US$2,561 million earned by soybean-oil shipments, US$2,749 million from car manufacturers, US$1,734 million from petroleum oils, US$1,664 million from soybeans and US$1,188 million from corn.

A high exchange rate means that the value of professional services converted into dollars, euros or other currencies falls considerably, something that did not happen in the convertibility years, when the strength of the peso had priced a large share of professional firms out of the market, particularly small and medium-sized ones without the standing to make their technical skills outweigh their hiring cost.
Sticco seems to attribute a notable share of this total of service exports to IT, although he provides no breakdown of the figures, nor any way to consult the source. The news was picked up by several publications, some of which differ on the period they consider. Comparing with figures from earlier periods, the total probably covers services as a whole, and the portion belonging to consulting or IT services is likely only a percentage of that figure. Even allowing that other exports are performing below their potential, the fact that these services rank second in foreign-currency earnings cannot be ignored. Perhaps the software industry will finally achieve enough consistency to claim a place in global markets.

Wednesday, December 23, 2009

The iManifest in Spain

Taking one more step forward, Help400 is also launching the iSeries (a.k.a. AS/400) support manifesto in Spain, a traditionally strong AS/400 market:
With this initiative we aim to revive the Spanish-speaking AS/400 market through an advertisement placed on the center spread of ServerNEWS magazine (the best space, in both its print and digital editions), featuring at least 18 participants (26 at most), thereby adding to the media impact generated by the iManifests in Europe and the United States.
You can see the announcement on the ServerNEWS site.

Sunday, December 13, 2009

A LinkedIn discussion on the scope of MDD/MDA

A few days ago a discussion on LinkedIn about MDD, questioning the value of the concept, was mentioned here. Some of the objections did not seem well aimed. The replies they drew, however, do contribute important elements on the relevance of model-driven development, whether one is talking about generic models or domain-specific models. In broad terms, what follows is the most relevant part of the conversation:

Questioning the portability of models:
(...) until recently the only choices for these modelling languages were proprietary. So by using MDD not only were you choosing to use an obscure programming language with no published standard, you were locking yourself into the only proprietary tool that implemented it. These days its not so bad; there are open source MDD tools. But as far as I know there is no formal standard for the modelling language, which makes the portability of your model between tools effectively zero. This is ironic, given that portability is supposed to be one of the big advantages of MDD.
This point in particular is open to objection, since MDD (and even DSM) is far from closed, even where proprietary modeling languages are concerned. If there is anything that has been worked on in recent years, it is interoperability and metamodels. From his DSL-focused perspective, Juha-Pekka Tolvanen replies:
Based on my experience in hundreds of cases, MDD makes always sense when we can raise the level of abstraction with models above the code. In other words, the use of class diagram to represent a class with a rectangle symbol in order to generate a class in file makes very little sense! People who have tried this - usually with some UML tools - are obviously disappointed.
In order to raise the level of abstraction we usually need domain-specific concepts - often implemented into a language and supported by some tool. Luckily the metamodel-based tools that allow to define own languages are always ‘open’ in that respect that while they allow to define code generators (or have API, XML format etc for interchange) they always allow you to move your models to other similar tools too. So there is no locking as you described and there are even published bridges among metatools. IMHO: (and I work for a tool vendor) you should always choose a tool that allows you to move your data (models and metamodels) to other tools - if you don’t do that you have the lock problem you mentioned.
Questioning productivity:
First, to those who point to practical experience [...]: a "successful" project, at best, just means it it was delivered within the estimated budget and timescale (at worst it means a failure that was declared a success for political reasons). The real question is: does MDD deliver projects that cost less and take less time than any of the alternatives. I can have a successful project using assembler, provided I have the budget and time. So: do you have evidence comparing like-for-like that MDD costs less, and if so by how much?
I have only seen one study for MDD (which I can't locate right now) that found MDD development was 40% cheaper than conventional development. However that only compared two development teams, and so could easily have been due to random variation. Furthermore just using a higher level language can give even bigger savings. A study by Lutz Prechelt http://page.mi.fu-berlin.de/prechelt/Biblio//jccpprtTR.pdf gives a clear view of the range available both in developer ability and programming language. Ulf Wiger found that Erlang quadrupled productivity over C++, and the sparse evidence available suggests that Haskell is even more productive.
On this point Juha-Pekka Tolvanen, who has extensive experience with domain-specific languages, replies by turning to well-known cases:
[Acerca de la observación sobre productividad] (“does MDD projects cost less and take less time”) is obviously difficult to answer in general since there are so many modeling languages and programming languages to compare. Ultimately it depends on how well the languages compared fit to the particular problem to be solved.
Since evaluating even two alternatives takes a lot of resources with a proper research method, companies usually don’t make detailed evaluation or at least do not publish them. Fortunately, some do. For example, Panasonic implemented the same system twice and reported 500% productivity improvement when comparing "MDD" to traditional manual coding in C (see http://www.dsmforum.org/events/DSM07/papers/safa.pdf). Polar arranged a laboratory study and a pilot project measuring at least 750% productivity (see http://www.dsmforum.org/events/DSM09/Papers/Karna.pdf). Similar significantly gains are reported by companies like Nokia, Lucent, EADS and Siemens which have defined their own domain-specific modeling languages along with code generators. You can check cases on using Domain-Specific Modeling from http://www.dsmforum.org/cases.html and if you want to see example languages on various domains, check http://www.metacase.com/cases/dsm_examples.html.
These above mentioned productivity figures are quite different than 40% mentioned. I would guess that they used a modeling language that only slightly raises the level of abstraction. Paul, can you find the reference for that case? We have been trying to collect reported cases over the years and this would be valuable addition.
And Mark Dalgarno, drawing on the experience collected at the Code Generation conferences, replies:
The Code Generation conference has documented MDD success stories in the following domains:

Financial trading platforms
Administrative enterprise applications
Grid & Cloud applications
JEE enterprise applications
Control & Data acquistion systems
Simulation & Training systems
User interface generation
Mobile device applications
Insurance applications
Telecomms applications
Home automation systems
Business and organisational workflow applications
Technology platform migration
Security policy management
Legacy application modernization
Defence systems
Web applications
Hiding framework complexity, handling multiple frameworks
Middleware applications
Sensor Systems

This is by no means the limit of MDD potential.

HLLs (including the ones you list) only allow you to raise your abstraction level so far and it still seems that where MDD is applicable it can outperform HLL by an order of magnitude in terms of productivity depending on the problem domain.
(One could add to Mark's reply that, furthermore, it would undoubtedly be possible to build models that generate code in the high-level languages suggested as a "replacement" for modeling tools.) This point in particular shows that the objection starts from a mistaken premise.

Finally, two points defended by two of the participants deserve to be highlighted:
Rui Curado states:
MDD, with proper tool support, gives you additional benefits like traceability, automation, collective knowledge sharing.
Andrea Rocchini puts what is best about MDD at the center of the discussion:
The real advantage of MDD is defining a problem/application with semantic rather than procedural flow.
A model contain all and only the relevant information about problem; it's totally decoupled from implementing technology.
A model can be transformed/interpreted with many tools and many patterns, now and in the future.
From a model we can produce many kind of entity: applications (for many platform), documentation, simulation processes, ...
A model could be designed by an expert of the domain problem rather than a programmer (an expert of IT technology ...)

A procedural flow in a programming language instead is a monolithic object; it's not much recyclable, is strongly technology dependant, conceptual design and implementation are mixed.
The necessary knowledge to have for developing a modern application is all-changing and every day growing.
All that is very frustrating.

Naturally the MDD approach is less free and result depend on quality of tool/interpreter.
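
To make Rocchini's point concrete, here is a minimal sketch in Python (the Order entity and the two generators are invented for this example; nothing here comes from the discussion itself) of an application fragment described purely as data, from which two different artifacts are produced. The model itself says nothing about the implementing technology.

```python
# A toy "model": plain data describing a domain entity, with no
# reference to any implementation technology.
order_model = {
    "entity": "Order",
    "fields": [
        {"name": "id", "type": "int", "doc": "Unique order number"},
        {"name": "customer", "type": "str", "doc": "Customer name"},
        {"name": "total", "type": "float", "doc": "Total amount"},
    ],
}

def generate_class(model: dict) -> str:
    """One transformation of the model: emit a Python class skeleton."""
    params = ", ".join(f"{f['name']}: {f['type']}" for f in model["fields"])
    lines = [f"class {model['entity']}:", f"    def __init__(self, {params}):"]
    lines += [f"        self.{f['name']} = {f['name']}" for f in model["fields"]]
    return "\n".join(lines)

def generate_docs(model: dict) -> str:
    """A second transformation of the same model: emit documentation."""
    lines = [model["entity"], "-" * len(model["entity"])]
    lines += [f"{f['name']} ({f['type']}): {f['doc']}" for f in model["fields"]]
    return "\n".join(lines)

if __name__ == "__main__":
    print(generate_class(order_model))
    print()
    print(generate_docs(order_model))
```

The same dictionary could feed a third generator targeting another platform, or a simulation, which is exactly the decoupling and reuse Rocchini describes.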

For LinkedIn members, the discussion can be found here.

Tuesday, December 8, 2009

Johan den Haan on the virtues of model-driven development (MDD)

I would like to highlight something others have already done, but not in Spanish: the fifteen reasons Johan den Haan puts forward in defense of model-driven development. I will do so very briefly, referring you to his article in English; in a few days we will talk a bit more about the criticism of MDD that unfolded on LinkedIn, which is its mirror image. As soon as there is time...
Johan's fifteen reasons, simply listed:
1. MDD is faster
2. MDD is more cost-effective
3. MDD leads to higher quality
4. MDD is less error-prone
5. MDD leads to clearer validation
6. MDD produces software that is less affected by staff turnover
7. MDD empowers domain experts
8. MDD lets expert programmers focus on the hardest problems
9. MDD bridges the gap between the business view and the technology view
10. MDD makes software less sensitive to changes in requirements
11. MDD makes software less sensitive to changes in technology
12. MDD really enforces the architecture
13. MDD captures domain knowledge
14. MDD produces up-to-date documentation of the model
15. MDD lets you focus on business problems instead of technology

I subscribe to them one hundred percent. I refer you to his article for the expanded explanation; and in a few days we will come back and give this another turn of the screw, starting from the criticisms mentioned above.

Saturday, December 5, 2009

The manifesto on fundamental rights on the Internet, and the new social tools

In El País, on the 2nd of this month:
The most-read story on ELPAÍS.com on Tuesday, with 150,000 page views and a record 2,336 comments, was titled "In five years this disappears. There will be no songs and no music." It reported the complaint of the Spanish music industry, squeezed by piracy, and its demand that Miguel Sebastián take measures to guarantee its survival. More than 15,000 of the visits came from microblogging sites such as Twitter, from social networks such as Facebook and from aggregators such as Menéame, as well as from other high-traffic, influential sites in the Spanish online market. These figures, which put it among the most-read stories of the week, hint at the enormous media impact that was building.

The earthquake the musicians' protest set off on the Net yesterday was followed this Wednesday by another no smaller one: the draft Sustainable Economy Bill, presented by the Government in Congress, which particularly affects the use of the Internet as we have known it so far, since it includes clauses under which, on the order of a national commission made up of independent experts, Internet services that provide links for downloading music and films without paying royalties can be shut down.

Bloggers, journalists, people responsible for websites, professionals and Internet creators have drafted a manifesto, "In defense of fundamental rights on the Internet," rejecting the measure. They posted it at nine o'clock and, according to Google Blog Search, in barely six hours more than 58,000 blogs had echoed the text, and Google, used by more than 95% of Spanish Internet users, had indexed more than a million pages on the subject.

But the great ally in spreading the manifesto has been Twitter, even though the impact of the story is hard to quantify in figures. It can, however, be measured qualitatively, and its reach has been remarkable. In fact, the strength and influence of this site, which is not just one more microblogging page for informal communication but a new communication channel, has shown itself as never before in Spain. Any protest initiative needs a channel that spreads its message massively, and Twitter, above all, guarantees that. A year ago, only 250,000 Internet users in Spain used the service. As of October 2009, more than 1.2 million people use it in a month, whether to stay informed or to share any kind of message, according to data from Nielsen Online through its NetView tool (audience panel of users in Spain only, home and work).

Trending topics on Twitter are grouped under a #hashtag. Within a few hours, Internet users chose #manifiesto as the tag identifying their document. With some of the tools on the market it is possible to analyze the impact the issue is generating, and the spread of messages opposing the draft bill across the whole Net is spectacular. Other tools try to reproduce semantically the main terms associated with the chosen topic. In this case, words such as rights, defense, Internet and fundamental have been circulating in remarkable volumes throughout the day, all linked to the manifesto and its connotations.

Adding together the noise made by the country's main blogs, its presence on Twitter and the power of Google, the manifesto has achieved an unusual power of diffusion.

The Internet as an exercise in democracy

In the current Spanish debate over whether or not to control access to the Internet, there is a side issue that is more than interesting: a law is introduced on the sly, but as soon as it gains a little exposure it passes "by word of mouth" across the network and generates a massive, public response against it; forceful enough that the government decided to convene a group of "notables" to temper the criticism. And all of this happened through the social communication tools that have become widespread on the net: Twitter's minimalist messages, the interactive communication of Google Wave, instant-messaging tools that gather hundreds or thousands of public statements, social networks, news feeds. The same scenario we saw months ago in the Iranian elections, or in several incidents in China.
Gustavo Bravo tells it this way:
Yesterday was an exhausting day, news-wise. So please allow me to relax the tone of discussion in this corner of ours. Yesterday there were so many people that we could barely hear one another. Today, on the eve of a long weekend, we expect a smaller crowd, better discussion and more positive energy than on other days, when tension has at times run too high.

I bring a story that few will find interesting a priori, so I will try hard to catch your attention. The important questions about the matter covered here yesterday and here today are vital to understanding why the Internet is the greatest information revolution in the history of humanity, and why this channel is many channels in one.

Finally, I ask you to pay attention to this story and keep two key facts in mind: first, that the Government has tried (once again) to sideline the judiciary, and second, that minister González Sinde believes the interests of the Internet are represented by 14 people picked the way one picks from a cafeteria menu: "whatever there is, and quickly, I have an appointment."

Leaving aside these very serious events, other thoroughly fascinating feats have taken place since Wednesday. Among them, yesterday's meeting is the first time a Government has brought people together to debate a proposed law on the basis of their popularity and the noise they have made on the new channels and social networks, without their representing any association or collective; and it is also the first time a meeting of this kind has been "broadcast" and commented on live by the attendees, who thus turned out to be, in real time, both protagonists of and reporters on the content discussed at an official gathering.

This (which, the way it actually happened, is an insult to intelligence) is spectacular from a news standpoint, and it opens a window for the imagination that seemed closed, who knows whether because of the speed at which we live.

A manifesto, drafted quickly, clumsily and without hierarchy by a group of volunteers with the very laudable intention of bringing down a demented law, was built with a tool called Google Wave, which lets several people create and edit documents together in real time. While the entire international digital community is still wondering "what on earth is Google Wave for and how does it work," a group of journalists, bloggers and Internet entrepreneurs has managed to find a more than constructive way to put it to use. And this happened in Spain!

Three years ago, when I created my Twitter account, nobody knew what good "a blasted personal micro-channel for feeding one's ego and recounting one's miseries" would be. Now it is considered the second-best invention after the Internet, thanks to the informational potential packed into its 140 characters. A week ago I heard opinions similar to those about Twitter in its early days, but dismissing the possibilities of Google Wave. How could we not fall in love with this profession? Tell me.

The Internet, that world apart which seems to have little or nothing to do with reality, is slowly starting to align itself with society and to work as the communication tool it is. An absurd bill has lasted two days. It did not even have time to be processed in Parliament. If what they call Democracy really exists, it must be this.
Pessimistic misgivings aside, technology is instrumental, and it can be used in one direction or the other. These last two days prove it.

Wednesday, December 2, 2009

The Internet, business, culture, open access

I reproduce Martín Varsavsky's statement on the controls on access to culture in the new Spanish Sustainable Economy law. I do not know whether it was conceived this way, but it could be published as a manifesto:

Faced with the inclusion in the draft Sustainable Economy Bill of legislative changes that affect the free exercise of the freedoms of expression and information and the right of access to culture through the Internet, we journalists, bloggers, users, professionals and creators of the Internet express our firm opposition to the bill and declare that:

1. Copyright cannot be placed above citizens' fundamental rights, such as the right to privacy, to security, to the presumption of innocence, to effective judicial protection and to freedom of expression.
2. The suspension of fundamental rights is, and must remain, the exclusive competence of the judiciary. Not a single shutdown without a court ruling. This draft bill, contrary to Article 20.5 of the Constitution, places in the hands of a non-judicial body (an agency under the Ministry of Culture) the power to bar Spanish citizens from accessing any web page.
3. The new legislation will create legal uncertainty across the entire Spanish technology sector, harming one of the few fields of development and hope for our economy, hindering the creation of companies, introducing obstacles to free competition and slowing its international reach.
4. The proposed legislation threatens new creators and hinders cultural creation. With the Internet and successive technological advances, the creation and distribution of all kinds of content has been extraordinarily democratized; it no longer comes mainly from the traditional cultural industries, but from a multitude of different sources.
5. Authors, like all workers, have the right to make a living from their work with new creative ideas, business models and activities associated with their creations. Trying to prop up with legislative changes an obsolete industry that cannot adapt to this new environment is neither fair nor realistic. If their business model was based on controlling copies of works, and on the Internet that is not possible without violating fundamental rights, they should look for another model.
6. We believe that, in order to survive, the cultural industries need modern, effective, credible and affordable alternatives suited to new social practices, rather than limitations as disproportionate as they are ineffective for the purpose they claim to pursue.
7. The Internet must operate freely and without political interference sponsored by sectors that seek to perpetuate obsolete business models and to prevent human knowledge from remaining free.
8. We demand that the Government guarantee net neutrality in Spain by law, against any pressure that may arise, as a framework for the development of a sustainable and realistic economy for the future.
9. We propose a genuine reform of intellectual property law oriented toward its purpose: returning knowledge to society, promoting the public domain and limiting the abuses of the collecting societies.
10. In a democracy, laws and their amendments must be passed after due public debate and prior consultation with all the parties involved. It is unacceptable for legislative changes that affect fundamental rights to be made in a non-organic law that deals with another subject.

We have reached a point where business is completely obscuring the problem. Ironically, the death of culture does not come at the hands of the depersonalization of technology, as Abbagnano, Heidegger and other philosophers believed seventy years ago, but from culture's own creators and their representatives.

Saturday, November 28, 2009

Plex-XML

For Plex users: the Plex-XML project seems to have reached an interesting degree of maturity. Anyone interested can find more information on the AllAbout site and then follow the very interesting tutorial (very interesting for those who know what those three or four panels imply). AllAbout maintains a wiki with information on the project's progress. The project follows a doubly interesting path: first, it extends Plex's capabilities by exploiting the features that make this possible: the model API, the ability to develop patterns, and the export facilities. Second, it relies on openness to the developer community, which keeps demonstrating that the product is being strengthened through its relative openness to the initiative of its users.

Wednesday, November 25, 2009

A well-aimed criticism of DSLs

There has been a lot to read today. I will try to comment on two or three of the articles that appeared in upcoming posts; others are noted in my Del.icio.us links.
The first is very short but substantial: Rui Curado, a member of The Model Driven Software Network, presents his objections to domain-specific languages (DSLs). They are reasons I subscribe to as well:
First of all, Rui acknowledges that his own use of DSLs is limited. For that very reason, he turns to what others say:

I really don’t have much experience with DSLs, so I won’t use my own arguments. I’ll let the community speak for myself. Here is a tiny sample of DSL criticism:

http://c2.com/cgi/wiki?DomainSpecificLanguage

… the Tower Of Babel effect that results from having so many languages that a person needs to be familiar with (if not master) in order to get common jobs done.

writing a good DSL that is robust and reusable beyond a limited context, is a lot of work. It requires persons with both domain expertise and language design expertise (not necessarily the same person) to pull off. A truly reusable DSL will require reasonable documentation, otherwise it won’t survive the departure of its designers.

http://www.theserverside.com/news/thread.tss?thread_id=42078

The other mistake some folks make is they think that with a DSL that “non-coders” can actually use the DSL.

But since a DSL is designed for automation of an underlying objet model, writing in a DSL is just like writing in any language — whether it’s DOS .BAT files or Java, and it takes a coding mindset and understanding of the underlying domain as represented by the computer in order to make effective use.

There was much more written on this thread. You can go to the original page to read more opinions.

Rui frames his objections as questions:

As general use of DSLs become mainstream, so become the complaints about their shortcomings. If we take so much time to master a general purpose language, should we invest a comparable amount of time in limited-use languages? How can we get support for a DSL, apart from its own creators? Where’s community support? What happens after the departure of the language’s creator? What’s BNF? Do I need it?

DSL critics say really useful DSLs are hard and expensive to create. DSL supporters answer that DSLs are not designed, they evolve. Well, won’t any of those “evolutionary steps” risk breaking the entire development based on that DSL, much like a broken app build? Will the evolution in the language be severe enough to trash what has been done so far? Can you imagine yourself developing a complex C++ software system while C++ itself was still being designed and developed?

Domain-specific languages are an excellent tool, and they can certainly cover specific domains better than the "general-purpose" modelers, as DSL advocates like to call them. But that is precisely their limit. A DSL is confined to one domain. It is very useful for a specific purpose, but it is not suited to articulating a complex system. A Babel of languages is not the way to manage a large application, and those promoting one DSL scheme or another should acknowledge it. Conversely, a general-purpose modeler, in whatever flavor one prefers, is able to handle such a system, and can probably integrate specific domains with the help of a DSL. What a DSL lacks is horizon, perspective, which is the dimension expected of MDD. In a sense, the winding down of Oslo (and, before it, of the "Software Factories") and the new emphasis on the "DSL" features of its tools reinforces the impression that Microsoft's development team still has no clear idea of how to achieve a global view of the development process.
As for the rest of the observations, I share Rui's questions. In particular, who is a tool for creating domain-specific languages really aimed at? Despite suggestions that such tools would see almost universal use, only a robust team, able to devote time to developing a syntax, can take on building a language of its own: a large corporation with a sufficient budget, or a company dedicated to a specific product line. Beyond that, all the questions stand.
As for Rui, he is the author of ABSE, a modeling tool whose project is under way.

Sunday, November 22, 2009

Criticisms of UML

In early October, Steven Kelly highlighted a study by W.J. Dzidek, E. Arisholm and L.C. Briand (published in IEEE Transactions on Software Engineering, Vol. 34, No. 3, May/June 2008) on the supposed productivity of UML in maintaining an application. The aim of the study was to measure and compare the performance of UML, generating code from a model, against a team programming in Java. A second, very important aspect was that the project was carried out as maintenance, not as development from scratch. That is, each team had to modify different aspects of an existing, documented application, rather than develop from zero and without constraints.
The experiment does not leave UML looking very good: the differences in its favor are not large, and only on condition that the time spent updating the diagrams is excluded. The team programming in Java manages to stay quite close to the times of the team working with UML.
These are the aspects highlighted by Steven, who is committed to MetaEdit+, one of the most established tools oriented to domain-specific languages on the market, and who sees his claim about the use of UML as vindicated: "empirical research shows that using UML does not improve software development productivity".
Steven's observations on the validity of the experiment are well aimed:

One bad thing about the article is that it tries to obfuscate this clear result by subtracting the time spent on updating the models: the whole times are there, but the abstract, intro and conclusions concentrate on the doctored numbers, trying to show that UML is no slower. Worse, the authors try to give the impression that the results without UML contained more errors -- although they clearly state that they measured the time to a correct submission. They claim a "54% increase in functional correctness", which sounded impressive. However, alarm bells started ringing when I saw the actual data even shows a 100% increase in correctness for one task. That would mean all the UML solutions were totally correct, and all the non-UML solutions were totally wrong, wouldn't it? But not in their world: what it actually meant was that out of 10 non-UML developers, all their submissions were correct apart from one mistake made by one developer in an early submission, but which he later corrected. Since none of the UML developers made a mistake in their initial submissions of that particular task, they calculated a 100% difference, and try to claim that as a 100% improvement in correctness -- ludicrous!

To calculate correctness they should really have had a number of things that had to be correct, e.g. 20 function points. Calculated like that, the value for 1 mistake would drop by a factor of 20, down from 100% to just 5% for that developer, and 0.5% over all non-UML developers. I'm pretty sure that calculated like that there would be no statistically significant difference left. Even if there was, times were measured until all mistakes were corrected, so all it would mean is that the non-UML developers were more likely to submit a code change for testing before it was completely correct. Quite possibly the extra 15% of time spent on updating the models gave the developer time to notice a mistake, perhaps when updating that part of the model, and so he went straight back to making a fix rather than first submitting his code for testing. In any case, to reach the same eventual level of quality took 15% longer with UML than without: if you have a quality standard to meet, using UML won't make you get there any more certainly, it will just slow you down.
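
To make the arithmetic behind Steven's objection explicit, here is a minimal sketch; the 20-function-point granularity is his own illustrative figure, and the other numbers come from his account of the study.

```python
# Figures as described by Steven Kelly: ten non-UML developers, one of whom
# made a single mistake (later corrected) in an early submission of one task.
non_uml_developers = 10
mistakes = 1

# The study's per-task, all-or-nothing counting turns that single slip into a
# "100% improvement in correctness" for the task, since no UML developer's
# initial submission of that task was wrong.

# Counting per function point instead (20 is an illustrative assumption):
function_points_per_task = 20
rate_for_that_developer = mistakes / function_points_per_task
rate_across_all = rate_for_that_developer / non_uml_developers

print(f"Error rate for that developer: {rate_for_that_developer:.1%}")   # 5.0%
print(f"Error rate across all non-UML developers: {rate_across_all:.2%}")  # 0.50%
```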

It remains to be seen how much the choice of tool (Borland Together for Eclipse) influenced the result, and whether another modeler would have improved the numbers. Even so, it is striking to find the two teams so close together. It still looks as though forcing the whole development of the model through the diagram types that exist today is not enough. This is a recurring discussion on The Model Driven Software Network, both in earlier conversations (1, 2, 3) and in the thread opened about this very experiment.

For my part, I would like to add one more observation to Steven's:
The experiment sets out to act on a model "in motion," to avoid creating an idealized case in which a clean solution is built from scratch. Yet by starting from a tidy, standardized, well-documented development, it creates a laboratory environment. A comparison between model-based development and development based on direct code (say, 3GL) will never look like that: as a model-based development and an equivalent code-based one evolve, the opacity of the design will grow, probably exponentially, and the gap will be wider the more time has passed and the more people have been involved. If we work on a design that is six months old, the differences in opacity will not be very large; but if the application is two, three, four years old, and two or three waves of developers have passed through it, it will undoubtedly be more productive to work with a model than to navigate the code and bet that the change we make will not blow up somewhere else.
The study also overlooks the fact that work done directly on source code will be exposed to the differing styles of the groups or individuals involved, and may contain dead ends, patches, and even fraud or sabotage. On a complex application, there is no way the time needed to work through a model will equal the time taken to work on the code itself. Much less so if the people have just been hired for the task, as the experiment would have it.
And that is without introducing any independent variable, such as a version change in infrastructure or framework software that affects the application.
As always, I should make clear that I do not use UML in my daily work. However, I prefer to extend the scope of these criticisms to the use of modeling tools in general. I am aware that Steven is not arguing in favor of source code, because he looks at this from the model-driven development camp, but he compares UML against DSLs. That is another matter, which deserves its own time. But I think it must be made very clear that the generic concept of model-driven development is unquestionably superior to the use of direct code, and the more complex the case, the greater the advantage. The point is to find a way to express the dynamic behavior of a model flexibly and nimbly, something UML does not yet seem to solve satisfactorily. Is there a solution? No doubt there will be. In my case, at least, one solution exists. But that, too, will have to wait for another occasion.

Saturday, November 14, 2009

Oslo: removing the ambiguity from expectations

A hard role has been reserved for Douglas Purdy: announcing the demise of the Oslo project, and doing so with enthusiasm. Given the transience of the Internet, and the variability of the information Microsoft offers about its products, it is time to preserve a picture of what Oslo was before its memory gets redesigned. I tried to look up specific Microsoft pages on the Wayback Machine, but its pages seem hard to archive: none of my searches returned results. Wikipedia remains, however: what follows is the essence of the still-unmodified description of the product as of October 3, the date of the last change:

History

Originally, in 2007, the "Oslo" name encompassed a much broader set of technologies including "updated messaging and workflow technologies in the next version of BizTalk Server and other products" such as the .NET Framework, Microsoft Visual Studio, and Microsoft System Center (specifically the Operations Manager and Configuration Manager).[1]

By September 2008, however, Microsoft changed its plans to redesign BizTalk Server.[2] Other pieces of the original "Oslo" group were also broken off and given identities of their own; "Oslo" ceased to be a container for future versions of other products. Instead, it was identified as a set of software development and systems management tools around "Oslo":[2]

  • A centralized repository for application workflows, message contracts (which describe an application's supported message formats and protocols), and other application components
  • A modeling language to describe workflows, contracts, and other elements stored in the repository
  • A visual editor and other development tools for the modeling language
  • A process server to support deployment and execution of application components from the repository.

When "Oslo" was first presented to the public at the Microsoft Professional Developers Conference in October 2008, this list had been focused even further. The process server was split off as code name "Dublin" that would work with "Oslo", leaving "Oslo" itself composed of the first three components above that are presently described (and rearranged) as follows:[3]

  • A storage runtime (the code name "Oslo" repository, built on Microsoft SQL Server) that is highly optimized to provide your data schemas and instances with system-provided best SQL Server practices for scalability, availability, security, versioning, change tracking, and localization.
  • A configurable visual tool (Microsoft code name "Quadrant") that enables you and your customers to interact with the data schemas and instances in exactly the way that is clearest to you and to them. That is, instead of having to look at data in terms of tables and rows, "Quadrant" allows every user to configure its views to naturally reveal the full richness of the higher-level relationships within that data.
  • A language (Microsoft code name "M") with features that enable you to model (or describe) your data structures, data instances, and data environment (such as storage, security, and versioning) in an interoperable way. It also offers simple yet powerful services to create new languages or transformations that are even more specific to the critical needs of your domain. This allows .NET Framework runtimes and applications to execute more of the described intent of the developer or architect while removing much of the coding and recoding necessary to enable it.

Relationship to "Dynamic IT"

"Oslo" is also presently positioned as a set of modeling technologies for the .NET platform and part of the effort known as Dynamic IT. Bob Muglia, Senior Vice President for Microsoft's Server & Tools Business, has said this about Dynamic IT:[4]

It costs customers too much to maintain their existing systems and it's not easy enough for them to build new solutions. [We're focused] on bringing together a cohesive solution set that enables customers to both reduce their ongoing maintenance costs while at the same time simplifying the cost of new application development so they can apply that directly to their business.

The secret of this is end-to-end thinking, from the beginning of the development cycle all the way through to the deployment and maintenance, and all the way throughout the entire application lifecycle.

One of the pillars of this initiative is an environment that is "model-driven" wherein every critical aspect of the application lifecycle from architecture, design, and development through to deployment, maintenance, and IT infrastructure in general, is described by metadata artifacts (called "models") that are shared by all the roles at each stage in the lifecycle. This differs from the typical approach in which, as Bob Kelly, General Manager of Microsoft's Infrastructure Server Marketing group put it,[5]

[a customer's] IT department and their development environment are two different silos, and the resulting effect of that is that anytime you want to deploy an application or a service, the developer builds it, throws it over the wall to IT, they try to deploy it, it breaks a policy or breaks some configuration, they hand that feedback to the developer, and so on. A very costly [way of doing business].

By focusing on "models"—model-based infrastructure and model-based development—we believe it enables IT to capture their policies in models and also allows the developers to capture configuration (the health of that application) in a model, then you can deploy that in a test environment very easily and very quickly (especially using virtualization). Then having a toolset like System Center that can act on that model and ensure that the application or service stays within tolerance of that model. This reduces the total cost of ownership, makes it much faster to deploy new applications and new services which ultimately drive the business, and allows for a dynamic IT environment.

To be more specific, a problem today is that data that describes an application throughout its lifecycle ends up in multiple different stores. For example:

  • Planning data such as requirements, service-level agreements, and so forth, generally live in documents created by products such as Microsoft Office.
  • Development data such as architecture, source code, and test suites live within a system like Microsoft Visual Studio.
  • ISV data such as rules, processes modes, etc. live within custom data stores.
  • Operation data such as health, policies, service-level agreements, etc., live within a management environment like Microsoft System Center.

Between these, there is little or no data sharing between the tools and runtimes involved. One of the elements of "Oslo" is to concentrate this metadata into the central "Oslo" repository based on SQL Server, thereby making that repository really the hub of Dynamic IT.

Model-Driven Development

"Oslo," then, is that set of tools that make it easier to build more and more of any application purely out of data. That is, "Oslo" aims to have the entire application throughout its entire lifecycle completely described in data/metadata that it contained within a database. As described on "Oslo" Developer's Center:[3]

Model-driven development in the context of "Oslo" indicates a development process that revolves around building applications primarily through metadata. This means moving more of the definition of an application out of the world of code and into the world of data, where the developer's original intent is increasingly transparent to both the platform (and other developers). As data, the application definition can be easily viewed and quickly edited in a variety of forms, and even queried, making all the design and implementation details that much more accessible. As discussed in this topic already, Microsoft technologies have been moving in this direction for many years; things like COM type libraries, .NET Framework metadata attributes, and XAML have all moved increasingly toward declaring one's intentions directly as data—in ways that make sense for your problem domain—and away from encoding them into a lower-level form, such as x86 or .NET intermediate language (IL) instructions. This is what the code name "Oslo" modeling technologies are all about.

The "models" in question aren't anything new: they simply define the structure of the data in a SQL server database. These are the structures with which the "Oslo" tools interact.

Characteristics of the "Oslo" Repository and Domains

From the "Oslo" Developer's Center: [3]

The "Oslo" Repository provides a robust, enterprise-ready storage location for the data models. It takes advantage of the best features of SQL Server 2008 to deliver on critical areas such as scalability, security, and performance. The "Oslo" repository's Base Domain Library (BDL) provides infrastructure and services, simplifying the task of creating and managing enterprise-scale databases. The repository provides the foundation for productively building models and model-driven applications with code name "Oslo" modeling technologies.

"Oslo" also includes additional pre-built "domains," which are pre-defined models and tools for working with particular kinds of data. At present, such domains are included for:[6]

  1. The Common Language Runtime (CLR), which supports extracting metadata from CLR assemblies and storing them in the "Oslo" repository in such a way that they can be explored and queried. A benefit to this domain is that it can maintain such information about the code assets of an entire enterprise, in contrast to tools such as the "Object Explorer" of Microsoft Visual Studio that only works with code assets on a single machine.
  2. Unified Modeling Language (UML), which targets the Object Management Group's Unified Modeling Language™ (UML™) specification version 2.1.2. UML 2.1.2 models in the Object Management Group's XML Metadata Interchange (XMI) version 2.1 file format can be imported into the code name "Oslo" repository with a loader tool included with "Oslo".

Note that while the "Oslo" repository is part of the toolset, models may be deployed into any arbitrary SQL Server database; the "Quadrant" tool is also capable of working with arbitrary SQL Server databases.

Characteristics of the "M" Modeling Language

According to the "Oslo" Developer's Center, the "M" language and its features are used to define "custom language, schema for data (data models), and data values." [3] The intention is to allow for very domain-specific expression of data and metadata values, thereby increasing efficiency and productivity. A key to "M" is that it allows for making statements "about the structure, constraints, and relationships, but says nothing about how the data is stored or accessed, or about what specific values an instance might contain. By default, 'M' models are stored in the 'Oslo' repository, but you are free to modify the output to any storage or access format. If you are familiar with XML, the schema definition feature is like XSD." [3] The "M" language and its associated tools also simplify the creation of custom domain-specific languages (DSLs) by providing a generic infrastructure engine (parser, lexer, and compiler) that's configured with a specific "grammar". Developers have found many uses for such easy-to-define custom languages.[7]

Recognizing the widespread interest in the ongoing development of the language, Microsoft shifted that development in March 2009 to a public group of individuals and organizations called the "M" Specification Community.

Characteristics of the "Quadrant" Model Editor

"Oslo's" model editor, known as "Quadrant," is intended to be a new kind of graphical tool for editing and exploring data in any SQL Server database. As described on the "Oslo" Developer's Center: [3]

A user can open multiple windows (called workpads) in "Quadrant". Each workpad can contain a connection to a different database, or a different view of the same database. Each workpad also includes a query box in which users can modify the data to a single record or a set of records that match the query criteria.

"Quadrant" features a different way of visualizing data: along with simple list views and table views, data can be displayed in a tree view, in a properties view, and in variable combinations of these four basic views. An essential part of this is the ability to dynamically switch, at any time, between the simplest and the most complex views of the data. As you explore data with these views, insights and connections between data sets previously unknown may become apparent. And that has benefits for those using the Microsoft "Oslo" modeling technologies to create new models. As part of the "Oslo" modeling technologies toolset, "Quadrant" enables "Oslo" developers to view new models with "Quadrant" viewers. The "Quadrant" data viewing experience enables designers of DSLs to quickly visualize the objects that language users will work with. In this way, "Quadrant" will give developers a quick vision of their models. With this feedback, "Quadrant" can also provide a reality check for the model designer, which may in turn lead to better data structures and models.

In the future, Microsoft intends for "Quadrant" to support greater degrees of domain-specific customization, allowing developers to exactly tailor the interaction with data for specific users and roles within an enterprise.
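
As a rough illustration of the pipeline the "M" description above promises (a textual schema run through a generic parser and turned into SQL artifacts), here is a hedged sketch in Python. The mini-language, its type names and the SQL mapping are all invented for the example; this is not "M" syntax and does not use Microsoft's tooling.

```python
import re

# An invented textual schema, loosely in the spirit of the "M" idea:
# describe data structures once, then generate storage artifacts from them.
SCHEMA = """
type Person  { Name: Text; Age: Integer; }
type Invoice { Number: Integer; Amount: Decimal; }
"""

TYPE_MAP = {"Text": "NVARCHAR(255)", "Integer": "INT", "Decimal": "DECIMAL(18,2)"}
TYPE_RE = re.compile(r"type\s+(\w+)\s*\{(.*?)\}", re.S)
FIELD_RE = re.compile(r"(\w+)\s*:\s*(\w+)\s*;")

def parse(schema: str):
    """Turn the textual schema into a list of (type name, fields) pairs."""
    return [(name, FIELD_RE.findall(body)) for name, body in TYPE_RE.findall(schema)]

def to_sql(model) -> str:
    """Generate CREATE TABLE statements from the parsed model."""
    statements = []
    for name, fields in model:
        cols = ",\n".join(f"    {col} {TYPE_MAP[typ]}" for col, typ in fields)
        statements.append(f"CREATE TABLE {name} (\n{cols}\n);")
    return "\n\n".join(statements)

if __name__ == "__main__":
    print(to_sql(parse(SCHEMA)))
```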

If you follow the link in the Wikipedia article (box at the top right, "Code Name Oslo", website address), you will find that the Oslo project no longer exists as a unit; the link now leads to the Data Platform Developer Center. No reference there, nothing that tells us what Douglas Purdy announces in his blog, although it may be early. The link is already being redirected, though. We shall see...
Even today, November 14, the link http://msdn.microsoft.com/en-us/library/cc709420.aspx still points to the Oslo section of the MSDN Library, under its .NET reference. All of that content will no doubt have to be reengineered. Browsing through it while it is still not overly transformed gives an idea of the scale of the retreat, if we then come back to Purdy's terse redefinition:

The components of the SQL Server Modeling CTP are:

  • “M” is a highly productive, developer friendly, textual language for defining schemas, queries, values, functions and DSLs for SQL Server databases
  • “Quadrant” is a customizable tool for interacting with large datasets stored in SQL Server databases
  • “Repository” is a SQL Server role for the secure sharing of models between applications and systems

We will announce the official names for these components as we land them, but the key thing is that all of these components are now part of SQL Server and will ship with a future release of that product.

It is not only the MSDN pages on Oslo, or Douglas, that will have to be "refactored." Since June 2008 Steve Cook had been working on integrating UML into Visual Studio while also collaborating with the Oslo project on UML integration. What role did Oslo play in that work? What will Steve's role be now? Looking back over the news published during the last year and a half, the impression that remains is that two parallel lines of research coexisted, and that at least one of them has now been shunted onto a dead-end track. Now what Stuart Kent wrote in November 2008 makes sense:

The Oslo modeling platform was announced at Microsoft's PDC and we've been asked by a number of customers what the relationship is between DSL Tools and Oslo. So I thought it would be worth clearing the air on this. Keith Short from the Oslo team has just posted on this very same question. I haven’t much to add really, except to clarify a couple of things about DSL Tools and VSTS Team Architect.

As Keith pointed out, some commentators have suggested that DSL Tools is dead. This couldn’t be further from the truth. Keith himself points out that "both products have a lifecycle in front of them". In DSL Tools in Visual Studio 2010 I summarize the new features that we're shipping for DSL Tools in VS 2010, and we'll be providing more details in future posts. In short, the platform has expanded to support forms-based designers and interaction between models and designers. There's also the new suite of designers from Team Architect including a set of UML designers and technology specific DSLs coming in VS 2010. These have been built using DSL Tools. Cameron has blogged about this, and there are now some great videos describing the features, including some new technology for visualizing existing code and artifacts. See this entry from Steve for details.

The new features in DSL Tools support integration with the designers from Team Architect, for example with DSLs of your own, using the new modelbus, and we're exploring other ways in which you can enhance and customize those designers without having to take the step of creating your own DSL. Our T4 text templating technology will also work with these designers for code generation and will allow access to models across the modelbus. You may also be interested in my post Long Time No Blog, UML and DSLs which talks more about the relationship between DSLs and UML.

But coming back to those who staked their opinions on Oslo: how do they feel now? I am thinking of opinions such as those expressed in the article "Creating Modern Applications: Workflows, Services, and Models" by David Chappell. As early as October 2008, David was previewing the project's features, selling what was still only an outline. Even later we saw some of those previewed elements set aside while the project, still immature, kept being presented as a reality. And so on, until the rude awakening of November 10.
Among the conclusions that can be drawn from this project, now apparently in the process of being buried, two are of particular interest:
  • Selling what are still sketches as if they were realities is not a good business model. The customer (end-user companies, the developer community, independent consultants, researchers) ends up hurt, for different reasons: some for postponing decisions while waiting for a flagship product, others for having staked their word on something later discarded, and others for wasting time waiting for a tool that was never taken seriously.
  • It is problematic to entrust the development of advanced research to the market plans of a single company.

Tuesday, November 10, 2009

Oslo: the mountain gives birth to a mouse?

I have just read Douglas Purdy's post, and I had to read it twice and check my glasses: Oslo becomes a modeling tool for SQL Server? M is a textual language for defining schemas, queries, functions and DSLs for SQL Server databases? Quadrant is a tool for interacting with datasets? Purdy confirms what Jacques Dubray had anticipated when the takeover by the Data Developer Center was announced.
Douglas's full post, with my highlights in green:

As I stated in my previous post, we have been on a journey with “Oslo”. At the 2007 SOA/BP conference we announced that “Oslo” was a multiyear, multiproduct effort to simplify the application development lifecycle by enhancing .NET, Visual Studio, BizTalk and SQL Server. At PDC 2008, we announced that various pieces of “Oslo” were being spun off and shipped in the application server (“Dublin”), the cloud (.NET Services), and the .NET Framework (WF/WCF 4.0). We rechristened the “Oslo” name for the modeling platform pieces of the overall vision.

In the year since PDC 2008, we delivered three public CTPs and conducted many software design reviews (SDRs) with key customers, partners and analysts. We listened intently to the feedback and it helped us to shape our approach toward bringing this technology to market. With PDC now one week away, we are beginning to disclose the next chapter in the journey to “Oslo”, with more to be unveiled at various keynotes and sessions at the PDC event itself.

One of the key things we observed over the last year was the real, tangible customer value in applying “Oslo” to working with SQL Server. Time after time we heard that “M” would make interacting with the database easier, provided we offered a good end-to-end experience with tools (VS) and frameworks (Entity Framework and Data Services) that developers use today. We heard that developers wanted to use the novel data navigation/editing approach offered by “Quadrant” to access their data in whatever SQL Server they wanted, not just the “Repository”. We heard that the notion of a “Repository” as something other than SQL Server was getting in the way of our conversations with customers.

Another thing we learned was that most of the customers that we wanted to leverage the modeling platform were already using SQL Server as their “repository”. Take an application like SharePoint. It is already model-driven. It already stores its application definition in a database. Dynamics is the same way. Windows Azure is the same way. System Center is the same way. What we didn’t have was a common language, tools or models that spanned all of these applications, although they were all leveraging the same database runtime. The simplest path to get all of these customers sharing a common modeling platform seemed obvious.

Lastly, we learned that the folks on the SQL Server team were hearing the need for additional mechanisms to make the database more approachable to developers. Developers did not want to use three different languages to build their database applications (T-SQL, a .NET language and an XML mapping file). Developers wanted new tools that let them deal with the truly massive amount of data they need to handle on a daily basis. Developers wanted to radically simplify their interactions with the database, with a straightforward way of writing down data and getting an application as quickly as possible.

With all of the above in mind, we just announced (at VS Connections) the transition from “Oslo” to SQL Server Modeling. At PDC, we will release a new CTP using this name, SQL Server Modeling CTP, that will begin to demonstrate how developers will use these technologies in concert with things like T-SQL, ADO.NET, ASP.NET and other parts of the .NET Framework to build database applications.

The components of the SQL Server Modeling CTP are:

  • “M” is a highly productive, developer friendly, textual language for defining schemas, queries, values, functions and DSLs for SQL Server databases
  • “Quadrant” is a customizable tool for interacting with large datasets stored in SQL Server databases
  • “Repository” is a SQL Server role for the secure sharing of models between applications and systems

We will announce the official names for these components as we land them, but the key thing is that all of these components are now part of SQL Server and will ship with a future release of that product.

At PDC, we will unify the “Oslo” Developer Center and the Data Developer Center. You will be able to find the new SQL Server Modeling CTP at our new home (http://msdn.microsoft.com/data) the first day of PDC. I encourage you to download this CTP and send us your feedback.

If you are attending PDC, we have some great sessions and keynotes that will highlight the work we are doing with SQL Server Modeling. My personal favorite is “Active Directory on SQL Server Modeling” (the actual title is The ‘M’-Based System.Identity Model for Accessing Directory Services), which is going to show how a serious “ISV” is using these technologies.

Speaking for myself and the team, we are very excited about this transition. Many of us have worked on numerous “v1” products while at Microsoft. This sort of transition is exactly what successful “v1” products/technologies undergo, based on our collective experience. You have a vision based on customer need. You write some code. You get customer feedback. You adjust. You repeat. You find the place that maximizes your investment for customers. You focus like a laser on delivering that customer value. You ship.

Looking forward to the next chapter…

There are a hundred observations to be made here. That will be for next time. For today, the news itself is enough.

Monday, November 09, 2009

Plex: Ramon Chen recalls the importance of the Lava Lounge


In April of this year, Ramon Chen published a good analysis of the creation of the Synon user community, called the Lava Lounge, built around its product Obsydian, which today is Plex. He highlights its importance in building a strong community in which the product's owner (then Synon) and its users collaborate. That model, somewhat more impersonal now, has been maintained over time, partly thanks to the energy invested by Bill Hunt, the product's current owner at CA. Ramon says:
1. Some History and Background (skip to section 2 if you only care about the marketing part of it :-) )
Back in 1995, way before social networking, there were very limited ways to get your message out to a wide audience. Even full Internet access was somewhat restricted at many companies, and corporate e-mail was just getting going outside of the “firewall”. I had just taken over product marketing at Synon, adding to my product management duties, and had made up a list of high-priority items for my next 90 days. One idea that had always been brewing in the back of my mind was the notion of an online Synon community. Synon at this time was already a highly successful company, with about 3000 companies using the app dev tool worldwide. We regularly held annual user conferences which averaged 600+ attendees, and there were local regional chapters of user groups which met every few months. What was missing was a way of consistently distributing up-to-date information about our products and also of tapping into the passion and evangelism of our customers.

I first discovered the WWW back in 1993, when I was an architect for our Unix product called Synon/Open. I soon got myself online at home via a service called Netcom, which I later converted to AOL. Synon (and many other companies) were using CompuServe back then, and I also had an account for those purposes. AOL was rapidly gaining in popularity, although it restricted usage within its “Walled Garden” and browsing of the WWW was done outside of the service via a TCP/IP tunnel. Nevertheless, I reasoned that many people now had access to the Synon corporate website (either from work or from home), so I determined that it was the right time to launch a community for Synon online.

I approached my boss at the time, Bill Yeack, and made a proposal for the LavaLounge (so called because the new product that we had just launched was called Obsydian – a black lava rock). He gave his blessing, but not a budget. His challenge: “Show me that there is interest and the dollars will follow”. Given this, I worked with Bill Harada, our excellent internal graphics and web designer, and asked him to create an area off the Synon corporate website with a set of template pages. I then got hold of a copy of FrontPage and built my first website; the LavaLounge was born!

2. If you build it will they come?
I had to find a way to encourage Synon users to “join and register” for the LavaLounge so that we could control access to restricted content via login and pwd. Being an ex-developer, I reasoned that exclusivity was always a major selling point, but the “cool factor” was being the first to get on board. In addition to the usual “pre-qualification, limited number of spots available” messaging, I used the concept of a Club (similar to the frequent flyer clubs and loyalty programs of consumer product companies). Dubbed “Club Lava”, I further used the only currency I had (with no budget): the sequential allocation of Club Lava IDs, starting at 00001, on a first-submitted, first-allocated basis. I also published an electronic membership card, which people could print out and put in their wallet to recognize their “status”.

In November 1995 (there would later be a revamp of the site in Sept 2006), I launched the LavaLounge with my colleague Wasim Ahmad and began accepting Club Lava memberships (click for the registration form).
Initially registrations were slow, due to the limited ways we could get the word out, but registrations really began picking up and, much to my delight, I could see on the CompuServe chat boards that people were comparing their Club Lava ID numbers to see who had the lowest ones! Word of mouth began to spread and within a week we had over 100 registrants.

3. Ok, they’re here, now what?
The next phase was to “make good” on the promises of Club Lava which included all of the benefits we advertised:

  • Access to Club Lava, a password protected area of the Lava Lounge where you will be able to chat and exchange messages with fellow Obsydian developers from around the World.
  • Access to tips and techniques from Dr O (who will be moving into and operating exclusively in Club Lava).
  • A unique membership number assigned in the order applications are submitted and approved, identifying you as a charter Club Lava member.
  • Your name and e-mail registered in the online Club Lava Directory (you will be automatically registered unless you specify otherwise) recognizing you as a leading edge Obsydian developer.
  • Opportunities to be interviewed for “Hot Rocks” (which will contain profiles of the hottest Obsydian projects on the planet)
  • Invitations to special Club Lava events at Synon International User Conferences.
  • The chance to win special Lava merchandise from the Lava Object Store (under construction & awaiting permits).
  • Regular “LavaLights” e-mails: a Club Lava members’ news and views e-mail from Synon, keeping you informed of the latest developments in the World of Obsydian.
  • An official Club Lava Membership card, personalized with your name & company name

We made good on all of those benefits, through lots of hard nights. But the most important was our relentless postings on the LavaLounge and Club Lava. Just as it is today, CONTENT, CONTENT, CONTENT is key. Fresh, new, interesting, relevant and consistent (just like blogging :-) )

Also by this time, I had gone back to my boss, shown him the list of registrants and gotten some $$$ for future activities, which I put to good use producing the first round of Club Lava black t-shirts that I would distribute at Club Lava events at the Synon User Conferences and on my travels around the world to regional user groups. Each attendee would also have a special badge indicating their ID number and would be invited to present on their tips/best practices; they would later be recognized online for their contributions to forum questions and their evangelism through reward points.

4. What ultimately was the point of Club Lava and the LavaLounge?
The formation of the Club achieved several objectives:

  1. It brought together the Synon customers and partners into an online forum so that they could exchange ideas, help each other and build long-lasting relationships … some of which are still evident today, even through two M&As of the Synon product line, which is now owned by Computer Associates
  2. It allowed us to distribute customer only information through a secured medium using the web and supported opportunities for us to inform and upsell new offerings through roadmap updates
  3. We captured use cases and statistics from our customer base on a large scale which I would later use for product management interviews and further focus groups and requirements analysis
  4. We asked them “what do you most like about Obsydian?” on the registration form, and the answers were enlightening. We were later able to use those quotes with approval in outbound marketing materials
  5. We unleashed the pent-up evangelism and expertise within our knowledgeable customer base to increase the implementation successes of our products, as well as strengthening our external marketing perception of a happy customer base (which no doubt contributed towards our eventual acquisition by Sterling Software)
  6. In terms of stats and metrics: Over 18 months of the Club and Lounge’s existence:
    Approx 1000+ members by acquisition, 5 major and point releases previewed, 15 product focus groups initiated with 150 responses worldwide, over 20 major deals leveraging references from the technical community, and lots and lots of happy evangelising customers who are still dedicated to the product today.

Much of this probably seems obvious to many experienced marketers, but nearly 15 years ago, it was a little bit innovative.


Sunday, November 08, 2009

Frank Soltis and the iManifest


The Four Hundred talks with Frank Soltis about the initiative by iSeries (AKA AS400) business partners to take the promotion of the AS400 into their own hands. Soltis says of IBM's commercial policy:
"It has been clear to me that it's up to user groups and business partners to continue to promote the product," Soltis says. "That was something that IBM made a decision on sometime back in the 1990s. Lou Gerstner came in (as IBM chairman and CEO) and one of his first decisions was that IBM would promote IBM rather than promote individual products. He took the individual budgets that general managers had for advertising and consolidated them into one budget that focused on IBM. That has really never changed since."
It is this attitude of putting the promotion of its own products in second place that the iManifest is trying to change, first in Japan and now in the United States.
"IBM does not have to market Windows," Soltis points out. "The world knows what it is and Microsoft does their job promoting it. The same thing with Unix. You don't see vendors marketing Unix. They market it from the standpoint that ours is better than anybody else's, but they don't have to promote the concept of Unix. With IBM i and z, both systems are well-known within their user bases, but not very well known outside of that. You have to really promote those. In that sense, i has suffered a bit because the rest of the industry does not promote IBM i. From IBM's standpoint, I don't think they see much difference among the platforms in terms of which ones require more marketing."
Soltis believes that an organized and extensive iManifest support effort in the United States will push IBM to promote and take part in this defense:

"One of the things I admire about iManifest Japan is that it is very organized," Soltis says. "The group is made up of many people who have been together for many years. It is similar to the U.S. in that sense. They tend to work very closely. There is a lot of cooperation. That seems to be paying off. This is cooperation not just with the business partner community but also with IBM."

Get the numbers, get the cooperation, and get the organization within iManifest U.S. and IBM will get onboard. Soltis is sure of that.

"IBM will get involved in the iManifest in the United States, if iManifest puts together a good enough coalition. It has shown that it will do this by participating in iManifest Japan," he says.

IBM has co-sponsored at least two events with iManifest Japan that have promoted both the IBM i and the business partners' products. Both have been described as successful by companies affiliated with iManifest Japan.

Although he holds no formal position with iManifest Japan, Soltis feels close to the developments going on there. He has a dialogue with key people in that organization and is discussing what has worked in that situation. The open communication should make things easier for iManifest U.S., but that's not to say it will be easy.

Soltis commits to participating in the movement:
"I am looking at taking the iManifest message to the business partners and user groups, and that fits within the role that I am currently involved in," Soltis says. "I plan to continue this level of involvement for at least several more years. This is a way that I can contribute to the System i community. I think eventually you will see joint activities among all iManifest regions--Japan, EMEA, and the U.S. To me it would make a lot of sense to do this on a worldwide basis. Some of the big business partners that are worldwide in scope would probably see advantages in working across all geographies."
A remarkable situation: a company neglects some of its best products in pursuit of a new business model, and it falls to its business partners to defend them. We will talk about that model later... a model that may no longer be so advantageous without the backing of the very products that gave the company its prestige and reputation.
On the iManifest, see its original content in the Japanese initiative.

Photograph of Soltis taken from IBM Systems Magazine, from an article about his role in the conception of the AS400.