Comments, discussions, and notes on trends in the development of computing technology, and on the importance of quality in software construction.
Saturday, November 28, 2009
Plex-XML
Wednesday, November 25, 2009
A well-aimed critique of DSLs
The first one is very brief but substantial: Rui Curado, a member of The Model Driven Software Network, presents his objections to domain-specific languages (DSLs). They are reasons I also subscribe to:
First of all, Rui acknowledges that his own use of DSLs is limited. For that very reason, he turns to what others have said:
I really don’t have much experience with DSLs, so I won’t use my own arguments. I’ll let the community speak for myself. Here is a tiny sample of DSL criticism:
http://c2.com/cgi/wiki?DomainSpecificLanguage
… the Tower Of Babel effect that results from having so many languages that a person needs to be familiar with (if not master) in order to get common jobs done.
…
writing a good DSL that is robust and reusable beyond a limited context is a lot of work. It requires persons with both domain expertise and language design expertise (not necessarily the same person) to pull off. A truly reusable DSL will require reasonable documentation, otherwise it won’t survive the departure of its designers.

http://www.theserverside.com/news/thread.tss?thread_id=42078
The other mistake some folks make is they think that with a DSL that “non-coders” can actually use the DSL.
But since a DSL is designed for automation of an underlying object model, writing in a DSL is just like writing in any language — whether it’s DOS .BAT files or Java, and it takes a coding mindset and understanding of the underlying domain as represented by the computer in order to make effective use.
There was much more written on this thread. You can go to the original page to read more opinions.
Domain-specific languages are an excellent tool, and they can certainly cover specific domains more adequately than the "general purpose" modelers, as DSL advocates usually call them. But that is precisely their limit. A DSL is confined to a domain. It is very useful for a specific purpose, but it is not adequate for articulating a complex system. A Babel of languages is not the way to manage a large application, and those who promote one DSL scheme or another ought to acknowledge that. Conversely, a general-purpose modeler, in whatever flavor one prefers, is capable of handling such a system, and can probably integrate specific domains with the help of a DSL. What a DSL lacks is horizon, perspective, which is the dimension expected of MDD. In a sense, the decline of Oslo (and, before it, of the "Software Factories") and the new emphasis on the "DSL" traits of its tools keep alive the impression that Microsoft's development team is still not clear about how to obtain a global view of the development process.

Rui raises his objections in the form of questions:

As general use of DSLs become mainstream, so become the complaints about their shortcomings. If we take so much time to master a general purpose language, should we invest a comparable amount of time in limited-use languages? How can we get support for a DSL, apart from its own creators? Where’s community support? What happens after the departure of the language’s creator? What’s BNF? Do I need it?
DSL critics say really useful DSLs are hard and expensive to create. DSL supporters answer that DSLs are not designed, they evolve. Well, won’t any of those “evolutionary steps” risk breaking the entire development based on that DSL, much like a broken app build? Will the evolution in the language be severe enough to trash what has been done so far? Can you imagine yourself developing a complex C++ software system while C++ itself was still being designed and developed?
As for the rest of the observations, I share Rui's questions. In particular, who is a tool for creating domain-specific languages really aimed at? Beyond the suggestions that such tools would be in almost general use, only a robust team, one able to devote time to developing a syntax, can set out to build its own language: a large corporation with a sufficient budget, or a company dedicated to a specific product line. Beyond that, all the questions stand.
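To make the "confined to its domain" point concrete, here is a minimal sketch of an internal DSL embedded in Python for describing pricing rules; the names (rule, price, the order fields) are invented for illustration, not taken from any of the tools discussed here. Inside its narrow domain it reads almost like a business statement, but it says nothing about persistence, user interfaces, or integration with the rest of a system, which is exactly the limitation noted above.

    # A tiny, hypothetical internal DSL for pricing rules, embedded in Python.
    # Illustrative only: it is expressive *inside* its domain and silent about
    # everything else (persistence, UI, workflow, integration).

    class Rule:
        def __init__(self, name, condition, discount):
            self.name = name            # human-readable rule name
            self.condition = condition  # predicate over an order
            self.discount = discount    # fraction subtracted from the total

    RULES = []

    def rule(name, when, discount):
        """Register a pricing rule; reads like a domain statement."""
        RULES.append(Rule(name, when, discount))

    # The "domain language": each line is a business statement.
    rule("loyal customer", when=lambda o: o["customer_years"] >= 5, discount=0.05)
    rule("bulk order",     when=lambda o: o["quantity"] >= 100,     discount=0.10)

    def price(order):
        """Apply every matching rule to the order's base total."""
        total = order["unit_price"] * order["quantity"]
        for r in RULES:
            if r.condition(order):
                total *= (1 - r.discount)
        return total

    if __name__ == "__main__":
        order = {"customer_years": 6, "quantity": 120, "unit_price": 2.0}
        print(round(price(order), 2))  # both rules apply: 240 * 0.95 * 0.90 = 205.2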
As for Rui, he is the author of ABSE, a modeling tool whose project is still in progress.
Sunday, November 22, 2009
Criticisms of UML
The experiment does not leave UML looking very good: the differences in its favor are not large, and they only hold if the time spent updating the diagrams is excluded. The team programming in Java manages to stay quite close to the times of the team working with UML.
These are the aspects highlighted by Steven, who is committed to Metaedit, one of the most established Domain Specific Language tools on the market, and who considers his claim about the use of UML revalidated: "empirical research shows that using UML does not improve software development productivity".
Steven's observations on the validity of the experiment are on target:
One bad thing about the article is that it tries to obfuscate this clear result by subtracting the time spent on updating the models: the whole times are there, but the abstract, intro and conclusions concentrate on the doctored numbers, trying to show that UML is no slower. Worse, the authors try to give the impression that the results without UML contained more errors -- although they clearly state that they measured the time to a correct submission. They claim a "54% increase in functional correctness", which sounded impressive. However, alarm bells started ringing when I saw the actual data even shows a 100% increase in correctness for one task. That would mean all the UML solutions were totally correct, and all the non-UML solutions were totally wrong, wouldn't it? But not in their world: what it actually meant was that out of 10 non-UML developers, all their submissions were correct apart from one mistake made by one developer in an early submission, but which he later corrected. Since none of the UML developers made a mistake in their initial submissions of that particular task, they calculated a 100% difference, and try to claim that as a 100% improvement in correctness -- ludicrous!

To calculate correctness they should really have had a number of things that had to be correct, e.g. 20 function points. Calculated like that, the value for 1 mistake would drop by a factor of 20, down from 100% to just 5% for that developer, and 0.5% over all non-UML developers. I'm pretty sure that calculated like that there would be no statistically significant difference left. Even if there was, times were measured until all mistakes were corrected, so all it would mean is that the non-UML developers were more likely to submit a code change for testing before it was completely correct. Quite possibly the extra 15% of time spent on updating the models gave the developer time to notice a mistake, perhaps when updating that part of the model, and so he went straight back to making a fix rather than first submitting his code for testing. In any case, to reach the same eventual level of quality took 15% longer with UML than without: if you have a quality standard to meet, using UML won't make you get there any more certainly, it will just slow you down.
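A quick back-of-the-envelope check of the arithmetic in that last point, assuming only the figures Steven himself mentions (10 non-UML developers, a single early mistake by one of them, and a hypothetical granularity of 20 function points per task):

    # Back-of-the-envelope check of the correctness arithmetic discussed above.
    # Assumed figures, all taken from the quoted text: 10 non-UML developers,
    # one early mistake by one developer, 20 function points per task.

    developers = 10
    function_points = 20
    mistakes = 1

    # Scored against function points, one mistake is 1/20 for that developer...
    error_for_that_developer = mistakes / function_points              # 0.05 -> 5%

    # ...and averaged over all ten developers it nearly vanishes.
    error_over_all_developers = error_for_that_developer / developers  # 0.005 -> 0.5%

    print(f"{error_for_that_developer:.1%}")    # 5.0%, instead of the claimed 100%
    print(f"{error_over_all_developers:.1%}")   # 0.5%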
It remains to be seen how much the choice of tool (Borland Together for Eclipse) influenced the result, and whether another modeler would have improved the numbers. Even so, it is striking to find the two teams so close together. It still seems insufficient to force all of a model's development through the diagram types that exist today. This is a recurring discussion at The Model Driven Software Network, both in earlier conversations (1, 2, 3) and in the thread opened about this very experiment.

For my part, I would like to add one more observation to Steven's:
The experiment sets out to act on a model "in motion", to avoid creating an ideal-conditions case in which a clean solution is built from scratch. However, by starting from a tidy, standardized, well-documented development, it creates a laboratory environment. A comparison between model-based development and development based on direct code (say, a 3GL) will never look like that: as a model-based development and an equivalent code-based one evolve, the opacity of the design will grow, probably exponentially, and the gap will be larger the more time has passed and the more people have been involved. If we are working with a design that is six months old, the differences in opacity will not be very large; but if the application is two, three, or four years old, and two or three waves of developers have passed through it, it will undoubtedly be more productive to work with a model than to navigate the code and bet that the change we make will not blow up somewhere else.
The study overlooks the fact that work based directly on source code will be exposed to the different styles of the groups or individuals involved, and may contain dead ends, patches, and even fraud or sabotage. On a complex application, the time it takes to work through a model will in no way equal the time it takes to work on the code itself. Much less so if the people involved have just been hired for the task, as the experiment has it.
And this is without introducing any independent variable, such as a version change in infrastructure or framework software that affects the application.
As always, I should make clear that I do not use UML in my daily work. I prefer, however, to extend the scope of these criticisms to the use of modeling tools in general. I am aware that Steven is not arguing in favor of source code, because he looks at this from the model-driven development camp, but he compares UML against DSLs. That is another matter, which deserves its own time. But I think it must be made very clear that the generic concept of model-driven development is unquestionably superior to working with direct code, and the more complex the case, the greater the advantage. What matters is finding a way to express the dynamic behavior of a model flexibly and nimbly, something UML does not yet seem to solve satisfactorily. Is there a solution? No doubt there will be. In my case, at least, one solution exists. But that, too, will be for another time.
Saturday, November 14, 2009
Oslo: removing the ambiguity from expectations
If you follow the Wikipedia link (box at the top right, "Code Name Oslo", website address), you will find that the Oslo project no longer exists as a unit; it has been folded into the Data Platform Developer Center. There is no reference there, nothing that tells us what Douglas Purdy announces on his blog, although it may still be too soon. The link, however, is already redirected. We shall see...

History
Originally, in 2007, the "Oslo" name encompassed a much broader set of technologies including "updated messaging and workflow technologies in the next version of BizTalk Server and other products" such as the .NET Framework, Microsoft Visual Studio, and Microsoft System Center (specifically the Operations Manager and Configuration Manager).[1]
By September 2008, however, Microsoft had changed its plans to redesign BizTalk Server.[2] Other pieces of the original "Oslo" group were also broken off and given identities of their own; "Oslo" ceased to be a container for future versions of other products. Instead, it was identified as a set of software development and systems management tools:[2]
- A centralized repository for application workflows, message contracts (which describe an application's supported message formats and protocols), and other application components
- A modeling language to describe workflows, contracts, and other elements stored in the repository
- A visual editor and other development tools for the modeling language
- A process server to support deployment and execution of application components from the repository.
When "Oslo" was first presented to the public at the Microsoft Professional Developers Conference in October 2008, this list has been focused even further. The process server was split off as code name "Dublin" that would work with "Oslo", leaving "Oslo" itself composed of the first three components above that are presently described (and rearranged) as follows:[3]
- A storage runtime (the code name "Oslo" repository, built on Microsoft SQL Server) that is highly optimized to provide your data schemas and instances with system-provided best SQL Server practices for scalability, availability, security, versioning, change tracking, and localization.
- A configurable visual tool (Microsoft code name "Quadrant") that enables you and your customers to interact with the data schemas and instances in exactly the way that is clearest to you and to them. That is, instead of having to look at data in terms of tables and rows, "Quadrant" allows every user to configure its views to naturally reveal the full richness of the higher-level relationships within that data.
- A language (Microsoft code name "M") with features that enable you to model (or describe) your data structures, data instances, and data environment (such as storage, security, and versioning) in an interoperable way. It also offers simple yet powerful services to create new languages or transformations that are even more specific to the critical needs of your domain. This allows .NET Framework runtimes and applications to execute more of the described intent of the developer or architect while removing much of the coding and recoding necessary to enable it.
Relationship to "Dynamic IT"
"Oslo" is also presently positioned as a set of modeling technologies for the .NET platform and part of the effort known as Dynamic IT. Bob Muglia, Senior Vice President for Microsoft's Server & Tools Business, has said this about Dynamic IT:[4]
It costs customers too much to maintain their existing systems and it's not easy enough for them to build new solutions. [We're focused] on bringing together a cohesive solution set that enables customers to both reduce their ongoing maintenance costs while at the same time simplifying the cost of new application development so they can apply that directly to their business.
…
The secret of this is end-to-end thinking, from the beginning of the development cycle all the way through to the deployment and maintenance, and all the way throughout the entire application lifecycle.

One of the pillars of this initiative is an environment that is "model-driven" wherein every critical aspect of the application lifecycle from architecture, design, and development through to deployment, maintenance, and IT infrastructure in general, is described by metadata artifacts (called "models") that are shared by all the roles at each stage in the lifecycle. This differs from the typical approach in which, as Bob Kelly, General Manager of Microsoft's Infrastructure Server Marketing group put it,[5]
[a customer's] IT department and their development environment are two different silos, and the resulting effect of that is that anytime you want to deploy an application or a service, the developer builds it, throws it over the wall to IT, they try to deploy it, it breaks a policy or breaks some configuration, they hand that feedback to the developer, and so on. A very costly [way of doing business].
By focusing on "models"—model-based infrastructure and model-based development—we believe it enables IT to capture their policies in models and also allows the developers to capture configuration (the health of that application) in a model, then you can deploy that in a test environment very easily and very quickly (especially using virtualization). Then having a toolset like System Center that can act on that model and ensure that the application or service stays within tolerance of that model. This reduces the total cost of ownership, makes it much faster to deploy new applications and new services which ultimately drive the business, and allows for a dynamic IT environment.
To be more specific, a problem today is that data that describes an application throughout its lifecycle ends up in multiple different stores. For example:
- Planning data such as requirements, service-level agreements, and so forth, generally live in documents created by products such as Microsoft Office.
- Development data such as architecture, source code, and test suites live within a system like Microsoft Visual Studio.
- ISV data such as rules, process models, etc. live within custom data stores.
- Operation data such as health, policies, service-level agreements, etc., live within a management environment like Microsoft System Center.
Between these, there is little or no data sharing between the tools and runtimes involved. One of the elements of "Oslo" is to concentrate this metadata into the central "Oslo" repository based on SQL Server, thereby making that repository really the hub of Dynamic IT.
Model-Driven Development
"Oslo," then, is that set of tools that make it easier to build more and more of any application purely out of data. That is, "Oslo" aims to have the entire application throughout its entire lifecycle completely described in data/metadata that it contained within a database. As described on "Oslo" Developer's Center:[3]
Model-driven development in the context of "Oslo" indicates a development process that revolves around building applications primarily through metadata. This means moving more of the definition of an application out of the world of code and into the world of data, where the developer's original intent is increasingly transparent to both the platform and other developers. As data, the application definition can be easily viewed and quickly edited in a variety of forms, and even queried, making all the design and implementation details that much more accessible. As discussed in this topic already, Microsoft technologies have been moving in this direction for many years; things like COM type libraries, .NET Framework metadata attributes, and XAML have all moved increasingly toward declaring one's intentions directly as data—in ways that make sense for your problem domain—and away from encoding them into a lower-level form, such as x86 or .NET intermediate language (IL) instructions. This is what the code name "Oslo" modeling technologies are all about.
The "models" in question aren't anything new: they simply define the structure of the data in a SQL server database. These are the structures with which the "Oslo" tools interact.
Characteristics of the "Oslo" Repository and Domains
From the "Oslo" Developer's Center: [3]
The "Oslo" Repository provides a robust, enterprise-ready storage location for the data models. It takes advantage of the best features of SQL Server 2008 to deliver on critical areas such as scalability, security, and performance. The "Oslo" repository's Base Domain Library (BDL) provides infrastructure and services, simplifying the task of creating and managing enterprise-scale databases. The repository provides the foundation for productively building models and model-driven applications with code name "Oslo" modeling technologies.
"Oslo" also includes additional pre-built "domains," which are pre-defined models and tools for working with particular kinds of data. At present, such domains are included for:[6]
- The Common Language Runtime (CLR), which supports extracting metadata from CLR assemblies and storing them in the "Oslo" repository in such a way that they can be explored and queried. A benefit to this domain is that it can maintain such information about the code assets of an entire enterprise, in contrast to tools such as the "Object Explorer" of Microsoft Visual Studio that only works with code assets on a single machine.
- Unified Modeling Language (UML), which targets the Object Management Group's Unified Modeling Language™ (UML™) specification version 2.1.2. UML 2.1.2 models in the Object Management Group's XML Metadata Interchange (XMI) version 2.1 file format can be imported into the code name "Oslo" repository with a loader tool included with "Oslo".

Note that while the "Oslo" repository is part of the toolset, models may be deployed into any arbitrary SQL Server database; the "Quadrant" tool is also capable of working with arbitrary SQL Server databases.
Characteristics of the "M" Modeling Language
According to the "Oslo" Developer's Center, the "M" language and its features are used to define "custom language, schema for data (data models), and data values." [3] The intention is to allow for very domain-specific expression of data and metadata values, thereby increasing efficiency and productivity. A key to "M" is that while it allows for making statements "about the structure, constraints, and relationships, but says nothing about how the data is stored or accessed, or about what specific values an instance might contain. By default, 'M' models are stored in the 'Oslo' repository, but you are free to modify the output to any storage or access format. If you are familiar with XML, the schema definition feature is like XSD." [3] The "M" language and its associated tools also simplify the creation of custom domain-specific languages (DSLs) by providing a generic infrastructure engine (parser, lexer, and compiler) that's configured with a specific "grammar". Developers have found many uses for such easy-to-define customer languages.[7]
Recognizing the widespread interest in the ongoing development of the language, Microsoft shifted that development in March 2009 to a public group of individuals and organizations called the "M" Specification Community.
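As a rough illustration of the idea described above (declare the shape of the data once, independently of storage, and generate the storage artifacts from that description), here is a minimal sketch in Python. It is not "M" syntax; the Schema and Field helpers are invented for the example. It only shows the declare-then-generate pattern the text refers to.

    # Illustration of the "describe data structures declaratively, derive storage later"
    # idea. This is NOT the M language: Schema and Field are invented for this sketch,
    # and the output is generic SQL DDL.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Field:
        name: str
        sql_type: str          # e.g. "NVARCHAR(100)", "INT"
        nullable: bool = False

    @dataclass
    class Schema:
        name: str
        fields: List[Field] = field(default_factory=list)
        key: str = "Id"

        def to_ddl(self) -> str:
            """Generate a CREATE TABLE statement from the declarative description."""
            cols = [f"  {f.name} {f.sql_type}{'' if f.nullable else ' NOT NULL'}"
                    for f in self.fields]
            cols.append(f"  PRIMARY KEY ({self.key})")
            return f"CREATE TABLE {self.name} (\n" + ",\n".join(cols) + "\n);"

    # The "model": structure and constraints only, nothing about storage details.
    person = Schema("Person", [
        Field("Id", "INT"),
        Field("Name", "NVARCHAR(100)"),
        Field("Age", "INT", nullable=True),
    ])

    print(person.to_ddl())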
Characteristics of the "Quadrant" Model Editor
"Oslo's" model editor, known as "Quadrant," is intended to be a new kind of graphical tool for editing and exploring data in any SQL Server database. As described on the "Oslo" Developer's Center: [3]
A user can open multiple windows (called workpads) in "Quadrant". Each workpad can contain a connection to a different database, or a different view of the same database. Each workpad also includes a query box in which users can modify the data to a single record or a set of records that match the query criteria.
"Quadrant" features a different way of visualizing data: along with simple list views and table views, data can be displayed in a tree view, in a properties view, and in variable combinations of these four basic views. An essential part of this is the ability to dynamically switch, at any time, between the simplest and the most complex views of the data. As you explore data with these views, insights and connections between data sets previously unknown may become apparent. And that has benefits for those using the Microsoft "Oslo" modeling technologies to create new models. As part of the "Oslo" modeling technologies toolset, "Quadrant" enables "Oslo" developers to view new models with "Quadrant" viewers. The "Quadrant" data viewing experience enables designers of DSLs to quickly visualize the objects that language users will work with. In this way, "Quadrant" will give developers a quick vision of their models. With this feedback, "Quadrant" can also provide a reality check for the model designer, which may in turn lead to better data structures and models.In the future, Microsoft intends for "Quadrant" to support greater degrees of domain-specific customization, allowing developers to exactly tailor the interaction with data for specific users and roles within an enterprise.
Even today, November 14, the link http://msdn.microsoft.com/en-us/library/cc709420.aspx points to the Oslo section of the MSDN Library, within its .NET reference. All of that content will undoubtedly have to be reengineered. Going through that content, still not excessively transformed, gives an idea of the magnitude of what has been given up, if we then return to Purdy's terse redefinition:
The components of the SQL Server Modeling CTP are:
- “M” is a highly productive, developer friendly, textual language for defining schemas, queries, values, functions and DSLs for SQL Server databases
- “Quadrant” is a customizable tool for interacting with large datasets stored in SQL Server databases
- “Repository” is a SQL Server role for the secure sharing of models between applications and systems
We will announce the official names for these components as we land them, but the key thing is that all of these components are now part of SQL Server and will ship with a future release of that product.
Not only the MSDN pages about Oslo, or Douglas's, will have to be "refactored". Since June 2008, Steve Cook had taken on the task of integrating UML into Visual Studio while also collaborating with the Oslo project on UML integration. What role did Oslo play in that project? What will Steve's role be now? Going through the news published over the last year and a half, the impression that remains is that two parallel lines of research coexisted, and that at least one of them has now been shunted onto a dead-end track. Stuart Kent's comment from November 2008 now makes sense:

The Oslo modeling platform was announced at Microsoft's PDC and we've been asked by a number of customers what the relationship is between DSL Tools and Oslo. So I thought it would be worth clearing the air on this. Keith Short from the Oslo team has just posted on this very same question. I haven’t much to add really, except to clarify a couple of things about DSL Tools and VSTS Team Architect.
As Keith pointed out, some commentators have suggested that DSL Tools is dead. This couldn’t be further from the truth. Keith himself points out that "both products have a lifecycle in front of them". In DSL Tools in Visual Studio 2010 I summarize the new features that we're shipping for DSL Tools in VS 2010, and we'll be providing more details in future posts. In short, the platform has expanded to support forms-based designers and interaction between models and designers. There's also the new suite of designers from Team Architect including a set of UML designers and technology specific DSLs coming in VS 2010. These have been built using DSL Tools. Cameron has blogged about this, and there are now some great videos describing the features, including some new technology for visualizing existing code and artifacts. See this entry from Steve for details.
The new features in DSL Tools support integration with the designers from Team Architect, for example with DSLs of your own, using the new modelbus, and we're exploring other ways in which you can enhance and customize those designers without having to take the step of creating your own DSL. Our T4 text templating technology will also work with these designers for code generation and will allow access to models across the modelbus. You may also be interested in my post Long Time No Blog, UML and DSLs which talks more about the relationship between DSLs and UML.

But going back to those who committed their opinions in favor of Oslo, how do they feel now? I am thinking of opinions like those expressed in David Chappell's article "Creating Modern Applications: Workflows, Services, and Models". As early as October 2008, David previewed the project's features, selling what was still only an outline. Even later we saw how some of the elements he had previewed were set aside, and, while still an immature project, it was again presented as a reality. And so on, until the rude awakening of November 10.
Among the conclusions that can be drawn from this project, now apparently in the process of being buried, two are of particular interest:
- It is not a good business model to sell what are still sketches as if they were realities. The customer (end-user companies, the developer community, independent consultants, researchers) ends up hurt, for different reasons: some for postponing decisions while waiting for a star product, others for staking their word on something that was later discarded, and others for wasting time waiting for a tool that was never taken seriously.
- It is problematic to entrust the development of advanced research to the market plans of a single company.
Tuesday, November 10, 2009
Oslo: the mountain gives birth to a mouse?
There are a hundred observations to be made here. They will have to wait for next time; for today, the news itself is enough. Douglas's full post, with my highlights in green:

As I stated in my previous post, we have been on a journey with “Oslo”. At the 2007 SOA/BP conference we announced that “Oslo” was a multiyear, multiproduct effort to simplify the application development lifecycle by enhancing .NET, Visual Studio, Biztalk and SQL Server. At PDC 2008, we announced that various pieces of “Oslo” were being spun off and shipped in the application server (“Dublin”), the cloud (.NET Services), and the .NET Framework (WF/WCF 4.0). We rechristened the “Oslo” name for the modeling platform pieces of the overall vision.
In the year since PDC 2008, we delivered three public CTPs and conducted many software design reviews (SDRs) with key customers, partners and analysts. We listened intently to the feedback and it helped us to shape our approach toward bringing this technology to market. With PDC now one week away, we are beginning to disclose the next chapter in the journey to “Oslo”, with more to be unveiled at various keynotes and sessions at the PDC event itself.
One of the key things we observed over the last year was the real, tangible customer value in applying “Oslo” to working with SQL Server. Time after time we heard that “M” would make interacting with the database easier, provided we offered a good end to end experience with tools (VS) and frameworks (Entity Framework and Data Services) that developers use today. We heard that developers wanted to use the novel data navigation/editing approach offered by “Quadrant” to access their data in whatever SQL Server they wanted, not just the “Repository”. We heard that the notion of a “Repository” as something other than SQL Server was getting in the way of our conversations with customers.
Another thing we learned was that most of the customers that we wanted to leverage the modeling platform were already using SQL Server as their “repository”. Take an application like SharePoint. It is already model-driven. It already stores its application definition in a database. Dynamics is the same way. Windows Azure is the same way. System Center is the same way. What we didn’t have was a common language, tools or models that spanned all of these applications, although they were all leveraging the same database runtime. The simplest path to get all of these customers sharing a common modeling platform seemed obvious.
Lastly, we learned that the folks on the SQL Server team were hearing the need for additional mechanisms to make the database more approachable to developers. Developers did not want to use three different languages to build their database applications (T-SQL, a .NET language and an XML mapping file). Developers wanted new tools that let them deal with the truly massive amount of data they need to handle on a daily basis. Developers wanted to radically simplify their interactions with the database, with a straightforward way of writing down data and getting an application as quickly as possible.
With all of the above in mind, we just announced (at VS Connections) the transition from “Oslo” to SQL Server Modeling. At PDC, we will release a new CTP using this name, SQL Server Modeling CTP, that will begin to demonstrate how developers will use these technologies in concert with things like T-SQL, ADO.NET, ASP.NET and other parts of the .NET Framework to build database applications.
The components of the SQL Server Modeling CTP are:
- “M” is a highly productive, developer friendly, textual language for defining schemas, queries, values, functions and DSLs for SQL Server databases
- “Quadrant” is a customizable tool for interacting with large datasets stored in SQL Server databases
- “Repository” is a SQL Server role for the secure sharing of models between applications and systems
We will announce the official names for these components as we land them, but the key thing is that all of these components are now part of SQL Server and will ship with a future release of that product.
At PDC, we will unify the “Oslo” Developer Center and the Data Developer Center. You will be able to find the new SQL Server Modeling CTP at our new home (http://msdn.microsoft.com/data) the first day of PDC. I encourage you to download this CTP and send us your feedback.
If you are attending PDC, we have some great sessions and keynotes that will highlight the work we are doing with SQL Server Modeling. My personal favorite is “Active Directory on SQL Server Modeling” (the actual title is The ‘M’-Based System.Identity Model for Accessing Directory Services), which is going to show how a serious “ISV” is using these technologies.
Speaking for myself and the team, we are very excited about this transition. Many of us have worked on numerous “v1” products while at Microsoft. This sort of transition is exactly what successful “v1” products/technologies undergo, based on our collective experience. You have a vision based on customer need. You write some code. You get customer feedback. You adjust. You repeat. You find the place that maximizes your investment for customers. You focus like a laser on delivering that customer value. You ship.
Looking forward to the next chapter…
Monday, November 09, 2009
Plex: Ramon Chen recalls the importance of the Lava Lounge
In April of this year, Ramon Chen published a good analysis of the creation of the Synon user community, called the Lava Lounge, built around its product Obsydian, which today is Plex. He highlights its importance in building a strong community in which the product's owner (then Synon) and its users collaborate. A model which, in a somewhat more impersonal form, has been maintained over time, thanks in part to the energy put in by Bill Hunt, the product's current owner at CA. Ramon says:
1. Some History and Background (skip to section 2 if you only care about the marketing part of it )
Back in 1995, way before social networking, there were very limited ways to get your message out to a wide audience. Even full Internet access was somewhat restricted at many companies and corporate e-mails were just getting going outside of the “firewall”. I had just taken over product marketing at Synon, adding to my product management duties and had made up a list of high priority items for my next 90 days. One idea that had always been brewing in the back of my mind was the notion of an online Synon community. Synon at this time was already a highly successful company with about 3000 companies using the app dev tool worldwide. We regularly held annual user conferences which averaged 600+ attendees and there were local regional chapters of user groups which met every few months. What was missing was a way of consistently distributing up to date information about our products and also to tap into the passion and evangelism of our customers.

I first discovered the WWW back in 1993 when I was an architect for our Unix product called Synon/Open. I soon got myself online at home via a service called Netcom which I later converted to AOL. Synon (and many other companies) were using CompuServe back then, and I also had an account for those purposes. AOL was rapidly gaining the market share of popularity, although it restricted usage within its “Walled Garden” and browsing of the WWW was done outside of the service via a TCP/IP tunnel. Nevertheless, I reasoned that many now had access to the Synon corporate website (either from work or from home), so I determined that it was the right time to launch a community for Synon online.
I approached my boss at the time Bill Yeack and made a proposal for the LavaLounge (so called because the new product that we had just launched was called Obsydian – a black lava rock). He gave his blessing, but not a budget. His challenge “Show me that there is interest and the dollars will follow”. Given this, I worked with Bill Harada, our excellent internal graphics and web designer, and asked him to create an area off the Synon corporate website with a set of template pages. I then got hold of a copy of Frontpage and built my first website, the LavaLounge was born!
2. If you build it will they come?
I had to find a way to encourage Synon users to “join and register” for the LavaLounge so that we could control access via login and pwd to restricted content. Being an ex-developer I reasoned that exclusivity was always a major selling point but the “cool factor” was being the first to get on board. In addition to the usual “pre-qualification, limited number of spots available” messaging, I used the concept of a Club (similar to the frequent flyer clubs and loyalty programs of consumer product companies). Dubbed “Club Lava”, I further used the only currency I had (with no budget), the sequential allocation of Club Lava IDs starting at 00001 allocated on a first submitted, first allocated basis. I also published an electronic membership card, which people could print out and put in their wallet to recognize their “status”.

In November 1995 (there would later be a re-vamp of the site in Sept 2006), I launched the LavaLounge with my colleague Wasim Ahmad and began accepting Club Lava memberships (click for the registration form).
Initially registrations were slow, due to the limited ways we could get the word out, but registrations really began picking up and much to my delight, I could see on the CompuServe chat boards that people were comparing their Club Lava ID numbers to see who had the lowest ones! Word of mouth began to spread and within a week we had over 100 registrants.

3. Ok, they’re here, now what?
The next phase was to “make good” on the promises of Club Lava which included all of the benefits we advertised:
- Access to Club Lava, a password protected area of the Lava Lounge where you will be able to chat and exchange messages with fellow Obsydian developers from around the World.
- Access to tips and techniques from Dr O (who will be moving into and operating exclusively in Club Lava).
- A unique membership number assigned in the order applications are submitted and approved, identifying you as a charter Club Lava member.
- Your name and e-mail registered in the online Club Lava Directory (you will be automatically registered unless you specify otherwise) recognizing you as a leading edge Obsydian developer.
- Opportunities to be interviewed for “Hot Rocks” (which will contain profiles of the hottest Obsydian projects on the planet)
- Invitations to special Club Lava events at Synon International User Conferences.
- The chance to win special Lava merchandise from the Lava Object Store (under construction & awaiting permits).
- Regular “LavaLights” e-mails. Club Lava members news and views e-mail from Synon, keeping you informed of the latest developments in the World of Obsydian.
- An official Club Lava Membership card, personalized with your name & company name
We made good on all of those benefits, through lots of hard nights. But the most important was our relentless postings on the LavaLounge and Club Lava. Just as it is today, CONTENT, CONTENT, CONTENT is key. Fresh, new, interesting, relevant and consistent (just like blogging )
Also by this time, I had gone back to my boss and showed him the list of registrants and gotten some $$$ for future activities which I put to good use, producing the first round of Club Lava black t-shirts which I would distribute at Club Lava events at the Synon User Conferences and on my travels around the world at regional user groups. Each attendee would also have a special badge indicating their ID number and also be invited to present on their tips/best practices; they would also be later recognized online for their contributions to forum questions and their evangelism through reward points.
4. What ultimately was the point of Club Lava and the LavaLounge?
The formation of the Club achieved several objectives:
- It brought together the Synon customers and partners into an online forum so that they could exchange ideas, help each other and build long lasting relationships … some of which are still evident today even through 2 M&A of the Synon product line which is now owned by Computer Associates
- It allowed us to distribute customer only information through a secured medium using the web and supported opportunities for us to inform and upsell new offerings through roadmap updates
- We captured use cases and statistics from our customer base on a large scale which I would later use for product management interviews and further focus groups and requirements analysis
- We asked them “what do you most like about Obsydian?” on the registration form, and the answers were enlightening. We were later able to use those quotes with approval in outbound marketing materials
- We unleashed the pent up evangelism and expertise within our knowledgeable customer base to increase the implementation successes of our products as well as strengthening our external marketing perception of a happy customer base (which no doubt contributed towards our eventual acquisition by Sterling Software)
- In terms of stats and metrics: Over 18 months of the Club and Lounge’s existence:
Approx 1000+ members by acquisition, 5 major and point releases previewed, 15 product focus groups initiated with 150 responses worldwide, over 20 major deals leveraging references from the technical community and lots and lots of happy evangelising customers who are still dedicated to the product today.

Much of this probably seems obvious to many experienced marketers, but nearly 15 years ago, it was a little bit innovative.
Sunday, November 08, 2009
Frank Soltis and the iManifest
The Four Hundred talks to Frank Soltis about the initiative by iSeries (AKA AS400) business partners to take the promotion of the AS400 into their own hands. Soltis says of IBM's commercial policy:
"It has been clear to me that it's up to user groups and business partners to continue to promote the product," Soltis says. "That was something that IBM made a decision on sometime back in the 1990s. Lou Gerstner came in (as IBM chairman and CEO) and one of his first decisions was that IBM would promote IBM rather than promote individual products. He took the individual budgets that general managers had for advertising and consolidated them into one budget that focused on IBM. That has really never changed since."Es esta actitud de poner en segundo lugar la promoción de sus productos la que el iManifest intenta modificar, primero en Japón, y ahora en Estados Unidos.
"IBM does not have to market Windows," Soltis points out. "The world knows what it is and Microsoft does their job promoting it. The same thing with Unix. You don't see vendors marketing Unix. They market it from the standpoint that ours is better than anybody else's, but they don't have to promote the concept of Unix. With IBM i and z, both systems are well-known within their user bases, but not very well known outside of that. You have to really promote those. In that sense, i has suffered a bit because the rest of the industry does not promote IBM i. From IBM's standpoint, I don't think they see much difference among the platforms in terms of which ones require more marketing."Soltis estima que una actividad organizada y extensa de apoyo del iManifest en Estados Unidos impulsará a IBM a promover y participar en esta defensa:
"One of the things I admire about iManifest Japan is that it is very organized," Soltis says. "The group is made up of many people who have been together for many years. It is similar to the U.S. in that sense. They tend to work very closely. There is a lot of cooperation. That seems to be paying off. This is cooperation not just with the business partner community but also with IBM."
Get the numbers, get the cooperation, and get the organization within iManifest U.S. and IBM will get onboard. Soltis is sure of that.
"IBM will get involved in the iManifest in the United States, if iManifest puts together a good enough coalition. It has shown that it will do this by participating in iManifest Japan," he says.
IBM has co-sponsored at least two events with iManifest Japan that have promoted both the IBM i and the business partners products. Both have been described as successful by companies affiliated with iManifest Japan.
Although he holds no formal position with iManifest Japan, Soltis feels close to the developments going on there. He has a dialogue with key people in that organization and is discussing what has worked in that situation. The open communication should make things easier for iManifest U.S., but that's not to say it will be easy.
"I am looking at taking the iManifest message to the business partners and user groups, and that fits within the role that I am currently involved in," Soltis says. "I plan to continue this level of involvement for at least several more years. This is a way that I can contribute to the System i community. I think eventually you will see joint activities among all iManifest regions--Japan, EMEA, and the U.S. To me it would make a lot of sense to do this on a worldwide basis. Some of the big business partners that are worldwide in scope would probably see advantages in working across all geographies."Notable situación en la que una empresa descuida algunos de sus mejores productos en pos de un nuevo modelo de negocios, que deben ser defendidos por sus socios comerciales. Luego hablaremos de ese modelo... que quizá ya no sea tan beneficioso si no tiene el respaldo de aquellos productos que le dieran prestigio y reputación
On the iManifest: its original text, from the Japanese initiative.
Photograph of Soltis taken from IBM Systems Magazine, from an article about his part in the conception of the AS400.
Monday, November 02, 2009
As400: the market's rebellion
We will return to this topic. The unheard-of situation in which IBM's business partners must take charge of promoting one of its own products deserves it...

i + POWER: The Spanish iManifest
We had already noted that the Manifesto for the IBM i was an initiative we identified with and an example to follow. And as we commented in the previous entry, it would be quite a success to start 2010 with the appearance of the EMEA and USA iManifests in two of the most influential financial newspapers in the world. However, the executives of small and medium-sized companies in the 20 Spanish-speaking countries are not usually regular readers of the Financial Times or the Wall Street Journal, and, given their situation and geographic dispersion, it is hard to believe that a similar initiative could influence a market where, let us not forget, IBM maintains annual growth of more than 10%.
Of course, when it comes to i + POWER (the current AS/400 environment), it should be IBM and its BPs who try to win market share or, at the very least, hold on to what they already have, but that has not been the case for years. On IBM Spain's website we can read that it is the "ideal platform for the efficient implementation of business processing applications, with support for more than 5,000 solutions from more than 2,500 independent software vendors (ISVs)".
Yes, and where are they? Who are these vendors? Perhaps that lack of communication is the source of the discouragement in the AS/400 world and of its slow erosion. Faced with a situation that affects us all, "The Spanish iManifest" (...) is our proposal for starting 2010 with optimism.
The old adage that "good cloth sells itself even from the chest" has lost all validity. Quite the opposite: in light of thousands upon thousands of experiences, one could state flatly that good cloth left in the chest does "not" sell if there is no marketing activity of one kind or another to make it known and make it desired. So if you have commercial interests in the AS/400 market and want to take part in this initiative, send us a message at n.navarro@help400.com with "Spanish iManifest" in the subject line.
Sunday, November 01, 2009
Windows in a new era, II
Last month Microsoft rolled out Windows 7 and opened the first of a chain of new retail stores. As usual with such announcements, there's been loads of hoopla and ginned-up excitement. But mostly people are just relieved. Windows 7 replaces Vista, one of the most disastrous tech products ever. It also caps the end of a decade in which Microsoft's founder, Bill Gates, stepped aside, and the company lost its edge.
Ten years ago, when Gates appointed his longtime second in command, Steve Ballmer, as his replacement as CEO, Microsoft was still the meanest, mightiest tech company in the world, a juggernaut that bullied friends and foes alike and which possessed an operating-system franchise that was practically a license to print money.
[...] That was then. Now, instead of being scary, Microsoft has become a bit of a joke. Yes, its Windows operating system still runs on more than 90 percent of PCs, and the Office application suite rules the desktop. But those are old markets. In new areas, Microsoft has stumbled. Apple created the iPod, and the iTunes store, and the iPhone. Google dominates Internet search, operates arguably the best e-mail system (Gmail) and represents a growing threat in mobile devices with Android. Amazon has grown to dominate online retail, then launched a thriving cloud-computing business (it rents out computer power and data storage), and capped it off with the Kindle e-reader. Microsoft's answers to these market leaders include the Zune music player, a dud; the Bing search engine, which is cool but won't kill Google; Windows Mobile, a smart-phone software platform that has been surpassed by others; and Azure, Microsoft's cloud-computing service, which arrives next year—four years behind Amazon.
How did this happen? How did Microsoft let tens of billions in revenue (and hundreds of billions in market capitalization) slip through its fingers? Hassles with antitrust regulators distracted Microsoft's management and made the company more timid. But the bigger reason seems to be that in January 2000, Gates stepped down as CEO. It's been downhill ever since.

Ballmer is by all accounts an incredibly bright and intensely competitive guy. But he's no Bill Gates. Gates was a software geek. He understood technology. Ballmer is a business guy. To Ballmer's credit, in his decade at the helm Microsoft's revenues have nearly tripled, from $23 billion to $58 billion. The company has built a huge new business selling "enterprise" software—programs that run corporate data centers. Microsoft has also done well in videogames with its Xbox player.
But the problem with putting nontechies in charge of tech companies is that they have blind spots. Gates was quick to recognize that the Internet represented a threat to Microsoft, and he led the campaign to destroy Netscape. In those days Microsoft was still nimble enough that it could pivot quickly and catch up on a rival. Since then the company has become bureaucratic and lumbering.

Worse yet, as Microsoft slowed down, the rest of the world sped up. The new generation of Internet companies needed little capital to get started and could scale up quickly. Google got so big so fast that by the time Microsoft recognized the threat, it could not catch up. With Apple, the threat was not the iPod player itself but the Internet-based iTunes store; by the time Microsoft could create a credible clone of the Apple store, Apple had the market locked down.
Meanwhile, Microsoft's core business hit a snag with Vista. Its engineers have spent three years undoing their mess; Windows 7 doesn't leap past what Apple offers, but it's still really terrific. But while Microsoft has been distracted fixing its broken Windows, yet another new crop of Internet saplings has gained root: Facebook and Twitter in social media, Hulu and YouTube (owned by Google) in online video.
And so it goes. This is perhaps why, in the 10 years of Ballmer's reign, Microsoft's stock has dropped by nearly 50 percent, from $55 to $29. (Apple shares have climbed 700 percent; Google has gone up 400 percent since its IPO in 2004.) A spokesman for Microsoft points out that the company pays a quarterly dividend and in 2004 paid out a special dividend worth $32 billion. Still, it's been a pretty dismal 10 years. Unless the company can do more than focus on the past, the next decade might not be any better.
Windows in a new era
Windows 7: Winning Battle, Losing War?
“But here's a funny thing. By the end of the week, I looked at what I was doing on the tiny screen - and found that just about everything involved software not made by Microsoft. So I'd installed the Firefox browser in preference to Internet Explorer, and started writing documents using Google Docs rather than Microsoft Word, and checking my e-mail via Gmail. As for music, I'd installed iTunes, and to feed my social networking needs, I placed Tweetdeck on the taskbar.

I had ended up furnishing my new Windows 7 home with some familiar items from elsewhere - so perhaps the operating system matters less than it once did,” says Rory Cellan-Jones after reviewing the new software for the BBC.
In general, the feedback about 7 is that it's more stable and faster than Vista.
But in the 5+ years Microsoft lost with Vista, the world has clearly moved on to SaaS, to social networks, to mobile... as Rory and most other consumers will testify.