Tuesday, September 30, 2008

Plex in Eclipse

Christopher Smith, in his spare time, is developing a plugin on SourceForge to support development with Plex, for anyone interested in the Eclipse/Plex combination.
From his announcement on the Plex forum:

The complete Eclipse project for Plex Services for Eclipse is now hosted at SourceForge.

The project is located at http://sourceforge.net/projects/plexservices/

The subversion repository is located at https://plexservices.svn.sourceforge.net/svnroot/plexservices

This project is licenced with the Apache 2.0 open source licence so please use it and contribute.

If you want to just install the plugin, an Eclipse update site is available at http://adcaustin.com/eclipse/

There is a half completed setup page on the Plex WIKI.

Please remember, I'm doing this in my spare time. If it needs a feature or a fix, I will try and help as best as my schedule allows.

If you have a question on how you can extend it and want to share, I will be MORE than happy to answer.

The plugin currently requires at least Eclipse 3.3 and I'm targeting my future development for 3.4.

As Christopher says, more information can be found on the Plex Wiki:

How does Eclipse relate to Plex?

Plex can generate Java source code for very functional and usable applications. Building, Debugging, Deploying and Managing the generated source is not an easy or straightforward task. Eclipse can handle these tasks for you.

Plex 5.X ships with Microsoft's NMAKE as its Java Builder (6.0 uses ANT). Building a large application can take a very long time. Eclipse can build your application in a fraction of the time. Diagnosing build errors with the compile listings from NMAKE is clumsy, if not impossible. Finding and discovering how to correct build errors in Eclipse is quite simple.

Debugging Java without an IDE can be very difficult. Debugging is an inherent part of the Eclipse Platform.

Deploying a Plex Java application is a cumbersome manual process. Eclipse, along with the integrated ANT support, can automate the whole process. You can even create J2EE projects in Eclipse to automate the packaging and deployment of Plex EJB Proxy applications to J2EE servers.

Managing a single developer environment of Plex generated source code and other required artifacts is difficult to impossible, never mind trying to do so in a multi-developer environment. Eclipse provides a "workspace" for each developer and source repository support for source control products like CVS and Subversion.

Sunday, September 28, 2008

Is Windows 7 headed down Vista's path?

Computerworld runs a short article by Eric Lai which, based on statements by Microsoft's Steven Sinofsky, concludes that some 2,000 people are working on the new Windows.

"We create feature teams with n developers, n testers, and 1/2n program managers," Sinofsky wrote in a four-page blog that introduced his views on managing large-scale software development. "On average a feature team is about 40 developers across the Windows 7 project."

Based on that arrangement, each feature team would appear to have about 40 developers writing code, an equal number of beta testers -- which Sinofsky separately described as "software development engineers in test" -- and about 20 program managers.

In other words, that would be 2,000 developers creating or testing Windows 7 code, overseen by 500 managers.

Microsoft's public relations firms declined to confirm or clarify those figures.
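
Spelling out the arithmetic behind those figures (my own back-of-the-envelope check, using only the numbers quoted above): with n = 40, each feature team is roughly 40 developers + 40 testers + 20 program managers. The quoted totals then imply about 25 feature teams, since 25 × (40 + 40) = 2,000 developers and testers, and 25 × 20 = 500 program managers.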

It is likely that, in these terms, Windows 7 will face vicissitudes much like Vista's...

Friday, September 26, 2008

(Thinking in Objects)

Henry Story, on his blog The Sun BabelFish, poses a problem that tests the limits of object-oriented programming, and sets it against the usual tools of the Semantic Web. With an example based on the world of autism, Story presents a world of personal views, not of real objects. The example was probably not a very fortunate one, particularly in how to implement it in OOP and in its RDF counterpart. Henry was much criticized, generally with good reason, both for his expression of the problem in OOP and for the only partial correlation between the two examples. Even so, the presentation of the problem is a stimulating way to measure OOP and its limits:

In order to be able to have a mental theory one needs to be able to understand that other people may have a different view of the world. On a narrow three dimensional understanding of 'view', this reveals itself in that people at different locations in a room will see different things. One person may be able to see a cat behind a tree that will be hidden to another. In some sense though these two views can easily be merged into a coherent description. They are not contradictory. But we can do the same in higher dimensions. We can think of people as believing themselves to be in one of a number of possible worlds. Sally believes she is in a world where the ball is in the basket, whereas Ann believes she is in a world where the ball is in the box. Here the worlds are contradictory. They cannot both be true of the actual world.

To be able to make this type of statement one has to be able to do at least the following things:

  • Speak of ways the world could be
  • Refer to objects across these worlds
  • Compare these worlds
Henry proposes an example (imperfect, and criticized by several of his readers):

Let us illustrate this with a simple example. Let us see how one could naively program the puppet play in Java. Let us first create the objects we will need:

Person sally = new Person("Sally");
Person ann = new Person("Ann");
Container basket = new Container("Basket");
Container box = new Container("Box");
Ball ball = new Ball("b1");
Container room = new Container("Room");
So far so good. We have all the objects. We can easily imagine code like the following to add the ball into the basket, and the basket into the room.
basket.add(ball);
room.add(basket);
Perhaps we have methods whereby the objects can ask what their container is. This would be useful for writing code to make sure that a thing could not be in two different places at once - in the basket and in the box, unless the basket was in the box.
Container c = ball.getImmediateContainer();
Assert.true(c == basket);
try {
    box.add(ball);   // the ball is already in the basket...
    Assert.fail();   // ...so this add must throw, and we never reach here
} catch (InTwoPlacesException e) {
}
All that is going to be tedious coding, full of complicated issues of their own, but it's the usual stuff. Now what about the beliefs of Sally and Ann? How do we specify those? Perhaps we can think of sally and ann as being small databases of objects they are conscious of. Then one could just add them like this:
sally.consciousOf(basket,box,ball);
ann.consciousOf(basket,box,ball);
But the problem should be obvious now. If we move the ball from the basket to the box, the state of the objects in sally and ann's database will be exactly the same! After all they are the same objects!
basket.remove(ball);
box.add(ball);
Ball sb = sally.get(Ball.class,"b1");
Assert.true(box.contains(sb));
//that is because
Ball ab = ann.get(Ball.class,"b1");
Assert.true(ab==sb);
There is really no way to change the state of the ball for one person and not for the other,... unless perhaps we give both people different objects. This means that for each person we would have to make a copy of all the objects that they could think of. But then we would have a completely different problem: namely deciding when these two objects were the same. For it is usually understood that the equality of two objects depends on their state. So one usually would not think that a physical object could be the same if it was in two different physical places. Certainly if we had a ball b1 in a box, and another ball b2 in a basket, then what on earth would allow us to say we were speaking of the same ball? Perhaps their name, if we could guarantee that we had unique names for things. But we would still have some pretty odd things going on then: we would have objects that would somehow be equal, but would be in completely different states! And this is just the beginning of our problems. Just think of the dangers involved here in taking an object from ann's belief database, and how easy it would be to allow it, by mistake, to be added to sally's belief store.
Henry advocates RDF (Resource Description Framework) as a tool capable of adequately expressing the different views of a problem, in a universe of views rather than objects. But that undoubtedly deserves a space of its own. A minimal sketch of the idea follows.
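
To make the alternative concrete, here is a minimal sketch in plain Java of the direction Henry points toward (my own illustration, not code from his post): state moves out of shared mutable objects and into per-person graphs of statements about shared names, much as RDF keeps triples in separate graphs. The Triple and BeliefStore classes are hypothetical names invented for this sketch.

import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

// A statement about the world: subject/predicate/object, all identified by
// name (in RDF these would be URIs). Identity is the name; state lives in
// graphs of statements, not inside the objects themselves.
final class Triple {
    final String subject, predicate, object;
    Triple(String s, String p, String o) { subject = s; predicate = p; object = o; }
    @Override public boolean equals(Object other) {
        if (!(other instanceof Triple)) return false;
        Triple t = (Triple) other;
        return subject.equals(t.subject) && predicate.equals(t.predicate)
                && object.equals(t.object);
    }
    @Override public int hashCode() { return Objects.hash(subject, predicate, object); }
}

// Each person owns a private graph of statements: her own view of the world.
final class BeliefStore {
    private final Set<Triple> beliefs = new HashSet<>();
    void believe(String s, String p, String o) { beliefs.add(new Triple(s, p, o)); }
    void retract(String s, String p, String o) { beliefs.remove(new Triple(s, p, o)); }
    boolean believes(String s, String p, String o) { return beliefs.contains(new Triple(s, p, o)); }
}

public class SallyAndAnn {
    public static void main(String[] args) {     // run with java -ea to enable asserts
        BeliefStore world = new BeliefStore();   // the actual state of the room
        BeliefStore sally = new BeliefStore();   // Sally's view
        BeliefStore ann = new BeliefStore();     // Ann's view

        // Everyone watches the ball being placed in the basket.
        world.believe("ball", "in", "basket");
        sally.believe("ball", "in", "basket");
        ann.believe("ball", "in", "basket");

        // The ball is moved while Sally is out: only the world and Ann update.
        world.retract("ball", "in", "basket");
        world.believe("ball", "in", "box");
        ann.retract("ball", "in", "basket");
        ann.believe("ball", "in", "box");

        // The same name "ball" now carries contradictory states across views,
        // which a single shared Ball object could not express.
        assert sally.believes("ball", "in", "basket");
        assert ann.believes("ball", "in", "box");
        assert world.believes("ball", "in", "box");
    }
}

Note how the "same ball" worry from the quoted passage dissolves under this arrangement: Sally and Ann agree they are talking about the ball because they share its name, while disagreeing about its state. Identity by name, with state kept in graphs, is exactly the separation that RDF provides.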
Of the comments that follow the article, the one from Ryan, and Henry's reply, open the discussion up a bit further:
Ryan:
I really like your analysis of OOP and its relation to autism. I have never thought of it in such a way, but it does make a lot of sense (when I am able to remove my preconceptions). However, why is autism bad in this scenario (if you are even implying it)? I can understand that if our perspective is not an omniscient one then this can fail us. Would you please provide an applied example of the problem at hand in relation to your point on the Semantic Web? Thanks a lot!
Henry:

Thanks for asking Ryan. Yes there are a lot of examples. The following article "Extending the Representational State Transfer (REST) Architectural Style for Decentralized Systems" which you can find here
http://portal.acm.org/citation.cfm?id=999447
makes the point about how the distance between the source of a message and the recipient of a message makes perfect immediate communication impossible, if you think of it as resources having the same access to an object. But if you think of it as message passing then you can do some interesting things... Ok I read that quickly, but that is what made me decide to write this article out today.

In the AddressBook I am writing, which I describe in an audio slide cast here:
http://blogs.sun.com/bblfish/entry/building_secure_and_distributed_social
I need to get data from distributed places around the web. This can only be done seriously if you accept that there will be spammers, liars, and just simply wrong data out there. So though you may by default merge data, you may want to make it easy to unmerge it too. I wrote about that in more detail here:
http://blogs.sun.com/bblfish/entry/beatnik_change_your_mind

As I said, if you are writing tools, that you can think of as physical, mechanical objects, that don't have to have points of view on the universe, say if you are writing a web browser, a calculator, or some such thing, then this is not important. But as soon as you want to mesh the information on the web, you will need to take the opinion of others into account. We are fast moving to a world where this is going to become more and more important.

In any case it is good to know the limitations of your tools. :-)

Along the same lines, Benjamin Damman notes:

Hmmmm. Parts of your intriguing article made me think of erlang.

"When one fetches information from a remote server one just has to take into account the fact that the server's view of the world may be different and incompatible in some respects with one's own. One cannot in an open world just assume that every body agrees with everything. One is forced to develop languages that enable a theory of mind. A lot of failures in distributed programming can probably be traced down to working with tools that don't."

Erlang was created for coding highly fault-tolerant (and distributed) systems; characteristics stemming from this fact might make it an example of a language that 'enables a theory of mind.'

http://erlang.org/white_paper.html

An article rich in suggestive ideas, beyond the points its critics have raised.

Tuesday, September 16, 2008

The Oslo project

With previews still barely sketched out, news about the Oslo project keeps increasing. If the project does deliver Microsoft's modeling environment, and if it manages to bring together the various earlier efforts, the whole could take a more consistent direction. Over the course of 2008 we will get a clearer idea.
Announcing his presentation alongside David Chappell, Ron Jacobs says:
Microsoft's "Oslo" project aims at creating a unified platform for model-based, service-oriented applications. This new approach will affect the next versions of several products and technologies, including the Microsoft .NET Framework, Microsoft Visual Studio, Microsoft BizTalk Server, Microsoft System Center, and more. Although many details of "Oslo" won't be public until later in 2008, this session provides an overview of what Microsoft has revealed so far. Along with a description of the problems it addresses, the session includes a look at several new "Oslo" technologies, including a general-purpose modeling language, role-specific modeling tools, a shared model repository, and a distributed service bus.

One of Oslo's new elements is the push behind the D language. Darryl Taft reports:
“The language was designed with an RDBMS [relational DBMS] as very, very, very much top-of-mind, so that we have a very clean mapping,” Lovering said. “But the language is not hard-wired to an RDBMS or relational model. And the language is actually built against an abstract data model. We represent the program itself also in that same abstract data model, which is a very LISP-ish idea—you know, where the whole program itself is the same data structure on which it operates.”
On its site dedicated to SOA, Microsoft summarizes Oslo's characteristics as follows:

”Oslo” is the codename for Microsoft’s forthcoming modeling platform. Modeling is used across a wide range of domains and allows more people to participate in application design and allows developers to write applications at a much higher level of abstraction. “Oslo” consists of:

  • A tool that helps people define and interact with models in a rich and visual manner
  • A language that helps people create and use textual domain-specific languages and data models
  • A relational repository that makes models available to both tools and platform components

Three elements stand out in the previews that Microsoft staff and people close to the company have been unveiling: the aforementioned use of a new language (D), the emphasis on modeling and abstraction, and the idea of a repository that organizes the participating resources. None of this is new (the repository as the foundation of modeling tools had already spawned initiatives from Microsoft and others in the nineties), but here the whole is applied to resources that have matured and that have already been widely discussed.
On Microsoft's SOA site, some ideas put forward by Bob Muglia shed light on the future Oslo will bring:

“Oslo” and a Mainstream Approach to Modeling

Modeling has often been heralded as a means to break down technology and role silos in application development to assist IT departments in delivering more effective business strategies. However, while the promise of modeling has existed for decades, it has failed to have a mainstream impact on the way organizations develop and manage their core applications. Microsoft believes that models must evolve to be more than static diagrams that define a software system; they are a core part of daily business discussions, from organizational charts to cash flow diagrams. Implementing models as part of the design, deployment and management process would give organizations a deeper way to define and communicate across all participants and aspects involved in the application lifecycle.

In order to make model-driven development a reality, Microsoft is focused on providing a model-driven platform and visual modeling tools that make it easy for all “mainstream” users, including information workers, developers, database architects, software architects, business analysts and IT Professionals, to collaborate throughout the application development lifecycle. By putting model-driven innovation directly into the .NET platform, organizations will gain visibility and control over applications from end-to-end, ensuring they are building systems based on the right requirements, simplifying iterative development and re-use, and enabling them to resolve potential issues at a high level before they start committing resources.

Modeling is a core focus of Microsoft’s Dynamic IT strategy, the company’s long-term approach to provide customers with technology, services and best practices to enable IT and development organizations to be more strategic to the business. “Oslo” is a core piece of delivering on this strategy.

“The benefits of modeling have always been clear, but traditionally only large enterprises have been able to take advantage of it and on a limited scale. We are making great strides in extending these benefits to a broader audience by focusing on three areas. First, we are deeply integrating modeling into our core .NET platform; second, on top of the platform, we then build a very rich set of perspectives that help specific personas in the lifecycle get involved; and finally, we are collaborating with partners and organizations like OMG to ensure we are offering customers the level of choice and flexibility they need.”

Bob Muglia, Senior Vice President, Microsoft Server & Tools Business

We'll be waiting for more news...

Thursday, September 11, 2008

Microsoft joins the OMG

Consistent with its recent turn toward embracing UML, Microsoft announced this Wednesday that it is joining the OMG. A long road from the days of sarcastic remarks about the Object Management Group's efforts...

REDMOND, Wash. — Sept. 10, 2008 — Microsoft Corp. today outlined its approach for taking modeling into mainstream industry use and announced its membership in the standards body Object Management Group™ (OMG™). Modeling is a core focus of Microsoft’s Dynamic IT strategy, the company’s long-term approach to provide customers with technology, services and best practices to enable IT and development organizations to be more strategic to the business.

Modeling often has been heralded as a means to break down technology and role silos in application development to assist IT departments in delivering more effective business strategies. However, although the promise of modeling has existed for decades, it has failed to have a mainstream impact on the way organizations develop and manage their core applications. Microsoft believes that models must evolve to be more than static diagrams defining a software system; they are a core part of daily business discussions, from organizational charts to cash flow diagrams. Implementing models as part of the design, deployment and management process would give organizations a deeper way to define and communicate across all participants and aspects involved in the application life cycle.

To make model-driven development a reality, Microsoft is focused on providing a model-driven platform and visual modeling tools that make it easy for all “mainstream” users, including information workers, developers, database architects, software architects, business analysts and IT professionals, to collaborate throughout the application development life cycle. By putting model-driven innovation directly into the Microsoft .NET platform, organizations will gain visibility and control over applications from end to end, ensuring that they are building systems based on the right requirements, simplifying iterative development and re-use, and resolving potential issues at a high level before they start committing resources.

“We’re building modeling in as a core part of the platform,” said Bob Muglia, senior vice president, Server and Tools Business at Microsoft. “This enables IT pros to specify their business needs and build applications that work directly from those specifications. It also brings together the different stages of the IT life cycle — connecting business analysts, who specify requirements, with system architects, who design the solution, with developers, who build the applications, and with operations experts, who deploy and maintain the applications. Ultimately, this means IT pros can innovate and respond faster to the needs of their business.”

OMG has been an international, open-membership, not-for-profit computer industry consortium since 1989. OMG’s modeling standards include the Unified Modeling Language™ (UML®) and Business Process Modeling Notation (BPMN™). In addition to joining the organization, Microsoft will take an active role in numerous OMG working groups to help contribute to the open industry dialogue and assist with the evolution of the standards to meet mainstream customer needs. For example, Microsoft is already working with the finance working group on information models for insurance business functions related to the property and casualty industry, and will eventually look to expand those models so that they can be applied to P&C, life and reinsurance. Another early focus will be on developing specifications for converting messages across the various payments messaging standards.

“Microsoft has always been one of the driving forces in the development industry, helping to make innovation possible but also simplifying many of the most challenging aspects of the application development process,” said Dr. Richard Mark Soley, CEO at OMG. “In less than 10 years, OMG’s UML, a cornerstone of the Model Driven Architecture initiative, has been adopted by the majority of development organizations, making OMG the seminal modeling organization and supporting a broad array of vertical market standards efforts in healthcare, finance, manufacturing, government and other areas. Microsoft’s broad expertise and impact will make its membership in OMG beneficial to everyone involved.”

Developers can begin to implement model-driven approaches today through innovations such as Extensible Application Markup Language (XAML) — the declarative model that underlies Windows Presentation Foundation and Windows Workflow Foundation — and ASP.NET MVC, which deeply integrates model-driven development into the .NET Framework and makes it easy to implement the model-view-controller (MVC) pattern for Web applications. Both XAML and MVC are examples of models that drive the actual runtime behavior of .NET applications. These are part of Microsoft’s broader companywide efforts to deliver a connected modeling platform, which includes technologies being delivered across both “Oslo” and Visual Studio “Rosario” initiatives.

(From Microsoft PressPass)

InfoQ devotes an article to the news, reviewing the background and some of the voices that have also been discussed here.

Monday, September 08, 2008

Google Chrome on InfoQ


These days, several million enthusiasts (myself included) are trying out Chrome, Google's browser, in its first public release. Clearly a depth charge has been dropped into the market which, like other products from its owner, is only getting started; we will see much more.
Geoffrey Wiseman publishes on InfoQ a brief but wide-ranging article on its status and prospects, well worth reading.
Regarding the industry landscape, Wiseman observes:

Many people have heralded the launch as the renewal of the browser wars once fought between Microsoft and Netscape / Mozilla (those were the primary contenders, although every browser has its contingent willing to trumpet its strengths). Some are willing to count Chrome out already, while others are adopting a wait-and-see stance.

Many argue that Google doesn't wish to compete with other browsers, simply to advance the state of network-delivered applications to where they are indistinguishable from desktop applications and in so doing, push the operating system into the background.

In particular, people telling this story love to cast Microsoft in the opposing role, so that one can imagine the two titans clashing.

In his technical summary, Wiseman writes:

The Chrome browser is the result of the Chromium project, which connects the WebKit web browser engine with the new Google V8 JavaScript Engine, the Skia vector graphics engine, and Google Gears.

The WebKit browser engine began its life as a fork of the KDE project's KHTML and KJS engines by Apple, becoming the basis of the Safari browser. WebKit was later re-adopted by KDE. Google already employs WebKit within their Android mobile phone platform, and it became the obvious solution for them. As the comic introduction to Chrome states:

It uses memory efficiently, was easily adapted to embedded devices, and it was easy for new browser developers to learn to make the code base work. Browsers are complex. One of the things done well with WebKit is that it's kept SIMPLE.

The version of WebKit used in the initial Windows beta seems to be WebKit 525.13, which is not the most recent version, and has some security vulnerabilities (see Security below). Some users have also noticed rendering differences from Safari's WebKit rendering to Chrome's, including antialiasing and shadows. This may be the result of the Skia graphics engine used under the hood.

Talking about the integration with WebKit, the Chromium FAQ says:

The Chromium source code includes a copy of the WebKit source. We frequently snapshot against the WebKit tip of tree or specific branches according to our release needs.

Our goal is to reduce the size and complexity of the differences between the copy we maintain and the upstream WebKit source, in order to work more effectively as a participant in the WebKit community and also to make periodic updates occur more smoothly.

The V8 JavaScript Engine is open-source and hosted on Google Code, but was written for Chrome, rather than adopting an existing JavaScript engine. V8 is written in ~100,000 lines of C++ and can be run standalone or embedded in C++ applications.

The foremost reason for V8's creation seems to be performance. The V8 Design Documentation states, "V8 is ... designed for fast execution of large JavaScript applications." The Chromium Blog on V8 is entitled "The Need for Speed" and states:

Google Chrome features a new JavaScript engine, V8, that has been designed for performance from the ground up. In particular, we wanted to remove some common bottlenecks that limit the amount and complexity of JavaScript code that can be used in Web applications.

V8 claims a number of performance improvements and innovations: fast property access using hidden classes, dynamic machine code generation, efficient garbage collection (stop-the-world, generational, accurate, compacting), small object headers, and a multi-threaded design from the ground up. The team that created V8 was headed by Lars Bak, who, as Avi Bryant says, was "the technical lead for both Strongtalk and the HotSpot Java VM, and a huge contributor to the original Self VM" and has a number of VM-related patents to his name.

V8 is not a virtual machine in the classic sense, as Matthieu Riou points out: there's no intermediate representation or byte-code. As a result, you cannot write your own language that compiles to "V8 byte code", although you can cross-compile to JavaScript. Despite this, Dave Griswold believes that V8 could serve as the engine for other dynamic languages:

I think these properties will rapidly make V8 the dominant VM for dynamic languages. It ought to make a great platform for Smalltalk.

Google Gears has also moved into the Chromium Project, as pointed out in the FAQ:

With Gears as a plug-in to Chromium we're carrying two copies of sqlite and two copies of V8. That's silly. We're integrating the code so Gears can run great in Chromium. We plan to continue to build Gears for other browsers out of the same code base.

Although Google Chrome supports plugins for content handling like Flash and PDF, it does not currently support browser extensions, though that is planned.

For my part, I won't be replacing Firefox (for now), since Chrome is still incomplete for some of the uses I rely on Firefox for today, but its advantages so far are undeniable, performance first among them. As of September, the fight is on.

Sunday, September 07, 2008

Plex Beta 6.1

In a better development cycle than previous releases, Plex version 6.1 is in beta for roughly one more month. These days I'm busy testing it. For anyone coming from outside Plex, the most interesting thing in 6.1 is the development of applications for SOA. From the release summary document:
Model-Based Service Development
This feature strengthens CA Plex support for SOA development by providing model-based service development capabilities directly in the product. Services are represented as objects within the Plex model, using the component modeling approach already established for COM and EJB objects.
WCF service generation is supported with this release and a plug-in architecture enables developers to create their own service generators.
WCF Service Generation
Windows Communication Foundation (WCF) is a new communication subsystem within the Microsoft .NET Framework that unifies several different communication technologies such as web services, .NET remoting, message queuing, and so on.
The WCF service generation in CA Plex r6.1 enables you to present business logic as services based on WCF. This can include business logic developed in the Plex model and logic from third-party applications.
Service Wrappers and Cross-Platform Interoperability
The new WCF service generation is designed to support the convenient wrappering of existing applications as services. This includes Java, i5/OS, and .NET programs. Generally, this means that the target of a FNC implemented by FNC triple can be a Java or RPG function. In the case of RPG, the target function can correspond to an i5/OS program developed outside CA Plex, such as i5/OS programs or programs developed with CA 2E.
There are other improvements, in Java, on iSeries, and in model handling. But this one is particularly interesting.