Thursday, August 25, 2011

ERPs and innovation (on the subject of management)

Adam Hartung, a technology commentator at Forbes, compares Hewlett-Packard's management with Apple's, pointing to HP's "industrial-style" management as a major cause of its progressive decline. But speaking of management and innovation, Hartung offers an aggressive and interesting view of ERP software. Here is what Hartung says:

Now CEO Apotheker’s plan for HP’s growth is selling ERP software from a third-tier competitor. ERP (enterprise resource planning) applications like SAP and Oracle are viewed today as the remarkably expensive, hard to install, hard to maintain, hard to modify, monolithic, bureaucracy creating, innovation killing systems they were designed to be.  ERP applications were created to force companies, functions and employees to replicate previous decisions, in the hopes of increasing control (especially financial control) and cutting operating cost.  Not to learn, or do anything new.  They were designed to create rigidity, and are completely unable to enhance flexibility, market responsiveness and growth.  ERP was the emerging high-tech “solution” in 1992!
It is clear Mr. Apotheker is returning to his previous personal success, as CEO of SAP.  He isn’t looking to the future and how he can meet new, unmet needs.  He’s investing in what he knows, from his past.  His focus on “business solutions” is his way of pushing HP to be more like SAP – only 2 decades too late, against enormously well funded competitors, in a low-growth marketplace looking for new solutions.  Too bad for HPs investors, employees and suppliers.

Sunday, August 21, 2011

IBM as part of technology history

One of the most visited articles here is the one on the history of IBM's servers. Unfortunately, the link that opens that article stopped pointing anywhere a long time ago, the old problem of any article that relies on the web... Because of this and other similar cases, for some time now I have been replacing links with an excerpt of the subject in question. I have not lost hope of recovering the old article, but something can be done in the meantime: thanks to the Wayback Machine, a snapshot of the page as of January 5, 2007 can be recovered, which broadly reflects the content of that page. That reference is now included in my 2006 note. Given the volatility of material written for the Internet, the Wayback Machine provides an irreplaceable service.
But returning to IBM and its influence on twentieth-century technological development: on the occasion of its centennial, the company has produced a great deal of documentation, which I assume will be more persistent. The most important set of documents is the one listing one hundred IBM contributions to technology. Unfortunately, for lack of time I let the best moment pass for reproducing the excellent materials published during the centennial; nevertheless, this summary and others to follow aim to remedy that. And regarding IBM, the most notable observation about its centennial is not so much its ability to grow and remain at the forefront of technological research, but rather the question about its future: some years ago IBM made a radical shift in its business vision, one that for now pays good dividends but calls into question its technological leadership and its investment in research. The question is: will it have a second century of leadership?

Returning to the centennial papers: in no particular order, simply by preference and as I come across them, the first one to reproduce is the one devoted to Codd and relational databases:
In 1970, Edgar F. Codd, an Oxford-educated mathematician working at the IBM San Jose Research Lab, published a paper showing how information stored in large databases could be accessed without knowing how the information was structured or where it resided in the database.
Until then, retrieving information required relatively sophisticated computer knowledge, or even the services of specialists who knew how to write programs to fetch specific information—a time-consuming and expensive task.
Databases that were used to retrieve the same information over and over, and in a predictable way—such as a bill of materials for manufacturing—were well established at the time. What Codd did was open the door to a new world of data independence. Users wouldn’t have to be specialists, nor would they need to know where the information was or how the computer retrieved it. They could now concentrate more on their businesses and less on their computers.
Codd called his paper, “A Relational Model of Data for Large Shared Data Banks.” Computer scientists called it a “revolutionary idea.”
Today, the ease and flexibility of relational databases have made them the predominant choice for financial records, manufacturing and logistical information, and personnel data. Most routine data transactions—accessing bank accounts, using credit cards, trading stocks, making travel reservations, buying things online—all use structures based on relational database theory.
Codd’s idea spawned a new family of products for IBM, centered on the IBM® DB2® database management system, as well as the industry-standard computer language for working with relational databases, called SQL.
According to the New York Times obituary for Codd, “… before Dr. Codd’s work found its way into commercial products, electronic databases were ‘completely ad hoc and higgledy-piggledy,’ said Chris Date, a relational data expert who worked on DB2 at IBM before becoming a business partner of Dr. Codd’s.”
Like many revolutionary ideas, the relational database didn’t come about easily.
By the 1960s, the vast amount of data stored in the world’s new mainframe computers—many of them IBM System/360 machines—had become a problem. Mainframe computations were expensive, often costing hundreds of US dollars per minute. A significant part of that cost was the complexity surrounding database management.
Codd, who had added a doctorate in computer science to his math background when he came to the United States from his native England, set out to solve this problem. He started with an elegantly simple premise: He wanted to be able to ask the computer for information, and then let the computer figure out where and how the information is stored and how to retrieve it.
IBM’s Don Chamberlin said that Codd’s “basic idea was that relationships between data items should be based on the item’s values, and not on separately specified linking or nesting. This notion greatly simplified the specification of queries and allowed unprecedented flexibility to exploit existing data sets in new ways.”
In his seminal paper, Codd wrote that he used the term relation in the mathematical sense of set theory, as in the relation between groups of sets. In plain terms, his relational database solution provided a level of data independence that allowed users to access information without having to master details of the physical structure of a database.
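To make that notion of value-based relationships and data independence concrete, here is a minimal sketch, not taken from the IBM material, using Python's built-in sqlite3 module; the table and column names are invented for the illustration. The query names the result it wants and lets the engine decide how to retrieve it; the only link between the two tables is the values they share.

```python
import sqlite3

# In-memory database; any relational engine would serve for this illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders    (order_id    INTEGER PRIMARY KEY,
                            customer_id INTEGER,   -- related to customers only by value
                            total       REAL);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders    VALUES (10, 1, 99.5), (11, 2, 15.0), (12, 1, 42.0);
""")

# The join is expressed purely in terms of matching values (customer_id);
# nothing here depends on pointers, nesting, or the physical layout of the data.
rows = conn.execute("""
    SELECT c.name, SUM(o.total)
    FROM customers c JOIN orders o ON o.customer_id = c.customer_id
    GROUP BY c.name
    ORDER BY c.name
""").fetchall()
print(rows)   # [('Ada', 141.5), ('Grace', 15.0)]
```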
As exciting as the theory was to the technical community, it was still a theory. It needed to be thoroughly tested to see if and how it worked. For several years, IBM elected to continue promoting its established hierarchical database system, IBM IMS (Information Management System). A hierarchical system uses a tree-like structure for the data tables. While IMS can be faster than DB2 for common tasks, it may require more programming effort to design and maintain it for non-primary duties. Relational databases have proven superior in cases where the requests change frequently or require a variety of viewpoint “angles.”
IBM, Rockwell and Caterpillar developed IMS in 1966 to help track the millions of parts and materials used in NASA’s Apollo Space Program. It continues to be IBM’s premier hierarchical database management system.
In 1973, the San Jose Research Laboratory—now Almaden Research Center—began a program called System R (R for relational) to prove the relational theory with what it called “an industrial-strength implementation.” The project produced an extraordinary output of inventions that became the foundation for IBM’s success with relational databases.
Don Chamberlin and Ray Boyce invented SQL, for Structured Query Language, today the most widely used computer language for querying relational databases. Patricia Selinger developed a cost-based optimizer, which makes working with relational databases more cost-effective and efficient. And Raymond Lorie invented a compiler that saves database query plans for future use.
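As a rough present-day illustration of what a cost-based optimizer decides (shown here with SQLite rather than the System R optimizer described above; the table and index names are hypothetical), one can ask the engine which access plan it chose for a declarative query, and watch the plan change once an index exists:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY,"
             " customer_id INTEGER, total REAL)")

query = "SELECT total FROM orders WHERE customer_id = ?"

# Without an index, the planner has little choice but to scan the whole table...
print(conn.execute("EXPLAIN QUERY PLAN " + query, (1,)).fetchall())

# ...after an index is created, the same declarative query gets a cheaper plan.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
print(conn.execute("EXPLAIN QUERY PLAN " + query, (1,)).fetchall())
```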
In 1983, IBM introduced the DB2 family of relational databases, so named because it was IBM’s second family of database management software. Today, DB2 databases handle billions of transactions every day. It is one of IBM’s most successful software products. According to Arvind Krishna, general manager of IBM Information Management, DB2 continues to be a leader in innovative relational database software.
Dr. Codd, known as “Ted” to his colleagues, was honored as an IBM Fellow in 1976, and in 1981, the Association for Computing Machinery gave him the Turing Award for contributions of major importance to the field of computing. The Turing is generally recognized as the Nobel Prize of computing.

Selected team members who contributed to this Icon of Progress:

  • Ray Boyce, co-developer of SQL (Structured Query Language)
  • Edgar “Ted” Codd, mathematician, IBM Fellow
  • Donald Chamberlin, co-developer of SQL
  • Christopher J. Date, longtime collaborator of Ted Codd
  • Patricia G. Selinger, founding manager of the Database Technology Institute at IBM Almaden Research Center
The links in the reproduced article are mine.

Saturday, August 20, 2011

SPLC 2011

With the preliminary program starting this Sunday, the 15th Software Product Line Conference (SPLC) will be held in Munich next Monday and Tuesday. Software product lines (SPL) are a fundamental concept in software construction and a source of ideas for those who build it, whether in large organizations, where these concepts should be indispensable, or in smaller ones, where they should motivate more rational lines of research and work.
Although we have discussed its meaning here more than once, the description of one of the conference tutorials explains the goal reasonably well:
Product Line Engineering is a common approach to address a business with a family of related software products. Instead of having separated development projects for each product in the family, products are built using a shared set of core assets, such as reference architectures and common infrastructure and domain-specific components. Key drivers for PLE are the potential cost savings due to shared core assets, as well as new business opportunities by, for instance, supporting tight integration and improved interworking among members of the product family. 
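To make the idea of shared core assets and product variability concrete, here is a minimal, purely illustrative sketch in Python; the component and feature names are invented and not taken from the tutorial. Each product in the family is assembled from the same core assets plus the optional features selected for it:

```python
from dataclasses import dataclass, field

# Shared core assets: components reused, unchanged, by every product in the family.
CORE_ASSETS = ["persistence", "logging", "user-management"]

# Variability model: optional features that distinguish members of the family.
OPTIONAL_FEATURES = {"reporting", "multi-language", "cloud-sync"}

@dataclass
class Product:
    name: str
    features: set = field(default_factory=set)

    def build(self) -> list:
        unknown = self.features - OPTIONAL_FEATURES
        if unknown:
            raise ValueError(f"features outside the product line: {unknown}")
        # Every product = shared core assets + its selected variable features.
        return CORE_ASSETS + sorted(self.features)

# Two members of the same product family, derived from the same core.
basic = Product("basic-edition", {"reporting"})
premium = Product("premium-edition", {"reporting", "multi-language", "cloud-sync"})

print(basic.build())
print(premium.build())
```

In a real product line the variability model would of course be far richer (constraints between features, reference architectures, generators), but the shape is the same: one shared core, many derived products.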
I am persuaded that SPL and MDD follow convergent, mutually reinforcing paths. One of the tutorials to be given touches on this aspect: "Leveraging Model Driven Engineering in Software Product Lines", presented by Bruce Trask and Angel Roman:
Model Driven Engineering (MDE) is a promising recent innovation in the software industry that has proven to work synergistically with Software Product Line Architectures (SPLA). It can provide the tools necessary to fully harness the power of Software Product Lines. The major players in the software industry including commercial companies such as IBM, Microsoft, standards bodies including the Object Management Group and leading Universities such as the ISIS group at Vanderbilt University are embracing this MDE/SPLA combination fully. IBM is spearheading the Eclipse Foundation including its MDE tools like the Eclipse Modeling Framework (EMF) and the Graphical Modeling Framework. Microsoft has also launched their Software Factories and DSL Toolkit into the MDE space. Top software groups such as the ISIS group at Vanderbilt are using these MDE techniques in combination with SPLAs for very complex systems. The Object Management Group is working on standardizing the various facets of MDE. All of these groups are capitalizing on the perfect storm of critical innovations today that allow such an approach to finally be viable. To further emphasize the timeliness of this technology is the complexity ceiling the software industry find itself facing wherein the platform technologies have increased far in advance of the language tools necessary to deal with them. This complexity ceiling is evident in today’s Software Product Lines.
I invite you to follow the sessions, request the presentations, and get in touch with the participants. These annual conferences gather the best material on the subject.

Information and follow-up on the conference site.

Thursday, August 04, 2011

An update on communications

Regarding the official introduction of the TDT (digital terrestrial television) system in Spain (the issue would also be valid for Argentina and Chile, with recent changes in the same direction), the satellite company Astra has been circulating a report that questions it and puts the state of communications usage on the table. D. Toledo, in El Confidencial:

“The definitive death sentence for TDT comes from the European Commission itself, which contemplates as a scenario the complete liberalization of the radio spectrum so it can be devoted to other services, with television being broadcast over other technologies. In short, we would be talking about the total disappearance of TDT,” the report reads. Astra points out that digital terrestrial television restricts the spectrum available to operators for their broadcasts, which will limit the development of high-definition and 3D programming. “They will have to choose and discard the cutting-edge components. Satellite, cable and the Internet are the future of television, however mistaken and costly a bet the Spanish government has made,” it concludes.
Astra's crusade against TDT is a long-running one, and it is not entirely disinterested. The company feels harmed by the model chosen by the government, which it claims violates the principle of technology neutrality. According to the Luxembourg-based company's calculations, bringing the digital terrestrial television signal to between 96% and 98.5% of the population has required building 3,750 new broadcasting centers, implying an initial investment of 327 million euros and a maintenance cost of 67.2 million per year. According to its estimates, satellite becomes more efficient above 90% coverage.
The progressive liberalization of the radio spectrum to create the so-called digital dividend and make room for mobile broadband services at higher speeds than today's reinforces these arguments. For now, the process will force the channels to be relocated; the modification of the channels installed in the band between 790 and 862 MHz is already under way. “Operators, aware that the future lies in data transmission over the mobile network, have already announced that they will need more spectrum (at least 100 MHz), so the second digital dividend, planned for 2015, is expected to be brought forward to 2013,” Astra adds. Each change of this kind forces households to re-adapt their antennas.