Sunday, April 19, 2015

7th Plex/2E Worldwide Conference:

The agenda for the seventh Plex worldwide conference, scheduled for June 1 through 5, was published yesterday. As usual, there will be a good number of sessions run by users and business partners, some of them in Spanish. Among them, I would like to highlight a few related to the web, mobility, and web services:

Developing Mobile and Web UX Workshop, run by Abram Darnutzer and Andrew Legget of CM First, on extending Plex to mobile applications:
Do you need to extend your legacy Plex app to mobile, but aren’t sure how to get there? Join us for some hands-on training for developing multi-channel responsive HTML5 apps that can be deployed to desktop browsers and mobile devices. We will have exercises and detailed information on everything you need to be successful: geolocation, camera imaging, social auth, native app store wrapping techniques, offline storage, device-specific capabilities and more. You will leave with a working Order Processing and Delivery app that can be used off-line.
REST API’s in a CA Plex Context, presented by Lorenz Alder of CM First, introducing what is probably one of the most interesting topics from an architectural standpoint:
This presentation first discusses general aspects of web API design and gives some guidance and best practices. We will then focus on RESTful API’s and answer questions like: What is RESTful? What is HATEOAS? What is RPC? We will try to find a pragmatic approach to RESTful or REST-based API’s. In the last section we will talk about the status quo of the CA Plex generators, how they fit into the REST paradigm, and show some ways to move from a client-server-centric perspective to a web perspective.
HTML5, The Future for App Development, a keynote from the Sencha team:
This session will provide a side-by-side comparison of developing a multi-channel, multi-platform application in HTML5 relative to a siloed or native development approach. The presenter will explore not only development issues, but application deployment, testing, and on-going maintenance issues as well.
Among the 2E sessions, the 2E Training Workshop (What's new in r8.7?) will cover a topic I hope to see available in Plex as soon as possible too: extended handling of SQL features in DB2, something that appears to be included in the 7.2 incremental release, if I read the announcement correctly ("the new features in CA Plex r7.2 Incremental Release 1 that help enable these goals"):
Do you want to move from a traditional DDS database to an SQL-type database, and still continue using your existing application? Do you want to use meaningful names on your SQL/DDL databases instead of implementation names? Do you want to be able to generate SQL/DDL type objects into any library of your choice? These are now possible with the latest release of CA 2E – r8.7. Our CA Staff will walk you through the important features of CA 2E 8.7 and also have a hands-on session to try out these new features.
Also worth highlighting: several sessions on the .NET variant, productivity features in the Plex IDE, and, especially, some devoted to linking Plex or 2E application development with CA infrastructure offerings, with some very interesting aspects (I am trying to follow the 2E/Plex tie-in with CA API Gateway; see the session "CA 2E – A player in the AppDev Strategy").
Surprisingly, I do not see any session organized by the Websydian developers. That is particularly curious because, as far as I know, they took part in preparing the conference.

In short, a set of sessions aimed at very current problems. Not bad.

Tuesday, April 07, 2015

Ten rules to remember (and follow...)

Signed by Javin Paul, an article that recalls ten important rules of object-oriented design in Java (10 Object Oriented Design Principles Java Programmer should know), which we could easily extend to other fields (and I am thinking of Plex). A small decalogue, so concise that we could pin it on a board in front of our eyes, and one we should review every day.
DRY (Don't repeat yourself)
Our first object-oriented design principle is DRY; as the name suggests, DRY (don't repeat yourself) means don't write duplicate code; instead, use abstraction to put common things in one place. If you have a block of code in more than two places, consider making it a separate method; if you use a hard-coded value more than once, make it a public final constant. The benefit of this object-oriented design principle is in maintenance. It's important not to abuse it: duplication concerns functionality, not code. If you used common code to validate OrderID and SSN, it doesn't mean they are the same or that they will remain the same in the future. By using common code for two different functionalities you couple them tightly forever, and when your OrderID changes its format, your SSN validation code will break. So beware of such coupling, and don't combine things that use similar code but are not related.
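A minimal Java sketch of that caveat (OrderId and Ssn are hypothetical names, with assumed formats): the two validations stay separate even though they happen to share a digits-only format today, so a change to one cannot break the other.

    // Two concepts that share a format today but are unrelated:
    // each owns its own rule, so they can diverge safely.
    final class OrderId {
        static boolean isValid(String value) {
            return value != null && value.matches("\\d{10}"); // assumed format
        }
    }

    final class Ssn {
        static boolean isValid(String value) {
            return value != null && value.matches("\\d{9}"); // assumed format
        }
    }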

Encapsulate What Changes
Only one thing is constant in the software field, and that is change. So encapsulate the code you expect or suspect will change in the future. The benefit of this OO design principle is that properly encapsulated code is easy to test and maintain. If you are coding in Java, follow the principle of making variables and methods private by default and increasing access step by step, e.g. from private to protected, not public. Several design patterns in Java use encapsulation; the Factory design pattern is one example, encapsulating object-creation code and providing the flexibility to introduce new products later with no impact on existing code.
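A minimal sketch of that Factory example (names are illustrative, not from the article): object creation is the part expected to change, so it is hidden in one place.

    // Clients depend on the interface; only the factory knows the
    // concrete classes, so new products can be added in one place.
    interface Shape { double area(); }

    final class Circle implements Shape {
        private final double r;
        Circle(double r) { this.r = r; }
        public double area() { return Math.PI * r * r; }
    }

    final class ShapeFactory {
        static Shape create(String kind, double size) {
            switch (kind) {
                case "circle": return new Circle(size);
                default: throw new IllegalArgumentException("unknown kind: " + kind);
            }
        }
    }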

Open Closed Design Principle
Classes, methods or functions should be open for extension (new functionality) and closed for modification. This is another beautiful SOLID design principle, which prevents someone from changing already tried and tested code. Ideally, if you are adding new functionality, only that new code should need to be tested; that is the goal of the Open Closed design principle. By the way, the Open Closed principle supplies the "O" in the SOLID acronym.
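A minimal sketch of the idea, with hypothetical names: adding a new discount rule means adding a class; the tested Checkout code is never edited.

    interface DiscountRule { double apply(double price); }

    final class NoDiscount implements DiscountRule {
        public double apply(double price) { return price; }
    }

    final class SeasonalDiscount implements DiscountRule {
        public double apply(double price) { return price * 0.90; } // 10% off
    }

    // Closed for modification: new rules plug in, this class stays intact.
    final class Checkout {
        private final DiscountRule rule;
        Checkout(DiscountRule rule) { this.rule = rule; }
        double total(double price) { return rule.apply(price); }
    }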

Single Responsibility Principle (SRP)
The Single Responsibility Principle is another SOLID design principle, and supplies the "S" in the SOLID acronym. As per SRP, there should not be more than one reason for a class to change; in other words, a class should always handle a single functionality. If you put more than one functionality in one class in Java, it introduces coupling between them, and if you change one functionality there is a chance you break the coupled one, which requires another round of testing to avoid surprises in the production environment.
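A minimal sketch with hypothetical names: persistence and presentation are two different reasons to change, so they live in two classes.

    final class Invoice {
        final String id;
        final double amount;
        Invoice(String id, double amount) { this.id = id; this.amount = amount; }
    }

    // Changes only when storage concerns change.
    final class InvoiceRepository {
        void save(Invoice invoice) { /* persistence code only */ }
    }

    // Changes only when the report layout changes.
    final class InvoiceReportWriter {
        String asText(Invoice invoice) { return invoice.id + ": " + invoice.amount; }
    }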

Dependency Injection or Inversion principle
Don't ask for a dependency; it will be provided to you by the framework. This has been very well implemented in the Spring framework. The beauty of this design principle is that any class injected by a DI framework is easy to test with mock objects, and easier to maintain because object-creation code is centralized in the framework and client code is not littered with it. There are multiple ways to implement dependency injection, such as bytecode instrumentation, which some AOP (aspect-oriented programming) frameworks like AspectJ do, or proxies, as used in Spring. See this example of the IoC and DI design pattern to learn more about this SOLID design principle. It supplies the "D" in the SOLID acronym.
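A minimal framework-free sketch of the same idea (constructor injection; all names are hypothetical): the service never builds its dependency, so a test can hand it a mock.

    interface MailSender { void send(String to, String body); }

    final class SmtpMailSender implements MailSender {
        public void send(String to, String body) { /* real SMTP call here */ }
    }

    final class WelcomeService {
        private final MailSender sender; // provided, not constructed here
        WelcomeService(MailSender sender) { this.sender = sender; }
        void welcome(String user) { sender.send(user, "Welcome!"); }
    }

    // Production wiring: new WelcomeService(new SmtpMailSender());
    // Test wiring:       new WelcomeService((to, body) -> { /* record the call */ });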

Favor Composition over Inheritance
Always favor composition over inheritance, if possible. Some of you may argue with this, but I have found that composition is a lot more flexible than inheritance. Composition allows changing the behavior of a class at runtime by setting a property, and by using interfaces to compose a class we use polymorphism, which provides the flexibility to replace an implementation with a better one at any time. Even Effective Java advises favoring composition over inheritance.
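A minimal sketch of changing behavior at runtime through composition (hypothetical names):

    interface Movement { String move(); }

    final class Walk implements Movement { public String move() { return "walking"; } }
    final class Fly  implements Movement { public String move() { return "flying"; } }

    final class Robot {
        private Movement movement;
        Robot(Movement movement) { this.movement = movement; }
        void setMovement(Movement movement) { this.movement = movement; } // runtime swap
        String move() { return movement.move(); }
    }

Had Robot extended a Walk class instead, its behavior would be fixed at compile time; composed, it can switch to Fly while running.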

Liskov Substitution Principle (LSP)
According to the Liskov Substitution Principle, subtypes must be substitutable for their supertype, i.e. methods or functions that use the superclass type must be able to work with objects of the subclass without any issue. LSP is closely related to the Single Responsibility Principle and the Interface Segregation Principle. If a superclass has more functionality than its subclass, the subclass might not support some of it, which violates LSP. To follow this SOLID design principle, a derived class or subclass must enhance functionality, not reduce it. LSP supplies the "L" in the SOLID acronym.
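The classic rectangle/square sketch of such a violation (not from the article): Square narrows Rectangle's behavior, so code written against Rectangle breaks.

    class Rectangle {
        protected int w, h;
        void setW(int w) { this.w = w; }
        void setH(int h) { this.h = h; }
        int area() { return w * h; }
    }

    // Keeps its sides equal, silently changing the inherited contract.
    class Square extends Rectangle {
        @Override void setW(int w) { this.w = w; this.h = w; }
        @Override void setH(int h) { this.w = h; this.h = h; }
    }

    final class AreaCheck {
        // Holds for any true Rectangle, fails when passed a Square:
        // Square is not substitutable, violating LSP.
        static boolean holds(Rectangle r) {
            r.setW(2); r.setH(3);
            return r.area() == 6;
        }
    }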

Interface Segregation principle (ISP)
The Interface Segregation Principle states that a client should not implement an interface if it doesn't use it. This happens mostly when one interface contains more than one functionality and the client needs only one of them. Interface design is tricky work, because once you release an interface you cannot change it without breaking all implementations. Another benefit of this design principle in Java is that an interface has the drawback that all its methods must be implemented before any class can use it, so having a single functionality means fewer methods to implement.
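A minimal sketch with hypothetical names: the fat interface is split, so a simple printer is not forced to stub out methods it does not have.

    interface Printer { void print(String doc); }
    interface Scanner { void scan(String doc); }

    // Implements only what it actually supports.
    final class BasicPrinter implements Printer {
        public void print(String doc) { /* print only */ }
    }

    // Richer devices compose the small interfaces they need.
    final class MultiFunctionDevice implements Printer, Scanner {
        public void print(String doc) { /* ... */ }
        public void scan(String doc) { /* ... */ }
    }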

Programming for Interface not implementation
Always program to an interface, not an implementation; this leads to flexible code that can work with any new implementation of the interface. So use interface types for variables, method return types, and method argument types in Java. This has been advised by many Java authors, including in Effective Java and the Head First Design Patterns book.
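A minimal sketch: the interface type appears in the declaration and return type, and the concrete class only at the point of creation, so it can be swapped without touching callers.

    import java.util.ArrayList;
    import java.util.List;

    final class Orders {
        // Callers see List; switching to LinkedList later changes one line.
        static List<String> pending() {
            List<String> result = new ArrayList<>();
            result.add("order-1");
            return result;
        }
    }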

Delegation principle
Don't do all the work yourself; delegate it to the respective class. A classic example of the delegation design principle is the equals() and hashCode() methods in Java. To compare two objects for equality, we ask the class itself to do the comparison instead of the client class doing the check. The benefits of this design principle are no duplication of code and fairly easy modification of behavior.
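A minimal sketch of that equals()/hashCode() delegation (Money is a hypothetical class): clients just call a.equals(b), and the comparison logic lives in exactly one place.

    import java.util.Objects;

    final class Money {
        private final String currency;
        private final long cents;
        Money(String currency, long cents) { this.currency = currency; this.cents = cents; }

        @Override public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof Money)) return false;
            Money m = (Money) o;
            return cents == m.cents && currency.equals(m.currency);
        }
        @Override public int hashCode() { return Objects.hash(currency, cents); }
    }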
And returning to the idea of its applicability to Plex: it certainly applies. In some cases it is easy to understand (Favor Composition over Inheritance, Encapsulate What Changes, DRY), and in other cases it comes only after fighting the easy way of doing things (Programming for Interface not implementation). I only find Dependency Injection hard to implement. And when I speak of applicability, I mean at the model level, not at the level of the generated code, where its applicability is assured.
Plex, like other products, admits different interpretations, different ways of developing. Applying OOD principles lets you exploit it more productively, strengthening its characteristics. Fighting against a routine road map favors consistent and lasting results.

Monday, April 06, 2015

Plex on the System i

And speaking of modernization on the System i (or 400, or iSeries, or...), how is Plex doing? As a brief inventory, we can say that ILE RPG is supported, as is SQL ILE RPG. But complex constructions are not, both as regards the extended possibilities of the ILE (Integrated Language Environment) and as regards the generation of services available on some platform (web services especially, just as it is possible to use WCF in .NET today), or desirable in their own right, such as extensions for web and mobile access, cloud computing support, broader use of SQL, integration with Linux, Mac, Windows... and what about PHP or Ruby support, or the inclusion of Node.js in the system?
Some of these features are already available through third parties: CM First makes it possible to move both 2E and Plex models from the System i to web and mobile applications, as can also be done with Websydian. Web services were addressed long ago by Websydian. In cloud computing, CM First has started development with Amazon's EC2. There are multiple patterns developed by members of the Plex community, for the use of SQLRPG as well as web services, among others. Corporate developments that are sometimes shared and sometimes barely known to the user community.
But the most important thing is that there are several requests in progress to update ILE support, which appear as candidates to be included initially in version 7.2, though more probably in version 8. Among them: broader code generation for SQLRPG, handling of varchar data, and moving from generating DDS to DDL. REST? At some point Simon Cockaine asked the community about its use, which may or may not get it included.
Taken as a whole, an uneven response, but not far from the possibilities of the platform. Using the OS/400 API allows flexible access to resources, though one misses a more advanced response from those who should be steering the product.

Sunday, April 05, 2015

The future of the 400... (or whatever it's called in 2020)

Planned timeline for the System i (from IBM Systems Magazine)

Steve Will, Chief Architect of the System i, published an article in IBM Systems Magazine (March 30) explaining the planning of future versions of the System i (AKA AS/400, iSeries, System i...), which extends the system's life cycle through the next two versions as immediate plans, taking it beyond 2025: the current version appears planned through 2020/2021, and he confirms they are working on the two following versions ("i next" / "i next +1"). The first brings changes that expand features already under development; the next is defined by major changes that cannot fit into the first.
(...) we have two major releases under development right now. The 7.2 release came out less than a year ago, and we’ve been working hard on its following major release – called “i next” on this chart. But, we have items that we know cannot fit into “i next” but which require a major release, so we are working on the one after that, “i next +1.” 
The most important thing in Will's article is the indication of IBM's commitment to the system, and of its strengthening within today's technological evolution:
The key to understanding this next chart is to recognize when there is a known, committed date and when there is just a direction. A known date is represented when the horizontal line has a vertical end. For example, IBM i 6.1 was released in 2008, and its announced end of service is in 2015; both ends of that line are vertical. But while IBM i 7.2 came out in 2014 (vertical left end) the end of service date is indicated by an arrow, meaning we have not announced anything.

However, if 7.1 and 7.2 are each supported as long as 6.1 and V5R4 were, then 7.2 is going to be supported out into the 2020s.

And, very importantly, I told you that we have two more releases actively under development right now. When will they be released? Well, the ends of those lines are arrows, so we’re not saying yet. The availability dates could still change, but clearly, we don’t tend to deliver new releases any sooner than two years these days, and sometimes it’s longer than that. So, “i next” and “i next +1” will come out sometime, and if they also are supported for seven years, well, we’re more than 10 years out into the future now.

Furthermore, on the previous chart, we discussed that new capabilities are coming out in between releases. This means that the “Support” chart does not indicate only “support” but also a timeline for delivery of new function via TRs.
Against the snide claim (frequently repeated) that the machine is obsolete, I think we really ought to think more about the obsolescence of the perspectives with which developments on the platform are planned: I feel more and more inclined to definitively abandon any reference to the "400", considering the distance between what was available on the 400 twenty years ago and what can be done today on the "i":
We’re adding new capabilities in virtualization, cloud, I/O, DB2, mobile, open standards and much more. Staying current with new technology is a clear indication we are investing and plan to be around for a long time.

Thursday, March 19, 2015

The "CA Plex (& CA 2E) Office Hours Series" gets under way

On February 20, CA announced the start of "Office Hours" for Plex and 2E: a meeting point for open, informal discussion between customers and the CA staff supporting the products. The first meeting took place on February 18, for 2E users. And yesterday, March 18, the first Plex meeting was held, with good participation and interesting topics presented.
This first discussion turned out to be very topical, centered on contact with new platforms and architectures, and this same first thread will surely remain open for a while longer. In my view, the question raised by Simon Cockaine about the use of microservices and/or REST was especially interesting; it was not answered at the time but deserves to stay open. Other matters: the use of Plex in an ASP.NET environment, and the eventual update/modernization/expansion of the pattern libraries. If you are a user and any of the topics discussed concern you, take part in the open thread, or plan to speak up at the next meeting, on May 20. Active participation by more and more users in these Office Hours will give greater assurance that Plex evolves in a direction favorable to everyone.

Wednesday, March 18, 2015

Goodbye to Google Code

One more, and counting... Google has announced the end of the Google Code service. I just found out through the SourceForge newsletter, which points to its recommendation on how to migrate remaining projects to SourceForge. In short, as of March 12, Google Code stopped accepting new projects, giving up the service because of its loss of relevance against other, more widely accepted and better maintained options. Google explains it clearly:
When we started the Google Code project hosting service in 2006, the world of project hosting was limited. We were worried about reliability and stagnation, so we took action by giving the open source community another option to choose from. Since then, we’ve seen a wide variety of better project hosting services such as GitHub and Bitbucket bloom. Many projects moved away from Google Code to those other systems. To meet developers where they are, we ourselves migrated nearly a thousand of our own open source projects from Google Code to GitHub.
As developers migrated away from Google Code, a growing share of the remaining projects were spam or abuse. Lately, the administrative load has consisted almost exclusively of abuse management. After profiling non-abusive activity on Google Code, it has become clear to us that the service simply isn’t needed anymore.
The three key dates in the shutdown process are:
  • March 12, 2015 - New project creation disabled.
  • August 24, 2015 - The site goes read-only. You can still checkout/view project source, issues, and wikis.
  • January 25, 2016 - The project hosting service is closed. You will be able to download a tarball of project source, issues, and wikis. These tarballs will be available throughout the rest of 2016.
Economy of resources, or limited commitment to its own developments? Among the comments and complaints about the shutdown, I might share Pavel Roskin's, reacting to those who speak of a cemetery of initiatives: The comment about Google Cemetery makes me sick. Snide remarks don't belong here. Google Code has been a great contribution to the Free Software. Google should be thanked for that.

Tuesday, March 10, 2015

Plex: some activities in March

This first quarter of 2015 brings some news related to Plex: early on, in January, George Jeffcock announced his decision to support the set of patterns built by Pattern Factory, now published on his Stella Tools site. Pattern Factory was for years a very interesting set of open-source, ready-to-use patterns developed by Peter Fabel. Republishing the patterns under the Stella Tools domain makes them accessible again.
Also from George comes the announcement of a bridge between Plex and TD/OMS, the change-management and deployment tool from Remain Software, of the Netherlands. His comment is interesting:
I propose a CA Plex developer will be very familiar with the CA Plex Object Browser and would like to use it to orchestrate which objects are to be managed by a 3rd party tool. Ideally the CA Plex IDE would support traditional OLE drag and drop to 3rd party but with this unsupported, creating a Model API application in between the CA Plex IDE and the 3rd party is the next best solution.

In the case of CM First's Matchpoint Product it offers in my opinion the closest integration to CA Plex possible aided by the fact it is written in CA Plex and so no need for the intermediate Model API dialog.

With Softlanding's Turnover (originally written in CA Plex) and Remain's TD/OMS both being Eclipse based, wouldn't it be nice (well we can dream) to make the CA Plex IDE into Eclipse.
In February, CM First announced mobile support for its change-management tool, Matchpoint, mentioned above by Jeffcock.
Also this month, tomorrow in fact, a conference on Plex is being held in Milan, promoted by AXSOS, Mondo Software, and Websydian.
An acceptable start to the year...

Sunday, March 01, 2015

Seventh CA 2E/Plex Worldwide Conference

From June 1 to 4 this year, the seventh worldwide Plex and 2E conference will take place. Initially planned for an earlier date, it has finally been confirmed for June. This time the sessions will be held in Austin, Texas, with the support of CM First. In a way, the Texas location favors the participation of colleagues from Latin America. There has been talk of the possibility of several sessions being in Spanish. Currently, both the conference organizers and the Plex/2E community are encouraging users to take part in the sessions with their own projects. There is no session detail yet, but the released versions (CA 2E v8.7, CA Plex v7.2) are anticipated as part of the content. There are many proposals, and we expect them to be better defined in the coming weeks.

Saturday, February 28, 2015

On Lenovo and Superfish

The scandal over Lenovo and its preinstalled Superfish adware generated a rejection comparable to that produced by the staggered revelations of the American NSA's meddling. As has been said, it "is very possibly the worst thing I have seen a manufacturer do to its customer base" (Mark Rogers). In his words:
We trust our hardware manufacturers to build products that are secure. In this current climate of rising cybercrime, if you can’t trust your hardware manufacturer, you are in a very difficult position. That manufacturer has a huge role to play in keeping you safe – from releasing patches to update software when vulnerabilities are found to behaving in a responsible manner with the data they collect and the privileged access they have to your hardware.
When bad guys are able to get into the supply chain and install malware, it is devastating. Often users find themselves with equipment that is compromised and are unable to do anything about it. When malware is installed with the access a manufacturer has, it buries itself deep inside the system – often with a level of access that takes it beyond the reach of antivirus or other countermeasures. This is why it is all the more disappointing – and shocking – to find a manufacturer doing this to its customers voluntarily.
Without dwelling on what is already known: Lenovo shipped its laptops with preinstalled software, Superfish among it. This product intercepted the user's searches, inserting its own advertising. That in itself is annoying and unpleasant, and anti-adware software deals with this kind of product. But the bigger problem is that, to achieve its goal, Superfish replaces the security certificates of the sites visited with one of its own, creating the conditions under which no page reached over an SSL connection can be trusted. Mark Rogers gives a detailed report of the resulting vulnerability (and also of how to fix it). What can users do if their supplier betrays them and hands them over bound hand and foot?
The Free Software Foundation's response points in that direction:
Whenever you use proprietary software like Windows or Superfish, true, trustable, verifiable security is always out of reach. Because proprietary code can't be publicly inspected, there's no way to validate its security. Users have to trust that the code is safe and works as advertised. Since proprietary code can only be modified by the developers who claim to own it, users are powerless to choose the manner in which security bugs are fixed. With proprietary software, user security is secondary to developer control.
In many scenarios it is often very difficult to avoid using proprietary software. However, the growing interference with user privacy warrants evaluating two and three times the possibility of taking another path, and the risk of putting yourself in the hands of someone who may not respect the buyer's rights.

Highlights: Mark Rogers's explanation of the Superfish security hole.
Filippo Valsorda's explanation, and his Superfish risk test.
Robert Graham's investigation of Superfish and its flaw.
David Auerbach's commentary on the impact of this action on Lenovo.

Sunday, February 22, 2015

James Ward on Java

James Ward, of Salesforce, wrote a note last December on Java (and its surroundings). An excellent read on Java for web applications, and surely adaptable to other scenarios. It is really an experience-based recommendation on adopting agile methods and on using tools that automate and systematize the application build (and deployment) process. I don't want to repeat it, only recommend it. In any case, I would like to quote his reflection on maintaining "monolithic" releases (working toward deliveries on long time and development cycles):

Monolithic Releases Suck

Unless you work for NASA there is no reason to have release cycles longer than two weeks. It is likely that the reason you have such long release cycles is because a manager somewhere is trying to reduce risk. That manager probably used to do waterfall and then switched to Agile but never changed the actual delivery model to one that is also more Agile. So you have your short sprints but the code doesn’t reach production for months because it would be too risky to release more often. The truth is that Continuous Delivery (CD) actually lowers the cumulative risk of releases. No matter how often you release, things will sometimes break. But with small and more frequent releases fixing that breakage is much easier. When a monolithic release goes south, there goes your weekend, week, or sometimes month. Besides… Releasing feels good. Why not do it all the time?
Moving to Continuous Delivery has a lot of parts and can take years to fully embrace (unless like all startups today, you started with CD). Here are some of the most crucial elements to CD that you can implement one-at-a-time:
  • Friction-less App Provisioning & Deployment: Every developer should be able to instantly provision & deploy a new app.
  • Microservices: Logically group services/apps into independent deployables. This makes it easy for teams to move forward at their own pace.
  • Rollbacks: Make rolling back to a previous version of the app as simple as flipping a switch. There is an obvious deployment side to this but there is also some policy that usually needs to go into place around schema changes.
  • Decoupled Schema & Code Changes: When schema changes and code changes depend on each other rollbacks are really hard. Decoupling the two isolates risk and makes it possible to go back to a previous version of an app without having to also figure out what schema changes need to be made at the same time.
  • Immutable Deployments: Knowing the correlation between what is deployed and an exact point-in-time in your SCM is essential to troubleshooting problems. If you ssh into a server and change something on a deployed system you significantly reduce your ability to reproduce and understand the problem.
  • Zero Intervention Deployments: The environment you are deploying to should own the app’s config. If you have to edit files or perform other manual steps post-deployment then your process is brittle. Deployment should be no more than copying a tested artifact to a server and starting its process.
  • Automate Deployment: Provisioning virtual servers, adding & removing servers behind load balancers, auto-starting server processes, and restarting dead processes should be automated.
  • Disposable Servers: Don’t let the Chaos Monkey cause chaos. Servers die. Prepare for it by having a stateless architecture and ephemeral disks. Put persistent state in external, persistent data stores.
  • Central Logging Service: Don’t use the local disk for logs because it prevents disposability and makes it really hard to search across multiple servers.
  • Monitor & Notify: Setup automated health checks, performance monitoring, and log monitoring. Know before your users when something goes wrong.
There are a ton of details to these that I won’t go into here. If you’d like to see me expand on any of these in a future blog, let me know in the comments.
An important point, but recalled only so that you read James Ward's full reflection, which deserves it.
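As a small aside on Ward's "Zero Intervention Deployments" point, here is a minimal sketch (my own, not from his article) of letting the environment own the app's config in Java: everything is read from environment variables with defaults, so deploying really is just copying the artifact and starting it.

    final class AppConfig {
        final int port;
        final String dbUrl;

        private AppConfig(int port, String dbUrl) { this.port = port; this.dbUrl = dbUrl; }

        // PORT and DATABASE_URL are illustrative variable names.
        static AppConfig fromEnvironment() {
            int port = Integer.parseInt(getOrDefault("PORT", "8080"));
            String dbUrl = getOrDefault("DATABASE_URL", "jdbc:postgresql://localhost/dev");
            return new AppConfig(port, dbUrl);
        }

        private static String getOrDefault(String name, String fallback) {
            String value = System.getenv(name);
            return value != null ? value : fallback;
        }
    }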

Saturday, January 10, 2015

End of support for Plex 7.0

To the announced withdrawal of support for Plex 6.1, effective this year (June 2015), is now added the "End of Service" notice for version 7.0, starting in January 2016. With the already-discussed launch of Plex 7.2, we are starting to see a technological update that was badly needed. Still insufficient, but closer to the level of the products Plex works with: Visual Studio partially updated (for WCF connectors, but not for C++ clients, which is more than awkward), Java updated to Java 7. There is still ground to cover, but the policy of very short change cycles is being followed.
Make your evolution plans.

Tuesday, December 30, 2014

Launch of the new Plex 7.2 release

Yesterday CA announced the availability of the Plex 7.2 release. Much has been said about what it would include, but nothing is anticipated in the announcement. What is anticipated is something already known, yet very promising: the adoption of a policy of rapid incremental releases, with direct participation by every customer who wants to join the plan:
The CA Incremental Release Program is a customer-interactive delivery model where new product features are developed and released using the Agile development methodology. CA’s development teams work closely with customers to create product features for rapid implementation. Rather than spending years building a software release full of features, we work with customers and release features incrementally, as they are completed.
The launch of version 7.2 confirms this policy, shortening even further the delivery times already seen between versions 7.0 and 7.1.
From the initial documentation it appears that the bulk of the changes are concentrated in the .NET variant and in WCF service connectors, the added support for Oracle 12, and the long-awaited update of Visual Studio support... to 2010. Support for later versions is said to be possible but untested (VS 2013). As for Java, it remains supported up to Java 7 (already the case in 7.1), and on the OS/400 side, support reaches IBM i 7.1.
Evidently, participation from the customer base is needed if we want to see other new features become available.
Plex 7.2 on the official wiki (CA).
Fix list on the official Plex wiki.
Plex 7.2 compatibility matrix (login required).

Saturday, December 27, 2014

A two-speed architecture (McKinsey)

A pair of articles published by McKinsey this December (1 and 2, signed by Oliver Bossert, Jürgen Laartz, and Tor Jakob Ramsøy) lay out a realistic adaptation strategy for a company that predates the digital universe. The authors propose a two-speed strategy for adding a digital layer to a traditional company. Rereading it a little, we could conclude that the scenario described is the common, majority case: digital-native companies are a minority, even though they have become hegemonic in very few years. An excellent article for thinking about strategies.
The authors' premise is this:
Unlike enterprises that are born digital, traditional companies don’t have the luxury of starting with a clean slate; they must build an architecture designed for the digital enterprise on a legacy foundation. What’s more, while most companies would have been comfortable in the past going through a three- to five-year transformation and not implementing new features in the meantime, today’s highly competitive markets no longer allow players to alter architecture and business models sequentially. It is therefore important to realize that the transformation toward digital is a continuous process of delivering new functionality.
For the authors, this migration to the digital world requires becoming strong in four aspects: innovation in product and service development, the ability to serve multiple channels, the capacity to analyze data and trends (big data), and the automation and digitization of business processes:
First, because the digital business model allows the creation—and shorter time to market—of digital products and services, companies need to become skilled at digital-product innovation that meets changing customer expectations. One such new offering for consumers is car-insurance policies enabled by geolocation-tracking technology, where the price of the policy depends on how much and how aggressively a person actually drives.
Second, companies need to provide a seamless multichannel (digital and physical) experience so consumers can move effortlessly from one channel to another. For example, many shoppers use smartphones to reserve a product online and pick it up in a store.
Third, companies should use big data and advanced analytics to better understand customer behavior. For example, gaining insight into customers’ buying habits—with their consent, of course—can lead to an improved customer experience and increased sales through more effective cross-selling.
Fourth, companies need to improve their capabilities in automating operations and digitizing business processes. This is important because it enables quicker response times to customers while cutting operating waste and costs.
The basic problem to confront is the contradiction between a stable company, with conservatively managed business processes, and the need to be flexible, agile, fast, and adaptable in serving the new processes. This requires a different way of organizing IT activities:
While a few players have overcome some of these hurdles, it is a big challenge for many IT executives to implement all four levers so customers can, for instance, purchase individually tailored products across multiple channels. One important reason is that the legacy IT architecture and organization, for example, which runs the supply-chain and operations systems responsible for executing online product orders, lacks the speed and flexibility needed in the digital marketplace.
Indeed, the ability to offer new products on a timely basis has become an important compe­t­itive factor; this might require weekly software releases for an e-commerce platform. That kind of speed can only be achieved with an inherently error-prone software-development approach of testing, failing, learning, adapting, and iterating rapidly. It’s hard to imagine that experimental approach applied to legacy sys­tems. Nor would it be appropriate, because the demand for perfection is far higher in key back-end legacy systems. Quality, measured by the number of IT system errors, and resilience, measured by the availability and stability of IT infrastructure services, comes at slow speed but is critical for risk- and regulatory-compliance management and for core transactional activities such as finance and online sales. In contrast, lower IT-system quality and resilience can be acceptable in customer-facing areas, for instance, when users participate in the testing of new software. For these reasons, many companies need an IT architecture that can operate at different speeds.
The authors regard both types of processes as indispensable (the traditional, hard to transform, and the digital, with strong demands for agility, flexibility, and speed of response). In this framework, they lay out a series of recommendations for maintaining, and managing the interaction between, both types of processes and needs:
Manage a hybrid target architecture with very different platforms. Digital target architectures are heterogeneous, with trans­actional platforms managed for scalability and resilience coexisting alongside other systems optimized for customer experience. The transformation can be sustained only if a high-level target architecture and standards in critical areas such as cybersecurity are clearly described from the beginning. Without them, the transformation can be slowed down by the complexity of legacy and new hardware and application provisioning.
Plan for ongoing software delivery with blends of methodologies. There isn’t time to develop software by using a waterfall model and then separating the transformation into several long phases, as in traditional multi-year IT transformations. Nor is the solution to migrate all delivery to agile methodologies. The answer is to do both but blend the benefits of agile (iterative development, continuous delivery) into the waterfall model. Now, the software solution for each business challenge has to be constantly developed, tested, and implemented in an integrated fashion. This requires clear segregation of platforms into domains managed for fast iterative delivery (for example, for customer-experience applications) or for transactional integrity (for back-end transactional systems).
Develop the low-speed architecture, too. It’s important to establish a clear distinction between the two IT models from the beginning and not only focus on the fast-speed part but also develop the transactional back-end architecture. Those systems of record require rigorous development and testing methodologies and must be managed for resilience and scalability, with no compromises.
Build a new organization and governance model in parallel with the new technology. In the digital enterprise, business and IT work together in a new and integrated way, where boundaries between the two start to blur. This partnership has to be established during the transformation.
Change mind-sets. By transforming the architecture, technology can become a key fac­tor for a company’s competitiveness. Such a development requires increased management attention and usually a place on the board agenda. While IT efficiency clearly remains important, spending levels may well rise as companies transform IT from largely being a necessary expense to being a true business enabler. As such, expenses are managed as investments rather than just costs; this will often require a substantial mind-set shift for the organization.
Run waves of change in three parallel streams. In a two-speed transformation, it makes sense to have an implementation plan that runs in three parallel streams. The digital-transformation stream builds new functionality for the business, supported by the results of a short-term optimization stream that develops solutions that might not always be compliant with the target architecture (for example, using noncom­pliant interfaces). To ease the development of short-term measures and create a sustainable IT infrastructure, an architecture-transformation stream is the third necessary component.
The techniques described can be seen as applicable not only to a strategy of change toward a digital economy, but to any scenario involving a large company with an established tradition of business processes and established but outdated technological solutions; the central idea I take from the vision of Bossert and his co-authors is that of articulating two speeds in the development of new processes and architectures, giving each part its relative weight and keeping differentiated procedures according to the area involved.
I recommend rereading these articles several times, extrapolating where necessary.