Sunday, August 7, 2022

Todd Montgomery: Unblocked by design

Read on InfoQ, which has published a talk given at QCon Plus in November 2021. A point of view far from how I have always worked, but with arguments that deserve attention. Todd Montgomery argues in favor of asynchronous process design, considering first of all that sequentiality is an illusion:

All of our systems provide this illusion of sequentiality, this program order of operation that we really hang our hat on as developers. We look at this and we can simplify our lives by this illusion, but be prepared, it is an illusion. That's because a compiler can reorder, runtimes can reorder, CPUs can reorder. Everything is happening in parallel, not just concurrently, but in parallel on all different parts of a system, operating systems as well as other things. It may not be the fastest way to just do step one, step two, step three. It may be faster to do steps one and two at the same time or to do step two before one because of other things that can be optimized. By imposing order on that we can make some assumptions about the state of things as we move along. Ordering has to be imposed. This is done by things in the CPU such as the load/store buffers, providing you with this ability to go ahead and store things to memory, or to load them asynchronously. Our CPUs are all asynchronous.

Storages are exactly the same way, different levels of caching give us this ability for multiple things to be optimized along that path. OSs with virtual memory and caches do the same thing. Even our libraries do this with the ideas of promises and futures. The key is to wait. All of this provides us with this illusion that it's ok to wait. It can be, but that can also have a price, because the operating system can de-schedule. When you're waiting for something, and you're not doing any other work, the operating system is going to take your time slice. It's also lost opportunity to do work that is not reliant on what you're waiting for. In some application, that's perfectly fine, in others it's not. By having locks and signaling in that path, they do not come for free, they do impose some constraints.

Setting the context first:

When we talk about sequential or synchronous or blocking, we're talking about the idea that you do some operation. You cannot continue to do things until something has finished or things like that. This is more exaggerated when you go across an asynchronous binary boundary. It could be a network. It could be sending data from one thread to another thread, or a number of different things. A lot of these things make it more obvious, as opposed to asynchronous or non-blocking types of designs where you do something and then you go off and do something else. Then you come back and can process the result or the response, or something like that.

How he sees synchrony:

I'll just use as an example throughout this, because it's easy to talk about, the idea of a request and a response. With sync or synchronous, you would send a request, there'll be some processing of it. Optionally, you might have a response. Even if the response is simply just to acknowledge that it has completed. It doesn't always have to involve having a response, but there might be some blocking operation that happens until it is completed. A normal function call is normally like this. If it's sequential operation, and there's not really anything else to do at that time, that's perfectly fine. If there are other things that need to be done now, or it needs to be done on something else, that's a lost opportunity.

And asynchrony:

Async is more about the idea of initiating an operation, having some processing of it, and you're waiting then for a response. This could be across threads, cores, nodes, storage, all kinds of different things where there is this opportunity to do things while you're waiting for the next step, or that to complete or something like that. The idea of async is really, what do you do while waiting? It's a very big part of this. Just as an aside, when we talk about event driven, we're talking about actually the idea of on the processing side, you will see a request come in. We'll denote that as OnRequest. On the requesting side, when a response comes in, you would have OnResponse, or OnComplete, or something like that. We'll use these terms a couple times throughout this.

Montgomery's aim is to process asynchronously and take advantage of the dead time:

The key here is while something is processing or you're waiting, is to do something, and that's one of the takeaways I want you to think of. It's a lost opportunity. What can you do while waiting and make that more efficient? The short answer is, while waiting, do other work. Having the ability to actually do other stuff is great. The first thing is sending more requests, as we saw. The sequence here is, how do you distinguish between the requests? The relationship here is you have to correlate them. You have to be able to basically identify each individual request and individual response. That correlation gives rise to having things which are a little bit more interesting. The ordering of them starts to become very relevant. You need to figure out things like how to handle things that are not in order. You can reorder them. You're just really looking at the relationship between a request and a response and matching them up. It can be reordered in any way you want, to make things simple. It does provide an interesting question of, what happens if you get something that you can't make sense of. Is it invalid? Do you drop it? Do you ignore it? In this case, you've sent request 0, and you've got a response for 1. In this point, you're not sure exactly what the response for 1 is. That's handling the unexpected.
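Montgomery's correlation idea can be sketched in a few lines. This is a minimal illustration of my own, not code from the talk: each request carries an id, responses are matched by id in any order, and an unknown id is the "unexpected" case he mentions.

```typescript
// Sketch of request/response correlation (illustrative names, not
// Montgomery's code): every request gets an id; responses are matched
// by id, so they may arrive out of order; unexpected ids are surfaced
// so the caller can decide whether to drop, log, or fail.
type Pending<T> = { resolve: (v: T) => void; reject: (e: Error) => void };

class Correlator<T> {
  private nextId = 0;
  private pending = new Map<number, Pending<T>>();

  // Register a request; get back its correlation id plus a promise
  // that settles when the matching response arrives.
  send(): { id: number; reply: Promise<T> } {
    const id = this.nextId++;
    const reply = new Promise<T>((resolve, reject) => {
      this.pending.set(id, { resolve, reject });
    });
    return { id, reply };
  }

  // Handle a response: match it to its request, or report it as
  // unexpected ("something that you can't make sense of").
  onResponse(id: number, value: T): boolean {
    const p = this.pending.get(id);
    if (!p) return false; // unexpected response
    this.pending.delete(id);
    p.resolve(value);
    return true;
  }
}
```

Because matching is by id rather than by arrival order, a response for request 1 can be processed before the response for request 0 without confusion.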

(...) This is an async duty cycle. This looks like a lot of the duty cycles that I have written, and I've seen written and helped write, which is, you're basically sitting in a loop while you're running. You usually have some mechanism to terminate it. You usually poll inputs. By polling, I definitely mean going to see if there's anything to do, and if not, you simply return and go to the next step. You poll if there's input. You check timeouts. You process pending actions. The more complicated work is less in the polling of the inputs and handling them, it's more in the checking for timeouts, processing pending actions, those types of things. Those are a little bit more complex. Then at the end, you might idle waiting for something to do. Or you might just say, ok, I'm going to sleep for a millisecond, and you come right back. You do have a little bit of flexibility here in terms of idling, waiting for something to do.
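The duty cycle described above can be sketched as a loop. The names and the idle strategy below are my own illustration; the talk gives no code:

```typescript
// Sketch of an async duty cycle: poll inputs, check timeouts, process
// pending actions, then idle briefly only when there was nothing to do.
type DutyCycleHooks = {
  pollInputs: () => number;         // returns how much work was handled
  checkTimeouts: (now: number) => number;
  processPending: () => number;
  shouldTerminate: () => boolean;   // the mechanism to terminate the loop
};

async function runDutyCycle(hooks: DutyCycleHooks, idleMs = 1): Promise<number> {
  let cycles = 0;
  while (!hooks.shouldTerminate()) {
    cycles++;
    let workDone = 0;
    workDone += hooks.pollInputs();             // poll: see if there is anything to do
    workDone += hooks.checkTimeouts(Date.now());
    workDone += hooks.processPending();
    if (workDone === 0) {
      // Idle strategy: sleep for a moment and come right back.
      await new Promise((r) => setTimeout(r, idleMs));
    }
  }
  return cycles;
}
```

The interesting flexibility is exactly where Montgomery says it is: the idle branch can spin, yield, sleep, or park on a signal, depending on the latency/CPU trade-off you want.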

Honestly, these concepts seem complicated to apply in an ordinary work process, and more viable when building operating-system-level software. Montgomery's interviewer (Printezis) sees it exactly that way: You did talk about the duty cycle and how you would write it. In reality, how much a developer would actually write that, but instead use a framework that will do most of the work for them?

Montgomery's answer:

(...) Beyond that, I mean, patterns and antipatterns, I think, learning queuing theory, which may sound intimidating, but it's not. Most of it is fairly easy to absorb at a high enough level that you can see far enough to help systems. It is one of those things that I think pays for itself. Just like learning basic data structures, we should teach a little bit more about queuing theory and things behind it. Getting an intuition for how queues work and some of the theory behind them goes a huge way, when looking at real life systems. At least it has for me, but I do encourage people to look at that. Beyond that, technologies frameworks, I think by spending your time more looking at what is behind a framework. In other words, the concepts, you do much better than just looking at how to use a framework. That may be front and center, because that's what you want to do, but go deeper. Go deeper into, what is it built on? Why does it work this way? Why doesn't it work this other way? Asking those questions, I think you'll learn a tremendous amount. (...)
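As a one-line example of the kind of intuition queuing theory provides, Little's law (L = λ·W) relates arrival rate and time in system to the average number of items in flight. The function below is my own illustration, not something from the talk:

```typescript
// Little's law: average items in the system L = arrival rate λ × average
// time in system W. E.g. 200 requests/s each spending 50 ms in the system
// means ~10 requests in flight on average.
function littlesLawL(arrivalRatePerSec: number, avgTimeInSystemSec: number): number {
  return arrivalRatePerSec * avgTimeInSystemSec;
}
```

This is the sort of back-of-the-envelope check that, as Montgomery says, "goes a huge way when looking at real life systems": if measured concurrency is far from λ·W, something in your mental model is wrong.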

The conversation goes on and drifts into other related matters. Worth reading and rereading. I will have to come back to it more than once.

I see a way of approaching processes far removed from how I have usually worked, but I must admit that over the last five or six years conceptual changes have been overflowing, and I would say we are now in a fifth or sixth generation, far from what we called the fourth generation twenty or thirty years ago. Time will show what has proved durable and what has gone down a dead end. I am willing to listen.


Nightmares in the cloud

Forrest Brazeal, currently a Google Cloud employee (An AWS Hero turned Google Cloud employee, I explore the technical and philosophical differences between the two platforms. My biases are obvious, but opinions are my own), pointed out in July that any cloud developer's worst nightmare is a recursive call in their tests that escalates the account's bill from a few dollars/euros into the "thousands" ($50,000, for example). And a recursive call that generates thousands of processed invocations can happen in any test:

AWS calls it the recursive runaway problem. I call it the Hall of Infinite Functions - imagine a roomful of mirrors reflecting an endless row of Lambda invocations. It’s pretty much the only cloud billing scenario that gives me nightmares as a developer, for two reasons:

  • It can happen so fast. It’s the flash flood of cloud disasters. This is not like forgetting about a GPU instance and incurring a few dollars per hour in linearly increasing cost. You can go to bed with a $5 monthly bill and wake up with a $50,000 bill - all before your budget alerts have a chance to fire.

  • There’s no good way to protect against it. None of the cloud providers has built mechanisms to fully insulate developers from this risk yet.
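Since, as Brazeal notes, the providers offer no full protection, about the best a developer can do is a hand-rolled guard. The sketch below is a hypothetical illustration of one partial mitigation (a depth counter carried in the event payload); the event shape and the `invokeSelf` callback are my own assumptions, not a real AWS API:

```typescript
// Hypothetical guard against recursive runaway: carry a depth counter in
// the event payload and refuse to recurse past a cap. Illustrative only;
// this is not an AWS mechanism.
type Event = { depth?: number; payload: unknown };

function handle(
  event: Event,
  invokeSelf: (next: Event) => void, // stands in for a re-invocation call
  maxDepth = 10
): "processed" | "aborted" {
  const depth = event.depth ?? 0;
  if (depth >= maxDepth) {
    // Bail out instead of fanning out forever; alert here in a real system.
    return "aborted";
  }
  // ...do the actual work, then re-invoke with an incremented counter...
  invokeSelf({ depth: depth + 1, payload: event.payload });
  return "processed";
}
```

The cap turns an unbounded "hall of infinite functions" into a bounded chain, which is cheap insurance in any test that can trigger self-invocation.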

Brazeal points to an incident described in detail by its own victims (We Burnt $72K testing Firebase + Cloud Run and almost went Bankrupt) that gives an idea of the problem. In this case the bill went from a potential $7 to $72,000...

Sudeep Chauhan, the protagonist of that incident, later wrote, after putting his house in order, a list of recommendations for working with a cloud service provider.

Note: Renato Losio, at InfoQ, mentions and extends Brazeal's article, recalling another piece by Brazeal devoted to the AWS free tier.

Saturday, August 6, 2022

You probably don't need microservices

Matthew Spence, in ITNEXT, swimming against the enormous wave of microservices hype, develops a consistent set of arguments putting the importance and necessity of microservices into perspective (You don't need microservices). I will only highlight the argument about the simplicity of microservices and its derived advantages:

"Simpler, Easier to Understand Code"

This benefit is at best disingenuous, at worst, a bald-faced lie.

Each service is simpler and easier to understand. Sure. The system as a whole is far more complex and harder to understand. You haven’t removed the complexity; you’ve increased it and then transplanted it somewhere else.

(...) Although microservices enforce modularization, there is no guarantee it is good modularization. Microservices can easily become a tightly coupled “distributed monolith” if the design isn’t fully considered.

(...) The choice between monolith and microservices is often presented as two mutually exclusive modes of thought. Old school vs. new school. Right or wrong. One or the other.

The truth is they are both valid approaches with different trade-offs. The correct choice is highly context-specific and must include a broad range of considerations.

The choice itself is a false dichotomy and, in certain circumstances, should be made on a feature-by-feature basis rather than a single approach for an entire organization’s engineering team.

Should you consider microservices?

As is often the case, it depends. You might genuinely benefit from a microservices architecture.

There are certainly situations where they can pay their dues, but if you are a small to medium-sized team or an early-stage project:

No, you probably don’t need microservices.


Sunday, July 31, 2022

Liam Allan talks about Node on IBM i

Liam Allan, like Scott Klement, has given a formidable boost to the IBM i (AKA AS/400, iSeries), exploring, popularizing, and exploiting the successive technological changes the platform has seen over the years. Liam's comments on Node come in the interview Charles Guarino did with him at TechChannel. Liam's involvement, recent as it is, has meant radical changes in how to approach the IBM i, starting with its program editor. It must be said that the environment and practices around the IBM i have historically been rather conservative, fitting for a set of machines that used to be the processing core of the companies that ran them. Guarino says on this point: I still think there’s still a lot of newbies—even the most seasoned RPG developers are still newbies—and open-source makes them nervous, perhaps because it’s a whole different paradigm, a whole different vernacular. Everything about it is different, yet obviously there are so many similarities, but the terminology is very different. Klement and those who followed him, and now Allan, have represented a renewal and modernization that is more than convenient: necessary.

For my part, I am mulling over its use with Plex. Klement has already strengthened its integration through his proposals for integrating the Java and C/C++ languages via ILE.

What he said about Node:

Charlie: (...) So Liam, I do have a lot of things that I want to talk to you about, but when I think of you lately what comes to my mind is Node. I mean I kind of associate you with just Node and how you really are really running with that technology, especially on IBM i, but I think there are a lot of people who don’t quite understand where that fits in, what Node actually is and how it fits on your platform. So what can you say about that in general?

Liam: Absolutely. So I mean, there’s a few points to be made. I guess I’ll start with the fact that you know, it is 80% of my working life is writing typescript and Javascript. So I spend most of my days in it now, which is great. A few years ago, it was more like 50% and each year it’s growing more and more. So I usually focus on how it can integrate with IBM i. So you know having Node.js code, whether it’s typescript or Javascript talking to IBM i via the database—so, calling programs, fetching data, updating data; you know, the minimal standard kind of driver type stuff that you do, CRUD, things like that. What I especially like about Node on IBM i is that it is made for high input/outputs. It’s great at handling large volumes of data and most people that are using IBM i tend to have tons of data, right? Db2 for i has been around for centuries at this point; it’s older than I am, and I can make that joke. No one else can make that joke but I can make it and you know it’s been around for the longest time. And so people have got all of this data and in my opinion Node.js is just a great way to express that data—you know, via an API. I think it’s fast. It’s got high throughput and yeah, it’s asynchronous in its standard. It’s easy to use, it’s easy to deploy, it’s easy to write code for especially. One of the reasons I like it is the fact that I can have something working within 20 minutes. It’s a fantastic piece of technology and it’s been out for a while. I mean it’s been out for like 10 years, 10 years plus at this point. It’s just fun to use. I really enjoy it and I encourage other people to use it too.
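The "working within 20 minutes" claim is plausible: a data-serving Node API needs very little code. The sketch below is my own minimal illustration; the in-memory rows stand in for Db2 for i, which in a real setup would be reached through a database driver:

```typescript
// Minimal sketch of what Allan describes: expressing data via a small
// Node HTTP API. The hard-coded rows are a stand-in for Db2 for i data.
import { createServer, type Server } from "node:http";

const rows = [
  { id: 1, customer: "ACME" },
  { id: 2, customer: "Globex" },
];

export function makeApi(): Server {
  return createServer((req, res) => {
    if (req.method === "GET" && req.url === "/customers") {
      res.writeHead(200, { "content-type": "application/json" });
      res.end(JSON.stringify(rows)); // serve the data as JSON
    } else {
      res.writeHead(404);
      res.end();
    }
  });
}
```

Swapping the array for a query through a Db2 driver is the only change needed to turn this into the integration pattern described in the interview.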

Sunday, July 3, 2022

The "legacy" concept and the "microservice" carrot

What follows is an "ancient" article, from April 25 of this year. I will copy it, and comment where necessary, because it remains strictly current, both in the IBM i universe and in general:

Beware The Hype Of Modern Tech

Many IBM i shops are under the gun to modernize their applications as part of a digital transformation initiative. If the app is more than 10 or 15 years old and doesn’t use the latest technology and techniques, it’s considered a legacy system that must be torn down and rebuilt according to current code. But there are substantial risks associated with these efforts – not the least of which that the modern method is essentially incompatible with the IBM i architecture as it currently exists. IBM i shops should be careful when evaluating these new directions.

Amy Anderson, a modernization consultant working in IBM’s Rochester, Minnesota, lab, says she was joking last year when she said “every executive says they want to do containerized microservices in the cloud.” If Anderson is thinking about a future in comedy, she might want to rethink her plans, because what she says isn’t a joke; it’s the truth.

Many, if not most, tech executives these days are fully behind the drive to run their systems as containerized microservices in the cloud. They have been told by the analyst firms and the mainstream tech press and the cloud giants that the future of business IT is breaking up monolithic applications into lots of different pieces that communicate through microservices, probably REST. All these little apps will live in containers, likely managed by Kubernetes, enabling them to scale up and down seamlessly on the cloud, likely AWS or Microsoft Azure.

The “containerized microservices in the cloud” mantra has been repeated so often, many just accept it as the gospel truth. Of course that is the future of business tech! they say. How else could we possibly run all these applications? It’s accepted as an article of faith that this is the right approach. Whether a company is running homegrown software or a packaged app, they’re adamant that the old ways must be left behind, and to embrace the glorious future that is containerized microservices running in the cloud.

The reality is that the supposedly glorious future is today a pipe dream, at least when it comes to IBM i. Let’s start with Kubernetes, the container orchestration system open sourced by Google in 2014, which is a critical component of running in the “cloud native” way. (...)

While Kubernetes solves one problem – eliminating the complexity inherent in deploying and scaling all the different components that go into a given application – it introduces a lot more complexity to the user. Running a Kubernetes cluster is hard. If you’ve talked to anybody who has tried to do it themselves, you’ll quickly find out that it’s extremely difficult. It requires a whole new set of skills that most IT professionals do not have. The cloud giants, of course, have these folks in droves, but they’re practically non-existent everywhere else.

ISVs are eager to adopt Kubernetes as the new de facto operating system for one very good reason: because it helps them run their applications on the cloud. (...) 

For greenfield development, the cloud can make a lot of sense. Customers can get up and running very quickly on a cloud-based business application, and leave all the muss and fuss of managing hardware to the cloud provider. But there are downsides too, such as no ability to customize the application. For the vendors, the fact that customers cannot customize goes hand in hand with their inability to fall behind on releases. (Surely the vendor passes whatever benefit it receives through collective avoidance of technical debt back to you, dear customer.)

The Kubernetes route makes less sense for established products with an established installed base. It takes quite a bit of work to adapt an existing application to run inside a Docker container and have it managed in a Kubernetes pod. It can be done, but it’s a heavy lift. But when it comes to critical transactional systems, it likely becomes more of a full-blown re-implementation than a simple upgrade. There are no free lunches in IT.

When it comes to IBM i, lots of existing customers who are running their ERP systems on-prem are not ready to move their production business applications to the cloud. Notice what happened when Infor stopped rolling out enhancements for the M3 applications for IBM i customers. Infor wanted these folks to adopt M3 running on X86 servers running in AWS cloud. Many of them balked at this forced re-implementation, and now Infor is rolling out a new offering called CM3 that recognizes that customers want to keep their data on prem in their Db2 for i server.

Other ERP vendors have taken a similar approach to the cloud. SAP wants its Business Suite customers to move to S/4 HANA, which is a containerized, microservice-based ERP running in the cloud. The German ERP giant has committed to supporting on-prem Business Suite customers until 2027, and through 2030 with an extended maintenance agreement. After that, the customers must be on S/4 HANA, which at this point doesn’t run on IBM i.

Will the 1,500-plus customers who have benefited from running SAP on IBM i for the past 30 years be willing to give up their entire legacy and begin anew in the S/4 HANA cloud? It sounds like a risky proposition, especially given the fact that much of the functionality that currently exists in Business Suite has yet to be re-constructed in S/4 HANA. Is this an acceptable risk?

Kubernetes is just part of the problem, but it’s a big one, because at this point IBM i doesn’t support Kubernetes. It’s not even clear what Kubernetes running on IBM i would look like, considering all the virtualization features that already exist in the IBM i and Power platform. (What would become of LPARs, subsystems, and iASPs? How would any of that work?) In any event, the executives in charge of IBM i have told IT Jungle there is no demand for Kubernetes among IBM i customers. But that could change.

Particularly interesting is the comment on the plans of Jack Henry & Associates:

Jack Henry & Associates officially unleashed its long-term roadmap earlier this year, but it had been working on the plan for years. The company has been a stalwart of the midrange platform for decades, reliably processing transactions for more than a thousand banks and credit unions running on its RPG-based core banking systems. It is also one of the biggest private cloud providers in the Power Systems arena, as it runs the Power machinery powering (pun intended) hundreds of customer applications.

The future roadmap for Jack Henry is (you guessed it) containerized microservices in the cloud. The company explains that it doesn’t make sense to develop and maintain about 100 duplicate business functions across four separate products, and so it will slowly replace those redundant components that today make up its monolithic packages like Silverlake with smaller, bite-sized components that run in the cloud-native fashion on Kubernetes and connect and communicate via microservices.

It’s not a bad plan, if you’ve been listening to the IT analysts and the press for the past five years. Jack Henry is doing exactly what they’ve been espousing as the modern method. But how does it mesh with its current legacy? The reality is that none of Jack Henry’s future software will be able to run on IBM i. Db2 for i is not even one of the long-term options for a database; instead it selected PostgreSQL, SQL Server, and MongoDB (depending on which cloud the customer is running in).

Jack Henry executives acknowledge that there’s not much overlap between its roadmap and the IBM i roadmap at this point in time. But they say that they’re moving slowly and won’t have all of the 100 or so business functions fully converted into containerized microservices for 15 years – and then it will likely take another 15 years to get everybody moved over. So it’s not a pressing issue at the moment.

Maybe Kubernetes will run on IBM i by then? Maybe there will be something new and different that eliminates the technological mismatch? Who knows?

The IBM i system is a known entity, with known strengths and weaknesses. Containerized microservices in the cloud is an unknown entity, and its strengths and weaknesses are still being determined. While containerized microservices running in the cloud may ultimately win out as the superior platform for business IT, that hasn’t been decided yet.

For the past 30 years, the mainstream IT world has leapt from one shiny object to the next, convinced that it will be The Next Big Thing. (TPM, the founder of this publication and its co-editor with me, has a whole different life as a journalist and analyst chasing this, called The Next Platform, not surprisingly.) Over the same period, the IBM i platform has continued more or less on the same path, with the same core architecture, running the same types of applications in the same reliable, secure manner.

The more hype is lavished upon containerized microservices in the cloud, the more it looks like just the latest shiny object, which will inevitably be replaced by the next shiny object. Meanwhile, the IBM i server will just keep ticking.

There have undoubtedly been spectacular changes within a few years, the last four or five, and very powerful tools and resources are now available. But for a company or institution already up and running, a change has to be weighed carefully, avoiding the risk of stepping into a void. A change that requires new methodologies, new languages, new platforms, new communications? Development on the latest of the latest, without the backing of robust resources proven over several years?

Sunday, June 5, 2022

China, Gitee, GitHub

In MIT's Technology Review, on May 30, Zeyi Yang writes:

Earlier this month, thousands of software developers in China woke up to find that their open-source code hosted on Gitee, a state-backed Chinese competitor to the international code repository platform GitHub, had been locked and hidden from public view.

Gitee released a statement later that day explaining that the locked code was being manually reviewed, as all open-source code would need to be before being published from then on. The company “didn’t have a choice,” it wrote. Gitee didn’t respond to MIT Technology Review, but it is widely assumed that the Chinese government had imposed yet another bit of heavy-handed censorship.

For the open-source software community in China, which celebrates transparency and global collaboration, the move has come as a shock. Code was supposed to be apolitical. Ultimately, these developers fear it could discourage people from contributing to open-source projects, and China’s software industry will suffer as a result.

A fresh display, first of all, of the dependence on big players that exists in the open-source world. But going further, an indication of the limited freedom of choice that exists in the world of technology and in the ideas and cultures carried through it. The problem the Chinese developers discovered with their own "official" repository could potentially repeat itself in the Western world, under the seal of the big tech companies that directly or indirectly control the open, "public" repositories and the cloud infrastructures and services. Neither Google, nor Microsoft, nor Amazon has shown neutrality over its history, and all have starred in decades of lawsuits over unfair practices. Entrusting your code base, or your applications, to this framework is probably not the wisest move.

Sunday, April 24, 2022

Quantum computing or hype?

Sankar Das Sarma, physics researcher and director of the CMTC (Condensed Matter Theory Center) at the University of Maryland, publishes in Technology Review an article cooling down expectations about quantum computing:

A decade and more ago, I was often asked when I thought a real quantum computer would be built. (It is interesting that I no longer face this question as quantum-computing hype has apparently convinced people that these systems already exist or are just around the corner).  My unequivocal answer was always that I do not know. Predicting the future of technology is impossible—it happens when it happens. One might try to draw an analogy with the past. It took the aviation industry more than 60 years to go from the Wright brothers to jumbo jets carrying hundreds of passengers thousands of miles. The immediate question is where quantum computing development, as it stands today, should be placed on that timeline. Is it with the Wright brothers in 1903? The first jet planes around 1940? Or maybe we’re still way back in the early 16th century, with Leonardo da Vinci’s flying machine? I do not know. Neither does anybody else.

On Sarma's work, a list of the papers he has taken part in.

On the CMTC and its work, mentioning its collaboration with Microsoft.

On the state of condensed matter research (condensed matter physics), on Wikipedia.

The photo is taken from Microsoft's blog on quantum computing.

Sunday, April 17, 2022

Mary Poppendieck looking back in perspective

At QCon Plus, a virtual conference by InfoQ, a conversation with Mary Poppendieck is presented (Tom, her husband and partner in their long consulting career, also takes part in the talk). Mary lays out a view of the changes in software construction since the start of the new century: twenty years of radical changes that upended the paradigms we had relied on for decades. It strikes me as a perspective of particular interest, considering her own work at 3M that began with Six Sigma, and her own understanding of agile concepts. Mary talks about bridges; she is one herself, accompanying the established change.


In Technology Review:

On March 14, Shah and his University of Washington colleague Emily M. Bender, who studies computational linguistics and ethical issues in natural-language processing, published a paper that criticizes what they see as a rush to embrace language models for tasks they are not designed to address. In particular, they fear that using language models for search could lead to more misinformation and more polarized debate. 

“The Star Trek fantasy—where you have this all-knowing computer that you can ask questions and it just gives you the answer—is not what we can provide and not what we need,” says Bender, a coauthor on the paper that led Timnit Gebru to be forced out of Google, which had highlighted the dangers of large language models.

Sunday, September 5, 2021

The usefulness of The Java Version Almanac

I usually work in Java 8 (alongside other languages). Version 8 is mandatory as the intersection of the different products combined in our development (Plex, WebSphere, C++, RPG IV, OS/400). Frankly, we could move to Java 9 without real problems, but it would mean going one step beyond what Plex's support acknowledges. We will probably end up doing it within the year, or shortly after, in any case. This looks like a big lag compared with the current version (Java 15/16) and the versions under development (Java 17/18), considering that Java 8 is for now the second-to-last stable (LTS) version, followed by Java 11, with Java 17 on the way. Oracle's scheme of developing Java in versions with small gradual changes favors moving up a version by analyzing the impact of the next step, treating the major releases as the main targets. On the Java development plan, Oracle says:

For product releases after Java SE 8, Oracle will designate a release, every three years, as a Long-Term-Support (LTS) release. Java SE 11 is an LTS release. For the purposes of Oracle Premier Support, non-LTS releases are considered a cumulative set of implementation enhancements of the most recent LTS release. Once a new feature release is made available, any previous non-LTS release will be considered superseded. For example, Java SE 9 was a non-LTS release and immediately superseded by Java SE 10 (also non-LTS), Java SE 10 in turn is immediately superseded by Java SE 11. Java SE 11 however is an LTS release, and therefore Oracle Customers will receive Oracle Premier Support and periodic update releases, even though Java SE 12 was released. (in It's time to move to Java 17, Johan Janssen, August 27, 2021, Java Magazine)

Marc R. Hoffmann and Cay S. Horstmann have built a chart that lays out the inventory of every existing Java version, its support status, and an analysis of the changes from version to version, down to the class and method level. This chart is an indispensable aid when planning a considered upgrade to higher versions. It is a great piece of work, one that for me at least has become a mandatory reference. I wish other related products had a similar chart (I am thinking of Apache POI, where I often have to rely on inference and case-by-case analysis to decide what to use).
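A footnote to this kind of upgrade planning: code that must run on several Java levels sometimes has to detect which one it is on, and before Java 9 the `java.version` property used the old `1.x` scheme. A minimal sketch — the class and method names are mine, purely for illustration:

```java
// Illustrative helper: derive the major Java version from the
// "java.version" system property. Before Java 9 the property looked
// like "1.8.0_291"; from Java 9 on it looks like "11.0.2" or "17".
public class JavaVersion {

    public static int major(String version) {
        String[] parts = version.split("\\.");
        // Strip a possible pre-release suffix such as "9-ea".
        int first = Integer.parseInt(parts[0].split("-")[0]);
        // Old scheme: "1.8.0_x" -> the major version is the second field.
        return first == 1 ? Integer.parseInt(parts[1]) : first;
    }

    public static void main(String[] args) {
        System.out.println("Running on Java " + major(System.getProperty("java.version")));
    }
}
```

With a guard like this, a library can keep a Java 8 code path alive while opting into newer APIs where they exist — exactly the kind of decision the Almanac's class-level diffs help to make.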

The Java Version Almanac


Tuesday, August 31, 2021

Moving an application to the cloud

 A few days ago I read with interest an article by Jonathon Henderson, of Scott Logic, describing in detail a project to convert an application — characterized, without going into detail, as "monolithic" — into a set of back-end and front-end services and/or applications to replace it, built on AWS. Henderson recounts his process of discovering the mechanisms and resources he needed along the way, what he came to understand, and the difficulties he did or did not manage to resolve: a work log that is more than useful. This is his description of the project:

I had the pleasure of picking up Scott Logic’s StockFlux project with the same fantastic team as my previous project, which consisted of 3, very talented frontend developers and myself as the lone backend developer.

The frontend team had the task of transforming the existing StockFlux application to use the bleeding edge of the OpenFin platform, which incorporated features defined by the FDC3 specification by FINOS.

This involved splitting StockFlux into several applications, to showcase OpenFin’s inter-app functionality such as snapping and docking, as well as using the OpenFin FDC3 implementations of intents, context data and channels for inter-app communication. It also involved using the FDC3 App Directory specification to promote discovery of our apps using a remotely hosted service, which is where I come in.

Henderson essentially describes his back-end work, barely touching on the front-end side, but it is nonetheless a good, stimulating account of learning and of weighing possible paths.

This is his list of goals for his own part of the work:

  • Building an FDC3 compliant App Directory to host our apps and provide application discovery.
  • Building a Securities API, with a full-text search, using a 3rd party data provider.
  • Providing Open-High-Low-Close (OHLC) data for a given security, to power the StockFlux Chart application.
  • Creating a Stock News API.
  • Building and managing our infrastructure on AWS.
  • Automating our AWS infrastructure using CloudFormation.
  • Creating a CI/CD pipeline to test, build and deploy changes.

With the project complete, and with a clear idea of the strengths and weaknesses of his development and of AWS's support, Henderson is satisfied with what was done. He does, however, leave one observation that should be kept very much in mind:

One thing I’m still relatively unsure about is the idea of vendor lock-in. By basing an application around their specific services, we effectively lock ourselves into AWS, which makes our applications less portable. While building the StockFlux backend services I made an effort to abstract things in such a way that would allow us to add support for services offered by other providers, to reduce our dependency on AWS. On the other hand, locking into one vendor doesn’t have to be a bad thing - by committing to use AWS (or another provider), we can explore and utilise the vast array of services that are on offer, rather than restrict ourselves to using as little as possible to promote portability.
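The abstraction effort Henderson mentions can be pictured as a small "port" interface with swappable adapters, so that the AWS-specific code stays behind one seam. The sketch below is mine, not from the StockFlux codebase; `ObjectStore` and `InMemoryStore` are illustrative names, with an in-memory adapter standing in for what would be, say, an S3-backed one:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// The "port": the only storage API the application sees.
interface ObjectStore {
    void put(String key, String value);
    Optional<String> get(String key);
}

// One adapter. A real deployment would add an S3-backed (or other
// vendor's) implementation of the same interface, leaving the rest
// of the application untouched.
class InMemoryStore implements ObjectStore {
    private final Map<String, String> data = new HashMap<>();

    public void put(String key, String value) {
        data.put(key, value);
    }

    public Optional<String> get(String key) {
        return Optional.ofNullable(data.get(key));
    }
}
```

The trade-off is the one Henderson names: the narrower the port, the more portable the application, but also the less of the vendor's richer feature set it can exploit.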

That is, the application remains almost entirely in the hands of its provider, its pricing, and the evolution of its plans. Without a doubt, a particularity of cloud services that must be weighed carefully — especially when what is being moved is not a small, volatile application, but something large and important.

Sunday, August 29, 2021

I second this...

 how to be the worst...

Seeing the forest

 When I look around — whether in the publications that reach me, or in what I see and hear close by — something sounds off, and it leaves me uneasy. I see the trees, but I cannot see the forest anywhere.

It is usual to talk about CI/CD, about n variants of Agile, about microservices, about n variants of JavaScript, about the cloud, about n languages — without considering the application context — about sockets, about web components, and so on. We could call these things "daily working material", methodological in some cases and instrumental in others. But I see little focus beyond these elements and means of development. This is not even about architecture. The development process is focused on automating itself and delivering to production with daily speed and continuity; total independence between components is expected; each component is expected to be buildable encapsulated, with no dependency on any other. But I do not see the forest; I do not see the overall plan. Surely it is not that no plan exists, but that everyone thinks and works in the details. StackOverflow and many similar sites do not offer that kind of vision, and my question is whether all those developers who post or dig into problems in the use of their daily toolset have that point of view. I wonder whether the focus on microservices and similar approaches, the work in small groups, the use of teams at remote and opaque sites, do not imply that the forest is seen only by a tiny, tiny group of... architects? project leads? product owners? — and that the sense of the whole is gradually lost on the way from the "agile" meetings down to the last link in the chain, where the code is written.

Sunday, August 08, 2021

Microservices and common sense

 Apropos of microservices, discussed on other recent occasions, a couple of observations by Mika Yeap on Medium. Here the monolith is invoked more as an antique than as an application that integrates everything into a single executable. This framing makes more sense, and points in the direction in which microservices act — or should, where appropriate.

One reason Yeap acknowledges is scale: distributed architecture and microservices to cope with the growth of elements and operations in the system in question. But when he talks about scale, he means global scale — tens or hundreds of millions of participating nodes:

There’s a multitude of reasons you’d need to use distributed architecture when you get big enough. The catch is most of us will never get big enough. I mean, how close are you to Amazon’s numbers? Or how about Netflix? That’s what I thought.

Yeap reminds us that working with microservices is not simple, and that it first requires a robust, disciplined organization, with the knowledge and the resources. Even so, he recommends moving to a distributed architecture only when it is indispensable, and keeping "the monolith" as long as it remains viable:

It’s best to challenge this beast only if you’re prepared. Skills. Talent. Organization. You need lots of things in the right flavor to do this well. If you think you can show some engineers a couple keynotes then send them off to split all the things, you’re in for a nasty surprise. My team and I weren’t prepared, so I would know. I mean, what does a startup without product-market fit or money have to offer against microservices? Just a month’s supply of ramen to feed five people. In other words, not much but goodwill and some elbow grease. Which wasn’t enough.

So as far as I’ve learned, you should only be building microservices if you’ve got a gigantic user base, or the resources to support the specialized development. And even in the first case, you don’t necessarily have to go distributed immediately. In fact, even AirBnB was powered by a monolith until just recently. A monolith written in Ruby on Rails, no less. And it seems to me that their product worked just fine.

Yet many people truly believe distributed architecture is a superior alternative to a monolithic one. People actually think they’re two equal solutions to the same problem. Which is absurd, since they’re actually solutions to different problems.

...And he reminds us that distributed architecture is not simple, at all. We could say that every point that deserves attention in its implementation requires complex solutions. As an example, he cites distributed transactions:

Microservices promise to solve all sorts of problems, depending on who you ask. When in reality, they only exacerbate them when you don’t know what you’re doing. How? Well, just two words can send chills down any microservice veteran’s spine: Distributed transactions. How’s that for a nightmare? I promise you, once you have to configure orchestration-based sagas just to update one property on a single object, you’ll be begging for a good old boring monolith.
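To make the pain Yeap describes concrete: an orchestration-based saga replaces a single local transaction with a sequence of steps, each paired with a compensating action that undoes it if a later step fails. A minimal, purely illustrative sketch — no framework, and all names are mine:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// A toy saga orchestrator: each step carries its own "undo".
class Saga {
    private final List<Runnable> steps = new ArrayList<>();
    private final List<Runnable> compensations = new ArrayList<>();

    Saga step(Runnable action, Runnable compensation) {
        steps.add(action);
        compensations.add(compensation);
        return this;
    }

    // Runs the steps in order; on failure, runs the compensations of
    // the steps already completed, in reverse order, and reports failure.
    boolean run() {
        Deque<Runnable> done = new ArrayDeque<>();
        for (int i = 0; i < steps.size(); i++) {
            try {
                steps.get(i).run();
                done.push(compensations.get(i));
            } catch (RuntimeException e) {
                while (!done.isEmpty()) {
                    done.pop().run();
                }
                return false;
            }
        }
        return true;
    }
}
```

Even this toy version hints at the real problem: every step now needs a well-defined inverse, and the "rollback" is just more application code that can itself fail — which is what a local database transaction gave you for free in the monolith.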

In short, microservices sound great, but before embarking on them, think it through and run the numbers.


Saturday, August 07, 2021

Some generalizable recommendations

 There is a published truth, and there is real life. The published truth always rides the crest of the wave, turning the latest invention presented into the non plus ultra. And this is especially, fundamentally true in the technology universe. It is asserted that a given technology must be adopted now, because it has effect x and solves problem z "like never before". It works like the come-ons from every Internet and phone provider: if we listened to them all, we would be replacing our service contract once or twice a week. That is why I appreciate the reflections of a veteran application developer. Sometimes the emperor has no clothes.

Beau Beauchamp's recommendations are aimed at those who develop in the JavaScript world, but, changing a few names, they can be extended to other domains.

First, his introduction:

I’ve been a web application developer for over 20 years. I’ve seen all kinds of UI libraries come and I’ve seen them go. I’ve been in the industry long enough to know that just because a framework is new and cool and “OMG! Everyone cool is using it!”, doesn’t mean you should use it too.

Speaking from a purely enterprise perspective, which is probably the same perspective you should be looking at if you plan on building the next great SaaS app, you should be thinking long and hard about incorporating any kind of library or UI framework into your app.

Right now you’re small, nimble, you code things in record time and push to production with a minimal number of steps. You’re probably not writing tests, or very few, and all of your builds are done automagically locally.

But if you get successful, really successful, like with hundreds of developers working on your app kind of successful, all of that is going to change. You are going to become an enterprise; and in the enterprise, we do things very differently in the “real world” than you do in the “startup world”.

 His recommendations are framed as a dialogue against what a small "startup" might practice. Swap in another addressee; there are plenty of candidates.

The first recommendation:

Any change is expensive

Every step you add into the deployment process costs you time and money in ways you cannot see when you’re a small startup.

If you unwisely choose the technology stack for your app, especially the UI, it’s going to take a tremendous amount of resources to rip it out and update it

 Avoid fashionable technologies that update too quickly, or that come and go

UI is notorious for flavor-of-the-day UI libraries and frameworks. Handlebars, React, Vue, AngularJS, Angular 2, 4, 5, 6, 7, and now fucking 8. All in the span of what—6 years? I’m sure there are bunch of others I’ve not even heard of yet.

“Angular is not a fad, Beau.” Sorry, it is. In the enterprise you cannot be changing out libraries and updating shit this fast. You won’t have the budget or the resources (“resources” is enterprise-speak for the people who will now hate you).

Each time some library or framework “updates” or changes versions, it costs you money.

Cost versus real benefits

Think about which UI framework you’re thinking of using in your project or next project. What is the framework really doing for you? Is it saving you time or just giving your application some cool “whiz-bang” features?

“It gives us model and DOM binding! A huge time saver.” Fine, you can do that with a jQuery plugin that you don’t have to compile. Next?

“It gives us an MVC on the front end with model-bound templates for a SPA (single-page application).” Fine, but this is not saving you time; in fact, it’s costing you more time in building pages, compiling, and SPA’s are terrible if you need real SEO and accurate analytics.

At the end of the day, by the time you are done installing all of the crap that you need to actually run the super-really-cool fad-tech UI framework, it’s not easier at all, and all you have done is saddled your enterprise team with hours and hours of frustrating local setup and configuration and IDE integration. It’s nonsense.

Yea, we do it; but it’s NOT easier. I’ve seen enterprise devs saddled with setting up the new wiz-bang technology spend literally weeks working out the bugs in their development environments because someone decided to install a new “framework”.

And at the end of the day, the UI framework, much more of a pain in the ass, much more difficult to debug, than it was helpful in achieving a good clean easy-to-mange front-end.

Say no to UI compilers. I keep this point because it is on his list, and it matters in this case. Here the generalization lies in which line of development he recommends:

I once interviewed with a really big video game company. You know what their development policy was changing to? Plain ECMA (JavaScript) and CSS. No compilers. No CoffeeScript. No TypeScript. They even ripped out jQuery. I was impressed.

Now why were they doing this? Because they needed to save time and money. Over the years developers had come into their teams, added in all kinds of wiz-bang fad tech into their UI, and then left. Now their teams were saddled with all of the UI garbage that was costing the company millions in developer hours each time Angular or whatever SPA library had an update or an upgrade. We’re talking about touching literally millions of lines of code and tens of thousands of files across the enterprise.

I totally understand why they junked all of it. Now, I personally wouldn’t junk jQuery or Bootstrap, but those don’t need to be compiled to run; and jQuery isn’t updating every 5-minutes either. You can usually update without causing huge regressions, if any at all.

To Beau's conclusion we could add other culprits...

The reason these fad-tech UI libraries even exist is because of bored engineers working at big tech, like Google, Facebook, Twitter, etc. But at the end of the day nothing works better and is easier to debug and maintain than just pain JS and native CSS.

Monoliths? What monoliths?

 If we go by the definitions of its own advocates, I believe the case for microservices rests on an abstraction that has little to do with the real world of ordinary applications. Taken from A Deep Dive Into Microservices vs. Monolith Architecture, to cite one case that claims to rely on the standards (although it does not mention Martin Fowler):

Monolith simply refers to one block containing all in one piece. According to Wikipedia, a monolithic application describes a single-tiered software application in which the user interface and data access code are combined into a single program from a single platform.

And following its Wikipedia reference:

In software engineering, a monolithic application describes a single-tiered software application in which the user interface and data access code are combined into a single program from a single platform.

A monolithic application is self-contained and independent from other computing applications. The design philosophy is that the application is responsible not just for a particular task, but can perform every step needed to complete a particular function.[1] Today, some personal finance applications are monolithic in the sense that they help the user carry out a complete task, end to end, and are private data silos rather than parts of a larger system of applications that work together. Some word processors are monolithic applications.[2] These applications are sometimes associated with mainframe computers.

In software engineering, a monolithic application describes a software application that is designed without modularity.[citation needed] Modularity is desirable, in general, as it supports reuse of parts of the application logic and also facilitates maintenance by allowing repair or replacement of parts of the application without requiring wholesale replacement.

Modularity is achieved to various extents by different modularization approaches. Code-based modularity allows developers to reuse and repair parts of the application, but development tools are required to perform these maintenance functions (e.g. the application may need to be recompiled). Object-based modularity provides the application as a collection of separate executable files that may be independently maintained and replaced without redeploying the entire application (e.g. Microsoft "dll" files; Sun/UNIX "shared object" files).[citation needed] Some object messaging capabilities allow object-based applications to be distributed across multiple computers (e.g. Microsoft COM+). Service-oriented architectures use specific communication standards/protocols to communicate between modules.

In its original use, the term "monolithic" described enormous mainframe applications with no usable modularity.[citation needed] This, in combination with the rapid increase in computational power and therefore rapid increase in the complexity of the problems which could be tackled by software, resulted in unmaintainable systems and the "software crisis".

That is, if we are going to call a "monolith" an application that, in a single executable, processes the data, defines all the logic, and contains its own interface to the outside world, we probably have to go back to 1970 or a little later (the term "monolithic" described enormous mainframe applications with no usable modularity). That paradigm has surely not existed for forty years, unless we include today's mobile "apps" in it. As the Wikipedia article itself mentions, modularity — the ability to maintain an application separated by service responsibilities — has existed at least since the 1980s, and earlier still, ever since high-level languages such as COBOL have existed. Only a bad design, a terrible design, could put every aspect of an application into a single component: we can say that other, different models have practiced modularity and separation of responsibilities since long before anyone spoke of microservices.

Setting microservices against monoliths is biased and partial. Other architectures and technologies have been developed and refined to combat the undeniable harms of a "monolith". If we go to the article's talk page on Wikipedia (always go to the history and the discussion), the article's critical reviewers make this very point:

The article lacks proper references and has a down view of the concept instead of explaining what it is. The article is also very focused on explaining what modules are and why they are good, getting out of the topic. I would go a step forward and say that this article has wrong information. 

The Wikipedia article speaks of "data silos". This is a very interesting point, and it deserves to be looked at separately, at another time.

But establishing an opposing paradigm as the panacea would require a bit more work. We can say that people worked against that figure of the "monolith" for thirty or forty years (OOD/OOP, for example) before microservices were proclaimed to be the solution.


Saturday, July 31, 2021

Elementary, Watson

 Reading technical publications today, as soon as they raise their eyes to the architecture level, it would seem that the only thing that exists is a microservices-based architecture, and everything else is monolithic and contemptible (or, pejoratively, just prehistoric monoliths). Atul Mittal writes on Medium:

Is it always necessary to have Microservice Architecture instead of Monolithic Architecture?

The answer is NO, BIG NO. You should very understand the efforts and the value your product brings in. Generally, at first, implementing Microservice Architecture is a very tedious task, you need to have a separate Team/Architecture for each component that you need to have in your Application and a lot of thought process goes into that which might not be a viable option for small companies or Application built in very less budget, so there comes a saviour way to create Applications in “Monolithic” fashion or you can also say “traditional” way of creating Applications. If there are not so many components in that case also, you can think to adopt a “Monolithic’ way of doing things. But keep in mind, to modularize your application as much as possible so that later in case you want to redesign it in a “Microservice” way you don’t face many challenges in doing so.