tag:blogger.com,1999:blog-87585792024-03-23T19:08:58.516+01:00Hacia la Cuarta Generacion del SoftwareComments, discussions, and notes on trends in the development of information technology, and on the importance of quality in software construction.Jorge Ubedahttp://www.blogger.com/profile/16457542679928501488noreply@blogger.comBlogger807125tag:blogger.com,1999:blog-8758579.post-24637253090767111682023-12-31T19:42:00.002+01:002023-12-31T19:42:19.412+01:00Geoffrey Hinton on artificial intelligence<p> </p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcSRiK48hm49jTrvqdHdV_uDZGYyk8-4M_dEZHK4z-_3073O457A0W4mPGH6UOOx5DuTkIXguyJNCW5BJuRxqarkQ66y719s3lOEoHJZspqnJTQaKoOX4QDCDLRyPvz8kfJUNgE6E0Ku5modA9h0bhkm-HPY2dxCuDmk8rUQg0QzezQHr0bzXt/s587/Geoffrey_Hinton.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="587" data-original-width="440" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcSRiK48hm49jTrvqdHdV_uDZGYyk8-4M_dEZHK4z-_3073O457A0W4mPGH6UOOx5DuTkIXguyJNCW5BJuRxqarkQ66y719s3lOEoHJZspqnJTQaKoOX4QDCDLRyPvz8kfJUNgE6E0Ku5modA9h0bhkm-HPY2dxCuDmk8rUQg0QzezQHr0bzXt/s320/Geoffrey_Hinton.jpg" width="240" /></a></div><br />In <a href="https://www.technologyreview.com/2023/05/02/1072528/geoffrey-hinton-google-why-scared-ai/?mc_cid=c1b94a0267&mc_eid=85143887f7" target="_blank">MIT Technology Review</a>, Will Douglas Heaven interviews <a href="https://en.wikipedia.org/wiki/Geoffrey_Hinton" target="_blank">Geoffrey Hinton</a> about his current distrust of artificial intelligence:<br /><p></p><p></p><p><i>Hinton
fears that these tools are capable of figuring out ways to manipulate
or kill humans who aren’t prepared for the new technology. <br /><br />“I
have suddenly switched my views on whether these things are going to be
more intelligent than us. I think they’re very close to it now and they
will be much more intelligent than us in the future,” he says. “How do
we survive that?”<br /><br />He is especially worried that people could
harness the tools he himself helped breathe life into to tilt the scales
of some of the most consequential human experiences, especially
elections and wars.<br /><br />“Look, here’s one way it could all go wrong,”
he says. “We know that a lot of the people who want to use these tools
are bad actors like Putin or DeSantis. They want to use them for winning
wars or manipulating electorates.”<br /><br />Hinton believes that the next
step for smart machines is the ability to create their own subgoals,
interim steps required to carry out a task. What happens, he asks, when
that ability is applied to something inherently immoral?<br /><br />“Don’t
think for a moment that Putin wouldn’t make hyper-intelligent robots
with the goal of killing Ukrainians,” he says. “He wouldn’t hesitate.
And if you want them to be good at it, you don’t want to micromanage
them—you want them to figure out how to do it.”<br /><br />There are already
a handful of experimental projects, such as BabyAGI and AutoGPT, that
hook chatbots up with other programs such as web browsers or word
processors so that they can string together simple tasks. Tiny steps,
for sure—but they signal the direction that some people want to take
this tech. And even if a bad actor doesn’t seize the machines, there are
other concerns about subgoals, Hinton says.<br /><br />“Well, here’s a
subgoal that almost always helps in biology: get more energy. So the
first thing that could happen is these robots are going to say, ‘Let’s
get more power. Let’s reroute all the electricity to my chips.’ Another
great subgoal would be to make more copies of yourself. Does that sound
good?”<br /><br />Maybe not. But Yann LeCun, Meta’s chief AI scientist,
agrees with the premise but does not share Hinton’s fears. “There is no
question that machines will become smarter than humans—in all domains in
which humans are smart—in the future,” says LeCun. “It’s a question of
when and how, not a question of if.”<br /><br />But he takes a totally
different view on where things go from there. “I believe that
intelligent machines will usher in a new renaissance for humanity, a new
era of enlightenment,” says LeCun. “I completely disagree with the idea
that machines will dominate humans simply because they are smarter, let
alone destroy humans.”<br /><br />“Even within the human species, the
smartest among us are not the ones who are the most dominating,” says
LeCun. “And the most dominating are definitely not the smartest. We have
numerous examples of that in politics and business.”<br /><br /><a href="https://en.wikipedia.org/wiki/Yoshua_Bengio" target="_blank">Yoshua Bengio</a>,
who is a professor at the University of Montreal and scientific
director of the Montreal Institute for Learning Algorithms, feels more
agnostic. “I hear people who denigrate these fears, but I don’t see any
solid argument that would convince me that there are no risks of the
magnitude that Geoff thinks about,” he says. But fear is only useful if
it kicks us into action, he says: “Excessive fear can be paralyzing, so
we should try to keep the debates at a rational level.”</i><br /><br /><a href="https://en.wikipedia.org/wiki/Yann_LeCun" target="_blank">LeCun</a> is very optimistic... were it not for the drones over Kyiv, Navalny's imprisonment, or China's social-control measures, perhaps his vision could be accepted.</p><span style="font-size: x-small;">Photo:
Ramsey Cardy / Collision via Sportsfile, CC BY 2.0
<https://creativecommons.org/licenses/by/2.0>, via Wikimedia
Commons </span>Jorge Ubedahttp://www.blogger.com/profile/16457542679928501488noreply@blogger.com0tag:blogger.com,1999:blog-8758579.post-35042323791156226032023-08-27T19:32:00.000+02:002023-08-27T19:32:09.377+02:00Publishing the IBM i without IBM<p> A salesman at an American company that works with the AS/400, today called IBM i, tired of hearing claims that the machine is no longer built or used, decided to publicize, one company at a time, the firms that use it in the United States. The idea responds to IBM's inactivity around a machine that has earned it a great deal of money over the 35 years it has been evolving, and that does not deserve what in practice looks like concealment on IBM's part. If you search for development information about the platform, you will struggle to find it: ask about DB2 and you will be redirected to DB2 for System z; ask about SQL utilities or JSON-processing facilities and you will first be sent to System z, and only by refining the search will you hit the mark. All the pre-existing links to very valuable, well-written articles were lost, without redirects, just a few years ago. The material exists, but only a patient search will get you to it. A sad state of affairs for a machine that has never stopped evolving and acquiring first-rate functionality, whose processing speed remains very competitive, and which keeps a large customer base that receives no education. In the end, perhaps by not educating and by hiding it, IBM will manage to make the machine cease to exist. </p><p>Here is what Alex Woodie says about this solitary effort:</p><p>If you are a consumer of mainstream news, it can be hard to find
anything about IBM i. The proprietary business platform isn’t marketed
by IBM in advertisements and it receives very little coverage in
mainstream IT publications. But a salesman for an IBM i business partner
has come up with an easy yet compelling way to boost the visibility of
the platform.</p>
<p>Earlier this month, Josh Bander, who is an enterprise account executive at <a href="https://www.briteskies.com/" rel="noopener" target="_blank">Briteskies</a>, shared a recent conversation he had through his <a href="https://www.linkedin.com/in/tjbander/" rel="noopener" target="_blank">LinkedIn page</a>. “Over the weekend, I spoke to a few of my friends in IT, and they all
told me #IBMi is dead,” Bander said. “To prove them wrong, I plan to
take pictures of items in my house made with IBM i for the next week.”</p>
<p>The first picture featured Bander’s car, a Honda. The Japanese
carmaker’s US subsidiary, American Honda Motor Company, has used one or
more IBM i servers at its Torrance, California, facility for years.</p>
<p>Day three brought an image of a shoe by Nike. The legendary Oregon
company has been an IBM i shop since at least 2003, when Nike acquired
Converse, and it was still using IBM i in 2021, according to <span style="background-color: #fcff01;">the list of
IBM i shops maintained by <a href="https://all400s.com/" rel="noopener" target="_blank">All400s.com</a>, which Bander used for his project.</span></p><p>A range hood for a stove made by Broan-NuTone appeared on day four of
Bander’s IBM i journey through his home. The Hartford, Wisconsin-based
manufacturer, which makes a variety of fans and air quality products, is
also a confirmed IBM i shop.</p>
<p>Do you have Kleenex in your house? If so, then you have a product
made by an IBM i shop, as the Irving, Texas-based Kimberly-Clark, maker
of the Kleenex brand of facial tissues, is another confirmed IBM i user.</p>
<p>What about Taster’s Choice? It may not be everybody’s favorite cup of joe – Starbucks, the <a href="http://www.starbucks.com" rel="noopener" target="_blank">coffee goliath from Seattle, Washington,</a>
is a longtime IBM i shop – but the iconic coffee brand has IBM i in its
veins, since it is owned by Switzerland-based Nestle, which is the
largest food company in the world and another IBM midrange system user.</p>
<p>Maybe you have some shipping labels lying around. If they’re made by
Avery Dennison, the well-known manufacturer of shipping labels and
packaging materials based in Glendale, California, then you’ve found
another everyday product made by an IBM i shop.</p><p>You don’t have to live the California wine country life to shop at
Williams Sonoma. But if you do buy from the popular retailer, you can
rest easy knowing that at least some aspect of the San Francisco
company’s business is managed by IBM i.</p>
<p>Another iconic American brand, Rubbermaid, is also an IBM i shop. The
Atlanta, Georgia company, which is now owned by Newell, was known to
have run the IBM i as of 2021.</p>
<p>Bander’s LinkedIn posts of household items made by IBM i shops
attracted quite a bit of attention from the IBM i ecosystem, and the
hashtag “IBMiEverywhere” began trending. Apparently, IBM i professionals
enjoy seeing that well-run and world-famous consumer brands are
longtime IBM i users.</p>
<p>So why doesn’t IBM do this, or something similar? We have pestered
Big Blue server execs many times over the years about the lack of
marketing and advertising support for the platform, and rarely come away
with satisfying answers.</p><p>To IBM’s credit, it does write and run case studies about IBM i customers. <span style="background-color: #fcff01;">It has a <a href="https://www.ibm.com/it-infrastructure/us-en/resources/power/ibm-i-customer-stories/" rel="noopener" target="_blank">section of its website</a>
where it has around 100 case studies of IBM i customers, as well as
stories about a few business partners.</span> Honda is on that list, as well as
brands like Carhartt and Lamps Plus.</p>
<p><span style="background-color: #fcff01;">But there are many, many more name-brand companies that rely on IBM i
that have never been officially mentioned by IBM as customers. Some of
the world’s largest and most profitable companies run at least a small
part of their businesses on the IBM i system, and while that in itself
is not a reason for other companies to follow suit, it at least shows
that world-class companies are continuing to invest in it and that has
value.</span></p>
<p><span style="background-color: #fcff01;">IBM execs often say they wish they could do more to tout the great
companies that rely on IBM i, and there’s no reason not to believe them.
The truth is, the companies themselves often are not interested in
participating in a formal IBM case study, marketing campaign, or to be
featured in actual advertisements – and if they are, they often expect
something in return for their cooperation.</span></p>
<p>That makes rogue efforts like Bander’s all the more fun and
entertaining. John Rockwell does his best to keep the All400s list up to
date, and while there are companies on the list that are actively
moving off the platform or planning to, there are plenty more that are
happy customers that aren’t going anywhere.</p>
<p>In the end, sharing unofficial lists of companies that run on IBM i
seems to be a good way to boost morale for the IT soldiers in the
trenches, who hear a lot of FUD and may be questioning their choice of
platform. As it turns out, there are a lot of great companies that
continue to rely on the box, which continues to run business software
reliably, securely, and efficiently decade after decade.</p>
<p>They may not be shouting their IBM i success from the rooftops. But sometimes actions speak louder than words.</p><p> <a href="https://www.itjungle.com/2023/08/14/a-simple-plan-to-boost-ibm-i-visibility/" target="_blank">Seen at IT Jungle</a>, in August.<br /></p><p> </p><p> </p><p></p>Jorge Ubedahttp://www.blogger.com/profile/16457542679928501488noreply@blogger.com0tag:blogger.com,1999:blog-8758579.post-50298683395911524482023-02-12T09:24:00.000+01:002023-02-12T09:24:52.331+01:00AI and ethics<p> The biggest problem with artificial intelligence is that its constructions are based on mathematical and logical principles, and these are not sufficient and can be led astray. Very recently, <a href="https://cuartageneracion.blogspot.com/2022/12/galactica-y-las-dificultades-de-los.html" target="_blank">Galactica reflected this</a> with its disastrous launch, crashing within three days. <a href="https://hai.stanford.edu/news/how-large-language-models-will-transform-science-society-and-ai" target="_blank">GPT seems set on improving this approach</a>, an ongoing project that is sweeping away every benchmark of interest from the past. Meanwhile, the Big Tech companies will probably have to realign. As <a href="https://www.technologyreview.com/2023/02/08/1068068/chatgpt-is-everywhere-heres-where-it-came-from" target="_blank">Will Douglas Heaven writes in Technology Review</a>:<br /></p><p></p><blockquote>While OpenAI was wrestling with GPT-3’s biases, the rest of the tech
world was facing a high-profile reckoning over the failure to curb toxic
tendencies in AI. It’s no secret that large language models can spew
out false—even hateful—text, but researchers have found that <a href="https://www.technologyreview.com/2020/10/23/1011116/chatbot-gpt3-openai-facebook-google-safety-fix-racist-sexist-language-ai/">fixing the problem</a>
is not on the to-do list of most Big Tech firms. When Timnit Gebru,
co-director of Google’s AI ethics team, coauthored a paper that
highlighted the <a href="https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/">potential harms associated with large language models</a> (including <a href="https://www.technologyreview.com/2022/11/14/1063192/were-getting-a-better-idea-of-ais-true-carbon-footprint/">high computing costs</a>), it was not welcomed by senior managers inside the company. In December 2020, Gebru was <a href="https://www.technologyreview.com/2020/12/16/1014634/google-ai-ethics-lead-timnit-gebru-tells-story/">pushed out of her job</a>. </blockquote><br /><p></p>Jorge Ubedahttp://www.blogger.com/profile/16457542679928501488noreply@blogger.com0tag:blogger.com,1999:blog-8758579.post-28025318848155908622023-01-07T08:49:00.006+01:002023-01-07T08:49:58.023+01:00The IBM i and its current capabilities <p> <a href="https://www.itjungle.com/2022/10/31/whats-the-best-web-language-for-ibm-i/" target="_blank">Mike Pavlak says</a>: “At the end of the day, professionally I’ve worked in about six different languages. I can write bad code in every one of them.”</p><p>The context of that statement is interesting: the set of resources the IBM i (or should we say IBM Power?) has today for interconnecting with all kinds of resources: PHP, Node, and Python, on top of the old acquaintances C, C++, and Java. Pavlak weighs the suitability of each mainly for connections with web architectures, but that range of possibilities can no doubt go further.</p><p>On Node in particular, Pavlak says: </p><p></p><blockquote><p>Node.js isn’t as easy to learn for true-blue IBM i types, but it has
one advantage over the other two: it uses JavaScript, which as
previously noted has been broadly adopted by the wider IT world.
However, there’s a caveat to the notion that Node.js developers only
need to know JavaScript to be productive.</p>
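A quick aside, not part of Pavlak's article: the caveat can be made concrete with a minimal sketch of my own (the function names are invented for illustration). The pure logic is the same JavaScript/TypeScript syntax on either end, but the persistence step uses Node's built-in `fs` module, which has no browser counterpart.

```typescript
// Shared logic: plain language syntax, runs equally in a browser or in Node.
function normalizeOrderId(raw: string): string {
  return raw.trim().toUpperCase();
}

// Server-only logic: 'node:fs' is a Node built-in with no browser equivalent.
// A browser front end would instead send this data over HTTP (fetch/XHR).
import { writeFileSync } from "node:fs";

function persistOrder(id: string): void {
  writeFileSync(`${id}.json`, JSON.stringify({ id }));
}

console.log(normalizeOrderId("  ab-123 ")); // prints "AB-123"
```

The syntax is identical on both sides; only the library surface differs, which is exactly the caveat being made.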
<p>“A lot of people like Node because there’s a myth that I can use the
same language on the presentation layer, JavaScript . . . up on the
server,” Pavlak said. “And there’s truth to that. The syntax of the
language is the same. What’s different though is the library usage. The
libraries you’d use on the client end are not the libraries you would
use on the server.”</p><p>
Choosing Node.js makes sense in certain scenarios, such as when an
IBM i shop has hired younger developers with JavaScript skills. Because
the syntax is the same, these front-end JavaScript developers may be
able to become productive developing back-end Node.js code on the IBM i
server in a shorter amount of time than using other languages. “Using
Node on the backend starts to make sense in that scenario,”</p><p>(...) Node.js does have a significant performance advantage over PHP and
Python in one particular category: how quickly the stack starts. The
technology, whose underlying V8 engine was created by <a href="http://www.google.com">Google</a>,
is widely used by massive Web properties, such as Netflix. When you
fire up a Netflix session on your TV, your Roku, or your phone, you’re
actually initiating the deployment of a Node.js instance running on <a href="http://www.aws.amazon.com">AWS</a>.</p>
<p>“Node.js starts so fast, it’s so much easier to scale…horizontally,”
Pavlak said. “So AWS instances are basically X86. In that scenario, Node
has a decided advantage.”</p></blockquote><p></p><p> </p><p></p><p> </p>Jorge Ubedahttp://www.blogger.com/profile/16457542679928501488noreply@blogger.com0tag:blogger.com,1999:blog-8758579.post-77928656773124060042022-12-11T12:48:00.000+01:002022-12-11T12:48:16.810+01:00Galactica and the difficulties of language models<p></p><p>In November, Meta presented a <a href="https://en.wikipedia.org/wiki/Language_model" target="_blank">language model</a> named Galactica, built to assist scientific researchers, but only three days later it was withdrawn from availability for consultation or testing. Basically, as has happened in other fields of work with artificial intelligence (AI), the language does not recognize truth or falsehood. In testing, papers formally presented as scientific but absurd, such as the existence of <a href="https://futurism.com/the-byte/facebook-takes-down-galactica-ai" target="_blank">bears in space</a> or the causes of the war in Ukraine, passed as sound, complete with reasoned justifications.</p><p><a href="https://www.technologyreview.com/2022/11/18/1063487/meta-large-language-model-ai-only-survived-three-days-gpt-3-science" target="_blank">Will Douglas Heaven, in Technology Review</a>:</p><p></p><blockquote><p>Galactica is a large language model for science, trained on 48 million
examples of scientific articles, websites, textbooks, lecture notes, and
encyclopedias. Meta promoted its model as a shortcut for researchers
and students. In the company’s words, Galactica “can summarize academic
papers, solve math problems, generate Wiki articles, write scientific
code, annotate molecules and proteins, and more.”</p><p>(...) A fundamental problem with Galactica is that it is not able to
distinguish truth from falsehood, a basic requirement for a language
model designed to generate scientific text. People found that it made up
fake papers (sometimes attributing them to real authors), and generated
wiki articles about the <a href="https://twitter.com/meaningness/status/1592634519269822464">history of bears in space</a>
as readily as ones about protein complexes and the speed of light. It’s
easy to spot fiction when it involves space bears, but harder with a
subject users may not know much about.</p> <p>(...) Many scientists pushed
back hard. Michael Black, director at the Max Planck Institute for
Intelligent Systems in Germany, who works on deep learning, <a href="https://twitter.com/Michael_J_Black/status/1593133722316189696">tweeted</a>: “In all cases, <span style="background-color: #fcff01;">it was wrong or biased but sounded right and authoritative</span>. I think it’s dangerous.”</p><p>(...) The Meta team behind Galactica argues that language models are better
than search engines. “We believe this will be the next interface for
how humans access scientific knowledge,” the researchers <a href="https://galactica.org/static/paper.pdf">write</a>. This
is because language models can “potentially store, combine, and reason
about” information. <span style="background-color: #fcff01;">But that “potentially” is crucial</span>. It’s a coded
admission that language models cannot yet do all these things. And they
may never be able to. “Language models are not really knowledgeable beyond their ability to
capture patterns of strings of words and spit them out in a
probabilistic manner,” says [Chirag Shah, University of Washington]. “It gives a false sense of
intelligence.”</p></blockquote><p></p><p> <a href="https://en.wikipedia.org/wiki/Grady_Booch" target="_blank">Grady Booch</a> <a href="https://twitter.com/Grady_Booch/status/1593033061423550464" target="_blank">comments</a>: <span class="css-901oao css-16my406 r-poiln3 r-bcqeeo r-qvutc0">"Galactica is little more than statistical nonsense at scale.
Amusing. Dangerous. And IMHO unethical". One ML researcher (Yann LeCun, in the same thread) takes offense at the charge of being unethical. I think some scientists have yet to gauge the reach of what they have in their hands. <br /></span></p><p> </p><p> </p>Jorge Ubedahttp://www.blogger.com/profile/16457542679928501488noreply@blogger.com0tag:blogger.com,1999:blog-8758579.post-91539097470740644582022-12-10T00:26:00.002+01:002022-12-10T00:26:20.403+01:00Frederick Brooks: a pioneer dies<p> </p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhrAhSP9c8bYLcRdmPT81jN-cJHvavR3Jqc_7ishFTCK1K5WKyYgWuwv1rpFtvwCRtAEF3KwZxIlRXEV43zJRJ4pmKd7ctK16qToZOEKtDE9xi7_lUOqR8X_8ahE-O2Ak1x46vaBkY1LRJboJXpnzckOZf5tepvjtVwWBLlhyshiu_XNNydXQ/s324/Frederick-Brooks.jpg" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="324" data-original-width="250" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhrAhSP9c8bYLcRdmPT81jN-cJHvavR3Jqc_7ishFTCK1K5WKyYgWuwv1rpFtvwCRtAEF3KwZxIlRXEV43zJRJ4pmKd7ctK16qToZOEKtDE9xi7_lUOqR8X_8ahE-O2Ak1x46vaBkY1LRJboJXpnzckOZf5tepvjtVwWBLlhyshiu_XNNydXQ/s320/Frederick-Brooks.jpg" width="247" /></a></div><p></p><p>A few days ago, on November 17, <a href="https://es.wikipedia.org/wiki/Frederick_Brooks" target="_blank">Frederick Brooks</a> died, a pioneer of software engineering, almost of its first generation. Long-lived, he continued working in connection with digital technologies into the first decade of this century, having started in 1953, after graduating from Duke University. 
He joined IBM in 1956 and stayed until 1965, where he led the design of the 360 computers (<a href="https://es.wikipedia.org/wiki/IBM_S/360" target="_blank">IBM System/360</a>), the mainframe line that became the basis of IBM's structured architecture and the direct ancestor of the 4300s and today's System z. Even now, an application coded on and for the 360 can run on a System z. Brooks was one of the pillars of the decisions that made this evolution possible. </p><p>Brooks's other great contribution lies in methodology, in the systematization of his experience from his IBM years: first, in 1975, with <i><a href="https://en.wikipedia.org/wiki/The_Mythical_Man-Month" target="_blank">The Mythical Man-Month</a></i>, and years later, in 1986, with <i><a href="https://en.wikipedia.org/wiki/No_Silver_Bullet" target="_blank">No Silver Bullet—Essence and Accident in Software Engineering</a></i>, later added as a new chapter to The Mythical... There is a great gap between the eras in which he wrote these books and a present-day reading, but despite the technical lag they should still be required reading. </p><p>What follows is <a href="https://www.infoq.com/news/2022/12/fred-brooks-obituary/" target="_blank">Shane Hastie's obituary</a> in InfoQ, with a good set of references to Brooks's achievements:<br /></p><p></p><blockquote><p><i>Dr Frederick P Brooks Jr, originator of the term architecture in
computing, author of one of the first books to examine the nature of
computer programming from a sociotechnical perspective, architect of the
IBM 360 series of computers, university professor and person
responsible for the 8-bit byte died on 17 November at his home in Chapel
Hill, N.C. Dr Brooks was 91 years old.</i></p>
<p><i>He was a pioneer of computer architecture, highly influential through his practical work and publications including <a href="https://www.oreilly.com/library/view/mythical-man-month-the/0201835959/">The Mythical Man Month</a>, <a href="https://www.oreilly.com/library/view/the-design-of/9780321702081/">The Design of Design </a>and his paper <a href="http://worrydream.com/refs/Brooks-NoSilverBullet.pdf">No Silver Bullet</a> which debunked many of the myths of software engineering.</i></p>
<p><i>In 1999 he was awarded a <a href="https://amturing.acm.org/award_winners/brooks_1002187.cfm">Turing Award</a> for landmark contributions to computer architecture, operating systems, and software engineering. In the award overview it is pointed out that</i></p>
<blockquote>
<p><i>Brooks coined the term <b>computer architecture</b> to mean
the structure and behavior of computer processors and associated
devices, as separate from the details of any particular hardware
implementation</i></p>
</blockquote>
<p><i>In the No Silver Bullet article he states:</i></p>
<blockquote>
<p><i>There is no single development, in either technology or management
technique, which by itself promises even one order-of-magnitude
improvement within a decade in productivity, in reliability, in
simplicity.</i></p>
</blockquote>
<p><i>Quotations from the <a href="https://www.amazon.com/gp/product/B00B8USS14/ref=dbs_a_def_rwt_hsch_vapi_tkin_p1_i0">Mythical Man Month:Essays on Software Engineering</a> permeate software engineering today, including:</i></p>
<blockquote>
<ul><li><i>Adding manpower to a late software project makes it later. </i></li><li><i>The bearing of a child takes nine months, no matter how many women are assigned.</i></li><li><i>All programmers are optimists.</i></li></ul>
</blockquote>
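<p>A quick aside, not part of Hastie's obituary: the arithmetic behind the first of those maxims is worth spelling out. Brooks's argument is that intercommunication effort grows with the number of <i>pairs</i> of people, n(n-1)/2. A minimal sketch of my own:</p>

```typescript
// Pairwise communication paths in a team of n people: n * (n - 1) / 2.
function communicationPaths(n: number): number {
  return (n * (n - 1)) / 2;
}

console.log(communicationPaths(5));  // prints 10
console.log(communicationPaths(10)); // prints 45: doubling the team size
                                     // more than quadruples coordination work
```

<p>Add the ramp-up time of the new hires and the schedule slips further, which is the mechanism behind "adding manpower to a late software project makes it later".</p>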
<p><i>On April 29, 2010 Dilbert explored the <a href="https://dilbert.com/strip/2010-04-29">adding manpower</a> quote. </i></p>
<p><i>In 2010 he was <a href="https://www.wired.com/2010/07/ff-fred-brooks/">interviewed by Wired</a> magazine. When asked about his greatest technical achievement he responded</i></p>
<blockquote>
<p><i>The most important single decision I ever made was to change the IBM
360 series from a 6-bit byte to an 8-bit byte, thereby enabling the use
of lowercase letters. That change propagated everywhere.</i></p>
</blockquote>
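<p>A quick aside, not part of Hastie's obituary: the constraint behind that decision is simple arithmetic. ASCII's printable repertoire alone (space, 26 uppercase letters, 26 lowercase letters, 10 digits, and 32 punctuation marks) needs 95 code points; a 6-bit byte offers only 2<sup>6</sup> = 64, so lowercase could never fit, while 8 bits allow 256. A quick check:</p>

```typescript
const sixBitValues = 2 ** 6;   // 64 distinct codes in a 6-bit byte
const eightBitValues = 2 ** 8; // 256 distinct codes in an 8-bit byte

// Printable ASCII: space + uppercase + lowercase + digits + punctuation.
const printableAscii = 1 + 26 + 26 + 10 + 32; // 95 characters

console.log(sixBitValues < printableAscii);    // prints true: no room for lowercase
console.log(eightBitValues >= printableAscii); // prints true
```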
<p><i>He was the founder of the Computer Science Department at the
University of North Carolina at Chapel Hill, where the Computer Science
building is named after him. In an <a href="https://cs.unc.edu/news-article/remembering-department-founder-dr-frederick-p-brooks-jr">obituary</a> the University says:</i></p>
<blockquote>
<p><i>Dr. Brooks has left an unmistakable mark on the computer science
department and on his profession; this is physically recognized by the
south portion of the department’s building complex bearing his name. He
set an example of excellence in both scholarship and teaching, with a
constant focus on the people of the department, treating everyone with
respect and appreciation. His legacy will live on at UNC-Chapel Hill</i></p>
</blockquote>
<p><i>His page on the university website lists his <a href="http://www.cs.unc.edu/~brooks/">honours, books and publications</a>.</i></p>
<p><i>The <a href="https://computerhistory.org/">Computer History Museum</a> has an <a href="https://archive.computerhistory.org/resources/access/text/2012/11/102658255-05-01-acc.pdf">interview of Dr Brooks</a> by <a href="https://www.computer.org/profiles/grady-booch">Grady Booch</a>.</i></p>
<p><i>He leaves his wife of 66 years Nancy, three children, nine grandchildren and two great-grandchildren.</i></p></blockquote><p></p><p> </p>Jorge Ubedahttp://www.blogger.com/profile/16457542679928501488noreply@blogger.com0tag:blogger.com,1999:blog-8758579.post-5438155921459206502022-10-30T20:24:00.002+01:002022-10-30T20:24:24.604+01:00Warnings about design and microservices<p> A few days ago I read a set of observations about microservices that struck me as more than apt, especially now that microservices seem to be the universal recipe for every company. If you browse job postings, for months they have been the star requirement, and it is much the same in corporate presentations. I follow the articles offered on Medium, and their presence there is overwhelming. In fact, <a href="https://levelup.gitconnected.com/things-you-must-know-before-switching-to-microservices-2634f217839d" target="_blank">the observations I am commenting on were published there</a>.<br /></p><p>Are microservices really a total answer? In these recommendations <a href="https://medium.com/@giedrius.kristinaitis" target="_blank">Giedrius Kristinaitis</a> casts doubt on that, and shovels in some welcome sanity:</p><p></p><blockquote>What you need to answer yourself is how microservices will help <i class="mu">your</i> particular situation. Think about <i class="mu">your </i>situation,
and <span style="background-color: #fcff01;">don’t blindly copy what big tech companies do, because their domain
is most likely different from yours, and they have their own reasons
that might not exist for you</span>. You can listen to their general advice,
just don’t be like “<i class="mu">oh, this company is doing X to solve their Y problem, so we’ll do the same</i>” when you don’t really have a Y problem.</blockquote><p></p><p>Giedrius recalls a very simple truth: do not apply a template; examine your problem:</p><p class="pw-post-body-paragraph ly lz jb ma b mb mv kc md me mw kf mg mh mx mj mk ml my mn mo mp mz mr ms mt iu gg" data-selectable-paragraph="" id="affd"></p><blockquote><p class="pw-post-body-paragraph ly lz jb ma b mb mv kc md me mw kf mg mh mx mj mk ml my mn mo mp mz mr ms mt iu gg" data-selectable-paragraph="" id="affd">Saying things like “<i class="mu">if we use microservices we’ll be able to reduce development costs, we’ll scale better, etc.</i>” is not a good answer, because it’s very generic and does not explain how.</p><p class="pw-post-body-paragraph ly lz jb ma b mb mv kc md me mw kf mg mh mx mj mk ml my mn mo mp mz mr ms mt iu gg" data-selectable-paragraph="" id="5ddb">Here’s what a good answer might look like: “<i class="mu">we
need to process a lot of batches of X data, however, we can’t do it
anymore, we can’t scale because each batch is unnecessarily coupled to
process Y which can’t be made any faster, nor does it need to, so we
need X to be decoupled from Y</i>”.</p><p class="pw-post-body-paragraph ly lz jb ma b mb mv kc md me mw kf mg mh mx mj mk ml my mn mo mp mz mr ms mt iu gg" data-selectable-paragraph="" id="4d2a">Such
an answer would tell exactly <span style="background-color: #fcff01;">what problem you’re having and why</span>.
Identifying your problem is very important. If you can’t identify your
problem you’re at a high risk of making your life too complicated by
needlessly starting with microservices.</p></blockquote><p class="pw-post-body-paragraph ly lz jb ma b mb mv kc md me mw kf mg mh mx mj mk ml my mn mo mp mz mr ms mt iu gg" data-selectable-paragraph="" id="4d2a"></p><p class="pw-post-body-paragraph ly lz jb ma b mb mv kc md me mw kf mg mh mx mj mk ml my mn mo mp mz mr ms mt iu gg" data-selectable-paragraph="" id="4d2a"> El consejo de Giedrius es no precipitarse estableciendo una arquitectura basada en microservicios, sino concentrarse en el problema, especialmente modificando progresivamente el diseño y la arquitectura de la aplicación monolítica de la que se parte. Recomienda disminuir el acoplamiento y las dependencias entre partes del sistema, quizá extrayendo partes que puedan manejarse como servicios:</p><p class="pw-post-body-paragraph ly lz jb ma b mb mv kc md me mw kf mg mh mx mj mk ml my mn mo mp mz mr ms mt iu gg" data-selectable-paragraph="" id="2574"></p><blockquote><p class="pw-post-body-paragraph ly lz jb ma b mb mv kc md me mw kf mg mh mx mj mk ml my mn mo mp mz mr ms mt iu gg" data-selectable-paragraph="" id="2574">(...) if you don’t think about making your system loosely coupled, and if you
don’t think about loose coupling, no matter what architecture you
choose, it’s probably not gonna work out, microservices included. (...) So
if you think that you must start with microservices from the get-go
you’re already implying that your services will be too coupled and too
static to actually qualify as microservices. If you can have a loosely
coupled monolithic system, you will be able to convert it to
microservices.</p><blockquote class="no np nq"><p class="ly lz mu ma b mb mv kc md me mw kf mg nr mx mj mk ns my mn mo nt mz mr ms mt iu gg" data-selectable-paragraph="" id="c63f"><i>If you can’t have a loosely coupled monolithic system, microservices will make your life even worse, a lot worse</i>.</p></blockquote></blockquote><blockquote class="no np nq"><p class="ly lz mu ma b mb mv kc md me mw kf mg nr mx mj mk ns my mn mo nt mz mr ms mt iu gg" data-selectable-paragraph="" id="c63f"></p></blockquote><p>Giedrius desplaza la atención a resaltar que este paso es, ante todo, un problema de diseño, y que eso es lo que debe quedar claro en primer lugar, dejando a un lado decisiones basadas en "porque lo hizo Netflix". Hay que reflexionar acerca del diseño actual y su caos, analizando las prácticas que llevaron a tener el desorden que se quiere corregir. Sin este paso, el fallo se repetirá:</p><p class="pw-post-body-paragraph ly lz jb ma b mb mv kc md me mw kf mg mh mx mj mk ml my mn mo mp mz mr ms mt iu gg" data-selectable-paragraph="" id="e334"></p><blockquote><p class="pw-post-body-paragraph ly lz jb ma b mb mv kc md me mw kf mg mh mx mj mk ml my mn mo mp mz mr ms mt iu gg" data-selectable-paragraph="" id="e334">The
old monolithic system is a huge pile of spaghetti and needs to be
rewritten. The biggest mistake you can make in such a situation is not
learning from past mistakes. You should sit down and closely inspect
what bad (engineering) practices or processes led to the state that it’s
in.</p><p class="pw-post-body-paragraph ly lz jb ma b mb mv kc md me mw kf mg mh mx mj mk ml my mn mo mp mz mr ms mt iu gg" data-selectable-paragraph="" id="a0b2">If
you don’t do that you’re bound to repeat the same mistakes when you
rewrite the system. You know what they say, history repeats itself, and
the only way to prevent it is to learn about history.</p><p class="pw-post-body-paragraph ly lz jb ma b mb mv kc md me mw kf mg mh mx mj mk ml my mn mo mp mz mr ms mt iu gg" data-selectable-paragraph="" id="2ee2">You
just can’t rush into a new project with the same engineering practices
you used in the old one and expect things to magically turn out
different this time around. The old one failed for a lot of reasons, and
you can’t ignore them. Everyone working on the new project should be
informed about them.</p></blockquote><p class="pw-post-body-paragraph ly lz jb ma b mb mv kc md me mw kf mg mh mx mj mk ml my mn mo mp mz mr ms mt iu gg" data-selectable-paragraph="" id="2ee2"></p><p class="pw-post-body-paragraph ly lz jb ma b mb mv kc md me mw kf mg mh mx mj mk ml my mn mo mp mz mr ms mt iu gg" data-selectable-paragraph="" id="2ee2">Recomiendo su lectura, y pensar estas observaciones. El artículo es más amplio pero esta es la parte que me interesa particularmente. <br /></p><p></p><p></p><p><br /></p>Jorge Ubedahttp://www.blogger.com/profile/16457542679928501488noreply@blogger.com0tag:blogger.com,1999:blog-8758579.post-42647414093849586312022-10-30T08:29:00.001+01:002022-10-30T08:29:38.941+01:00Más sobre Meta<p> Meta (Facebook) se hunde en la bolsa, con nuevas caídas de su valor:</p><p><b></b></p><blockquote><b>Meta</b> abrió el mercado bursátil hoy <b>al mismo precio de hace siete años</b>,
cuando la compañía aún se llamaba Facebook y parecía tener un enorme
futuro por delante. Una caída del 20% del precio de la acción tras la
presentación <a href="https://www.elmundo.es/economia/empresas/2022/10/26/6359a58ee4d4d8a46f8b459a.html" target="_blank">de los últimos datos trimestrales</a> ha barrido todo lo ganado desde entonces y demostrado que algunos gigantes tecnológicos, en realidad, tienen los pies de barro. (...) Las cifras del tercer trimestre, la verdad, <b>son mucho peores de lo que los analistas esperaban</b>.
En un año se han evaporado la mitad de los beneficios. El año pasado,
al cierre del tercer trimestre, la compañía aseguraba haber ganado cerca
de 9.000 millones de dólares. Este año la cifra apenas supera los 4.390
millones, <b>un 52% menos</b>. El beneficio por acción ha
caído un 49% a pesar de que los ingresos de la compañía han sido
relativamente estables, con una bajada de sólo un 4% que se puede
achacar fácilmente al clima económico general. (<a href="https://www.elmundo.es/blogs/elmundo/el-gadgetoblog/" target="_blank">Ángel Gimenez de Luis</a>, en <a href="https://www.elmundo.es/economia/empresas/2022/10/27/635abf06e4d4d881118b45f5.html" target="_blank">El Mundo</a>)<br /></blockquote><p></p><p> En una época en la que el lucro lo dan tus datos (si te preguntas cuál es su negocio, no es otro que tu conocimiento puesto en venta), su baja puede ser un acontecimiento positivo. Los datos de cada participante son el punto clave para Meta (y para muchos otros):</p><p></p><blockquote><p>Los problemas empezaron, aunque cueste creerlo, <b>con una simple actualización de software</b>.
A finales del año pasado Apple introdujo nuevos <span style="background-color: #fcff01;">controles de privacidad
en los iPhone que permiten a los usuarios limitar la cantidad de
información que las apps en sus teléfonos son capaces de extraer.</span></p><p>Hasta entonces Meta se apoyaba en su omnipresencia digital para elaborar <b>perfiles muy detallados de los usuarios</b>.
Recolectaba información no sólo del uso que se hacía de sus propias
aplicaciones, sino también muchas otras en las que incluía códigos de
seguimiento. Esto le permitía ser muy eficaz -y, por tanto, cobrar más-
en el negocio de la publicidad online.</p><p>Pero con los nuevos cambios
ha perdido una gran ventaja competitiva en su mercado más importante,
EEUU, donde la cantidad de personas que usan iPhone es muy alta. No
ayuda tampoco que Google haya decidido seguir un camino parecido con
Android, restringiendo cada vez más la cantidad y calidad de los datos
que muestra a los desarrolladores, salvo que los usuarios opten
explícitamente por compartirlos.</p></blockquote><p></p><p> Es que si el negocio es vender aire, y vivir en una <b>meta</b>-realidad, su alcance puede llegar a ser muy frágil, porque probablemente la vida diaria de la sociedad transcurre y transcurrirá en un entorno distinto, no en la ficción:</p><p></p><blockquote>El otro problema para Meta es que ha decidido <span style="background-color: #fcff01;">apostar su futuro a una sola carta</span>: <b>la realidad virtual</b>.
El año pasado anunció su cambio de nombre, justificándolo como un mejor
reflejo de sus intenciones. Zuckerberg cree que en un futuro cercano la
mayor parte de nuestra vida digital, tanto en los momentos de ocio como
de trabajo, transcurrirán en entornos virtuales, algo que, en conjunto,
denomina como "el metaverso", de ahí que ahora hablemos de Meta en
lugar de Facebook.</blockquote><p></p><p> Ya tuvimos un <a href="https://es.wikipedia.org/wiki/Second_Life" target="_blank">Second Life</a>. </p><p>Giménez de Luis apunta también a TikTok, que compite en su terreno, quitándole porciones importantes de seguidores. Con el agravante de que TikTok representa la presencia cada vez mayor de China como competidor por la hegemonía. Mismos o peores objetivos, si tenemos en cuenta el totalitarismo nada virtual chino.</p><p><br /></p>Jorge Ubedahttp://www.blogger.com/profile/16457542679928501488noreply@blogger.com0tag:blogger.com,1999:blog-8758579.post-91966764080039819892022-10-18T20:14:00.002+02:002022-10-18T20:14:49.306+02:00No comprometa proyectos basados en Google II<p> <a href="https://cuartageneracion.blogspot.com/2022/09/no-comprometa-proyectos-basados-en.html" target="_blank">Como hemos dicho antes</a>, la confiabilidad en la continuidad de un proyecto o un producto de Google tiende a cero. Tanto que existe una página "<a href="https://killedbygoogle.com/" target="_blank">Killed by Google</a>", con un recuento de productos e iniciativas que en su momento fueron populares y que fueron abandonadas. Decir "abandonadas" quiere decir que lo que alguien hubiera invertido se ha perdido, o a duras penas salvado con un costo de reingeniería.</p><p><a href="https://medium.com/@itslizmartin/why-google-keeps-killing-its-products-b6c352eda9f9" target="_blank">Liz Martin en Medium</a> (Why Google Keeps Killing Its Products):</p><p class="pw-post-body-paragraph js jt ik bm b ju md jw jx jy me ka kb kc mf ke kf kg mg ki kj kk mh km kn ko id gl" data-selectable-paragraph="" id="1f4a"></p><blockquote><p class="pw-post-body-paragraph js jt ik bm b ju md jw jx jy me ka kb kc mf ke kf kg mg ki kj kk mh km kn ko id gl" data-selectable-paragraph="" id="1f4a">(...) But
here’s the thing: killing off projects is part of Google’s innovation
process. Many of the Google products that people use today include
features from things that no longer exist.</p><p class="pw-post-body-paragraph js jt ik bm b ju jv jw jx jy jz ka kb kc kd ke kf kg kh ki kj kk kl km kn ko id gl" data-selectable-paragraph="" id="c42d">For example, Google Inbox was killed off in 2019 but many of its features <a class="au kx" href="https://socialbarrel.com/google-already-porting-some-inbox-features-to-gmail-for-android/118780/" rel="noopener ugc nofollow" target="_blank">migrated over to Gmail</a>. Google Play Music was killed off in 2020, but several of its features are <a class="au kx" href="https://arstechnica.com/gadgets/2018/05/youtube-music-will-replace-google-play-music-but-wont-kill-user-uploads/" rel="noopener ugc nofollow" target="_blank">being used in Youtube Music</a>. Google Allo was killed off in 2019, but its best features were <a class="au kx" href="https://gizmodo.com/android-messages-is-getting-some-of-the-best-features-f-1826929313" rel="noopener ugc nofollow" target="_blank">ported over to Android Messages</a>.</p><p class="pw-post-body-paragraph js jt ik bm b ju jv jw jx jy jz ka kb kc kd ke kf kg kh ki kj kk kl km kn ko id gl" data-selectable-paragraph="" id="052b">(...) Google
exists in a fast-paced space. The faster the company can fail, the more
quickly it can innovate and beat the competition to the newest
technological advancement. No matter how chaotic, these calculated risks
are the method to Google’s madness.</p><p class="pw-post-body-paragraph js jt ik bm b ju jv jw jx jy jz ka kb kc kd ke kf kg kh ki kj kk kl km kn ko id gl" data-selectable-paragraph="" id="c1c5"><strong class="bm mr">Question: What do you think Google will kill off next? What product would you like to see Google bring back to life?</strong></p></blockquote><p class="pw-post-body-paragraph js jt ik bm b ju jv jw jx jy jz ka kb kc kd ke kf kg kh ki kj kk kl km kn ko id gl" data-selectable-paragraph="" id="c1c5"><strong class="bm mr"></strong></p><p class="pw-post-body-paragraph js jt ik bm b ju jv jw jx jy jz ka kb kc kd ke kf kg kh ki kj kk kl km kn ko id gl" data-selectable-paragraph="" id="c42d"> </p><p></p>Jorge Ubedahttp://www.blogger.com/profile/16457542679928501488noreply@blogger.com0tag:blogger.com,1999:blog-8758579.post-7854464572731353722022-09-21T23:28:00.001+02:002022-10-30T08:32:06.062+01:00Meta en problemas<p> En el bloque de las <a href="https://en.wikipedia.org/wiki/Big_Tech" target="_blank">Big Tech</a>, Big Four o Big Five, según las variaciones de criterio para clasificarlas, hay un elemento común que pesa con una masa descomunal sobre la industria tecnológica o sobre su investigación y evolución: la capacidad monopólica de imponer tendencias y torcer el rumbo del desarrollo según sus criterios. En este sentido han perdido su halo primario de tecnológicas "buenas", que gozaron en mayor o menor medida todas ellas en sus comienzos: innovadoras, abiertas, promotoras de la inteligencia y la iniciativa, participantes en toda clase de iniciativas de mejora social. Desde hace años son para las autoridades de Estados Unidos y Europa el centro de revisiones de prácticas monopólicas, y actores de primera línea de lobbismo en favor de sus proyectos, con <a href="https://en.wikipedia.org/wiki/Lawsuits_involving_Meta_Platforms" target="_blank">sanciones que se van acumulando</a>. 
Dentro de ellas destacan, a mi juicio, dos: <a href="https://en.wikipedia.org/wiki/Meta_Platforms" target="_blank">Facebook</a> (ahora Meta) y Twitter. Facebook ha sido particularmente escandalosa y expuesta <a href="https://www.bbc.com/mundo/noticias-internacional-37946548" target="_blank">durante la presidencia americana de Donald Trump</a>. Es que con una masa de usuarios participantes cercana a tres mil millones, la capacidad de manipulación es semejante a tener un gobierno que rigiera Estados Unidos, Europa, Rusia y China, y esto es parte de su negocio. </p><p>Sin embargo, por agotamiento o por competencia, ha llegado un momento en que por primera vez no ha crecido, y eso ha activado alarmas. La vía de escape imaginada por su dirección ha sido lanzar Meta con el nuevo paradigma de "Metaverso", "<a href="https://en.wikipedia.org/wiki/Meta_Platforms#Rebranding"><i>a digital extension of the physical world by social media, virtual reality and augmented reality features</i></a>" . Meta vende vida virtual, aire en la red, y su proyecto tiene riesgos que este año no parecen ser igual de virtuales. En <a href="https://www.eleconomista.es/tecnologia/noticias/11952920/09/22/Mark-Zuckerberg-ha-perdido-70200-millones-este-ano-es-la-punta-del-iceberg-de-los-problemas-de-Meta.html" target="_blank"><i>El Economista</i></a>:</p><p></p><blockquote><i>(...) el pasado febrero, la red social presenta sus resultados y, con ellos, llega el primer periodo en el que los usuarios no aumentan. Esto provoca el hundimiento de la compañía y el mayor desplome en un día en el patrimonio de su fundador, marcando una caída histórica de 31.000 millones en una sesión. <br /><br />La ausencia de nuevas altas en la plataforma revela dos cosas: la competencia con TikTok y un menor presupuesto publicitario por parte de los anunciantes. En el primer caso, la red social de Zuckerberg ha encontrado una gran rival en la china gracias al éxito de su formato, los vídeos cortos. 
En el segundo, el deterioro de las condiciones económicas ha lastrado los ingresos de la compañía.<br /><br />Además, el órdago por el Metaverso ha requerido y seguirá necesitando enormes inversiones, algo que ha pesado en el valor de la compañía este ejercicio. De hecho, el propio Zuckerberg dijo que la nueva propuesta de la tecnológica era deficitaria y que supondría pérdidas durante tres y cinco años. Además, en los últimos tiempos, la antigua Facebook ha sido objeto de un mayor escrutinio regulatorio. <br /><br />En comparación con sus competidoras, es la que peor rinde en bolsa. Meta Platforms se deja un 57% de valor en lo que va de año, solo superada por Netflix, que pierde un 60%. Sin embargo, las rentabilidades negativas de Apple, Amazon y Alphabet son mucho menos significativas, del -14%, -26% y -29%, respectivamente.</i></blockquote>En fin, el darwinismo en la evolución tecnológica también puede alcanzar al T-Rex.<br /><p></p><p><br /></p>Jorge Ubedahttp://www.blogger.com/profile/16457542679928501488noreply@blogger.com0tag:blogger.com,1999:blog-8758579.post-37136804320851282832022-09-11T12:43:00.003+02:002022-09-11T12:49:52.848+02:00No comprometa proyectos basados en Google<p>En una época en que en la cúspide de la pirámide de proveedores de tecnología, infraestructura y elaboración de software hay un muy reducido número de participantes (Microsoft, Amazon (AWS), Apple, Google (Alphabet), Oracle, Facebook (Meta)), la confiabilidad en sus servicios debería ser fundamental. Sin embargo, lo efectivo es el manejo monopólico de la evolución y la oferta en el mercado. Es muy común ver una pequeña empresa que destaca por un par de años en un nicho de mercado, hasta que es comprada por algún miembro prominente de la pirámide. Y esto no significa que el hallazgo diferenciador de tal empresa sea utilizado de manera multiplicadora por el comprador. Es más probable que marche a vía muerta en otro par de años. 
Los vendedores festejan el negocio, y quienes confiaron en la startup y adoptaron su producto están probablemente perdidos. </p><p>En este marco, Google destaca en un aspecto en particular: investigar, ofrecer un elemento novedoso en algún área de mercado, impulsarlo y entusiasmar a miles de adoptantes, y luego, de un día para otro, avisar que ese producto, proceso, o lo que sea, se discontinuará el año siguiente. Y los miles de usuarios entusiastas, los que demostraban lo importante que el nuevo elemento era, los early birds, tienen que comenzar a planear (a pérdida) cómo saldrán del corral con el menor daño posible. <a href="https://cloud.google.com/iot-core" target="_blank">Google Cloud IoT service</a> es su más reciente muestra de arbitrariedad en el manejo del mercado y de sus clientes. Es notable entrar a la página del producto, donde se describen sus servicios y su gran valor, mientras que en la primera línea de la página aparece un aviso sobreimpreso que advierte que el servicio se termina el 16 de agosto de 2023.</p><p><a href="https://www.infoq.com/news/2022/08/google-iot-core-discontinued/?forceSponsorshipId=3f839c6d-26b2-4d25-b079-50e419b48365" target="_blank">En InfoQ, donde he visto esta noticia</a>, se dice esto:</p><blockquote><p><i>Google Cloud IoT Core
is a fully-managed service that allows customers to connect, manage, and
ingest data from millions of globally dispersed devices quickly and
securely. Recently, Google announced discontinuing the service -
according to the <a href="https://cloud.google.com/iot/docs/resources">documentation</a>, the company will retire the service on the 16th of August, 2023. </i>
</p><p><i>The company <a href="https://www.infoq.com/news/2017/10/google-cloud-iot/">released</a>
the first public beta of IoT Core in 2017 as a competing solution to
the IoT offerings from other cloud vendors – Microsoft with <a href="https://azure.microsoft.com/en-us/services/iot-hub/#overview">Azure IoT Hub</a> and AWS with <a href="https://aws.amazon.com/iot-core/">AWS IoT Core</a>. In early 2018, the service became <a href="https://www.infoq.com/news/2018/02/google-cloud-iot-core-ga/">generally available</a>.
Now, the company emailed its customers with the message that "your
access to the IoT Core Device Manager APIs will no longer be available.
As of that date, devices will be unable to connect to the Google Cloud
IoT Core MQTT and HTTP bridges, and existing connections will be shut
down." Therefore, the lifespan of the service is a mere five years.</i></p><p><i>(...) In addition, over the years, <span style="background-color: #fcff01;">various companies have even shipped
dedicated hardware kits for those looking to build Internet of Things
(IoT) products around the managed service.</span> <a href="https://twitter.com/QuinnyPig">Cory Quinn</a>, a cloud economist at The Duckbill Group, <a href="https://twitter.com/QuinnyPig/status/1559370694063820800">tweeted</a>:</i></p>
<blockquote>
<p><i>I bet @augurysys is just super thrilled by their public <a href="https://cloud.google.com/customers/augury">Google Cloud IoT Core case study</a> at this point in the conversation. Nothing like a public reference for your bet on the wrong horse.</i></p>
</blockquote>
<p><i>Last year, InfoQ <a href="https://www.infoq.com/news/2021/08/google-enterprise-apis-label/">reported</a>
on Enterprise API and the "product killing" reputation of the company -
where the community also shared their concerns and sentiment. And
again, a year later, <a href="https://twitter.com/singhns">Narinder Singh</a>, co-founder, and CEO at LookDeep Health, as an example expressed a similar view in a <a href="https://twitter.com/singhns/status/1559243804758216704?s=20&t=pGGaLAk-xf9ANi3QOPt8Tw">tweet</a>:</i></p>
<blockquote>
<p><i>Can't believe how backwards @Google @googlecloud still is with
regards to the enterprise. Yes, they are better at selling now, but
they are repeatedly saying through their actions you should only use the
core parts of GCP.</i></p>
</blockquote><p><i> (...) Lastly, already a Google Partner, ClearBlade <a href="https://www.clearblade.com/iot-core/">announced</a> a full-service replacement for the IoT Core with their service, including a <a href="https://www.clearblade.com/wp-content/uploads/2022/08/ClearBlade-Google-IoT-Core-Migration_Website.pdf">migration path</a> from Google IoT Core to ClearBlade. An option for customers, however, in the Hacker News <a href="https://news.ycombinator.com/item?id=32475298">thread</a>, a respondent, patwolf, stated:</i></p>
<blockquote>
<p><i>I've been successfully using Cloud IoT for a few years. Now I need to
find an alternative. There's a vendor named ClearBlade that announced
today a direct migration path, but at this point, I'd rather roll my
own.</i></p>
</blockquote><p></p></blockquote><p>¿Cuántas veces ha pasado esto antes? ¿Qué garantías de prosperar tiene un negocio si ésta es la confiabilidad de su proveedor? Como en un automóvil, utilice una "conducción defensiva", y sepa con quién negocia: tenga un par de vías de escape, y si puede, evite al gigante.<br /></p>Jorge Ubedahttp://www.blogger.com/profile/16457542679928501488noreply@blogger.com0tag:blogger.com,1999:blog-8758579.post-44597416944280064232022-08-07T19:54:00.000+02:002022-08-07T19:54:07.985+02:00Todd Montgomery: Unblocked by design<p><br /></p><p><a href="https://www.infoq.com/presentations/problems-async-arch/" target="_blank">Leído en InfoQ</a>, que publica una presentación ofrecida en QCon Plus, en noviembre de 2021. Un punto de vista lejano a cómo he trabajado siempre, pero con argumentos para atenderlo. Todd Montgomery aboga en favor del diseño asincrónico de los procesos, considerando en primer lugar que la secuencialidad es ilusoria:</p><div class="notesWrapper">
<p></p><blockquote><p>All of our systems provide this illusion of sequentiality, this
program order of operation that we really hang our hat on as developers.
<span style="background-color: #fcff01;"> We look at this and we can simplify our lives by this illusion, but be
prepared, it is an illusion</span>. That's because a compiler can reorder,
runtimes can reorder, CPUs can reorder. <span style="background-color: #fcff01;">Everything is happening in
parallel, not just concurrently, but in parallel on all different parts
of a system, operating systems as well as other things</span>. It may not be
the fastest way to just do step one, step two, step three. It may be
faster to do steps one and two at the same time or to do step two before
one because of other things that can be optimized. By imposing order on
that we can make some assumptions about the state of things as we move
along. Ordering has to be imposed. This is done by things in the CPU
such as the load/store buffers, providing you with this ability to go
ahead and store things to memory, or to load them asynchronously. <span style="background-color: #fcff01;">Our
CPUs are all asynchronous.</span></p>
<p>Storages are exactly the same way, different levels of caching give
us this ability for multiple things to be optimized along that path. OSs
with virtual memory and caches do the same thing. Even our libraries do
this with the ideas of promises and futures. The key is to wait. All of
this provides us with this illusion that it's ok to wait. It can be,
but that can also have a price, because the operating system can
de-schedule. <span style="background-color: #fcff01;">When you're waiting for something, and you're not doing any
other work, the operating system is going to take your time slice. It's
also lost opportunity to do work that is not reliant on what you're
waiting for. </span>In some application, that's perfectly fine, in others it's
not. By having locks and signaling in that path, they do not come for
free, they do impose some constraints.</p></blockquote><p></p>
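La idea central de Montgomery, hacer otro trabajo mientras se espera, puede ilustrarse con un pequeño ejemplo. Es un esbozo hipotético mío (no es código de la charla), en Python con asyncio: compara una secuencia bloqueante de peticiones con un diseño que las inicia todas y correlaciona cada respuesta con su petición mediante un id.

```python
import asyncio

# Esbozo hipotético: fake_request simula un viaje de ida y vuelta por red;
# los await son los puntos donde el bucle de eventos puede hacer otro trabajo.

async def fake_request(req_id: int) -> tuple[int, str]:
    await asyncio.sleep(0.01)  # simula la latencia de la red
    return req_id, f"response-{req_id}"

async def sequential(n: int) -> list[tuple[int, str]]:
    # Estilo "sync": cada petición bloquea hasta que llega su respuesta.
    results = []
    for i in range(n):
        results.append(await fake_request(i))
    return results

async def overlapped(n: int) -> list[tuple[int, str]]:
    # Estilo "async": se inician todas las peticiones y luego se emparejan
    # las respuestas con su petición por el id de correlación. En un sistema
    # real las respuestas pueden llegar en cualquier orden; por eso cada una
    # lleva su id, y aquí reordenamos explícitamente antes de devolver.
    tasks = [asyncio.create_task(fake_request(i)) for i in range(n)]
    responses = await asyncio.gather(*tasks)
    return sorted(responses, key=lambda pair: pair[0])

if __name__ == "__main__":
    print(asyncio.run(overlapped(3)))
```

Las dos variantes producen las mismas respuestas; la diferencia es que la versión solapada espera en total aproximadamente un solo viaje de ida y vuelta, en lugar de n.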
</div><p> Ubicando el contexto primero: </p><p></p><blockquote>When we talk about sequential or synchronous or blocking, we're talking
about the idea that you do some operation. You cannot continue to do
things until something has finished or things like that. This is more
exaggerated when you go across an asynchronous binary boundary. It could
be a network. It could be sending data from one thread to another
thread, or a number of different things. A lot of these things make it
more obvious, as opposed to asynchronous or non-blocking types of
designs where you do something and then you go off and do something
else. Then you come back and can process the result or the response, or
something like that.</blockquote><p></p><p>Cómo ve la sincronía:</p><p></p><blockquote>I'll just use as an example throughout this, because it's easy to talk
about, the idea of a request and a response. With sync or synchronous,
you would send a request, there'll be some processing of it. Optionally,
you might have a response. Even if the response is simply just to
acknowledge that it has completed. It doesn't always have to involve
having a response, but there might be some blocking operation that
happens until it is completed. A normal function call is normally like
this. <span style="background-color: #fcff01;">If it's sequential operation, and there's not really anything else
to do at that time, that's perfectly fine. If there are other things
that need to be done now, or it needs to be done on something else,
that's a lost opportunity</span>.</blockquote><p></p><p>Y la asincronía:</p><p></p><blockquote>Async is more about the idea of initiating an operation, having some
processing of it, and you're waiting then for a response. This could be
across threads, cores, nodes, storage, <span style="background-color: #fcff01;">all kinds of different things
where there is this opportunity to do things while you're waiting for
the next step, or that to complete or something like that. The idea of
async is really, what do you do while waiting? </span>It's a very big part of
this. Just as an aside, when we talk about event driven, we're talking
about actually the idea of on the processing side, you will see a
request come in. We'll denote that as OnRequest. On the requesting side,
when a response comes in, you would have OnResponse, or OnComplete, or
something like that. We'll use these terms a couple times throughout
this.</blockquote><p></p><p> El propósito de Montgomery es procesar asincrónicamente y sacar partido de los tiempos muertos:</p><p></p><blockquote><p>The key here is while something is processing or you're waiting, is to
do something, and that's one of the takeaways I want you to think of.
It's a lost opportunity. What can you do while waiting and make that
more efficient? The short answer is, while waiting, do other work.
Having the ability to actually do other stuff is great. The first thing
is sending more requests, as we saw. The sequence here is, how do you
distinguish between the requests? The relationship here is you have to
correlate them. You have to be able to basically identify each
individual request and individual response. That correlation gives rise
to having things which are a little bit more interesting. The ordering
of them starts to become very relevant. You need to figure out things
like how to handle things that are not in order. You can reorder them.
You're just really looking at the relationship between a request and a
response and matching them up. It can be reordered in any way you want,
to make things simple. It does provide an interesting question of, what
happens if you get something that you can't make sense of. Is it
invalid? Do you drop it? Do you ignore it? In this case, you've sent
request 0, and you've got a response for 1. In this point, you're not
sure exactly what the response for 1 is. That's handling the unexpected.</p><p>(...) This is an async duty cycle. This looks like a lot of the duty cycles
that I have written, and I've seen written and helped write, which is,
you're basically sitting in a loop while you're running. You usually
have some mechanism to terminate it. You usually poll inputs. By
polling, I definitely mean going to see if there's anything to do, and
if not, you simply return and go to the next step. You poll if there's
input. You check timeouts. You process pending actions. The more
complicated work is less in the polling of the inputs and handling them,
it's more in the checking for timeouts, processing pending actions,
those types of things. Those are a little bit more complex. Then at the
end, you might idle waiting for something to do. Or you might just say,
ok, I'm going to sleep for a millisecond, and you come right back. You
do have a little bit of flexibility here in terms of idling, waiting for
something to do.</p></blockquote><p> </p><p> Realmente, estos conceptos parecen complicados de aplicar en un proceso usual de trabajo, y más viables en la construcción de trabajos de nivel de sistema operativo. El interlocutor de Montgomery (Printezis) lo ve justamente así: <i>You did talk about the duty cycle and how you would write
it. In reality, how much a developer would actually write that, but
instead use a framework that will do most of the work for them?</i></p><p>La respuesta de Montgomery:<i> </i></p><p></p><blockquote>(...) Beyond that, I mean, patterns and antipatterns, I think, learning
queuing theory, which may sound intimidating, but it's not. Most of it
is fairly easy to absorb at a high enough level that you can see far
enough to help systems. It is one of those things that I think pays for
itself. Just like learning basic data structures, we should teach a
little bit more about queuing theory and things behind it. Getting an
intuition for how queues work and some of the theory behind them goes a
huge way, when looking at real life systems. At least it has for me, but
I do encourage people to look at that. Beyond that, technologies
frameworks, I think by spending your time more looking at what is behind
a framework. In other words, the concepts, you do much better than just
looking at how to use a framework. That may be front and center,
because that's what you want to do, but go deeper. Go deeper into, what
is it built on? Why does it work this way? Why doesn't it work this
other way? Asking those questions, I think you'll learn a tremendous
amount. (...)</blockquote><p></p><p>The conversation goes on and drifts into other related matters. Recommended reading, and rereading; it will be worth coming back to more than once.<br /></p><p>I see a way of approaching processes far removed from the way I have usually worked, but I must admit that in the last five or six years conceptual changes have been overflowing, and one could say we are now in a fifth or sixth generation, far from what we called the fourth generation twenty or thirty years ago. Time will show what has proved durable, and what has headed down a dead end. I am willing to listen.<br /></p><p> <br /></p><p><br /></p>Jorge Ubedahttp://www.blogger.com/profile/16457542679928501488noreply@blogger.com0tag:blogger.com,1999:blog-8758579.post-77233654694296730442022-08-07T08:43:00.001+02:002022-08-10T10:23:41.824+02:00Nightmares in the cloud<p> Forrest Brazeal, currently a Google Cloud employee (<i>An AWS Hero turned Google Cloud employee, I explore the technical and
philosophical differences between the two platforms. My biases are
obvious, but opinions are my own</i>) <a href="https://cloudirregular.substack.com/p/the-cloud-billing-risk-that-scares" target="_blank">pointed out in July</a> that every cloud developer's worst nightmare is a recursive call in a test run that escalates the account's bill from a few dollars/euros into the "thousands" (say 50,000). And a recursive call that spawns thousands of processed invocations can happen in any test:</p><p><span></span></p><blockquote><p><span>AWS calls it </span><a href="https://docs.aws.amazon.com/lambda/latest/operatorguide/recursive-runaway.html" rel="">the recursive runaway problem</a><span>.
I call it the Hall of Infinite Functions - imagine a roomful of mirrors
reflecting an endless row of Lambda invocations. It’s pretty much the
only cloud billing scenario that gives me nightmares as a developer, for
two reasons:</span></p><ul><li><p><span>It can happen </span><i>so fast. </i><span>It’s
the flash flood of cloud disasters. This is not like forgetting about a
GPU instance and incurring a few dollars per hour in linearly
increasing cost. You can go to bed with a $5 monthly bill and wake up
with a $50,000 bill - </span><b>all before your budget alerts have a chance to fire</b><span>.</span></p></li><li><p>There’s
no good way to protect against it. None of the cloud providers has
built mechanisms to fully insulate developers from this risk yet. </p></li></ul></blockquote><p>Brazeal points to an incident described in detail by its own victims (<a href="https://blog.tomilkieway.com/72k-1/" target="_blank"><i><span style="font-size: small;"><span style="font-weight: normal;">We Burnt $72K testing Firebase + Cloud Run and almost went Bankrupt</span></span></i></a>), which gives an idea of the problem. In that case the bill went from an expected 7 dollars to 72,000...</p><p>Sudeep Chauhan, the protagonist of that incident, after putting his house in order, later wrote <a href="https://blog.sudcha.com/guide-to-cloud/" target="_blank">a list of recommendations</a> for working with a cloud service provider.</p><p>Note: Renato Losio, in InfoQ, <a href="https://www.infoq.com/news/2022/08/recursive-serverless-functions/" target="_blank">picks up and extends</a> Brazeal's article, recalling <a href="https://www.infoq.com/news/2021/05/aws-billing-limits/" target="_blank">another Brazeal article</a> on the AWS free tier.<br /></p><p><br /></p><p> </p>Jorge Ubedahttp://www.blogger.com/profile/16457542679928501488noreply@blogger.com0tag:blogger.com,1999:blog-8758579.post-41443316065016527382022-08-06T09:07:00.003+02:002022-08-06T09:07:52.644+02:00You probably don't need microservices<p> <a href="https://itnext.io/you-dont-need-microservices-2ad8508b9e27" target="_blank">Matthew Spence, on ITNEXT</a>, swimming against the enormous wave of microservices hype, develops a consistent set of arguments that puts the importance and necessity of microservices in perspective (You don't need microservices).
I will single out just the argument about the simplicity of microservices, and the advantages derived from it:<br /></p><h1 class="lf lg jc bn lh li lj lk ll lm ln lo lp lq lr ls lt lu lv lw lx ly lz ma mb mc gh" data-selectable-paragraph="" id="0df3"><span style="font-size: small;"><span style="font-family: arial;"></span></span></h1><blockquote><h1 class="lf lg jc bn lh li lj lk ll lm ln lo lp lq lr ls lt lu lv lw lx ly lz ma mb mc gh" data-selectable-paragraph="" id="0df3"><span style="font-weight: normal;"><span style="font-size: small;"><span style="font-family: arial;">"Simpler, Easier to Understand Code"</span></span></span></h1><p class="pw-post-body-paragraph jz ka jc kb b kc md ke kf kg me ki kj kk mf km kn ko mg kq kr ks mh ku kv kw iv gh" data-selectable-paragraph="" id="b1d8"><span style="background-color: #fcff01;">This benefit is at best disingenuous, at worse, a bald-faced lie.</span></p><p class="pw-post-body-paragraph jz ka jc kb b kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ks kt ku kv kw iv gh" data-selectable-paragraph="" id="d0e8">Each service is simpler and easier to understand. Sure. <b class="kb jd">The system as a whole is far more complex and harder to understand.</b> You haven’t removed the complexity; you’ve increased it and then transplanted it somewhere else.</p></blockquote><p></p><blockquote>(...) Although microservices enforce modularization, there is no guarantee it is <em class="mi">good</em> modularization. <span style="background-color: #fcff01;">Microservices can easily become a tightly coupled “distributed monolith” if the design isn’t fully considered.</span></blockquote> <p></p><p></p><p class="pw-post-body-paragraph jz ka jc kb b kc md ke kf kg me ki kj kk mf km kn ko mg kq kr ks mh ku kv kw iv gh" data-selectable-paragraph="" id="e485"></p><blockquote><p class="pw-post-body-paragraph jz ka jc kb b kc md ke kf kg me ki kj kk mf km kn ko mg kq kr ks mh ku kv kw iv gh" data-selectable-paragraph="" id="e485">(...) The
choice between monolith and microservices is often presented as two
mutually exclusive modes of thought. Old school vs. new school. Right or
wrong. One or the other.</p><p class="pw-post-body-paragraph jz ka jc kb b kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ks kt ku kv kw iv gh" data-selectable-paragraph="" id="d018">The
truth is they are <span style="background-color: #fcff01;">both valid approaches with different trade-offs</span>. The
correct choice is highly context-specific and must include a broad range
of considerations.</p><p class="pw-post-body-paragraph jz ka jc kb b kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ks kt ku kv kw iv gh" data-selectable-paragraph="" id="e587">The
choice itself is a false dichotomy and, in certain circumstances,
should be made on a feature-by-feature basis rather than a single
approach for an entire organization’s engineering team.</p><p class="pw-post-body-paragraph jz ka jc kb b kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ks kt ku kv kw iv gh" data-selectable-paragraph="" id="57ac">Should you consider microservices?</p><p class="pw-post-body-paragraph jz ka jc kb b kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ks kt ku kv kw iv gh" data-selectable-paragraph="" id="f7ef">As is often the case, it depends. You might genuinely benefit from a microservices architecture.</p><p class="pw-post-body-paragraph jz ka jc kb b kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ks kt ku kv kw iv gh" data-selectable-paragraph="" id="6fbd">There
are certainly situations where they can pay their dues, <span style="background-color: #fcff01;">but if you are a
small to medium-sized team or an early-stage project:</span></p><p class="pw-post-body-paragraph jz ka jc kb b kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ks kt ku kv kw iv gh" data-selectable-paragraph="" id="4859"><span style="background-color: #fcff01;"><b class="kb jd">No, you probably don’t need microservices.</b></span></p></blockquote><p class="pw-post-body-paragraph jz ka jc kb b kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ks kt ku kv kw iv gh" data-selectable-paragraph="" id="4859"><b class="kb jd"></b></p><p></p><p> </p><p></p><p class="pw-post-body-paragraph jz ka jc kb b kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ks kt ku kv kw iv gh" data-selectable-paragraph="" id="d0e8"></p>Jorge Ubedahttp://www.blogger.com/profile/16457542679928501488noreply@blogger.com0tag:blogger.com,1999:blog-8758579.post-25837895687643798902022-07-31T08:52:00.007+02:002022-07-31T12:28:26.361+02:00Liam Allan talks about Node on IBM i<p> <a href="https://worksofbarry.com/" target="_blank">Liam Allan</a>, like <a href="https://scottklement.com/" target="_blank">Scott Klement</a>, has given a formidable boost to the <a href="https://en.wikipedia.org/wiki/IBM_i" target="_blank">IBM i</a> (AKA AS/400, iSeries), exploring, popularizing, and exploiting the successive technology changes the platform has seen over the years. Liam's comments on Node come from <a href="https://techchannel.com/Trends/06/2022/liam-allan-techtalk" target="_blank">the interview Charles Guarino conducts with him on TechChannel.</a> Liam's involvement, though recent, has brought radical changes in how the IBM i is approached, starting with its program editor. It must be said that the environment and practices around the IBM i have historically been rather conservative, fitting for a class of machines that used to be the processing core of the companies using them. Guarino says about this aspect:<i> I still think there’s still a lot of newbies—even the most seasoned RPG
developers are still newbies—and open-source makes them nervous, perhaps
because it’s a whole different paradigm, a whole different vernacular.
Everything about it is different, yet obviously there are so many
similarities, but the terminology is very different</i>. Klement and those who followed him, and now Allan, have brought a renewal and modernization that is more than convenient: it is necessary.<br /></p><p>For my part, I keep turning over how to use this with Plex. Klement has already strengthened its integration through his proposals for integrating the Java and C/C++ languages via ILE.<br /></p><p> What was said about Node:</p><blockquote><i><b>Charlie:</b> (...) So Liam, I do have a lot
of things that I want to talk to you about, but when I think of you
lately what comes to my mind is Node. I mean I kind of associate you
with just Node and how you really are really running with that
technology, especially on IBM i, but I think there are a lot of people
who don’t quite understand where that fits in, what Node actually is and
how it fits on your platform. So what can you say about that in
general?<br />
<br />
<b>Liam:</b> Absolutely. So I mean, there’s a few points to be
made. I guess I’ll start with the fact that you know, it is 80% of my
working life is writing typescript and Javascript. So I spend most of my
days in it now, which is great. A few years ago, it was more like 50%
and each year it’s growing more and more. So I usually focus on how it
can integrate with IBM i. So you know having Node.js code, whether it’s
typescript or Javascript talking to IBM i via the database—so, calling
programs, fetching data, updating data; you know, the minimal standard
kind of driver type stuff that you do, crud, things like that. What I
especially like about Node on IBM i is that it is made for high
input/outputs. It’s great at handling large volumes of data and most
people that are using IBM i tend to have tons of data, right? Db2 for i
has been around for centuries at this point; it’s older than I am, and I
can make that joke. No one else can make that joke but I can make it
and you know it’s been around for the longest time. And so people have
got all of this data and in my opinion Node.js is just a great way to
express that data—you know, via an API. I think it’s fast. It’s got high
throughput and yeah, it’s asynchronous in its standard. It’s easy to
use, it’s easy to deploy, it’s easy to write code for especially. One of
the reasons I like is the fact that I can have something working within
20 minutes. It’s a fantastic piece of technology and it’s been out for a
while. I mean it’s been out for like 10 years, 10 years plus at this
point. It’s just fun to use. I really enjoy it and I encourage other
people to use it too.</i></blockquote><p></p>Jorge Ubedahttp://www.blogger.com/profile/16457542679928501488noreply@blogger.com0tag:blogger.com,1999:blog-8758579.post-28672689495348975682022-07-03T20:23:00.007+02:002022-07-03T20:23:58.347+02:00The "Legacy" concept and the "microservice" carrot<p> What follows is an "ancient" article, <a href="https://www.itjungle.com/2022/04/25/beware-the-hype-of-modern-tech/" target="_blank">from April 25 of this year</a>. I copy it here, with comments where needed, because it remains strictly current, both in the IBM i universe and in general:</p><p></p><blockquote><p>Beware The Hype Of Modern Tech</p><p>Many IBM i shops <span style="background-color: #fcff01;">are under the gun to modernize their applications as
part of a digital transformation</span> initiative. If the app is more than 10
or 15 years old and doesn’t use the latest technology and techniques,
it’s considered a legacy system that must be torn down and rebuilt
according to current code. But there are substantial risks associated
with these efforts – not the least of which that the modern method is
essentially incompatible with the IBM i architecture as it currently
exists. IBM i shops should be careful when evaluating these new
directions.</p>
<p>Amy Anderson, a modernization consultant working in <a href="http://www.ibm.com" rel="noopener" target="_blank">IBM</a>’s Rochester, Minnesota, lab, says she <a href="https://www.itjungle.com/2021/06/30/so-you-want-to-do-containerized-microservices-in-the-cloud/" rel="noopener" target="_blank">was joking last year</a>
when she said “every executive says they want to do containerized
microservices in the cloud.” If Anderson is thinking about a future in
comedy, she might want to rethink her plans, because what she says isn’t
a joke; it’s the truth.</p>
<p>Many, if not most, tech executives these days are fully behind the
drive to run their systems as containerized microservices in the cloud.
They have been told by the analyst firms and the mainstream tech press
and the cloud giants that the future of business IT is breaking up
monolithic applications into lots of different pieces that communicate
through microservices, probably REST. All these little apps will live in
containers, likely managed by Kubernetes, enabling them to scale up and
down seamlessly on the cloud, likely <a href="http://www.aws.amazon.com" rel="noopener" target="_blank">AWS</a> or <a href="http://www.azure.microsoft.com" rel="noopener" target="_blank">Microsoft Azure</a>.</p>
<p>The “containerized microservices in the cloud” mantra<span style="background-color: #fcff01;"> has been repeated so often, many just accept it as the gospel truth</span>. <em>Of course</em> that is the future of business tech! they say. <em>How else</em>
could we possibly run all these applications?<span style="background-color: #fcff01;"> It’s accepted as an
article of faith that this is the right approach</span>. Whether a company is
running homegrown software or a packaged app, they’re adamant that the
old ways must be left behind to embrace the glorious future that is
containerized microservices running in the cloud.</p><p></p><p> The reality is that the supposedly glorious future is today a pipe
dream, at least when it comes to IBM i. Let’s start with Kubernetes, the
container orchestration system open sourced by Google in 2014, which is
a critical component of running in the “cloud native” way. (...)<br /></p><p>While Kubernetes solves one problem – eliminating the complexity
inherent in deploying and scaling all the different components that go
into a given application –<span style="background-color: #fcff01;"> it introduces a lot more complexity to the
user</span>. Running a Kubernetes cluster is hard. If you’ve talked to anybody
who has tried to do it themselves, you’ll quickly find out that it’s
extremely difficult. <span style="background-color: #fcff01;">It requires a whole new set of skills that most IT
professionals do not have</span>. The cloud giants, of course, have these folks
in droves, but they’re practically non-existent everywhere else.</p><p>ISVs are eager to adopt Kubernetes as the new <em>de facto</em> operating system for one very good reason: because it helps them run their applications on the cloud. (...) </p><p>For greenfield development, the cloud can make a lot of sense. Customers
can get up and running very quickly on a cloud-based business
application, and leave all the muss and fuss of managing hardware to the
cloud provider. But there are downsides too, such as <span style="background-color: #fcff01;">no ability to
customize the application</span>. For the vendors, the fact that customers
cannot customize goes hand in hand with their inability to fall behind
on releases. (Surely the vendor passes whatever benefit it receives
through collective avoidance of technical debt back to you, dear
customer.)</p><p>The Kubernetes route makes less sense for established products with
an established installed base. <span style="background-color: #fcff01;">It takes quite a bit of work to adapt an
existing application to run inside a Docker container and have it
managed in a Kubernetes pod</span>. It can be done, but it’s a heavy lift. <span style="background-color: #fcff01;">But
when it comes to critical transactional systems, it likely becomes more
of a full-blown re-implementation than a simple upgrade</span>. There are no
free lunches in IT.</p>
<p>When it comes to IBM i, lots of existing customers who are running
their ERP systems on-prem are not ready to move their production
business applications to the cloud. Notice what happened when <a rel="noopener" target="_blank">Infor</a>
stopped rolling out enhancements for the M3 applications for IBM i
customers. Infor wanted these folks to adopt M3 running on X86 servers
running in AWS cloud. Many of them balked at this forced
re-implementation, and now <a href="https://www.itjungle.com/2022/04/20/infor-cm3-to-provide-on-prem-alternative-to-cloudy-m3/" rel="noopener" target="_blank">Infor is rolling out a new offering</a> called CM3 that <span style="background-color: #fcff01;">recognizes that customers want to keep their data on prem in their Db2 for i server.</span></p><p>Other ERP vendors have taken a similar approach to the cloud. <a href="http://www.sap.com" rel="noopener" target="_blank">SAP</a>
wants its Business Suite customers to move to S/4 HANA, which is a
containerized, microservice-based ERP running in the cloud. The German
ERP giant has committed to supporting on-prem Business Suite customers
<span style="background-color: #fcff01;">until 2027</span>, and through 2030 with an extended maintenance agreement.
After that, the customers must be on S/4 HANA, <span style="background-color: #fcff01;">which at this point
doesn’t run on IBM i</span>.</p>
<p>Will the 1,500-plus customers who have benefited from running SAP on
IBM i for the past 30 years be willing to give up their entire legacy
and <a href="https://www.itjungle.com/2021/03/17/sap-on-ibm-i-to-s-4-hana-migration-no-need-to-rush/" rel="noopener" target="_blank">begin anew in the S/4 HANA cloud</a>?
It sounds like a risky proposition, especially <span style="background-color: #fcff01;">given the fact that much
of the functionality that currently exists in Business Suite has yet to
be re-constructed in S/4 HANA. Is this an acceptable risk?</span></p><p>Kubernetes is just part of the problem, but it’s a big one, because at
this point IBM i <span style="background-color: #fcff01;">doesn’t support Kubernetes</span>. It’s not even clear what
Kubernetes running on IBM i would look like, considering all the
virtualization features that already exist in the IBM i and Power
platform. (What would become of LPARs, subsystems, and iASPs? How would
any of that work?) In any event, the executives in charge of IBM i have
told <em>IT Jungle</em> there is no demand for Kubernetes among IBM i customers. But that could change.</p></blockquote><p>Particularly interesting is the comment on the plans of Jack Henry & Associates:</p><p></p><blockquote><p>Jack Henry & Associates <a href="https://www.itjungle.com/2022/03/28/inside-jack-henrys-long-term-modernization-roadmap/" rel="noopener" target="_blank">officially unleashed its long-term roadmap</a>
earlier this year, but it had been working on the plan for years. The
company has been a stalwart of the midrange platform for decades,
reliably processing transactions for more than a thousand banks and
credit unions running on its RPG-based core banking systems. It is also
one of the biggest private cloud providers in the Power Systems arena,
as it runs the Power machinery powering (pun intended) hundreds of
customer applications.</p>
<p><span style="background-color: #fcff01;">The future roadmap for Jack Henry is (you guessed it) containerized
microservices in the cloud</span>. The company explains that it doesn’t make
sense to develop and maintain about 100 duplicate business functions
across four separate products, and so it will slowly replace those
redundant components that today make up its monolithic packages like
Silverlake with smaller, bite-sized components that run in the
cloud-native fashion on Kubernetes and connect and communicate via
microservices.</p>
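<p>The consolidation described above can be sketched in miniature, not with Jack Henry's actual code: instead of four products each carrying a private copy of the same business rule, one shared component serves them all. The interest rule and product names below are hypothetical.</p>

```javascript
// Hypothetical sketch of consolidating duplicated business functions.
// "Before": two products, each with its own copy of the same rule.
const productBefore1 = { dailyInterest: (bal, rate) => (bal * rate) / 365 };
const productBefore2 = { dailyInterest: (bal, rate) => bal * (rate / 365) }; // a second, near-identical copy

// "After": a single shared business-function component. Each product
// keeps only a thin adapter that calls it (in the roadmap's terms, over
// a microservice boundary rather than an in-process call).
function sharedDailyInterest(balance, annualRate) {
  return (balance * annualRate) / 365;
}

const products = ["core-1", "core-2", "core-3", "core-4"].map((name) => ({
  name,
  dailyInterest: sharedDailyInterest,
}));

// Every product now gives the same answer by construction, and the rule
// is maintained in exactly one place.
const quotes = products.map((p) => p.dailyInterest(1000, 0.0365));
```

<p>The trade-off, as the article keeps stressing, is that the "one place" now lives behind a network hop, with all the operational machinery that implies.</p>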
<p>It’s not a bad plan, if you’ve been listening to the IT analysts and
the press for the past five years. Jack Henry is doing exactly what
they’ve been espousing as the modern method. But how does it mesh with
its current legacy? <span style="background-color: #fcff01;">The reality is that none of Jack Henry’s future
software will be able to run on IBM i. Db2 for i is not even one of the
long-term options for a database; instead it selected PostgreSQL, SQL
Server, and MongoDB</span> (depending on which cloud the customer is running
in).</p>
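<p>Supporting PostgreSQL, SQL Server, or MongoDB interchangeably implies a storage abstraction along these lines: business code programmed against a small repository interface, with a per-deployment adapter plugged in. This is a sketch with illustrative names and an in-memory stand-in, not code from the article.</p>

```javascript
// A minimal repository abstraction. Real adapters would wrap a Postgres,
// SQL Server, or MongoDB driver; the in-memory one keeps the sketch
// self-contained and runnable.
class InMemoryCustomerRepo {
  constructor() { this.rows = new Map(); }
  save(customer) { this.rows.set(customer.id, customer); return customer; }
  findById(id) { return this.rows.get(id) || null; }
}

// Business logic depends only on save/findById, never on a specific
// engine or its SQL/BSON dialect.
function registerCustomer(repo, id, name) {
  if (repo.findById(id)) throw new Error("duplicate customer " + id);
  return repo.save({ id, name });
}

// Per-deployment wiring: a real system would pick an adapter here based
// on configuration (which cloud, which database).
function repoFor(config) {
  // e.g. config.dbKind === "postgres" | "sqlserver" | "mongodb"
  return new InMemoryCustomerRepo();
}

const repo = repoFor({ dbKind: "in-memory" });
registerCustomer(repo, "c-1", "Example Credit Union");
```

<p>What such an abstraction cannot paper over is the migration itself: data sitting in Db2 for i still has to be moved and re-modeled before any of those adapters can reach it.</p>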
<p>Jack Henry executives acknowledge that there’s not much overlap
between its roadmap and the IBM i roadmap at this point in time. <span style="background-color: #fcff01;">But
they say that they’re moving slowly and won’t have all of the 100 or so
business functions fully converted into containerized microservices for
15 years – and then it will likely take another 15 years to get
everybody moved over</span>. So it’s not a pressing issue at the moment.</p>
<p>Maybe Kubernetes will run on IBM i by then? Maybe there will be
something new and different that eliminates the technological mismatch?
Who knows?</p>
<p><span style="background-color: #fcff01;">The IBM i system is a known entity, with known strengths and
weaknesses. Containerized microservices in the cloud is an unknown
entity, and its strengths and weaknesses are still being determined.
While containerized microservices running in the cloud may ultimately
win out as the superior platform for business IT, that hasn’t been
decided yet.</span></p><p>For the past 30 years, the mainstream IT world has leapt from one
shiny object to the next, <span style="background-color: #fcff01;">convinced that it will be The Next Big Thing</span>.
(TPM, the founder of this publication and its co-editor with me, has a
whole different life as a journalist and analyst chasing this, called <a href="https://www.nextplatform.com/" rel="noopener" target="_blank"><em>The Next Platform</em></a>,
not surprisingly.) Over the same period, the IBM i platform has
continued more or less on the same path, <span style="background-color: #fcff01;">with the same core
architecture, running the same types of applications in the same
reliable, secure manner.</span></p>
<p>The more hype is lavished upon containerized microservices in the
cloud, the more it looks like just the latest shiny object, which will
inevitably be replaced by the next shiny object. <span style="background-color: #fcff01;">Meanwhile, the IBM i
server will just keep ticking</span>.</p></blockquote><p></p><p> There have undoubtedly been spectacular changes in just a few years, the last four or five, and very powerful tools and resources are now available. But for a running company or institution, a change has to be weighed carefully, avoiding the risk of stepping into the void. A change that demands new methodologies, new languages, new platforms, new communications? Development on the very latest of the latest, without the proof of robust resources seasoned over several years?<br /></p>Jorge Ubedahttp://www.blogger.com/profile/16457542679928501488noreply@blogger.com0tag:blogger.com,1999:blog-8758579.post-32197484396494833372022-06-05T23:33:00.000+02:002022-06-05T23:33:05.792+02:00China, Gitee, GitHub<p> In MIT's Technology Review, on May 30, <a href="https://www.technologyreview.com/2022/05/30/1052879/censoring-china-open-source-backfire/" target="_blank">Zeyi Yang writes</a><br /></p><p></p><blockquote><i>Earlier this month, thousands of software developers in China woke up to
find that their open-source code hosted on Gitee, a state-backed
Chinese competitor to the international code repository platform GitHub,
had been locked and hidden from public view.<br /></i>
<i><br />
Gitee released a statement later that day explaining that the locked
code was being manually reviewed, as all open-source code would need to
be before being published from then on. The company “didn’t have a
choice,” it wrote. Gitee didn’t respond to MIT Technology Review, but it
is widely assumed that the Chinese government had imposed yet another
bit of heavy-handed censorship.<br /></i>
<i><br />
For the open-source software community in China, which celebrates
transparency and global collaboration, the move has come as a shock.
Code was supposed to be apolitical. Ultimately, these developers fear it
could discourage people from contributing to open-source projects, and
China’s software industry will suffer as a result</i></blockquote><p> First of all, one more display of the dependence on big actors that exists in the Open Source world. Going further, though, it is an indication of how limited the capacity for choice is in the world of technology, and in the ideas and cultures carried through it. The problem the Chinese developers discovered with their own "official" repository could potentially repeat itself in the Western world, under the seal of the big tech companies that directly or indirectly dominate the open, "public" repositories and the cloud infrastructures and services. Neither Google, nor Microsoft, nor Amazon has demonstrated neutrality over its history, and they have starred in decades of lawsuits over unfair practices. Entrusting your codebase, or your applications, to this framework is probably not the most appropriate choice.<br /></p><p></p>Jorge Ubedahttp://www.blogger.com/profile/16457542679928501488noreply@blogger.com0tag:blogger.com,1999:blog-8758579.post-17603587924184353032022-04-24T08:40:00.002+02:002022-04-24T08:43:19.737+02:00Quantum computing, or hype?<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiijixhGaRG1Jj9scdZWsLp0Oej29rRCVke-TpiDCDF-SYDoP_MeNOsLd4PFnPYSbfESI3bYlSJS5agv1R3VH2YAsFQU9eogluP0cprl-pz670Dprt74wJFxHB4dErpfAx6kK_ef8xVZvBZLnI6jnQ7t-Idjnm6r_fJTVw3Ktrt6c7w2No7fw/s1024/Quantum-blog_ChetanNayak_03-2022_1400x788-1024x576.jpg" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="576" data-original-width="1024" height="180" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiijixhGaRG1Jj9scdZWsLp0Oej29rRCVke-TpiDCDF-SYDoP_MeNOsLd4PFnPYSbfESI3bYlSJS5agv1R3VH2YAsFQU9eogluP0cprl-pz670Dprt74wJFxHB4dErpfAx6kK_ef8xVZvBZLnI6jnQ7t-Idjnm6r_fJTVw3Ktrt6c7w2No7fw/s320/Quantum-blog_ChetanNayak_03-2022_1400x788-1024x576.jpg" width="320"
/></a></div><br /> Sankar Das Sarma, a physics researcher and director of the CMTC (Condensed Matter Theory Center) at the University of Maryland, publishes in Technology Review <a href="https://www.technologyreview.com/2022/03/28/1048355/quantum-computing-has-a-hype-problem/?truid=f4ab5e0d3d60220ce750107ea450c027&mc_cid=cd5654711a&mc_eid=85143887f7" target="_blank">an article</a> cooling expectations around quantum computing:<br /><p></p><p style="margin-left: 40px; text-align: left;"><i>A decade and more ago, I was often asked when I thought a real quantum
computer would be built. (It is interesting that I no longer face this
question as quantum-computing hype has apparently convinced people that
these systems already exist or are just around the corner). My
unequivocal answer was always that I do not know. Predicting the future
of technology is impossible—it happens when it happens. One might try to
draw an analogy with the past. It took the aviation industry more than
60 years to go from the Wright brothers to jumbo jets carrying hundreds
of passengers thousands of miles. The immediate question is where
quantum computing development, as it stands today, should be placed on
that timeline. Is it with the Wright brothers in 1903? The first jet
planes around 1940? Or maybe we’re still way back in the early 16<sup>th</sup> century, with Leonardo da Vinci’s flying machine? I do not know. Neither does anybody else</i>.</p><p>On Sarma's work, <a href="https://journals.aps.org/search/results?sort=relevance&clauses=%5B%7B%22operator%22%3A%22AND%22%2C%22field%22%3A%22author%22%2C%22value%22%3A%22S+Das+Sarma%22%7D%5D" target="_blank">a list of papers</a> he has taken part in.</p><p>On the <a href="https://www.physics.umd.edu/cmtc/intro.html" target="_blank">CMTC</a> and its work, noting its collaboration with <a href="https://www.microsoft.com/en-us/research/research-area/quantum-computing/?facet%5Btax%5D%5Bmsr-research-area%5D%5B0%5D=243138&sort_by=most-recent" target="_blank">Microsoft</a>.</p><p>On the state of condensed matter research (condensed matter physics), <a href="https://en.wikipedia.org/wiki/Condensed_matter_physics" target="_blank">see Wikipedia</a>.</p><p><span style="font-size: x-small;">The photo is <a href="https://www.microsoft.com/en-us/research/blog/microsoft-has-demonstrated-the-underlying-physics-required-to-create-a-new-kind-of-qubit/" target="_blank">taken from Microsoft's blog</a> on quantum computing.</span><br /></p><p><br /></p>Jorge Ubedahttp://www.blogger.com/profile/16457542679928501488noreply@blogger.com0tag:blogger.com,1999:blog-8758579.post-59142690697687740602022-04-17T08:35:00.001+02:002022-04-17T08:35:41.735+02:00Mary Poppendieck in perspective<p> At QCon Plus, a virtual InfoQ conference, <a href="https://www.infoq.com/presentations/software-engineering-change-digital-scale" target="_blank">a conversation with Mary Poppendieck</a> is presented (Tom, her husband and partner in their long consulting practice, also takes part in the talk). Mary presents a view of the changes in software construction since the start of the new century: twenty years of radical changes that altered the <a href="https://es.wikipedia.org/wiki/Paradigma" target="_blank">paradigms</a> we had relied on for decades. It strikes me as a perspective of particular interest, considering her own work at 3M <a href="https://www.shmula.com/poppendieck-on-waste-the-handoff/447/" target="_blank">beginning with Six Sigma</a>, and her own understanding of agile concepts. Mary speaks of bridges; she is one herself, accompanying the change as it settled in.<br /></p>Jorge Ubedahttp://www.blogger.com/profile/16457542679928501488noreply@blogger.com0tag:blogger.com,1999:blog-8758579.post-41800285683643738752022-04-17T07:55:00.000+02:002022-04-17T07:55:17.025+02:00Misguided<p> In <a href="https://www.technologyreview.com/2022/03/29/1048439/chatbots-replace-search-engine-terrible-idea" target="_blank">Technology Review</a><br /></p><p>On March 14, Shah and his University of Washington colleague Emily M.
Bender, who studies computational linguistics and ethical issues in
natural-language processing, published a paper that <a href="https://dl.acm.org/doi/10.1145/3498366.3505816">criticizes what they see as a rush to embrace language models</a>
for tasks they are not designed to address. In particular, they fear
that using language models for search could lead to more misinformation
and more polarized debate. </p> <p>“The Star Trek fantasy—where you
have this all-knowing computer that you can ask questions and it just
gives you the answer—is not what we can provide and not what we need,”
says Bender, a coauthor <a href="https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/">on the paper that led Timnit Gebru to be forced out of Google</a>, which had highlighted the dangers of large language models. </p>Jorge Ubedahttp://www.blogger.com/profile/16457542679928501488noreply@blogger.com0tag:blogger.com,1999:blog-8758579.post-18517029007662118372021-09-05T09:55:00.005+02:002021-09-05T10:04:40.468+02:00The usefulness of The Java Version Almanac<p> I usually work in Java 8 (among other languages). Version 8 is mandatory as the intersection of the various products combined in our development stack (Plex, WebSphere, C++, RPG IV, OS/400). Frankly, we could move to Java 9 without real problems, but it would mean going one step beyond what Plex officially supports. We will probably end up doing it within the year, or shortly after, in any case. That looks like a long way behind the current version (Java 15/16) and the versions under development (Java 17/18), considering that Java 8 is for now the second-to-last stable (LTS) release, followed by Java 11, with Java 17 on the way. Oracle’s scheme of developing Java through releases with small, gradual changes favors moving up by analyzing the impact of the step to the next version, with the major releases as the main targets. On the Java release plan, Oracle says:</p><p style="margin-left: 40px; text-align: left;"><i>For product releases after Java SE 8, Oracle will designate a release,
every three years, as a Long-Term-Support (LTS) release. Java SE 11 is
an LTS release. For the purposes of Oracle Premier Support, non-LTS
releases are considered a cumulative set of implementation enhancements
of the most recent LTS release. Once a new feature release is made
available, any previous non-LTS release will be considered superseded.
For example, Java SE 9 was a non-LTS release and immediately superseded
by Java SE 10 (also non-LTS), Java SE 10 in turn is immediately
superseded by Java SE 11. Java SE 11 however is an LTS release, and
therefore Oracle Customers will receive Oracle Premier Support and
periodic update releases, even though Java SE 12 was released. </i>(in <a href="https://blogs.oracle.com/javamagazine/migrate-to-java-17" target="_blank">It's time to move to Java 17</a>, Johan Janssen, August 27, 2021, Java Magazine)<i><br /></i></p><p><a href="https://github.com/marchof" target="_blank">Marc R. Hoffmann</a> and <a href="https://horstmann.com/" target="_blank">Cay S. Horstmann</a> have built <a href="https://javaalmanac.io/" target="_blank">a reference site</a> that lays out an inventory of every existing Java version, its support status, and an analysis of the changes from version to version, down to the class and method level. It is an indispensable aid when planning a careful upgrade to a later release. It is a great piece of work which, for me at least, has become required reading. I wish related products had something similar (I am thinking of Apache POI, where I often need inference and case-by-case analysis to work out what I should use).</p><p>The Java Version Almanac, javaalmanac.io<br /></p><p><span style="color: #888888; font-size: 11pt; font-weight: bold;"><br /></span></p><p><span style="color: #888888; font-size: 11pt; font-weight: bold;"> </span></p>Jorge Ubedahttp://www.blogger.com/profile/16457542679928501488noreply@blogger.com0tag:blogger.com,1999:blog-8758579.post-90531138560932641292021-08-31T09:32:00.002+02:002021-08-31T09:33:43.446+02:00Moving an application to the cloud<p> A few days ago I read with interest an article by <a href="https://blog.scottlogic.com/jhenderson/" target="_blank">Jonathon Henderson</a>, of <a href="https://www.scottlogic.com/" target="_blank">Scott Logic</a>, describing in detail <a href="https://blog.scottlogic.com/2019/07/29/from-monolith-to-serverless-on-aws.html" target="_blank">a project to convert an application</a>, characterized without further detail as "monolithic", into a set of back-end and front-end services and/or applications to replace it, built on AWS. Henderson recounts how he discovered and identified the mechanisms and resources he needed along the way, what he came to understand, and the difficulties he could and could not resolve: a more than useful work log. This is his description of the project:</p><p style="margin-left: 40px; text-align: left;"><i>I had the pleasure of picking up Scott Logic’s <a href="https://stockflux.scottlogic.com">StockFlux</a>
project with the same fantastic team as my previous project, which
consisted of 3, very talented frontend developers and myself as the lone
backend developer.</i></p><div style="margin-left: 40px; text-align: left;">
</div><p style="margin-left: 40px; text-align: left;"><i>The frontend team had the task of transforming the existing StockFlux application to use the bleeding edge of the <a href="https://openfin.co">OpenFin</a> platform, which incorporated features defined by the <a href="https://fdc3.finos.org/">FDC3 specification</a> by <a href="https://www.finos.org/">FINOS</a>.</i></p><div style="margin-left: 40px; text-align: left;">
</div><p style="margin-left: 40px; text-align: left;"><i>This involved splitting StockFlux into several applications, to
showcase OpenFin’s inter-app functionality such as snapping and docking,
as well as using the OpenFin FDC3 implementations of intents, context
data and channels for inter-app communication. It also involved using
the FDC3 <a href="https://fdc3.finos.org/docs/1.0/appd-intro">App Directory</a> specification to promote discovery of our apps using a remotely hosted service, which is where I come in.</i></p><p>Henderson essentially describes his back-end work, barely touching on the front-end side, but it is nonetheless a good, stimulating account of learning and of weighing possible paths.</p><p>This is his list of goals for his own work:</p><ul style="margin-left: 40px; text-align: left;"><li><i>Building an FDC3 compliant App Directory to host our apps and provide application discovery.</i></li><li><i>Building a Securities API, with a full-text search, using a 3rd party data provider.</i></li><li><i>Providing Open-High-Low-Close (OHLC) data for a given security, to power the StockFlux Chart application.</i></li><li><i>Creating a Stock News API.</i></li><li><i>Building and managing our infrastructure on AWS.</i></li><li><i>Automating our AWS infrastructure using CloudFormation.</i></li><li><i>Creating a CI/CD pipeline to test, build and deploy changes.</i></li></ul><p>Once the project was complete, with a clear idea of its strengths and weaknesses and of AWS’s support, Henderson is satisfied with what was done. He nevertheless leaves one observation that deserves close attention:</p><p style="margin-left: 40px; text-align: left;"><i>One thing I’m still relatively unsure about is the idea of vendor lock-in.
By basing an application around their specific services, <span style="background-color: #fcff01;">we effectively
lock ourselves into AWS</span>, which makes our applications less portable.
While building the StockFlux backend services I made an effort to
abstract things in such a way that would allow us to add support for
services offered by other providers, to reduce our dependency on AWS. On
the other hand, locking into one vendor doesn’t have to be a bad thing -
by committing to use AWS (or another provider), we can explore and
utilise the vast array of services that are on offer, rather than
restrict ourselves to using as little as possible to promote
portability.</i></p><p>In other words, he remains almost entirely in the hands of his provider, its pricing, and the evolution of its plans. Without a doubt, a trait of cloud services that must be weighed carefully, especially when what is being moved is not a small, volatile application but something large and important.<br /></p>Jorge Ubedahttp://www.blogger.com/profile/16457542679928501488noreply@blogger.com0tag:blogger.com,1999:blog-8758579.post-53542781697886955092021-08-29T09:48:00.000+02:002021-08-29T09:49:24.275+02:00I second this...<p> <a href="https://ingenieriadesoftware.es/como-ser-el-peor-programador-del-mundo/" target="_blank">how to be the worst</a>...<br /></p>Jorge Ubedahttp://www.blogger.com/profile/16457542679928501488noreply@blogger.com0tag:blogger.com,1999:blog-8758579.post-8176892665856728992021-08-29T09:26:00.000+02:002021-08-29T09:26:52.507+02:00Seeing the forest<p> When I look around, whether at the publications that reach me or at what I see and hear nearby, something sounds wrong and leaves me uneasy. I see the trees, but I do not see the forest, anywhere. </p><p>It is common to talk about CI/CD, n variants of Agile, microservices, n variants of JavaScript, the cloud, n languages (without considering the application context), sockets, web components, and so on. We could call these things "daily working material", methodological in some cases and instrumental in others. But I see little focus beyond these elements and means of development. It is not even about architecture. The development process is focused on automating itself and delivering to production with daily speed and continuity; total independence between components is expected, and each component is expected to be built encapsulated, with no dependency on any other. But I do not see the forest, I do not see the overall plan. Surely it is not that no plan exists, but that people think and work in the details. 
StackOverflow and many similar sites do not offer that kind of vision, and my question is whether all those developers posting and digging into problems with their daily working tools have that point of view. I wonder whether the focus on microservices and similar approaches, the work in small groups, and the use of teams in remote, opaque locations do not imply that the forest is seen only by a small, very small group of architects? project leads? product owners?, while the meaning gets lost on the way from the "agile" meetings down to the last link in the chain, where the code is written.<br /></p>Jorge Ubedahttp://www.blogger.com/profile/16457542679928501488noreply@blogger.com0tag:blogger.com,1999:blog-8758579.post-2031469058060260832021-08-08T08:40:00.000+02:002021-08-08T08:40:12.840+02:00Microservices and common sense<p> On the subject of microservices, discussed here on other recent occasions, a couple of observations by <a href="https://medium.com/codex/the-false-promise-of-microservices-c93be3c9b3dc" target="_blank">Mika Yeap</a> on Medium. Here the monolith is treated more as a relic than as an application that bundles everything into a single executable. That makes more sense, and points in the direction in which microservices act, or should act, when appropriate.</p><p>One reason Yeap acknowledges is scale: distributed architecture and microservices to handle the growth of elements and operations in the system at hand. But when he talks about scale, he means global scale, tens or hundreds of millions of participating nodes:</p><p style="margin-left: 40px; text-align: left;"><i>There’s a multitude of reasons you’d need to use distributed
architecture when you get big enough. The catch is most of us will never
get big enough. I mean, how close are you to Amazon’s numbers? Or how
about Netflix? That’s what I thought.</i></p><p>Yeap reminds us that working with microservices is not simple: it first requires a robust, disciplined organization with the knowledge and the resources. Even so, he recommends moving to a distributed architecture only when it is unavoidable, and keeping "the monolith" as long as it remains viable: </p><p style="margin-left: 40px; text-align: left;"><i>...it’s
best to challenge this beast only if you’re prepared. Skills. Talent.
Organization. You need lots of things in the right flavor to do this
well. If you think you can show some engineers a couple keynotes then
send them off to split all the things, you’re in for a nasty surprise.
My team and I weren’t prepared, so I would know. I mean, what does a
startup without product-market fit or money have to offer against
microservices? Just a month’s supply of ramen to feed five people. In
other words, not much but goodwill and some elbow grease. Which wasn’t
enough. So
as far as I’ve learned, you should only be building microservices if
you’ve got a gigantic user base, or the resources to support the
specialized development. And even in the first case, you don’t
necessarily have to go distributed immediately. In fact, even AirBnB was
<a class="bu jx" href="https://www.infoq.com/news/2019/02/airbnb-monolith-migration-soa/" rel="noopener nofollow">powered by a monolith</a> until just recently. A monolith written in Ruby on Rails, no less. And it seems to me that their product worked just fine.</i></p><p style="margin-left: 40px; text-align: left;"><i>Yet many people truly believe distributed architecture is a superior
alternative to a monolithic one. People actually think they’re two equal
solutions to the same problem. Which is absurd, since they’re actually
solutions to <em class="ju">different</em> problems. <br /></i></p><p>...And he reminds us that distributed architecture is not simple, at all. We could say that every point that deserves attention in its implementation demands complex solutions. As an example, he cites distributed transactions:</p><p style="margin-left: 40px; text-align: left;"><i>Microservices promise to solve all sorts of problems, depending on who
you ask. When in reality, they only exacerbate them when you don’t know
what you’re doing. How? Well, just two words can send chills down any
microservice veteran’s spine: Distributed transactions. How’s that for a
nightmare? I promise you, once you have to configure
orchestration-based sagas just to update one property on a single
object, you’ll be begging for a good old boring monolith.</i></p><p>In short, microservices sound great, but before taking them on, think it through and run the numbers.<br /></p>Jorge Ubedahttp://www.blogger.com/profile/16457542679928501488noreply@blogger.com0
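<p>The orchestration-based sagas that Yeap points to as a nightmare can be sketched in miniature. The Java fragment below is purely illustrative (all class and step names are hypothetical, tied to no real saga framework): each step pairs a forward action with a compensation, and on the first failure the orchestrator undoes the completed steps in reverse order.</p>

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class SagaDemo {

    /** One saga step: a forward action plus the compensation that undoes it. */
    interface Step {
        String name();
        boolean execute();    // true on success
        void compensate();    // reverses the effect of execute()
    }

    /** Minimal orchestrator: runs steps in order, compensates in reverse on failure. */
    static class Orchestrator {
        private final List<Step> steps = new ArrayList<>();
        final List<String> log = new ArrayList<>();    // visible trace of what happened

        Orchestrator add(Step s) { steps.add(s); return this; }

        boolean run() {
            Deque<Step> completed = new ArrayDeque<>();
            for (Step s : steps) {
                if (s.execute()) {
                    log.add("ok:" + s.name());
                    completed.push(s);                 // remember for possible rollback
                } else {
                    log.add("fail:" + s.name());
                    while (!completed.isEmpty()) {     // undo in reverse order
                        Step done = completed.pop();
                        done.compensate();
                        log.add("undo:" + done.name());
                    }
                    return false;
                }
            }
            return true;
        }
    }

    /** Helper producing a fake step that always succeeds or always fails. */
    static Step step(String name, boolean succeeds) {
        return new Step() {
            public String name() { return name; }
            public boolean execute() { return succeeds; }
            public void compensate() { /* a real step would reverse its side effect here */ }
        };
    }

    public static void main(String[] args) {
        Orchestrator saga = new Orchestrator()
                .add(step("reserve-stock", true))
                .add(step("charge-payment", true))
                .add(step("update-shipping", false));  // third step fails

        System.out.println(saga.run());  // false: the saga did not complete
        System.out.println(saga.log);    // the two completed steps were compensated
    }
}
```

<p>Even this toy version hints at the real cost: every remote call needs a well-defined compensation, and a production orchestrator must also persist its progress so it can resume after a crash, which is where the complexity Yeap warns about actually lives.</p>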