Sunday, December 31, 2023

Geoffrey Hinton on artificial intelligence

 


Will Douglas Heaven interviews Geoffrey Hinton in MIT Technology Review about his current distrust of artificial intelligence:

Hinton fears that these tools are capable of figuring out ways to manipulate or kill humans who aren’t prepared for the new technology.

“I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they’re very close to it now and they will be much more intelligent than us in the future,” he says. “How do we survive that?”

He is especially worried that people could harness the tools he himself helped breathe life into to tilt the scales of some of the most consequential human experiences, especially elections and wars.

“Look, here’s one way it could all go wrong,” he says. “We know that a lot of the people who want to use these tools are bad actors like Putin or DeSantis. They want to use them for winning wars or manipulating electorates.”

Hinton believes that the next step for smart machines is the ability to create their own subgoals, interim steps required to carry out a task. What happens, he asks, when that ability is applied to something inherently immoral?

“Don’t think for a moment that Putin wouldn’t make hyper-intelligent robots with the goal of killing Ukrainians,” he says. “He wouldn’t hesitate. And if you want them to be good at it, you don’t want to micromanage them—you want them to figure out how to do it.”

There are already a handful of experimental projects, such as BabyAGI and AutoGPT, that hook chatbots up with other programs such as web browsers or word processors so that they can string together simple tasks. Tiny steps, for sure—but they signal the direction that some people want to take this tech. And even if a bad actor doesn’t seize the machines, there are other concerns about subgoals, Hinton says.

“Well, here’s a subgoal that almost always helps in biology: get more energy. So the first thing that could happen is these robots are going to say, ‘Let’s get more power. Let’s reroute all the electricity to my chips.’ Another great subgoal would be to make more copies of yourself. Does that sound good?”

Maybe not. But Yann LeCun, Meta’s chief AI scientist, agrees with the premise but does not share Hinton’s fears. “There is no question that machines will become smarter than humans—in all domains in which humans are smart—in the future,” says LeCun. “It’s a question of when and how, not a question of if.”

But he takes a totally different view on where things go from there. “I believe that intelligent machines will usher in a new renaissance for humanity, a new era of enlightenment,” says LeCun. “I completely disagree with the idea that machines will dominate humans simply because they are smarter, let alone destroy humans.”

“Even within the human species, the smartest among us are not the ones who are the most dominating,” says LeCun. “And the most dominating are definitely not the smartest. We have numerous examples of that in politics and business.”

Yoshua Bengio, who is a professor at the University of Montreal and scientific director of the Montreal Institute for Learning Algorithms, feels more agnostic. “I hear people who denigrate these fears, but I don’t see any solid argument that would convince me that there are no risks of the magnitude that Geoff thinks about,” he says. But fear is only useful if it kicks us into action, he says: “Excessive fear can be paralyzing, so we should try to keep the debates at a rational level.”


LeCun is very optimistic... if it weren't for the drones over Kyiv, Navalny's imprisonment, or China's social-control measures, his vision might be easier to accept.
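As an illustrative aside: the tool-chaining that the quoted passage attributes to projects like BabyAGI and AutoGPT can be sketched, very loosely, as a loop in which a planner picks the next tool and feeds it the accumulated results. The sketch below stubs the planner with canned logic; `plan`, the tool names, and their outputs are all invented for illustration, and a real system would call an LLM API instead:

```typescript
// A toy "agent" loop: a stubbed planner picks the next tool until done.
type Step = { tool: "search" | "summarize" | "done"; input: string };

// Stand-in for a language model: a canned plan based on history length.
function plan(history: string[]): Step {
  if (history.length === 0) return { tool: "search", input: "IBM i news" };
  if (history.length === 1) return { tool: "summarize", input: history[0] };
  return { tool: "done", input: "" };
}

// Stubbed tools standing in for a web browser or a word processor.
const tools: Record<string, (input: string) => string> = {
  search: (q) => `3 articles found for "${q}"`,
  summarize: (text) => `summary of: ${text}`,
};

function runAgent(): string[] {
  const history: string[] = [];
  for (;;) {
    const step = plan(history);
    if (step.tool === "done") break;
    // Execute the chosen tool and append its result for the next round.
    history.push(tools[step.tool](step.input));
  }
  return history;
}

console.log(runAgent());
```

The point of the sketch is only the control flow: the model proposes steps, the loop executes them, and nothing in the loop itself understands what the steps mean.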

Photo: Ramsey Cardy / Collision via Sportsfile, CC BY 2.0 <https://creativecommons.org/licenses/by/2.0>, via Wikimedia Commons

Sunday, August 27, 2023

Publicizing the IBM i without IBM

A salesman at an American company that works with the AS/400, today called IBM i, tired of hearing claims that the machine is no longer built or used, decided to publicize, one company at a time, firms that use it in the United States. It is an idea that answers IBM's inactivity toward a machine that has made it a great deal of money over the 35 years it has been evolving, and that does not deserve what in practice looks like concealment on IBM's part.

If you look for development information about the platform, you will struggle to find it: ask about DB2 and you are redirected to DB2 for System z; ask about SQL utilities, or facilities for processing JSON, and you are sent first to System z, and only by refining the search will you hit the mark. All the pre-existing links to very valuable, well-written articles were lost a few years ago and never redirected. The material exists, but only a patient search will get you to it. A sad state of affairs for a machine that has never stopped evolving and acquiring first-rate features, whose processing speed remains highly competitive, and which keeps a large customer base that receives no education. In the end, perhaps by not educating and by hiding it, IBM will manage to make the machine cease to exist.

Here is what Alex Woodie says about this solo effort:

If you are a consumer of mainstream news, it can be hard to find anything about IBM i. The proprietary business platform isn’t marketed by IBM in advertisements and it receives very little coverage in mainstream IT publications. But a salesman for an IBM i business partner has come up with an easy yet compelling way to boost the visibility of the platform.

Earlier this month, Josh Bander, who is an enterprise account executive at Briteskies, shared a recent conversation he had through his LinkedIn page. “Over the weekend, I spoke to a few of my friends in IT, and they all told me #IBMi is dead,” Bander said. “To prove them wrong, I plan to take pictures of items in my house made with IBM i for the next week.”

The first picture featured Bander’s car, a Honda. The Japanese carmaker’s US subsidiary, American Honda Motor Company, has used one or more IBM i servers at its Torrance, California, facility for years.

Day three brought an image of a shoe by Nike. The legendary Oregon company has been an IBM i shop since at least 2003, when Nike acquired Converse, and it was still using IBM i in 2021, according to the list of IBM i shops maintained by All400s.com, which Bander used for his project.

A range hood for a stove made by Broan-NuTone appeared on day four of Bander’s IBM i journey through his home. The Hartford, Wisconsin-based manufacturer, which makes a variety of fans and air quality products, is also a confirmed IBM i shop.

Do you have Kleenex in your house? If so, then you have a product made by an IBM i shop, as the Irving, Texas-based Kimberly-Clark, maker of the Kleenex brand of facial tissues, is another confirmed IBM i user.

What about Taster’s Choice? It may not be everybody’s favorite cup of joe – Starbucks, the coffee goliath from Seattle, Washington, is a longtime IBM i shop – but the iconic coffee brand has IBM i in its veins, since it is owned by Switzerland-based Nestle, which is the largest food company in the world and another IBM midrange system user.

Maybe you have some shipping labels lying around. If they’re made by Avery Dennison, the well-known manufacturer of shipping labels and packaging materials based in Glendale, California, then you’ve found another everyday product made by an IBM i shop.

You don’t have to live the California wine country life to shop at Williams Sonoma. But if you do buy from the popular retailer, you can rest easy knowing that at least some aspect of the San Francisco company’s business is managed by IBM i.

Another iconic American brand, Rubbermaid, is also an IBM i shop. The Atlanta, Georgia company, which is now owned by Newell, was known to have run the IBM i as of 2021.

Bander’s LinkedIn posts of household items made by IBM i shops attracted quite a bit of attention from the IBM i ecosystem, and the hashtag “IBMiEverywhere” began trending. Apparently, IBM i professionals enjoy seeing that well-run and world-famous consumer brands are longtime IBM i users.

So why doesn’t IBM do this, or something similar? We have pestered Big Blue server execs many times over the years about the lack of marketing and advertising support for the platform, and rarely come away with satisfying answers.

To IBM’s credit, it does write and run case studies about IBM i customers. It has a section of its website where it has around 100 case studies of IBM i customers, as well as stories about a few business partners. Honda is on that list, as well as brands like Carhartt and Lamps Plus.

But there are many, many more name-brand companies that rely on IBM i that have never been officially mentioned by IBM as customers. Some of the world’s largest and most profitable companies run at least a small part of their businesses on the IBM i system, and while that in itself is not a reason for other companies to follow suit, it at least shows that world-class companies are continuing to invest in it and that has value.

IBM execs often say they wish they could do more to tout the great companies that rely on IBM i, and there’s no reason not to believe them. The truth is, the companies themselves often are not interested in participating in a formal IBM case study, marketing campaign, or to be featured in actual advertisements – and if they are, they often expect something in return for their cooperation.

That makes rogue efforts like Bander’s all the more fun and entertaining. John Rockwell does his best to keep the All400s list up to date, and while there are companies on the list that are actively moving off the platform or planning to, there are plenty more that are happy customers that aren’t going anywhere.

In the end, sharing unofficial lists of companies that run on IBM i seems to be a good way to boost morale for the IT soldiers in the trenches, who hear a lot of FUD and may be questioning their choice of platform. As it turns out, there are a lot of great companies that continue to rely on the box, which continues to run business software reliably, securely, and efficiently decade after decade.

They may not be shouting their IBM i success from the rooftops. But sometimes actions speak louder than words.

Seen at IT Jungle, in August.

 

 

Sunday, February 12, 2023

AI and ethics

The biggest problem with artificial intelligence is that its constructs are based on mathematical and logical principles, and these are not sufficient and can be led astray. Very recently, Galactica reflected this with its disastrous launch, crashing within three days. GPT seems committed to improving on this approach; it is an ongoing project, sweeping away every benchmark of past interest. Meanwhile, the Big Tech companies will probably have to realign. Will Douglas Heaven writes in Technology Review:

While OpenAI was wrestling with GPT-3’s biases, the rest of the tech world was facing a high-profile reckoning over the failure to curb toxic tendencies in AI. It’s no secret that large language models can spew out false—even hateful—text, but researchers have found that fixing the problem is not on the to-do list of most Big Tech firms. When Timnit Gebru, co-director of Google’s AI ethics team, coauthored a paper that highlighted the potential harms associated with large language models (including high computing costs), it was not welcomed by senior managers inside the company. In December 2020, Gebru was pushed out of her job.

Saturday, January 7, 2023

The IBM i and its current capabilities

Mike Pavlak says: “At the end of the day, professionally I’ve worked in about six different languages. I can write bad code in every one of them.”

The context of the statement is interesting: the set of resources the IBM i (or should we say IBM Power?) has at its disposal today for connecting with all kinds of environments: PHP, Node, Python, alongside the old acquaintances C, C++, and Java. Pavlak weighs the suitability of each one mainly for connections with web architectures, but that range of possibilities can undoubtedly go further.

On Node in particular, Pavlak says:

Node.js isn’t as easy to learn for true-blue IBM i types, but it has one advantage over the other two: it uses JavaScript, which as previously noted has been broadly adopted by the wider IT world. However, there’s a caveat to the notion that Node.js developers only need to know JavaScript to be productive.

“A lot of people like Node because there’s a myth that I can use the same language on the presentation layer, JavaScript . . . up on the server,” Pavlak said. “And there’s truth to that. The syntax of the language is the same. What’s different though is the library usage. The libraries you’d use on the client end are not the libraries you would use on the server.”

Choosing Node.js makes sense in certain scenarios, such as when an IBM i shop has hired younger developers with JavaScript skills. Because the syntax is the same, these front-end JavaScript developers may be able to become productive developing back-end Node.js code on the IBM i server in a shorter amount of time than using other languages. “Using Node on the backend starts to make sense in that scenario,”

(...) Node.js does have a significant performance advantage over PHP and Python in one particular category: how quickly the stack starts. The technology, which is built on Google’s V8 JavaScript engine, is widely used by massive Web properties, such as Netflix. When you fire up a Netflix session on your TV, your Roku, or your phone, you’re actually initiating the deployment of a Node.js instance running on AWS.

“Node.js starts so fast, it’s so much easier to scale…horizontally,” Pavlak said. “So AWS instances are basically X86. In that scenario, Node has a decided advantage.”
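Pavlak's caveat about Node, same syntax but different libraries, can be made concrete with a minimal sketch. The pure function below is the kind of code that runs unchanged in a browser and under Node; what differs between the two ends is the library layer around it (DOM APIs on the client, modules such as `node:http` on the server). The function and the order-id format are invented for illustration:

```typescript
// A pure function: the identical source runs on the client and the server.
function isValidOrderId(id: string): boolean {
  // e.g. "ORD-2023-00042": prefix, a 4-digit year, a 5-digit sequence
  return /^ORD-\d{4}-\d{5}$/.test(id);
}

// What differs is the library layer around it:
// - in the browser you would wire this to a DOM event handler;
// - under Node you would call it from an HTTP route built on `node:http`
//   or a framework, since the DOM APIs simply do not exist there.
console.log(isValidOrderId("ORD-2023-00042")); // true
console.log(isValidOrderId("not-an-order"));   // false
```

This is the "truth to that" Pavlak concedes: the language travels, the surrounding libraries do not.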

 

Sunday, December 11, 2022

Galactica and the difficulties of language models

In November, Meta introduced a language model named Galactica, built to assist scientific researchers, but only three days later it was withdrawn from availability for queries or testing. Basically, as has happened in other fields of work with artificial intelligence (AI), the model does not recognize truth or falsehood. In testing, papers formally presented as scientific but absurd, such as the existence of bears in space, or the causes of the war in Ukraine, passed as good, with reasoned justifications.

Will Douglas Heaven, in Technology Review:

Galactica is a large language model for science, trained on 48 million examples of scientific articles, websites, textbooks, lecture notes, and encyclopedias. Meta promoted its model as a shortcut for researchers and students. In the company’s words, Galactica “can summarize academic papers, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more.”

(...) A fundamental problem with Galactica is that it is not able to distinguish truth from falsehood, a basic requirement for a language model designed to generate scientific text. People found that it made up fake papers (sometimes attributing them to real authors), and generated wiki articles about the history of bears in space as readily as ones about protein complexes and the speed of light. It’s easy to spot fiction when it involves space bears, but harder with a subject users may not know much about.

(...) Many scientists pushed back hard. Michael Black, director at the Max Planck Institute for Intelligent Systems in Germany, who works on deep learning, tweeted: “In all cases, it was wrong or biased but sounded right and authoritative. I think it’s dangerous.”

(...) The Meta team behind Galactica argues that language models are better than search engines. “We believe this will be the next interface for how humans access scientific knowledge,” the researchers write.  This is because language models can “potentially store, combine, and reason about” information. But that “potentially” is crucial. It’s a coded admission that language models cannot yet do all these things. And they may never be able to. “Language models are not really knowledgeable beyond their ability to capture patterns of strings of words and spit them out in a probabilistic manner,” says [Chirag Shah,  University of Washington]. “It gives a false sense of intelligence.”

Grady Booch comments: "Galactica is little more than statistical nonsense at scale. Amusing. Dangerous. And IMHO unethical". One machine-learning researcher (Yann LeCun, in the same thread) takes offense at the "unethical" label. I think some scientists have yet to gauge the scope of what they have in their hands.

 

 

Saturday, December 10, 2022

Frederick Brooks: a pioneer dies

 

Frederick Brooks died a few days ago, on November 17: a pioneer of software engineering, almost of its first generation. Long-lived, he continued working with digital technologies into the first decade of this century, having started in 1953 after graduating from Duke University. He was at IBM from 1956 to 1965, where he led the design of the IBM System/360, the first IBM mainframe line built on a single compatible architecture, foundation of the architecture IBM structured around it, and direct ancestor of the 4300 series and today's System z. Even today, an application coded on and for the 360 can run on a System z. Brooks was one of the pillars of the decisions that made this evolution possible.

Brooks's other great contribution is in methodology, in the systematization of his experience from the IBM years: first, in 1975, with The Mythical Man-Month, and later, in 1986, with No Silver Bullet—Essence and Accident in Software Engineering, afterwards added as a new chapter to The Mythical Man-Month. There is a great distance between the times when he wrote these works and a reading of them today, but despite the technical gap, they should still be required reading.

What follows is Shane Hastie's obituary in InfoQ, with a good set of references to Brooks's achievements:

Dr Frederick P Brooks Jr, originator of the term architecture in computing, author of one of the first books to examine the nature of computer programming from a sociotechnical perspective, architect of the IBM 360 series of computers, university professor and person responsible for the 8-bit byte died on 17 November at his home in Chapel Hill, N.C. Dr Brooks was 91 years old.

He was a pioneer of computer architecture, highly influential through his practical work and publications including The Mythical Man Month, The Design of Design and his paper No Silver Bullet which debunked many of the myths of software engineering.

In 1999 he was awarded a Turing Award for landmark contributions to computer architecture, operating systems, and software engineering. In the award overview it is pointed out that

Brooks coined the term computer architecture to mean the structure and behavior of computer processors and associated devices, as separate from the details of any particular hardware implementation

In the No Silver Bullet article he states:

There is no single development, in either technology or management technique, which by itself promises even one order-of-magnitude improvement within a decade in productivity, in reliability, in simplicity.

Quotations from The Mythical Man-Month: Essays on Software Engineering permeate software engineering today, including:

  • Adding manpower to a late software project makes it later.  
  • The bearing of a child takes nine months, no matter how many women are assigned.
  • All programmers are optimists.

On April 29, 2010 Dilbert explored the adding manpower quote.  

In 2010 he was interviewed by Wired magazine. When asked about his greatest technical achievement he responded

The most important single decision I ever made was to change the IBM 360 series from a 6-bit byte to an 8-bit byte, thereby enabling the use of lowercase letters. That change propagated everywhere.

He was the founder of the Computer Science Department at the University of North Carolina at Chapel Hill, where the Computer Science building is named after him. In an obituary the University says:

Dr. Brooks has left an unmistakable mark on the computer science department and on his profession; this is physically recognized by the south portion of the department’s building complex bearing his name. He set an example of excellence in both scholarship and teaching, with a constant focus on the people of the department, treating everyone with respect and appreciation. His legacy will live on at UNC-Chapel Hill

His page on the university website lists his honours, books and publications.

The Computer History Museum has an interview of Dr Brooks by Grady Booch.

He leaves his wife of 66 years Nancy, three children, nine grandchildren and two great-grandchildren.

 

Sunday, October 30, 2022

Warnings about design and microservices

A few days ago I read a set of observations about microservices that struck me as more than apt, especially now that microservices seem to be the universal recipe for every company. If you browse job listings, they have been the star of the requests for months; and it is much the same in corporate presentations. I follow the articles offered by Medium, and their presence there is overwhelming. In fact, the observations I am commenting on were published there.

Are microservices really a complete answer? Giedrius Kristinaitis questions that in these recommendations, and shovels in some welcome sanity:

What you need to answer yourself is how microservices will help your particular situation. Think about your situation, and don’t blindly copy what big tech companies do, because their domain is most likely different from yours, and they have their own reasons that might not exist for you. You can listen to their general advice, just don’t be like “oh, this company is doing X to solve their Y problem, so we’ll do the same” when you don’t really have a Y problem.

Giedrius recalls a very simple truth: do not apply a template; examine your problem:

Saying things like “if we use microservices we’ll be able to reduce development costs, we’ll scale better, etc.” is not a good answer, because it’s very generic and does not explain how.

Here’s what a good answer might look like: “we need to process a lot of batches of X data, however, we can’t do it anymore, we can’t scale because each batch is unnecessarily coupled to process Y which can’t be made any faster, nor does it need to, so we need X to be decoupled from Y”.

Such an answer would tell exactly what problem you’re having and why. Identifying your problem is very important. If you can’t identify your problem you’re at a high risk of making your life too complicated by needlessly starting with microservices.

Giedrius's advice is not to rush into establishing a microservices architecture, but to concentrate on the problem, in particular by progressively reworking the design and architecture of the monolithic application you are starting from. He recommends reducing coupling and dependencies between parts of the system, perhaps extracting parts that can be managed as services:

(...) if you don’t think about making your system loosely coupled, and if you don’t think about loose coupling, no matter what architecture you choose, it’s probably not gonna work out, microservices included. (...) So if you think that you must start with microservices from the get-go you’re already implying that your services will be too coupled and too static to actually qualify as microservices. If you can have a loosely coupled monolithic system, you will be able to convert it to microservices.

If you can’t have a loosely coupled monolithic system, microservices will make your life even worse, a lot worse.
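The "loosely coupled monolith first" advice can be illustrated with a small sketch. In the hypothetical example below, the billing code depends on an interface rather than on the inventory module's internals; inside the monolith the interface is satisfied by an in-process class, and that same seam is what would later let inventory become a separate service. The names (`InventoryReader`, `InProcessInventory`, `canInvoice`) are invented for illustration:

```typescript
// The seam: billing only knows this interface, not how stock is stored.
interface InventoryReader {
  unitsInStock(sku: string): number;
}

// Inside the monolith, the interface is satisfied in-process...
class InProcessInventory implements InventoryReader {
  private stock = new Map<string, number>([["SKU-1", 3]]);
  unitsInStock(sku: string): number {
    return this.stock.get(sku) ?? 0;
  }
}

// ...and billing stays ignorant of that choice. Swapping in a class that
// calls a remote inventory service later would not touch this function.
function canInvoice(inv: InventoryReader, sku: string, qty: number): boolean {
  return inv.unitsInStock(sku) >= qty;
}

const inventory = new InProcessInventory();
console.log(canInvoice(inventory, "SKU-1", 2)); // true
console.log(canInvoice(inventory, "SKU-1", 5)); // false
```

The design choice is exactly Giedrius's point: the decoupling work happens inside the monolith, and whether the implementation behind the interface is a class or a network call becomes a deferred, reversible decision.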

Giedrius shifts the focus to underline that, first and foremost, this step is a design problem, and that this must be clear from the start, leaving aside decisions made "because Netflix did it." Reflect on the current design and its chaos, analyzing the practices that led to the mess you now want to correct. Without this step, the failure will repeat itself:

The old monolithic system is a huge pile of spaghetti and needs to be rewritten. The biggest mistake you can make in such a situation is not learning from past mistakes. You should sit down and closely inspect what bad (engineering) practices or processes led to the state that it’s in.

If you don’t do that you’re bound to repeat the same mistakes when you rewrite the system. You know what they say, history repeats itself, and the only way to prevent it is to learn about history.

You just can’t rush into a new project with the same engineering practices you used in the old one and expect things to magically turn out different this time around. The old one failed for a lot of reasons, and you can’t ignore them. Everyone working on the new project should be informed about them.

I recommend reading it and thinking these observations through. The article is broader, but this is the part that interests me in particular.


More on Meta

Meta (Facebook) is sinking on the stock market, with new drops in its value:

Meta opened the stock market today at the same price as seven years ago, when the company was still called Facebook and seemed to have an enormous future ahead of it. A 20% drop in the share price after the latest quarterly results has wiped out everything gained since then and shown that some tech giants actually have feet of clay. (...) The third-quarter figures, in truth, are much worse than analysts expected. In one year, half the profits have evaporated. Last year, at the close of the third quarter, the company claimed to have earned close to $9 billion. This year the figure barely exceeds $4.39 billion, 52% less. Earnings per share have fallen 49%, even though the company's revenues have been relatively stable, with a drop of only 4% that can easily be attributed to the general economic climate. (Ángel Gimenez de Luis, in El Mundo)

In an era when the profit comes from your data, when, if you ask what the company's business is, that business is nothing other than knowledge about you put up for sale, its decline may be a positive event. Each user's data is the key point for Meta (and for many others):

The problems began, hard as it may be to believe, with a simple software update. Late last year Apple introduced new privacy controls on the iPhone that let users limit the amount of information the apps on their phones are able to extract.

Until then, Meta relied on its digital omnipresence to build very detailed user profiles. It collected information not only from the use of its own applications, but also from many others in which it embedded tracking code. This allowed it to be very effective, and therefore able to charge more, in the online advertising business.

But with the new changes it has lost a major competitive advantage in its most important market, the US, where the share of people using iPhones is very high. Nor does it help that Google has decided to follow a similar path with Android, increasingly restricting the quantity and quality of the data it shows developers unless users explicitly opt in to sharing it.

The thing is, if the business consists of selling air and living in a meta-reality, its reach may turn out to be very fragile, because society's daily life probably unfolds, and will continue to unfold, in a different environment, not in the fiction:

Meta's other problem is that it has decided to bet its future on a single card: virtual reality. Last year it announced its name change, justifying it as a better reflection of its intentions. Zuckerberg believes that in the near future most of our digital life, in leisure as well as at work, will unfold in virtual environments, something he collectively calls "the metaverse," which is why we now speak of Meta instead of Facebook.

We already had a Second Life.

Giménez de Luis also points to TikTok, which competes on Meta's turf, taking significant chunks of its followers. With the aggravating factor that TikTok represents China's ever-greater presence as a competitor for hegemony. The same or worse objectives, if we take into account China's far-from-virtual totalitarianism.


Tuesday, October 18, 2022

Don't commit projects based on Google II

As we have said before, confidence in the continuity of a Google project or product tends toward zero. So much so that there is a page, "Killed by Google," with a tally of products and initiatives that were popular in their day and were abandoned. Saying "abandoned" means that whatever anyone had invested has been lost, or barely salvaged at a re-engineering cost.

Liz Martin on Medium (Why Google Keeps Killing Its Products):

(...) But here’s the thing: killing off projects is part of Google’s innovation process. Many of the Google products that people use today include features from things that no longer exist.

For example, Google Inbox was killed off in 2019 but many of its features migrated over to Gmail. Google Play Music was killed off in 2020, but several of its features are being used in Youtube Music. Google Allo was killed off in 2019, but its best features were ported over to Android Messages.

(...) Google exists in a fast-paced space. The faster the company can fail, the more quickly it can innovate and beat the competition to the newest technological advancement. No matter how chaotic, these calculated risks are the method to Google’s madness.

Question: What do you think Google will kill off next? What product would you like to see Google bring back to life?

 

Wednesday, September 21, 2022

Meta in trouble

In the bloc of the Big Tech, the Big Four or Big Five, depending on the classification criteria, there is one common element that weighs with colossal mass on the technology industry and on its research and evolution: the monopoly power to impose trends and bend the course of development to their own criteria. In this sense they have lost the early halo of the "good" tech companies, which all of them enjoyed to a greater or lesser degree in their beginnings: innovative, open, promoters of intelligence and initiative, participants in all kinds of social-improvement initiatives. For years they have been at the center of monopoly-practice reviews by US and European authorities, and first-rank lobbyists for their own projects, with sanctions piling up. Among them, in my view, two stand out: Facebook (now Meta) and Twitter. Facebook was particularly scandalous and exposed during Donald Trump's American presidency. With a participating user base of close to three billion, its capacity for manipulation is comparable to a government ruling the United States, Europe, Russia, and China, and this is part of its business.

Yet, whether from exhaustion or from competition, a moment arrived when, for the first time, it did not grow, and that set off alarms. The escape route imagined by its leadership was to launch Meta with the new paradigm of the "Metaverse": "a digital extension of the physical world by social media, virtual reality and augmented reality features". Meta sells virtual life, air on the network, and this year the risks of its project do not look equally virtual. In El Economista:

(...) last February, the social network presented its results and, with them, came the first period in which users did not grow. This sank the company and produced the biggest one-day collapse in its founder's wealth, marking a historic drop of 31 billion in a single session.

The absence of new sign-ups on the platform reveals two things: competition from TikTok and smaller advertising budgets from advertisers. In the first case, Zuckerberg's social network has found a great rival in the Chinese company thanks to the success of its format, short videos. In the second, deteriorating economic conditions have dragged down the company's revenue.

Moreover, the all-in bet on the Metaverse has required, and will keep requiring, enormous investment, which has weighed on the company's value this year. In fact, Zuckerberg himself said that the company's new proposal was loss-making and would run at a loss for three to five years. In addition, in recent times the former Facebook has come under greater regulatory scrutiny.

Compared with its competitors, it is the worst stock-market performer. Meta Platforms has lost 57% of its value so far this year, surpassed only by Netflix, which is down 60%. The negative returns of Apple, Amazon, and Alphabet are much less significant: -14%, -26%, and -29%, respectively.

In short, Darwinism in technological evolution can also reach the T-Rex.


Sunday, September 11, 2022

Don't stake your projects on Google

At a time when the top of the pyramid of providers of technology, infrastructure, and software development holds a very small number of participants (Microsoft, AWS (Amazon), Google (Alphabet), Oracle, Facebook (Meta)), the reliability of their services should be fundamental. What actually operates, however, is monopolistic management of market evolution and supply. It is very common to see a small company stand out for a couple of years in a market niche until it is bought by some prominent member of the pyramid. And this does not mean that the differentiating discovery of that company will be put to multiplying use by the buyer. More likely it will be shunted onto a dead track within another couple of years. The sellers celebrate the deal, and those who had trusted the startup and adopted its product are probably lost.

Within this frame, Google stands out in one respect in particular: researching, offering something novel in some market area, pushing it and getting thousands of adopters enthusiastic, and then, from one day to the next, announcing that the product, process, or whatever it is will be discontinued the following year. And the thousands of enthusiastic users, the ones who were demonstrating how important the new thing was, the early birds, have to start planning, at a loss, how to get out of the corral with the least possible damage. Google Cloud IoT service is its most recent display of arbitrariness in handling the market and its customers. It is remarkable to visit the product page, where its services and their great value are described, while the first line of the page carries a banner warning that the service ends on August 16, 2023.

At InfoQ, where I saw this news, it is put this way:

Google Cloud IoT Core is a fully-managed service that allows customers to connect, manage, and ingest data from millions of globally dispersed devices quickly and securely. Recently, Google announced discontinuing the service - according to the documentation, the company will retire the service on the 16th of August, 2023. 

The company released the first public beta of IoT Core in 2017 as a competing solution to the IoT offerings from other cloud vendors – Microsoft with Azure IoT Hub and AWS with AWS IoT Core. In early 2018, the service became generally available. Now, the company emailed its customers with the message that "your access to the IoT Core Device Manager APIs will no longer be available. As of that date, devices will be unable to connect to the Google Cloud IoT Core MQTT and HTTP bridges, and existing connections will be shut down." Therefore, the lifespan of the service is a mere five years.

(...) In addition, over the years, various companies have even shipped dedicated hardware kits for those looking to build Internet of Things (IoT) products around the managed service. Corey Quinn, a cloud economist at The Duckbill Group, tweeted:

I bet @augurysys is just super thrilled by their public Google Cloud IoT Core case study at this point in the conversation. Nothing like a public reference for your bet on the wrong horse.

Last year, InfoQ reported on Enterprise API and the "product killing" reputation of the company, where the community also shared their concerns and sentiment. And again, a year later, Narinder Singh, co-founder and CEO at LookDeep Health, expressed a similar view in a tweet:

Can't believe how backwards @Google @googlecloud still is with regards to the enterprise.  Yes, they are better at selling now, but they are repeatedly saying through their actions you should only use the core parts of GCP.

(...) Lastly, ClearBlade, already a Google Partner, announced a full-service replacement for IoT Core, including a migration path from Google IoT Core to ClearBlade. An option for customers; however, in the Hacker News thread a respondent, patwolf, stated:

I've been successfully using Cloud IoT for a few years. Now I need to find an alternative. There's a vendor named ClearBlade that announced today a direct migration path, but at this point, I'd rather roll my own.

How many times has this happened before? What guarantee of prospering does a business have if this is its provider's reliability? As in a car, practice "defensive driving" and know whom you are dealing with: keep a couple of escape routes and, if you can, avoid the giant.

Sunday, August 07, 2022

Todd Montgomery: Unblocked by design


Read on InfoQ, which published a presentation given at QCon Plus in November 2021. A point of view far from how I have always worked, but with arguments worth attending to. Todd Montgomery argues in favor of asynchronous process design, considering first of all that sequentiality is an illusion:

All of our systems provide this illusion of sequentiality, this program order of operation that we really hang our hat on as developers. We look at this and we can simplify our lives by this illusion, but be prepared, it is an illusion. That's because a compiler can reorder, runtimes can reorder, CPUs can reorder. Everything is happening in parallel, not just concurrently, but in parallel on all different parts of a system, operating systems as well as other things. It may not be the fastest way to just do step one, step two, step three. It may be faster to do steps one and two at the same time or to do step two before one because of other things that can be optimized. By imposing order on that we can make some assumptions about the state of things as we move along. Ordering has to be imposed. This is done by things in the CPU such as the load/store buffers, providing you with this ability to go ahead and store things to memory, or to load them asynchronously. Our CPUs are all asynchronous.

Storages are exactly the same way, different levels of caching give us this ability for multiple things to be optimized along that path. OSs with virtual memory and caches do the same thing. Even our libraries do this with the ideas of promises and futures. The key is to wait. All of this provides us with this illusion that it's ok to wait. It can be, but that can also have a price, because the operating system can de-schedule. When you're waiting for something, and you're not doing any other work, the operating system is going to take your time slice. It's also lost opportunity to do work that is not reliant on what you're waiting for. In some application, that's perfectly fine, in others it's not. By having locks and signaling in that path, they do not come for free, they do impose some constraints.

Setting the context first:

When we talk about sequential or synchronous or blocking, we're talking about the idea that you do some operation. You cannot continue to do things until something has finished or things like that. This is more exaggerated when you go across an asynchronous binary boundary. It could be a network. It could be sending data from one thread to another thread, or a number of different things. A lot of these things make it more obvious, as opposed to asynchronous or non-blocking types of designs where you do something and then you go off and do something else. Then you come back and can process the result or the response, or something like that.

How he sees synchrony:

I'll just use as an example throughout this, because it's easy to talk about, the idea of a request and a response. With sync or synchronous, you would send a request, there'll be some processing of it. Optionally, you might have a response. Even if the response is simply just to acknowledge that it has completed. It doesn't always have to involve having a response, but there might be some blocking operation that happens until it is completed. A normal function call is normally like this. If it's sequential operation, and there's not really anything else to do at that time, that's perfectly fine. If there are other things that need to be done now, or it needs to be done on something else, that's a lost opportunity.

And asynchrony:

Async is more about the idea of initiating an operation, having some processing of it, and you're waiting then for a response. This could be across threads, cores, nodes, storage, all kinds of different things where there is this opportunity to do things while you're waiting for the next step, or that to complete or something like that. The idea of async is really, what do you do while waiting? It's a very big part of this. Just as an aside, when we talk about event driven, we're talking about actually the idea of on the processing side, you will see a request come in. We'll denote that as OnRequest. On the requesting side, when a response comes in, you would have OnResponse, or OnComplete, or something like that. We'll use these terms a couple times throughout this.
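The sync/async contrast above can be sketched in a few lines of TypeScript. Everything here is illustrative, not from the talk: the function names are invented, and `setTimeout` stands in for a network round trip.

```typescript
// Synchronous: the caller can do nothing until the result is back.
function requestSync(payload: string): string {
  // ...imagine a blocking round trip here...
  return `response(${payload})`;
}

// Asynchronous: initiate the operation, register what to do when the
// response arrives (an OnResponse-style handler), and return immediately.
function requestAsync(payload: string, onResponse: (r: string) => void): void {
  setTimeout(() => onResponse(`response(${payload})`), 0);
}

const results: string[] = [];
results.push(requestSync("a"));            // blocks, then continues
requestAsync("b", (r) => results.push(r)); // returns at once
results.push("work done while waiting");   // runs before the async response
```

The point of the sketch is the last line: with the async form, the caller gets to do useful work in the gap where the sync form would simply wait.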

Montgomery's purpose is to process asynchronously and take advantage of dead time:

The key here is while something is processing or you're waiting, is to do something, and that's one of the takeaways I want you to think of. It's a lost opportunity. What can you do while waiting and make that more efficient? The short answer is, while waiting, do other work. Having the ability to actually do other stuff is great. The first thing is sending more requests, as we saw. The sequence here is, how do you distinguish between the requests? The relationship here is you have to correlate them. You have to be able to basically identify each individual request and individual response. That correlation gives rise to having things which are a little bit more interesting. The ordering of them starts to become very relevant. You need to figure out things like how to handle things that are not in order. You can reorder them. You're just really looking at the relationship between a request and a response and matching them up. It can be reordered in any way you want, to make things simple. It does provide an interesting question of, what happens if you get something that you can't make sense of. Is it invalid? Do you drop it? Do you ignore it? In this case, you've sent request 0, and you've got a response for 1. In this point, you're not sure exactly what the response for 1 is. That's handling the unexpected.
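The correlation Montgomery describes, matching each response back to its request by id and deciding what to do with one you cannot make sense of, can be sketched roughly like this (all names are invented for illustration):

```typescript
type OnResponse = (body: string) => void;

// Tracks outstanding requests by correlation id so responses can arrive
// in any order and still be matched up.
class Correlator {
  private nextId = 0;
  private pending = new Map<number, OnResponse>();
  public unexpected: number[] = []; // response ids we could not make sense of

  // Initiate a request: remember who is waiting for this id.
  send(onResponse: OnResponse): number {
    const id = this.nextId++;
    this.pending.set(id, onResponse);
    return id;
  }

  // Handle a response: match it to its request, or record it as unexpected
  // (e.g. we sent request 0 but got a response for 1).
  onResponse(id: number, body: string): void {
    const waiter = this.pending.get(id);
    if (waiter === undefined) {
      this.unexpected.push(id);
      return;
    }
    this.pending.delete(id);
    waiter(body);
  }
}
```

Whether an unexpected id is dropped, logged, or treated as a protocol error is exactly the design question the quote raises; here it is simply recorded.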

(...) This is an async duty cycle. This looks like a lot of the duty cycles that I have written, and I've seen written and helped write, which is, you're basically sitting in a loop while you're running. You usually have some mechanism to terminate it. You usually poll inputs. By polling, I definitely mean going to see if there's anything to do, and if not, you simply return and go to the next step. You poll if there's input. You check timeouts. You process pending actions. The more complicated work is less in the polling of the inputs and handling them, it's more in the checking for timeouts, processing pending actions, those types of things. Those are a little bit more complex. Then at the end, you might idle waiting for something to do. Or you might just say, ok, I'm going to sleep for a millisecond, and you come right back. You do have a little bit of flexibility here in terms of idling, waiting for something to do.
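A minimal sketch of such a duty cycle, with the steps from the quote (poll inputs, check timeouts, process pending actions, idle) injected as callbacks so the loop itself stays trivial. The structure follows the quote; every name and detail is invented:

```typescript
interface Source {
  // Polling means: go see if there is anything to do; undefined if not.
  poll(): string | undefined;
}

function dutyCycle(
  input: Source,
  checkTimeouts: () => void,
  processPending: () => void,
  running: () => boolean, // mechanism to terminate the loop
  idle: () => void,       // e.g. sleep briefly, then come right back
): string[] {
  const handled: string[] = [];
  while (running()) {
    // 1. Poll inputs; handle a message if one is available.
    const msg = input.poll();
    if (msg !== undefined) handled.push(msg);
    // 2 & 3. The more complex work usually lives here, per the quote.
    checkTimeouts();
    processPending();
    // 4. Idle only when there was nothing to do this pass.
    if (msg === undefined) idle();
  }
  return handled;
}
```

In a real system the loop would run indefinitely and the idle strategy (spin, yield, park, sleep) would be a deliberate latency/CPU trade-off; here it is just a callback.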

Frankly, these concepts seem complicated to apply in an everyday working process, and more viable when building operating-system-level software. Montgomery's interviewer (Printezis) sees it exactly that way: You did talk about the duty cycle and how you would write it. In reality, how much would a developer actually write that, instead of using a framework that will do most of the work for them?

Montgomery's reply:

(...) Beyond that, I mean, patterns and antipatterns, I think, learning queuing theory, which may sound intimidating, but it's not. Most of it is fairly easy to absorb at a high enough level that you can see far enough to help systems. It is one of those things that I think pays for itself. Just like learning basic data structures, we should teach a little bit more about queuing theory and things behind it. Getting an intuition for how queues work and some of the theory behind them goes a huge way, when looking at real life systems. At least it has for me, but I do encourage people to look at that. Beyond that, technologies frameworks, I think by spending your time more looking at what is behind a framework. In other words, the concepts, you do much better than just looking at how to use a framework. That may be front and center, because that's what you want to do, but go deeper. Go deeper into, what is it built on? Why does it work this way? Why doesn't it work this other way? Asking those questions, I think you'll learn a tremendous amount. (...)

The conversation continues and drifts into other related matters. Recommended reading, and rereading; it will be worth coming back more than once.

I see here a way of approaching processes far from the way I have usually worked, but I must admit that over the last five or six years conceptual changes have abounded, and I would say we are in a fifth or sixth generation, far from what we called the fourth generation twenty or thirty years ago. Time will tell what has proved durable and what has gone down a dead end. I am willing to listen.

 


Nightmares in the cloud

Forrest Brazeal, currently a Google Cloud employee ("An AWS Hero turned Google Cloud employee, I explore the technical and philosophical differences between the two platforms. My biases are obvious, but opinions are my own"), pointed out in July that every cloud developer's worst nightmare is a recursive call in their tests that escalates their account's bill from a few dollars or euros to "thousands" (50,000, for example). And a recursive call that generates thousands of processed calls can occur in any test:

AWS calls it the recursive runaway problem. I call it the Hall of Infinite Functions - imagine a roomful of mirrors reflecting an endless row of Lambda invocations. It’s pretty much the only cloud billing scenario that gives me nightmares as a developer, for two reasons:

  • It can happen so fast. It’s the flash flood of cloud disasters. This is not like forgetting about a GPU instance and incurring a few dollars per hour in linearly increasing cost. You can go to bed with a $5 monthly bill and wake up with a $50,000 bill - all before your budget alerts have a chance to fire.

  • There’s no good way to protect against it. None of the cloud providers has built mechanisms to fully insulate developers from this risk yet.
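To make the failure mode concrete, here is a toy simulation, not real cloud code: `invoke` stands in for whatever trigger (queue, bucket event, HTTP call) re-enters the function, and the depth guard is the kind of crude self-defense a developer can add today precisely because, as Brazeal notes, the providers offer no full insulation.

```typescript
let invocations = 0;

// A handler that (directly or indirectly) triggers itself. Without a guard,
// the fan-out stops only when the bill does.
function handler(
  event: { depth?: number },
  invoke: (e: { depth: number }) => void,
): void {
  invocations++;
  const depth = event.depth ?? 0;
  // Guard: refuse to re-trigger past a depth budget carried in the event.
  if (depth >= 3) return;
  invoke({ depth: depth + 1 });
}

// Simulate the trigger loop synchronously for illustration.
function invoke(e: { depth: number }): void {
  handler(e, invoke);
}
handler({}, invoke);
// invocations is now 4 (depths 0 through 3) instead of unbounded.
```

A depth counter in the event payload is a blunt instrument (it does nothing for fan-out via shared resources), but it illustrates why "hall of infinite functions" incidents are so fast: each invocation is itself a fresh trigger.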

Brazeal points to an incident described in detail by its own victims (We Burnt $72K testing Firebase + Cloud Run and almost went Bankrupt) that gives an idea of the problem. In this case the bill went from a potential 7 dollars to 72,000...

Sudeep Chauhan, the protagonist of that incident, later wrote, after putting his house in order, a list of recommendations for working with a cloud service provider.

Note: Renato Losio, on InfoQ, mentions and extends Brazeal's article, recalling another piece by Brazeal devoted to the AWS free tier.


Saturday, August 06, 2022

You probably don't need microservices

Matthew Spence, on ITNEXT, against the enormous wave of microservices hype, develops a consistent set of arguments relativizing the importance and necessity of microservices (You don't need microservices). I will only highlight the argument about the supposed simplicity of microservices and the advantages derived from it:

"Simpler, Easier to Understand Code"

This benefit is at best disingenuous, at worst a bald-faced lie.

Each service is simpler and easier to understand. Sure. The system as a whole is far more complex and harder to understand. You haven’t removed the complexity; you’ve increased it and then transplanted it somewhere else.

(...) Although microservices enforce modularization, there is no guarantee it is good modularization. Microservices can easily become a tightly coupled “distributed monolith” if the design isn’t fully considered.

(...) The choice between monolith and microservices is often presented as two mutually exclusive modes of thought. Old school vs. new school. Right or wrong. One or the other.

The truth is they are both valid approaches with different trade-offs. The correct choice is highly context-specific and must include a broad range of considerations.

The choice itself is a false dichotomy and, in certain circumstances, should be made on a feature-by-feature basis rather than a single approach for an entire organization’s engineering team.

Should you consider microservices?

As is often the case, it depends. You might genuinely benefit from a microservices architecture.

There are certainly situations where they can pay their dues, but if you are a small to medium-sized team or an early-stage project:

No, you probably don’t need microservices.

 

Sunday, July 31, 2022

Liam Allan talks about Node on IBM i

Liam Allan, like Scott Klement before him, has given a formidable boost to the IBM i (AKA AS/400, iSeries), exploring, popularizing, and exploiting the successive technological changes the platform has seen over the years. Liam's comments on Node come from the interview Charles Guarino conducted with him at TechChannel. Liam's involvement, though recent, has meant radical changes in how the IBM i is approached, starting with its program editor. It must be said that the environment and practices around the IBM i have historically been rather conservative, fitting for a class of machines that used to be the core of processing at the companies using them. Guarino says on this point: I still think there’s still a lot of newbies—even the most seasoned RPG developers are still newbies—and open-source makes them nervous, perhaps because it’s a whole different paradigm, a whole different vernacular. Everything about it is different, yet obviously there are so many similarities, but the terminology is very different. Klement and those who followed him, and now Allan, have represented a renewal and modernization that is more than convenient: necessary.

For my part, I keep turning over its use with Plex. Klement has already strengthened that integration with his proposals for integrating the Java and C/C++ languages through ILE.

What he said about Node:

Charlie: (...) So Liam, I do have a lot of things that I want to talk to you about, but when I think of you lately what comes to my mind is Node. I mean I kind of associate you with just Node and how you really are really running with that technology, especially on IBM i, but I think there are a lot of people who don’t quite understand where that fits in, what Node actually is and how it fits on your platform. So what can you say about that in general?

Liam: Absolutely. So I mean, there’s a few points to be made. I guess I’ll start with the fact that you know, it is 80% of my working life is writing typescript and Javascript. So I spend most of my days in it now, which is great. A few years ago, it was more like 50% and each year it’s growing more and more. So I usually focus on how it can integrate with IBM i. So you know having Node.js code, whether it’s typescript or Javascript talking to IBM i via the database—so, calling programs, fetching data, updating data; you know, the minimal standard kind of driver type stuff that you do, crud, things like that. What I especially like about Node on IBM i is that it is made for high input/outputs. It’s great at handling large volumes of data and most people that are using IBM i tend to have tons of data, right? Db2 for i has been around for centuries at this point; it’s older than I am, and I can make that joke. No one else can make that joke but I can make it and you know it’s been around for the longest time. And so people have got all of this data and in my opinion Node.js is just a great way to express that data—you know, via an API. I think it’s fast. It’s got high throughput and yeah, it’s asynchronous in its standard. It’s easy to use, it’s easy to deploy, it’s easy to write code for especially. One of the reasons I like it is the fact that I can have something working within 20 minutes. It’s a fantastic piece of technology and it’s been out for a while. I mean it’s been out for like 10 years, 10 years plus at this point. It’s just fun to use. I really enjoy it and I encourage other people to use it too.
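As a toy illustration of "expressing that data via an API", here is a minimal Node HTTP endpoint in TypeScript using only the built-in `http` module. The customer rows are invented stand-ins for data that would really come from Db2 for i; the routing logic is kept as a pure function so it is easy to test.

```typescript
import { createServer } from "node:http";

// Stand-in rows; in practice these would be fetched from Db2 for i.
const customers = [
  { id: 1, name: "Acme" },
  { id: 2, name: "Globex" },
];

// Pure routing logic: map a path to a status and JSON body.
function route(path: string): { status: number; body: string } {
  if (path === "/customers") {
    return { status: 200, body: JSON.stringify(customers) };
  }
  const m = path.match(/^\/customers\/(\d+)$/);
  const found = m && customers.find((c) => c.id === Number(m[1]));
  if (found) return { status: 200, body: JSON.stringify(found) };
  return { status: 404, body: JSON.stringify({ error: "not found" }) };
}

const server = createServer((req, res) => {
  const { status, body } = route(req.url ?? "/");
  res.writeHead(status, { "content-type": "application/json" });
  res.end(body);
});
// server.listen(3000);  // left commented: this is a sketch, not a service
```

The "working within 20 minutes" claim in the interview is plausible exactly because this is the whole skeleton: swap the in-memory array for a database driver call and the endpoint is real.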