Saturday, February 28, 2015

On Lenovo and Superfish


The scandal over Lenovo and its preinstalled Superfish adware generated a backlash comparable to that produced by the staggered revelations of interference by the American NSA. As has been said, it "is quite possibly the worst thing I have seen a manufacturer do to its customer base" (Mark Rogers). In his words:
We trust our hardware manufacturers to build products that are secure. In this current climate of rising cybercrime, if you can’t trust your hardware manufacturer, you are in a very difficult position. That manufacturer has a huge role to play in keeping you safe – from releasing patches to update software when vulnerabilities are found to behaving in a responsible manner with the data they collect and the privileged access they have to your hardware.
When bad guys are able to get into the supply chain and install malware, it is devastating. Often users find themselves with equipment that is compromised and are unable to do anything about it. When malware is installed with the access a manufacturer has, it buries itself deep inside the system – often with a level of access that takes it beyond the reach of antivirus or other countermeasures. This is why it is all the more disappointing – and shocking – to find a manufacturer doing this to its customers voluntarily.
Without dwelling on what is already well known: Lenovo shipped its laptops with preinstalled software, Superfish among it. This product intercepted the user's searches and inserted its own advertisements. That alone is annoying and unpleasant, and anti-adware tools deal with this kind of product. But the bigger problem is that, to achieve this, Superfish replaces the security certificates of the sites the user visits with one of its own, creating the conditions under which no page can be trusted over an SSL connection. Mark Rogers gives a detailed report of the resulting vulnerability (and also of how to fix it). What can a user do when their vendor betrays them and hands them over bound hand and foot?
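The detection tests published during the incident worked by checking whether the certificate a site presented was issued by the Superfish CA rather than by the site's real authority: on an affected machine, every HTTPS site appears to be signed by Superfish. As a rough illustration (not any of the actual tests), here is a sketch in Python using the standard `ssl` module; the helper names are my own, and the issuer string is the organization name reported in the public analyses:

```python
import socket
import ssl

# Organization name on the Superfish root CA, as reported
# in the public analyses of the incident.
SUPERFISH_ISSUER_ORG = "Superfish, Inc."

def issued_by_superfish(cert):
    """Check whether a certificate dict, in the format returned by
    ssl.SSLSocket.getpeercert(), was issued by the Superfish CA."""
    for rdn in cert.get("issuer", ()):
        for key, value in rdn:
            if key == "organizationName" and SUPERFISH_ISSUER_ORG in value:
                return True
    return False

def check_host(host, port=443):
    """Fetch a host's TLS certificate and test its issuer.
    Returns True if the connection is being intercepted by Superfish."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return issued_by_superfish(tls.getpeercert())
```

On a clean machine `check_host` would in fact fail earlier, because the default context rejects the Superfish certificate as untrusted; the point of the incident is that on an affected laptop the Superfish root was installed in the system trust store, so the connection succeeds and the issuer check is what reveals the interception.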
The Free Software Foundation's response points in the same direction:
Whenever you use proprietary software like Windows or Superfish, true, trustable, verifiable security is always out of reach. Because proprietary code can't be publicly inspected, there's no way to validate its security. Users have to trust that the code is safe and works as advertised. Since proprietary code can only be modified by the developers who claim to own it, users are powerless to choose the manner in which security bugs are fixed. With proprietary software, user security is secondary to developer control.
In many scenarios it is very hard to avoid proprietary software. Nevertheless, the growing intrusions into user privacy warrant weighing two and three times the possibility of taking another path, against the risk of placing yourself in the hands of someone who may not respect the buyer's rights.

Highlights: Mark Rogers's explanation of the Superfish security hole.
Filippo Valsorda's explanation, and his Superfish risk test.
Robert Graham's research on Superfish and its flaw.
David Auerbach's commentary on the impact of this move on Lenovo.




Sunday, February 22, 2015

James Ward on Java

James Ward, of Salesforce, wrote a note last December on Java (and its surroundings). An excellent read on Java for web applications, and surely adaptable to other scenarios. It is in fact an experience-based recommendation about adopting agile methods and turning to tools that automate and systematize the process of building (and deploying) applications. I don't want to repeat it, only recommend it. In any case, I would like to quote his reflection on maintaining "monolithic" releases (working toward deliveries on long time and development cycles):

Monolithic Releases Suck

Unless you work for NASA there is no reason to have release cycles longer than two weeks. It is likely that the reason you have such long release cycles is because a manager somewhere is trying to reduce risk. That manager probably used to do waterfall and then switched to Agile but never changed the actual delivery model to one that is also more Agile. So you have your short sprints but the code doesn’t reach production for months because it would be too risky to release more often. The truth is that Continuous Delivery (CD) actually lowers the cumulative risk of releases. No matter how often you release, things will sometimes break. But with small and more frequent releases fixing that breakage is much easier. When a monolithic release goes south, there goes your weekend, week, or sometimes month. Besides… Releasing feels good. Why not do it all the time?
Moving to Continuous Delivery has a lot of parts and can take years to fully embrace (unless like all startups today, you started with CD). Here are some of the most crucial elements to CD that you can implement one-at-a-time:
  • Friction-less App Provisioning & Deployment: Every developer should be able to instantly provision & deploy a new app.
  • Microservices: Logically group services/apps into independent deployables. This makes it easy for teams to move forward at their own pace.
  • Rollbacks: Make rolling back to a previous version of the app as simple as flipping a switch. There is an obvious deployment side to this but there is also some policy that usually needs to go into place around schema changes.
  • Decoupled Schema & Code Changes: When schema changes and code changes depend on each other rollbacks are really hard. Decoupling the two isolates risk and makes it possible to go back to a previous version of an app without having to also figure out what schema changes need to be made at the same time.
  • Immutable Deployments: Knowing the correlation between what is deployed and an exact point-in-time in your SCM is essential to troubleshooting problems. If you ssh into a server and change something on a deployed system you significantly reduce your ability to reproduce and understand the problem.
  • Zero Intervention Deployments: The environment you are deploying to should own the app’s config. If you have to edit files or perform other manual steps post-deployment then your process is brittle. Deployment should be no more than copying a tested artifact to a server and starting its process.
  • Automate Deployment: Provisioning virtual servers, adding & removing servers behind load balancers, auto-starting server processes, and restarting dead processes should be automated.
  • Disposable Servers: Don’t let the Chaos Monkey cause chaos. Servers die. Prepare for it by having a stateless architecture and ephemeral disks. Put persistent state in external, persistent data stores.
  • Central Logging Service: Don’t use the local disk for logs because it prevents disposability and makes it really hard to search across multiple servers.
  • Monitor & Notify: Setup automated health checks, performance monitoring, and log monitoring. Know before your users when something goes wrong.
There are a ton of details to these that I won’t go into here. If you’d like to see me expand on any of these in a future blog, let me know in the comments.
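Ward's "zero intervention deployments" point, that the environment owns the app's config, is the same idea as twelve-factor configuration: the application reads everything environment-specific from environment variables, so the same tested artifact can be promoted from staging to production without editing any files. A minimal sketch in Python, where the variable names and defaults are hypothetical:

```python
import os

def load_config(env=None):
    """Build the app's configuration entirely from environment
    variables, so a deployed artifact never needs post-deployment
    edits. Falls back to development defaults when a variable
    is unset."""
    if env is None:
        env = os.environ
    return {
        "database_url": env.get("DATABASE_URL", "postgres://localhost/dev"),
        "http_port": int(env.get("PORT", "8080")),
        "log_level": env.get("LOG_LEVEL", "INFO"),
    }

# The deployment environment, not the artifact, decides the values:
production = load_config({"PORT": "9000", "LOG_LEVEL": "WARN"})
```

Because `load_config` takes the environment as a parameter, the same code also supports Ward's rollback point: rolling back is just starting the previous artifact in the same environment, with no config files to reconcile.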
An important point, but recalled here only so that you read James Ward's reflection in full, which it deserves.