Wednesday, March 28, 2007

More on Poppendieck and Lean Manufacturing

Following up on the previous post about the Lean Software concepts explained by Tom and Mary Poppendieck, it is worth going back to the article they wrote for Dr. Dobb's in 2001, rewritten on their own site with some variations (both versions can be consulted). There they develop clearly the ideas they inherited from Lean Manufacturing, and also help explain, from the software point of view, the quality techniques coined by the Japanese over the last 50 years. It is natural that Poppendieck, as an IT manager at a manufacturer like 3M, could draw conclusions and generalizations for software processes from industrial processes; something that is sometimes resisted, here explained with clarity.
In a January 2001 article in Harvard Business Review titled ‘Strategy as Simple Rules’, Kathleen Eisenhardt describes how smart companies thrive in a complex business environment by establishing a set of simple rules which define direction without confining it.[2] She suggests that instead of following complex processes, using simple rules to communicate strategy is the best way to empower people to seize fleeting opportunities in rapidly changing markets.

The 1980’s were a time of profound change in US manufacturing, and the change was guided by a set of simple rules. Simple rules gave guidance to every level of the organization, and got everyone on the same sheet of music. They empowered people at all levels of the organization, because they provided guidance for making day-to-day decisions. With simple rules, work teams were able to continuously improve the processes and products without detailed guidance or complex processes.

The basic practices of Lean Manufacturing and TQM in the 1980’s might be summed up in these ten simple rules:

1. Eliminate Waste
2. Minimize Inventory
3. Maximize Flow
4. Pull From Demand
5. Empower Workers
6. Meet Customer Requirements
7. Do it Right the First Time
8. Abolish Local Optimization
9. Partner With Suppliers
10. Create a Culture of Continuous Improvement

These Lean Manufacturing rules have been tested and proven over the last two decades. They have been adapted to logistics, customer service, health care, finance, and even construction. The application of the rules may change slightly from one industry to the next, but the underlying principles have stood the test of time in many sectors of the economy.

Lean Programming

Recent work in Agile Methodologies, Adaptive Software Development, and Extreme Programming has in effect applied the simple rules of Lean Manufacturing to software development. The results, which we call Lean Programming, are as dramatic as the improvements in manufacturing brought on by the Just-in-Time and Total Quality Management movements of the 1980’s.

Sunday, March 25, 2007

McAfee: A map of Internet risk

McAfee has produced a study of the degree of risk involved in browsing the Internet, classifying different kinds of problems:

Online safety risks are a truly global issue. Yet differences in threats vary significantly by country and other factors, for example:

  • A consumer is almost 12 times more likely to encounter a drive-by download while surfing Russian domains than Colombian ones.
  • Registering at a Web site in India results in a 4.3% chance of getting spammy e-mail. Taking the same action with a domain registered in China yields a 7.2% chance.
  • 5.2% of Vietnamese Web sites have risky downloads. Just 0.5% of Singaporean sites host such files.
  • 2.7 million times every month, casual Web surfers visit risky Dutch Web sites. Even though Hong Kong has approximately the same percentage of risky Web sites, those risky domains receive just 52,000 clicks each month.

Geographic differences give rise to important big-picture questions. For example, what level of online risk is involved as an ever greater percentage of the world's population moves online and users seek out Web sites in their native languages? Are there relatively safer or riskier country domains? Should online consumers factor in this information when searching and surfing? Can the malicious Web be mapped in a way that is interesting and useful?

McAfee can help answer these questions thanks to data from SiteAdvisor. SiteAdvisor tests web sites to keep consumers safe from spyware, spam, viruses and scams. SiteAdvisor is a free download for use with Internet Explorer and Firefox.

See the map at SiteAdvisor.
Other analyses performed:
Search engine safety.
Ranking of spyware sources.

SiteAdvisor, in their own words:
SiteAdvisor was founded in April 2005 by a group of MIT engineers who wanted to make the Internet a safer place for their family and friends. After spending too many holidays cleaning tangles of spam, adware and spyware off our families' computers, we decided to take action.
We realized there was a gap in existing Internet security products. While traditional security companies have done a relatively good job of dealing with technical threats such as viruses, they were failing to stop a new class of "social engineering" tricks such as spyware infections, identity-theft scams and sites that send excessive amounts of e-mail.
To meet this challenge, we built a system of automated testers that constantly patrol the Web, browsing sites, downloading files and entering information into sign-up forms. We document all the results and supplement them with feedback from our users, comments from Web site owners and analysis by our own staff.
Our easy-to-use software for Internet Explorer and Firefox summarizes our safety results into intuitive red, yellow and green ratings to help Web users stay safe as they search, browse and transact online. Our goal is to pioneer a new approach to Web safety and make the Internet a safer place for everyone.
On April 5, 2006 we announced that McAfee, Inc. had acquired us. Joining McAfee gave us far greater worldwide reach, access to McAfee's leading security technology and many more resources to accelerate the development of features for our many users.
SiteAdvisor has drawn criticism, which can be studied on Wikipedia.
[Seen at Terra].

Saturday, March 24, 2007

Mary Poppendieck: how we learned from Japan

This interview has been in the del.icio.us link collection since February, in the version published at InfoQ. However, Peter Abilla's written version is more convenient for non-native English readers (us, and everyone else). In it, Mary and Tom Poppendieck explain clearly everything that they, and through them many others, owe to Toyota's methods, extended and shared by Japanese industry at large. Mary (and Tom) explains how she applied the principles of the Toyota Production System (TPS) [1] and its generalizations to building software. The story Mary tells also covers how the United States lost competitiveness to Japan, how it absorbed the lesson, and how far its recovery has come.
Some of their statements:
Question: Where does Lean come from?
Mary: Lean comes from the Toyota Production System which was invented at Toyota for automotive manufacturing in the late 1940s, 1950s, 1960s. We didn’t actually discover how it was working in the US until perhaps the early 1980s, when the idea of "Just-In-Time" manufacturing started competing against other US products, so we started seeing Toyota and other Japanese cars taking market share away. In my industry, which was 3M, we were making video cassettes and all of a sudden we found that Japanese competitors were selling video cassettes for a third of what we could sell them for, and less than we could make them for. We were trying to figure out what caused that. It turns out that this concept of Just-In-Time manufacturing was strongly behind what was going on, and later the concept became known as Lean manufacturing.
Asked what Lean is, Mary explains it in its original industrial application:
Mary: We’ll talk about Lean in general. The Lean history starts over here in manufacturing. Here the first thing Lean was known as was "the Toyota Production System" (it was the way Toyota learned how to manufacture cars). That became known as Just-In-Time, and that’s how it was known when it came to the US and Europe. Then, in 1990 a book was published, "The Machine that Changed the World" - the Story of Lean Production. That’s where the term Lean production came from. They’re all basically the same thing. They’re a way of thinking about manufacturing that allows you to do rapid manufacturing with very low inventory and high variability. Then, if we come down this path there’s something in logistics (warehousing, moving materials between companies) called supply chain management (SCM). SCM is the way that you use Lean in the logistics area. And if we come over here there’s a whole other area which I’ll call Product Development, which is very different from manufacturing and logistics. We now apply Lean thinking principles (not the practices, but the principles) from manufacturing and logistics into Product Development. When you apply Lean into Product Development you get a different way of looking at it. And I believe that software development is a subset of, is like, it’s part of Product Development. When you apply Lean to software development you take the general principles of the Toyota Production System, or Lean, and you apply them in the Product Development environment, but you don’t use the exact practices that you would use in manufacturing. You have to go back to the first principles of what you’re trying to accomplish and move those into software development.
(...) The main principles behind Lean were articulated by Taiichi Ohno, the person at Toyota who invented the Toyota Production System. The first principle would be the idea of Flow (or Low Inventory, or Just-in-Time). The second one is what I would have to call "expose problems" or "no workarounds". The idea is that you have flow and you have low inventory: it’s like you have a boat on the water and your boat is sailing above these big rocks, which are problems. This is your inventory level here. If you lower your inventory level, at some point you’re going to run into a rock and your boat is going to come down and bump into it. If you don’t get rid of these rocks when you lower your inventory you’re going to bump into rocks, and crash and burn. The first thing you’re doing is lower your inventory to expose those problems so that you can get rid of them. If you don’t expose problems and stop and fix your problems, then lowering your inventory is just going to crash your boat. You have to do two things: you have flow but you also have no tolerance for abnormality. You just don’t allow defects into your system; you don’t allow things to go wrong. When something wrong happens you stop, you figure out what is causing it, you fix it rather than continuing on and just ignoring it or working around it. These are the two basic principles.
(...) I have seven principles that are the foundation of the way to look at Lean in software development. The first principle is to eliminate waste. The second is to amplify learning. The third is to delay commitment. The fourth is to deliver fast. The fifth is to build integrity into the product. You can’t test it in later; you have to build it with integrity in the first place. The sixth is to engage the intelligence of the people who are doing the work. The last is to optimize the whole system, not just part of it.
What "inventory" means in software development:
Mary: In software development inventory is anything that you’ve started and you haven’t gotten done. It’s "partially done" work. In manufacturing if you start making something and it is in-process, it’s not sold, it is inventory. In development it’s the same thing. If you started developing something and it’s not done, it is inventory. What you’re trying to do in Lean software development is have as little "partially done" work as possible. You want to go from understanding what you’re supposed to do to having it done and deployed and in somebody’s hands as rapidly as possible.

Tom: "Done" means coded, tested, documented, integrated, "Done", so there’s no more work to do. It is one of the main reasons why Lean approach gives you a more reliable delivery, a more trustworthy delivery. So that instead of finishing activities like requirements, which doesn’t tell you much about how far you are along overall, each piece of work that you do is completely done.

Qué significa "desperdicio" en el desarrollo de software:
Waste is anything that the customers do not value. If you’re doing something and in the end the customers don’t think it is important for them, then it is waste. There are lots of ways to look at waste. One is: partially done work is waste. Just like in manufacturing, inventory is waste; in software development partially done work is waste. Also, extra features you don’t need right now are waste; stuff that causes delays is waste; things that get in the way of a rapid flow of product are waste. Customers, when they have a problem, want the problem solved now, and anything that delays that is waste.
Qué significa "demorar el compromiso" (delayed commitments):

[1. Anticipated while discussing her method in practice:]

Well, when you have requirements churn, meaning you’ve done some requirements work and then it needs to be changed, that means you’ve written your requirements too soon. If you don’t wait until it’s time to write those requirements (that’s what I mean by "delayed commitment") that’s when you’re going to have a lot of changes in requirements. On the other hand say you get to testing and you find errors - oh my goodness! Now you have to go back and code-and-fix, and code-and-fix. If you have that cycle in your value stream you’re testing too late. The idea is to move the detailed requirements closer to the coding, the testing closer to the coding, and do it in smaller chunks.

[2. Answered specifically:]

Mary: The idea of delayed commitments is not to decide until you have the most information that you possibly can have. Then you make decisions based on that. It’s the idea that (not that you procrastinate and never make decisions) but that you schedule decisions for when you have the most information. For example, pilots are trained that when they have to make a tough choice, they should decide when they have to decide, and not decide until then, because that’s when they have the most information. The military paradigm: when you’re threatened, decide when you need to respond and don’t respond ahead of that; wait until that time because then you have the most information. But don’t wait any longer than that. So, it is the idea of scheduling decisions for the last moment when they need to be made, and not making them before that, because then you have the most information.
For example, let’s take user interface. When do you really need to design a user interface? Oftentimes it drives the whole design, but in fact you don’t really need it until you’re about to do your first alpha test. Before that you can be designing the business layer, and you can actually put testing in below the user interface and be designing all of the other business logic; you can get that done with any kind of interface, and in fact you can drive testing with an automated interface, and then just before you go to alpha testing you decide what you want for your user interface. Then you take it off and at that point in time you figure it out. But up until that point in time you don’t need it. And there are many decisions in software development where there’s this idea that we have to design the whole thing before we get started. But the fundamental Lean principle in product development is that we should not make any design decisions until we absolutely have to. We really do not want to have decided anything until we need to; and so at the point in time of the decision we should still have 3 options. Still have a couple of middleware options, or still have not decided how we’re going to do the user interface. You wait until you have the most possible information before you make decisions. So, the idea of having a complete detailed spec at the beginning is the exact opposite of the Lean commitment.

Tom: At the beginning is the time of maximum ignorance about what you really need. It is hardly the time to make your key commitments. Most of the cost is incurred when you make the first commitments. If you can delay that, you can do a much better job.

Mary: You want irreversible decisions, decisions which you can’t change, to be made as late as you possibly can. Otherwise, if you make them early you’re going to lock in the decision, because you’re going to build stuff on it, and you didn’t need to make it yet. And if you wait, you leave options open so that you can choose later exactly what you need to do. That’s just basically a good design heuristic.
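Mary's user-interface example maps directly onto code: if the business rules are written and tested below the UI, through plain programmatic calls, the presentation decision can stay open until just before alpha. A minimal Python sketch of the idea; the domain (order quoting), names and figures are mine, not the interview's:

```python
# Minimal sketch of "testing below the user interface": the business
# layer is finished and verifiable while the UI decision stays open.
# The domain (order quoting), names and figures are illustrative.

from dataclasses import dataclass


@dataclass
class Order:
    quantity: int
    unit_price: float


def quote_total(order: Order, discount_rate: float = 0.0) -> float:
    """Business rule under test; it knows nothing about any UI."""
    if not 0.0 <= discount_rate < 1.0:
        raise ValueError("discount_rate must be in [0, 1)")
    return order.quantity * order.unit_price * (1.0 - discount_rate)


def test_quote_applies_discount():
    # An automated interface (plain function calls) drives the logic;
    # no screen or web page needs to exist yet.
    assert quote_total(Order(quantity=10, unit_price=2.0), 0.5) == 10.0


if __name__ == "__main__":
    test_quote_applies_discount()
    print("business layer verified without committing to a UI")
```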

There is more in the interview: comparisons with XP, RUP and CMM, which deserve reading.
Better yet, you can also look up Mary's answers to twelve reader questions gathered by Abilla.

[1] The reference is to the English version of Wikipedia, more complete than the Spanish one.

Friday, March 23, 2007

Growth of Internet access in Argentina - II

Complementing INDEC's statistics, Cisco has published its figures on broadband use in Argentina, reported in several outlets. I take the article from América Economía:
Broadband Internet connections grew 66% in Argentina during 2006, when more than 600,000 new connections were added, according to a study presented by Cisco.
Over the past year ADSL connections grew 76% and cable modem connections 50%, since "this is a sector aimed at the most massive segment, which is households," said Romina Ducci, Cisco's representative in the country.
During the second half of 2006, total broadband connections grew 29.9% over the same period a year earlier, bringing connections of this kind in Argentina to more than a million and a half.
Although there are 10.7 million households in the country, only 15.7% have broadband connections, "a worrying percentage, and one we must manage to raise," Ducci said.
According to the study, Internet connections vary with geography: ADSL and cable modem predominate in urban areas, satellite access in mining and oil regions.
Satellite lines serve a minority of users, but in 2006 they grew 25% year on year because "they are the solution for overcoming deficiencies in terrestrial infrastructure," Ducci noted.
The city of Buenos Aires is the district with the most Internet connections in Argentina, more than 600,000.
Latin America has nine million broadband connections, and Argentina is among the countries with the highest penetration relative to total population, 4.1% in 2006, according to the Cisco report.
A graphic summary of the Cisco and INDEC data is at Educ.ar.

Wednesday, March 21, 2007

John Backus dies


This Saturday, March 17, John Backus died at his home in Oregon, at age 82. Another of the founders is gone, one of those who laid the groundwork for the computing we have today. I would like to reproduce a few paragraphs from the obituary the New York Times dedicated to him:

John W. Backus, who assembled and led the I.B.M. team that created Fortran, the first widely used programming language, which helped open the door to modern computing, died on Saturday at his home in Ashland, Ore. He was 82.
His daughter Karen Backus announced the death, saying the family did not know the cause, other than age.
Fortran, released in 1957, was “the turning point” in computer software, much as the microprocessor was a giant step forward in hardware, according to J. A. N. Lee, a leading computer historian.
Fortran changed the terms of communication between humans and computers, moving up a level to a language that was more comprehensible by humans. So Fortran, in computing vernacular, is considered the first successful higher-level language.
Mr. Backus and his youthful team, then all in their 20s and 30s, devised a programming language that resembled a combination of English shorthand and algebra. Fortran, short for Formula Translator, was very similar to the algebraic formulas that scientists and engineers used in their daily work. With some training, they were no longer dependent on a programming priesthood to translate their science and engineering problems into a language a computer would understand.
In an interview several years ago, Ken Thompson, who developed the Unix operating system at Bell Labs in 1969, observed that “95 percent of the people who programmed in the early years would never have done it without Fortran.”
He added: “It was a massive step.”
Fortran was also extremely efficient, running as fast as programs painstakingly hand-coded by the programming elite, who worked in arcane machine languages. This was a feat considered impossible before Fortran. It was achieved by the masterful design of the Fortran compiler, a program that captures the human intent of a program and recasts it in a way that a computer can process.
In the Fortran project, Mr. Backus tackled two fundamental problems in computing — how to make programming easier for humans, and how to structure the underlying code to make that possible. Mr. Backus continued to work on those challenges for much of his career, and he encouraged others as well.
“His contribution was immense, and it influenced the work of many, including me,” Frances Allen, a retired research fellow at I.B.M., said yesterday.
Mr. Backus was a bit of a maverick even as a teenager. He grew up in an affluent family in Wilmington, Del., the son of a stockbroker. He had a complicated, difficult relationship with his family, and he was a wayward student.
In a series of interviews in 2000 and 2001 in San Francisco, where he lived at the time, Mr. Backus recalled that his family had sent him to an exclusive private high school, the Hill School in Pennsylvania.
“The delight of that place was all the rules you could break,” he recalled.
After flunking out of the University of Virginia, Mr. Backus was drafted in 1943. But his scores on Army aptitude tests were so high that he was dispatched on government-financed programs to three universities, with his studies ranging from engineering to medicine.
After the war, Mr. Backus found his footing as a student at Columbia University and pursued an interest in mathematics, receiving his master’s degree in 1950. Shortly before he graduated, Mr. Backus wandered by the I.B.M. headquarters on Madison Avenue in New York, where one of its room-size electronic calculators was on display.
When a tour guide inquired, Mr. Backus mentioned that he was a graduate student in math; he was whisked upstairs and asked a series of questions Mr. Backus described as math “brain teasers.” It was an informal oral exam, with no recorded score. He was hired on the spot. As what? “As a programmer,” Mr. Backus replied, shrugging. “That was the way it was done in those days.”
Back then, there was no field of computer science, no courses or schools. The first written reference to “software” as a computer term, as something distinct from hardware, did not come until 1958.
In 1953, frustrated by his experience of “hand-to-hand combat with the machine,” Mr. Backus was eager to somehow simplify programming. He wrote a brief note to his superior, asking to be allowed to head a research project with that goal. “I figured there had to be a better way,” he said.
Mr. Backus got approval and began hiring, one by one, until the team reached 10. It was an eclectic bunch that included a crystallographer, a cryptographer, a chess wizard, an employee on loan from United Aircraft, a researcher from the Massachusetts Institute of Technology and a young woman who joined the project straight out of Vassar College.
“They took anyone who seemed to have an aptitude for problem-solving skills — bridge players, chess players, even women,” Lois Haibt, the Vassar graduate, recalled in an interview in 2000.
Mr. Backus, colleagues said, managed the research team with a light hand. The hours were long but informal. Snowball fights relieved lengthy days of work in winter. I.B.M. had a system of rigid yearly performance reviews, which Mr. Backus deemed ill-suited for his programmers, so he ignored it. “We were the hackers of those days,” Richard Goldberg, a member of the Fortran team, recalled in an interview in 2000.
After Fortran, Mr. Backus developed, with Peter Naur, a Danish computer scientist, a notation for describing the structure of programming languages, much like grammar for natural languages. It became known as Backus-Naur form. Later, Mr. Backus worked for years with a group at I.B.M. in an area called functional programming. The notion, Mr. Backus said, was to develop a system of programming that would focus more on describing the problem a person wanted the computer to solve and less on giving the computer step-by-step instructions. “That field owes a lot to John Backus and his early efforts to promote it,” said Alex Aiken, a former researcher at I.B.M. who is now a professor at Stanford University.
In addition to his daughter Karen, of New York, Mr. Backus is survived by another daughter, Paula Backus, of Ashland, Ore.; and a brother, Cecil Backus, of Easton, Md.
His second wife, Barbara Stannard, died in 2004. His first marriage, to Marjorie Jamison, ended in divorce.
It was Mr. Backus who set the tone for the Fortran team. Yet if the style was informal, the work was intense, a four-year venture with no guarantee of success and many small setbacks along the way.
Innovation, Mr. Backus said, was a constant process of trial and error. “You need the willingness to fail all the time,” he said. “You have to generate many ideas and then you have to work very hard only to discover that they don’t work. And you keep doing that over and over until you find one that does work.”

The photograph is taken from Wikipedia, which has a good article on Backus.
Other notes at Slashdot, IBM, and Scott Rosenberg, who will dedicate an article to him in his series on the foundations of computing. From his site I took the reference to a paper Rosenberg will be commenting on: a discussion of the foundations of von Neumann programming.

Sunday, March 18, 2007

On university rankings

A comment by Alejandro Pisanty on the accuracy of the Times Higher Education Supplement (THES) ranking, mentioned here, made it worth looking up the original source of the discussion. Pisanty comments on an article in an Australian publication that briefly describes the difficulties of building such a scale. Going back to the source, the problem appears at its real scale, and one can see the efforts under way to reach a better approximation in the comparative valuation of universities. The THES ranking has been discussed here, as has Jiao Tong's, whose organizers took part in the meeting in question. The comments were made at a symposium at Griffith University in Brisbane, Australia, which set out to study and improve the parameters for analyzing and comparing universities, locally and internationally. The discussion puts the value of these rankings in perspective, as Pisanty notes, for institutions outside the English-speaking world or, more broadly, for those that do not fit the valuation parameters used. Even so, the mismatch does not invalidate the results; it would rather require weighting them. Not even the use of English in research is disqualifying, since almost any study tends to be written in English precisely to broaden its circulation.
In the available presentation by Nian Liu, a member of the Jiao Tong university team that produces one of the rankings, not only are the criteria used in their case explained (they can also be consulted on the university's own site), but the problems of the classification are also weighed:
Education is the basic function of any university; however, it would be impossible to rank the quality of education due to the huge differences among national systems.
Contribution to national economic development is becoming increasingly important for universities; however, it is impossible to obtain internationally comparable indicators and data.
(however...) The academic or research performance of universities, a good indication of their reputation, can be ranked internationally.
(...) Many well-known institutions specialized in humanities and social sciences are ranked relatively low.
(...) English is the language of the international academic community.
(...) Any ranking based on academic performance will be biased towards institutions in English-speaking countries.
Possible solution: papers published in non-native languages are offered a special weight.
(...) Universities which started after 1911 do not have a fair chance.
(...) Disciplines not related to the awarding fields do not have a fair chance. Other important awards include Abel, Pulitzer, Turing, Tyler, etc.
(...) Institutions for winning awards and those for doing the research may not be the same.
(...) Institutions for obtaining degrees and those for pursuing the studies may not be the same.
Postdoctoral training is not considered.
(...) The weight of the Size indicator for per capita performance is rather low. Large institutions have relatively high positions in the ranking.
However, it’s very difficult to obtain internationally comparable data on the number of academic staff.
The types of academic staff: such as purely teaching staff, teaching and research staff, purely research staff.
The ranks of academic staff: such as professor, associate professor, reader, lecturer, research scientist etc.

Liu mentions some of the problems inherent in pinning down material as varied as university institutions; a sketch of how such name variants can be normalized follows the list:
Many universities have more than one commonly used name: such as Virginia Tech and Virginia Polytechnic Institute and State University.
Variations due to translation: such as Univ Koln and Univ Cologne, Univ Vienna and Univ Wien.
Abbreviated names: such as ETH Zurich for Swiss Federal Institute of Technology Zurich.
Some authors only write their departmental or institute names without mentioning their university names.
University systems: such as Univ California system, Univ London system.
Affiliated institutions and research organizations: such as Ecole Polytechnique Montreal (affiliated to University of Montreal), CNRS Labs (affiliated to French universities).
Teaching and affiliated hospitals: complex!
Merging, splitting, inheriting, discontinuing, name-changing of institutions such as:
Univ Kwazulu-Natal in South Africa merged from Univ Natal and Univ Durban-Westville.
University of Innsbruck in Austria split into Univ Innsbruck and Innsbruck Medical Univ.
Humboldt Univ Berlin and Free Univ Berlin inheriting the Nobel Prizes of the Berlin University before World War II.
Vrije Universiteit Brussel and Université Libre de Bruxelles share the same English name of Free University of Brussels.
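From a data-cleaning standpoint, the usual first step against this is an alias table that maps every known variant onto one canonical record before papers or prizes are counted. A minimal Python sketch of that idea, using some of Liu's own examples; the code is illustrative, not the Jiao Tong team's actual tooling:

```python
# Minimal alias-normalization sketch: map the name variants Liu lists
# onto one canonical institution before aggregating indicators.
# Illustrative only; not the ranking team's actual tooling.

CANONICAL = {
    "virginia tech": "Virginia Polytechnic Institute and State University",
    "univ koln": "University of Cologne",
    "univ cologne": "University of Cologne",
    "univ wien": "University of Vienna",
    "univ vienna": "University of Vienna",
    "eth zurich": "Swiss Federal Institute of Technology Zurich",
}


def canonical_name(raw: str) -> str:
    """Return the canonical institution, or the input when unknown."""
    key = " ".join(raw.lower().split())  # trim and collapse whitespace
    return CANONICAL.get(key, raw)


# Counting with normalization merges variants that would otherwise be
# scored as two separate institutions.
affiliations = ["Univ Koln", "Univ Cologne", "ETH Zurich", "univ wien"]
counts = {}
for affiliation in affiliations:
    name = canonical_name(affiliation)
    counts[name] = counts.get(name, 0) + 1
print(counts)  # University of Cologne is counted once, with weight 2
```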
Liu's answer to these problems lies in their very identification: locating the problems allows a better approximation to the scoring.
The discussion materials can be found on the conference site.

Growth of Internet access in Argentina

Figures from INDEC, an agency much questioned these days: an 11.4% increase in the number of connections, reaching 2.5 million residential users, and 10.6% growth in offices, to 223,105 users (this last figure seems rather poor, though). 78% growth in broadband use. The item in La Nación:
(Télam).- The number of residential Internet connections rose 11.4 percent during 2006 to reach 2.5 million users, driven by greater connectivity through broadband at the expense of paid or free dial-up connections.
The National Institute of Statistics and Censuses (INDEC) reported these figures, adding that the number of office connections also rose 10.6 percent last year, to a total of 223,105 users.
The advance in residential connections rested mainly on easier financing for buying computers and on promotional plans for reaching the Net over broadband.
In this context, broadband connections grew 78.2 percent over 2005, while dial-up (telephone) accounts fell 19.4 percent.
The number of free accounts also declined during 2006, by 14.2 percent, to 727,452 users.
Among businesses, broadband use advanced 22.0 percent, which meant an 11.1 percent drop in the use of dial-up and a 9.6 percent drop in free services.
In parallel, e-commerce generated from Argentine Web sites passed the 10 billion peso mark, the product of transactions by more than 5 million people who bought a product or contracted a service over the Internet in 2006.
With these figures, growth above 100 percent was recorded for the fourth consecutive year. In 2005, online sales had come to 4.8 billion pesos, the Argentine Chamber of Electronic Commerce reported.

Friday, March 16, 2007

Cisco buys WebEx

The news in Technology Review:

Santa Clara-based WebEx makes applications that enable online group meetings and secure instant messaging.
"As collaboration in the workplace becomes increasingly important, companies are looking for rich communications tools to help them work more effectively and efficiently," Charles H. Giancarlo, Cisco's chief development officer, said in a statement. "The combination of Cisco and WebEx will deliver compelling solutions accelerating this next wave of business communications."

WebEx offers an intercommunication service extensible to remote collaborative work, with excellent features. With Cisco behind it, the concept could become a powerful one.

Wednesday, March 14, 2007

Configuration Management from the CMMI point of view

Eric Mariacher has published a short piece on configuration management and change management in the context of CMMI; that is, on how configuration management and change management activities underpin any project to improve and systematize software construction processes.
The paper was presented at CM Crossroads, the site dedicated to the study of this subject, in the forum opened to build a CM body of knowledge (CM Body of Knowledge, CMBoK). After a long technical delay, the project is starting to move. You can review the forum, and the basic wiki.

Global Services Media honors Globant

In La Nación:
The Argentine software company Globant was honored by the US trade publication Global Services Media as the breakout company in the provision of global IT services. The publication, which honors the 100 best companies in the sector and hands out distinctions in various categories, recognized Globant for providing "low-cost solutions" based on open source technology.
(...) Globant is one of the companies that, since the devaluation, have been providing IT services from Argentina to clients abroad, taking advantage of skilled labor and low costs. Although India is the world leader in this type of service, cultural and time-zone proximity to the United States and Europe turned into a good opportunity for the country.
The company was founded in 2002 by the engineers Martín Migoya, Martín Umaran, Guibert Englebienne and Néstor Nocetti, and specialized in open source technology. Today it provides IT services to renowned multinationals such as Coca-Cola, Renault, the Santander group, the British travel portal Lastminute.com and "the most important Internet search engine," according to Migoya, the company's CEO.
"This award speaks well not only of Globant; it speaks well of the country," said the executive, who recalled Argentina's advances in information technology (IT) in recent years, among them the software industry promotion law.
"The award puts Argentina on the outsourcing map," he added. He also referred to Globant's international positioning: "After a very great deal of effort, they are beginning to consider us a serious, qualified player in this industry, something no Argentine company had achieved in IT and only a few, like Arcor or Techint, achieved in other areas," Migoya said.
About the company:
In 2006 Globant had estimated revenues of 12 million dollars. It has grown 100% a year for three years and expects, according to its founders, to repeat that mark in 2007. "We aspire to be IT leaders from Latin America to the world," said Englebienne, CTO and another of the company's founders.
The company has offices in the United States and Europe. In November it arrived in Tandil, where it set up "because of the high quality of education" in the city, the partners said, and it plans to extend its physical presence to other Latin American countries.
Recall that Tandil, through its university, promotes a technology cluster whose results are plain in this case.

Sunday, March 11, 2007

Security on the iSeries

...and since we mentioned the iSeries (AKA AS/400) in connection with PHP, an "old" article by Phil Edwards helps weigh the machine's strengths and weaknesses. The question matters given PHP's well-known security problems: to what extent can they affect the AS/400? In principle, as is said of Linux, PHP is a non-native stack with its own level of problems, and the iSeries must monitor it like any other object or set of objects.
The essentials of Edwards' article:

Security is one of the AS/400’s strong points. The architecture of OS/400 makes it immune from the kind of malicious executable code which often spreads over the Net – viruses, worms and Trojan horses. The hackers who have caused havoc on Windows NT and Unix systems can’t do anything with the AS/400, even assuming they know it exists. Even on the Internet, there are no back doors into an AS/400: thanks to OS/400’s native security, every approach is blocked by user-level access control. In short, security worries can be left to the Windows crowd: AS/400 users don’t even need a security policy.
The first sentence of the previous paragraph is true: whether in the office or on the Net, OS/400 is preferable to almost any other operating system in security terms, and vastly preferable to Windows NT. Unfortunately, the rest of the paragraph contains three half-truths and a lie. The time to put your feet up and forget about security hasn’t yet come. The AS/400 is good, but it’s not quite that good.

Three half-truths: "virus-proof", "hackers don't know the AS/400", "the AS/400 has no back doors".
Viruses:

The world of Windows is infested with ever more sophisticated viruses: blocks of code which attach themselves to other files when an infected file is executed or opened. Before there were viruses, Unix administrators were familiar with worms (programs which copy themselves when executed) and Trojan horses (programs which imitate normal functionality while carrying out other tasks). All of these are peculiarly difficult to achieve under OS/400.

However, ‘difficult’ doesn’t mean ‘impossible’. OS/400 is harder to fool than Unix (which in turn is more secure than Windows NT), but techniques for bypassing its internal security measures are known. (...). To create a Trojan horse program, for instance, a malicious programmer would only need access to the CRTCMD command and authority over a product library. If a command run from a product library calls a second command with an unqualified name, the system will look for the command in the product library. A command could thus be created in the product library with the same name as a system command. The duplicate command would run every time the command was invoked by a program running from the product library; when the command was run normally, the original version would be used.

Preventing this from happening is a simple matter of restricting access to CRTCMD and CHGCMD, and perhaps also ensuring that commands are called using fully qualified names. However, the risk is real. Even the creation of a worm or a virus for OS/400 cannot be entirely ruled out, although it would be orders of magnitude more difficult than doing the same thing under Unix.
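The product-library trick is not unique to OS/400: any system that resolves an unqualified name through a search path lets whoever controls an earlier entry in that path shadow a system command. A small Python illustration of the same mechanism on a Unix-like system; the directory and command here are hypothetical, not taken from Edwards' article:

```python
# Illustration of the search-path shadowing behind the product-library
# trick: an unqualified command name is resolved through a search path,
# so a writable directory early in that path can supply an impostor.
# Hypothetical Unix-like example; this is not OS/400 code.

import os
import shutil
import stat
import tempfile

with tempfile.TemporaryDirectory() as product_lib:
    # An attacker drops an executable named like a system command
    # ("ls") into a directory they control.
    impostor = os.path.join(product_lib, "ls")
    with open(impostor, "w") as f:
        f.write("#!/bin/sh\necho impostor, not /bin/ls\n")
    os.chmod(impostor, os.stat(impostor).st_mode | stat.S_IXUSR)

    # If that directory is searched first, the unqualified name now
    # resolves to the impostor, the same effect as an unqualified
    # command resolving inside an AS/400 product library.
    search_path = product_lib + os.pathsep + os.environ.get("PATH", "")
    print(shutil.which("ls", path=search_path))       # the impostor
    print(shutil.which("/bin/ls", path=search_path))  # qualified name is safe
```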

It’s also possible for the AS/400 to store virus-infected PC documents and Domino documents containing macro viruses in the Integrated File System. The viruses will not be activated under OS/400, but they will be sent unchanged to Windows clients.

Hackers:

“Other than the default accounts that tend to use the same password as the username (which any admin worth the title will have changed), I would recommend [the AS/400] to anybody running a web server.” - ThePsyko (thepsyko@itookmyprozac.com), writing in newsgroup alt.hackers.malicious

The menace of hackers is often overstated; many security specialists regard most hackers as a nuisance rather than a serious threat to business. “If hackers do get into your system they’re more likely to want to use your machine time and storage space than steal your data. Mostly they’re joyriders rather than industrial spies,” says John Earl of PowerTech. Either way, to date hackers seem to have largely steered clear of the AS/400, as ThePsyko’s comment suggests. The web site http://www.attrition.org records 2,825 web sites ‘defaced’ by hackers between January and the end of July 2000. No attacks on AS/400 sites are recorded, compared with 1,831 sites running Windows NT, 476 running Linux and 416 running mainstream variants of Unix (including one AIX site).
However, this lack of attention in itself shouldn’t be seen as proving that the AS/400 is hackproof. “If you’re in business on the web, you aren’t exposing your system to a user here and a user there - you’re open to the world,” says IBM’s Carol Woodbury. “And you’re probably going to be attacked: there are some odd people out there who get their jollies from doing that kind of thing. Even if all your data is secure, your site could still be brought down by a denial of service attack (in which a web site is overloaded with dummy connection requests). That means lost business, the same as any other crash.”
Nor does the lack of information on successful hacks of the AS/400 mean that there have been no successful hacks. Information on IT security breaches is published by the Computer Emergency Response Team (CERT). However, as Woodbury told NEWS/400 in 1999, “We take a different tack than CERT does, which describes the problem and then says, ‘here are the patches.’ We just say, ‘here are the patches.’” Security breaches are addressed by HIPER (High-Impact Pervasive) PTFs; the problem addressed by the PTF is described in one or two lines of laconic IBM tech-speak.

Back doors:

“OS/400 is a highly secure operating system,” says Earl. “Rochester really started taking security seriously with OS/400 V3R7, and since then they’ve done an outstanding job.” Logging in to an AS/400 requires a valid user name and password; user passwords are encrypted and cannot be cracked. Once logged in, access to data is rigidly controlled by the operating system, which enforces object-level authorities and access rights. The problem is that, in practice, these two forms of access control are usually supplemented by a third, dating from the pre-V3R7 days when level 10 security was the default: the menu system, which steers users through a range of permissible actions. The more emphasis your applications place on the menu system, the more vulnerable you are to unauthorised activity.

Earl explains. “Most AS/400 applications have one user profile which owns all the objects used by the application, and one group profile for all users. That’s fine in itself; the problems start when those two are the same profile. That means that every user has access to everything: access is controlled by the menu system. The trouble is, if you’re accessing the AS/400 using FTP you won’t see those menus.”

FTP (File Transfer Protocol) is a standard protocol for transferring data over TCP/IP. It is implemented on the AS/400 as a ‘service’, a program which is started up at IPL, remains continually available in the background and can be invoked by a user session or an application program. There are three points at which control is transferred to and from FTP:

· QIBM_QTMF_SVR_LOGON (FTP Server Logon)

· QIBM_QTMF_CLIENT_REQ (FTP Client Request Validation) and

· QIBM_QTMF_SERVER_REQ (FTP Server Request Validation)

Each of these exit points represents a potential hole which needs to be plugged. The way to do this is with “exit programs”: dedicated programs, registered with WRKREGINF for use with a specific exit point, which handle the transfer of control to FTP, validating the username and the requests made. There are two reasons for taking exit programs seriously. Firstly, they give the administrator detailed, fine-grained control over FTP: programs can be written to restrict certain users to particular directories or functions, to reject login attempts from unknown IP addresses, or even to extend the capabilities of FTP by permitting controlled login by anonymous users (not permitted by the AS/400, but near-essential if you’re using FTP on a web site). Secondly, if you don’t implement an exit program somebody else may do it for you. An exit program receives parameters from the user, executes some code to process them, then passes a request on to the server - and that second step is open to abuse. The sample code printed in Phrack would enable any hacker with access to the AS/400 to run system-level commands in the guise of an attempted FTP login: the log file would record a failed login by a user called CRTDTAARA, for instance, while in fact the system was running a CRTDTAARA command under another user name.
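What such an exit program does is easiest to see as code: a hook that inspects the user, the client address and the requested operation, and answers allow or deny before the server acts. A rough Python sketch of that validation logic; the parameter names and rules are illustrative assumptions, and the real OS/400 exit-point interfaces differ:

```python
# Rough sketch of the validation an FTP exit program performs before
# control passes back to the server: vet the user, the client address
# and the requested operation. The parameter names and rules are
# illustrative; the real OS/400 exit-point formats differ.

from ipaddress import ip_address, ip_network

ALLOWED_NETS = [ip_network("10.0.0.0/8")]    # known client ranges
USER_ROOTS = {"webuser": "/web/incoming"}    # per-user directory limits
DENIED_OPS = {"DELE", "RMD", "SITE"}         # operations always refused


def validate_ftp_request(user: str, client_ip: str, op: str, path: str) -> bool:
    """Return True to let the FTP server proceed, False to reject."""
    if not any(ip_address(client_ip) in net for net in ALLOWED_NETS):
        return False                # unknown IP address: reject outright
    if op in DENIED_OPS:
        return False                # destructive operations refused
    root = USER_ROOTS.get(user)
    if root is None or not path.startswith(root):
        return False                # each user confined to a directory
    return True


assert validate_ftp_request("webuser", "10.1.2.3", "STOR", "/web/incoming/a.txt")
assert not validate_ftp_request("webuser", "192.0.2.9", "STOR", "/web/incoming/a.txt")
assert not validate_ftp_request("webuser", "10.1.2.3", "DELE", "/web/incoming/a.txt")
```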

Increased levels of complexity within the organisation also create security exposures. “A firewall – a box which sits between your server and the Internet and vets Net traffic – is a good thing to have, but it’s only effective at a single point,” warns Ian Kilpatrick of Wick Hill. “Many companies have multiple user departments using Internet connections to link up directly with different business partners – suppliers, distributors, outsourcing agencies. Addressing the security implications of all those connections requires an understanding of the whole of the company’s business. The old AS/400 network model is dissolving within the company as well: with more and more interconnected systems, there are more and more security exposures. As intranets become more pervasive, we’re starting to see companies implementing departmental firewalls. It may sound like overkill, but who do you want to protect your payroll database from – the teenage hacker outside the organisation or the disaffected employee inside it?”

Even traditional AS/400 applications are now raising security issues. “People have an ERP application, and suddenly they’re not looking at green screens any more,” says Jan Juul of PentaSafe. “They support ODBC querying, they have web-based intranet interfaces, they have print jobs routed through PCs for WYSIWYG formatting. All of those leave back doors open into the AS/400.” ODBC and JDBC enable PC users to run SQL commands against DB2/400; Client Access and even DDM open up the AS/400 to external clients. Exit point programs can be implemented for all of these access points. “You can’t rely solely on OS/400 security,” says Andy Coates of R/Utility. “Users can easily bypass the menu and application program security by using a PC connection. The problem is becoming widespread with more power users in organisations using spreadsheets and so on. Users are becoming adept at accessing the data they require; quite often this gives them access to data they should not have access to.”

It’s also worth noting that, as the references to FTP and TCP/IP suggest, OS/400 itself has changed. “As the AS/400 has progressively moved further into the world of ‘open systems’ it’s become progressively less secure,” says Juul. Nor is this progression over yet. “Future releases of WebSphere on the AS/400 are going to be based on the open source web server Apache,” says Juul. “It won’t be a simple port of Apache to the AS/400: it will be an AS/400 web server, encapsulated within the operating system. But there is a high level of knowledge about Apache in the hacker community, and having an HTTP server based on Apache has to create some additional exposures.”

The AS/400’s days as a closed, departmental network server are gone - and its status as a proprietary island of secure information with it. “Security on the AS/400 is built on the model of an internal network where everyone knows who everyone is,” comments Juul. “The design doesn’t cater for unknown users.”

Of course, Edwards is optimistic about the possibility of keeping the risk under control:
The good news is that, even on the Internet, the AS/400 is securable to a higher level than Windows NT or most variants of Unix. Ironically, the inflated fears which have encompassed the growth of e-business may have made it harder to consider how to address the real issues of security, on and off the Net. “Security isn’t that hard,” says Earl. “It just needs a bit of attention. The main thing is not to be scared of thinking about it. There are identifiable risks, and there are things you can do to minimise them: the point is to quantify the risk and the expenditure, and find the point at which the cost of taking precautions is more acceptable than the potential cost of the exposure.”
(...)

“The key requirement for information security policy is, simply, to have a policy,” says Kilpatrick. “A security policy worthy of the name needs to be detailed: ‘Don’t give away company secrets’ isn’t a policy. And it needs to be live. That means it’s up to date – those paragraphs about the secure disposal of typewriter ribbons can go – and it’s in force throughout the organisation: there’s no point formulating a security policy if it’s going to sit in a ring-binder on somebody’s shelf. Information security needs to be taken as seriously – and managed as formally – as physical security.” This point is echoed by Spencer Pratt of Internet security specialists Defcom. “As well as monitoring intrusion attempts, you need procedures for dealing with potential intrusions, including an escalation procedure. When do you talk to the press about a security breach and when do you try to keep it quiet? If your systems are hacked, do you want to prosecute or do you just want to make sure it doesn’t happen again?” Taking security seriously means treating it as an issue affecting the whole business.

So, what can you do? Focusing specifically on the AS/400, there are three main recommendations: deploy object-level security; limit non-AS/400 access; and implement a monitoring system.

A wiki on PHP on the iSeries

With PHP, IBM is taking the new route to building knowledge around a subject: turning to a wiki, so that the user community can share its experience in an area that is new to the AS/400 (later iSeries):
Welcome to Redwiki, the site for Redbooks in Wiki format!

The objective of this residency is to update an existing redbook that describes the installation, configuration, porting issues (of other PHP applications to the iSeries), performance tips, and known best practices. The goal is to add more hands-on information that has been learned from experience with the product: code examples, tips and techniques, troubleshooting information, etc.

The ITSO is conducting this residency to create a deliverable that uses Wiki technology, as it is used by sites such as Wikipedia. You can be a part of a leading-edge pilot on a new way of delivering quality technical documentation. This residency is open to customers, business partners and IBM employees.

Note that this is not a typical residency. You will be working from your own location, as your time permits. This residency is slated as a three-month project, but we do not expect that you will be working on this full time. We do, however, expect individuals who are involved in this project to make significant contributions to the Wiki. In addition, unlike a typical residency where we have only 3 to 6 residents, for this project, we are looking for a much larger number of authors - perhaps as many as 20 people - who can work part time on the project.

If you would like to participate in this project, please visit this link and register as a resident. Describe why you would like to take part, and what you feel you have to offer to the project.

Redbooks are the technical books covering almost every resource IBM offers. This is the first case in which IBM uses a wiki at this level to create knowledge (at least, as far as I know). In cases like this one, where the users belong to a relatively closed community, the quality of the contributions is almost guaranteed.
The Redbook the project intends to extend can be consulted in PDF. The wiki itself, here.
Reported by System iNetwork.

Tuesday, March 6, 2007

The administrative state of the Argentine university

Como no podía ser de otra manera, la anarquía y desgobierno de la Universidad de Buenos Aires del año 2006, se ha continuado en acusaciones y descubrimientos de abandono administrativo y corruptelas. Y, como no puede tratarse de un fenómeno aislado, asuntos parecidos han sido denunciados en la Universidad de Tucumán. Una vez más, el problema educativo es más amplio que lo que trate una ley, y su solución todavía no ha empezado en Argentina.
Los incidentes más notables:
  • El 28 de diciembre de 2006, Rubén Hallú, el nuevo rector de la universidad, presentó una denuncia penal por la rendición de fondos 2005-2006 del Hospital de Clínicas.
*Habría pagos a personas que no desempeñan efectivamente sus funciones laborales.
* Habría un manejo discrecional del presupuesto de gastos de las máquinas fotocopiadoras, sin la rendición de cuentas y la publicidad previas, garantes de todo proceso transparente.
*Habría una falta de rendición de cuentas anuales. Y en su caso, las rendiciones que se presentan adolecerían de datos fundamentales a la hora de evaluar la efectiva inversión de los montos destinados a gastos, inversión y mantenimiento de máquinas fotocopiadoras. Todo esto redundando en informes viciados, otra vez, de falta de transparencia.
*Habría falta de publicidad sobre el superávit anual o no de las explotaciones llevadas a cabo por la FUBA. Solamente a través de la publicidad, los actores de la explotación, más allá de que se identifiquen como simples alumnos y no como políticos, tienen la posibilidad de llevar a cabo desmanejos.
*Existiría una contratación irregular de empresas que llevan a cabo el mantenimiento de las máquinas fotocopiadoras. Irregularidad que se prueba a partir de una presunta vinculación entre las empresas contratadas.
*Habría gastos originados en la compra de insumos para las máquinas, que por su demasía no encuentran justificación.
*Existirían compras sin comparación de precios en el mercado. Este apartado encuentra su explicación en que la FUBA no requiere del método de licitación previa para llevar a cabo las contrataciones o compras requeridas por la explotación. Pero sí, le es menester recabar información en negocios o empresas afines, a los efectos de colectar presupuestos que demuestren la conveniencia económica de cada contratación o gasto.
"Los hechos apuntados pueden desbordar el carácter de una seria falta administrativa para configurar un eventual ilícito, que necesariamente debe ser investigado", mencionan los funcionarios en el escrito en el que describen una serie de irregularidades detectadas a través de una auditoría informática.
De acuerdo con la información difundida por el diario "La Gaceta", se constató que alumnos que aparecían como ausentes en las planillas de los profesores, eran cargados como aprobados, generalmente con un cuatro como nota, en el sistema inform tico de la facultad.
No vale la pena mencionar el tradicional inicio de clases primarias y secundarias en medio de huelgas docentes y discusiones sobre salarios, porque ya nos hemos acostumbrado a que el año escolar tendrá un mes o dos menos de los planeados, y eso especialmente en el inicio.
En fin, esto es lo que significa "el rey está desnudo", aunque todos lo declaren vestido.... Todavía está por verse cuándo esta sociedad decidirá cambiar su rumbo: no lo está si quienes estudian fraguan un título, o si defraudan una institución. Hacen falta profundos cambios en las personas, para que las expectativas sean otras.

Sunday, March 04, 2007

The quality of teaching and "customer satisfaction"

Mariano Narodowski, of the Universidad Torcuato Di Tella, discusses in Clarín a point that has gained importance in Argentine education over the last few decades, I would say since the return of democracy: "There are schools, both public and private, that believe that educating well demands an absence of conflict and parity between teachers and students."
[Despite an apparent consensus,] "quality of education" has different, mutually contradictory meanings. In fact, "quality" is not a traditional concept among educators; it comes from the jargon of managers and industrial engineers. Like any novel concept, its use tends to generate controversy and misunderstandings. What do we talk about when we talk about quality of education?
The meanings Narodowski enumerates:
The education that imparts knowledge and values:
For some, a quality education will be one that transmits certain knowledge and values without which education loses its reason for being. Naturally, the knowledge and values in question span the whole ideological, and even theological, spectrum. Even with their differences, they establish an ideal of person and of society and ask: what kind of person do we want to form? An education will be of quality if it answers that question adequately.
The education that has resources at its disposal:
For others, the quality of education will be determined by the inputs available to the educational process. When those inputs improve (teachers' salaries, the state of the buildings, library holdings, teacher training, etc.), educational quality will necessarily improve. Unlike the previous view, this one takes no interest in values and knowledge, since these will be of low quality if the inputs are too; conversely, increasing the inputs will guarantee improved results.
The education that demonstrates satisfactory results against a standard:
The third conception holds that educational quality is satisfactory performance on standardized tests. In other words, there will be good quality only when students demonstrate results by means of a test. Unlike the two previous views, this approach is interested only in effects, which in turn must be consistent: measurable, quantifiable and comparable.
And the meaning that prompts his comment:
(...) the conception that is perhaps the least promoted but surely the most used in our milieu: quality of education as customer satisfaction.
For this position, a good school will be one in which the recipients of the educational service are satisfied with the service offered. This satisfaction can be measured in surveys analogous to those used in fast-food outlets, though the most widespread indicator is the absence of conflict: customer satisfaction will be reflected in a "harmony" (perverse, as we shall see) among the participants in the educational process.
This view, which one might expect to be frequent in fee-charging schools, also affects state schools, for different reasons:
This view is not exclusive to fee-charging private schools. Conceiving of the student as a customer who must be kept satisfied is also a frequent practice in free state schools attended by impoverished sectors of the population. The client relationship is not necessarily tied to money; it is a social relation that presupposes an exchange between equals in which someone provides and someone uses a service.
Traditionally, the educational relationship was structured on an asymmetric basis in which the adult was the one responsible, the one who cared for and protected, and the one who offered a project of autonomy for each of his students. Being a teacher meant being a distinct "other", whose difference rested on teaching, understanding and caring for those in his charge.
Quality as customer satisfaction requires teachers and students to be equivalent, blurring the figure of the educator and turning him into a mere provider. What this achieves is the softening of conflict through "concession" or "dealing": educators end up allowing new realities to occur, not out of conviction but as the outcome of a negotiation.
Thus, many teachers confess that they now allow in their classrooms what only a few years ago seemed unthinkable to them, not out of sincere conviction but to avoid greater conflict. They satisfy customers. At the same time, they feel undermined by state officials, who tend to keep a suspicious distance from these conflicts: they too are part of a complex web of clienteles and of political and electoral markets.
It is true that teacher authority earned a justified bad press, from Juvenilia to the film The Wall. And it is also true that the main ideological victory of the 1976 military dictatorship was to make us believe that every form of authority is persecution, torture, death. But it is unthinkable to educate without being a different other, capable of projecting and of inviting others to join that project. Equivalence is the death of the educator.
Educational quality understood as customer satisfaction is growing in our schools, and we are failing to react seriously. It is necessary to combine the first three conceptions of quality and to take on, through public policy, what laws cannot achieve: strengthening the teacher's position, providing concrete tools so that educators can rebuild a just and trustworthy authority. Let the last word belong to the teacher, and let us all feel joy in recognizing it and responsibility in backing it.
Perhaps this conception of education, the one Narodowski questions, is among the greatest obstacles still standing in the way of education in Argentina. In a way, it was one of the keys to the crisis at the Universidad de Buenos Aires during 2006.

Ranking of programming language usage

The TIOBE Programming Community index gives an indication of the popularity of programming languages. The index is updated once a month. The ratings are based on the world-wide availability of skilled engineers, courses and third party vendors. The popular search engines Google, MSN, and Yahoo! are used to calculate the ratings. Observe that the TIOBE index is not about the best programming language or the language in which most lines of code have been written.
The index can be used to check whether your programming skills are still up to date or to make a strategic decision about what programming language should be adopted when starting to build a new software system. The definition of the TIOBE index can be found here.

Position   Position   Delta in   Programming      Ratings    Delta      Status
Feb 2007   Feb 2006   Position   Language         Feb 2007   Feb 2006
1          1                     Java             18.978%    -3.45%     A
2          2                     C                16.104%    -2.23%     A
3          3                     C++              10.768%    -0.53%     A
4          5                     PHP               8.847%    -0.07%     A
5          4                     (Visual) Basic    8.369%    -1.03%     A
6          6                     Perl              6.073%    -0.63%     A
7          8                     Python            3.566%    +0.90%     A
8          7                     C#                3.189%    -0.78%     A
9          10                    JavaScript        2.982%    +1.47%     A
10         20         +10        Ruby              2.528%    +2.12%     A
11         11                    SAS               2.326%    +1.13%     A
12         9                     Delphi            2.077%    +0.10%     A
13         12                    PL/SQL            1.628%    +0.66%     A
14         21         +7         ABAP              1.205%    +0.83%     A
15         22         +7         D                 1.205%    +0.84%     A
16         14                    Lisp/Scheme       0.722%    +0.10%     A--
17         17                    Ada               0.661%    +0.15%     B
18         13                    COBOL             0.656%    -0.08%     B
19         15                    Pascal            0.596%    +0.05%     B
20         36         +16        Transact-SQL      0.543%    +0.38%     B
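As a rough illustration of how such an index can be computed: below is a minimal sketch in Python, with invented hit counts for queries along the lines of +"<language> programming". TIOBE's real formula applies per-engine weights and corrections that are not reproduced here, so treat this only as the general idea, not TIOBE's actual algorithm.

hits = {
    "Java":   {"google": 25_000_000, "msn": 9_000_000, "yahoo": 14_000_000},
    "C":      {"google": 21_000_000, "msn": 8_000_000, "yahoo": 12_000_000},
    "Python": {"google":  5_000_000, "msn": 2_000_000, "yahoo":  3_000_000},
}

def ratings(hit_counts):
    # Sum each language's hits across all engines, then express that sum
    # as a percentage of the hits of all languages combined.
    totals = {lang: sum(per_engine.values()) for lang, per_engine in hit_counts.items()}
    grand_total = sum(totals.values())
    return {lang: 100.0 * total / grand_total for lang, total in totals.items()}

# Print languages from most to least popular, in the style of the table above.
for lang, pct in sorted(ratings(hits).items(), key=lambda kv: -kv[1]):
    print(f"{lang:12} {pct:6.3f}%")

Sorting the resulting percentages in descending order yields a listing of the same form as the table above.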

Graduates in Japan and the job market

A post by Daniel-San about university graduates in Japan and other countries of the developed world, easily extensible to many societies: new graduates cannot find work, are not paid as society expects, or must accept work outside their field. The comment originates in an article in Newsweek Japan.
Two or three points worth highlighting:
In fact, before even considering salary, one subject of debate is whether the training received at the universities reflects what companies and research institutions actually need. Keidanren itself (the Japan Federation of Economic Organizations, the most representative business body in the country) points out that nearly 20% of graduates holding a doctorate lack sufficient specialization, creativity and methodology in research work.
(...)
Companies are reluctant to hire graduates with very advanced training because they would have to pay more and assign responsibilities, when what they want is a professional who works in his specialty and its related tasks, maintains balanced conduct in human relations, and does not cost much. A young (or not so young) person with a doctorate does not always fit well into what firms require.
The problem is complex because, in Japan's case, many of the universities themselves lack the capacity to train and prepare people for what today's labor and business world requires; and on the other hand, young people who have studied hard but have little life experience, living almost in a bubble far from reality, believe that the preparation they have entitles them to a good job, good pay and a relatively secure future.
Strategic planning of university degree programs, something Jaim Etcheverry would question:
A dilemma that many industrialized countries must navigate, having produced too many university graduates without any strategic planning, as a country, of their medium- and long-term professional needs. The situation is worse in most developing countries, which keep "producing" thousands of lawyers, accountants and business administrators when what they really need are mid-level technicians, skilled workers, farmers trained in genetics, engineers, and so on, according to their economic and productive structure.
One aspect the post does not mention is the attitude of new graduates, and of the countries where they are trained, toward the possibility of starting out as entrepreneurs. Japanese society is not exactly one that leaves room for entrepreneurship: it is made up of large corporations that behave like the Roman imperial family, with their suppliers as satellites and their employees as members of a household. The same attitude can be observed in Spain, in this case drawn by the state administration: it is common to seek a civil-service post by sitting the "oposiciones" (competitive examinations), to the point that perhaps the second-largest business of the academies that prepare students, and of the publishers of study materials, is preparation for those examinations. Neither of these two paths promotes social and economic dynamism. Although starting a company is no simple task, it is the most valuable way to apply the potential acquired in higher education. This factor distinguishes societies such as the American or the Indian from others such as the Japanese or the Russian.

A history of quality

Kevin Meyer, on his Superfactory blog, openly publishes a history of quality applied to industry (manufacturing excellence). His timeline has the virtue of starting far back in history, beginning in the 16th century, and of including not only the classic builders of TQM and their successors, but also APICS, which is rarely mentioned.
His article, as of today:

Timeline of Manufacturing Excellence

This Timeline is always under construction. If you have specific events and dates you would like to see added, please contact us. Please reference the source of the information for verification purposes.



2000 -


* 2004: Shingo Prize-winning Kaikaku published by Norman Bodek, chronicling the history and personal philosophies of the key people that helped develop TPS
* 2003: Shingo Prize-winning Better Thinking, Better Results published, case study and analysis of The Wiremold Company's enterprise-wide Lean transformation.
* 2001: Toyota publishes "The Toyota Way 2001" document, which makes explicit the "respect for people" principle.

1990 -


* 1996: Lean Thinking by Womack and Jones
* 1991 - 1995: The business process re-engineering movement tried, but mostly failed, to transfer the concepts of standardized work and continuous flow to office and service processes that now constitute the great bulk of human activities.
* 1991: Relevance Lost by Tom Johnson and Robert Kaplan exposes weaknesses in manufacturing accounting systems, eventually leading to the Lean Accounting movement
* 1990: The Machine That Changed the World by Womack and Jones

1980 -


* 1988: Kaizen Institute of America holds kaizen seminars at Hartford Graduate Center (Hartford, Conn.), with TPS sessions taught by principals from Shingijutsu Co., Ltd.
* 1988: Shingo Prize for Manufacturing Excellence created by Norman Bodek and Professor Vern Buehler of Utah State University
* 1988: Shingijutsu hired by Danaher Corporation to assist in implementing TPS at Jacobs Chuck and Jacobs Vehicle Systems.
* 1988: Kaizen Institute leads the first U.S. kaizen event at Jake Brake in Connecticut
* 1988: Toyota's first wholly owned U.S. facility, Toyota Motor Manufacturing, opens in Georgetown, Kentucky
* 1988: Taiichi Ohno's Toyota Production System - Beyond Large Scale Production is published in English
* 1985 - 1989: Shingo's books on SMED, Poka Yoke, and Study of Toyota Production System from Industrial Engineering Viewpoint are published in the U.S.
* 1985: The Association for Manufacturing Excellence is officially formed from cast-off APICS members.
* 1984: Several of AME's founders barnstormed for the APICS Zero Inventory Crusade, collectively making hundreds of presentations on what is now called lean manufacturing. APICS calls for the resignation of the steering committee for violating APICS special interest group rules. The committee decides to go out on its own.
* 1984: Toyota / GM joint venture NUMMI established in U.S.
* 1984: Norman Bodek forms Productivity Press
* 1983: First broader description of TPS by an American author - Zero Inventories by Robert "Doc" Hall is published
* 1980: Under the auspices of the Detroit APICS chapter, several future founders of the Association for Manufacturing Excellence organized the first known North American conference on the Toyota Production System at Ford World Headquarters, with 500 people attending. Featured speaker was Fujio Cho, who became president of Toyota.
* 1980: Kanban: The Coming Revolution is published. It is the first book describing TPS as "JIT"

1970 -


* 1979: Several APICS members who had seen Toyota production facilities and understood the problems with MRP began to meet regularly.
* 1979: Norman Bodek forms Productivity Inc.
* 1979: First U.S. study missions to Japan to see the Toyota Production System
* 1978: Taiichi Ohno retires and becomes honorary chairman of Toyoda Auto Loom
* 1977: Nick Edwards presents a paper at the APICS conference describing the fallacies of MRP
* 1975: First English TPS handbook drafted by Sugimori, Cho, Ohno, et al.
* 1973: Oil Shock plunges Japan's economy into crisis. Only Toyota makes a profit
* 1973: Toyota - Regular supplier improvement workshops begin with top 10 suppliers

1960 -


* 1969: Start of Toyota operations management consulting division
* 1965: Toyota wins Deming Prize for Quality
* 1962: Toyota - Pull system and kanban complete internally company wide
o Average die change time 15 minutes. Single minute changeovers exist.
o 50% defect reduction from QC efforts
o Initial application of kanban with main suppliers
* 1961: Start of Toyota corporate wide TQC program
* 1960: Deming receives the Japanese "Second Order of the Sacred Treasures" award, with the accompanying citation stating that the people of Japan attribute the rebirth of their industry to his work.

1950 -


* 1957: Basic Andon system initiated with lights
* 1956: Shigeo Shingo begins regular visits to teach "P-Course"
* 1951: J.M. Juran publishes his seminal work The Quality Control Handbook
* 1951 - 1955: Further refinements to the basic TPS system by Ohno
o Aspects of visual control / 4S
o Start of TWI management training programs (JI, JR, JM)
o Creative suggestion system
o Reduction of batch sizes and changeover time
o Purchase of rapid changeover equipment from Danley Corp.
o Kanban implementation
o Production leveling mixed assembly
* 1950: Deming invited to Japan to assist with the Japanese 1951 census. He then gives the first of a dozen lectures on statistical quality control, emphasizing to Japanese management that improving quality can reduce expenses and improve productivity.
* 1950: Toyota financial crisis and labor dispute. Ends with 2146 people losing work. Kiichiro Toyoda steps down as President

1940 -


* 1947 - 1949: Ohno promoted to machine shop manager. Area designated model shop.
o Rearrangement of machines from process flow to product flow
o End of one man one machine. Start of multi process handling
o Detail study of individual process and cycle times
o Time study and motion analysis
o Elimination of "waste" concept
o Reduction in work in process inventory
o In-process inspection by workers
o Line stop authority to workers
o Major component sections (Denso, Aishin etc.) of Toyota divested
* 1946: Ford adopts GM management style and abandons lean manufacturing
* 1943: Edsel Ford dies
* 1943: Taiichi Ohno transfers from Toyoda Auto Loom to Toyota Motor Corporation
* 1943: Ford completes construction of the Willow Run bomber plant, which reaches a peak of one B-24 bomber per hour.
* 1940: Deming develops statistical sampling methods for the 1940 census, and then teaches statistical process control techniques to workers engaged in wartime production.
* 1940: Consolidated Aircraft builds one B-24 bomber per day. Ford's Charles Sorensen visits to see if Ford's methods can improve on that number.

1930 -


* 1939: Walter Shewhart publishes Statistical Methods from the Viewpoint of Quality Control. This book introduces his notion of the Shewhart improvement cycle Plan-Do-Study-Act. In the 1950's his colleague W. Edwards Deming alters the term slightly to become the Plan-Do-Check-Act cycle
* 1938: Just-in-time concept established at Koromo / Honsha plant by Kiichiro Toyoda. JIT was later severely disrupted by World War II.
* 1937: The German aircraft industry had pioneered takt time as a way to synchronize aircraft final assembly in which airplane fuselages were moved ahead in unison throughout final assembly at a precise measure (takt) of time. (Mitsubishi had a technical relationship with the German companies and transferred this method back to Japan where Toyota, located nearby in Aichi Prefecture, heard about it and adopted it.) A short worked example of the takt arithmetic follows this decade's entries.
* 1937: Toyota Motor Corporation established. Kiichiro Toyoda President
* 1937: J.M. Juran conceptualizes the overall Pareto Principle and emphasizes the importance of sorting out the vital few from the trivial many. He attributes his insight to the Italian economist Vilfredo Pareto. Later the term is called the 80/20 rule.
* 1933: Automobile department established in Toyoda Auto Loom
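Since takt time drives much of what follows in this timeline, here is a minimal sketch of the arithmetic behind it. The shift length and demand figures are invented for illustration:

# Takt time: available production time divided by customer demand
# over that period. All figures here are assumed, not from the timeline.
available_minutes = 8 * 60   # one 480-minute shift (assumed)
daily_demand = 60            # units the customer requires per day (assumed)

takt_minutes = available_minutes / daily_demand
print(f"Takt time: {takt_minutes:.1f} minutes per unit")   # -> 8.0

In other words, to meet demand exactly, one finished unit must leave final assembly every 8 minutes.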

1920 -


* 1929: Sakichi Toyoda sells foreign rights to loom and Kiichiro Toyoda visits Ford and European companies to learn the automotive business
* 1928: Ford's River Rouge plant completed, becoming the largest assembly plant in the world with over 100,000 employees.
* 1926: Henry Ford publishes Today and Tomorrow
* 1924: Walter Shewhart launches the modern study of process control through the invention of the control chart
* 1924: Sakichi creates the auto loom

1910 -


* 1914: Ford creates the first moving assembly line, reducing chassis assembly time from over 12 hours to less than 3 hours.
* 1912: The Ford production system based on the principles of "accuracy, flow and precision" extends to assembly.
* 1911: Sakichi Toyoda visits U.S. and sees Model T for the first time
* 1910 - 1912: Ford brought many strands of thinking together with advances in cutting tools, a leap in gauging technology, innovative machining practices, and newly-developed hardened metals. Continuous flow of parts through machining and fabrication of parts which consistently fit perfectly in assembly was possible. This was the heart of Ford's manufacturing breakthrough.
* 1910: Ford moves into Highland Park - the "Birthplace of Lean Manufacturing"

1900 -


* 1908: Ford introduces the Model T
* 1906: Italian economist Vilfredo Pareto creates a mathematical formula to describe the unequal distribution of wealth in Italy. He notices that 80% of the wealth is in the hands of 20% of the population. A short numeric check of this rule follows this decade's entries.
* 1905: Frank and Lillian Gilbreth investigate the notion of motion economy in the workplace. Studying the motions in work such as bricklaying, they develop a system of 18 basic elements that can depict basic motion.
* 1902: Jidoka concept established by Sakichi Toyoda
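As an aside on the mathematics: assuming a Pareto Type I wealth distribution with tail exponent alpha, the share of total wealth held by the richest fraction p of the population is p ** ((alpha - 1) / alpha). The short check below is a sketch under that assumption, not something from Meyer's timeline; it recovers the alpha (about 1.16) for which the top 20% hold exactly 80%:

import math

# Solve 0.2 ** ((alpha - 1) / alpha) == 0.8 for alpha.
alpha = 1 / (1 - math.log(0.8) / math.log(0.2))
print(f"alpha behind the 80/20 rule: {alpha:.3f}")         # about 1.161

share_top_20 = 0.2 ** ((alpha - 1) / alpha)
print(f"wealth share of the top 20%: {share_top_20:.0%}")  # 80%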

1890 -


* 1890: Sakichi Toyoda invents a wooden handloom

1850 -


* 1850: All of the American armories were making standardized metal parts for standardized weapons, but only with enormous amounts of handwork to get each part to its correct specification. This was because the machine tools of that era could not work on hardened metal.

1820 -


* 1822: Thomas Blanchard at the Springfield Armory in the U.S. had devised a set of 14 machines and laid them out in a cellular arrangement that made it possible to make more complex shapes like gunstocks for rifles. A block of wood was placed in the first machine, the lever was thrown, and the water-powered machine automatically removed some of the wood using a profile tracer on a reference piece. What this meant was really quite remarkable: The 14 machines could make a completed item with no human labor for processing and in single piece flow as the items were moved ahead from machine to machine one at a time.

1800 -


* 1807: Marc Brunel in England devised equipment for making simple wooden items like rope blocks for the Royal Navy using 22 kinds of machines that produced identical items in process sequence one at a time.

1790 -


* 1799: Whitney perfects the concept of interchangeable parts when he takes a contract from the U.S. Army for the manufacture of 10,000 muskets at the low price of $13.40 each.

1760 -


* 1760: French general Jean-Baptiste de Gribeauval had grasped the significance of standardized designs and interchangeable parts to facilitate battlefield repairs.

1570 -


* 1574: King Henry III of France watches the Venice Arsenal build complete galley ships in less than an hour using continuous flow processes