Andy Oram

Andy Oram is an editor at O'Reilly Media. An employee of the company since 1992, Andy currently specializes in open source technologies and software engineering. His work for O'Reilly includes the first books ever released by a U.S. publisher on Linux, the 2001 title Peer-to-Peer, and the 2007 best-seller Beautiful Code.

Health IT is a growth area for programmers

New report covers areas of innovation and their difficulties

O’Reilly recently released a report I wrote called The Information Technology Fix for Health: Barriers and Pathways to the Use of Information Technology for Better Health Care. Along with our book Hacking Healthcare, I hope this report helps programmers who are curious about Health IT see what they need to learn and what they in turn can contribute to the field.

Computers in health care are a potentially lucrative domain, to be sure, given a system through which $2.8 trillion, or $8,915 per person, passes each year in the US alone. Interest by venture capitalists ebbs and flows, but the impetus to creative technological hacking is strong, as shown by the large number of challenges run by governments, pharmaceutical companies, insurers, and others.

Some things you should consider doing include:

Join open source projects 

Numerous projects to collect and process health data are being conducted as free software; find one that raises your heartbeat and contribute. For instance, the most respected health care system in the country, VistA from the Department of Veterans Affairs, has new leadership in OSEHRA, which is trying to create a community of vendors and volunteers. You don’t need to understand the oddities of the MUMPS language on which VistA is based in order to contribute, although I believe some knowledge of the underlying database would be useful. But there are plenty of other projects too, such as the OpenMRS electronic record system and the projects that cooperate under the aegis of Open Health Tools.

Court Prods FCC in Unexpected Direction in This Week’s Verizon Ruling

Network neutrality was on the retreat shortly after the Telecom Act of 1996

A court ruling this past Tuesday on FCC “network neutrality” regulation closes and opens a few paths in a three-way chess game that has been going on for years between the US Court of Appeals, the FCC, and the major Internet service providers. (Four-way if you include Congress, and five-way if you include big Internet users such as Google—so our chess game is coming closer to Chinese Checkers at this point.)

A lot of bloggers, and even news headlines, careened into histrionics (“Net neutrality is dead. Bow to Comcast and Verizon, your overlords”). The Free Press, although oversimplifying the impact, did correctly link the ruling to what they and many other network neutrality supporters consider the original sin of FCC rulings: eviscerating the common carrier regulation of broadband providers.

Even better, many commenters noted the ambiguities and double messages in the ruling. Unlike a famous earlier ruling on Comcast regulation, this week’s court ruling spends a good deal of time affirming the FCC’s right to regulate Internet providers. Notably, pp. 35-36 essentially confirm the value and validity of network neutrality (in the form of promoting innovation at the edges by placing no restraints on transmissions).

The new stage of system monitoring is better integrated

Current tools make collection and visualization easier but don't reduce work

New tools are raining down on system administrators these days, attacking the “monitoring sucks” theme that was pervasive just a year ago. The new tools–both open source and commercial–may be more flexible and lightweight than earlier ones, as well as more suited for the kaleidoscopic churn of servers in the cloud, making it easier to log events and visualize them. But I look for more: a new level of data integration. What if the monitoring tools for different components could send messages to each other and take over from the administrator the job of tracing causes for events?
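
To make the idea concrete, here is a purely illustrative sketch (not the API of any existing monitoring tool) of monitors for different components publishing events onto a shared bus and correlating them, so the system proposes a probable cause instead of leaving the detective work to the administrator:

```typescript
// Illustrative sketch only: a hypothetical event bus on which monitors
// for different components exchange events and correlate them.
type MonitorEvent = { source: string; kind: string; time: number };

class EventBus {
  private subscribers: Array<(e: MonitorEvent) => void> = [];
  subscribe(handler: (e: MonitorEvent) => void) { this.subscribers.push(handler); }
  publish(event: MonitorEvent) { this.subscribers.forEach(h => h(event)); }
}

const bus = new EventBus();
const recent: MonitorEvent[] = [];
bus.subscribe(e => recent.push(e));

// A database monitor that, on seeing slow queries, looks for a disk alert
// in the preceding minute and reports a probable cause automatically.
bus.subscribe(e => {
  if (e.kind === "slow-queries") {
    const cause = recent.find(
      prev => prev.source === "disk" && prev.kind === "io-saturation" &&
              e.time - prev.time < 60_000);
    console.log(cause
      ? `Slow queries on ${e.source}: probable cause is disk I/O saturation`
      : `Slow queries on ${e.source}: no correlated event found`);
  }
});

bus.publish({ source: "disk", kind: "io-saturation", time: Date.now() - 30_000 });
bus.publish({ source: "db1", kind: "slow-queries", time: Date.now() });
```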

OpenStack release offers more flexibility and aids to performance

The Havana release features metering and orchestration

I talked this week to Jonathan Bryce and Mark Collier of OpenStack to look at the motivations behind the enhancements in the Havana release announced today. We focused on the main event–official support for the Ceilometer metering/monitoring project and the Heat orchestration project–but covered a few small bullet items as well.

Open Source convention considers situational awareness in cars, and more

A report from OSCon

Every conference draws people looking to make contacts, but the Open Source convention also inspires them with content. I had one friend withdraw from an important business meeting (sending an associate instead) in order to attend a tutorial.

Lots of sessions and tutorials had to turn away attendees. This was largely fallout from the awkward distribution of seats in the Oregon Convention Center: there are just half a dozen ballroom-sized spaces, forcing the remaining sessions into smaller rooms that are more appropriate for casual meetings of a few dozen people. When the conference organizers measure the popularity of the sessions, I suggest that any session at or near capacity have its attendance counted as infinity.

More than 3,900 people registered for OSCon 2013, and a large contingent kept attending sessions all the way through Friday.

Community Leadership Summit tracks the forces that spread ideas

Inclusivity and recruitment among the themes at the unconference

American businesses, along with many others around the world, hustle to find enough programmers and computing staff. The gap widens precariously between the number of job openings and the number of students graduating with the necessary skills. And yet, at the same time, we seem to drown in an overabundance of software packages. If you want JavaScript frameworks, large-scale data stores, bulk system configuration tools, mesh networking protocols, or even a new functional language, you have almost too many choices. So the shortage of programmers does not apply when people offer their code to their colleagues across the globe.

Open source software’s ever-expanding options show a programming culture that is proud of its accomplishments and eager to explore untapped potential. But above all, they reveal a desire to work together on problems. The existence of open source packages shows the strength of community.

The Community Leadership Summit (CLS), which started four years ago at the Open Source convention and has been held on the weekend preceding it ever since, drew more attendees than ever this year (I estimate some 200 at the opening plenary), and a large fraction of them actually work as community managers. The field has matured in other ways as well. At early conferences, people expressed aspirations and complained of problems; this year, those problems are finding solutions.

Survey on the Future of Open Source, and Lessons from the Past

Quality and security drive adoption, but community is rising fast

I recently talked to two managers of Black Duck, the first company formed to help organizations deal with the licensing issues involved in adopting open source software. With Tim Yeaton, President and CEO, and Peter Vescuso, Executive Vice President of Marketing and Business Development, I discussed the seventh Future of Open Source survey, from which I’ll post a few interesting insights later. But you can look at the slides for yourself, so this article will focus instead on some of the topics we talked about in our interview. While I cite some ideas from Yeaton and Vescuso, many of the observations below are purely my own.

The spur to collaboration

One theme in the slides is the formation of consortia that develop software for entire industries. One recent example everybody knows about is OpenStack, but many industries have their own impressive collaboration projects, such as GENIVI in the auto industry.

What brings competitors together to collaborate? In the case of GENIVI, it’s the impossibility of any single company meeting consumer demand through its own efforts. Car companies typically take five years to put a design out to market, but customers are used to product releases more like those of cell phones, where you can find something enticingly new every six months. In addition, the range of useful technologies—Bluetooth, etc.—is so big that a company has to become expert at everything at once. Meanwhile, according to Vescuso, the average high-end car contains more than 100 million lines of code. So the pace and complexity of progress are driving the auto industry to work together.

All too often, the main force uniting competitors is the fear of another vendor and the realization that they can never beat a dominant vendor on its own turf. Open source becomes a way of changing the rules out from under the dominant player. OpenStack, for instance, took on VMware in the virtualization space and Amazon.com in the IaaS space. Android attracted phone manufacturers and telephone companies as a reaction to the iPhone.

A valuable lesson can be learned from the history of the Open Software Foundation, which was formed in reaction to an agreement between Sun and AT&T. In the late 1980s, Sun had become the dominant vendor of Unix, which was still being maintained by AT&T. Their combination panicked vendors such as Digital Equipment Corporation and Apollo Computer (you can already get a sense of how much good OSF did them), who promised to create a single, unified standard that would give customers increased functionality and more competition.

The name Open Software Foundation was deceptive, because it was never open. Instead, it was a shared repository into which various companies dumped bad code so they could cynically claim to be interoperable while continuing to compete against each other in the usual way. It soon ceased to exist in its planned form, but did survive in a fashion by merging with X/Open to become the Open Group, an organization of some significance because it maintains the X Window System. Various flavors of BSD failed to dislodge the proprietary Unix vendors, probably because each BSD team did its work in a fairly traditional, closed fashion. It remained up to Linux, a truly open project, to unify the Unix community and ultimately replace the closed Sun/AT&T partnership.

Collaboration can be driven by many things, therefore, but it usually takes place in one of two fashions. In the first, somebody throws out into the field some open source code that everybody likes, as Rackspace and NASA did to launch OpenStack, or IBM did to launch Eclipse. Less common is the GENIVI model, in which companies realize they need to collaborate to compete and then start a project.

A bigger pie for all

The first thing on most companies’ minds when they adopt open source is to improve interoperability and defend themselves against lock-in by vendors. The Future of Open Source survey indicates that the top reasons for choosing open source are its quality (slide 13) and security (slide 15). This is excellent news because it shows that the misconceptions about open source are shattering, and the arguments by proprietary vendors that they can ensure better quality and security will increasingly be seen as hollow.

Building native apps from JavaScript using Appcelerator Titanium

An interview with John Anderson

In this interview, the author of Appcelerator Titanium: Up and Running describes how Titanium can be used to generate native mobile apps from JavaScript code. He distinguishes the Titanium platform from native API programming and from other popular JavaScript platforms for mobile devices. We look at the way Titanium exploits the expressiveness and flexibility of JavaScript, and some of the directions that the Appcelerator company is taking Titanium.
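
For a flavor of what this looks like in practice, here is a minimal sketch of the classic Titanium “hello world.” The Ti.UI.createWindow and Ti.UI.createLabel calls belong to the Titanium API; the property values are placeholders, and the declare line merely stands in for the global object the Titanium runtime provides (Titanium apps are ordinarily written in plain JavaScript):

```typescript
// Minimal sketch of a Titanium app: the JavaScript calls below are turned
// into native UI objects on iOS or Android by the Titanium runtime.
declare const Ti: any; // Titanium's global namespace, supplied at runtime

const win = Ti.UI.createWindow({ backgroundColor: "#ffffff" });
const label = Ti.UI.createLabel({
  text: "Hello from JavaScript", // placeholder text
  color: "#333333",
});
win.add(label);
win.open();
```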

Designing resilient communities

Establishing an effective organization for large-scale growth

In the open source and free software movement, we always exalt community, and say the people coding and supporting the software are more valuable than the software itself. Few communities have planned and philosophized as much about community-building as ZeroMQ. In the following posting, Pieter Hintjens quotes from his book ZeroMQ, talking about how he designed the community that works on this messaging library.

How to Make Really Large Architectures (excerpted from ZeroMQ by Pieter Hintjens)

There are, it has been said (at least by people reading this sentence out loud), two ways to make really large-scale software. Option One is to throw massive amounts of money and problems at empires of smart people, and hope that what emerges is not yet another career killer. If you’re very lucky and are building on lots of experience, have kept your teams solid, and are not aiming for technical brilliance, and are furthermore incredibly lucky, it works.

But gambling with hundreds of millions of others’ money isn’t for everyone. For the rest of us who want to build large-scale software, there’s Option Two, which is open source, and more specifically, free software. If you’re asking how the choice of software license is relevant to the scale of the software you build, that’s the right question.

The brilliant and visionary Eben Moglen once said, roughly, that a free software license is the contract on which a community builds. When I heard this, about ten years ago, the idea came to me—Can we deliberately grow free software communities?

LISA mixes the ancient and modern: report from USENIX system administration conference

System administrators try to maintain reliability and other virtues while adopting cost-cutting innovations

I came to LISA, the classic USENIX conference, to find out this year who was using such advanced techniques as cloud computing, continuous integration, non-relational databases, and IPv6. I found lots of evidence of those technologies in action, but also had the bracing experience of getting stuck in a talk with dozens of Solaris fans.

Such is the confluence of old and new at LISA. I also heard of the continued relevance of magnetic tape–its storage costs are orders of magnitude below those of disks–and of new developments on NFS. Think of NFS as a protocol, not a filesystem: it can now connect many different filesystems, including the favorites of modern distributed system users.

LISA, and the USENIX organization that valiantly unveils it each year, are communities at least as resilient as the systems their adherents spend their lives keeping humming. Familiar speakers return each year. Members crowd a conference room in the evening to pepper the staff with questions about organizational issues. Attendees exchange their t-shirts for tuxes to attend a three-hour reception aboard a boat on the San Diego harbor, which this time was experiencing unseasonably brisk weather. (Full disclosure: I skipped the reception and wrote this article instead.) Let no one claim that computer administrators are anti-social.

Again in the spirit of full disclosure, let me admit that I perform several key operations on a Solaris system. When it goes away (which someday it will), I’ll have to alter some workflows.

The continued resilience of LISA

Conferences, like books, have a hard go of it in the age of instant online information. I wasn’t around in the days when people would attend conferences to exchange magnetic tapes with their free software, but I remember the days when companies would plan their releases to occur on the first day of a conference and would make major announcements there. The tradition of using conferences to propel technical innovation is not dead; for instance, OpenStack was announced at an O’Reilly Open Source convention.

But as pointed out by Thomas Limoncelli, an O’Reilly author (Time Management for System Administrators) and a very popular LISA speaker, the Internet has altered the equation for product announcements in two profound ways. First of all, companies and open source projects can achieve visibility in other ways, without leveraging conferences. Second, and more subtly, the philosophy of “release early, release often” launches new features multiple times a year and reduces the impact of major versions. The conferences need a different justification.

Limoncelli says that LISA has survived by becoming known as the place where you can get training that you can get nowhere else. “You can learn about a tool from the person who created the tool,” he says. Indeed, at the BOFs it was impressive to hear the creator of a major open source tool reveal his plans for a major overhaul that would permit plugin modules. It was sobering, though, to hear him complain about a lack of funds to do the job, and discuss with the audience some options for getting financial support.

LISA is not only a conference for the recognized stars of computing, but also a place to show off students who can create a complete user administration interface in their spare time, or design a generalized extension of common Unix tools (grep, diff, and so forth) that works on structured blocks of text instead of individual lines.

Another long-time attendee told me that companies don’t expect anyone here to whip out a checkbook in the exhibition hall, but they still come. They have a valuable chance at LISA to talk to people who don’t have direct purchasing authority but possess the technical expertise to explain to their bosses the importance of new products. LISA is also a place where people can delve as deep as they please into technical discussions of products.

I noticed good attendance at vendor-sponsored Bird-of-a-Feather sessions, even those lacking beer. For instance, two Ceph staff signed up for a BOF at 10 in the evening, and were surprised to see over 30 attendees. It was in my mind a perfect BOF. The audience talked more than the speakers, and the speakers asked questions as well as delivering answers.

But many BOFs didn’t fit the casual format I used to know. Often, the leader turned up with a full set of slides and took up a full hour going through a list of new features. There were still audience comments, but no more than at a conference session.

Memorable keynotes

One undeniable highlight of LISA was the keynote by Internet pioneer Vint Cerf. After years in Washington, DC, Cerf took visible pleasure in geeking out with people who could understand the technical implications of the movements he likes to track. His talk ranged from the depth of his wine cellar (which he is gradually outfitting with sensors for quality and security) to interplanetary travel.

The early part of his talk danced over general topics that I think were already adequately understood by his audience, such as the value of DNSSEC. But he often raised useful issues for further consideration, such as who will manage the billions of devices that will be attached to the Internet over the next few years. It can be useful to delegate read access and even write access (to change device state) to a third party when the device owner is unavailable. In trying to imagine a model for sets of devices, Cerf suggested the familiar Internet concept of an autonomous system, which obviously has scaled well and allowed us to distinguish routers running different protocols.

The smart grid (for electricity) is another concern of Cerf’s. While he acknowledged known issues of security and privacy, he suggested that the biggest problem will be the classic problem of coordinated distributed systems. In an environment where individual homes come onto and drop off the grid, adding energy to it as well as drawing energy from it, it will be hard to predict what people need and to produce just the right amount at any time. One strategy involves microgrids: letting neighborhoods manage their own energy needs so that failures don’t cascade through a large geographic area.

Cerf did not fail to warn us of the current stumbling efforts in the UN to institute more governance for the Internet. He acknowledged that abuse of the Internet is a problem, but said the ITU needs an “excuse to continue” as radio, TV, etc. migrate to the Internet and the ITU’s standards see decreasing relevance.

Cerf also touted the Digital Vellum project for the preservation of data and software. He suggested that we need a legal framework that would require software developers to provide enough information for people to continue getting access to their own documents as old formats and software are replaced. “If we don’t do this,” he warned, “our 22nd-century descendants won’t know much about us.”

Talking about OpenFlow and Software Defined Networking, he found that the most exciting opportunity is to let us use content, in addition to or instead of addresses, to direct network traffic.

Another fine keynote was delivered by Matt Blaze on a project he and colleagues conducted to assess the security of the P25 mobile systems used everywhere by security forces, including local police and fire departments, soldiers in the field, FBI and CIA staff conducting surveillance, and executive bodyguards. Ironically, there are so many problems with these communication systems that the talk was disappointing.

I should in no way diminish the intelligence and care invested by these researchers from the University of Pennsylvania. It’s just that the history of P25 makes security lapses seem inevitable. Because it was old, and was designed to accommodate devices that were even older, it failed to implement basic technologies such as asymmetric encryption that we now take for granted. Furthermore, most of the users of these devices are more concerned with getting messages to their intended destinations (so that personnel can respond to an emergency) than with preventing potential enemies from gaining access. Putting all this together, instead of saying “What impressive research,” we tend to say, “What else would you expect?”

Random insights

Attendees certainly had their choice of virtualization and cloud solutions at the conference. A very basic introduction to OpenStack was offered, along with another by developers of CloudStack. Although the latter is older and more settled, it is losing the battle for mindshare. One developer explained that CloudStack has a smaller scope than OpenStack, because CloudStack is focused on high-computing environments. However, he claimed, CloudStack works on really huge deployments where he hasn’t seen other successful solutions. Yet another open source virtualization platform presented was Google’s Ganeti.

I also attended talks and had chats with developers working on the latest generation of data stores: massive distributed file systems like Hadoop’s HDFS, and high-performance tools such as HBase and Impala for accessing the data it stores. There seems to be an accordion effect in data stores: developers start with simple flat or key-value structures. Then they find the need over time–depending on their particular applications–for more hierarchy or delimited data, and either make their data stores more heavyweight or jerry-rig the structure through conventions such as defining fields for certain purposes. Finally, we’re back at something mimicking the features of a relational database, and someone rebels and starts another bare-bones project.
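
As a purely illustrative example of that “conventions” stage (not drawn from any particular project), structure tends to creep into a flat key-value store through key-naming schemes like this sketch:

```typescript
// Illustrative only: hierarchy jerry-rigged onto a flat key-value store
// by convention, e.g. "entity:id:field" keys.
const store = new Map<string, string>();

store.set("user:42:name", "Ada");
store.set("user:42:email", "ada@example.com");
store.set("user:42:orders", "1001,1007"); // delimited data squeezed into a value

// Reassembling a "record" means knowing the convention, not asking the store.
function getUser(id: number): Record<string, string> {
  const user: Record<string, string> = {};
  for (const [key, value] of store) {
    const [entity, keyId, field] = key.split(":");
    if (entity === "user" && keyId === String(id)) user[field] = value;
  }
  return user;
}

console.log(getUser(42)); // { name: "Ada", email: "ada@example.com", orders: "1001,1007" }
```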

One such developer told me he hoped his project never turns into a behemoth like CORBA or (lamentably) what WS-* specifications seem to have wrought.

CORBA is universally recognized as dead–perhaps stillborn, because I never heard of major systems deployed in production. In fact, I never knew of an implementation that caught up with the constant new layers of complexity thrown on by the standards committee.

In contrast, the WS-* specifications teeter on the edge of acceptability, as a number of organizations swear by them.

I pointed out to my colleague that most modern cloud or PC systems are unlikely to suffer from the weight of CORBA or WS-*, because the latter two systems were created for environments without trust. They were meant to tie together organizations with conflicting goals, and were designed by consortia of large vendors jockeying for market share. For both of these reasons, they have to negotiate all sorts of parameters and add many assurances to every communication.

Recently we’ve seen an increase in interest in functional programming. It occurred to me this week that many aspects of functional programming go nicely with virtualization and the cloud. When you write code with no side effects and no global state, you can recover more easily when instances of your servers disappear. It’s fascinating to see how technologies coming from many different places push each other forward–and sometimes hold each other back.
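
A minimal sketch of why that is, assuming a hypothetical cart-pricing handler: because the function below depends only on its inputs and touches no global state, any freshly started instance returns the same answer, so a request can simply be retried when the server handling it disappears:

```typescript
// Pure, stateless request handler: output depends only on the input,
// so any server instance (including one spun up after a crash) can serve it.
interface CartItem { price: number; quantity: number; }
interface Cart { items: CartItem[]; taxRate: number; }

function cartTotal(cart: Cart): number {
  const subtotal = cart.items.reduce((sum, item) => sum + item.price * item.quantity, 0);
  return subtotal * (1 + cart.taxRate);
}

// A retried request after an instance disappears yields the identical result.
const cart: Cart = { items: [{ price: 9.5, quantity: 2 }], taxRate: 0.08 };
console.log(cartTotal(cart)); // 20.52
```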
