Here are a couple of Big Questions that may take generations to answer
I spend a lot of time on this blog focusing on the very short-term issues in mobile. Is Apple better than Android? Will Blackberry survive? What’s the best strategy in Candy Crush? But sometimes you need to pull up to 30,000 feet and look at some of the bigger questions, such as:
What are the real long-term health effects of cell phones? Wearable mobile technology has only been around for a few decades, and in true widespread use for less than 10. Are there health risks to having an RF transmitter that close to your head for long periods of time? More importantly, are there effects on offspring of carrying a two-watt transmitter in close proximity to your reproductive organs for 18 hours a day? This is even more significant for women, for whom the effect would be cumulative from birth, since eggs are carried for a woman’s entire lifetime. Short-term studies have shown mixed results, but lifetime exposure hazards are hard to gauge when the technology itself is so new. We really didn’t understand the cost to society of lead in our gasoline until half a century after its introduction. A decade of data on cell phones is unlikely to hold all the answers to the scope of the potential problems.
The Jawbone UP shows the promise available in all kinds of wearable sensors.
In a recent conversation, I described my phone as “everything that Compaq marketing promised the iPAQ was going to be.” It was the first device I really carried around and used as an extension of my normal computing activities. Of course, everything I did on the iPAQ can be done much more easily on a smartphone these days, so my iPAQ sits in a closet, hoping that one day I might notice and run Linux on it.
In the decade and a half since the iPAQ hit the market, battery capacity has improved and power consumption has gone down for many types of computing devices. In the Wi-Fi arena, we’ve turned phones into sensors to track motion throughout public spaces, and, in essence, “outsourced” the sensor to individual customers.
Phones, however, are relatively large devices, and the I/O capabilities of the phone aren’t needed in most sensor operations. A smartphone today can measure motion and acceleration, and even position through GPS. In many cases, though, a display isn’t needed on the sensor itself, and the data to be collected might call for another type of sensor entirely. Many inexpensive sensors are available today to measure temperature, humidity, or even air quality. By moving the I/O from the sensor onto a centralized device, the sensor’s battery power can be devoted almost entirely to collecting data.
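To make that split concrete, here is a minimal sketch, in TypeScript, of a headless sensor node: it does nothing but sample and transmit, leaving all display and user interaction to the central collector. The node name, collector URL, reading shape, and sampling interval are all hypothetical stand-ins, not any particular product’s API.

```typescript
// Hypothetical sketch: a headless sensor node that only samples and
// transmits; all display and user I/O lives on the central collector.

interface Reading {
  sensorId: string;
  kind: "temperature" | "humidity";
  value: number;
  takenAt: number; // Unix epoch milliseconds
}

// Stand-in for a real driver reading a cheap temperature sensor.
function sampleTemperature(): number {
  return 20 + Math.random() * 5; // fake reading in degrees C
}

async function reportOnce(collectorUrl: string): Promise<void> {
  const reading: Reading = {
    sensorId: "node-42", // hypothetical node name
    kind: "temperature",
    value: sampleTemperature(),
    takenAt: Date.now(),
  };
  // One small POST per sample; no screen, no GPS, no rich UI,
  // so nearly all of the battery budget goes to sensing.
  await fetch(collectorUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(reading),
  });
}

// Sample every five minutes (interval chosen arbitrarily).
setInterval(() => {
  reportOnce("http://collector.local/readings").catch(console.error);
}, 5 * 60 * 1000);
```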
Perl’s flexibility helps you avoid writing superfluous code.
The answer to this simple but somehow controversial question is an emphatic yes! Unfortunately, there is a lot of bad Perl out there, owing to Perl’s history as the get-things-done language of the ’90s. It is easy for a newcomer to feel overwhelmed by such examples.
One can avoid that feeling by learning only from Perl that doesn’t read like gibberish.
I decided to learn Perl a little late. Or maybe just at the right time: I had all the tools to learn good habits right from the get-go.
Untangling code with flow-based programming
That’s why we live in this world where we follow this one particular [von Neumann] architecture and all the alternatives were squashed… Turing gave us this very powerful one-dimensional model, von Neumann made it into this two-dimensional address matrix, and why are we still stuck in that world? We’re fully capable of moving on to the next generation… that becomes fully three-dimensional. Why stay in this von Neumann matrix?
Dyson suggested a more biological, template-based approach, but I wasn’t sure at the time that we were as far from three dimensions as Dyson thought. Distributed computing with separate memory spaces can already offer an additional dimension, though most of us are not normally great at using it. (I suspect Dyson would disagree with my interpretation.)
Companies that specialize in scaling horizontally—Google, Facebook, and many others—already seem to have multiple dimensions running more or less smoothly. While we tend to think of that work as only applying to specialized cases involving many thousands of simultaneous users, that extra dimension can help make computing more efficient at practically any scale above a single processor core.
Unfortunately, we’ve trained ourselves very well in the von Neumann model—a flow of instructions through a processor working on a shared address space. There are many variations in pipelines, protections for memory, and so on, but we’ve centered our programming models on creating processes that communicate with each other. The program is the center of our computing universe because it must handle all of these manipulations directly.
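To make the contrast concrete, here is a minimal flow-based sketch in TypeScript: small, independent components connected by streams of data, with the “program” reduced to the wiring between them. The component names (source, parse, runningSum) and the pipeline itself are my own illustration, not any particular flow-based runtime.

```typescript
// Minimal flow-based sketch: independent components connected by
// streams of data, rather than one program driving shared memory.

// A "component" is just an async transformer over a stream.
type Component<In, Out> = (input: AsyncIterable<In>) => AsyncIterable<Out>;

// Source component: emits raw lines of input.
async function* source() {
  for (const line of ["3", "1", "4", "oops", "1", "5"]) yield line;
}

// Component: parse lines into numbers, dropping bad input.
const parse: Component<string, number> = async function* (input) {
  for await (const line of input) {
    const n = Number(line);
    if (!Number.isNaN(n)) yield n;
  }
};

// Component: emit a running sum of everything seen so far.
const runningSum: Component<number, number> = async function* (input) {
  let total = 0;
  for await (const n of input) yield (total += n);
};

// The "program" is nothing but the wiring between components.
async function main() {
  for await (const total of runningSum(parse(source()))) {
    console.log(total); // 3, 4, 8, 9, 14
  }
}
main();
```

Each component here reacts to data as it arrives and knows nothing about its neighbors, which is the essence of the flow-based alternative to a single program orchestrating shared state.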
Alternatives and suggestions for candidates and companies to avoid tests
In part one we covered the types of technical tests, their relative costs, why organizations need to understand the cost to candidates if they want to attract the right kind of candidates, and at what point in the process to test.
We haven’t covered alternatives yet. Larger organizations have the benefit of HR or recruitment divisions to bear the brunt of the early cost of the recruitment process—they can call candidates individually to check out interpersonal skills and to make the candidates feel wanted. Smaller organizations don’t necessarily have this, but they do have the benefit of being more flexible. If the early stages of recruitment fall on developers (or tech leads, CTOs, or other people in the technical organization), then obviously these organizations are trying to keep the time costs down—every hour invested in recruitment is an hour not spent coding the company’s money-making product. But these techies are also in a much better position to judge a candidate, and don’t always need to rely on one channel (the technical test, for example) to make that judgment. They have other alternatives.
Learning languages through frameworks
I love frameworks. I love that frameworks like Rails and Bootstrap, in particular, make me more productive: People smarter than I have taken care of several decisions that distract from the typical goals of my web applications. I spend most of my time developing within the friendly confines of such frameworks, in part because I enjoy building—and delivering—applications I can show off to non-programmer friends (or clients, for that matter). They’re just not as impressed when I show them a Hello World app or a completed kata, for some reason.
Of course, there’s a danger here. With out-of-the-box, “omakase” Rails, it’s increasingly possible to create powerful applications with a rich understanding of Rails, but a so-so (or worse) understanding of Ruby itself. A well-done framework hides much of the complexity of a language, so the developer can focus more on the problem domain. But if and when an application’s requirements deviate from those for which a stock Rails installation is suited, a couple of things might happen for the developer who’s not sufficiently grounded in the language. At best, programmer productivity slows considerably. At worst, dirty hacks and kludges start appearing in the code base, and technical debt mounts quickly.
Lessons from Adobe's breach and heartbreak for Cupid Media's users
Recently, I commented on the Adobe breach in a post titled “How Secure is Your Old and Inactive User Data?” The next week I followed up with “Adobe’s Breach Widens.” It was then that Heather Edell, Adobe’s Senior Manager of Corporate Communications, contacted me directly with a few details about how Adobe is responding to some of the 38 million customers whose data was made vulnerable by the breach:
Customers whose credit or debit card information was involved are receiving a notification letter from us with additional information on steps they can take to help protect themselves against potential misuse of personal information about them.
I appreciated the email from Heather. The Adobe web page is very good, with a lot of detail, and jibes with what Brian Krebs and others outlined.
I’ll also take this as a polite way of saying that no, customers whose only exposed data was an email address aren’t going to be notified. We’ll agree to disagree on that.
Will WebRTC disrupt or be disrupted?
WebRTC promises to deliver computer-to-computer communications with minimal reliance on central servers to manage the conversation. Peer-to-peer systems promise smoother exchanges without the tremendous scale challenges of running video, for example, through central points.
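To see how small the central server’s role is, here is a minimal browser-side sketch using the standard RTCPeerConnection API. The only centralized step is the offer/answer exchange (signaling), which WebRTC deliberately leaves unspecified; the exchangeViaSignalingServer stub below is hypothetical and would be wired to your own channel.

```typescript
// Minimal WebRTC data-channel sketch (browser). A central server is
// needed only for signaling; the connection itself is peer to peer.

// Hypothetical stub: ship our offer to the other peer (via WebSocket,
// HTTP, etc.) and get their answer back. WebRTC doesn't define this.
async function exchangeViaSignalingServer(
  offer: RTCSessionDescriptionInit
): Promise<RTCSessionDescriptionInit> {
  throw new Error("wire this to your own signaling channel");
}

async function connect(): Promise<void> {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
  });

  const channel = pc.createDataChannel("chat");
  channel.onopen = () => channel.send("hello, peer");
  channel.onmessage = (e) => console.log("peer says:", e.data);

  // Offer/answer is the only part that touches a central server.
  // (ICE candidate exchange, also via signaling, is elided here.)
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  const answer = await exchangeViaSignalingServer(offer);
  await pc.setRemoteDescription(answer);
}
connect().catch(console.error);
```

Once the answer is applied, media and data flow directly between the peers, which is exactly why WebRTC sidesteps the scaling problem of relaying video through central points.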
The WebRTC Conference and Expo was unlike any other web conference I’ve attended. Though technologies in development are common at tech conferences, I can’t remember attending a show focused on a technology whose future held these levels of promise and uncertainty. Also, despite the name, WebRTC doesn’t resemble much of the Web, even though it’s built into some browsers (with more, hopefully, coming soon) and supports HTTP(S) proxying.
Dissecting the hows, whys, and why nots of screening candidates
Because so many of us have experienced both sides of the interview table, the London Java Community has a slight obsession with discussing approaches to recruitment. The nice thing about these conversations is that you see both points of view—the candidates’ and that of the techies responsible for hiring.
Our most recent conversation was (again) about the value of technical tests during the interview process. I’m not sure there’s another topic that generates such diverse and yet all “correct” points of view. If there’s one thing I’ve learnt from seeing this conversation again and again, it’s that there is no One True Way to test a candidate’s technical ability. If there were, we’d all be doing it.