ENTRIES TAGGED "testing"

Upward Mobility: The Terror of iOS App Submission

Getting apps into the store is a non-deterministic process

One of the major topics of my Enterprise iOS book is how to plan release schedules around Apple’s peril-filled submission process. I don’t think you can count yourself a truly bloodied iOS dev until you’ve gotten your first rejection notice from iTunes Connect, especially under deadline pressure.

Traditionally, the major reason that applications would bounce was that the developer had been a Bad Person. They had grossly abused the Human Interface Guidelines, or had a flaky app that crashed when the tester fired it up, or used undocumented internal system calls. In most cases, the rejection could have been anticipated if the developer had done his homework. There were occasional apps that got rejected for bizarre reasons, such as perceived adult content, or because of some secret Apple agenda, but they were the rare exception. If you followed the rules, your app would get into the store.

Read more…

Comments: 2

Documentation as Testing

Can explanation contribute to technology creation?

“If you’re explaining, you’re losing.”

That gem of political wisdom has always been hard for me to take, as, after all, I make my living at explaining technology. I don’t feel like I’m losing. And yet…

It rings true. It’s not that programs and devices shouldn’t need documentation, but rather that documentation is an opportunity to find out just how complex a tool is. The problem is less that documentation writers are losing when they’re explaining, and more that creators of software and devices are losing when they have to settle for “fix in documentation.”

I was delighted last week to hear from Doug Schepers of webplatform.org that they want to “tighten the feedback loop between specification and documentation to make the specifications better.” Documentation means that someone has read and attempted to explain the specification to a broader audience, and the broader audience can then try things out and add their own comments. Writing documentation with that as an explicit goal is a much happier approach than the usual plight of documentation writers: trapped explaining unfixable tools whose creators apparently never gave much thought to explaining them.

It’s not just WebPlatform.org. I’ve praised the Elixir community for similar willingness to listen when people writing documentation (internal or external) report difficulties. When something is hard to explain, there’s usually some elegance missing. Developers writing their own documentation sometimes find it, but it can be easier to see the seams when you aren’t the one creating them.
Read more…

Comments: 4

What Developers Can Learn from Healthcare.gov

Remember, even a failure can serve as an example of what not to do

The first highly visible component of the Affordable Care Act launched this week, in the form of the healthcare.gov site. Theoretically, it allows citizens who live in any of the states that have chosen not to implement their own portals to get quotes and sign up for coverage.

I say theoretically because I’ve been trying to get a quote out of it since it launched on Tuesday, and I’m still trying. Every time I think I’ve gotten past the last glitch, a new one shows up further down the line. While it’s easy to write it off as yet another example of how the government (under any administration) seems to be incapable of delivering large software projects, there are some specific lessons that developers can take away.

Read more…

Comments: 72

NoSQL Choices: To Misfit or Cargo Cult?

Retreading old topics can be a powerful source of epiphany, sometimes more so than simple extra-box thinking. I was a computer science student; of course I knew statistics. But my recent years as a NoSQL (or better stated: distributed systems) junkie have irreparably colored my worldview, filtering every metaphor with a tinge of information management.

Lounging on a half-world plane ride has its benefits, namely, the opportunity to read. Most of my Delta flight from Tel Aviv back home to Portland lacked both wifi and (in my case) a workable laptop power source. So instead, I devoured Nate Silver’s book, The Signal and the Noise. When Nate reintroduced me to the concept of statistical overfit, and relatedly underfit, I could not help but consider these cases in light of the modern problem of distributed data management, namely, operators (you may call these operators DBAs, but please, not to their faces).

When collecting information, be it for a psychological profile of chimp mating rituals or a plot of datapoints in search of the Higgs boson, the ultimate goal is to find some sort of usable signal, some trend in the data. Not every point is useful, and in fact, any individual point could be downright abnormal. This is why we need several points to spot a trend. The world rarely gives us anything clearer than a jumble of anecdotes. But plotted together, occasionally a pattern emerges. This pattern, if repeatable and useful for prediction, becomes a working theory. This is science, and it is generally considered a good method for making decisions.

On the other hand, when lacking experience, we tend to overvalue the experience of others when we assume they have more. This works in straightforward cases, like learning to cook a burger (watch someone make one, copy their process). It is less useful as the similarities diverge: watching someone make a cake won’t tell you much about the process of crafting a burger. Folks like to call this cargo cult behavior.

How Fit are You, Bro?

You need to extract useful information from experience (for which I’ll use the math-y sounding word datapoints). Having a collection of datapoints to choose from is useful, but that’s only one part of the process of decision-making. I’m not speaking of a necessarily formal process here; in the case of database operators, it’s merely a collection of experience. Reality tends to be fairly biased toward facts (despite the desire of many people for this not to be the case). Given enough experience, especially if that experience is factual, we tend to make better and better decisions, more in line with reality. That’s pretty much the essence of prediction. Our mushy human brains are more-or-less good at that, at least better than other animals. It’s why we have computers and Everybody Loves Raymond, and my cat pees in a box.

Imagine you have a sufficient number of relevant datapoints that you can plot on a chart. Assuming the axes have any relation to each other, and the data is sound, a trend may emerge, such as a line or some other bounding shape. A signal is relevant data that corresponds to the rules we discover by best fit. Noise is everything else. It’s somewhat circular-sounding logic, and it’s really hard to know what is truly a signal. This is why science is hard, and so is choosing a proper database. We’re always checking our assumptions, and one solid counter-signal can be disastrous for a model. We may have been wrong all along, and simply lacked enough data to see it. As Einstein famously said in response to the book 100 Authors Against Einstein: “If I were wrong, then one would have been enough!”

Database operators (and programmers forced to play this role) must make predictions all the time, against a seemingly endless series of questions. How much data can I handle? What kind of latency can I expect? How many servers will I need, and how much work to manage them?

So, like all decision-making processes, we refer to experience. The problem is that, as our industry demands increasing scale, very few people actually have much experience managing giant-scale systems. We tend to draw our assumptions from our limited or biased smaller-scale experience and extrapolate outward. The theories we then concoct are not the optimal fit we desire; instead, they tend to be overfit.

Overfit is when we have a limited amount of data and overstate its general implications. If we imagine a plot of likely failure scenarios against a limited number of servers, we may be tempted to believe our biggest risks of failure are insufficient RAM or disk failure. After all, my network has never given me problems, but I sure have lost a hard drive or two. We take these assumptions, which are only somewhat relevant to the realities of scalable systems, and divine some rules for ourselves that entirely miss the point.

[Figures from the original post: an overfitted curve vs. a properly fitted curve]
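
To make the difference concrete, here is a toy sketch in Scala with made-up numbers: the same handful of roughly linear datapoints is fit with a simple least-squares line and with a polynomial that passes through every point exactly, and both are then asked to predict a held-out point.

    // Toy illustration of overfitting: invented, roughly linear data with a little noise.
    object OverfitDemo extends App {
      val observed = Seq((1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8))
      val heldOut  = (5.0, 10.1) // a point we pretend not to know yet

      // The "fit": an ordinary least-squares line, y = a + b * x.
      val n   = observed.size.toDouble
      val sx  = observed.map(_._1).sum
      val sy  = observed.map(_._2).sum
      val sxx = observed.map { case (x, _) => x * x }.sum
      val sxy = observed.map { case (x, y) => x * y }.sum
      val b   = (n * sxy - sx * sy) / (n * sxx - sx * sx)
      val a   = (sy - b * sx) / n
      def line(x: Double): Double = a + b * x

      // The "overfit": a Lagrange polynomial through every observed point.
      // Zero error on the data we have, no guarantee anywhere else.
      def exact(x: Double): Double =
        observed.indices.map { i =>
          val (xi, yi) = observed(i)
          yi * observed.indices.filter(_ != i).map { j =>
            val xj = observed(j)._1
            (x - xj) / (xi - xj)
          }.product
        }.sum

      println(f"least-squares line predicts ${line(heldOut._1)}%.2f at x = ${heldOut._1}")
      println(f"exact-fit polynomial predicts ${exact(heldOut._1)}%.2f at x = ${heldOut._1}")
      println(f"actual held-out value: ${heldOut._2}%.2f")
    }

With these numbers the line lands near 9.85, close to the held-out 10.1, while the exact-fit polynomial drifts down to 7.5: a small-scale version of divining rules that entirely miss the point.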

In a real distributed system, network issues tend to consume most of our interest. Single-server consistency is a solved problem, and most (worthwhile) distributed databases have some sense of built-in redundancy (usually replication, the root of all distributed evil).
Read more…

Comment

Test-driven Infrastructure with Chef

Velocity 2013 Speaker Series

If you’re a System Administrator, you’re likely all too familiar with the 2:35am PagerDuty alert. “When you roll out testing on your infrastructure,” says Seth Vargo, “the number of alerts drastically decreases because you can build tests right into your Chef cookbooks.” We sat down to discuss his upcoming talk at Velocity, which promises to deliver many more restful nights for SysAdmins.

Key highlights from our discussion include:

  • There are not currently any standards regarding testing with Chef. [Discussed at 1:09]
  • A recommended workflow that starts with unit testing [Discussed at 2:11]
  • Moving cookbooks through a “pipeline” of testing with Test Kitchen [Discussed at 3:11]
  • In the event that something bad does make it into production, you can roll back actual infrastructure changes. [Discussed at 4:54]
  • Automating testing and cookbook uploads with Jenkins [Discussed at 5:40]

You can watch the full interview here:

 

Comment

How to Get Fast, Cheap Feedback on Your Product with Tiny Tests

User research you can do now

There’s a lot of advice about how to do great user research. I have some pretty strong opinions about it myself.

But, as with exercise, the best kind of research is the kind that you actually DO.

So, in the interests of getting some good feedback from your users right now, I have some suggestions for Tiny Tests. These are types of research that you could do right this second with very little preparation on your part.


What is a Tiny Test?

Tiny Tests do not take a lot of time. They don’t take a lot of money. All they take is a commitment to learning something from your users today.

Pick a Tiny Test that applies to your product and get out and run one right now. Oh, ok. You can wait until you finish the post.

Unmoderated tests

Dozens of companies now make it possible to run an unmoderated test in a few minutes. I’ve used UserTesting.com many times and gotten some great results really quickly. I’ve also heard good things about Loop11 and several others, so feel free to pick the one that you like best.

Read more…

Comment: 1

Tools for test-driven development in Scala

Two core Scala libraries support features for mocking and data generation.

Scala, a language designed for well-structured and readable programs, is richly provisioned with testing frameworks. The community has adopted test-driven development (TDD) and behavior-driven development (BDD) with zeal. These represent the baseline for trustworthy code development today.

TDD and BDD expand beyond the traditional model of incorporating a test phase into the development process. Most programmers know that ad hoc debugging is not sufficient and that they need to run tests on isolated functions (unit testing) to make sure that a change doesn’t break anything (regression testing). But testing libraries available for Scala, in supporting TDD and BDD, encourage developers to write tests before they even write the code being tested.

Tests can be expressed in human-readable text reminiscent of natural language (although you can’t stretch the comparison too far), so you are documenting what you want your code to do while also expressing the test that ensures the code will ultimately meet your requirements.
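
To give a flavor of that human-readable style, here is a minimal sketch of a BDD-flavored spec using ScalaTest, one of the more widely used Scala testing frameworks. The Account class and its behavior are invented for illustration, and the imports assume a ScalaTest 3.x layout.

    import org.scalatest.flatspec.AnyFlatSpec
    import org.scalatest.matchers.should.Matchers

    // A toy class under test, invented purely for this example.
    class Account {
      private var current = 0
      def balance: Int = current
      def deposit(amount: Int): Unit = current += amount
      def withdraw(amount: Int): Unit = {
        require(amount <= current, "insufficient funds")
        current -= amount
      }
    }

    // Each clause reads almost like prose: it documents the intended behavior
    // while also serving as a regression test.
    class AccountSpec extends AnyFlatSpec with Matchers {

      "An Account" should "start with a zero balance" in {
        (new Account).balance shouldBe 0
      }

      it should "track deposits" in {
        val account = new Account
        account.deposit(25)
        account.balance shouldBe 25
      }

      it should "reject withdrawals larger than the balance" in {
        an[IllegalArgumentException] should be thrownBy (new Account).withdraw(10)
      }
    }

specs2 supports a similarly readable style, and either library plugs into sbt’s standard test runner.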

Daniel Hinojosa, author of Testing in Scala, describes the frameworks and their use for testing, TDD, and BDD in this interview.

Highlights from our discussion include:

Read more…

Comment

The many sides to shipping a great software project

An interview with Shipping Greatness author Chris Vander Mey.

Chris Vander Mey, CEO of Scaled Recognition, and author of a new O’Reilly book, Shipping Greatness, lays out in this video some of the deep lessons he learned during his years working on high-impact, high-priority projects at Google and Amazon.

Chris takes a very expansive view of project management, stressing the crucial decisions leaders need to make and the attitudes they need to adopt at every stage, from the team’s initial mission statement through design, coding, and testing to the ultimate launch. By merging technical, organizational, and cultural issues, he unravels some of the magic that makes projects successful.

Read more…

Comment

Jason Huggins’ Angry Birds-playing Selenium robot

How a game-playing robot could help shape the future of mobile testing.

If you try to talk to Jason Huggins about Selenium, he'll probably do to you what he did to us. He'll bring his Arduino-based Angry Birds-playing testing robot to your interview and then he'll relate his invention to the larger problems of mobile application testing and cloud-based testing infrastructure.

Comment: 1

Don’t put all your trust in mobile emulators

Steve Souders on how he reduces the development risks of mobile emulators.

Steve Souders, performance evangelist at Google, looks forward to remote debugging and testing capabilities, but he warns against putting too much faith in emulators.

Comments Off