ENTRIES TAGGED "Hadoop"

Dealing with Data in the Hadoop Ecosystem

Hadoop, Sqoop, and ZooKeeper

Kathleen Ting (@kate_ting), Technical Account Manager at Cloudera, and our own Andy Oram (@praxagora) sat down to discuss how to work with structured and unstructured data, as well as how to keep the systems crunching that data up and running.

Key highlights include:

  • Misconfigurations account for almost half of the support issues that the team at Cloudera is seeing [Discussed at 0:22]
  • ZooKeeper, the canary in the Hadoop coal mine [Discussed at 1:10]
  • Leaky clients are often a problem ZooKeeper detects [Discussed at 2:10]
  • Sqoop is a bulk data transfer tool [Discussed at 2:47]
  • Sqoop helps to bring together structured and unstructured data [Discussed at 3:50]
  • ZooKeeper is not for storage, but for coordination, reliability, and availability [Discussed at 4:44]

You can view the full interview here:

Read more…

Comment |

Working in the Hadoop Ecosystem

Working with big data and open source software

I recently sat down with Mark Grover (@mark_grover), a Software Engineer at Cloudera, to talk about the Hadoop ecosystem. He is a committer on Apache Bigtop and a contributor to Apache Hadoop, Hive, Sqoop, and Flume. He also contributed to O’Reilly Media’s Programming Hive title.

Key highlights include:

Read more…

Comments: 2 |

Why Choose a Graph Database

Collaborative filtering with Neo4j

By now, chances are you’ve heard of NoSQL, and of graph databases like Neo4j.

NoSQL databases address important challenges that we face today in terms of data size and data complexity. Each offers a data model tailored to one of these dimensions.

On one side of the spectrum, these databases address scale-out and high data volumes using compound aggregate values; on the other side sits a relationship-based data model that lets us represent real-world information with high fidelity and complexity.

Neo4j, like many other graph databases, builds upon the property graph model: labeled nodes (for informational entities) are connected via directed, typed relationships. Both nodes and relationships hold arbitrary properties (key-value pairs). There is no rigid schema, but with node labels and relationship types we can attach as much meta-information as we like. When importing data into a graph database, the relationships are treated with as much value as the records themselves. This allows the engine to navigate connections between nodes in constant time, which compares favorably to the exponential slowdown of many-join SQL queries in a relational database.

[Figure: the property graph model]
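
To make the model concrete, here is a rough in-memory sketch of a property graph in Python; the labels, relationship types, and properties (Person, Product, BOUGHT, etc.) are hypothetical examples, not anything prescribed by Neo4j:

    # A rough in-memory illustration of the property graph model described above.
    # The labels, relationship types, and properties are hypothetical examples.

    # Labeled nodes holding arbitrary key-value properties.
    nodes = {
        1: {"labels": ["Person"], "properties": {"name": "Alice", "age": 34}},
        2: {"labels": ["Person"], "properties": {"name": "Bob"}},
        3: {"labels": ["Product"], "properties": {"name": "Graph Databases", "price": 29.99}},
    }

    # Directed, typed relationships can carry properties of their own.
    relationships = [
        {"start": 1, "end": 2, "type": "KNOWS", "properties": {"since": 2010}},
        {"start": 1, "end": 3, "type": "BOUGHT", "properties": {"rating": 5}},
    ]

    def neighbors(node_id, rel_type):
        """Follow outgoing relationships of one type from a single node."""
        return [nodes[r["end"]] for r in relationships
                if r["start"] == node_id and r["type"] == rel_type]

    print(neighbors(1, "BOUGHT"))   # -> [{'labels': ['Product'], ...}]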

How can you use a graph database?

Graph databases are well suited to model rich domains. Both object models and ER-diagrams are already graphs and provide a hint at the whiteboard-friendliness of the data model and the low-friction mapping of objects into graphs.

Instead of de-normalizing for performance, you would normalize interesting attributes into their own nodes, making it much easier to move, filter, and aggregate along these lines. Content and asset management, job finding, and recommendations based on weighted relationships to relevant attribute nodes are some use cases that fit this model very well.

Many people use graph databases because of their high-performance online query capabilities. They process high volumes of raw data with Map/Reduce in Hadoop or with event processing (Storm, Esper, etc.) and project the computed results into a graph. We’ve seen examples of this across many domains, from finance (fraud detection in money-flow graphs) and biotech (protein analysis on genome sequencing data) to telecom (mobile network optimization based on signal-strength measurements).

Graph databases shine when you can express your queries as a local search using a few starting points (e.g., people, products, places, orders). From there, you can follow relevant relationships to accumulate interesting information, or project visited nodes and relationships into a suitable result.
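
As a sketch of that style of query, the snippet below starts from one person and follows purchase relationships to recommend products that similar buyers also bought. The graph schema (Person, Product, BOUGHT), the connection details, and the use of the official neo4j Python driver (which postdates this post) are all assumptions for illustration, not code from the article:

    # Collaborative filtering as a local graph search, sketched with the
    # neo4j Python driver (pip install neo4j). The Person/Product/BOUGHT
    # schema and connection details are hypothetical.
    from neo4j import GraphDatabase

    QUERY = """
    MATCH (me:Person {name: $name})-[:BOUGHT]->(p:Product)
          <-[:BOUGHT]-(other:Person)-[:BOUGHT]->(rec:Product)
    WHERE NOT (me)-[:BOUGHT]->(rec)
    RETURN rec.name AS recommendation, count(*) AS score
    ORDER BY score DESC
    LIMIT 5
    """

    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
    with driver.session() as session:
        for record in session.run(QUERY, name="Alice"):
            print(record["recommendation"], record["score"])
    driver.close()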

Read more…

Comment: 1 |

Get Hadoop, Hive, and HBase Up and Running in Less Than 15 Minutes

OSCON 2013 Speaker Series

If you have delved into Apache Hadoop and related projects, you know that installing and configuring Hadoop is hard. Often, a minor mistake made while installing or configuring the messy tarballs will lurk for a long time, until some otherwise innocuous change to the system or workload causes difficulties. Moreover, there is little to no integration testing among the different projects (e.g., Hadoop, Hive, HBase, ZooKeeper) in the ecosystem. Apache Bigtop is an open source project aimed at bridging exactly those gaps by:

1. Making it easier for users to deploy and configure Hadoop and related projects on their bare metal or virtualized clusters.

2. Performing integration testing among various components in the Hadoop ecosystem.

More about Apache Bigtop

The primary goal of Apache Bigtop is to build a community around the packaging and interoperability testing of Hadoop-related projects. This includes testing at various levels (packaging, platform, runtime, upgrade, etc.), carried out by a community focused on the system as a whole rather than on individual projects.

The latest released version of Apache Bigtop is Bigtop 0.5, which integrates the latest versions of various projects including Hadoop, Hive, HBase, Flume, Sqoop, Oozie, and many more! The supported platforms include CentOS/RHEL 5 and 6, Fedora 16 and 17, SuSE Linux Enterprise 11, OpenSuSE 12.2, Ubuntu LTS Lucid and Precise, and Ubuntu Quantal.

Who uses Bigtop?

Folks who use Bigtop can be divided into two major categories: those who leverage Bigtop to power their own Hadoop distributions, and those who use Bigtop for deployment purposes.

In alphabetical order, they are:
Read more…

Comment |

Enterprise Data Workflows with Cascading

OSCON 2013 Speaker Series

Paco Nathan (@pacoid) is Director of Data Science at Concurrent, an O’Reilly author, and an OSCON 2013 speaker. In this interview we talk about creating enterprise data workflows with Cascading. Be sure to check out Paco’s book on the subject here.

NOTE: If you are interested in attending OSCON to check out Paco’s talk or the many other cool sessions, click over to the OSCON website where you can use the discount code OS13PROG to get 20% off your registration fee.

Key highlights include:

  • Cascading is an abstraction layer on top of Hadoop [Discussed at 0:23]
  • Define your business logic at a high level [Discussed at 1:21]
  • Is Cascading good for enterprise? [Discussed at 2:31]
  • Test-driven development at scale [Discussed at 3:35]
  • Cascalog and the City of Palo Alto Open Data portal [Discussed at 7:39]

You can view the full interview here:

Read more…

Comment |

Scaling People, Process, and Technology with Python

OSCON 2013 Speaker Series

NOTE: If you are interested in attending OSCON to check out Dave’s talk or the many other cool sessions, click over to the OSCON website where you can use the discount code OS13PROG to get 20% off your registration fee.

Since 2009, I’ve been leading the optimization team at AppNexus, a real-time advertising exchange. On this exchange, advertisers participate in real-time auctions to bid on individual ad impressions. The highest bid wins the auction, and that advertiser gets to show an ad. This allows advertisers to carefully target where they advertise—maximizing the effectiveness of their advertising budget—and lets websites maximize their ad revenue.

We do these auctions often (~50 billion a day) and fast (<100 milliseconds). Not surprisingly, this creates a lot of technical challenges. One of those challenges is how to automatically maximize the value advertisers get for their marketing budgets—systematically driving consumer engagement through ad placements on particular websites, times of day, etc.—and we call this process “optimization.” The volume of data is large, and the algorithms and strategies aren’t trivial.

In order to win clients and build our business to the scale we have today, it was crucial that we build a world-class optimization system. But when I started, we didn’t have a scalable tech stack to process the terabytes of data flowing through our systems every day, and we didn't have the team to do any of the required data modeling.

People

So, we needed to hire great people fast. However, there aren’t many veterans in the advertising optimization space, and because of that, we couldn’t afford to narrow our search to only experts in Java or R or Matlab. In order to give us the largest talent pool possible to recruit from, we had to choose a tech stack that is both powerful and accessible to people with diverse experience and backgrounds. So we chose Python.

Python is easy to learn. We found that people coding in R, Matlab, Java, PHP, and even those who had never programmed before could quickly learn Python and get up to speed. This opened us up to hiring from a tremendous pool of talent whom we could train in Python once they joined AppNexus. To top it off, there’s a great community of engineers to hire from, and the PyData community is full of programmers who specialize in modeling and automation.

Additionally, Python has great libraries for data modeling. It offers great analytical tools for analysts and quants; combined, Pandas, IPython, and Matplotlib give you much of the functionality of Matlab or R. This made it easy to hire and onboard quants and analysts who were familiar with those technologies. Even better, analysts and quants can share their analysis through the browser with IPython.
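
For a flavor of that workflow, here is a minimal, self-contained pandas-plus-matplotlib example; the file and column names (impressions.csv, hour, clicks) are made up for illustration:

    # Minimal illustration of the pandas + matplotlib workflow; the file and
    # column names are hypothetical.
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("impressions.csv", parse_dates=["hour"])
    print(df.describe())                      # quick R/Matlab-style summary stats

    hourly = df.groupby(df["hour"].dt.hour)["clicks"].mean()
    hourly.plot(kind="bar", title="Average clicks by hour of day")
    plt.show()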

Process

Now that we had all of these wonderful employees, we needed a way to cut down the time to get them ramped up and pushing code to production.

First, we wanted to get our analysts and quants looking at and modeling data as soon as possible. We didn’t want them worrying about writing database connector code, or figuring out how to turn a cursor into a data frame. To tackle this, we built a project called Link.

Imagine you have a MySQL database. You don’t want to hardcode all of your connection information, because you want different configs for different users or different environments. Link allows you to define your “environment” in a JSON config file and then reference it in code as if it were a Python object.

Now, with only three lines of code, you have a database connection and a data frame straight from your MySQL database. The same methodology works for Vertica, Netezza, Postgres, SQLite, etc. New “wrappers” can be added to accommodate new technologies, allowing team members to focus on modeling the data, not on how to connect to all these weird data sources.
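
Link’s own code isn’t shown in this post, so the sketch below only illustrates the pattern described: a JSON “environment” file plus a thin wrapper that turns a cursor into a data frame. The class names, config keys, and the pymysql driver are assumptions for illustration, not Link’s actual API.

    # Hypothetical sketch of the pattern described above; not Link's actual API.
    # Assume a JSON config such as:
    #   {"prod_mysql": {"host": "db.example.com", "user": "quant",
    #                   "password": "secret", "database": "ads"}}
    import json
    import pandas as pd
    import pymysql  # assumed MySQL driver

    class Wrapper:
        """Thin wrapper that turns query results into a pandas DataFrame."""
        def __init__(self, cfg):
            self._conn = pymysql.connect(**cfg)

        def select_dataframe(self, sql):
            cur = self._conn.cursor()
            cur.execute(sql)
            cols = [c[0] for c in cur.description]
            return pd.DataFrame(cur.fetchall(), columns=cols)

    class Env:
        """Expose each data source in the JSON config as an attribute."""
        def __init__(self, path):
            self._cfg = json.load(open(path))

        def __getattr__(self, name):
            return Wrapper(self._cfg[name])

    # The promised three lines: environment, query, data frame.
    env = Env("config.json")
    df = env.prod_mysql.select_dataframe("SELECT campaign_id, spend FROM daily_spend")
    print(df.head())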

By having the flexibility to easily connect to new data sources and APIs, our quants were able to adapt to the evolving architectures around us, and stay focused on modeling data and creating algorithms.

Second, we wanted to minimize the amount of work it took to take an algorithm from research/prototype phase to full production scale. Luckily, with everyone working in Python, our quants, analysts, and engineers are using the same language and data processing libraries. There was no need to re-implement an R script in Java to get it out across the platform.
Read more…

Comment |

Hadoop Training, OpenStreetMap Sprint, MakersFactory Kids’ Programming Camp, and More

Tech events you don't want to miss

Each Monday, we round up upcoming event highlights from the programming and technology spaces. Have an event to share? Send us a note.

Twisted Python: the engine of your Internet webcast: Jessica McKellar presents an architectural overview of the Python networking library Twisted and shows how to build robust clients and servers for popular and custom network protocols. Register for this free webcast.
Date: 10 a.m. PT, June 6 Location: Online webcast

2 Day Hadoop Training June 2013: This course offers a fast-paced technical overview of the Hadoop landscape, targeted toward both technical and non-technical people who want to understand the emerging world of big data. For more information and to register, visit the event page.
Date: June 8–9 Location: Sunnyvale, CA

Read more…

Comment |

Building Windows 8 Apps, Hadoop Developer Track Course, Write/Speak/Code, and More

Tech events you don't want to miss.

Each Monday, we round up upcoming event highlights from the programming and technology spaces. Have an event to share? Send us a note.

Zero to App in Two Weeks webcast: Kraig Brockschmidt shares lessons learned to help you improve your productivity and app-building efforts when programming Windows 8 apps in HTML, CSS, and JavaScript. Register for this free webcast.
Date: 10 a.m. PT, May 21 Location: Online webcast

Apache Hadoop Developer’s Track – 1 Day Course: This Big Data Cloud University class reviews Hadoop’s essential server components and details its relation to MapReduce, Hive and Pig programming. Course instruction includes hands-on labs. For more information or to register, visit the class website.
Date: 8:30 a.m. to 5:30 p.m. PT, May 25 Location: Los Angeles, CA

Read more…

Comment |

Microsoft opens up

How Microsoft is contributing to and benefiting from open source.

Microsoft seems to be embracing open source more and more. What does this tell us about the company's near-term future?

Comments: 2 |

Developer Year in Review: 2011 Edition

It was a good year for mobile, HTML5, Drupal and Hadoop.

It's time for our annual look back at the year that was, when mobile ruled the world, HTML5 PWNED Flash, Drupal and Hadoop were the hot buzzwords for your resume, and a new batch of languages tried to become stars.

Comment |