"deep learning" entries

Four short links: 25 February 2016

Security Advice, Common Deep Learning Interface, React Text Editing, and Sexy Docs

  1. Free Security Advice (grugq) — chap wearies of handing out security advice, so gathers it and shares for all.
  2. TensorFuse — Common interface for Theano, CGT, and TensorFlow.
  3. Draft.js — a framework for building rich text editors in React, powered by an immutable model and abstracting over cross-browser differences.
  4. Dexy — a free-form literate documentation tool for writing any kind of technical document incorporating code. Dexy helps you write correct documents, and to easily maintain them over time as your code changes.
Four short links: 24 February 2016

UX Metrics, Page Scraping, IoT Pain, and NLP + Deep Learning

  1. Critical Metric: Critical Responses (Steve Souders) — new UX-focused metrics […] Start Render and Speed Index.
  2. Automatically Scrape and Import a Table in Google Spreadsheets (Zach Klein) — =ImportHtml("URL", "table", num) where “table” is the element name (“table” or “list”), and num is the number of the element in case there are multiple on the page. Bam! (A concrete example follows this list.)
  3. Getting Visibility on the iBeacon Problem (Brooklyn Museum) — the Internet of Things is great, but I wouldn’t want to have to update its firmware. As we started to troubleshoot beacon issues, we wanted a clean slate. This meant updating the firmware on all the beacons, checking the battery life, and turning off the advanced power settings that Estimote provides. This was a painstakingly manual process where I’d have to go and update each unit one-by-one. In some cases, I’d use Estimote’s cloud tool to pre-select certain actions, but I’d still have to walk to each unit to execute the changes and use of the tool hardly made things faster. Perhaps when every inch of the world is filled with sensors, Google Street View cars will also beam out firmware updates.
  4. NLP Meets Deep Learning — easy to follow slide deck talking about how deep learning is tackling NLP problems.
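
To make the formula in link 2 concrete, a single cell entry like the one below (the Wikipedia URL is only illustrative) pulls the first table on the target page into the sheet:

    =ImportHtml("https://en.wikipedia.org/wiki/List_of_countries_by_population", "table", 1)
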
Four short links: 16 February 2016

Full-on Maker, Robot Recap, Decoding Mandarin, and Sequencing Birds

  1. Washers and Screws (YouTube) — this chap is making his own clock from scratch, and here he is making his own washers and screws. Sometimes another person’s obsession can be calming. (via Greg Sadetsky)
  2. ROScon 2015 Recap with Videos (Robohub) — Shuttleworth suggests that robotics developers really need two things at this point: a robust Internet of Things infrastructure, followed by the addition of dynamic mobility that robots represent. However, software is a much more realistic business proposition for a robotics startup, especially if you leverage open source to create a developer community around your product and let others innovate through what you’ve built.
  3. Getting Deep Speech to Work in Mandarin (Baidu SVAIL) — TIL that some of the preprocessing traditionally used in speech-to-text systems throws away pitch information necessary to decode tonal languages like Mandarin. Deep Speech doesn’t use specialized features like MFCCs. We train directly from the spectrogram of the input audio signal. The spectrogram is a fairly general representation of an audio signal. The neural network is able to learn directly which information is relevant from the input, so we didn’t need to change anything about the features to move from English speech recognition to Mandarin speech recognition. Their model works better than humans at decoding short text such as queries. (A minimal spectrogram sketch follows this list.)
  4. Sequencing Genomes of All Known Kakapo — TIL there’s a project to sequence genomes of 10,000 bird species and that there’s this crowdfunded science project to sequence the kakapo genome. There are only 125 left, and conservationists expect to use the sequenced genomes to ensure rare genes are preserved. Every genome in this species could be sequenced … I’m boggling. (via Duke)
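
A minimal sketch of the input representation mentioned in link 3, not Baidu's actual pipeline, just the generic audio-to-spectrogram step a system like Deep Speech trains on, using NumPy and SciPy:

    # Generic audio-to-spectrogram step (illustrative only, not Deep Speech's code).
    import numpy as np
    from scipy.signal import spectrogram

    fs = 16000                              # assumed sample rate in Hz
    t = np.arange(fs) / fs                  # one second of synthetic audio
    audio = np.sin(2 * np.pi * 220 * t)     # stand-in for a recorded utterance

    # Sxx is a time-frequency energy grid; pitch shows up as energy concentrated
    # in particular frequency rows, which is the information MFCC-style features
    # can throw away.
    freqs, times, Sxx = spectrogram(audio, fs=fs, nperseg=400, noverlap=240)
    features = np.log(Sxx + 1e-10)          # log spectrogram fed to the network
    print(features.shape)                   # (frequency bins, time frames)
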
Four short links: 15 February 2016

Deep Learning Analogies, IoT Privacy, Robot Numbers, and App Economy

  1. Deep Visual Analogy-Making (PDF) — In this paper, we develop a novel deep network trained end-to-end to perform visual analogy making, which is the task of transforming a query image according to an example pair of related images. Open source code from the paper also available.
  2. Samsung’s TV and Privacy Gets More Awkward — Samsung has now issued a new statement clarifying how the voice activation feature works. “If a consumer consents and uses the voice recognition feature, voice data is provided to a third party during a requested voice command search,” Samsung said in a statement. “At that time, the voice data is sent to a server, which searches for the requested content then returns the desired content to the TV.” It only seems creepy until you give in and nothing bad happens, then you normalise the creepy.
  3. 2015 Robot Numbers (RoboHub) — The Robotic Industries Association (RIA), representing North American robotics, reported […] 2015 set new records and showed a 14% increase in units and 11% in dollars over 2014. The automotive industry was the primary growth sector, with robot orders increasing 19% year over year. Non-automotive robot orders grew at 5%.
  4. Mozilla, Caribou Digital Release Report Exploring the Global App Economy (Mark Surman) — The emerging markets are the 1% — meaning, they earn 1% of total app economy revenue. 95% of the estimated value in the app economy is captured by just 10 countries, and 69% of the value is captured by just the top three countries. Excluding China, the 19 countries considered low- or lower-income accounted for only 1% of total worldwide value. Developers in low-income countries struggle to export to the global stage. About one-third of developers in the sample appeared only in their domestic market.

Risto Miikkulainen on evolutionary computation and making robots think for themselves

The O'Reilly Radar Podcast: Evolutionary computation, its applications in deep learning, and how it's inspired by biology.

Subscribe to the O’Reilly Radar Podcast to track the technologies and people that will shape our world in the years to come: Stitcher, TuneIn, iTunes, SoundCloud, RSS

In this week’s episode, David Beyer, principal at Amplify Partners, co-founder of Chart.io, and part of the founding team at Patients Know Best, chats with Risto Miikkulainen, professor of computer science and neuroscience at the University of Texas at Austin. They chat about evolutionary computation, its applications in deep learning, and how it’s inspired by biology.

Finding optimal solutions

We talk about evolutionary computation as a way of solving problems, discovering solutions that are optimal or as good as possible. In these complex domains like, maybe, simulated multi-legged robots that are walking in challenging conditions—a slippery slope or a field with obstacles—there are probably many different solutions that will work. If you run the evolution multiple times, you probably will discover some different solutions. There are many paths of constructing that same solution. You have a population and you have some solution components discovered here and there, so there are many different ways for evolution to run and discover roughly the same kind of a walk, where you may be using three legs to move forward and one to push you up the slope if it’s a slippery slope.

You do (relatively) reliably discover the same solutions, but also, if you run it multiple times, you will discover others. This is also a new direction or recent direction in evolutionary computation—that the standard formulation is that you are running a single run of evolution and you try to, in the end, get the optimum. Everything in the population supports finding that optimum.
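
As a deliberately toy illustration of that standard single-run formulation (a population, selection toward an optimum, and different runs settling on different solutions), here is a minimal evolutionary loop in Python; the fitness function and parameters are stand-ins, not anything from Miikkulainen's experiments.

    # Toy evolutionary search: a population of candidate solutions evolves
    # toward an optimum, and different random seeds (different "runs") can
    # settle on different, equally good solutions.
    import random

    def fitness(x):
        # Two equally good peaks at x = -2 and x = +2, so separate runs
        # may converge to either one.
        return -min((x - 2.0) ** 2, (x + 2.0) ** 2)

    def evolve(seed, generations=200, pop_size=50, mutation=0.3):
        rng = random.Random(seed)
        population = [rng.uniform(-5, 5) for _ in range(pop_size)]
        for _ in range(generations):
            # Keep the fitter half, then refill with mutated copies of the survivors.
            population.sort(key=fitness, reverse=True)
            survivors = population[: pop_size // 2]
            children = [x + rng.gauss(0, mutation) for x in survivors]
            population = survivors + children
        return max(population, key=fitness)

    for seed in range(3):
        print("run %d: best solution %+.3f" % (seed, evolve(seed)))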

Read more…

Four short links: 11 February 2016

Surviving Crashes, Thumbs-Up Thumbs-Down Learning, Faster Homomorphic Encryption, and Nerdy V-Day Cards

  1. All File Systems are Not Created Equal: On the Complexity of Crafting Crash Consistent Applications (Paper a Day) — an important subject for me. BOB, the Block Order Breaker, is used to find out what behaviours are exhibited by a number of modern file systems that are relevant to building crash consistent applications. ALICE, the Application Level Intelligent Crash Explorer, is then used to explore the crash recovery behaviour of a number of applications on top of these file systems.
  2. BinaryNet: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1 (Arxiv) — instead of complex positive/negative floating-point weights, this uses +1 and -1 (which I can’t help but think of as “thumbs up”, “thumbs down”) to get nearly state-of-the-art results because at run-time, BinaryNet drastically reduces memory usage and replaces most multiplications by 1-bit exclusive-not-or (XNOR) operations, which might have a big impact on both general-purpose and dedicated Deep Learning hardware. GPLv2 code available. (A tiny binarization sketch follows this list.)
  3. Microsoft Speeds Up Homomorphic Encryption (The Register) — homomorphic encryption lets databases crunch data without needing keys to decode it.
  4. Nerdy Valentine Cards (Evil Mad Scientist) — for a nerd in your life. (via Cory Doctorow)
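
To make the “thumbs up, thumbs down” idea in link 2 concrete, here is a tiny binarization sketch: real-valued weights are kept around for training, but the forward pass only ever sees +1 and -1. This is a simplified illustration of the idea, not the released BinaryNet code.

    # Weight binarization sketch in the spirit of BinaryNet (illustrative only).
    # Real-valued "shadow" weights are kept for gradient updates; the forward
    # pass uses only +1/-1, so each multiply becomes an add or a subtract
    # (and, in hardware, an XNOR plus popcount).
    import numpy as np

    def binarize(w):
        # sign() with zeros mapped to +1, so every weight is exactly +1 or -1
        return np.where(w >= 0, 1.0, -1.0)

    rng = np.random.default_rng(0)
    real_weights = rng.normal(size=(4, 3))   # trained as usual, in full precision
    x = rng.normal(size=(1, 4))              # one input row

    w_bin = binarize(real_weights)
    y = x @ w_bin
    print(w_bin)
    print(y)
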
Four short links: 5 February 2016

Signed Filesystem, Smart Mirror, Deep Learning Tuts, and CLI: Miami

  1. Introducing the Keybase Filesystem — love that crypto is making its way into the filesystem.
  2. DIY Smart Bathroom Mirror — finally, someone is building this science-fiction future! (via BoingBoing)
  3. tensorflow tutorials — for budding deep learners.
  4. clmystery — a command-line murder mystery.
Four short links: 2 February 2016

Fourth Industrial Revolution, Agent System, Evidence-Based Programming, and Deep Learning Service

  1. This is Not the Fourth Industrial Revolution (Slate) — the phrase “the fourth Industrial Revolution” has been around for more than 75 years. It first came into popular use in 1940.
  2. Huginn — MIT-licensed system for building agents that perform automated tasks for you online. They can read the Web, watch for events, and take actions on your behalf. Huginn’s Agents create and consume events, propagating them along a directed graph. Think of it as a hackable Yahoo! Pipes plus IFTTT on your own server.
  3. Evidence-Oriented Programming — design programming language syntax and features based on what research shows works. They tested Perl and Java, and found them apparently not detectably easier to use for novices than a language that my student at the time, Susanna Kiwala (formerly Siebert), created by essentially rolling dice and picking (ridiculous) symbols at random.
  4. Deep Detect — open source deep learning service.
Four short links: 22 January 2016

Open Source Ultrasound, Deep Learning MOOC, Corp Dev Translation, and Immersive at Sundance

  1. Murgen — open source open hardware ultrasound.
  2. Udacity Deep Learning MOOC — platform is Google’s TensorFlow.
  3. CorpDev Translation — “We’ll continue to follow your progress.” Translation: We’ll reach back out when we see you haven’t raised more money and you are probably more desperate because of your shorter runway.
  4. 8i Take Immersive Tech to Sundance — 8i’s technology lets filmmakers capture entire performances with off-the-shelf cameras and then place them in pre-existing environments, creating a fully navigable 3-D VR movie that’s far more immersive than the 360-degree videos most have seen.
Four short links: 8 January 2016

Modern C, Colorizing Photos, Flashing Toy Drones, and Web + Native

  1. How to C in 2016 — straightforward recommendations for writing C if you have to.
  2. Using Deep Learning to Colorize Old Photos — comes with a trained TensorFlow model to play with.
  3. Open Source Firmware for Toy Drones — The Eachine H8 is a typical-looking mini-quadcopter of the kind that sell for under $20. […] takes you through a step-by-step guide to re-flashing the device with a custom firmware to enable acrobatics, or simply to tweak the throttle-to-engine-speed mapping for the quad. (via DIY Drones)
  4. Mobile Web vs. Native Apps or Why You Want Both (Luke Wroblewski) — The Web is for audience reach and native apps are for rich experiences. Both are strategic. Both are valuable. So when it comes to mobile, it’s not Web vs. Native. It’s both. The graphs are impressive.