August 4 · Issue #51
Hiya, and welcome to another fun week in Deep Learning. As always, happy reading, learning, and hacking!
Artificial Intelligence at Salesforce: An Inside Look
A look at Salesforce’s AI strategy and outlook, starting with the rollout of its machine learning platform, “Einstein”. Salesforce predicts that AI’s impact through CRM software alone will add over $1 trillion to GDPs around the globe and create 800,000 new jobs. Naturally, the company sees itself at the forefront of this development.
What Is Ray Kurzweil Up to at Google? Writing Your Emails
If you have been wondering what Ray Kurzweil is up to these days, this article holds the answer: he and his team are behind Gmail’s Smart Reply feature, which offers users three terse replies based on an email’s content. Not quite the singularity yet, but certainly a productive and promising application of AI.
ImageNet Object Localization Challenge | Kaggle
This year, Kaggle hosts all three ImageNet challenges for the first time. Besides the Object Localization Challenge featured here, these are the Object Detection Challenge and the Object Detection from Video Challenge.
4 Counterpoints for Dr. Gary Marcus
[Book] Deep Learning for Computer Vision with Python
Adrian Rosebrock from PyImageSearch.com is gearing up to launch his latest book on deep learning + computer vision in September. We’ve had a chance to review a pre-release draft, and we have to agree: it’s the best online resource for mastering deep learning and computer vision that we have come across. We’ll publish an in-depth review on our blog soon.
Reinforcement Learning for Complex Goals, Using TensorFlow
This extensive article and its accompanying notebook walk you through both the traditional reinforcement learning paradigm in machine learning and a new, emerging paradigm that extends reinforcement learning to complex goals that vary over time.
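To make the traditional side concrete, here is a minimal tabular Q-learning loop. The five-state chain environment is our own toy example, not code from the article’s notebook (which uses TensorFlow throughout):

```python
# Minimal tabular Q-learning on a toy 5-state chain MDP (our own example,
# not the article's notebook): keep moving right to reach the reward.
import numpy as np

n_states, n_actions = 5, 2               # actions: 0 = left, 1 = right
alpha, gamma, epsilon = 0.1, 0.95, 0.1   # learning rate, discount, exploration
Q = np.zeros((n_states, n_actions))

def step(state, action):
    """Reward 1 for stepping right from the last state, which resets the chain."""
    if action == 1 and state == n_states - 1:
        return 0, 1.0
    next_state = min(n_states - 1, state + 1) if action == 1 else max(0, state - 1)
    return next_state, 0.0

state = 0
for _ in range(10000):
    # epsilon-greedy action selection
    if np.random.rand() < epsilon:
        action = np.random.randint(n_actions)
    else:
        action = int(Q[state].argmax())
    next_state, reward = step(state, action)
    # one-step temporal-difference update toward reward + discounted max Q
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(Q)  # action 1 (right) should dominate in every state
```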
Building a Music Recommender with Deep Learning
A fun data science write-up describing how to build a music recommender system by training a CNN on spectrograms of songs from nine different genres.
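As a rough sketch of that pipeline, assuming librosa and Keras are available: audio goes in, a log-mel spectrogram comes out, and a small CNN classifies the genre. The paths, shapes, and architecture below are illustrative placeholders, not the post’s actual code:

```python
# Sketch: song file -> log-mel spectrogram -> small genre CNN.
# Paths, shapes, and layers are illustrative, not the post's code.
import numpy as np
import librosa
from tensorflow.keras import layers, models

def song_to_melspec(path, n_mels=128, frames=256):
    """Load 30s of audio and convert it to a fixed-size log-mel spectrogram."""
    y, sr = librosa.load(path, duration=30.0)
    spec = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels))
    return spec[:, :frames, np.newaxis]   # crop time axis, add channel axis

n_genres = 9
model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(128, 256, 1)),
    layers.MaxPooling2D(2),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.GlobalAveragePooling2D(),
    layers.Dense(n_genres, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# X = np.stack([song_to_melspec(p) for p in song_paths]); model.fit(X, genre_ids)
```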
One Shot Learning with Siamese Networks in PyTorch
A two-part series on understanding Siamese networks and implementing them in PyTorch, walking you through both the architecture essentials and the implementation details.
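The core idea, a single encoder shared by both inputs plus a contrastive loss, fits in a short sketch. The architecture below is our own simplification, assuming 28x28 single-channel inputs; it is not the tutorial’s exact code:

```python
# Minimal Siamese network + contrastive loss in PyTorch (our simplification,
# assuming 28x28 single-channel inputs; not the tutorial's exact code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseNet(nn.Module):
    def __init__(self, embedding_dim=64):
        super().__init__()
        # one shared encoder is applied to both inputs; the weight sharing
        # is what makes the network "Siamese"
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(64 * 5 * 5, embedding_dim),  # 5x5 feature map for 28x28 input
        )

    def forward(self, x1, x2):
        return self.encoder(x1), self.encoder(x2)

def contrastive_loss(z1, z2, same, margin=1.0):
    """Pull matching pairs together, push non-matching pairs beyond the margin."""
    dist = F.pairwise_distance(z1, z2)
    return torch.mean(same * dist.pow(2) + (1 - same) * F.relu(margin - dist).pow(2))

net = SiameseNet()
x1, x2 = torch.randn(8, 1, 28, 28), torch.randn(8, 1, 28, 28)
same = torch.randint(0, 2, (8,)).float()   # 1 = same class, 0 = different
loss = contrastive_loss(*net(x1, x2), same)
loss.backward()
```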
AI and Neuroscience: A Virtuous Circle | DeepMind
This post by DeepMind makes the case for closer collaboration between AI and neuroscience, arguing that neuroscience can, on the one hand, validate existing techniques when they turn out to mimic a function of the brain and, on the other, provide a rich source of inspiration for new algorithms.
Theoretical Neuroscience and Deep Learning Theory
Although it was published in 2016, we thought this talk merits inclusion in this issue, since it expands on the point of the DeepMind article above, giving an introduction to theoretical neuroscience and laying out its mutually beneficial relationship with deep learning. It is a dense and information-packed talk, but it comes highly recommended.
Deep Learning - The Straight Dope
A comprehensive collection of Jupyter notebooks designed to teach deep learning, Apache MXNet, and the Gluon interface.
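For a first taste of the imperative style the notebooks teach, here is a minimal Gluon training step on random data, assuming MXNet is installed; the notebooks themselves go far deeper:

```python
# Minimal Gluon training step on random data (a sketch, assuming
# `pip install mxnet`; the notebooks cover all of this in depth).
from mxnet import nd, autograd, gluon

net = gluon.nn.Sequential()
net.add(gluon.nn.Dense(64, activation="relu"),
        gluon.nn.Dense(10))
net.initialize()

loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
trainer = gluon.Trainer(net.collect_params(), "sgd", {"learning_rate": 0.1})

x, y = nd.random.normal(shape=(32, 784)), nd.zeros((32,))
with autograd.record():        # the graph is recorded imperatively
    loss = loss_fn(net(x), y)
loss.backward()
trainer.step(batch_size=32)
```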
Caffe2 Adds RNN Support
Caffe2 has added impressive RNN capabilities; as a result, several product teams at Facebook, including speech recognition and ads ranking, now use Caffe2 to train RNN models.
TensorFire: Blazing-fast Neural Networks in the Browser
TensorFire is powered by WebGL, which lets it work on any GPU. This demo app shows off TensorFire’s ability to run the style-transfer neural network in your browser as fast as CPU TensorFlow runs it on a desktop. The library has not been released yet, but you can sign up for updates.
RAWGraphs
An incredibly neat visualization tool for creating custom vector-based visualizations on top of d3.js.
Natural Language Processing with Small Feed-Forward Networks
A fascinating paper showing that small and shallow feed-forward neural networks can achieve near state-of-the-art results on a range of unstructured and structured language processing tasks while having considerably smaller memory and computational requirements than deep recurrent models.
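The flavor of the approach, hashing character n-grams into a small embedding table and classifying with a single dense layer, can be sketched in a few lines. The bucket count, dimensions, and use of Python’s built-in hash are our illustrative choices, not the paper’s setup:

```python
# Sketch of the core trick: hash character n-grams into a small embedding
# table, average, and classify with one dense layer. Sizes and the use of
# Python's (per-process salted) hash() are illustrative, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
n_buckets, embed_dim, n_classes = 4096, 16, 5
E = rng.normal(scale=0.1, size=(n_buckets, embed_dim))  # hashed n-gram embeddings
W = rng.normal(scale=0.1, size=(embed_dim, n_classes))  # single dense layer

def featurize(word, n=3):
    """Average the bucket embeddings of the word's character trigrams."""
    padded = "^" + word + "$"
    grams = [padded[i:i + n] for i in range(len(padded) - n + 1)]
    return E[[hash(g) % n_buckets for g in grams]].mean(axis=0)

def predict(word):
    logits = featurize(word) @ W     # no recurrence anywhere
    p = np.exp(logits - logits.max())
    return p / p.sum()

print(predict("running"))            # toy distribution over 5 classes
```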
Self-organized Hierarchical Softmax
The authors propose a new self-organizing hierarchical softmax formulation for neural-network-based language models over large vocabularies. Instead of using a predefined hierarchical structure, their approach learns word clusters with clear syntactic and semantic meaning during the language model training process.
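For context, the standard two-level hierarchical softmax that their self-organizing variant builds on factorizes the word probability as P(w | h) = P(cluster(w) | h) * P(w | cluster(w), h). The sketch below uses a fixed random clustering; learning the clusters during training is precisely what the paper adds:

```python
# Standard two-level hierarchical softmax with a FIXED random clustering;
# the paper's contribution is learning cluster_of during training instead.
import numpy as np

rng = np.random.default_rng(0)
vocab, n_clusters, hidden = 1000, 32, 64
cluster_of = rng.integers(0, n_clusters, size=vocab)   # fixed word -> cluster map
Wc = rng.normal(scale=0.1, size=(hidden, n_clusters))  # cluster-level weights
Ww = rng.normal(scale=0.1, size=(hidden, vocab))       # word-level weights

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def word_prob(h, w):
    """P(w | h) = P(cluster(w) | h) * P(w | cluster(w), h)."""
    c = cluster_of[w]
    p_cluster = softmax(h @ Wc)[c]
    members = np.flatnonzero(cluster_of == c)   # normalize only within the cluster
    p_word = softmax(h @ Ww[:, members])[list(members).index(w)]
    return p_cluster * p_word

h = rng.normal(size=hidden)
print(word_prob(h, 42))   # cost scales with cluster size, not vocabulary size
```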
Learning to Infer Graphics Programs from Hand-Drawn Images
A really fun paper introducing a model that learns to convert simple hand drawings into graphics programs written in a subset of LaTeX. The model combines techniques from deep learning and program synthesis: first, a convolutional neural network proposes plausible drawing primitives that explain an image; this set of primitives is then used like an execution trace for a graphics program.
CharManteau: Character Embedding Models For Portmanteau Creation
Portmanteaus are a word-formation phenomenon in which two words are combined to form a new word. The authors propose character-level neural sequence-to-sequence (S2S) methods for portmanteau generation that are end-to-end trainable, language-independent, and do not explicitly use additional phonetic information.
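A bare-bones character-level encoder-decoder of the kind such S2S methods build on might look like the sketch below; the character inventory, the “;” separator convention, and the lack of attention are our simplifications, not the paper’s actual models:

```python
# Bare-bones character-level seq2seq in PyTorch (no attention; the ';'
# separator and character set are our conventions, not the paper's).
import torch
import torch.nn as nn
import torch.nn.functional as F

chars = "abcdefghijklmnopqrstuvwxyz^$;"   # ^ = start, $ = end, ; = word separator
stoi = {c: i for i, c in enumerate(chars)}
V, H = len(chars), 128

embed = nn.Embedding(V, H)
encoder = nn.GRU(H, H, batch_first=True)
decoder = nn.GRU(H, H, batch_first=True)
out = nn.Linear(H, V)

def encode(s):
    return torch.tensor([[stoi[c] for c in s]])

def forward(src, tgt):
    """Encode the joined source words, then decode with teacher forcing."""
    _, state = encoder(embed(src))            # final state summarizes the input
    dec_out, _ = decoder(embed(tgt), state)
    return out(dec_out)                       # per-step character logits

# e.g. "spoon" + "fork" -> "spork"
logits = forward(encode("spoon;fork"), encode("^spork"))
loss = F.cross_entropy(logits[0], encode("spork$")[0])
loss.backward()
```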
Did you enjoy this issue?
If you don't want these updates anymore, please unsubscribe here
If you were forwarded this newsletter and you like it, you can subscribe here