|
May 31 · Issue #86 |
|
Hey and welcome to another week in deep learning! Happy reading and hacking! If you like receiving this newsletter and would like to support our work, you can do so by sharing this issue with friends and colleagues who might find it interesting. Thanks!
|
|
|
The US military is funding an effort to catch deepfakes and other AI trickery
The Department of Defense has started a contest to generate the most convincing AI-generated fake video, imagery, and audio, while at the same time trying to create tools that can distinguish such counterfeits automatically. The article also explores some existing techniques and gives nice insight into the possibilities they open up.
|
Cambricon, Makers of Huawei's Kirin NPU IP, Build A Big AI Chip and PCIe Card
A new player on the market, Cambricon Technologies, which has previously created AI chips for Huawei smartphones, is entering the datacenter AI chip market.
|
Amazon Teams Up With Government to Deploy Dangerous New Facial Recognition Technology
The ACLU has acquired marketing materials and documents that hint at a quite worrying development: Amazon is actively marketing its facial recognition system ‘Rekognition’ to governments, especially for government surveillance. There is even a list of actual customers, a handful of cities, that currently deploy the technology.
|
A.I. Is Harder Than You Think
A nice opinion piece on the current state of AI and especially its limitations. The article discusses what we initially expected, or still expect, from AI and what’s actually available in the form of applications.
|
Join the fastest growing Deep Learning Developer Community (sponsored)
Deep Learning Studio is a free, open, no-coding platform. Developers, researchers, and students love this platform. Try it out (it’s free).
|
|
Mention DLWEEKLY for $400 USD discount on any Lambda Quad!
|
|
Introducing Machine Learning Practica
After the success of their ‘Machine Learning Crash Course’, Google has decided to create a more advanced course on image classification that includes interactive coding as well. Dive right in if you’re eager to get started with deep learning!
|
Machine Learning En Plein Air: Building accessible tools for artists
An amazing article taking a look at the role of machine learning for artists and how current developments compare to historical parallels. Starting off in the mid-1800s, the article goes all the way to the author’s thesis project, which eases ML development for artists.
|
How To Create Natural Language Semantic Search For Arbitrary Objects With Deep Learning
Hamel Husain gives a very well-crafted end-to-end example of a system that can search objects semantically. The system uses sequence-to-sequence models to allow searching through code; in theory, it can then find semantically similar code blocks.
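Once queries and code snippets share an embedding space (the article gets there via sequence-to-sequence models), the retrieval step reduces to nearest-neighbour search. A minimal sketch, with hypothetical toy vectors standing in for learned embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def semantic_search(query_vec, corpus):
    """Rank (name, embedding) pairs by similarity to the query embedding."""
    return sorted(corpus, key=lambda item: cosine(query_vec, item[1]), reverse=True)

# Toy corpus: in the real system these vectors come from the trained encoder.
corpus = [("read_file", [0.9, 0.1]), ("sort_list", [0.1, 0.9])]
best_match = semantic_search([1.0, 0.0], corpus)[0][0]
```

In practice the corpus is large, so approximate nearest-neighbour indexes replace the full sort, but the ranking principle is the same.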
|
|
Announcing Apache MXNet 1.2.0
The 1.2.0 release of MXNet brings ONNX and MKL-DNN support, mixed precision training and more useful functionality.
|
Minimal PyTorch implementation of YOLOv3
A minimal implementation of YOLOv3 in PyTorch.
|
|
AutoAugment: Learning Augmentation Policies from Data
In this paper, the authors take a closer look at data augmentation for images and describe a simple procedure called AutoAugment to search for improved data augmentation policies. Their key insight is to create a search space of data augmentation policies, evaluating the quality of a particular policy directly on the dataset of interest.
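To make the search space concrete: in AutoAugment, a policy is a collection of sub-policies, each a short sequence of (operation, probability, magnitude) triples. A minimal sketch of sampling from such a space, with hypothetical operation names standing in for the paper's image transforms:

```python
import random

# Hypothetical operation names; the paper uses image transforms such as
# rotation, shearing, and color adjustments.
OPS = ["rotate", "shear_x", "translate_y", "color", "posterize"]

def sample_sub_policy(rng, ops_per_sub_policy=2):
    """One sub-policy: a fixed-length sequence of (op, probability, magnitude)."""
    return [
        (rng.choice(OPS), round(rng.uniform(0.0, 1.0), 1), rng.randrange(10))
        for _ in range(ops_per_sub_policy)
    ]

def sample_policy(rng, n_sub_policies=5):
    """A policy is a set of sub-policies; one is picked at random per image."""
    return [sample_sub_policy(rng) for _ in range(n_sub_policies)]

rng = random.Random(0)
policy = sample_policy(rng)
```

A search algorithm then scores each sampled policy by training a small model on the target dataset with that policy applied, which is the "evaluate directly on the dataset of interest" step described above.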
|
How Does Batch Normalization Help Optimization? (No, It Is Not About Internal Covariate Shift)
Batch Normalization (BatchNorm) is a widely adopted technique that enables faster and more stable training of deep neural networks (DNNs). In this work, the authors demonstrate that the distributional stability of layer inputs has little to do with the success of BatchNorm.
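For context on what the paper is probing, here is a minimal sketch of the BatchNorm forward pass (per-feature normalization only, omitting the learned scale and shift parameters):

```python
import math

def batch_norm(batch, eps=1e-5):
    """Normalize each feature to zero mean and unit variance across the batch.

    `batch` is a list of samples, each a list of feature values.
    """
    n = len(batch)
    dims = len(batch[0])
    means = [sum(x[d] for x in batch) / n for d in range(dims)]
    variances = [sum((x[d] - means[d]) ** 2 for x in batch) / n for d in range(dims)]
    return [
        [(x[d] - means[d]) / math.sqrt(variances[d] + eps) for d in range(dims)]
        for x in batch
    ]

out = batch_norm([[1.0, 10.0], [3.0, 30.0]])
```

The long-standing explanation was that this stabilizes the distribution of layer inputs ("internal covariate shift"); the paper argues instead that BatchNorm's benefit comes from smoothing the optimization landscape.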
|
|
|
|
|
If you don't want these updates anymore, please unsubscribe here
If you were forwarded this newsletter and you like it, you can subscribe here
|
|
|