| Unnamed: 0.1 (int64, 0-41k) | Unnamed: 0 (int64, 0-41k) | author (string, len 9-1.39k) | id (string, len 11-18) | summary (string, len 25-3.66k) | title (string, len 4-258) | year (int64, 1.99k-2.02k) | arxiv_url (string, len 32-39) | info (string, len 523-3.18k) | embeddings (string, len 16.9k-17.1k) |
|---|---|---|---|---|---|---|---|---|---|
| 300 | 300 | ['Zhengli Zhao', 'Dheeru Dua', 'Sameer Singh'] | 1710.11342v2 | Due to their complex nature, it is hard to characterize the ways in which machine learning models can misbehave or be exploited when deployed. Recent work on adversarial examples, i.e. inputs with minor perturbations that result in substantially different model predictions, is helpful in evaluating the robustness of th... | Generating Natural Adversarial Examples | 2017 | http://arxiv.org/pdf/1710.11342v2 | Title Generating Natural Adversarial Examples Summary Due complex nature hard characterize way machine learning model misbehave exploited deployed Recent work adversarial example ie input minor perturbation result substantially different model prediction helpful evaluating robustness model exposing adversarial scenario... | [0.025467297062277794, 0.0345236137509346, -0.02137664146721363, 0.03239176422357559, -0.019342325627803802, -0.021692601963877678, 0.06322627514600754, 0.017848998308181763, -0.00045189441880211234, -0.05543028563261032, 0.0008594866376370192, 0.01735883392393589, 0.0008117759716697037, 0.06891801208257675, 0.07641702... |
| 301 | 301 | ['Xu Sun', 'Xuancheng Ren', 'Shuming Ma', 'Bingzhen Wei', 'Wei Li', 'Houfeng Wang'] | 1711.06528v1 | We propose a simple yet effective technique to simplify the training and the resulting model of neural networks. In back propagation, only a small subset of the full gradient is computed to update the model parameters. The gradient vectors are sparsified in such a way that only the top-$k$ elements (in terms of magnitu... | Training Simplification and Model Simplification for Deep Learning: A Minimal Effort Back Propagation Method | 2017 | http://arxiv.org/pdf/1711.06528v1 | Title Training Simplification Model Simplification Deep Learning Minimal Effort Back Propagation Method Summary propose simple yet effective technique simplify training resulting model neural network back propagation small subset full gradient computed update model parameter gradient vector sparsified way topk element ... | [-0.014363215304911137, 0.04249928519129753, -0.023921480402350426, 0.047291845083236694, 0.023564763367176056, 0.003695698222145438, 0.03487322852015495, 0.036320097744464874, -0.03584415838122368, 0.002887630369514227, 0.0019521421054378152, 0.04438763111829758, 0.042764995247125626, 0.035919446498155594, 0.026562314... |
| 302 | 302 | ['Abhishek Das', 'Samyak Datta', 'Georgia Gkioxari', 'Stefan Lee', 'Devi Parikh', 'Dhruv Batra'] | 1711.11543v2 | We present a new AI task -- Embodied Question Answering (EmbodiedQA) -- where an agent is spawned at a random location in a 3D environment and asked a question ("What color is the car?"). In order to answer, the agent must first intelligently navigate to explore the environment, gather information through first-person ... | Embodied Question Answering | 2017 | http://arxiv.org/pdf/1711.11543v2 | Title Embodied Question Answering Summary present new AI task Embodied Question Answering EmbodiedQA agent spawned random location 3D environment asked question color car order answer agent must first intelligently navigate explore environment gather information firstperson egocentric vision answer question orange chal... | [0.06596985459327698, 0.012691699899733067, -0.019865935668349266, 0.00982577633112669, 0.013813847675919533, -6.300141831161454e-05, -0.020515792071819305, -0.017123965546488762, -0.007951822131872177, -0.048882368952035904, -0.0033281196374446154, 0.04057680442929268, 0.014527885243296623, 0.07339311391115189, 0.0211... |
| 303 | 303 | ['Aishwarya Agrawal', 'Dhruv Batra', 'Devi Parikh', 'Aniruddha Kembhavi'] | 1712.00377v1 | A number of studies have found that today's Visual Question Answering (VQA) models are heavily driven by superficial correlations in the training data and lack sufficient image grounding. To encourage development of models geared towards the latter, we propose a new setting for VQA where for every question type, train ... | Don't Just Assume; Look and Answer: Overcoming Priors for Visual Question Answering | 2017 | http://arxiv.org/pdf/1712.00377v1 | Title Dont Assume Look Answer Overcoming Priors Visual Question Answering Summary number study found today Visual Question Answering VQA model heavily driven superficial correlation training data lack sufficient image grounding encourage development model geared towards latter propose new setting VQA every question typ... | [0.051256436854600906, 0.05397116392850876, -0.019469494000077248, 0.03969557583332062, 0.013161051087081432, 0.014416893012821674, 0.0011420906521379948, 0.030919628217816353, 0.004973620176315308, -0.03810565546154976, -0.04670104756951332, 0.011278714053332806, -0.03848955035209656, 0.042882878333330154, 0.069922007... |
| 304 | 304 | ['Jin-Hwa Kim', 'Devi Parikh', 'Dhruv Batra', 'Byoung-Tak Zhang', 'Yuandong Tian'] | 1712.05558v1 | In this work, we propose a goal-driven collaborative task that contains vision, language, and action in a virtual environment as its core components. Specifically, we develop a collaborative `Image Drawing' game between two agents, called CoDraw. Our game is grounded in a virtual world that contains movable clip art ob... | CoDraw: Visual Dialog for Collaborative Drawing | 2017 | http://arxiv.org/pdf/1712.05558v1 | Title CoDraw Visual Dialog Collaborative Drawing Summary work propose goaldriven collaborative task contains vision language action virtual environment core component Specifically develop collaborative Image Drawing game two agent called CoDraw game grounded virtual world contains movable clip art object Two player Tel... | [0.0213069636374712, -0.03527986630797386, -0.03172720596194267, 0.015142887830734253, -0.04521238058805466, -0.003029627725481987, 0.03395911306142807, 0.03418860584497452, -0.03761177137494087, 0.003921459894627333, -0.058106403797864914, 0.008718117140233517, 0.00938714761286974, 0.07117760926485062, -0.000524366216... |
| 305 | 305 | ['Sang-Woo Lee', 'Yu-Jung Heo', 'Byoung-Tak Zhang'] | 1802.03881v1 | Goal-oriented dialogue has been paid attention for its numerous applications in artificial intelligence. To solve this task, deep learning and reinforcement learning have recently been applied. However, these approaches struggle to find a competent recurrent neural questioner, owing to the complexity of learning a seri... | Answerer in Questioner's Mind for Goal-Oriented Visual Dialogue | 2018 | http://arxiv.org/pdf/1802.03881v1 | Title Answerer Questioners Mind GoalOriented Visual Dialogue Summary Goaloriented dialogue paid attention numerous application artificial intelligence solve task deep learning reinforcement learning recently applied However approach struggle find competent recurrent neural questioner owing complexity learning series se... | [0.08786161243915558, 0.057616252452135086, -0.016296284273266792, 0.029011499136686325, 0.001160371582955122, 0.007916153408586979, 0.020862827077507973, 0.01133309118449688, -0.010481358505785465, -0.013166438788175583, -0.011685785837471485, -0.020735904574394226, -0.014125279150903225, 0.08974589407444, 0.021924840... |
| 306 | 306 | ['Tolga Bolukbasi', 'Kai-Wei Chang', 'Joseph Wang', 'Venkatesh Saligrama'] | 1602.08761v2 | We study the problem of structured prediction under test-time budget constraints. We propose a novel approach applicable to a wide range of structured prediction problems in computer vision and natural language processing. Our approach seeks to adaptively generate computationally costly features during test-time in ord... | Resource Constrained Structured Prediction | 2016 | http://arxiv.org/pdf/1602.08761v2 | Title Resource Constrained Structured Prediction Summary study problem structured prediction testtime budget constraint propose novel approach applicable wide range structured prediction problem computer vision natural language processing approach seek adaptively generate computationally costly feature testtime order r... | [0.031651295721530914, 0.025738857686519623, 0.02919025346636772, 0.060441844165325165, -0.011648801155388355, -0.02231590449810028, -0.01613144390285015, 0.05831962823867798, 0.048590101301670074, -0.07138843089342117, -0.004201329778879881, -0.008253581821918488, 0.018609730526804924, 0.09011174738407135, -0.05046676... |
| 307 | 307 | ['Hongyuan Mei', 'Mohit Bansal', 'Matthew R. Walter'] | 1506.04089v4 | We propose a neural sequence-to-sequence model for direction following, a task that is essential to realizing effective autonomous agents. Our alignment-based encoder-decoder model with long short-term memory recurrent neural networks (LSTM-RNN) translates natural language instructions to action sequences based upon a ... | Listen, Attend, and Walk: Neural Mapping of Navigational Instructions to Action Sequences | 2015 | http://arxiv.org/pdf/1506.04089v4 | Title Listen Attend Walk Neural Mapping Navigational Instructions Action Sequences Summary propose neural sequencetosequence model direction following task essential realizing effective autonomous agent alignmentbased encoderdecoder model long shortterm memory recurrent neural network LSTMRNN translates natural languag... | [0.024290094152092934, -0.0022293042857199907, 0.0014228236395865679, 0.036840006709098816, -0.017758684232831, -0.003951700404286385, -0.01378256268799305, -0.05063077062368393, 0.008958814665675163, -0.051902733743190765, -0.01914503239095211, -0.005800245329737663, 0.028867525979876518, 0.05201384425163269, 0.019872... |
| 308 | 308 | ['Lili Mou', 'Zhengdong Lu', 'Hang Li', 'Zhi Jin'] | 1612.02741v4 | Building neural networks to query a knowledge base (a table) with natural language is an emerging research topic in deep learning. An executor for table querying typically requires multiple steps of execution because queries may have complicated structures. In previous studies, researchers have developed either fully d... | Coupling Distributed and Symbolic Execution for Natural Language Queries | 2016 | http://arxiv.org/pdf/1612.02741v4 | Title Coupling Distributed Symbolic Execution Natural Language Queries Summary Building neural network query knowledge base table natural language emerging research topic deep learning executor table querying typically requires multiple step execution query may complicated structure previous study researcher developed ... | [0.03237861767411232, 0.0621078796684742, -0.007896753028035164, 0.016422171145677567, -0.02499024197459221, 0.01809725910425186, 0.026219798251986504, 0.0015843062428757548, 0.013832502998411655, -0.029324224218726158, -0.015943409875035286, 0.02600032649934292, -0.0275852270424366, 0.05248690024018288, -0.02991908974... |
| 309 | 309 | ['Christian Napoli', 'Giuseppe Pappalardo', 'Emiliano Tramontana'] | 1409.8484v1 | Due to the huge availability of documents in digital form, and the deception possibility raise bound to the essence of digital documents and the way they are spread, the authorship attribution problem has constantly increased its relevance. Nowadays, authorship attribution, for both information retrieval and analysis, h... | An agent-driven semantical identifier using radial basis neural networks and reinforcement learning | 2014 | http://arxiv.org/pdf/1409.8484v1 | Title agentdriven semantical identifier using radial basis neural network reinforcement learning Summary Due huge availability document digital form deception possibility raise bound essence digital document way spread authorship attribution problem constantly increased relevance Nowadays authorship attributionfor info... | [0.08474917709827423, 0.03390363231301308, -0.015425737015902996, 0.03638617321848869, -0.06699015945196152, -0.004816551227122545, 0.07219133526086807, -0.0009300599340349436, -0.037098102271556854, -0.03223426267504692, 0.003375340485945344, 0.04647037014365196, -0.01319155190140009, -0.021497365087270737, -0.0090126... |
| 310 | 310 | ['Karla Stepanova', 'Matej Hoffmann', 'Zdenek Straka', 'Frederico B. Klein', 'Angelo Cangelosi', 'Michal Vavrecka'] | 1706.02490v1 | Humans and animals are constantly exposed to a continuous stream of sensory information from different modalities. At the same time, they form more compressed representations like concepts or symbols. In species that use language, this process is further structured by this interaction, where a mapping between the senso... | Where is my forearm? Clustering of body parts from simultaneous tactile and linguistic input using sequential mapping | 2017 | http://arxiv.org/pdf/1706.02490v1 | Title forearm Clustering body part simultaneous tactile linguistic input using sequential mapping Summary Humans animal constantly exposed continuous stream sensory information different modality time form compressed representation like concept symbol specie use language process structured interaction mapping sensorimo... | [0.0074343751184642315, -0.02270730584859848, -0.017841124907135963, -0.012989522889256477, 0.009440958499908447, 0.009954162873327732, 0.015917371958494186, -0.015028340741991997, -0.016659880056977272, -0.02245429903268814, -0.042172789573669434, -0.03100765496492386, 0.054333820939064026, 0.07643438875675201, -0.003... |
| 311 | 311 | ['Tara N. Sainath', 'Brian Kingsbury', 'Abdel-rahman Mohamed', 'George E. Dahl', 'George Saon', 'Hagen Soltau', 'Tomas Beran', 'Aleksandr Y. Aravkin', 'Bhuvana Ramabhadran'] | 1309.1501v3 | Deep Convolutional Neural Networks (CNNs) are more powerful than Deep Neural Networks (DNN), as they are able to better reduce spectral variation in the input signal. This has also been confirmed experimentally, with CNNs showing improvements in word error rate (WER) between 4-12% relative compared to DNNs across a var... | Improvements to deep convolutional neural networks for LVCSR | 2013 | http://arxiv.org/pdf/1309.1501v3 | Title Improvements deep convolutional neural network LVCSR Summary Deep Convolutional Neural Networks CNNs powerful Deep Neural Networks DNN able better reduce spectral variation input signal also confirmed experimentally CNNs showing improvement word error rate WER 412 relative compared DNNs across variety LVCSR task ... | [0.01443859189748764, 0.02785332500934601, 0.02006520889699459, 0.079326331615448, 0.006320999935269356, -0.029197150841355324, 0.010000168345868587, 0.03634302318096161, -0.06231171265244484, 0.013957126997411251, -0.056182097643613815, -0.019267337396740913, 0.017107225954532623, 0.023459646850824356, 0.0020984876900... |
| 312 | 312 | ['Hao Wang', 'Naiyan Wang', 'Dit-Yan Yeung'] | 1409.2944v2 | Collaborative filtering (CF) is a successful approach commonly used by many recommender systems. Conventional CF-based methods use the ratings given to items by users as the sole source of information for learning to make recommendation. However, the ratings are often very sparse in many applications, causing CF-based ... | Collaborative Deep Learning for Recommender Systems | 2014 | http://arxiv.org/pdf/1409.2944v2 | Title Collaborative Deep Learning Recommender Systems Summary Collaborative filtering CF successful approach commonly used many recommender system Conventional CFbased method use rating given item user sole source information learning make recommendation However rating often sparse many application causing CFbased meth... | [0.016514373943209648, 0.01339777559041977, 0.011778511106967926, 0.006345792673528194, -0.03892507404088974, -0.0008645325433462858, 0.052268750965595245, -0.007281020749360323, -0.016501009464263916, 0.008097927086055279, -0.08761310577392578, -0.004940424580127001, 0.0025139900390058756, 0.07491623610258102, -0.0362... |
| 313 | 313 | ['Leila Arras', 'Franziska Horn', 'Grégoire Montavon', 'Klaus-Robert Müller', 'Wojciech Samek'] | 1606.07298v1 | Layer-wise relevance propagation (LRP) is a recently proposed technique for explaining predictions of complex non-linear classifiers in terms of input variables. In this paper, we apply LRP for the first time to natural language processing (NLP). More precisely, we use it to explain the predictions of a convolutional n... | Explaining Predictions of Non-Linear Classifiers in NLP | 2016 | http://arxiv.org/pdf/1606.07298v1 | Title Explaining Predictions NonLinear Classifiers NLP Summary Layerwise relevance propagation LRP recently proposed technique explaining prediction complex nonlinear classifier term input variable paper apply LRP first time natural language processing NLP precisely use explain prediction convolutional neural network C... | [0.04382606968283653, 0.034422617405653, -0.023115500807762146, 0.04026142880320549, -0.01811833120882511, 0.002976194489747286, 0.009605631232261658, 0.030828626826405525, -0.044057056307792664, -0.05451709404587746, 0.03667408600449562, 0.032001424580812454, -0.00562599953263998, 0.04703965783119202, 0.00246474356390... |
| 314 | 314 | ['Vasily Pestun', 'Yiannis Vlassopoulos'] | 1710.10248v2 | We propose a new statistical model suitable for machine learning of systems with long distance correlations such as natural languages. The model is based on directed acyclic graph decorated by multi-linear tensor maps in the vertices and vector spaces in the edges, called tensor network. Such tensor networks have been ... | Tensor network language model | 2017 | http://arxiv.org/pdf/1710.10248v2 | Title Tensor network language model Summary propose new statistical model suitable machine learning system long distance correlation natural language model based directed acyclic graph decorated multilinear tensor map vertex vector space edge called tensor network tensor network previously employed effective numerical ... | [0.0006524427444674075, -0.01208573393523693, -0.03267817571759224, 0.030661992728710175, -0.04480453208088875, 0.01856502890586853, 0.004919448867440224, 0.013554394245147705, -0.011060710996389389, -0.015567171387374401, -0.008440441451966763, -0.06007058173418045, 0.011063058860599995, 0.016790524125099182, 0.033239... |
| 315 | 315 | ['Vasily Pestun', 'John Terilla', 'Yiannis Vlassopoulos'] | 1711.01416v1 | We propose a statistical model for natural language that begins by considering language as a monoid, then representing it in complex matrices with a compatible translation invariant probability measure. We interpret the probability measure as arising via the Born rule from a translation invariant matrix product state. | Language as a matrix product state | 2017 | http://arxiv.org/pdf/1711.01416v1 | Title Language matrix product state Summary propose statistical model natural language begin considering language monoid representing complex matrix compatible translation invariant probability measure interpret probability measure arising via Born rule translation invariant matrix product state Authors 0 Ahmed Osman W... | [0.01929464563727379, 0.024036217480897903, -0.05126729980111122, 0.029100241139531136, -0.05427022650837898, 0.011777141131460667, 0.03540840372443199, 0.009799971245229244, -0.03212626278400421, -0.055762164294719696, 0.026631193235516548, -0.04721926152706146, 0.041042253375053406, 0.06456483155488968, 0.01606570184... |
| 316 | 316 | ['Tara N. Sainath', 'Lior Horesh', 'Brian Kingsbury', 'Aleksandr Y. Aravkin', 'Bhuvana Ramabhadran'] | 1309.1508v3 | Hessian-free training has become a popular parallel second order optimization technique for Deep Neural Network training. This study aims at speeding up Hessian-free training, both by means of decreasing the amount of data used for training, as well as through reduction of the number of Krylov subspace solver iterati... | Accelerating Hessian-free optimization for deep neural networks by implicit preconditioning and sampling | 2013 | http://arxiv.org/pdf/1309.1508v3 | Title Accelerating Hessianfree optimization deep neural network implicit preconditioning sampling Summary Hessianfree training become popular parallel second order optimization technique Deep Neural Network training study aim speeding Hessianfree training mean decreasing amount data used training well reduction number Kr... | [-0.02776738442480564, 0.06034661456942558, -0.010230598039925098, 0.026364007964730263, 0.034973468631505966, -0.011620214208960533, 0.01766505092382431, 0.00014441912935581058, 0.004503607749938965, -0.017087500542402267, -0.046327415853738785, -0.03596096485853195, -0.004380589351058006, -0.033307358622550964, -0.00... |
| 317 | 317 | ['Roberto Camacho Barranco', 'Laura M. Rodriguez', 'Rebecca Urbina', 'M. Shahriar Hossain'] | 1606.07496v1 | While textual reviews have become prominent in many recommendation-based systems, automated frameworks to provide relevant visual cues against text reviews where pictures are not available is a new form of task confronted by data mining and machine learning researchers. Suggestions of pictures that are relevant to the ... | Is a Picture Worth Ten Thousand Words in a Review Dataset? | 2016 | http://arxiv.org/pdf/1606.07496v1 | Title Picture Worth Ten Thousand Words Review Dataset Summary textual review become prominent many recommendationbased system automated framework provide relevant visual cue text review picture available new form task confronted data mining machine learning researcher Suggestions picture relevant content review could s... | [0.06192536652088165, 0.02066613733768463, -0.0245407372713089, -0.012043209746479988, 0.0009616810129955411, 0.021104011684656143, 0.03494877740740776, 0.03164412081241608, -0.006985929794609547, -0.0476079061627388, -0.026939189061522484, 0.03745461255311966, 0.0016328382771462202, 0.13056330382823944, -0.00723130442... |
| 318 | 318 | ['Matthias Scholz'] | 1204.0684v1 | Linear principal component analysis (PCA) can be extended to a nonlinear PCA by using artificial neural networks. But the benefit of curved components requires a careful control of the model complexity. Moreover, standard techniques for model selection, including cross-validation and more generally the use of an indepe... | Validation of nonlinear PCA | 2012 | http://arxiv.org/pdf/1204.0684v1 | Title Validation nonlinear PCA Summary Linear principal component analysis PCA extended nonlinear PCA using artificial neural network benefit curved component requires careful control model complexity Moreover standard technique model selection including crossvalidation generally use independent test set fail applied n... | [-0.004732702858746052, 0.03236550837755203, -0.03606225922703743, 0.009588680230081081, 0.045808855444192886, 0.0013039211044088006, 0.004947184585034847, -0.015589880757033825, 0.0017258682055398822, 0.04779564589262009, 0.06112824007868767, 0.025742240250110626, 0.054516520351171494, 0.05529404431581497, 0.028260655... |
| 319 | 319 | ['Ethan Fetaya', 'Ohad Shamir', 'Shimon Ullman'] | 1406.2602v1 | We consider the problem of learning from a similarity matrix (such as spectral clustering and low-dimensional embedding), when computing pairwise similarities are costly, and only a limited number of entries can be observed. We provide a theoretical analysis using standard notions of graph approximation, significantly ... | Graph Approximation and Clustering on a Budget | 2014 | http://arxiv.org/pdf/1406.2602v1 | Title Graph Approximation Clustering Budget Summary consider problem learning similarity matrix spectral clustering lowdimensional embedding computing pairwise similarity costly limited number entry observed provide theoretical analysis using standard notion graph approximation significantly generalizing previous resu... | [-0.035050295293331146, -0.040311697870492935, -0.012310556136071682, 0.040419816970825195, -0.0392531156539917, -0.019893303513526917, -0.0028460719622671604, 0.025596272200345993, 0.062237318605184555, -0.00827780831605196, -0.01603635773062706, 0.03985195234417915, 0.016865190118551254, -0.007200275082141161, 0.0191... |
| 320 | 320 | ['Shai Shalev-Shwartz', 'Yonatan Wexler', 'Amnon Shashua'] | 1109.0820v1 | Multiclass prediction is the problem of classifying an object into a relevant target class. We consider the problem of learning a multiclass predictor that uses only few features, and in particular, the number of used features should increase sub-linearly with the number of possible classes. This implies that features ... | ShareBoost: Efficient Multiclass Learning with Feature Sharing | 2011 | http://arxiv.org/pdf/1109.0820v1 | Title ShareBoost Efficient Multiclass Learning Feature Sharing Summary Multiclass prediction problem classifying object relevant target class consider problem learning multiclass predictor us feature particular number used feature increase sublinearly number possible class implies feature shared several class describe ... | [-0.024203041568398476, 0.049770817160606384, -0.012178342789411545, 0.006836077198386192, -0.0008035636856220663, -0.022024810314178467, 0.06082786247134209, 0.016924966126680374, -0.03441485017538071, -0.039819348603487015, 0.012645447626709938, 0.006126794032752514, -0.02052612416446209, 0.07946121692657471, -0.0219... |
| 321 | 321 | ['Nan Lin', 'Junhai Jiang', 'Shicheng Guo', 'Momiao Xiong'] | 1408.0204v1 | Due to advances in sensors, growing large and complex medical image data have the ability to visualize the pathological change in the cellular or even the molecular level or anatomical changes in tissues and organs. As a consequence, the medical images have the potential to enhance diagnosis of disease, prediction of c... | Functional Principal Component Analysis and Randomized Sparse Clustering Algorithm for Medical Image Analysis | 2014 | http://arxiv.org/pdf/1408.0204v1 | Title Functional Principal Component Analysis Randomized Sparse Clustering Algorithm Medical Image Analysis Summary Due advance sensor growing large complex medical image data ability visualize pathological change cellular even molecular level anatomical change tissue organ consequence medical image potential enhance d... | [-0.002297246130183339, -0.005757798440754414, -0.03162936493754387, -0.0022903424687683582, 0.019461993128061295, 0.02461056225001812, 0.03363807499408722, 0.042401023209095, 0.03999677300453186, 0.06328421086072922, 0.014441088773310184, -0.02556086704134941, 0.04983000457286835, 0.03338916599750519, 0.01118294708430... |
| 322 | 322 | ['Liwen Zhang', 'Subhransu Maji', 'Ryota Tomioka'] | 1503.01521v3 | Similarity between objects is multi-faceted and it can be easier for human annotators to measure it when the focus is on a specific aspect. We consider the problem of mapping objects into view-specific embeddings where the distance between them is consistent with the similarity comparisons of the form "from the t-th vi... | Jointly Learning Multiple Measures of Similarities from Triplet Comparisons | 2015 | http://arxiv.org/pdf/1503.01521v3 | Title Jointly Learning Multiple Measures Similarities Triplet Comparisons Summary Similarity object multifaceted easier human annotator measure focus specific aspect consider problem mapping object viewspecific embeddings distance consistent similarity comparison form tth view object similar B C framework jointly learn... | [-0.027166103944182396, 0.041536830365657806, 0.021815259009599686, 0.029312513768672943, 0.005949103739112616, 0.036198556423187256, 0.03107762150466442, 0.0023092380724847317, 0.007253725081682205, -0.0360211506485939, -0.03544493019580841, 0.027011608704924583, -0.001517869415692985, -0.017213789746165276, 0.0248469... |
| 323 | 323 | ['Andreas C. Damianou', 'Michalis K. Titsias', 'Neil D. Lawrence'] | 1409.2287v1 | The Gaussian process latent variable model (GP-LVM) provides a flexible approach for non-linear dimensionality reduction that has been widely applied. However, the current approach for training GP-LVMs is based on maximum likelihood, where the latent projection variables are maximized over rather than integrated out. I... | Variational Inference for Uncertainty on the Inputs of Gaussian Process Models | 2014 | http://arxiv.org/pdf/1409.2287v1 | Title Variational Inference Uncertainty Inputs Gaussian Process Models Summary Gaussian process latent variable model GPLVM provides flexible approach nonlinear dimensionality reduction widely applied However current approach training GPLVMs based maximum likelihood latent projection variable maximized rather integrate... | [-0.01661471277475357, 0.0767054408788681, -0.0016049164114519954, 0.008719809353351593, 0.0033447102177888155, 0.0007342900498770177, 0.0016128488350659609, -0.02853258326649666, -0.07877147942781448, 0.030441954731941223, 0.04057313874363899, -0.006728356704115868, 0.01299707219004631, 0.08199609071016312, 0.02946901... |
| 324 | 324 | ['Mehdi Mirza', 'Simon Osindero'] | 1411.1784v1 | Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this... | Conditional Generative Adversarial Nets | 2014 | http://arxiv.org/pdf/1411.1784v1 | Title Conditional Generative Adversarial Nets Summary Generative Adversarial Nets 8 recently introduced novel way train generative model work introduce conditional version generative adversarial net constructed simply feeding data wish condition generator discriminator show model generate MNIST digit conditioned class ... | [0.02483746036887169, 0.06669881194829941, 0.0025523377116769552, 0.036323923617601395, 0.007651783991605043, -0.009663864970207214, 0.05208682641386986, 0.01105070672929287, 0.017810167744755745, -0.014641670510172844, 0.016967596486210823, 0.0031483510974794626, -0.04884492978453636, 0.042640913277864456, 0.034193415... |
| 325 | 325 | ['Krzysztof Chalupka', 'Pietro Perona', 'Frederick Eberhardt'] | 1412.2309v2 | We provide a rigorous definition of the visual cause of a behavior that is broadly applicable to the visually driven behavior in humans, animals, neurons, robots and other perceiving systems. Our framework generalizes standard accounts of causal learning to settings in which the causal variables need to be constructed ... | Visual Causal Feature Learning | 2014 | http://arxiv.org/pdf/1412.2309v2 | Title Visual Causal Feature Learning Summary provide rigorous definition visual cause behavior broadly applicable visually driven behavior human animal neuron robot perceiving system framework generalizes standard account causal learning setting causal variable need constructed microvariables prove Causal Coarsening Th... | [-0.0047796741127967834, 0.010913015343248844, -0.02355905994772911, -0.037079911679029465, -0.020145250484347343, 0.015469438396394253, 0.020491614937782288, 0.012045727111399174, 0.005402937065809965, -0.005775235127657652, 0.019823789596557617, 0.10863538831472397, -0.0016414948040619493, 0.08719237148761749, 0.0548... |
| 326 | 326 | ['Behnam Neyshabur', 'Ryota Tomioka', 'Nathan Srebro'] | 1412.6614v4 | We present experiments demonstrating that some other form of capacity control, different from network size, plays a central role in learning multilayer feed-forward networks. We argue, partially through analogy to matrix factorization, that this is an inductive bias that can help shed light on deep learning. | In Search of the Real Inductive Bias: On the Role of Implicit Regularization in Deep Learning | 2014 | http://arxiv.org/pdf/1412.6614v4 | Title Search Real Inductive Bias Role Implicit Regularization Deep Learning Summary present experiment demonstrating form capacity control different network size play central role learning multilayer feedforward network argue partially analogy matrix factorization inductive bias help shed light deep learning Authors 0 ... | [-0.032525043934583664, 0.006899933330714703, -0.025447964668273926, 0.012960067950189114, 0.007396212313324213, -0.0015194164589047432, 0.02669672481715679, -0.003807682543992996, -0.027569405734539032, 0.05250503495335579, -0.03671678900718689, -0.030321545898914337, -0.01712300255894661, 0.06209549680352211, 0.06958... |
| 327 | 327 | ['Muhammad Ghifary', 'W. Bastiaan Kleijn', 'Mengjie Zhang', 'David Balduzzi'] | 1508.07680v1 | The problem of domain generalization is to take knowledge acquired from a number of related domains where training data is available, and to then successfully apply it to previously unseen domains. We propose a new feature learning algorithm, Multi-Task Autoencoder (MTAE), that provides good generalization performance ... | Domain Generalization for Object Recognition with Multi-task Autoencoders | 2015 | http://arxiv.org/pdf/1508.07680v1 | Title Domain Generalization Object Recognition Multitask Autoencoders Summary problem domain generalization take knowledge acquired number related domain training data available successfully apply previously unseen domain propose new feature learning algorithm MultiTask Autoencoder MTAE provides good generalization per... | [-0.024374762549996376, 0.03168083727359772, -0.026710782200098038, 0.02303546667098999, 0.020019298419356346, 0.0238779429346323, 0.0904061496257782, -0.003569613676518202, -0.028121083974838257, -0.03811495378613472, -0.05400795489549637, -0.021186141297221184, -0.021273376420140266, 0.07549112290143967, 0.0344295278... |
| 328 | 328 | ['John-Alexander M. Assael', 'Niklas Wahlström', 'Thomas B. Schön', 'Marc Peter Deisenroth'] | 1510.02173v2 | Data-efficient reinforcement learning (RL) in continuous state-action spaces using very high-dimensional observations remains a key challenge in developing fully autonomous systems. We consider a particularly important instance of this challenge, the pixels-to-torques problem, where an RL agent learns a closed-loop con... | Data-Efficient Learning of Feedback Policies from Image Pixels using Deep Dynamical Models | 2015 | http://arxiv.org/pdf/1510.02173v2 | Title DataEfficient Learning Feedback Policies Image Pixels using Deep Dynamical Models Summary Dataefficient reinforcement learning RL continuous stateaction space using highdimensional observation remains key challenge developing fully autonomous system consider particularly important instance challenge pixelstotorqu... | [-0.003288840176537633, 0.01718503050506115, -0.008624612353742123, 0.03099151700735092, -0.009667797945439816, 0.008136320859193802, 0.024760344997048378, -0.004043809603899717, -0.055324871093034744, 0.010250277817249298, 0.00019719271222129464, 0.020453568547964096, -0.03006250038743019, 0.0509597547352314, 0.042944... |
| 329 | 329 | ['Muhammad Ghifary', 'David Balduzzi', 'W. Bastiaan Kleijn', 'Mengjie Zhang'] | 1510.04373v2 | This paper addresses classification tasks on a particular target domain in which labeled training data are only available from source domains different from (but related to) the target. Two closely related frameworks, domain adaptation and domain generalization, are concerned with such tasks, where the only difference ... | Scatter Component Analysis: A Unified Framework for Domain Adaptation and Domain Generalization | 2015 | http://arxiv.org/pdf/1510.04373v2 | Title Scatter Component Analysis Unified Framework Domain Adaptation Domain Generalization Summary paper address classification task particular target domain labeled training data available source domain different related target Two closely related framework domain adaptation domain generalization concerned task differ... | [-0.027125462889671326, -0.02980937995016575, -0.02665247768163681, 0.0053611998446285725, 0.03437906876206398, 0.014813361689448357, 0.08279822766780853, -0.0020676180720329285, -0.057248033583164215, -0.025099707767367363, -0.010397735051810741, -0.017613111063838005, 0.03673088923096657, 0.052566204220056534, -0.000... |
| 330 | 330 | ['Zhao Kang', 'Chong Peng', 'Qiang Cheng'] | 1510.08971v1 | Matrix rank minimization problem is in general NP-hard. The nuclear norm is used to substitute the rank function in many recent studies. Nevertheless, the nuclear norm approximation adds all singular values together and the approximation error may depend heavily on the magnitudes of singular values. This might restrict... | Robust Subspace Clustering via Tighter Rank Approximation | 2015 | http://arxiv.org/pdf/1510.08971v1 | Title Robust Subspace Clustering via Tighter Rank Approximation Summary Matrix rank minimization problem general NPhard nuclear norm used substitute rank function many recent study Nevertheless nuclear norm approximation add singular value together approximation error may depend heavily magnitude singular value might r... | [-0.0073812054470181465, 0.02380376122891903, -0.031976472586393356, 0.05247168987989426, 0.034567274153232574, 0.01644269749522209, 0.01957681030035019, 0.06488088518381119, 0.04088689759373665, 0.034004077315330505, 0.011404366232454777, 0.0055304719135165215, 0.017084890976548195, 0.007732492871582508, 0.02722683362... |
| 331 | 331 | ['Amogh Gudi'] | 1512.00743v2 | The human face constantly conveys information, both consciously and subconsciously. However, as basic as it is for humans to visually interpret this information, it is quite a big challenge for machines. Conventional semantic facial feature recognition and analysis techniques are already in use and are based on physiol... | Recognizing Semantic Features in Faces using Deep Learning | 2015 | http://arxiv.org/pdf/1512.00743v2 | Title Recognizing Semantic Features Faces using Deep Learning Summary human face constantly conveys information consciously subconsciously However basic human visually interpret information quite big challenge machine Conventional semantic facial feature recognition analysis technique already use based physiological he... | [0.01549519132822752, 0.006907382048666477, 0.010920019820332527, 0.05677114427089691, 0.016417087987065315, 0.05331089347600937, 0.014811868779361248, -0.0002497587993275374, 0.0006581523921340704, -0.011993249878287315, -0.017399411648511887, 0.012742520309984684, 0.016883907839655876, 0.0881194919347763, 0.008204822... |
| 332 | 332 | ['Muhammad Ghifary', 'W. Bastiaan Kleijn', 'Mengjie Zhang', 'David Balduzzi', 'Wen Li'] | 1607.03516v2 | In this paper, we propose a novel unsupervised domain adaptation algorithm based on deep learning for visual object recognition. Specifically, we design a new model called Deep Reconstruction-Classification Network (DRCN), which jointly learns a shared encoding representation for two tasks: i) supervised classification... | Deep Reconstruction-Classification Networks for Unsupervised Domain Adaptation | 2016 | http://arxiv.org/pdf/1607.03516v2 | Title Deep ReconstructionClassification Networks Unsupervised Domain Adaptation Summary paper propose novel unsupervised domain adaptation algorithm based deep learning visual object recognition Specifically design new model called Deep ReconstructionClassification Network DRCN jointly learns shared encoding representa... | [-0.0274956151843071, 0.05599018186330795, -0.002992043038830161, 0.05921749770641327, 0.014201276004314423, 0.01939086616039276, 0.03933119401335716, -0.010237772017717361, -0.04541082680225372, -0.03239825367927551, -0.045764897018671036, 0.01125722099095583, -0.00964750349521637, 0.05330250412225723, -0.007811758201... |
| 333 | 333 | ['Po-Hsuan Chen', 'Xia Zhu', 'Hejia Zhang', 'Javier S. Turek', 'Janice Chen', 'Theodore L. Willke', 'Uri Hasson', 'Peter J. Ramadge'] | 1608.04846v1 | Finding the most effective way to aggregate multi-subject fMRI data is a long-standing and challenging problem. It is of increasing interest in contemporary fMRI studies of human cognition due to the scarcity of data per subject and the variability of brain anatomy and functional response across subjects. Recent work o... | A Convolutional Autoencoder for Multi-Subject fMRI Data Aggregation | 2016 | http://arxiv.org/pdf/1608.04846v1 | Title Convolutional Autoencoder MultiSubject fMRI Data Aggregation Summary Finding effective way aggregate multisubject fMRI data longstanding challenging problem increasing interest contemporary fMRI study human cognition due scarcity data per subject variability brain anatomy functional response across subject Recent... | [-0.02918638475239277, 0.11094777286052704, -0.022762315347790718, 0.03536400571465492, 0.01663896255195141, 0.04999234899878502, 0.10483603924512863, 0.008796414360404015, -0.045401688665151596, 0.0467711016535759, -0.05497492849826813, -0.048663605004549026, 0.03624100238084793, 0.10474361479282379, 0.058560971170663... |
| 334 | 334 | ['Yun Wang', 'Xu Chen', 'Peter J. Ramadge'] | 1608.06010v2 | One way to solve lasso problems when the dictionary does not fit into available memory is to first screen the dictionary to remove unneeded features. Prior research has shown that sequential screening methods offer the greatest promise in this endeavor. Most existing work on sequential screening targets the context of ... | Feedback-Controlled Sequential Lasso Screening | 2016 | http://arxiv.org/pdf/1608.06010v2 | Title FeedbackControlled Sequential Lasso Screening Summary One way solve lasso problem dictionary fit available memory first screen dictionary remove unneeded feature Prior research shown sequential screening method offer greatest promise endeavor existing work sequential screening target context tuning parameter sele... | [0.0072308434173464775, 0.06638697534799576, -0.012301868759095669, -0.020716991275548935, 0.03417609632015228, 0.009785800240933895, 0.038112539798021317, 0.04079483821988106, 0.04703814163804054, -0.024393374100327492, 0.016419949010014534, 0.02853362262248993, 0.0038744346238672733, 0.046383678913116455, 0.023715449... |
| 335 | 335 | ['Yun Wang', 'Peter J. Ramadge'] | 1608.06014v2 | Recently dictionary screening has been proposed as an effective way to improve the computational efficiency of solving the lasso problem, which is one of the most commonly used method for learning sparse representations. To address today's ever increasing large dataset, effective screening relies on a tight region boun... | The Symmetry of a Simple Optimization Problem in Lasso Screening | 2016 | http://arxiv.org/pdf/1608.06014v2 | Title Symmetry Simple Optimization Problem Lasso Screening Summary Recently dictionary screening proposed effective way improve computational efficiency solving lasso problem one commonly used method learning sparse representation address today ever increasing large dataset effective screening relies tight region bound... | [-0.0346694178879261, 0.02484263852238655, -0.003674357198178768, 0.031780585646629333, 0.016243983060121536, 0.013051044195890427, 0.014753670431673527, 0.010811420157551765, 0.03335801139473915, -0.024480551481246948, 0.01261131465435028, 0.0030675269663333893, 0.0026696978602558374, 0.03679244965314865, 0.0142356306... |
| 336 | 336 | ['Maxime Bucher', 'Stéphane Herbin', 'Frédéric Jurie'] | 1608.07441v1 | Zero-Shot learning has been shown to be an efficient strategy for domain adaptation. In this context, this paper builds on the recent work of Bucher et al. [1], which proposed an approach to solve Zero-Shot classification problems (ZSC) by introducing a novel metric learning based objective function. This objective fun... | Hard Negative Mining for Metric Learning Based Zero-Shot Classification | 2016 | http://arxiv.org/pdf/1608.07441v1 | Title Hard Negative Mining Metric Learning Based ZeroShot Classification Summary ZeroShot learning shown efficient strategy domain adaptation context paper build recent work Bucher et al 1 proposed approach solve ZeroShot classification problem ZSC introducing novel metric learning based objective function objective fu... | [-0.06956709176301956, -0.04282480478286743, -0.005442819558084011, 0.011128395795822144, -0.0002812288876157254, 0.0005613146349787712, 0.023711450397968292, 0.03313487023115158, -0.02153163030743599, -0.03498203679919243, -0.02448679693043232, 8.437623182544485e-05, -0.009236202575266361, 0.014643154107034206, 0.0095... |
| 337 | 337 | ['Xiang Xiang', 'Trac D. Tran'] | 1609.07042v4 | In this paper, we deal with two challenges for measuring the similarity of the subject identities in practical video-based face recognition - the variation of the head pose in uncontrolled environments and the computational expense of processing videos. Since the frame-wise feature mean is unable to characterize the po... | Pose-Selective Max Pooling for Measuring Similarity | 2016 | http://arxiv.org/pdf/1609.07042v4 | Title PoseSelective Max Pooling Measuring Similarity Summary paper deal two challenge measuring similarity subject identity practical videobased face recognition variation head pose uncontrolled environment computational expense processing video Since framewise feature mean unable characterize pose diversity among fram... | [-0.03755255788564682, 0.02456127665936947, 0.0022203100379556417, 0.05058281868696213, 0.0008902945555746555, 0.04240700602531433, 0.05976670980453491, 0.005668295081704855, -0.009141220711171627, -0.00640525110065937, 0.02521650679409504, -0.029562024399638176, 0.028338605538010597, 0.01709176041185856, 0.02508774399... |
| 338 | 338 | ['Shehroz S. Khan', 'Babak Taati'] | 1610.03761v3 | A fall is an abnormal activity that occurs rarely, so it is hard to collect real data for falls. It is, therefore, difficult to use supervised learning methods to automatically detect falls. Another challenge in using machine learning methods to automatically detect falls is the choice of engineered features. In this p... | Detecting Unseen Falls from Wearable Devices using Channel-wise Ensemble of Autoencoders | 2016 | http://arxiv.org/pdf/1610.03761v3 | Title Detecting Unseen Falls Wearable Devices using Channelwise Ensemble Autoencoders Summary fall abnormal activity occurs rarely hard collect real data fall therefore difficult use supervised learning method automatically detect fall Another challenge using machine learning method automatically detect fall choice eng... | [-0.07382499426603317, -0.013315945863723755, -0.04300755262374878, 0.025670094415545464, 0.055513981729745865, -0.007175646722316742, 0.024653678759932518, -0.001905350130982697, -0.00956262368708849, -0.030558494850993156, 0.08103613555431366, 0.03480686992406845, 0.03531024605035782, 0.07011061161756516, 0.038253977... |
| 339 | 339 | ['Jure Sokolic', 'Raja Giryes', 'Guillermo Sapiro', 'Miguel R. D. Rodrigues'] | 1610.04574v3 | This paper studies the generalization error of invariant classifiers. In particular, we consider the common scenario where the classification task is invariant to certain transformations of the input, and that the classifier is constructed (or learned) to be invariant to these transformations. Our approach relies on fa... | Generalization Error of Invariant Classifiers | 2016 | http://arxiv.org/pdf/1610.04574v3 | Title Generalization Error Invariant Classifiers Summary paper study generalization error invariant classifier particular consider common scenario classification task invariant certain transformation input classifier constructed learned invariant transformation approach relies factoring input space product base space s... | [-0.0070000262930989265, 0.025901639834046364, -0.0019966927357017994, 0.009904432110488415, -0.006227373145520687, 0.01137002557516098, 0.029324378818273544, 0.00957962404936552, -0.05482788756489754, -0.015758013352751732, 0.04085661843419075, 0.01983199454843998, -0.005337031092494726, 0.01314003299921751, 0.0235902... |
| 340 | 340 | ['Seyed-Mohsen Moosavi-Dezfooli', 'Alhussein Fawzi', 'Omar Fawzi', 'Pascal Frossard'] | 1610.08401v3 | Given a state-of-the-art deep neural network classifier, we show the existence of a universal (image-agnostic) and very small perturbation vector that causes natural images to be misclassified with high probability. We propose a systematic algorithm for computing universal perturbations, and show that state-of-the-art ... | Universal adversarial perturbations | 2016 | http://arxiv.org/pdf/1610.08401v3 | Title Universal adversarial perturbation Summary Given stateoftheart deep neural network classifier show existence universal imageagnostic small perturbation vector cause natural image misclassified high probability propose systematic algorithm computing universal perturbation show stateoftheart deep neural network hig... | [-0.02164355292916298, 0.03978942707180977, -0.02774822898209095, 0.014536814764142036, -0.02070380188524723, -0.01888200268149376, 0.012238597497344017, -0.008295377716422081, -0.01625387743115425, 0.0015916165430098772, 0.005666621029376984, -0.0056359716691076756, -0.004936543758958578, 0.023915022611618042, 0.07806... |
| 341 | 341 | ['Xiang Xiang', 'Trac D. Tran'] | 1701.03102v1 | Limited annotated data available for the recognition of facial expression and action units embarrasses the training of deep networks, which can learn disentangled invariant features. However, a linear model with just several parameters normally is not demanding in terms of training data. In this paper, we propose an el... | Linear Disentangled Representation Learning for Facial Actions | 2017 | http://arxiv.org/pdf/1701.03102v1 | Title Linear Disentangled Representation Learning Facial Actions Summary Limited annotated data available recognition facial expression action unit embarrasses training deep network learn disentangled invariant feature However linear model several parameter normally demanding term training data paper propose elegant li... | [-0.04392202943563461, 0.0781177431344986, -0.0037950545083731413, 0.05190938338637352, 0.018013276159763336, 0.07468525320291519, 0.01124053169041872, 0.013821710832417011, 0.018731869757175446, -0.04951053857803345, -0.008052118122577667, 0.002034249948337674, -0.013612786307930946, 0.10846571624279022, 0.03620795533... |
| 342 | 342 | ['Jan Hendrik Metzen', 'Tim Genewein', 'Volker Fischer', 'Bastian Bischoff'] | 1702.04267v2 | Machine learning and deep learning in particular has advanced tremendously on perceptual tasks in recent years. However, it remains vulnerable against adversarial perturbations of the input that have been crafted specifically to fool the system while being quasi-imperceptible to a human. In this work, we propose to aug... | On Detecting Adversarial Perturbations | 2017 | http://arxiv.org/pdf/1702.04267v2 | Title Detecting Adversarial Perturbations Summary Machine learning deep learning particular advanced tremendously perceptual task recent year However remains vulnerable adversarial perturbation input crafted specifically fool system quasiimperceptible human work propose augment deep neural network small detector subnet... | [0.009992479346692562, 0.06535062938928604, -0.02379775047302246, 0.05759544298052788, 0.0002230452955700457, -0.01433858834207058, 0.02848367765545845, -0.007302304729819298, -0.00942637212574482, -0.029911896213889122, -0.009395372122526169, 0.036857280880212784, 0.0065127466805279255, 0.02342786267399788, 0.05856072... |
| 343 | 343 | ['Zhiming Zhou', 'Han Cai', 'Shu Rong', 'Yuxuan Song', 'Kan Ren', 'Weinan Zhang', 'Yong Yu', 'Jun Wang'] | 1703.02000v7 | Class labels have been empirically shown useful in improving the sample quality of generative adversarial nets (GANs). In this paper, we mathematically study the properties of the current variants of GANs that make use of class label information. With class aware gradient and cross-entropy decomposition, we reveal how ... | Activation Maximization Generative Adversarial Nets | 2017 | http://arxiv.org/pdf/1703.02000v7 | Title Activation Maximization Generative Adversarial Nets Summary Class label empirically shown useful improving sample quality generative adversarial net GANs paper mathematically study property current variant GANs make use class label information class aware gradient crossentropy decomposition reveal class label ass... | [-0.007207329850643873, 0.09048110246658325, 0.004069111775606871, 0.015610373578965664, 0.020884061232209206, -0.01440146192908287, 0.0439172200858593, 0.01177096925675869, -0.024044563993811607, 0.011028407141566277, -0.03259025141596794, 0.012897496111690998, -0.014729304239153862, 0.02782369963824749, 0.04626112803... |
| 344 | 344 | ['Ruth Fong', 'Andrea Vedaldi'] | 1704.03296v3 | As machine learning algorithms are increasingly applied to high impact yet high risk tasks, such as medical diagnosis or autonomous driving, it is critical that researchers can explain how such algorithms arrived at their predictions. In recent years, a number of image saliency methods have been developed to summarize ... | Interpretable Explanations of Black Boxes by Meaningful Perturbation | 2017 | http://arxiv.org/pdf/1704.03296v3 | Title Interpretable Explanations Black Boxes Meaningful Perturbation Summary machine learning algorithm increasingly applied high impact yet high risk task medical diagnosis autonomous driving critical researcher explain algorithm arrived prediction recent year number image saliency method developed summarize highly co... | [-0.0006938378792256117, -0.02330688014626503, -0.016262996941804886, 0.026938078925013542, -0.0102374954149127, 0.023205429315567017, 0.020958315581083298, 0.042662255465984344, -0.03826913610100746, -0.008606581017374992, 0.07605795562267303, 0.035809777677059174, 0.004910981748253107, 0.0569586418569088, 0.021658621... |
| 345 | 345 | ['Hong Zhao'] | 1704.06885v1 | Though the deep learning is pushing the machine learning to a new stage, basic theories of machine learning are still limited. The principle of learning, the role of the a prior knowledge, the role of neuron bias, and the basis for choosing neural transfer function and cost function, etc., are still far from clear. In ... | A General Theory for Training Learning Machine | 2017 | http://arxiv.org/pdf/1704.06885v1 | Title General Theory Training Learning Machine Summary Though deep learning pushing machine learning new stage basic theory machine learning still limited principle learning role prior knowledge role neuron bias basis choosing neural transfer function cost function etc still far clear paper present general theoretical ... | [0.012812108732759953, -0.01811564527451992, -0.033388908952474594, 0.05551278963685036, 0.019576644524931908, -0.003350692568346858, 0.0046074045822024345, 0.00016276097449008375, -0.016794392839074135, 0.00841854140162468, 0.010855269618332386, 0.001528909895569086, 0.022682560607790947, 0.04476592689752579, 0.040381... |
| 346 | 346 | ['Yotam Hechtlinger', 'Purvasha Chakravarti', 'Jining Qin'] | 1704.08165v1 | This paper introduces a generalization of Convolutional Neural Networks (CNNs) from low-dimensional grid data, such as images, to graph-structured data. We propose a novel spatial convolution utilizing a random walk to uncover the relations within the input, analogous to the way the standard convolution uses the spatia... | A Generalization of Convolutional Neural Networks to Graph-Structured Data | 2017 | http://arxiv.org/pdf/1704.08165v1 | Title Generalization Convolutional Neural Networks GraphStructured Data Summary paper introduces generalization Convolutional Neural Networks CNNs lowdimensional grid data image graphstructured data propose novel spatial convolution utilizing random walk uncover relation within input analogous way standard convolution ... | [0.03470451012253761, 0.039715345948934555, -0.007747909985482693, 0.010968849994242191, -0.008142500184476376, -0.01994643546640873, 0.025686796754598618, 0.02761830948293209, 0.047427188605070114, 0.03826754167675972, 0.04872293770313263, 0.03953459486365318, 0.0011169301578775048, 0.05940108373761177, 0.054836910218... |
| 347 | 347 | ['Matthias Hein', 'Maksym Andriushchenko'] | 1705.08475v2 | Recent work has shown that state-of-the-art classifiers are quite brittle, in the sense that a small adversarial change of an originally with high confidence correctly classified input leads to a wrong classification again with high confidence. This raises concerns that such classifiers are vulnerable to attacks and ca... | Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation | 2017 | http://arxiv.org/pdf/1705.08475v2 | Title Formal Guarantees Robustness Classifier Adversarial Manipulation Summary Recent work shown stateoftheart classifier quite brittle sense small adversarial change originally high confidence correctly classified input lead wrong classification high confidence raise concern classifier vulnerable attack call question ... | [0.006188260857015848, 0.04354482144117355, -0.013780206441879272, 0.004464882891625166, 0.01014008466154337, -0.04665730521082878, 0.0035088288132101297, -0.0023654901888221502, -0.0012604391667991877, -0.027245502918958664, 0.05376371368765831, 0.04295532405376434, 0.007749683689326048, 0.053151994943618774, 0.067649... |
348 | 348 | ['Alhussein Fawzi', 'Seyed-Mohsen Moosavi-Dezfooli', 'Pascal Frossard', 'Stefano Soatto'] | 1705.09552v1 | The goal of this paper is to analyze the geometric properties of deep neural
network classifiers in the input space. We specifically study the topology of
classification regions created by deep networks, as well as their associated
decision boundary. Through a systematic empirical investigation, we show that
state-of-t... | Classification regions of deep neural networks | 2,017 | http://arxiv.org/pdf/1705.09552v1 | Title Classification region deep neural network Summary goal paper analyze geometric property deep neural network classifier input space specifically study topology classification region created deep network well associated decision boundary systematic empirical investigation show stateoftheart deep net learn connected... | [-0.020803967490792274, 0.011388389393687248, -0.06109420582652092, 0.05848638340830803, -0.03571015223860741, -0.02050507441163063, 0.02685772068798542, -0.03100939467549324, -0.012953218072652817, 0.0019034185679629445, 0.002322569489479065, 0.04575680196285248, 0.010220460593700409, 0.041609954088926315, 0.043501451... |
349 | 349 | ['Seyed-Mohsen Moosavi-Dezfooli', 'Alhussein Fawzi', 'Omar Fawzi', 'Pascal Frossard', 'Stefano Soatto'] | 1705.09554v1 | Deep networks have recently been shown to be vulnerable to universal
perturbations: there exist very small image-agnostic perturbations that cause
most natural images to be misclassified by such classifiers. In this paper, we
propose the first quantitative analysis of the robustness of classifiers to
universal perturba... | Analysis of universal adversarial perturbations | 2,017 | http://arxiv.org/pdf/1705.09554v1 | Title Analysis universal adversarial perturbation Summary Deep network recently shown vulnerable universal perturbation exist small imageagnostic perturbation cause natural image misclassified classifier paper propose first quantitative analysis robustness classifier universal perturbation draw formal link robustness u... | [-0.006989311892539263, 0.007897590287029743, -0.027320239692926407, 0.018769506365060806, -0.02717568166553974, -0.01606050319969654, 0.010597921907901764, -0.03947088122367859, -0.009403351694345474, -0.005333918612450361, 0.013914406299591064, 0.013099141418933868, -0.0457087904214859, 0.029455699026584625, 0.065333... |
350 | 350 | ['Yunus Saatchi', 'Andrew Gordon Wilson'] | 1705.09558v3 | Generative adversarial networks (GANs) can implicitly learn rich
distributions over images, audio, and data which are hard to model with an
explicit likelihood. We present a practical Bayesian formulation for
unsupervised and semi-supervised learning with GANs. Within this framework, we
use stochastic gradient Hamilton... | Bayesian GAN | 2,017 | http://arxiv.org/pdf/1705.09558v3 | Title Bayesian GAN Summary Generative adversarial network GANs implicitly learn rich distribution image audio data hard model explicit likelihood present practical Bayesian formulation unsupervised semisupervised learning GANs Within framework use stochastic gradient Hamiltonian Monte Carlo marginalize weight generator... | [-0.02308654971420765, 0.08587949723005295, -0.019761765375733376, 0.004068419802933931, 0.02050447277724743, -0.004102040082216263, 0.04403464123606682, -0.020683765411376953, -0.06667576730251312, 0.03175995126366615, -0.07144792377948761, 0.027946999296545982, -0.0347418487071991, 0.02525789849460125, 0.063532970845... |
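Record 350's Bayesian GAN marginalizes over generator weights with stochastic gradient Hamiltonian Monte Carlo. Stripped of the GAN itself, the sampler's update is short; the sketch below uses a toy Gaussian log-posterior in place of the intractable posterior over weights, and the step size and friction are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_log_post(theta):
    # Toy log-posterior gradient: a standard Gaussian stands in for the
    # (intractable) posterior over generator weights.
    return -theta

def sghmc(theta, n_steps=5000, eps=0.01, friction=0.1):
    """Stochastic gradient HMC: momentum plus friction plus injected noise."""
    v = np.zeros_like(theta)
    samples = []
    for _ in range(n_steps):
        noise = rng.normal(scale=np.sqrt(2 * friction * eps), size=theta.shape)
        v = (1 - friction) * v + eps * grad_log_post(theta) + noise
        theta = theta + v
        samples.append(theta.copy())
    return np.array(samples)

samples = sghmc(np.array([3.0, -3.0]))
print(samples.mean(axis=0), samples.std(axis=0))  # roughly (0, 0) and (1, 1)
```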
351 | 351 | ['Emily Denton', 'Vighnesh Birodkar'] | 1705.10915v1 | We present a new model DrNET that learns disentangled image representations
from video. Our approach leverages the temporal coherence of video and a novel
adversarial loss to learn a representation that factorizes each frame into a
stationary part and a temporally varying component. The disentangled
representation can ... | Unsupervised Learning of Disentangled Representations from Video | 2,017 | http://arxiv.org/pdf/1705.10915v1 | Title Unsupervised Learning Disentangled Representations Video Summary present new model DrNET learns disentangled image representation video approach leverage temporal coherence video novel adversarial loss learn representation factorizes frame stationary part temporally varying component disentangled representation u... | [-0.04752199724316597, 0.05794048309326172, 0.012145565822720528, 0.058078933507204056, -0.018604572862386703, -0.01233634352684021, 0.012698106467723846, -0.006449037231504917, -0.018665088340640068, 0.003239475656300783, -0.006050102412700653, 0.016986601054668427, -0.006373200099915266, 0.07627525180578232, 0.009441... |
352 | 352 | ['Yujia Li', 'Alexander Schwing', 'Kuan-Chieh Wang', 'Richard Zemel'] | 1706.06216v1 | Generative adversarial nets (GANs) are a promising technique for modeling a
distribution from samples. It is however well known that GAN training suffers
from instability due to the nature of its maximin formulation. In this paper,
we explore ways to tackle the instability problem by dualizing the
discriminator. We sta... | Dualing GANs | 2,017 | http://arxiv.org/pdf/1706.06216v1 | Title Dualing GANs Summary Generative adversarial net GANs promising technique modeling distribution sample however well known GAN training suffers instability due nature maximin formulation paper explore way tackle instability problem dualizing discriminator start linear discriminator case conjugate duality provides m... | [-0.0033893149811774492, 0.05406666174530983, -0.03474172204732895, 0.028156612068414688, 0.03303755074739456, -0.0018474722746759653, -0.01774638518691063, 0.0013656294904649258, -0.035452473908662796, 0.021841276437044144, -0.02383692003786564, -0.0471636988222599, -0.047075945883989334, -0.011637123301625252, 0.0765... |
353 | 353 | ['Eunhee Kang', 'Jaejun Yoo', 'Jong Chul Ye'] | 1707.09938v2 | Model based iterative reconstruction (MBIR) algorithms for low-dose X-ray CT
are computationally expensive. To address this problem, we recently proposed
the world's first deep convolutional neural network (CNN) for low-dose X-ray CT
and won second place in the 2016 AAPM Low-Dose CT Grand Challenge. However,
some of the ... | Wavelet Residual Network for Low-Dose CT via Deep Convolutional
Framelets | 2,017 | http://arxiv.org/pdf/1707.09938v2 | Title Wavelet Residual Network LowDose CT via Deep Convolutional Framelets Summary Model based iterative reconstruction MBIR algorithm lowdose Xray CT computationally expensive address problem recently proposed worldfirst deep convolutional neural network CNN lowdose Xray CT second place 2016 AAPM LowDose CT Grand Chal... | [-0.005884728394448757, 0.08598175644874573, 0.0025116964243352413, 0.047985225915908813, 0.011937020346522331, 0.01774267852306366, 0.018186010420322418, 0.025364933535456657, -0.014553194865584373, 0.0772213265299797, 0.009773721918463707, 0.03725576400756836, 0.0014747347449883819, -0.002036175923421979, 0.022917762... |
354 | 354 | ['Chuhang Zou', 'Ersin Yumer', 'Jimei Yang', 'Duygu Ceylan', 'Derek Hoiem'] | 1708.01648v1 | The success of various applications including robotics, digital content
creation, and visualization demand a structured and abstract representation of
the 3D world from limited sensor data. Inspired by the nature of human
perception of 3D shapes as a collection of simple parts, we explore such an
abstract shape represe... | 3D-PRNN: Generating Shape Primitives with Recurrent Neural Networks | 2,017 | http://arxiv.org/pdf/1708.01648v1 | Title 3DPRNN Generating Shape Primitives Recurrent Neural Networks Summary success various application including robotics digital content creation visualization demand structured abstract representation 3D world limited sensor data Inspired nature human perception 3D shape collection simple part explore abstract shape ... | [-0.026282045990228653, 0.011064124293625355, -0.002964281477034092, 0.03498736023902893, -0.014073737896978855, -0.0253475159406662, -0.004067199770361185, -0.04992694780230522, -0.08539851009845734, 0.02635801210999489, 0.0007618836243636906, 0.0014090482145547867, 0.011729761026799679, 0.07452578097581863, 0.0408184... |
355 | 355 | ['Zhiming Zhou', 'Weinan Zhang', 'Jun Wang'] | 1708.01729v2 | In this article, we mathematically study several GAN-related topics,
including Inception score, label smoothing, gradient vanishing and the
-log(D(x)) alternative.
--- An advanced version is included in arXiv:1703.02000 "Activation
Maximization Generative Adversarial Nets". Please refer to Section 6 in 1703.02000
for de... | Inception Score, Label Smoothing, Gradient Vanishing and -log(D(x))
Alternative | 2,017 | http://arxiv.org/pdf/1708.01729v2 | Title Inception Score Label Smoothing Gradient Vanishing logDx Alternative Summary article mathematically study several GAN related topic including Inception score label smoothing gradient vanishing logDx alternative advanced version included arXiv170302000 Activation Maximization Generative Adversarial Nets Please ref... | [-0.007836745120584965, 0.05415726453065872, 0.00969302374869585, 0.05750613287091255, 0.0044973078183829784, -0.02896241657435894, 0.021426258608698845, 0.009973583742976189, -0.044108401983976364, 0.037905432283878326, 0.0024502864107489586, -0.016263682395219803, -0.01615750789642334, 0.02052057534456253, 0.04730376... |
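The Inception score analysed in record 355 is simple to state: the exponential of the average KL divergence between the conditional label distribution p(y|x) and the marginal p(y). A minimal NumPy version, taking a matrix of per-sample class probabilities as input (the random probabilities below are only a stand-in for real classifier outputs):

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """probs: (n_samples, n_classes) rows of p(y|x) from a classifier.
    IS = exp( E_x[ KL(p(y|x) || p(y)) ] )."""
    p_y = probs.mean(axis=0)                      # marginal label distribution
    kl = (probs * (np.log(probs + eps) - np.log(p_y + eps))).sum(axis=1)
    return float(np.exp(kl.mean()))

rng = np.random.default_rng(0)
confident = np.eye(10)[rng.integers(0, 10, 1000)]   # sharp and diverse predictions
uniform = np.full((1000, 10), 0.1)                  # uninformative predictions
print(inception_score(confident), inception_score(uniform))  # ~10 vs ~1
```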
356 | 356 | ['Kai Arulkumaran', 'Marc Peter Deisenroth', 'Miles Brundage', 'Anil Anthony Bharath'] | 1708.05866v2 | Deep reinforcement learning is poised to revolutionise the field of AI and
represents a step towards building autonomous systems with a higher level
understanding of the visual world. Currently, deep learning is enabling
reinforcement learning to scale to problems that were previously intractable,
such as learning to p... | A Brief Survey of Deep Reinforcement Learning | 2,017 | http://arxiv.org/pdf/1708.05866v2 | Title Brief Survey Deep Reinforcement Learning Summary Deep reinforcement learning poised revolutionise field AI represents step towards building autonomous system higher level understanding visual world Currently deep learning enabling reinforcement learning scale problem previously intractable learning play video gam... | [0.001934720086865127, 0.018902035430073738, -0.0207810178399086, 0.024373536929488182, -0.006007027346640825, -0.008741470053792, 0.022436074912548065, -0.02780217118561268, -0.061189599335193634, 0.015256786718964577, -0.01792782172560692, 0.004106141161173582, 0.006279111839830875, 0.06601067632436752, -0.0008085505... |
357 | 357 | ['Caiwen Ding', 'Siyu Liao', 'Yanzhi Wang', 'Zhe Li', 'Ning Liu', 'Youwei Zhuo', 'Chao Wang', 'Xuehai Qian', 'Yu Bai', 'Geng Yuan', 'Xiaolong Ma', 'Yipeng Zhang', 'Jian Tang', 'Qinru Qiu', 'Xue Lin', 'Bo Yuan'] | 1708.08917v1 | Large-scale deep neural networks (DNNs) are both compute and memory
intensive. As the size of DNNs continues to grow, it is critical to improve the
energy efficiency and performance while maintaining accuracy. For DNNs, the
model size is an important factor affecting performance, scalability and energy
efficiency. Weig... | CirCNN: Accelerating and Compressing Deep Neural Networks Using
Block-Circulant Weight Matrices | 2,017 | http://arxiv.org/pdf/1708.08917v1 | Title CirCNN Accelerating Compressing Deep Neural Networks Using BlockCirculant Weight Matrices Summary Largescale deep neural network DNNs compute memory intensive size DNNs continues grow critical improve energy efficiency performance maintaining accuracy DNNs model size important factor affecting performance scalabil... | [-0.01497843861579895, 0.0228132251650095, -0.016226397827267647, 0.06257771700620651, 0.004769204184412956, -0.0003510774113237858, 0.060953062027692795, 0.006617339793592691, -0.005156007129698992, 0.024105293676257133, -0.02313764952123165, -0.01523275300860405, 0.015319127589464188, 0.003590119071304798, 0.01766840... |
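The savings behind record 357's block-circulant weights come from a standard identity: a circulant block is fully described by its first column and can be multiplied against a vector in O(n log n) with an FFT instead of O(n²). A small NumPy/SciPy demonstration of that identity (block size and values are arbitrary):

```python
import numpy as np
from scipy.linalg import circulant

c = np.array([4.0, 1.0, 3.0, 2.0])          # first column defines the whole block
x = np.array([1.0, 0.0, -1.0, 2.0])

dense = circulant(c) @ x                                  # O(n^2) dense product
fast = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real   # O(n log n) via FFT

print(np.allclose(dense, fast))              # True: same result, less work
```

Only the first column of each block needs to be stored, which is where the memory reduction comes from.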
358 | 358 | ['Cătălina Cangea', 'Petar Veličković', 'Pietro Liò'] | 1709.00572v1 | We propose two multimodal deep learning architectures that allow for
cross-modal dataflow (XFlow) between the feature extractors, thereby extracting
more interpretable features and obtaining a better representation than through
unimodal learning, for the same amount of training data. These models can
usefully exploit c... | XFlow: 1D-2D Cross-modal Deep Neural Networks for Audiovisual
Classification | 2,017 | http://arxiv.org/pdf/1709.00572v1 | Title XFlow 1D2D Crossmodal Deep Neural Networks Audiovisual Classification Summary propose two multimodal deep learning architecture allow crossmodal dataflow XFlow feature extractor thereby extracting interpretable feature obtaining better representation unimodal learning amount training data model usefully exploit c... | [-0.04427507147192955, 0.014811825007200241, -0.01971091330051422, 0.04842028394341469, 0.007753276266157627, -0.028473369777202606, 0.06911830604076385, -0.01682870276272297, -0.04300621524453163, -0.0021564385388046503, -0.12860608100891113, -0.002302121836692095, 0.01866266317665577, 0.06615891307592392, 0.028456566... |
359 | 359 | ['Kun ho Kim', 'Oisin Mac Aodha', 'Pietro Perona'] | 1710.01691v2 | Low dimensional embeddings that capture the main variations of interest in
collections of data are important for many applications. One way to construct
these embeddings is to acquire estimates of similarity from the crowd. However,
similarity is a multi-dimensional concept that varies from individual to
individual. Ex... | Context Embedding Networks | 2,017 | http://arxiv.org/pdf/1710.01691v2 | Title Context Embedding Networks Summary Low dimensional embeddings capture main variation interest collection data important many application One way construct embeddings acquire estimate similarity crowd However similarity multidimensional concept varies individual individual Existing model learning embeddings crowd ... | [-0.03537415713071823, 0.06395672261714935, -0.007716844789683819, 0.07160276174545288, 0.008155053481459618, 0.007829046808183193, 0.04133368283510208, -0.00821962021291256, -0.03246283158659935, -0.010783545672893524, -0.029003407806158066, 0.027255209162831306, 0.002010752446949482, 0.006190483458340168, 0.068160429... |
360 | 360 | ['Garrett B. Goh', 'Charles Siegel', 'Abhinav Vishnu', 'Nathan O. Hodas', 'Nathan Baker'] | 1710.02238v2 | The meteoric rise of deep learning models in computer vision research, having
achieved human-level accuracy in image recognition tasks, is firm evidence of
the impact of representation learning of deep neural networks. In the chemistry
domain, recent advances have also led to the development of similar CNN models,
such ... | How Much Chemistry Does a Deep Neural Network Need to Know to Make
Accurate Predictions? | 2,017 | http://arxiv.org/pdf/1710.02238v2 | Title Much Chemistry Deep Neural Network Need Know Make Accurate Predictions Summary meteoric rise deep learning model computer vision research achieved humanlevel accuracy image recognition task firm evidence impact representation learning deep neural network chemistry domain recent advance also led development simila... | [0.00770224817097187, 0.02630946971476078, -0.025033870711922646, 0.006477885879576206, 0.021847710013389587, -0.024519821628928185, 0.006756324786692858, 0.0017183911986649036, 0.04864497855305672, 0.02938755787909031, -0.009336560033261776, 0.013640801422297955, -0.040505681186914444, 0.05479798838496208, 0.036912634... |
361 | 361 | ['Abhishek Kumar', 'Prasanna Sattigeri', 'Avinash Balakrishnan'] | 1711.00848v2 | Disentangled representations, where the higher level data generative factors
are reflected in disjoint latent dimensions, offer several benefits such as
ease of deriving invariant representations, transferability to other tasks,
interpretability, etc. We consider the problem of unsupervised learning of
disentangled rep... | Variational Inference of Disentangled Latent Concepts from Unlabeled
Observations | 2,017 | http://arxiv.org/pdf/1711.00848v2 | Title Variational Inference Disentangled Latent Concepts Unlabeled Observations Summary Disentangled representation higher level data generative factor reflected disjoint latent dimension offer several benefit ease deriving invariant representation transferability task interpretability etc consider problem unsupervised... | [-0.013651109300553799, 0.13964641094207764, -0.014092152938246727, 0.04212312772870064, -0.01712259091436863, 0.02894134446978569, -0.0025744237937033176, -0.014820554293692112, -0.019702458754181862, 0.021822955459356308, 0.0009461340378038585, 0.010021849535405636, 0.025552650913596153, 0.060200463980436325, 0.03280... |
362 | 362 | ['Stanisław Jastrzębski', 'Zachary Kenton', 'Devansh Arpit', 'Nicolas Ballas', 'Asja Fischer', 'Yoshua Bengio', 'Amos Storkey'] | 1711.04623v1 | We study the properties of the endpoint of stochastic gradient descent (SGD).
By approximating SGD as a stochastic differential equation (SDE) we consider
the Boltzmann-Gibbs equilibrium distribution of that SDE under the assumption
of isotropic variance in loss gradients. Through this analysis, we find that
three fact... | Three Factors Influencing Minima in SGD | 2,017 | http://arxiv.org/pdf/1711.04623v1 | Title Three Factors Influencing Minima SGD Summary study property endpoint stochastic gradient descent SGD approximating SGD stochastic differential equation SDE consider BoltzmannGibbs equilibrium distribution SDE assumption isotropic variance loss gradient analysis find three factor learning rate batch size variance ... | [-0.03010372631251812, -0.06074045971035957, -0.015608815476298332, 0.004908312577754259, 0.015153571963310242, -0.05740569531917572, 0.02932601235806942, 0.008304344490170479, -0.04873552918434143, 0.055902350693941116, 0.02767437882721424, 0.006699562072753906, -0.020996060222387314, 0.03697074204683304, -0.003464875... |
363 | 363 | ['Paweł Liskowski', 'Wojciech Jaśkowski', 'Krzysztof Krawiec'] | 1711.06583v1 | Achieving superhuman playing level by AlphaGo corroborated the capabilities
of convolutional neural architectures (CNNs) for capturing complex spatial
patterns. This result was to a great extent due to several analogies between Go
board states and 2D images CNNs have been designed for, in particular
translational invar... | Learning to Play Othello with Deep Neural Networks | 2,017 | http://arxiv.org/pdf/1711.06583v1 | Title Learning Play Othello Deep Neural Networks Summary Achieving superhuman playing level AlphaGo corroborated capability convolutional neural architecture CNNs capturing complex spatial pattern result great extent due several analogy Go board state 2D image CNNs designed particular translational invariance relativel... | [-0.004043870139867067, 0.045120298862457275, -0.05259804427623749, 0.06701121479272842, 0.009386658668518066, -0.02731923572719097, 0.0049227941781282425, -0.0018305566627532244, -0.04357238486409187, 0.025756599381566048, -0.018135692924261093, -0.002616130979731679, -0.0027867441531270742, 0.059946559369564056, 0.07... |
364 | 364 | ['Jaejun Yoo', 'Sohail Sabir', 'Duchang Heo', 'Kee Hyun Kim', 'Abdul Wahab', 'Yoonseok Choi', 'Seul-I Lee', 'Eun Young Chae', 'Hak Hee Kim', 'Young Min Bae', 'Young-wook Choi', 'Seungryong Cho', 'Jong Chul Ye'] | 1712.00912v1 | Can artificial intelligence (AI) learn complicated non-linear physics? Here
we propose a novel deep learning approach that learns non-linear photon
scattering physics and obtains accurate 3D distribution of optical anomalies.
In contrast to the traditional black-box deep learning approaches to inverse
problems, our dee... | Deep Learning Can Reverse Photon Migration for Diffuse Optical
Tomography | 2,017 | http://arxiv.org/pdf/1712.00912v1 | Title Deep Learning Reverse Photon Migration Diffuse Optical Tomography Summary artificial intelligence AI learn complicated nonlinear physic propose novel deep learning approach learns nonlinear photon scattering physic obtains accurate 3D distribution optical anomaly contrast traditional blackbox deep learning approa... | [0.015387685038149357, -0.026141151785850525, -0.03536622226238251, -0.014373610727488995, -0.017385335639119148, -0.01864778995513916, 0.011410445906221867, 0.011899719946086407, -0.08374179899692535, 0.022748271003365517, 0.04173847660422325, -0.0041262502782046795, -0.022999459877610207, 0.016142629086971283, 0.0176... |
365 | 365 | ['Garrett B. Goh', 'Charles Siegel', 'Abhinav Vishnu', 'Nathan O. Hodas'] | 1712.02734v2 | With access to large datasets, deep neural networks (DNN) have achieved
human-level accuracy in image and speech recognition tasks. However, in
chemistry, data is inherently small and fragmented. In this work, we develop an
approach of using rule-based knowledge for training ChemNet, a transferable and
generalizable de... | Using Rule-Based Labels for Weak Supervised Learning: A ChemNet for
Transferable Chemical Property Prediction | 2,017 | http://arxiv.org/pdf/1712.02734v2 | Title Using RuleBased Labels Weak Supervised Learning ChemNet Transferable Chemical Property Prediction Summary access large datasets deep neural network DNN achieved humanlevel accuracy image speech recognition task However chemistry data inherently small fragmented work develop approach using rulebased knowledge trai... | [-0.01637466624379158, 0.09591254591941833, -0.015093043446540833, 0.01855919323861599, 0.022989120334386826, -0.03170975670218468, 0.04311203956604004, 0.013675128109753132, 0.04681979492306709, 0.0271495059132576, -0.05752216652035713, 0.023094531148672104, -0.04586184397339821, 0.049026962369680405, 0.01768498681485... |
366 | 366 | ['Yeo Hun Yoon', 'Shujaat Khan', 'Jaeyoung Huh', 'Jong Chul Ye'] | 1712.06096v2 | In portable, three dimensional, and ultra-fast ultrasound (US) imaging
systems, there is an increasing need to reconstruct high quality images from a
limited number of RF data from receiver (Rx) or scan-line (SC) sub-sampling.
However, due to the severe side lobe artifacts from RF sub-sampling, the
standard beam-former... | Deep Learning in RF Sub-sampled B-mode Ultrasound Imaging | 2,017 | http://arxiv.org/pdf/1712.06096v2 | Title Deep Learning RF Subsampled Bmode Ultrasound Imaging Summary portable three dimensional ultrafast ultrasound US imaging system increasing need reconstruct high quality image limited number RF data receiver Rx scanline SC subsampling However due severe side lobe artifact RF subsampling standard beamformer often pr... | [0.0008131703943945467, 0.05844822898507118, -0.02203519642353058, -0.026201285421848297, 0.022137360647320747, 0.01236604992300272, 0.045752402395009995, 0.03850363940000534, -0.07252732664346695, 0.04685826599597931, -0.0035056264605373144, -0.07232800126075745, -0.003909144084900618, 0.04674515500664711, -0.00812863... |
367 | 367 | ['Yoseob Han', 'Jawook Gu', 'Jong Chul Ye'] | 1712.10248v2 | Interior tomography for the region-of-interest (ROI) imaging has advantages
of using a small detector and reducing X-ray radiation dose. However, standard
analytic reconstruction suffers from severe cupping artifacts due to existence
of null space in the truncated Radon transform. Existing penalized
reconstruction meth... | Deep Learning Interior Tomography for Region-of-Interest Reconstruction | 2,017 | http://arxiv.org/pdf/1712.10248v2 | Title Deep Learning Interior Tomography RegionofInterest Reconstruction Summary Interior tomography regionofinterest ROI imaging advantage using small detector reducing Xray radiation dose However standard analytic reconstruction suffers severe cupping artifact due existence null space truncated Radon transform Existin... | [-0.03072839044034481, 0.03370160236954689, -0.01693260483443737, 0.005782017018646002, -0.020232217386364937, 0.005034395959228277, -0.019472967833280563, 0.06936164945363998, 0.013841195963323116, 0.0629819855093956, -0.04035986587405205, -0.022958718240261078, 0.010653545148670673, 0.047053512185811996, 0.0118306698... |
368 | 368 | ['Yoseob Han', 'Jingu Kang', 'Jong Chul Ye'] | 1801.01258v1 | For homeland and transportation security applications, 2D X-ray explosive
detection systems (EDS) have been widely used, but they have limitations in
recognizing the 3D shape of hidden objects. Among various types of 3D computed
tomography (CT) systems to address this issue, this paper is interested in a
stationary CT u... | Deep Learning Reconstruction for 9-View Dual Energy CT Baggage Scanner | 2,018 | http://arxiv.org/pdf/1801.01258v1 | Title Deep Learning Reconstruction 9View Dual Energy CT Baggage Scanner Summary homeland transportation security application 2D Xray explosive detection system EDS widely used limitation recognizing 3D shape hidden object Among various type 3D computed tomography CT system address issue paper interested stationary CT u... | [-0.04104342311620712, 0.032573625445365906, -0.005146138835698366, 0.051213718950748444, 0.013152572326362133, 0.0104643814265728, 0.03190971165895462, 0.019044918939471245, -0.054176948964595795, 0.0614003948867321, 0.01349288783967495, 0.0028527413960546255, 0.006450768560171127, 0.10132794082164764, 0.0114539684727... |
369 | 369 | ['Jayanta K Dutta', 'Jiayi Liu', 'Unmesh Kurup', 'Mohak Shah'] | 1801.08577v1 | Deep learning has shown promising results on many machine learning tasks but
DL models are often complex networks with a large number of neurons and layers,
and recently, complex layer structures known as building blocks. Finding the
best deep model requires a combination of finding both the right architecture
and the co... | Effective Building Block Design for Deep Convolutional Neural Networks
using Search | 2,018 | http://arxiv.org/pdf/1801.08577v1 | Title Effective Building Block Design Deep Convolutional Neural Networks using Search Summary Deep learning shown promising result many machine learning task DL model often complex network large number neuron layer recently complex layer structure known building block Finding best deep model requires combination findin... | [0.020713958889245987, 0.06414908915758133, -0.00911646243184805, 0.07008805871009827, 0.019468775019049644, -0.03916212543845177, 0.05745573341846466, 0.03063204325735569, -0.0124319763854146, 0.014917802065610886, -0.011679022572934628, -0.0071962615475058556, -0.03560882434248924, 0.045701056718826294, 0.01874051615... |
370 | 370 | ['Haque Ishfaq', 'Assaf Hoogi', 'Daniel Rubin'] | 1802.04403v1 | Deep metric learning has been demonstrated to be highly effective in learning
semantic representation and encoding information that can be used to measure
data similarity, by relying on the embedding learned from metric learning. At
the same time, variational autoencoder (VAE) has widely been used to
approximate infere... | TVAE: Triplet-Based Variational Autoencoder using Metric Learning | 2,018 | http://arxiv.org/pdf/1802.04403v1 | Title TVAE TripletBased Variational Autoencoder using Metric Learning Summary Deep metric learning demonstrated highly effective learning semantic representation encoding information used measure data similarity relying embedding learned metric learning time variational autoencoder VAE widely used approximate inference... | [-0.019333014264702797, 0.03620387986302376, -0.011517582461237907, 0.03062313050031662, 0.00433266069740057, 0.024699347093701363, 0.004918583203107119, 0.0010207797167822719, -0.04832843318581581, -0.01274857483804226, 0.006990343797951937, 0.00701321242377162, -0.02643158845603466, 0.08037950843572617, 0.05721693858... |
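Record 370 combines metric learning with a VAE by adding a triplet term on the latent codes to the usual ELBO. A minimal sketch of just that loss combination, with the encoder replaced by precomputed latent statistics (the margin, the triplet weight, and the placeholder reconstruction error are illustrative assumptions):

```python
import numpy as np

def triplet_loss(z_anchor, z_pos, z_neg, margin=1.0):
    """Hinge on latent distances: pull positives in, push negatives out."""
    d_pos = np.sum((z_anchor - z_pos) ** 2)
    d_neg = np.sum((z_anchor - z_neg) ** 2)
    return max(0.0, d_pos - d_neg + margin)

def kl_to_standard_normal(mu, logvar):
    """Closed-form KL(q(z|x) || N(0, I)) term of the ELBO."""
    return -0.5 * np.sum(1 + logvar - mu ** 2 - np.exp(logvar))

# Stand-ins for encoder outputs on an (anchor, positive, negative) triplet.
mu_a = np.array([0.1, 0.9])
mu_p = np.array([0.2, 1.0])
mu_n = np.array([-1.0, 0.0])
logvar = np.zeros(2)

recon = 0.42        # placeholder reconstruction error
lam = 1.0           # triplet weight (assumed)
loss = recon + kl_to_standard_normal(mu_a, logvar) + lam * triplet_loss(mu_a, mu_p, mu_n)
print(loss)
```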
371 | 371 | ['Nick Haber', 'Damian Mrowca', 'Li Fei-Fei', 'Daniel L. K. Yamins'] | 1802.07442v1 | Infants are experts at playing, with an amazing ability to generate novel
structured behaviors in unstructured environments that lack clear extrinsic
reward signals. We seek to mathematically formalize these abilities using a
neural network that implements curiosity-driven intrinsic motivation. Using a
simple but ecolo... | Learning to Play with Intrinsically-Motivated Self-Aware Agents | 2,018 | http://arxiv.org/pdf/1802.07442v1 | Title Learning Play IntrinsicallyMotivated SelfAware Agents Summary Infants expert playing amazing ability generate novel structured behavior unstructured environment lack clear extrinsic reward signal seek mathematically formalize ability using neural network implement curiositydriven intrinsic motivation Using simple... | [0.007545554544776678, 0.010510986670851707, -0.035660646855831146, 0.01321401633322239, 0.02706293947994709, -0.02532416209578514, -0.01706690713763237, -0.03375625237822533, -0.020416108891367912, 0.006275697145611048, -0.06261751055717468, 0.07055346667766571, -0.045897942036390305, 0.07986181229352951, 0.0372456461... |
372 | 372 | ['Nick Haber', 'Damian Mrowca', 'Li Fei-Fei', 'Daniel L. K. Yamins'] | 1802.07461v1 | Infants are experts at playing, with an amazing ability to generate novel
structured behaviors in unstructured environments that lack clear extrinsic
reward signals. We seek to replicate some of these abilities with a neural
network that implements curiosity-driven intrinsic motivation. Using a simple
but ecologically ... | Emergence of Structured Behaviors from Curiosity-Based Intrinsic
Motivation | 2,018 | http://arxiv.org/pdf/1802.07461v1 | Title Emergence Structured Behaviors CuriosityBased Intrinsic Motivation Summary Infants expert playing amazing ability generate novel structured behavior unstructured environment lack clear extrinsic reward signal seek replicate ability neural network implement curiositydriven intrinsic motivation Using simple ecologi... | [0.0014127821195870638, 0.01567617617547512, -0.03436420485377312, 0.00010565984848653898, 0.015486599877476692, -0.00438009575009346, -0.043718114495277405, -0.016360478475689888, -0.04642879217863083, 0.02967933937907219, -0.035631969571113586, 0.02661474049091339, -0.018195616081357002, 0.0818972960114479, 0.0488295... |
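Records 371 and 372 above both build on the same curiosity signal: the agent is rewarded where its learned world model mispredicts, so the reward shrinks as the model improves. A stripped-down sketch with a linear forward model trained online (the dynamics, dimensions, and learning rate are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
dim_s, dim_a = 4, 2
W = np.zeros((dim_s, dim_s + dim_a))          # agent's forward model: s' ~ W [s; a]
true_W = rng.normal(size=(dim_s, dim_s + dim_a))

def step(s, a):
    return true_W @ np.concatenate([s, a])    # dynamics, unknown to the agent

for t in range(2001):
    s = rng.normal(size=dim_s)
    a = rng.normal(size=dim_a)
    s_next = step(s, a)
    x = np.concatenate([s, a])
    error = W @ x - s_next
    intrinsic_reward = float(error @ error)   # curiosity: prediction error
    W -= 0.01 * np.outer(error, x)            # improve the world model
    if t % 500 == 0:
        print(t, intrinsic_reward)            # shrinks as the model learns
```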
373 | 373 | ['Emily Denton', 'Rob Fergus'] | 1802.07687v2 | Generating video frames that accurately predict future world states is
challenging. Existing approaches either fail to capture the full distribution
of outcomes, or yield blurry generations, or both. In this paper we introduce
an unsupervised video generation model that learns a prior model of uncertainty
in a given en... | Stochastic Video Generation with a Learned Prior | 2,018 | http://arxiv.org/pdf/1802.07687v2 | Title Stochastic Video Generation Learned Prior Summary Generating video frame accurately predict future world state challenging Existing approach either fail capture full distribution outcome yield blurry generation paper introduce unsupervised video generation model learns prior model uncertainty given environment Vi... | [-0.01858837902545929, 0.07819383591413498, 0.04127709940075874, -0.027748022228479385, -0.005655407905578613, -0.03293720632791519, -0.008636883459985256, 0.019356105476617813, -0.04649140685796738, -0.030510656535625458, 0.061888791620731354, -0.02407054416835308, -0.014147943817079067, 0.07956864684820175, 0.0528176... |
374 | 374 | ['Weifeng Ge', 'Sibei Yang', 'Yizhou Yu'] | 1802.09129v1 | Supervised object detection and semantic segmentation require object or even
pixel level annotations. When there exist image level labels only, it is
challenging for weakly supervised algorithms to achieve accurate predictions.
The accuracy achieved by top weakly supervised algorithms is still
significantly lower than ... | Multi-Evidence Filtering and Fusion for Multi-Label Classification,
Object Detection and Semantic Segmentation Based on Weakly Supervised
Learning | 2,018 | http://arxiv.org/pdf/1802.09129v1 | Title MultiEvidence Filtering Fusion MultiLabel Classification Object Detection Semantic Segmentation Based Weakly Supervised Learning Summary Supervised object detection semantic segmentation require object even pixel level annotation exist image level label challenging weakly supervised algorithm achieve accurate pre... | [0.03029974363744259, -0.01569221541285515, 0.030196411535143852, 0.06377570331096649, 0.007281121332198381, -0.0022118433844298124, 0.009080919437110424, -0.033626068383455276, 0.029017699882388115, 0.006745519116520882, -0.05035201460123062, 0.06610722839832306, -0.012566382996737957, 0.030714480206370354, 0.02697962... |
375 | 375 | ['Quynh Nguyen', 'Mahesh Mukkamala', 'Matthias Hein'] | 1803.00094v1 | In the recent literature the important role of depth in deep learning has
been emphasized. In this paper we argue that sufficient width of a feedforward
network is equally important by answering the simple question under which
conditions the decision regions of a neural network are connected. It turns out
that for a cl... | Neural Networks Should Be Wide Enough to Learn Disconnected Decision
Regions | 2,018 | http://arxiv.org/pdf/1803.00094v1 | Title Neural Networks Wide Enough Learn Disconnected Decision Regions Summary recent literature important role depth deep learning emphasized paper argue sufficient width feedforward network equally important answering simple question condition decision region neural network connected turn class activation function inc... | [0.0016157060163095593, 0.06191333010792732, -0.02980755642056465, 0.038863662630319595, -0.003247797256335616, -0.021014919504523277, 0.07946360111236572, -0.019299747422337532, -0.020289214327931404, -0.004484830889850855, -0.006770009640604258, 0.016767330467700958, 0.008010574616491795, 0.02744484692811966, 0.02263... |
376 | 376 | ['Chengliang Yang', 'Anand Rangarajan', 'Sanjay Ranka'] | 1803.02544v2 | We develop three efficient approaches for generating visual explanations from
3D convolutional neural networks (3D-CNNs) for Alzheimer's disease
classification. One approach conducts sensitivity analysis on hierarchical 3D
image segmentation, and the other two visualize network activations on a
spatial map. Visual chec... | Visual Explanations From Deep 3D Convolutional Neural Networks for
Alzheimer's Disease Classification | 2,018 | http://arxiv.org/pdf/1803.02544v2 | Title Visual Explanations Deep 3D Convolutional Neural Networks Alzheimers Disease Classification Summary develop three efficient approach generating visual explanation 3D convolutional neural network 3DCNNs Alzheimers disease classification One approach conduct sensitivity analysis hierarchical 3D image segmentation t... | [-0.001759025384671986, 0.022373871877789497, 0.0005091670900583267, 0.051700640469789505, 0.025821156799793243, 0.00948801264166832, 0.051337163895368576, -1.5511152014369145e-05, -0.0009512273827567697, 0.03166145086288452, 0.02257706969976425, 0.022253500297665596, 0.03461276739835739, 0.03744889050722122, 0.0280076... |
377 | 377 | ['Pavel Izmailov', 'Dmitrii Podoprikhin', 'Timur Garipov', 'Dmitry Vetrov', 'Andrew Gordon Wilson'] | 1803.05407v1 | Deep neural networks are typically trained by optimizing a loss function with
an SGD variant, in conjunction with a decaying learning rate, until
convergence. We show that simple averaging of multiple points along the
trajectory of SGD, with a cyclical or constant learning rate, leads to better
generalization than conv... | Averaging Weights Leads to Wider Optima and Better Generalization | 2,018 | http://arxiv.org/pdf/1803.05407v1 | Title Averaging Weights Leads Wider Optima Better Generalization Summary Deep neural network typically trained optimizing loss function SGD variant conjunction decaying learning rate convergence show simple averaging multiple point along trajectory SGD cyclical constant learning rate lead better generalization conventi... | [-0.02773691713809967, 0.02495228312909603, 0.008313159458339214, 0.036607641726732254, 0.02793952263891697, -0.0017876147758215666, 0.06204995885491371, -0.0396859273314476, -0.0422554649412632, 0.07349628955125809, 0.03743846341967583, -0.04685478284955025, 0.03435065224766731, 0.030554182827472687, 0.039004039019346... |
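The averaging in record 377 is a running mean over SGD iterates collected once per learning-rate cycle, nothing more. The bookkeeping, separated from any model (cycle length and the iterate stream are placeholders):

```python
import numpy as np

def swa_average(weight_iterates, cycle_len=10):
    """Running average of weights captured at the end of each LR cycle."""
    w_swa, n = None, 0
    for t, w in enumerate(weight_iterates):
        if (t + 1) % cycle_len == 0:          # end of a cyclical-LR cycle
            w_swa = w.copy() if w_swa is None else (w_swa * n + w) / (n + 1)
            n += 1
    return w_swa

# SGD iterates bouncing around an optimum at 1.0; the average sits closer to it.
rng = np.random.default_rng(0)
iterates = [np.full(3, 1.0) + 0.3 * rng.normal(size=3) for _ in range(200)]
print(swa_average(iterates))
```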
378 | 378 | ['Abdulrahman Oladipupo Ibraheem'] | 1412.6749v1 | By drawing on ideas from optimisation theory, artificial neural networks
(ANN), graph embeddings and sparse representations, I develop a novel
technique, termed SENNS (Sparse Extraction Neural NetworkS), aimed at
addressing the feature extraction problem. The proposed method uses (preferably
deep) ANNs for projecting i... | SENNS: Sparse Extraction Neural NetworkS for Feature Extraction | 2,014 | http://arxiv.org/pdf/1412.6749v1 | Title SENNS Sparse Extraction Neural NetworkS Feature Extraction Summary drawing idea optimisation theory artificial neural network ANN graph embeddings sparse representation develop novel technique termed SENNS Sparse Extraction Neural NetworkS aimed addressing feature extraction problem proposed method us preferably ... | [-0.0012898645363748074, 0.04691426083445549, -0.014183642342686653, 0.08530305325984955, -0.0022745367605239153, -0.0011008928995579481, 0.06389493495225906, 0.026457104831933975, -0.01797901839017868, -0.009211565367877483, -0.00664258049800992, -0.014513140544295311, 0.051773786544799805, 0.026841364800930023, 0.024... |
379 | 379 | ['Dougal J. Sutherland', 'Hsiao-Yu Tung', 'Heiko Strathmann', 'Soumyajit De', 'Aaditya Ramdas', 'Alex Smola', 'Arthur Gretton'] | 1611.04488v4 | We propose a method to optimize the representation and distinguishability of
samples from two probability distributions, by maximizing the estimated power
of a statistical test based on the maximum mean discrepancy (MMD). This
optimized MMD is applied to the setting of unsupervised learning by generative
adversarial ne... | Generative Models and Model Criticism via Optimized Maximum Mean
Discrepancy | 2,016 | http://arxiv.org/pdf/1611.04488v4 | Title Generative Models Model Criticism via Optimized Maximum Mean Discrepancy Summary propose method optimize representation distinguishability sample two probability distribution maximizing estimated power statistical test based maximum mean discrepancy MMD optimized MMD applied setting unsupervised learning generati... | [0.013605509884655476, 0.07093748450279236, -0.03573538362979889, 0.02927105501294136, -0.013702447526156902, -0.004465721547603607, 0.05487830191850662, -0.011813649907708168, -0.03715183958411217, 0.006023547146469355, 0.022856414318084717, -0.01222627516835928, -0.0026135826483368874, 0.009016846306622028, 0.0426554... |
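Record 379 optimizes the test power of the maximum mean discrepancy; the statistic itself is easy to estimate. An unbiased MMD² estimator with a fixed RBF kernel (the bandwidth of 1.0 below is a placeholder for the paper's optimized kernel):

```python
import numpy as np

def rbf(a, b, bandwidth):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def mmd2_unbiased(x, y, bandwidth):
    """Unbiased MMD^2: diagonals of the within-sample kernel matrices excluded."""
    m, n = len(x), len(y)
    kxx, kyy, kxy = rbf(x, x, bandwidth), rbf(y, y, bandwidth), rbf(x, y, bandwidth)
    return ((kxx.sum() - np.trace(kxx)) / (m * (m - 1))
            + (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
            - 2 * kxy.mean())

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(500, 2))
y = rng.normal(0.5, 1.0, size=(500, 2))       # distribution with shifted mean
print(mmd2_unbiased(x[:250], x[250:], 1.0))   # ~0: two samples, same distribution
print(mmd2_unbiased(x, y, 1.0))               # clearly positive: different ones
```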
380 | 380 | ['Jiequn Han', 'Weinan E'] | 1611.07422v1 | Many real world stochastic control problems suffer from the "curse of
dimensionality". To overcome this difficulty, we develop a deep learning
approach that directly solves high-dimensional stochastic control problems
based on Monte-Carlo sampling. We approximate the time-dependent controls as
feedforward neural networ... | Deep Learning Approximation for Stochastic Control Problems | 2,016 | http://arxiv.org/pdf/1611.07422v1 | Title Deep Learning Approximation Stochastic Control Problems Summary Many real world stochastic control problem suffer curse dimensionality overcome difficulty develop deep learning approach directly solves highdimensional stochastic control problem based MonteCarlo sampling approximate timedependent control feedforwa... | [-0.03404943272471428, 0.016462624073028564, -0.009168632328510284, 0.01569977216422558, 0.03265073150396347, -0.028827466070652008, 0.0308519396930933, 0.012387428432703018, -0.023935584351420403, 0.02761777676641941, -0.005317831877619028, -0.019688762724399567, -0.03616824373602867, 0.09597254544496536, 0.0452575609... |
381 | 381 | ['Marwin H. S. Segler', 'Thierry Kogej', 'Christian Tyrchan', 'Mark P. Waller'] | 1701.01329v1 | In de novo drug design, computational strategies are used to generate novel
molecules with good affinity to the desired biological target. In this work, we
show that recurrent neural networks can be trained as generative models for
molecular structures, similar to statistical language models in natural
language process... | Generating Focussed Molecule Libraries for Drug Discovery with Recurrent
Neural Networks | 2,017 | http://arxiv.org/pdf/1701.01329v1 | Title Generating Focussed Molecule Libraries Drug Discovery Recurrent Neural Networks Summary de novo drug design computational strategy used generate novel molecule good affinity desired biological target work show recurrent neural network trained generative model molecular structure similar statistical language model... | [0.03883090987801552, 0.023383507505059242, -0.0018499937141314149, 0.007632397580891848, 0.002424982376396656, -0.029439885169267654, 0.032943740487098694, 0.002812813501805067, 0.001298057846724987, -0.011611754074692726, 0.012721366249024868, 0.00701055396348238, 0.02611383982002735, 0.07596792280673981, 0.062137685... |
382 | 382 | ['Matthias Plappert', 'Rein Houthooft', 'Prafulla Dhariwal', 'Szymon Sidor', 'Richard Y. Chen', 'Xi Chen', 'Tamim Asfour', 'Pieter Abbeel', 'Marcin Andrychowicz'] | 1706.01905v2 | Deep reinforcement learning (RL) methods generally engage in exploratory
behavior through noise injection in the action space. An alternative is to add
noise directly to the agent's parameters, which can lead to more consistent
exploration and a richer set of behaviors. Methods such as evolutionary
strategies use param... | Parameter Space Noise for Exploration | 2,017 | http://arxiv.org/pdf/1706.01905v2 | Title Parameter Space Noise Exploration Summary Deep reinforcement learning RL method generally engage exploratory behavior noise injection action space alternative add noise directly agent parameter lead consistent exploration richer set behavior Methods evolutionary strategy use parameter perturbation discard tempora... | [-0.0003265595296397805, 0.047623660415410995, 0.0010571572929620743, -0.02193429134786129, 0.014544806443154812, -0.002184385433793068, -0.035086486488580704, -0.03803787752985954, -0.06285911798477173, 0.028673654422163963, 0.015436917543411255, 0.03805789723992348, -0.04189801216125488, 0.04146125540137291, 0.018569... |
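Record 382's exploration scheme perturbs the policy's parameters rather than its actions, adapting the noise scale so that the induced change in action space stays near a target. A compact sketch with a linear policy (the target distance and adaptation factor follow no particular tuning):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(size=(2, 4))               # linear policy: a = theta @ s
sigma, target_dist = 0.1, 0.2

for episode in range(50):
    perturbed = theta + sigma * rng.normal(size=theta.shape)
    states = rng.normal(size=(32, 4))         # states seen this episode
    # How far do the perturbed policy's actions drift from the current one?
    dist = np.sqrt(np.mean((states @ perturbed.T - states @ theta.T) ** 2))
    sigma *= 1.01 if dist < target_dist else 1 / 1.01   # adapt the noise scale
    # ... act with `perturbed`, collect reward, update `theta` with RL ...

print(sigma)
```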
383 | 383 | ['El Mahdi El Mhamdi', 'Rachid Guerraoui', 'Sebastien Rouault'] | 1707.08167v2 | With the development of neural networks based machine learning and their
usage in mission critical applications, voices are rising against the
\textit{black box} aspect of neural networks as it becomes crucial to
understand their limits and capabilities. With the rise of neuromorphic
hardware, it is even more critical ... | On The Robustness of a Neural Network | 2,017 | http://arxiv.org/pdf/1707.08167v2 | Title Robustness Neural Network Summary development neural network based machine learning usage mission critical application voice rising textitblack box aspect neural network becomes crucial understand limit capability rise neuromorphic hardware even critical understand neural network distributed system tolerates fail... | [-0.026057636365294456, 0.03605284169316292, -0.02108803763985634, 0.025893907994031906, 0.003046888392418623, -0.05413280799984932, 0.011631691828370094, -0.04342258721590042, 0.0014416167978197336, 0.007316091563552618, 0.044314607977867126, 0.02957012690603733, -0.0011027463478967547, 0.06523576378822327, 0.04307548... |
384 | 384 | ['Jiaxin Shi', 'Jianfei Chen', 'Jun Zhu', 'Shengyang Sun', 'Yucen Luo', 'Yihong Gu', 'Yuhao Zhou'] | 1709.05870v1 | In this paper we introduce ZhuSuan, a python probabilistic programming
library for Bayesian deep learning, which conjoins the complementary advantages
of Bayesian methods and deep learning. ZhuSuan is built upon Tensorflow. Unlike
existing deep learning libraries, which are mainly designed for deterministic
neural netw... | ZhuSuan: A Library for Bayesian Deep Learning | 2,017 | http://arxiv.org/pdf/1709.05870v1 | Title ZhuSuan Library Bayesian Deep Learning Summary paper introduce ZhuSuan python probabilistic programming library Bayesian deep learning conjoins complimentary advantage Bayesian method deep learning ZhuSuan built upon Tensorflow Unlike existing deep learning library mainly designed deterministic neural network sup... | [-0.01321567315608263, 0.04119647294282913, -0.007770095020532608, -0.015688182786107063, 0.009080410934984684, 0.0002429285377729684, 0.04519704729318619, -0.015076905488967896, -0.038274284452199936, -0.004509477876126766, 0.017884528264403343, -0.011379523202776909, 0.03480071201920509, 0.09155195206403732, 0.012622... |
385 | 385 | ['Konstantinos Chatzilygeroudis', 'Jean-Baptiste Mouret'] | 1709.06917v2 | The most data-efficient algorithms for reinforcement learning in robotics are
model-based policy search algorithms, which alternate between learning a
dynamical model of the robot and optimizing a policy to maximize the expected
return given the model and its uncertainties. Among the few proposed
approaches, the recent... | Using Parameterized Black-Box Priors to Scale Up Model-Based Policy
Search for Robotics | 2,017 | http://arxiv.org/pdf/1709.06917v2 | Title Using Parameterized BlackBox Priors Scale ModelBased Policy Search Robotics Summary dataefficient algorithm reinforcement learning robotics modelbased policy search algorithm alternate learning dynamical model robot optimizing policy maximize expected return given model uncertainty Among proposed approach recentl... | [0.011151357553899288, -0.012564446777105331, 0.001024615135975182, -0.06330064684152603, 0.008877766318619251, -0.0006621857755817473, 0.014855976216495037, -0.004244039300829172, -0.028440549969673157, -0.009114322252571583, -0.04701918363571167, -0.018953688442707062, -0.012030098587274551, 0.07456639409065247, 0.02... |
386 | 386 | ['Rémi Pautrat', 'Konstantinos Chatzilygeroudis', 'Jean-Baptiste Mouret'] | 1709.06919v2 | One of the most interesting features of Bayesian optimization for direct
policy search is that it can leverage priors (e.g., from simulation or from
previous tasks) to accelerate learning on a robot. In this paper, we are
interested in situations for which several priors exist but we do not know in
advance which one fi... | Bayesian Optimization with Automatic Prior Selection for Data-Efficient
Direct Policy Search | 2,017 | http://arxiv.org/pdf/1709.06919v2 | Title Bayesian Optimization Automatic Prior Selection DataEfficient Direct Policy Search Summary One interesting feature Bayesian optimization direct policy search leverage prior eg simulation previous task accelerate learning robot paper interested situation several prior exist know advance one fit best current situat... | [-0.00995427742600441, 0.028338341042399406, -0.012279587797820568, -0.004892553668469191, 0.01670580916106701, -0.006485891528427601, 0.032854028046131134, 0.020987290889024734, -0.007547059096395969, -0.0314612090587616, 0.006708703003823757, 0.0399947352707386, 0.0047004008665680885, 0.032445043325424194, 0.01308621... |
387 | 387 | ['Thiago Serra', 'Christian Tjandraatmadja', 'Srikumar Ramalingam'] | 1711.02114v2 | In this paper, we study the representational power of deep neural networks
(DNN) that belong to the family of piecewise-linear (PWL) functions, based on
PWL activation units such as rectifier or maxout. We investigate the complexity
of such networks by studying the number of linear regions of the PWL function.
Typicall... | Bounding and Counting Linear Regions of Deep Neural Networks | 2,017 | http://arxiv.org/pdf/1711.02114v2 | Title Bounding Counting Linear Regions Deep Neural Networks Summary paper study representational power deep neural network DNN belong family piecewiselinear PWL function based PWL activation unit rectifier maxout investigate complexity network studying number linear region PWL function Typically PWL function DNN seen l... | [-0.050812914967536926, 0.04456111043691635, -0.04539470374584198, 0.06889405846595764, 0.018647756427526474, -0.03307289630174637, 0.053659118711948395, -0.04094891995191574, -0.011029049754142761, 0.03817381337285042, 0.009715624153614044, -0.005379774607717991, -0.011343705467879772, 0.06885793060064316, 0.045825649... |
388 | 388 | ['Guillaume Bellec', 'David Kappel', 'Wolfgang Maass', 'Robert Legenstein'] | 1711.05136v4 | Neuromorphic hardware tends to pose limits on the connectivity of deep
networks that one can run on them. Generic hardware and software
implementations of deep learning also run more efficiently for sparse networks.
Several methods exist for pruning connections of a neural network after it was
trained without conne... | Deep Rewiring: Training very sparse deep networks | 2,017 | http://arxiv.org/pdf/1711.05136v4 | Title Deep Rewiring Training sparse deep network Summary Neuromorphic hardware tends pose limit connectivity deep network one run also generic hardware software implementation deep learning run efficiently sparse network Several method exist pruning connection neural network trained without connectivity constraint pres... | [-0.043236829340457916, 0.016069017350673676, -0.0027782388497143984, 0.04958239570260048, 0.008396126329898834, -0.05497850477695465, 0.00010714112431742251, -0.016810746863484383, -0.05139242112636566, -0.010272297076880932, -0.042071785777807236, 0.02903454750776291, 0.0350470244884491, 0.03272346034646034, 0.054298... |
389 | 389 | ['Artit Wangperawong', 'Kettip Kriangchaivech', 'Austin Lanari', 'Supui Lam', 'Panthong Wangperawong'] | 1801.03143v1 | To compare entities of differing types and structural components, the
artificial neural network paradigm was used to cross-compare structural
components between heterogeneous documents. Trainable weighted structural
components were input into machine-learned activation functions of the neurons.
The model was used for m... | Comparing heterogeneous entities using artificial neural networks of
trainable weighted structural components and machine-learned activation
functions | 2,018 | http://arxiv.org/pdf/1801.03143v1 | Title Comparing heterogeneous entity using artificial neural network trainable weighted structural component machinelearned activation function Summary compare entity differing type structural component artificial neural network paradigm used crosscompare structural component heterogeneous document Trainable weighted s... | [0.03217822685837746, 0.026238176971673965, -0.025483589619398117, 0.0047994875349104404, -0.006483330857008696, 0.01590096764266491, 0.03214715048670769, 0.030115464702248573, -0.017625799402594566, -0.04611966386437416, -0.04700959846377373, -0.02767229452729225, 0.040130794048309326, 0.04134885221719742, -0.00246275... |
390 | 390 | ['Adrien Baranes', 'Pierre-Yves Oudeyer'] | 1301.4862v1 | We introduce the Self-Adaptive Goal Generation - Robust Intelligent Adaptive
Curiosity (SAGG-RIAC) architecture as an intrinsically motivated goal
exploration mechanism which allows active learning of inverse models in
high-dimensional redundant robots. This allows a robot to efficiently and
actively learn distributi... | Active Learning of Inverse Models with Intrinsically Motivated Goal
Exploration in Robots | 2,013 | http://arxiv.org/pdf/1301.4862v1 | Title Active Learning Inverse Models Intrinsically Motivated Goal Exploration Robots Summary introduce SelfAdaptive Goal Generation Robust Intelligent Adaptive Curiosity SAGGRIAC architecture intrinsi cally motivated goal exploration mechanism allows active learning inverse model highdimensional redundant robot allows ... | [0.01569799706339836, -0.019697241485118866, -0.010390009731054306, -0.04707631096243858, 0.019959047436714172, 0.0067192609421908855, -0.008895752020180225, -0.052416108548641205, -0.023944152519106865, -0.005776107311248779, -0.06151485815644264, 0.05318199843168259, -0.0376877561211586, 0.05663970857858658, 0.006373... |
391 | 391 | ['Peter Ondruska', 'Julie Dequaire', 'Dominic Zeng Wang', 'Ingmar Posner'] | 1604.05091v2 | In this work we present a novel end-to-end framework for tracking and
classifying a robot's surroundings in complex, dynamic and only partially
observable real-world environments. The approach deploys a recurrent neural
network to filter an input stream of raw laser measurements in order to
directly infer object locati... | End-to-End Tracking and Semantic Segmentation Using Recurrent Neural
Networks | 2,016 | http://arxiv.org/pdf/1604.05091v2 | Title EndtoEnd Tracking Semantic Segmentation Using Recurrent Neural Networks Summary work present novel endtoend framework tracking classifying robot surroundings complex dynamic partially observable realworld environment approach deploys recurrent neural network filter input stream raw laser measurement order directl... | [-0.0054203178733587265, -0.001586581813171506, 0.032583948224782944, 0.06681782752275467, 0.008075389079749584, -0.026314599439501762, 0.0012463745661079884, -0.06645060330629349, -0.04777192696928978, -0.017123231664299965, 0.03182670474052429, 0.04510796442627907, -0.04645577073097229, 0.03661247342824936, -0.014234... |
392 | 392 | ['Peter Ondruska', 'Ingmar Posner'] | 1602.00991v2 | This paper presents to the best of our knowledge the first end-to-end object
tracking approach which directly maps from raw sensor input to object tracks in
sensor space without requiring any feature engineering or system identification
in the form of plant or sensor models. Specifically, our system accepts a
stream of... | Deep Tracking: Seeing Beyond Seeing Using Recurrent Neural Networks | 2,016 | http://arxiv.org/pdf/1602.00991v2 | Title Deep Tracking Seeing Beyond Seeing Using Recurrent Neural Networks Summary paper present best knowledge first endtoend object tracking approach directly map raw sensor input object track sensor space without requiring feature engineering system identification form plant sensor model Specifically system accepts st... | [-0.040110498666763306, 0.020441696047782898, 0.03217256814241409, 0.0626005306839943, -0.010437419638037682, -0.030604690313339233, -0.00931694358587265, -0.04911627620458603, -0.008815133012831211, -0.015731601044535637, 0.02961554564535618, 0.012684076093137264, 0.0022867396473884583, 0.08908965438604355, 0.01761854... |
393 | 393 | ['William Lotter', 'Gabriel Kreiman', 'David Cox'] | 1605.08104v5 | While great strides have been made in using deep learning algorithms to solve
supervised learning tasks, the problem of unsupervised learning - leveraging
unlabeled examples to learn about the structure of a domain - remains a
difficult unsolved challenge. Here, we explore prediction of future frames in a
video sequenc... | Deep Predictive Coding Networks for Video Prediction and Unsupervised
Learning | 2,016 | http://arxiv.org/pdf/1605.08104v5 | Title Deep Predictive Coding Networks Video Prediction Unsupervised Learning Summary great stride made using deep learning algorithm solve supervised learning task problem unsupervised learning leveraging unlabeled example learn structure domain remains difficult unsolved challenge explore prediction future frame video... | [-0.02492854744195938, 0.03656262159347534, -0.0023122630082070827, 0.03793924301862717, 0.02912059612572193, 0.008039752021431923, 0.009802541695535183, 0.01682428829371929, -0.0555635504424572, 0.022038007155060768, 0.03866586089134216, -0.0014181542210280895, 0.015415824018418789, 0.08241347223520279, 0.041331782937... |
394 | 394 | ['Martin Engelcke', 'Dushyant Rao', 'Dominic Zeng Wang', 'Chi Hay Tong', 'Ingmar Posner'] | 1609.06666v2 | This paper proposes a computationally efficient approach to detecting objects
natively in 3D point clouds using convolutional neural networks (CNNs). In
particular, this is achieved by leveraging a feature-centric voting scheme to
implement novel convolutional layers which explicitly exploit the sparsity
encountered in... | Vote3Deep: Fast Object Detection in 3D Point Clouds Using Efficient
Convolutional Neural Networks | 2,016 | http://arxiv.org/pdf/1609.06666v2 | Title Vote3Deep Fast Object Detection 3D Point Clouds Using Efficient Convolutional Neural Networks Summary paper proposes computationally efficient approach detecting object natively 3D point cloud using convolutional neural network CNNs particular achieved leveraging featurecentric voting scheme implement novel convo... | [-0.010530673898756504, -0.00397380068898201, 0.03137296810746193, 0.08886940777301788, -0.00474140839651227, -0.0034435151610523462, 0.012546230107545853, -0.039406031370162964, -0.03957242891192436, 0.02292507514357567, 0.008108907379209995, 0.05722877010703087, -0.008607571944594383, 0.06548508256673813, 0.035875830... |
395 | 395 | ['Naveen Kodali', 'Jacob Abernethy', 'James Hays', 'Zsolt Kira'] | 1705.07215v5 | We propose studying GAN training dynamics as regret minimization, which is in
contrast to the popular view that there is consistent minimization of a
divergence between real and generated distributions. We analyze the convergence
of GAN training from this new point of view to understand why mode collapse
happens. We hy... | On Convergence and Stability of GANs | 2,017 | http://arxiv.org/pdf/1705.07215v5 | Title Convergence Stability GANs Summary propose studying GAN training dynamic regret minimization contrast popular view consistent minimization divergence real generated distribution analyze convergence GAN training new point view understand mode collapse happens hypothesize existence undesirable local equilibrium non... | [-0.04972640424966812, 0.07576105743646622, -0.020075885578989983, 0.007751937955617905, 0.036929573863744736, -0.014320901595056057, -0.007350770756602287, 0.008607647381722927, -0.06422300636768341, 0.0783543586730957, -0.006085442379117012, -0.030311310663819313, -0.020794961601495743, 0.007470865733921528, 0.056025... |
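The regularizer proposed in record 395 penalizes the discriminator's gradient norm on points perturbed around the real data, discouraging the sharp local equilibria it blames for mode collapse. A PyTorch sketch of that penalty alone (the noise scale and coefficient are commonly cited defaults, treated here as assumptions):

```python
import torch

def gradient_penalty(D, x_real, coeff=10.0, noise_scale=0.5):
    """Penalize ||grad_x D(x)|| deviating from 1 near the real data manifold."""
    noise = noise_scale * x_real.std() * torch.rand_like(x_real)
    x_hat = (x_real + noise).detach().requires_grad_(True)
    grads, = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return coeff * ((grad_norm - 1.0) ** 2).mean()

# Tiny demonstration with a throwaway discriminator.
D = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))
x = torch.randn(4, 8)
print(gradient_penalty(D, x))                 # add this to the discriminator loss
```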
396 | 396 | ['YuXuan Liu', 'Abhishek Gupta', 'Pieter Abbeel', 'Sergey Levine'] | 1707.03374v1 | Imitation learning is an effective approach for autonomous systems to acquire control policies when an explicit reward function is unavailable, using supervision provided as demonstrations from an expert, typically a human operator. However, standard imitation learning methods assume that the agent receives examples of... | Imitation from Observation: Learning to Imitate Behaviors from Raw Video via Context Translation | 2,017 | http://arxiv.org/pdf/1707.03374v1 | Title Imitation Observation Learning Imitate Behaviors Raw Video via Context Translation Summary Imitation learning effective approach autonomous system acquire control policy explicit reward function unavailable using supervision provided demonstration expert typically human operator However standard imitation learnin... | [0.015244332142174244, 0.020528141409158707, 0.016690772026777267, -0.00498945452272892, -0.014369165524840355, -0.00951950903981924, 0.032931145280599594, 0.016603803262114525, -0.026855409145355225, -0.03213277459144592, -0.016663720831274986, 0.02962314523756504, -0.027660749852657318, 0.028468824923038483, 0.017942...
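Row 396's context-translation pipeline turns an expert video into a reward signal: the demonstration is translated into the learner's context, and the agent is rewarded for keeping its observation features close to the translated ones. A simplified numpy sketch of just that reward term (the squared-error form, single weight, and feature shapes are assumptions; the translation model itself is a learned encoder-decoder in the paper):

import numpy as np

def tracking_reward(agent_feats, translated_feats, w=1.0):
    """Per-step reward for imitation from observation: stay close to the
    demonstration after it has been translated into the agent's context.
    Both inputs are (T, D) feature trajectories."""
    return -w * np.sum((agent_feats - translated_feats) ** 2, axis=1)

rng = np.random.default_rng(1)
T, D = 10, 32
r = tracking_reward(rng.standard_normal((T, D)), rng.standard_normal((T, D)))
print(r.shape, r[:3])  # one (negative) reward per time step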
397 | 397 | ['Vamsi K. Ithapu', 'Sathya Ravi', 'Vikas Singh'] | 1506.03412v3 | Unsupervised pretraining and dropout have been well studied, especially with respect to regularization and output consistency. However, our understanding about the explicit convergence rates of the parameter estimates, and their dependence on the learning (like denoising and dropout rate) and structural (like depth and... | Convergence rates for pretraining and dropout: Guiding learning parameters using network structure | 2,015 | http://arxiv.org/pdf/1506.03412v3 | Title Convergence rate pretraining dropout Guiding learning parameter using network structure Summary Unsupervised pretraining dropout well studied especially respect regularization output consistency However understanding explicit convergence rate parameter estimate dependence learning like denoising dropout rate stru... | [-0.022588271647691727, 0.05241832509636879, -0.01163649931550026, 0.012558311223983765, 0.01884423941373825, -0.02940197102725506, 0.014257178641855717, 0.0019173461478203535, -0.042971834540367126, 0.020522965118288994, -0.014107691124081612, 0.02639881707727909, -0.008156240917742252, 0.09559813886880875, 0.00765846...
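Row 397 studies how learning parameters such as the dropout rate enter convergence-rate bounds. For reference, the standard inverted-dropout step in which that rate appears (a generic textbook sketch, not the paper's analysis):

import numpy as np

def inverted_dropout(x, rate, rng):
    """Zero each unit with probability `rate` and rescale survivors so the
    layer's expected output is unchanged; `rate` is exactly the kind of
    learning parameter whose effect on convergence the row above analyzes."""
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

rng = np.random.default_rng(0)
x = np.ones((4, 8))
print(inverted_dropout(x, 0.5, rng).mean())  # close to 1.0 in expectation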
398 | 398 | ['Zhuolin Jiang', 'Yaming Wang', 'Larry Davis', 'Walt Andrews', 'Viktor Rozgic'] | 1602.01168v2 | Deep Convolutional Neural Networks (CNNs) enforce supervised information only at the output layer, and hidden layers are trained by back-propagating the prediction error from the output layer without explicit supervision. We propose a supervised feature learning approach, Label Consistent Neural Network, which enforces... | Learning Discriminative Features via Label Consistent Neural Network | 2,016 | http://arxiv.org/pdf/1602.01168v2 | Title Learning Discriminative Features via Label Consistent Neural Network Summary Deep Convolutional Neural Networks CNN enforces supervised information output layer hidden layer trained back propagating prediction error output layer without explicit supervision propose supervised feature learning approach Label Consi... | [0.020428013056516647, 0.04332013055682182, -0.0005980939022265375, 0.03674156591296196, 0.031040387228131294, -0.0040699574165046215, 0.050413019955158234, -0.005540117155760527, 0.012124375440180302, -0.029857371002435684, -0.041967373341321945, 0.0609920397400856, -0.05401777848601341, 0.026943597942590714, -0.01131...
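Row 398's Label Consistent Neural Network attaches supervision to hidden layers instead of only the output. A hedged PyTorch sketch of the general recipe, using a linear probe on the hidden activations and an assumed auxiliary weight of 0.1 (the paper's actual label-consistency objective is more specific than this):

import torch
import torch.nn.functional as F

# Assumed toy network: an auxiliary head supervises the hidden layer directly,
# mimicking the idea of enforcing label consistency below the output layer.
hidden = torch.nn.Linear(16, 32)
aux_head = torch.nn.Linear(32, 10)  # probe supervising the hidden layer
out_head = torch.nn.Linear(32, 10)  # usual output classifier

x = torch.randn(8, 16)
y = torch.randint(0, 10, (8,))
h = torch.relu(hidden(x))
# output loss plus an explicit supervision term on the hidden representation
loss = F.cross_entropy(out_head(h), y) + 0.1 * F.cross_entropy(aux_head(h), y)
loss.backward()
print(float(loss))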
399 | 399 | ['Hamid Dadkhahi', 'Marco F. Duarte', 'Benjamin Marlin'] | 1606.08282v3 | This paper proposes an out-of-sample extension framework for a global manifold learning algorithm (Isomap) that uses temporal information in out-of-sample points in order to make the embedding more robust to noise and artifacts. Given a set of noise-free training data and its embedding, the proposed framework extends t... | Out-of-Sample Extension for Dimensionality Reduction of Noisy Time Series | 2,016 | http://arxiv.org/pdf/1606.08282v3 | Title OutofSample Extension Dimensionality Reduction Noisy Time Series Summary paper proposes outofsample extension framework global manifold learning algorithm Isomap us temporal information outofsample point order make embedding robust noise artifact Given set noisefree training data embedding proposed framework exte... | [-0.05783146992325783, 0.03087870217859745, 0.016298944130539894, 0.0324084535241127, 0.02472299337387085, 0.04794783517718315, 0.006237872876226902, 0.01697060652077198, 0.028118178248405457, 0.0300690196454525, 0.07045266032218933, 0.03230315446853638, 0.004669292829930782, 0.006983146071434021, 0.042533956468105316,...
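Row 399 extends a trained Isomap embedding to new, noisy time-series points. A generic stand-in sketch of the two ingredients: place each new point by distance-weighted interpolation of nearby training embeddings, then smooth consecutive embeddings in time (the kNN interpolation and the smoothing factor alpha are assumptions, not the paper's exact operator):

import numpy as np

def embed_out_of_sample(X_train, Y_train, x_new, k=5):
    """Landmark-style out-of-sample step: place the new point at the
    distance-weighted average of its k nearest training embeddings."""
    d = np.linalg.norm(X_train - x_new, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-8)
    return (Y_train[idx] * w[:, None]).sum(0) / w.sum()

def embed_series(X_train, Y_train, X_new, alpha=0.7):
    """Temporal smoothing over consecutive out-of-sample points, echoing
    the row's use of temporal information to resist noise."""
    out, prev = [], None
    for x in X_new:
        y = embed_out_of_sample(X_train, Y_train, x)
        prev = y if prev is None else alpha * y + (1 - alpha) * prev
        out.append(prev)
    return np.asarray(out)

rng = np.random.default_rng(2)
Y = embed_series(rng.standard_normal((100, 10)), rng.standard_normal((100, 2)),
                 rng.standard_normal((20, 10)))
print(Y.shape)  # (20, 2): one low-dimensional point per new time step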