# Computer Science

## New submissions

[ total of 333 entries: 1-333 ]

### New submissions for Tue, 20 Mar 18

[1]
Title: Serverless Data Analytics with Flint
Subjects: Distributed, Parallel, and Cluster Computing (cs.DC); Databases (cs.DB)

Serverless architectures organized around loosely-coupled function invocations represent an emerging design for many applications. Recent work mostly focuses on user-facing products and event-driven processing pipelines. In this paper, we explore a completely different part of the application space and examine the feasibility of analytical processing on big data using a serverless architecture. We present Flint, a prototype Spark execution engine that takes advantage of AWS Lambda to provide a pure pay-as-you-go cost model. With Flint, a developer uses PySpark exactly as before, but without needing an actual Spark cluster. We describe the design, implementation, and performance of Flint, along with the challenges associated with serverless analytics.

[2]
Title: A Low-rank Tensor Regularization Strategy for Hyperspectral Unmixing
Subjects: Computer Vision and Pattern Recognition (cs.CV)

Tensor-based methods have recently emerged as a more natural and effective formulation to address many problems in hyperspectral imaging. In hyperspectral unmixing (HU), low-rank constraints on the abundance maps have been shown to act as a regularization which adequately accounts for the multidimensional structure of the underlying signal. However, imposing a strict low-rank constraint for the abundance maps does not seem to be adequate, as important information that may be required to represent fine-scale abundance behavior may be discarded. This paper introduces a new low-rank tensor regularization that adequately captures the low-rank structure underlying the abundance maps without hindering the flexibility of the solution. Simulation results with synthetic and real data show that the extra flexibility introduced by the proposed regularization significantly improves the unmixing results.

[3]
Title: Adversarial Logit Pairing
Subjects: Learning (cs.LG); Machine Learning (stat.ML)

In this paper, we develop improved techniques for defending against adversarial examples at scale. First, we implement the state-of-the-art version of adversarial training at unprecedented scale on ImageNet and investigate whether it remains effective in this setting - an important open scientific question (Athalye et al., 2018). Next, we introduce enhanced defenses using a technique we call logit pairing, a method that encourages logits for pairs of examples to be similar. When applied to clean examples and their adversarial counterparts, logit pairing improves accuracy on adversarial examples over vanilla adversarial training; we also find that logit pairing on clean examples only is competitive with adversarial training in terms of accuracy on two datasets. Finally, we show that adversarial logit pairing achieves the state-of-the-art defense on ImageNet against PGD white-box attacks, with an accuracy improvement from 1.5% to 27.9%. Adversarial logit pairing also successfully damages the current state-of-the-art defense against black-box attacks on ImageNet (Tramer et al., 2018), dropping its accuracy from 66.6% to 47.1%. With this new accuracy drop, adversarial logit pairing ties with Tramer et al. (2018) for the state of the art on black-box attacks on ImageNet.
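As a minimal sketch of the core idea (not the paper's implementation), the logit pairing penalty can be written as a scaled squared distance between the logits a model produces for a clean example and for its adversarial counterpart; the function name, `lam`, and the toy batch below are illustrative assumptions:

```python
import numpy as np

def logit_pairing_penalty(logits_clean, logits_adv, lam=0.5):
    """Mean squared distance between paired logits, scaled by lambda."""
    diff = logits_clean - logits_adv
    return lam * np.mean(np.sum(diff ** 2, axis=1))

# Toy batch: 2 examples, 3 classes.
clean = np.array([[2.0, 0.0, -1.0], [0.5, 1.5, 0.0]])
adv = np.array([[1.0, 0.5, -0.5], [0.0, 1.0, 0.5]])
penalty = logit_pairing_penalty(clean, adv)  # added to the training loss
```

In training, this term would be added to the usual (adversarial) cross-entropy loss, pulling the paired logits together.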

[4]
Title: A Generalised Method for Empirical Game Theoretic Analysis
Subjects: Computer Science and Game Theory (cs.GT); Multiagent Systems (cs.MA)

This paper provides theoretical bounds for empirical game-theoretic analysis of complex multi-agent interactions. We provide insights into the empirical meta-game, showing that a Nash equilibrium of the meta-game is an approximate Nash equilibrium of the true underlying game. We investigate and show how many data samples are required to obtain a close enough approximation of the underlying game. Additionally, we extend the meta-game analysis methodology to asymmetric games. The state of the art has only considered empirical games in which agents have access to the same strategy sets and the payoff structure is symmetric, implying that agents are interchangeable. Finally, we carry out an empirical illustration of the generalised method in several domains, illustrating the theory and evolutionary dynamics of several versions of the AlphaGo algorithm (symmetric), the dynamics of the Colonel Blotto game played by human players on Facebook (symmetric), and an example of a meta-game in Leduc Poker (asymmetric), generated by the PSRO multi-agent learning algorithm.
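To make the notion of an (approximate) Nash equilibrium of a payoff matrix concrete, here is a minimal sketch; the `nash_gap` function and the matching-pennies example are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def nash_gap(A, B, x, y):
    """Largest unilateral gain in a two-player game with payoff matrices
    A (row player) and B (column player) at mixed profile (x, y).
    A gap of 0 means (x, y) is an exact Nash equilibrium; a gap of
    epsilon means it is an epsilon-approximate equilibrium."""
    row_gain = (A @ y).max() - x @ A @ y   # best deviation for row player
    col_gain = (x @ B).max() - x @ B @ y   # best deviation for column player
    return max(row_gain, col_gain)

# Matching pennies: uniform play is the exact Nash equilibrium.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
B = -A
x = y = np.array([0.5, 0.5])
gap = nash_gap(A, B, x, y)
```

An empirical meta-game replaces A and B with payoff estimates from samples; the paper bounds how this estimation error propagates into the gap.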

[5]
Title: Spread of Information with Confirmation Bias in Cyber-Social Networks
Subjects: Social and Information Networks (cs.SI); Multiagent Systems (cs.MA); Systems and Control (cs.SY); Optimization and Control (math.OC)

This paper provides a model to investigate information spreading over a cyber-social network of agents communicating with each other. The cyber-social network considered here comprises individuals and news agencies. Each individual holds a belief represented by a scalar. Individuals receive information from news agencies that are closer to their beliefs; in this way, confirmation bias is explicitly incorporated into the model. The proposed dynamics of cyber-social networks is adopted from the DeGroot-Friedkin model, where an individual's opinion update mechanism is a convex combination of his innate opinion, his neighbors' opinions at the previous time step (obtained from the social network), and the opinions passed along by the news agencies he follows in the cyber layer. The characteristics of the interdependent social and cyber networks are radically different here: the social network relies on trust and is hence static, while the news agencies are highly dynamic, since they are weighted as a function of the distance between an individual's state and the state of a news agency to account for confirmation bias. The conditions for convergence of the aforementioned dynamics to a unique equilibrium are characterized. The estimation and exact computation of the steady-state values under non-linear and linear state-dependent weight functions are provided. Finally, the impact of polarization in the opinions of news agencies on public opinion evolution is numerically analyzed in the context of the well-known Krackhardt's advice network.
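The update rule described above can be sketched as follows; the exponential weight function, the parameters `a`, `b`, `c`, and the two-agent example are illustrative assumptions, not the paper's exact specification:

```python
import numpy as np

def update_opinions(x, innate, W, news, a=0.3, b=0.4, c=0.3):
    """One synchronous update: a convex combination (a + b + c = 1) of the
    innate opinion, neighbours' opinions, and bias-weighted news opinions."""
    x_new = np.empty_like(x)
    for i, xi in enumerate(x):
        # Confirmation bias: news closer to the current belief gets more weight.
        w = np.exp(-np.abs(news - xi))
        w /= w.sum()
        x_new[i] = a * innate[i] + b * (W[i] @ x) + c * (w @ news)
    return x_new

x = np.array([0.2, 0.8])            # current beliefs
innate = np.array([0.1, 0.9])       # innate opinions
W = np.array([[0.5, 0.5],
              [0.5, 0.5]])          # row-stochastic (static) trust matrix
news = np.array([0.0, 1.0])         # two polarized news agencies
x1 = update_opinions(x, innate, W, news)
```

Note that the news weights are recomputed from the current state at every step, which is exactly what makes the cyber layer dynamic while the social layer stays fixed.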

[6]
Title: An Edge Computing Empowered Radio Access Network With UAV-Mounted FSO Fronthaul and Backhaul: Key Challenges and Approaches
Comments: This work is accepted by IEEE Wireless Communications Magazine
Subjects: Networking and Internet Architecture (cs.NI)

One promising approach to addressing the supply-demand mismatch between terrestrial infrastructure and temporary and/or unexpected traffic demands is to leverage unmanned aerial vehicle (UAV) technologies. Motivated by the recent advancement of UAV technologies and retromodulator based free space optical communication, we propose a novel edge-computing empowered radio access network architecture where the fronthaul and backhaul links are mounted on UAVs for rapid event response and flexible deployment. The implementation of hardware and networking technologies for the proposed architecture is investigated. Due to the limited payload and endurance as well as the high mobility of UAVs, research challenges related to communication resource management, along with recent research progress, are reported.

[7]
Title: Forecasting Economics and Financial Time Series: ARIMA vs. LSTM
Comments: 19 pages, 2 figures, 1 diagram, 2 listings
Subjects: Learning (cs.LG); Machine Learning (stat.ML)

[8]
Title: Corpus Statistics in Text Classification of Online Data
Comments: 12 pages, 6 tables, 1 figure
Subjects: Computation and Language (cs.CL); Information Retrieval (cs.IR); Learning (cs.LG)

The transformation of Machine Learning (ML) from a boutique science to a generally accepted technology has increased the importance of the reproducibility and transportability of ML studies. In the current work, we investigate how corpus characteristics of textual data sets correspond to text classification results. We work with two data sets gathered from sub-forums of an online health-related forum. Our empirical results are obtained for a multi-class sentiment analysis application.

[9]
Title: Reviving and Improving Recurrent Back-Propagation
Subjects: Learning (cs.LG); Machine Learning (stat.ML)

In this paper, we revisit the recurrent back-propagation (RBP) algorithm, discuss the conditions under which it applies as well as how to satisfy them in deep neural networks. We show that RBP can be unstable and propose two variants based on conjugate gradient on the normal equations (CG-RBP) and Neumann series (Neumann-RBP). We further investigate the relationship between Neumann-RBP and back-propagation through time (BPTT) and its truncated version (TBPTT). Our Neumann-RBP has the same time complexity as TBPTT but only requires constant memory, whereas TBPTT's memory cost scales linearly with the number of truncation steps. We examine all RBP variants along with BPTT and TBPTT in three different application domains: associative memory with continuous Hopfield networks, document classification in citation networks using graph neural networks, and hyperparameter optimization for fully connected networks. All experiments demonstrate that RBPs, especially the Neumann-RBP variant, are efficient and effective for optimizing convergent recurrent neural networks.
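The memory saving behind a Neumann-series variant comes from approximating a matrix-inverse-vector product with a truncated Neumann series, which needs only repeated matrix-vector products and constant extra storage. A minimal numerical sketch (the random matrix and truncation length are illustrative assumptions):

```python
import numpy as np

def neumann_inverse_vec(J, v, k):
    """Approximate (I - J)^{-1} v with the truncated Neumann series
    v + Jv + J^2 v + ... + J^k v, keeping only two vectors in memory.
    Converges when the spectral radius of J is below 1."""
    term, out = v.copy(), v.copy()
    for _ in range(k):
        term = J @ term   # next series term
        out += term       # running sum
    return out

rng = np.random.default_rng(0)
J = 0.1 * rng.standard_normal((5, 5))  # small entries keep spectral radius < 1
v = rng.standard_normal(5)
approx = neumann_inverse_vec(J, v, k=50)
exact = np.linalg.solve(np.eye(5) - J, v)
```

Truncating after k terms mirrors TBPTT's truncation, but without storing the k intermediate states.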

[10]
Title: Decision support with text-based emotion recognition: Deep learning for affective computing
Subjects: Computation and Language (cs.CL)

Emotions widely affect the decision-making of humans and, hence, affective computing takes emotional states into account with the goal of tailoring decision support to individuals. However, the accurate recognition of emotions within narrative materials presents a challenging undertaking due to the complexity and ambiguity of language. Even though deep learning has emerged as the state of the art in various text mining tasks, its benefits with regard to affective computing are not yet understood. We thus propose the following innovations: (1) We adapt recurrent neural networks from the field of deep learning to affective computing. (2) We extend these networks to predict the score of different affective dimensions. (3) We implement transfer learning for pre-training word embeddings. Analyzing the results, we find that deep learning consistently outperforms traditional machine learning, with improvements of up to 21% in F1-score when labeling emotions and 6% in forecast errors when rating the intensity of emotions. Altogether, the findings have considerable implications for the use of affective computing in providing decision support.

[11]
Title: Self-Calibration of Mobile Manipulator Kinematic and Sensor Extrinsic Parameters Through Contact-Based Interaction
Comments: Accepted to the International Conference on Robotics and Automation (ICRA'18), Brisbane, Australia, May 21-25, 2018
Subjects: Robotics (cs.RO)

We present a novel approach for mobile manipulator self-calibration using contact information. Our method, based on point cloud registration, is applied to estimate the extrinsic transform between a fixed vision sensor mounted on a mobile base and an end effector. Beyond sensor calibration, we demonstrate that the method can be extended to include manipulator kinematic model parameters, which involves a non-rigid registration process. Our procedure uses on-board sensing exclusively and does not rely on any external measurement devices, fiducial markers, or calibration rigs. Further, it is fully automatic in the general case. We experimentally validate the proposed method on a custom mobile manipulator platform, and demonstrate centimetre-level post-calibration accuracy in positioning of the end effector using visual guidance only. We also discuss the stability properties of the registration algorithm, in order to determine the conditions under which calibration is possible.

[12]
Title: Deep Component Analysis via Alternating Direction Neural Networks
Subjects: Learning (cs.LG); Computer Vision and Pattern Recognition (cs.CV); Machine Learning (stat.ML)

Despite a lack of theoretical understanding, deep neural networks have achieved unparalleled performance in a wide range of applications. On the other hand, shallow representation learning with component analysis is associated with rich intuition and theory, but smaller capacity often limits its usefulness. To bridge this gap, we introduce Deep Component Analysis (DeepCA), an expressive multilayer model formulation that enforces hierarchical structure through constraints on latent variables in each layer. For inference, we propose a differentiable optimization algorithm implemented using recurrent Alternating Direction Neural Networks (ADNNs) that enable parameter learning using standard backpropagation. By interpreting feed-forward networks as single-iteration approximations of inference in our model, we provide both a novel theoretical perspective for understanding them and a practical technique for constraining predictions with prior knowledge. Experimentally, we demonstrate performance improvements on a variety of tasks, including single-image depth prediction with sparse output constraints.

[13]
Title: Learning to Segment via Cut-and-Paste
Subjects: Computer Vision and Pattern Recognition (cs.CV)

This paper presents a weakly-supervised approach to object instance segmentation. Starting with known or predicted object bounding boxes, we learn object masks by playing a game of cut-and-paste in an adversarial learning setup. A mask generator takes a detection box and Faster R-CNN features, and constructs a segmentation mask that is used to cut-and-paste the object into a new image location. The discriminator tries to distinguish between real objects, and those cut and pasted via the generator, giving a learning signal that leads to improved object masks. We verify our method experimentally using Cityscapes, COCO, and aerial image datasets, learning to segment objects without ever having seen a mask in training. Our method exceeds the performance of existing weakly supervised methods, without requiring hand-tuned segment proposals, and reaches 90% of supervised performance.

[14]
Title: Differential Privacy for Growing Databases
Subjects: Data Structures and Algorithms (cs.DS); Databases (cs.DB)

We study the design of differentially private algorithms for adaptive analysis of dynamically growing databases, where a database accumulates new data entries while the analysis is ongoing. We provide a collection of tools for machine learning and other types of data analysis that guarantee differential privacy and accuracy as the underlying databases grow arbitrarily large. We give both a general technique and a specific algorithm for adaptive analysis of dynamically growing databases. Our general technique is illustrated by two algorithms that schedule black-box access to some algorithm that operates on a fixed database, generically transforming private and accurate algorithms for static databases into private and accurate algorithms for dynamically growing databases. These results show that almost any private and accurate algorithm can be rerun at appropriate points of data growth with minimal loss of accuracy, even when data growth is unbounded. Our specific algorithm directly adapts the private multiplicative weights algorithm to the dynamic setting, maintaining the accuracy guarantee of the static setting through unbounded data growth. Along the way, we develop extensions of several other differentially private algorithms to the dynamic setting, which may be of independent interest for future work on the design of differentially private algorithms for growing databases.
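As a toy sketch of the "rerun at appropriate points of data growth" idea (not the paper's actual algorithm), one can rerun a static Laplace-mechanism release each time the database doubles, splitting a fixed privacy budget geometrically across rounds; all names, the schedule, and the budget split below are illustrative assumptions:

```python
import numpy as np

def private_count(db, epsilon, rng):
    """Laplace mechanism for a counting query (sensitivity 1)."""
    return len(db) + rng.laplace(scale=1.0 / epsilon)

rng = np.random.default_rng(42)
db, total_epsilon = [], 1.0
releases = []
next_size, round_no = 1, 0
for i in range(100):
    db.append(i)                      # database grows one entry at a time
    if len(db) == next_size:          # rerun the static mechanism on doubling
        round_no += 1
        eps_t = total_epsilon / 2 ** round_no  # budgets sum to < total_epsilon
        releases.append(private_count(db, eps_t, rng))
        next_size *= 2
```

Even though later rounds get smaller budgets (hence more noise), the true count doubles between releases, so the relative error of each release stays comparable.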

[15]
Title: Coding for Channels with SNR Variation: Spatial Coupling and Efficient Interleaving
Comments: 8 pages, Submitted to IEEE Transactions on Magnetics (TMAG)
Subjects: Information Theory (cs.IT)

In magnetic-recording systems, consecutive sections experience different signal-to-noise ratios (SNRs). To perform error correction over these systems, one approach is to use an individual block code for each section. However, a section affected by a lower SNR shows a weaker performance compared to a section affected by a higher SNR. A commonly used technique is to perform interleaving across blocks to alleviate the negative effects of varying SNR. However, this scheme is typically costly to implement and does not result in the best performance. Spatially-coupled (SC) codes are a family of graph-based codes with capacity-approaching performance and low latency decoding. An SC code is constructed by partitioning an underlying block code into several component matrices, and coupling copies of the component matrices together. The contribution of this paper is threefold. First, we present a new partitioning technique to efficiently construct SC codes with column weights 4 and 6. Second, we present an SC code construction for channels with SNR variation. Our SC code construction provides local error correction for each section by means of the underlying codes that cover one section each, and simultaneously, an added level of error correction by means of coupling among the underlying codes. Consequently, and because of the structure of SC codes, more reliable sections can help unreliable ones to achieve an improved performance. Third, we introduce a low-complexity interleaving scheme specific to SC codes that further improves their performance over channels with SNR variation. Our simulation results show that our SC codes outperform individual block codes by more than one and two orders of magnitude in the error floor region compared to the block codes with and without regular interleaving, respectively. This improvement is more pronounced by increasing the memory and column weight.
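The "partition and couple" construction can be sketched with a toy parity-check matrix; the particular partition below and the coupling memory of 1 are illustrative assumptions, not the paper's partitioning technique:

```python
import numpy as np

def couple(H0, H1, L):
    """Build a spatially-coupled parity-check matrix from the two component
    matrices of an underlying block code, with L coupled copies (memory 1).
    Each copy places H0 on the diagonal block and H1 one block-row below."""
    m, n = H0.shape
    H_sc = np.zeros(((L + 1) * m, L * n), dtype=int)
    for k in range(L):
        H_sc[k * m:(k + 1) * m, k * n:(k + 1) * n] = H0
        H_sc[(k + 1) * m:(k + 2) * m, k * n:(k + 1) * n] = H1
    return H_sc

# Toy partition of a 2x4 parity-check matrix: H = H0 + H1 elementwise.
H = np.array([[1, 1, 1, 0],
              [0, 1, 1, 1]])
H0 = np.array([[1, 1, 0, 0],
               [0, 1, 0, 1]])
H1 = H - H0
H_sc = couple(H0, H1, L=3)
```

Because H0 + H1 = H, every variable node in the coupled code keeps the column weight it had in the underlying code, while the overlap between copies provides the extra coupling-based protection.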

[16]
Title: Leveraging Sparsity to Speed Up Polynomial Feature Expansions of CSR Matrices Using $K$-Simplex Numbers
Subjects: Data Structures and Algorithms (cs.DS); Numerical Analysis (cs.NA)

We provide an algorithm that speeds up polynomial and interaction feature expansion on sparse matrices. The algorithm operates on and produces a compressed sparse row matrix with no densification. It works by defining a bijective function involving simplex numbers of column indices in the original matrix to column indices in the expanded matrix. This function allows for only the nonzero elements to be iterated over and multiplied together, greatly improving the expansion of sparse matrices in compressed sparse row format. For a vector of dimension $D$ and density $0 \le d \le 1$, the algorithm has time complexity $\Theta(d^KD^K)$ where $K$ is the polynomial-feature order; this is an improvement by a factor $d^K$ over the standard method.
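A minimal pure-Python sketch of the index mapping for degree-2 interaction features (the function names and the toy row are illustrative; the paper's algorithm generalizes to order $K$ via $K$-simplex numbers and operates directly on CSR arrays):

```python
from itertools import combinations

def pair_to_col(i, j, D):
    """Map a column pair (i < j) of the original D-dim space to a column
    index in the interaction-expanded space, via triangle numbers.
    This is a bijection onto {0, ..., C(D, 2) - 1}."""
    return D * i - i * (i + 1) // 2 + (j - i - 1)

def expand_row(cols, vals, D):
    """Degree-2 interaction expansion of one sparse row: iterate only over
    nonzero column pairs, never materializing the dense row."""
    out_cols, out_vals = [], []
    for (a, va), (b, vb) in combinations(zip(cols, vals), 2):
        out_cols.append(pair_to_col(a, b, D))
        out_vals.append(va * vb)
    return out_cols, out_vals

# Sparse row with nonzeros at columns 1 and 3 of a D=4 vector.
cols, vals = expand_row([1, 3], [2.0, 5.0], D=4)
```

Only the one nonzero pair is touched, which is where the $d^K$ speedup over densify-then-expand comes from.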

[17]
Title: A New Result on the Complexity of Heuristic Estimates for the A* Algorithm
Journal-ref: Artificial Intelligence, 55, 1 (May 1992), 129-143
Subjects: Artificial Intelligence (cs.AI)

Relaxed models are abstract problem descriptions generated by ignoring constraints that are present in base-level problems. They play an important role in planning and search algorithms, as it has been shown that the length of an optimal solution to a relaxed model yields a monotone heuristic for an A* search of a base-level problem. Optimal solutions to a relaxed model may be computed algorithmically or by search in a further relaxed model, leading to a search that explores a hierarchy of relaxed models. In this paper, we review the traditional definition of problem relaxation and show that searching in the abstraction hierarchy created by problem relaxation will not reduce the computational effort required to find optimal solutions to the base-level problem, unless the relaxed problem found in the hierarchy can be transformed by some optimization (e.g., subproblem factoring). Specifically, we prove that any A* search of the base-level using a heuristic h2 will largely dominate an A* search of the base-level using a heuristic h1, if h1 must be computed by an A* search of the relaxed model using h2.

[18]
Title: Decentralization Meets Quantization
Subjects: Learning (cs.LG); Distributed, Parallel, and Cluster Computing (cs.DC); Systems and Control (cs.SY); Machine Learning (stat.ML)

Optimizing distributed learning systems is an art of balancing between computation and communication. There have been two lines of research that try to deal with slower networks: {\em quantization} for low bandwidth networks, and {\em decentralization} for high latency networks. In this paper, we explore a natural question: {\em can the combination of both decentralization and quantization lead to a system that is robust to both bandwidth and latency?}
Although the system implication of such a combination is trivial, the underlying theoretical principle and algorithm design are challenging: simply quantizing data sent in a decentralized training algorithm would accumulate error. In this paper, we develop a framework of quantized, decentralized training and propose two different strategies, which we call {\em extrapolation compression} and {\em difference compression}. We analyze both algorithms and prove both converge at the rate of $O(1/\sqrt{nT})$ where $n$ is the number of workers and $T$ is the number of iterations, matching the convergence rate for full precision, centralized training. We evaluate our algorithms on training deep learning models, and find that our proposed algorithm significantly outperforms the best of the merely decentralized and merely quantized algorithms for networks with {\em both} high latency and low bandwidth.

[19]
Title: Datalog: Bag Semantics via Set Semantics
Subjects: Databases (cs.DB); Artificial Intelligence (cs.AI); Logic in Computer Science (cs.LO)

Duplicates in data management are common and problematic. In this work, we present a translation of Datalog under bag semantics into a well-behaved extension of Datalog (the so-called warded Datalog+-) under set semantics. From a theoretical point of view, this allows us to reason on bag semantics by making use of the well-established theoretical foundations of set semantics. From a practical point of view, this allows us to handle the bag semantics of Datalog by powerful, existing query engines for the required extension of Datalog. Moreover, this translation has the potential for further extensions -- above all to capture the bag semantics of the semantic web query language SPARQL.

[20]
Title: Frequency-Domain Decoupling for MIMO-GFDM Spatial Multiplexing
Subjects: Information Theory (cs.IT)

Generalized frequency division multiplexing (GFDM) is considered a non-orthogonal waveform and is known to encounter difficulties when used in the spatial multiplexing mode of multiple-input multiple-output (MIMO) scenarios. In this paper, a class of GFDM prototype filters, under which the GFDM system is free from inter-subcarrier interference, is investigated, enabling frequency-domain decoupling in the processing at the GFDM receiver. An efficient MIMO-GFDM detection method based on depth-first sphere decoding is then proposed for this class of filters. Numerical results confirm a significant reduction in complexity, especially when the number of subcarriers is large, compared with existing methods presented in recent years.

[21]
Title: Constrained Deep Learning using Conditional Gradient and Applications in Computer Vision
Subjects: Learning (cs.LG); Computer Vision and Pattern Recognition (cs.CV); Machine Learning (stat.ML)

A number of results have recently demonstrated the benefits of incorporating various constraints when training deep architectures in vision and machine learning. The advantages range from guarantees for statistical generalization to better accuracy to compression. But support for general constraints within widely used libraries remains scarce, and their broader deployment within many applications that can benefit from them remains under-explored. Part of the reason is that stochastic gradient descent (SGD), the workhorse for training deep neural networks, does not natively deal well with constraints that have global scope. In this paper, we revisit a classical first-order scheme from numerical optimization, Conditional Gradients (CG), which has thus far had limited applicability in training deep models. We show via rigorous analysis how various constraints can be naturally handled by modifications of this algorithm. We provide convergence guarantees and show a suite of immediate benefits that are possible -- from training ResNets with fewer layers but better accuracy simply by substituting in our version of CG, to faster training of GANs with 50% fewer epochs in image inpainting applications, to provably better generalization guarantees using efficiently implementable forms of recently proposed regularizers.

[22]
Title: Experiments with Neural Networks for Small and Large Scale Authorship Verification
Subjects: Computation and Language (cs.CL)

We propose two models for a special case of the authorship verification problem. The task is to investigate whether the two documents of a given pair are written by the same author. We consider the authorship verification problem for both small and large scale datasets. The underlying small-scale problem has two main challenges: First, the authors of the documents are unknown to us because no previous writing samples are available. Second, the two documents are short (a few hundred to a few thousand words) and may differ considerably in genre and/or topic. To solve it, we propose a transformation encoder that transforms one document of the pair into the other. This document transformation generates a loss which is used as a recognizable feature to verify whether the authors of the pair are identical. For the large-scale problem, where many authors are involved and more, longer examples are available, a parallel recurrent neural network is proposed. It compares the language models of the two documents. We evaluate our methods on various types of datasets, including the Authorship Identification datasets of the PAN competition, Amazon reviews, and machine learning articles. Experiments show that both methods achieve stable and competitive performance compared to the baselines.

[23]
Title: Learning to Cluster for Proposal-Free Instance Segmentation
Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Learning (cs.LG)

This work proposes a novel learning objective to train a deep neural network to perform end-to-end image pixel clustering. We apply the approach to instance segmentation, which is at the intersection of image semantic segmentation and object detection. We utilize the most fundamental property of instance labeling -- the pairwise relationship between pixels -- as the supervision to formulate the learning objective, then apply it to train a fully convolutional network (FCN) to perform pixel-wise clustering. The resulting clusters can be used as the instance labeling directly. To support labeling of an unlimited number of instances, we further formulate ideas from graph coloring theory into the proposed learning objective. The evaluation on the Cityscapes dataset demonstrates strong performance and thereby provides a proof of concept. Moreover, our approach won second place in the lane detection competition of the 2017 CVPR Autonomous Driving Challenge, and was the top performer without using external data.

[24]
Title: Toward Understanding the Impact of User Participation in Autonomous Ridesharing Systems
Subjects: Computers and Society (cs.CY); Multiagent Systems (cs.MA); Systems and Control (cs.SY)

Autonomous ridesharing systems (ARS) promise many societal and environmental benefits including decreased accident rates, reduced energy consumption and pollutant emissions, and diminished land use for parking. To unleash ARS' potential, stakeholders must understand how the degree of passenger participation influences the ridesharing systems' efficiency. To date, however, a careful study that quantifies the impact of user participation on ARS' performance is missing. Here, we present the first simulation analysis to investigate how and to what extent user participation affects the efficiency of ARS. We demonstrate how specific configurations (e.g., fleet size, vehicle capacity, and the maximum waiting time) of a system can be identified to counter the performance loss due to users' uncoordinated behavior on ridesharing participation. Our results indicate that stakeholders of ARS should base decisions regarding system configurations on insights from data-driven simulations and make tradeoffs between system efficiency and price of anarchy for desired outcomes.

[25]
Title: Optimizing Information Freshness in Wireless Networks under General Interference Constraints
Subjects: Information Theory (cs.IT); Networking and Internet Architecture (cs.NI)

Age of information (AoI) is a recently proposed metric for measuring information freshness. AoI measures the time elapsed since the last received update was generated. We consider the problem of minimizing average and peak AoI in wireless networks under general interference constraints. When fresh information is always available for transmission, we show that a stationary scheduling policy is peak-age optimal. We also prove that this policy achieves an average age that is within a factor of two of the optimal average age. In the case where fresh information is not always available, and the packet/information generation rate has to be controlled along with scheduling links for transmission, we prove an important separation principle: the optimal scheduling policy can be designed assuming fresh information, and, independently, the packet generation rate control can be done by ignoring interference. Peak and average AoI for the discrete-time G/Ber/1 queue are analyzed for the first time, which may be of independent interest.
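The AoI metric itself is easy to state in code. The discrete-time trace below is an illustrative sketch for a source that always has fresh information, under the assumption that a packet delivered in slot t was generated one slot earlier, so the age resets to 1 on delivery:

```python
def aoi_trace(deliveries, horizon):
    """Track age of information over discrete time slots: age grows by 1
    each slot and resets to 1 in a slot where a fresh update is delivered."""
    age, trace = 0, []
    for t in range(horizon):
        age = 1 if t in deliveries else age + 1
        trace.append(age)
    return trace

trace = aoi_trace(deliveries={2, 5}, horizon=8)
peak = max(trace)                  # peak AoI over the horizon
avg = sum(trace) / len(trace)      # average AoI over the horizon
```

Peak and average AoI, the two objectives studied in this line of work, are then just the maximum and mean of such a trace.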

[26]
Title: Distributed Scheduling Algorithms for Optimizing Information Freshness in Wireless Networks
Subjects: Information Theory (cs.IT); Networking and Internet Architecture (cs.NI)

Age of Information (AoI) measures the time elapsed since the last received information packet was generated at the source. We consider the problem of AoI minimization for single-hop flows in a wireless network, under pairwise interference constraints and time-varying channels. We consider a simple yet broad class of distributed scheduling policies, in which a transmission is attempted over each link with a certain attempt probability. We obtain an interesting relation between the optimal attempt probability of a link and the optimal AoI of the link and its neighboring links. We then show that the optimal attempt probabilities can be computed by solving a convex optimization problem, which can be done distributively.

[27]
Title: Optimizing Age of Information in Wireless Networks with Perfect Channel State Information
Subjects: Information Theory (cs.IT); Networking and Internet Architecture (cs.NI)

Age of information (AoI), defined as the time elapsed since the last received update was generated, is a newly proposed metric to measure the timeliness of information updates in a network. We consider the AoI minimization problem for a network with general interference constraints and time-varying channels. We propose two policies, namely a virtual-queue-based policy and an age-based policy, for the setting where the channel state is available to the network scheduler at each time step. We prove that the virtual-queue-based policy is nearly optimal, up to a constant additive factor, and the age-based policy is at most a factor of 4 away from optimality. Compared with our previous work, which derived age-optimal policies when channel state information is not available to the scheduler, we demonstrate a 4-fold improvement in age due to the availability of channel state information.

[28]
Title: Queuing Theory Guided Intelligent Traffic Scheduling through Video Analysis using Dirichlet Process Mixture Model
Subjects: Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)

Accurate prediction of traffic signal duration at a roadway junction is a challenging problem due to the dynamic nature of traffic flows. Though supervised learning can be used, parameters may vary across roadway junctions. In this paper, we present a computer vision guided expert system that can learn the departure rate of a given traffic junction modeled using traditional queuing theory. First, we temporally group the optical flow of the moving vehicles using a Dirichlet Process Mixture Model (DPMM). These groups are referred to as tracklets or temporal clusters. Tracklet features are then used to learn the dynamic behavior of a traffic junction, especially during the on/off cycles of a signal. The proposed queuing-theory based approach can predict the signal open duration for the next cycle with higher accuracy when compared with other popular features used for tracking. The hypothesis has been verified on two publicly available video datasets. The results reveal that the DPMM-based features are better than existing tracking frameworks at estimating the departure rate $\mu$, and thus signal duration prediction is more accurate when tested on these datasets. The method can be used for designing intelligent, operator-independent traffic control systems for roadway junctions in cities and on highways.
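The abstract does not spell out the DPMM machinery; as a minimal sketch under assumptions (the concentration parameter `alpha` is an illustrative knob, not a value from the paper), the Chinese Restaurant Process below is the prior that lets a Dirichlet Process Mixture grow the number of tracklet clusters with the data:

```python
import random

def crp_partition(n, alpha, rng):
    """Sample a partition of n items from the Chinese Restaurant
    Process, the clustering prior underlying a DPMM: item i joins an
    existing cluster with probability proportional to its size, or
    opens a new cluster with probability proportional to alpha."""
    tables = []   # tables[k] = number of items in cluster k
    assign = []   # assign[i] = cluster index of item i
    for i in range(n):
        weights = tables + [alpha]        # existing clusters, then "new"
        r = rng.random() * (i + alpha)    # total mass is sum(tables)+alpha
        acc = 0.0
        for k, w in enumerate(weights):
            acc += w
            if r < acc:
                break
        if k == len(tables):
            tables.append(1)              # open a new cluster
        else:
            tables[k] += 1
        assign.append(k)
    return assign, tables
```

Larger clusters attract more items ("rich get richer"), which is why dense optical-flow groups naturally emerge as tracklets without fixing the cluster count in advance.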

[29]
Title: An extended type system with lambda-typed lambda-expressions (extended version)
Authors: Matthias Weber
Subjects: Logic in Computer Science (cs.LO)

We present the type system $\mathtt{d}$, an extended type system with lambda-typed lambda-expressions. It is related to type systems originating from the Automath project. $\mathtt{d}$ extends existing lambda-typed systems by an existential abstraction operator as well as propositional operators. $\beta$-reduction is extended to also normalize negated expressions using a subset of the laws of classical negation; hence $\mathtt{d}$ normalizes both proofs and formulas, which are handled uniformly as functional expressions. $\mathtt{d}$ uses a reflexive typing axiom for a constant $\tau$ to which no function can be typed. Several properties are shown, including confluence, subject reduction, uniqueness of types, strong normalization, and consistency. We illustrate how, due to its limited logical strength, additional axioms must be added to $\mathtt{d}$ both for negation and for the mathematical structures whose deductions are to be formalized.

[30]
Title: Robust event-stream pattern tracking based on correlative filter
Subjects: Computer Vision and Pattern Recognition (cs.CV)

Object tracking based on the retina-inspired, event-based dynamic vision sensor (DVS) is challenging due to noise events, rapid changes of event-stream shape, cluttered complex background textures, and occlusion. To address these challenges, this paper presents a robust event-stream pattern tracking method based on a correlative filter mechanism. In the proposed method, rate coding is used to encode the event-stream object in each segment. Feature representations from hierarchical convolutional layers of a deep convolutional neural network (CNN) are used to represent the appearance of the rate-encoded event-stream object. The results show that our method not only achieves good tracking performance in many complicated scenes with noise events, complex background textures, occlusion, and intersecting trajectories, but is also robust to variable scale, variable pose, and non-rigid deformations. In addition, this correlative-filter based event-stream tracking has the advantage of high speed. The proposed approach will promote the potential applications of event-based vision sensors in self-driving, robotics, and many other high-speed scenarios.

[31]
Title: Evolving Deep Convolutional Neural Networks by Variable-length Particle Swarm Optimization for Image Classification
Comments: accepted by IEEE CEC 2018
Subjects: Neural and Evolutionary Computing (cs.NE); Computer Vision and Pattern Recognition (cs.CV)

Convolutional neural networks (CNNs) are one of the most effective deep learning methods for solving image classification problems, but the best architecture of a CNN for a specific problem can be extremely complicated and hard to design. This paper focuses on utilising Particle Swarm Optimisation (PSO) to automatically search for the optimal architecture of CNNs without any manual work involved. In order to achieve this goal, three improvements are made to traditional PSO. First, a novel encoding strategy inspired by computer networks, which empowers particle vectors to easily encode CNN layers, is proposed; second, in order to allow the proposed method to learn variable-length CNN architectures, a Disabled layer is designed to hide some dimensions of the particle vector to achieve variable-length particles; third, since the learning process on large data is slow, partial datasets are randomly picked for the evaluation to dramatically speed it up. The proposed algorithm is examined and compared with 12 existing algorithms, including the state-of-the-art methods, on three widely used image classification benchmark datasets. The experimental results show that the proposed algorithm is a strong competitor to the state-of-the-art algorithms in terms of classification error. This is the first work using PSO to automatically evolve the architectures of CNNs.

[32]
Title: Attack Trees in Isabelle -- CTL semantics, correctness and completeness
Subjects: Cryptography and Security (cs.CR); Logic in Computer Science (cs.LO)

In this paper, we present a proof theory for attack trees. Attack trees are a well-established and useful model for the construction of attacks on systems, since they allow a stepwise exploration of high-level attacks in application scenarios. Using the expressiveness of Higher Order Logic in Isabelle, we develop a generic theory of attack trees with a state-based semantics based on Kripke structures and CTL. The resulting framework allows mechanically supported analysis of the meta-theory of the attack tree proof calculus, while at the same time the developed proof theory enables application to case studies. A central correctness result proved in Isabelle establishes a connection between the notion of attack tree validity and a CTL attack statement. The application is illustrated on an insider attack on healthcare IoT systems.

[33]
Title: Variational Bayesian Inference of Line Spectral Estimation with Multiple Measurement Vectors
Subjects: Information Theory (cs.IT)

In this paper, the line spectral estimation (LSE) problem with multiple measurement vectors (MMVs) is studied using Bayesian methods. Motivated by the recently proposed variational line spectral estimation (VALSE) method, we extend it to the MMV setting, which is especially important in array signal processing. The VALSE method automatically estimates the model order and nuisance parameters such as the noise variance and weight variance. In addition, by approximating the probability density function (PDF) of the frequencies with a mixture of von Mises PDFs, closed-form update equations and the uncertainty of the estimates can be obtained. Interestingly, we find that VALSE with MMVs can be viewed as applying VALSE with a single measurement vector (SMV) to each snapshot and combining the intermediate data appropriately. Furthermore, the proposed prior distribution provides a good interpretation of the tradeoff between grid and off-grid based methods. Finally, numerical results demonstrate the effectiveness of the VALSE method compared to state-of-the-art methods in the MMV setting.

[34]
Title: Argumentation theory for mathematical argument
Comments: 34 pages; submitted to Argumentation
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)

To adequately model mathematical arguments the analyst must be able to represent the mathematical objects under discussion and the relationships between them, as well as inferences drawn about these objects and relationships as the discourse unfolds. We introduce a framework with these properties, which has been applied to both mathematical dialogues and expository texts. The framework can recover salient elements of discourse at, and within, the sentence level, as well as the way mathematical content connects to form larger argumentative structures. We show how the framework might be used to support computational reasoning, and argue that it provides a more natural way to examine the process of proving theorems than do Lamport's structured proofs.

[35]
Title: Analysing Developers Affectiveness through Markov chain Models
Subjects: Software Engineering (cs.SE)

In this paper, we present an analysis of more than 500K comments from open-source repositories of software systems. Our aim is to empirically determine how developers interact with each other under certain psychological conditions generated by the politeness, sentiment, and emotion expressed in developers' comments. Developers involved in open-source projects do not usually know each other; they mainly communicate through mailing lists, chat rooms, and tools such as issue tracking systems. The way in which they communicate affects the development process and the productivity of the people involved in the project. We evaluated the politeness, sentiment, and emotions of comments posted by developers and studied the communication flow to understand how they interacted in the presence of impolite and negative comments (and vice versa). Our analysis shows that in the presence of an impolite or negative comment, the probability of the next comment being impolite or negative is 14% and 25%, respectively; anger, however, has a 40% probability of being followed by another angry comment. These results could help managers keep the development phases of a system under control, since social aspects can seriously affect a developer's productivity. In a distributed environment this may have a particular resonance.
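Probabilities like the 14%/25%/40% figures above come from first-order Markov chain transition estimates over sequences of labeled comments. A minimal sketch (the label sequence below is invented for illustration, not from the paper's data):

```python
from collections import defaultdict

def transition_probs(labels):
    """Estimate first-order Markov transition probabilities from a
    sequence of per-comment affect labels (e.g. 'pos', 'neg', 'anger'):
    P(next label | current label) from adjacent-pair counts."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(labels, labels[1:]):
        counts[a][b] += 1
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in counts.items()}

# Hypothetical comment-affect sequence from one issue thread:
seq = ["pos", "neg", "neg", "pos", "anger", "anger", "pos", "neg"]
P = transition_probs(seq)
```

Here `P["neg"]["neg"]` answers the paper's question directly: given a negative comment, how likely is the next one to be negative as well.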

[36]
Title: Weakly Supervised Salient Object Detection Using Image Labels
Subjects: Computer Vision and Pattern Recognition (cs.CV)

Deep learning based salient object detection has recently achieved great success, with performance that greatly outperforms unsupervised methods. However, annotating per-pixel saliency masks is a tedious and inefficient procedure. In this paper, we note that superior salient object detection can be obtained by iteratively mining and correcting the labeling ambiguity in saliency maps from traditional unsupervised methods. We propose to use the combination of a coarse salient object activation map from the classification network and saliency maps generated by unsupervised methods as pixel-level annotations, and develop a simple yet very effective algorithm to train fully convolutional networks for salient object detection supervised by these noisy annotations. Our algorithm alternates between exploiting a graphical model and training a fully convolutional network for model updating. The graphical model corrects the internal labeling ambiguity through spatial consistency and structure preservation, while the fully convolutional network helps to correct the cross-image semantic ambiguity and simultaneously updates the coarse activation map for the next iteration. Experimental results demonstrate that our proposed method greatly outperforms all state-of-the-art unsupervised saliency detection methods and is comparable to the current best strongly-supervised methods, which are trained with thousands of pixel-level saliency map annotations, on all public benchmarks.

[37]
Title: Learning Unsupervised Visual Grounding Through Semantic Self-Supervision
Subjects: Computer Vision and Pattern Recognition (cs.CV)

Localizing natural language phrases in images is a challenging problem that requires joint understanding of both the textual and visual modalities. In the unsupervised setting, the lack of supervisory signals exacerbates this difficulty. In this paper, we propose a novel framework for unsupervised visual grounding which uses concept learning as a proxy task to obtain self-supervision. The simple intuition behind this idea is to encourage the model to localize to regions which can explain some semantic property in the data; in our case, the property is the presence of a concept in a set of images. We present thorough quantitative and qualitative experiments to demonstrate the efficacy of our approach, showing a 5.6% improvement over the current state of the art on the Visual Genome dataset, a 5.8% improvement on the ReferItGame dataset, and performance comparable to the state of the art on the Flickr30k dataset.

[38]
Title: MergeNet: A Deep Net Architecture for Small Obstacle Discovery
Subjects: Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO)

We present a novel network architecture called MergeNet for discovering small obstacles in on-road scenes in the context of autonomous driving. The architecture rests on the central consideration of training with a small amount of data, since the physical setup and the annotation process for small obstacles are hard to scale. To make effective use of the limited data, we propose a multi-stage training procedure involving weight sharing, separate learning of low- and high-level features from the RGBD input, and a refining stage which learns to fuse the obtained complementary features. The model is trained and evaluated on the Lost and Found dataset and achieves state-of-the-art results with just 135 images, in comparison to the 1000 images used by the previous benchmark. Additionally, we compare our results with recent methods trained on 6000 images and show that our method achieves comparable performance with only 1000 training samples.

[39]
Title: Towards Efficient Data-flow Test Data Generation Using KLEE
Subjects: Software Engineering (cs.SE)

Data-flow coverage, one of the white-box testing criteria, focuses on the relations between variable definitions and their uses. Several empirical studies have shown that data-flow testing is more effective than control-flow testing. However, data-flow testing has still not found adoption in practice, due to the lack of effective tool support. To this end, we propose a guided symbolic execution approach to efficiently search for program paths that satisfy data-flow coverage criteria. We implemented this approach on KLEE and evaluated it with 30 program subjects constructed from the subjects used in previous data-flow testing literature, SIR, and the SV-COMP benchmarks. Moreover, we are planning to integrate the data-flow testing technique into the newly proposed symbolic execution engine SmartUnit, a cloud-based unit testing service for industrial software that supports coverage-based testing. It has successfully helped several well-known corporations and institutions in China adopt coverage-based testing in practice, testing more than one million lines of real industrial code in total.

[40]
Title: Learning Mixtures of Product Distributions via Higher Multilinear Moments
Subjects: Learning (cs.LG); Computational Complexity (cs.CC); Data Structures and Algorithms (cs.DS); Machine Learning (stat.ML)

Learning mixtures of $k$ binary product distributions is a central problem in computational learning theory, but one where there are wide gaps between the best known algorithms and lower bounds (even for restricted families of algorithms). We narrow many of these gaps by developing novel insights about how to reason about higher order multilinear moments. Our results include:
1) An $n^{O(k^2)}$ time algorithm for learning mixtures of binary product distributions, giving the first improvement on the $n^{O(k^3)}$ time algorithm of Feldman, O'Donnell and Servedio
2) An $n^{\Omega(\sqrt{k})}$ statistical query lower bound, improving on the $n^{\Omega(\log k)}$ lower bound that is based on connections to sparse parity with noise
3) An $n^{O(\log k)}$ time algorithm for learning mixtures of $k$ subcubes. This special case can still simulate many other hard learning problems, but is much richer than any of them alone. As a corollary, we obtain more flexible algorithms for learning decision trees under the uniform distribution, that work with stochastic transitions, when we are only given positive examples and with a polylogarithmic number of samples for any fixed $k$.
Our algorithms are based on a win-win analysis where we either build a basis for the moments or locate a degeneracy that can be used to simplify the problem, which we believe will have applications to other learning problems over discrete domains.

[41]
Title: SeqFace: Make full use of sequence information for face recognition
Subjects: Computer Vision and Pattern Recognition (cs.CV)

Deep convolutional neural networks (CNNs) have greatly improved face recognition (FR) performance in recent years. Almost all CNNs for FR are trained on carefully labeled datasets containing many identities. However, such high-quality datasets are very expensive to collect, which prevents many researchers from achieving state-of-the-art performance. In this paper, we propose a framework, called SeqFace, for learning discriminative face features. Besides a traditional identity training dataset, SeqFace can train CNNs with an additional dataset comprising a large number of face sequences collected from videos. Moreover, label smoothing regularization (LSR) and a newly proposed discriminative sequence agent (DSA) loss are employed to enhance the discriminative power of deep face features by making full use of the sequence data. Our method achieves excellent performance on Labeled Faces in the Wild (LFW) and YouTube Faces (YTF) with only a single ResNet. The code and models are publicly available online at https://github.com/huangyangyu/SeqFace.

[42]
Title: A Benchmark Study on Sentiment Analysis for Software Engineering Research
Comments: Proceedings of 15th International Conference on Mining Software Repositories (MSR 2018)
Subjects: Software Engineering (cs.SE)

A recent research trend has emerged to identify developers' emotions, by applying sentiment analysis to the content of communication traces left in collaborative development environments. Trying to overcome the limitations posed by using off-the-shelf sentiment analysis tools, researchers recently started to develop their own tools for the software engineering domain. In this paper, we report a benchmark study to assess the performance and reliability of three sentiment analysis tools specifically customized for software engineering. Furthermore, we offer a reflection on the open challenges, as they emerge from a qualitative analysis of misclassified texts.

[43]
Title: Topology Estimation using Graphical Models in Multi-Phase Power Distribution Grids
Subjects: Systems and Control (cs.SY); Optimization and Control (math.OC); Machine Learning (stat.ML)

The distribution grid is the medium- and low-voltage part of a large power system. Structurally, the majority of distribution networks operate radially, such that energized lines form a collection of trees, i.e. a forest, with a substation at the root of each tree. The operational topology/forest may change from time to time; however, tracking these changes, though important for distribution grid operation and control, is hindered by limited real-time monitoring. This paper develops a learning framework to reconstruct the radial operational structure of the distribution grid from synchronized voltage measurements in the grid, subject to exogenous fluctuations in nodal power consumption. To detect operational lines, our learning algorithm uses conditional independence tests for continuous random variables that are applicable to a wide class of probability distributions of the nodal consumption, and Gaussian injections in particular. Moreover, our algorithm applies to the practical case of unbalanced three-phase power flow. Algorithm performance is validated on AC power flow simulations over IEEE distribution grid test cases.

[44]
Title: LoRa Throughput Analysis with Imperfect Spreading Factor Orthogonality
Subjects: Networking and Internet Architecture (cs.NI)

LoRa is one of the promising techniques for enabling Low Power Wide Area Networks (LPWANs) for future Internet-of-Things (IoT) devices. Although LoRa allows flexible adaptation of coverage and data rates, it is subject to two intrinsic types of interference: co-SF interference, where end-devices with the same Spreading Factor (SF) are subject to collisions, and inter-SF interference, where end-devices with different SFs experience collisions. Most current works have assumed perfect orthogonality among different SFs. In this work, we provide a theoretical analysis of the achievable LoRa uplink throughput, where the capture conditions specific to LoRa are included. Results show the accuracy of our analysis despite approximations, and quantify the throughput losses from imperfect SF orthogonality under different SF allocations. Our analysis will enable the design of specific SF allocation mechanisms, in view of further throughput enhancements.

[45]
Title: Dear Sir or Madam, May I introduce the YAFC Corpus: Corpus, Benchmarks and Metrics for Formality Style Transfer
Comments: To appear in the proceedings of North American Chapter of the Association for Computational Linguistics: Human Language Technologies 2018
Subjects: Computation and Language (cs.CL)

Style transfer is the task of automatically transforming a piece of text in one particular style into another. A major barrier to progress in this field has been a lack of training and evaluation datasets, as well as benchmarks and automatic metrics. In this work, we create the largest corpus for a particular stylistic transfer (formality) and show that techniques from the machine translation community can serve as strong baselines for future work. We also discuss challenges of using automatic metrics.

[46]
Title: The Graph Structure of Chebyshev Polynomials over Finite Fields and Applications
Subjects: Discrete Mathematics (cs.DM); Combinatorics (math.CO)

We completely describe the functional graph associated to iterations of Chebyshev polynomials over finite fields. Then, we use our structural results to obtain estimates for the average rho length, average number of connected components and the expected value for the period and preperiod of iterating Chebyshev polynomials.
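The "rho" terminology refers to the tail-plus-cycle shape of an orbit under iteration. As a hedged sketch (the degree-2 polynomial and the prime `p = 101` are illustrative choices, not values from the paper), the functional graph statistics can be computed by brute force for a small field:

```python
def cheb2(x, p):
    """Chebyshev polynomial T_2(x) = 2x^2 - 1 evaluated over GF(p)."""
    return (2 * x * x - 1) % p

def rho_length(x0, p):
    """Preperiod (tail) and period of the orbit of x0 under T_2 mod p.
    Together they give the 'rho length' of x0's walk in the
    functional graph."""
    seen = {}
    x, t = x0, 0
    while x not in seen:
        seen[x] = t
        x = cheb2(x, p)
        t += 1
    return seen[x], t - seen[x]   # (tail, period)

p = 101
avg_rho = sum(sum(rho_length(x, p)) for x in range(p)) / p
```

Each node of the functional graph has exactly one outgoing edge, so every orbit eventually enters a cycle; the paper derives such averages analytically instead of by enumeration.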

[47]
Title: Learning over Knowledge-Base Embeddings for Recommendation
Subjects: Information Retrieval (cs.IR)

State-of-the-art recommendation algorithms -- especially the collaborative filtering (CF) based approaches with shallow or deep models -- usually work with various unstructured information sources for recommendation, such as textual reviews, visual images, and various kinds of implicit or explicit feedback. Though structured knowledge bases were considered in content-based approaches, they have been largely neglected recently due to the availability of vast amounts of data and the learning power of many complex models.
However, structured knowledge bases exhibit unique advantages in personalized recommendation systems. When explicit knowledge about users and items is considered for recommendation, the system can provide highly customized recommendations based on users' historical behaviors. A great challenge in using knowledge bases for recommendation is how to integrate large-scale structured and unstructured data while taking advantage of collaborative filtering for highly accurate performance. Recent achievements in knowledge base embedding shed light on this problem, making it possible to learn user and item representations while preserving the structure of their relationships with external knowledge. In this work, we propose to reason over knowledge base embeddings for personalized recommendation. Specifically, we propose a knowledge base representation learning approach that embeds heterogeneous entities for recommendation. Experimental results on a real-world dataset verify the superior performance of our approach compared with state-of-the-art baselines.
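The abstract does not name its embedding model; a common translation-based choice, sketched here purely as an assumption, is a TransE-style score where a plausible triple (head, relation, tail) satisfies h + r ≈ t in vector space:

```python
def transe_score(h, r, t):
    """TransE-style plausibility: negative L2 distance ||h + r - t||.
    Higher (closer to 0) means the triple is more plausible."""
    return -sum((a + b - c) ** 2 for a, b, c in zip(h, r, t)) ** 0.5

# Toy 2-D embeddings: the true tail satisfies h + r = t exactly,
# the corrupted tail does not (all vectors here are invented).
h, r, t = [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]
corrupt_t = [5.0, -2.0]
```

Training typically pushes the score of observed triples above that of corrupted ones by a margin; recommendation then amounts to ranking candidate item tails for a (user, interacts-with, ?) query.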

[48]
Title: Adaptive strategy for superpixel-based region-growing image segmentation
Journal-ref: J. of Electronic Imaging, 26(6), 061605 (2017)
Subjects: Computer Vision and Pattern Recognition (cs.CV)

This work presents a region-growing image segmentation approach based on superpixel decomposition. From an initial contour-constrained over-segmentation of the input image, the segmentation is achieved by iteratively merging similar superpixels into regions. This approach raises two key issues: (1) how to compute the similarity between superpixels in order to perform accurate merging, and (2) in which order those superpixels must be merged together. In this perspective, we first introduce a robust adaptive multi-scale superpixel similarity in which region comparisons are made both at the content level and at the common-border level. Second, we propose a global merging strategy to efficiently guide the region merging process. This strategy uses an adaptive merging criterion to ensure that the best region aggregations are given highest priority. This allows reaching a final segmentation into consistent regions with strong boundary adherence. We perform experiments on the BSDS500 image dataset to show the extent to which our method compares favorably against other well-known image segmentation algorithms. The obtained results demonstrate the promising potential of the proposed approach.

[49]
Title: Convolutional Point-set Representation: A Convolutional Bridge Between a Densely Annotated Image and 3D Face Alignment
Subjects: Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR)

We present a robust method for estimating the facial pose and shape information from a densely annotated facial image. The method relies on Convolutional Point-set Representation (CPR), a carefully designed matrix representation that summarizes different layers of information encoded in the set of detected points in the annotated image. The CPR disentangles the dependencies of shape and different pose parameters and enables updating different parameters in a sequential manner via convolutional neural networks and recurrent layers. When updating the pose parameters, we sample reprojection errors along a predicted direction and update the parameters based on the pattern of reprojection errors. This technique boosts the model's capability in searching for a local minimum under challenging scenarios. We also demonstrate that annotations from different sources can be merged under the CPR framework, contributing to outperforming the current state-of-the-art solutions for 3D face alignment. Experiments indicate that the proposed CPRFA (CPR-based Face Alignment) significantly improves 3D alignment accuracy when the densely annotated image contains noise and missing values, which is common under "in-the-wild" acquisition scenarios.

[50]
Title: Cost-aware Vulnerability Prediction: the HARMLESS Approach
Comments: 10+1 pages, 2 figures, 5 tables. Submitted to ESEC/FSE 2018
Subjects: Software Engineering (cs.SE)

Society needs more secure software. But predicting vulnerabilities is difficult, and existing methods are not applied in practice due to various limitations. The goal of this paper is to design a vulnerability prediction method in a cost-aware manner, so that it can balance the percentage of vulnerabilities found against the cost of human effort on security review and testing. To this end, this paper presents HARMLESS, an incremental vulnerability prediction tool. HARMLESS is an active learner that (a) builds a support vector machine on the source code files reviewed to date; then (b) suggests which other source code files might have vulnerabilities and need to be reviewed next. A unique feature of HARMLESS is that it can estimate the number of remaining vulnerabilities. To the best of our knowledge, HARMLESS is the first tool providing such an estimate in the arena of vulnerability prediction. Using that estimator, HARMLESS can guide the security review and testing to any desired level of recall, i.e. percentage of vulnerabilities found. In experiments on a case study of the Mozilla Firefox project, HARMLESS found 90%, 95%, and 99% of known vulnerabilities by reviewing and testing 26%, 33%, and 45% of the source code files, respectively.
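The review loop described above can be sketched generically. This is not the paper's implementation: the function names (`score`, `review`, `total_est`) and the stopping rule are assumed interfaces standing in for the SVM ranker, the human reviewer, and the remaining-vulnerability estimator:

```python
def active_review(files, score, review, target_recall, total_est):
    """Hedged sketch of a HARMLESS-style incremental review loop:
    repeatedly send the highest-scored unreviewed file to review,
    stopping once found / estimated-total reaches the recall target."""
    labeled = {}   # file -> True if review found a vulnerability
    found = 0
    while len(labeled) < len(files):
        candidates = [f for f in files if f not in labeled]
        f = max(candidates, key=score)    # most suspicious file first
        labeled[f] = review(f)            # human security review
        found += int(labeled[f])
        est = total_est(labeled)          # estimated total vulnerabilities
        if est and found / est >= target_recall:
            break
    return labeled, found
```

With a good ranker and estimator, the loop stops after reviewing only a fraction of the files, which is exactly the cost/recall trade-off the paper targets.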

[51]
Title: Meta-F*: Metaprogramming and Tactics in an Effectful Program Verifier
Subjects: Programming Languages (cs.PL)

Verification tools for effectful programming languages often rely on automated theorem provers such as SMT solvers to discharge their proof obligations, usually with very limited facilities for user interaction. When the need arises for logics (e.g., higher order or separation logic) or theories (e.g., non-linear arithmetic) that are hard for SMT solvers to efficiently automate, this style of program verification becomes problematic.
Building on ideas from the interactive theorem proving community, we introduce Meta-F*, a metaprogramming framework for the F* effectful language and SMT-based program verification tool. Meta-F* allows developers to write effectful metaprograms suitable for proof scripting, user-defined proof automation, and verified program construction and transformation. Metaprograms are effectful programs in F* itself, making good use of F*'s libraries, IDE support, and extraction to efficient native code. Meta-F*, moreover, is well-integrated with F*'s weakest precondition calculus and can solve or pre-process parts of the verification condition while leaving the rest for the SMT solver.
We evaluate Meta-F* on a variety of examples, demonstrating that tactics, and metaprogramming in general, improve proof stability and automation in F*. Using metaprogrammed decision procedures for richer logics in combination with SMT solving makes it practical to apply F* in settings that were previously out of reach, such as separation logic, or that suffered from poor automation, such as the non-linear arithmetic proofs needed for verifying cryptographic primitives.

[52]
Title: Fusion of an Ensemble of Augmented Image Detectors for Robust Object Detection
Comments: 21 pages, 12 figures, journal paper, MDPI Sensors, 2018
Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Image and Video Processing (eess.IV)

A significant challenge in object detection is the accurate identification of an object's position in image space. One algorithm with one set of parameters is usually not enough, and the fusion of multiple algorithms and/or parameter settings can lead to more robust results. Herein, a new computational intelligence fusion approach based on the dynamic analysis of agreement among object detection outputs is proposed. Furthermore, we propose an online (rather than training-only) image augmentation strategy. Experiments comparing the results both with and without fusion are presented. We demonstrate that the augmented and fused combination gives the best results, with respect to higher accuracy rates and reduction of outlier influence. The approach is demonstrated in the context of cone, pedestrian, and box detection for Advanced Driver Assistance Systems (ADAS) applications.

[53]
Title: Tell Me Why Is It So? Explaining Knowledge Graph Relationships by Finding Descriptive Support Passages
Subjects: Artificial Intelligence (cs.AI); Information Retrieval (cs.IR)

We address the problem of finding descriptive explanations of facts stored in a knowledge graph. This is important in high-risk domains such as healthcare and intelligence, where users need additional information for decision making. It is especially crucial for applications that rely on automatically constructed knowledge bases, where machine-learned systems extract facts from an input corpus and the working of the extractors is opaque to the end user. We follow an approach inspired by information retrieval and propose a simple, efficient, yet effective solution that takes into account passage-level as well as document-level properties to produce a ranked list of passages describing a given input relation. We test our approach using Wikidata as the knowledge base and Wikipedia as the source corpus, and report results of user studies conducted to study the effectiveness of our proposed model.

[54]
Title: Improving Bitcoin's Resilience to Churn
Subjects: Cryptography and Security (cs.CR)

Efficient and reliable block propagation on the Bitcoin network is vital for ensuring the scalability of this peer-to-peer network. To this end, several schemes have been proposed over the last few years to speed up block propagation, most notably the compact block protocol (BIP 152). Despite this, we show experimental evidence that nodes that have recently joined the network may need about ten days before this protocol becomes 90% effective. This problem is endemic for nodes that do not have persistent network connectivity. We propose to mitigate this ineffectiveness by maintaining mempool synchronization among Bitcoin nodes. For this purpose, we design and implement into Bitcoin a new prioritized data synchronization protocol, called FalafelSync. Our experiments show that FalafelSync helps intermittently connected nodes maintain better consistency with more stable nodes, thereby showing promise for improving block propagation in the broader network. In the process, we have also developed an effective logging mechanism for Bitcoin nodes, which we release for public use.

[55]
Title: Multi-device, Multi-tenant Model Selection with GP-EI
Subjects: Learning (cs.LG); Distributed, Parallel, and Cluster Computing (cs.DC); Machine Learning (stat.ML)

Bayesian optimization is the core technique behind the emergence of AutoML, which holds the promise of automatically searching for models and hyperparameters to make machine learning techniques more accessible. As such services are moving towards the cloud, we ask -- {\em When multiple AutoML users share the same computational infrastructure, how should we allocate resources to maximize the "global happiness" of all users?}
We focus on GP-EI, one of the most popular algorithms for automatic model selection and hyperparameter tuning, and develop a novel multi-device, multi-tenant extension that is aware of \emph{multiple} computation devices and multiple users sharing the same set of computation devices. Theoretically, given $N$ users and $M$ devices, we obtain a regret bound of $O((\text{\bf {MIU}}(T,K) + M)\frac{N^2}{M})$, where $\text{\bf {MIU}}(T,K)$ refers to the maximal incremental uncertainty up to time $T$ for the covariance matrix $K$. Empirically, we evaluate our algorithm on two applications of automatic model selection, and show that our algorithm significantly outperforms the strategy of serving users independently. Moreover, when multiple computation devices are available, we achieve near-linear speedup when the number of users is much larger than the number of devices.
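In the single-user setting, GP-EI ranks candidate models by their expected improvement over the best observation so far. A minimal sketch of that acquisition function follows; it illustrates plain GP-EI, not the authors' multi-tenant extension, and the candidate values are invented for illustration:

```python
import math

def expected_improvement(mu, sigma, best_y):
    """Expected improvement of a candidate with GP posterior mean `mu`
    and standard deviation `sigma`, given incumbent best value `best_y`
    (maximization convention)."""
    if sigma == 0.0:
        return max(mu - best_y, 0.0)
    z = (mu - best_y) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    # EI trades off exploitation (mu - best_y) against exploration (sigma).
    return (mu - best_y) * cdf + sigma * pdf

# Hypothetical candidates as (posterior mean, posterior std) pairs;
# the one with the highest EI would be evaluated next.
candidates = [(0.80, 0.05), (0.78, 0.15), (0.82, 0.01)]
best = max(candidates, key=lambda ms: expected_improvement(*ms, 0.81))
```

Note how a high-variance candidate can win even with a lower mean; the multi-tenant question in the paper is how to share such evaluations fairly across $N$ users and $M$ devices.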

[56]
Title: Viewpoint: Artificial Intelligence and Labour
Subjects: Computers and Society (cs.CY); Artificial Intelligence (cs.AI)

The welfare of modern societies has been intrinsically linked to wage labour. With some exceptions, the modern human has to sell her labour-power to be able to reproduce biologically and socially. Thus, a lingering fear of technological unemployment features prominently as a theme among Artificial Intelligence researchers. In this short paper we show that, if past trends are anything to go by, this fear is irrational. On the contrary, we argue that the main problem humanity will be facing is the normalisation of extremely long working hours.

[57]
Title: A Dual Approach to Scalable Verification of Deep Networks
Subjects: Learning (cs.LG); Machine Learning (stat.ML)

This paper addresses the problem of formally verifying desirable properties of neural networks, i.e., obtaining provable guarantees that the outputs of the neural network will always behave in a certain way for a given class of inputs. Most previous work on this topic was limited in its applicability by the size of the network, the network architecture, and the complexity of the properties to be verified. In contrast, our framework applies to a much more general class of activation functions and of specifications on neural network inputs and outputs. We formulate verification as an optimization problem and solve a Lagrangian relaxation of it to obtain an upper bound on the verification objective. Our approach is anytime, i.e., it can be stopped at any time and a valid bound on the objective is obtained. We develop specialized verification algorithms with provable tightness guarantees under special assumptions, and demonstrate the practical significance of our general verification approach on a variety of verification tasks.

[58]
Subjects: Information Theory (cs.IT)

[59]
Title: A Multi-perspective Approach To Anomaly Detection For Self-aware Embodied Agents
Comments: IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) 2018
Subjects: Computer Vision and Pattern Recognition (cs.CV)

This paper focuses on multi-sensor anomaly detection for moving cognitive agents using both external and private first-person visual observations. Both observation types are used to characterize agents' motion in a given environment. The proposed method generates locally uniform motion models by dividing a Gaussian process that approximates agents' displacements on the scene, and provides a Shared Level (SL) self-awareness based on Environment Centered (EC) models. Such models are then used to train, in a semi-unsupervised way, a set of Generative Adversarial Networks (GANs) that produce an estimation of external and internal parameters of moving agents. The obtained results demonstrate the feasibility of using multi-perspective data for predicting and analyzing trajectory information.

[60]
Title: Variational Knowledge Graph Reasoning
Subjects: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)

Inferring missing links in knowledge graphs (KG) has attracted a lot of attention from the research community. In this paper, we tackle a practical query answering task involving predicting the relation of a given entity pair. We frame this prediction problem as an inference problem in a probabilistic graphical model and aim at resolving it from a variational inference perspective. In order to model the relation between the query entity pair, we assume that there exist underlying latent variables (the ensemble of all paths connecting these two nodes) in the KG, which carry the equivalent semantics of their relation. However, due to the intractability of enumerating connections in large KGs, we propose to use variational inference to maximize the evidence lower bound. More specifically, our framework (\textsc{Diva}) is composed of three modules, i.e. a posterior approximator, a prior (path finder), and a likelihood (path reasoner). By using variational inference, we are able to incorporate them closely into a unified architecture and jointly optimize them to perform KG reasoning. With active interactions among these sub-modules, \textsc{Diva} is better at handling noise and coping with more complex reasoning scenarios. To evaluate our method, we conduct link prediction experiments on the NELL-995 and FB15K datasets and achieve state-of-the-art performance on both.
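In this variational framing, the intractable posterior over paths is handled by maximizing an evidence lower bound of the usual form (generic notation, since the abstract does not give the paper's exact symbols):

```latex
\log p(r \mid e_s, e_t)
  \;\ge\; \mathbb{E}_{q_\phi(L \mid e_s, e_t, r)}\!\left[ \log p_\theta(r \mid L) \right]
  \;-\; \mathrm{KL}\!\left( q_\phi(L \mid e_s, e_t, r) \,\middle\|\, p_\psi(L \mid e_s, e_t) \right)
```

Here $L$ is a latent path between the query entities $e_s, e_t$ and $r$ the relation; the three factors correspond to the three named modules: $q_\phi$ the posterior approximator, $p_\psi$ the prior (path finder), and $p_\theta$ the likelihood (path reasoner).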

[61]
Title: Learning Long Term Dependencies via Fourier Recurrent Units
Subjects: Learning (cs.LG); Machine Learning (stat.ML)

It is well known that training recurrent neural networks for tasks with long-term dependencies is challenging. One of the main reasons is the vanishing or exploding gradient problem, which prevents gradient information from propagating to early layers. In this paper we propose a simple recurrent architecture, the Fourier Recurrent Unit (FRU), that stabilizes the gradients that arise in its training while giving us stronger expressive power. Specifically, the FRU summarizes the hidden states $h^{(t)}$ along the temporal dimension with Fourier basis functions. This allows gradients to easily reach any layer due to FRU's residual learning structure and the global support of trigonometric functions. We show that the FRU has gradient lower and upper bounds independent of the temporal dimension. We also show the strong expressivity of the sparse Fourier basis, from which the FRU obtains its expressive power. Our experimental study also demonstrates that, with fewer parameters, the proposed architecture outperforms other recurrent architectures on many tasks.
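The core idea of summarizing hidden states with Fourier basis functions can be sketched in a few lines of NumPy. The frequencies, dimensions, and normalization below are illustrative assumptions, not the paper's exact cell:

```python
import numpy as np

def fourier_summaries(hidden_states, freqs, T):
    """Summarize hidden states h^(1..t) along time with cosine Fourier
    basis functions: u_k = (1/T) * sum_t cos(2*pi*f_k*t/T) * h^(t).
    hidden_states: (t, d) array; freqs: (K,) array. Returns (K, d)."""
    t_idx = np.arange(1, hidden_states.shape[0] + 1)          # 1..t
    basis = np.cos(2 * np.pi * np.outer(freqs, t_idx) / T)    # (K, t)
    return basis @ hidden_states / T                          # (K, d)

# Because each summary is a global (all-timestep) weighted sum, the
# gradient of u_k w.r.t. an early h^(t) is a bounded cosine weight
# rather than a product of T Jacobians -- the property the FRU exploits.
h = np.random.randn(100, 8)   # a toy hidden-state trajectory
u = fourier_summaries(h, freqs=np.array([0.0, 1.0, 2.0]), T=100)
```

With frequency 0 the summary reduces to a running mean of the hidden states; higher frequencies capture oscillatory temporal structure.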

[62]
Title: Structural query-by-committee
Subjects: Learning (cs.LG); Machine Learning (stat.ML)

In this work, we describe a framework that unifies many different interactive learning tasks. We present a generalization of the {\it query-by-committee} active learning algorithm for this setting, and we study its consistency and rate of convergence, both theoretically and empirically, with and without noise.

[63]
Title: Comparative Study of Approximate Multipliers
Subjects: Emerging Technologies (cs.ET)

Approximate multipliers are being widely advocated for energy-efficient computing in applications that exhibit an inherent tolerance to inaccuracy. However, the inclusion of accuracy as a key design parameter, besides performance, area and power, makes the identification of the most suitable approximate multiplier quite challenging. In this paper, we identify three major decision-making factors for the selection of an approximate multiplier circuit: (1) the type of approximate full adder (FA) used to construct the multiplier, (2) the architecture of the multiplier, i.e., array or tree, and (3) the placement of approximate and exact sub-modules in the main multiplier module. Based on these factors, we explore the design space of circuit-level implementations of approximate multipliers. We use circuit-level implementations of some of the most widely used approximate full adders, i.e., approximate mirror adders, XOR/XNOR-based approximate full adders, and the inexact adder cell. These FA cells are then used to build approximate high-order compressors as building blocks for 8x8 array and tree multipliers. We then develop various implementations of higher-bit multipliers using combinations of exact and inaccurate 8x8 multiplier cells. All implementations use Cadence's Spectre tool with TSMC 65nm technology. The design space of these multipliers is explored based on power, area, delay and error, and the best approximate multiplier designs are identified. The report also presents the validation of our results using an image blending application. An open-source library of the implemented cells and multiplier circuits is available online.
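The accuracy-versus-cost trade-off surveyed above can be illustrated with the simplest software analogue of an approximate multiplier: dropping low-order bits of the partial products. This is a generic illustration, not one of the surveyed FA-based circuits, and the error metric is one arbitrary choice among the paper's evaluation axes:

```python
def approx_mul8(a, b, trunc_bits=4):
    """8x8 approximate multiply: bits of each partial product below
    weight 2**trunc_bits are dropped, standing in for the hardware
    (adder cells) that an approximate design saves."""
    assert 0 <= a < 256 and 0 <= b < 256
    result = 0
    for i in range(8):                # one partial product per bit of b
        if (b >> i) & 1:
            pp = a << i
            result += (pp >> trunc_bits) << trunc_bits  # truncate low bits
    return result

# Mean relative error over a sample of the input space is one axis
# (besides area/power/delay) on which designs are ranked.
errs = [abs(approx_mul8(a, b) - a * b) / max(a * b, 1)
        for a in range(0, 256, 17) for b in range(0, 256, 17)]
mean_rel_err = sum(errs) / len(errs)
```

Setting `trunc_bits=0` recovers the exact product, so the same harness can sweep the accuracy/cost knob.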

[64]
Title: Early Hospital Mortality Prediction using Vital Signals
Comments: 8 page, 5 figures, Submitted to IEEE&ACM CHASE 2018
Subjects: Learning (cs.LG); Machine Learning (stat.ML)

Early hospital mortality prediction is critical as intensivists strive to make efficient medical decisions about severely ill patients staying in intensive care units. As a result, various methods have been developed to address this problem based on clinical records. However, some of the laboratory test results are time-consuming to obtain and need to be processed. In this paper, we propose a novel method to predict mortality using features extracted from the heart signals of patients within the first hour of ICU admission. To predict the risk, quantitative features have been computed based on the heart rate signals of ICU patients. Each signal is described in terms of 12 statistical and signal-based features. The extracted features are fed into eight classifiers: decision tree, linear discriminant, logistic regression, support vector machine (SVM), random forest, boosted trees, Gaussian SVM, and K-nearest neighbors (K-NN). To derive insight into the performance of the proposed method, several experiments have been conducted using the well-known clinical dataset Medical Information Mart for Intensive Care III (MIMIC-III). The experimental results demonstrate the capability of the proposed method in terms of precision, recall, F1-score, and area under the receiver operating characteristic curve (AUC). The decision tree classifier satisfies both accuracy and interpretability better than the other classifiers, producing an F1-score and AUC equal to 0.91 and 0.93, respectively. This indicates that heart rate signals can be used to predict mortality in ICU patients, achieving performance comparable to existing predictors that rely on high-dimensional features from clinical records, which need to be processed and may contain missing information.
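The first stage of such a pipeline turns each patient's first-hour heart-rate series into a fixed-length feature vector. The abstract does not list the 12 features, so the ones below are generic statistical stand-ins, not the paper's exact set:

```python
import numpy as np

def heart_rate_features(hr):
    """Illustrative 12-dimensional feature vector for a heart-rate
    series (beats/min samples); stand-ins for the paper's 12
    statistical and signal-based features."""
    hr = np.asarray(hr, dtype=float)
    diffs = np.diff(hr)                            # beat-to-beat changes
    return np.array([
        hr.mean(), hr.std(), hr.min(), hr.max(),
        np.median(hr),
        np.percentile(hr, 25), np.percentile(hr, 75),
        diffs.mean(), diffs.std(),                 # short-term variability
        np.abs(diffs).mean(),
        (np.abs(diffs) > 10).mean(),               # fraction of large jumps
        np.sqrt(np.mean(diffs ** 2)),              # RMSSD-like measure
    ])

# Each patient becomes one row; rows plus in-hospital mortality labels
# would then be fed to the eight classifiers compared in the paper.
x = heart_rate_features([72, 75, 71, 90, 88, 76, 74])
```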

[65]
Title: Deep Learning for Nonlinear Diffractive Imaging
Subjects: Computer Vision and Pattern Recognition (cs.CV)

Image reconstruction under multiple light scattering is crucial for a number of important applications in cell microscopy and tissue imaging. The reconstruction problem is often formulated as a nonconvex optimization, where a nonlinear measurement model is used to account for multiple scattering and a regularizer is used to enforce the prior on the object. In this letter, we propose a powerful alternative to this optimization-based view of image reconstruction by designing and training a deep convolutional neural network (CNN) for inverting multiple scattering. Simulations show that the proposed formulation is substantially faster and achieves higher imaging quality compared to the state-of-the-art methods based on optimization.

[66]
Title: Network Service Orchestration: A Survey
Comments: Submitted to IEEE Communications Surveys and Tutorials
Subjects: Networking and Internet Architecture (cs.NI)

Business models of network service providers are undergoing an evolving transformation fueled by vertical customer demands and technological advances such as 5G, Software Defined Networking (SDN), and Network Function Virtualization (NFV). Emerging scenarios call for agile network services consuming network, storage, and compute resources across heterogeneous infrastructures and administrative domains. Coordinating resource control and service creation across interconnected domains and diverse technologies becomes a grand challenge. Research and development efforts are being devoted to enabling orchestration processes that automate, coordinate, and manage the deployment and operation of network services. In this survey, we delve into the topic of Network Service Orchestration (NSO) by reviewing the historical background, relevant research projects, enabling technologies, and standardization activities. We define key concepts and propose a taxonomy of NSO approaches and solutions to pave the way to an understanding of the diverse ongoing efforts towards the realization of multiple NSO application scenarios. Based on the analysis of the state of affairs, we conclude by discussing a series of open challenges and research opportunities, altogether contributing to a timely and comprehensive survey on the vibrant and strategic topic of network service orchestration.

[67]
Title: Facial Landmarks Detection by Self-Iterative Regression based Landmarks-Attention Network
Subjects: Computer Vision and Pattern Recognition (cs.CV)

Cascaded Regression (CR) based methods have been proposed to solve the facial landmarks detection problem; they learn a series of descent directions with multiple cascaded regressors separately trained in coarse and fine stages. They outperform traditional gradient-descent-based methods in both accuracy and running speed. However, cascaded regression is not robust enough, because each regressor's training data comes from the output of the previous regressor. Moreover, training multiple regressors requires substantial computing resources, especially for deep-learning-based methods. In this paper, we develop a Self-Iterative Regression (SIR) framework to improve model efficiency. Only one self-iterative regressor is trained to learn the descent directions for samples from coarse to fine stages, and parameters are iteratively updated by the same regressor. Specifically, we propose a Landmarks-Attention Network (LAN) as our regressor, which concurrently learns features around each landmark and obtains the holistic location increment. By doing so, not only are the remaining regressors removed to simplify the training process, but the number of model parameters is also significantly decreased. The experiments demonstrate that with only 3.72M model parameters, our proposed method achieves state-of-the-art performance.

[68]
Title: Two new classes of quantum MDS codes
Subjects: Information Theory (cs.IT)

Let $p$ be a prime and let $q$ be a power of $p$. In this paper, by using generalized Reed-Solomon (GRS for short) codes and extended GRS codes, we construct two new classes of quantum maximum-distance-separable (MDS) codes with parameters $[[tq, tq-2d+2, d]]_{q}$ for any $1 \leq t \leq q, 2 \leq d \leq \lfloor \frac{tq+q-1}{q+1}\rfloor+1$, and $[[t(q+1)+2, t(q+1)-2d+4, d]]_{q}$ for any $1 \leq t \leq q-1, 2 \leq d \leq t+2$ with $(p,t,d) \neq (2, q-1, q)$. Our quantum codes have flexible parameters and have minimum distances larger than $\frac{q}{2}+1$ when $t > \frac{q}{2}$. Furthermore, it turns out that our constructions generalize and improve upon some previous results.
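Both parameter families meet the quantum Singleton bound $k \leq n - 2(d-1)$ with equality, which is what makes the codes MDS. A quick arithmetic check of this (independent of the GRS construction itself; $q = 13$ is an arbitrary illustrative prime):

```python
def is_quantum_mds(n, k, d):
    """An [[n, k, d]] code is MDS iff it meets the quantum
    Singleton bound k <= n - 2(d - 1) with equality."""
    return k == n - 2 * (d - 1)

q = 13  # any prime power works; 13 is an illustrative choice

# First family: [[tq, tq - 2d + 2, d]] for 1 <= t <= q,
# 2 <= d <= floor((tq + q - 1)/(q + 1)) + 1.
for t in range(1, q + 1):
    for d in range(2, (t * q + q - 1) // (q + 1) + 2):
        assert is_quantum_mds(t * q, t * q - 2 * d + 2, d)

# Second family: [[t(q+1) + 2, t(q+1) - 2d + 4, d]] for
# 1 <= t <= q - 1, 2 <= d <= t + 2.
for t in range(1, q):
    for d in range(2, t + 3):
        n = t * (q + 1) + 2
        assert is_quantum_mds(n, n - 2 * d + 2, d)
```

(The excluded triple $(p,t,d) = (2, q-1, q)$ concerns the existence of the construction, not the Singleton arithmetic checked here.)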

[69]
Title: A Robust AUC Maximization Framework with Simultaneous Outlier Detection and Feature Selection for Positive-Unlabeled Classification
Subjects: Learning (cs.LG); Machine Learning (stat.ML)

Positive-unlabeled (PU) classification is a common scenario in real-world applications such as healthcare, text classification, and bioinformatics, in which we only observe a few samples labeled as "positive" together with a large volume of "unlabeled" samples that may contain both positive and negative samples. Building a robust classifier for the PU problem is very challenging, especially for complex data where negative samples overwhelm the positives and mislabeled samples or corrupted features exist. To address these three issues, we propose a robust learning framework that unifies AUC maximization (a robust metric for biased labels), outlier detection (for excluding wrong labels), and feature selection (for excluding corrupted features). We provide generalization error bounds for the proposed model that give valuable insight into its theoretical performance and lead to useful practical guidance; e.g., to train a model, we find that the included unlabeled samples are sufficient as long as their number is comparable to the number of positive samples in the training process. Empirical comparisons and two real-world applications, surgical site infection (SSI) and EEG seizure detection, are also conducted to show the effectiveness of the proposed model.

[70]
Title: Covert Communications Versus Physical Layer Security
Comments: 4 pages, 2 figures, one table
Subjects: Cryptography and Security (cs.CR)

This letter studies and compares the physical-layer security approach and the covert communication approach for a wiretap channel with a transmitter, an intended receiver, and an eavesdropper. To make the comparison, we investigate the power allocation problems of maximizing the secrecy/covert rate subject to a transmit power constraint. Simulation results illustrate that if the eavesdropper's channel is not noisy or the eavesdropper is near the transmitter, covert communication is preferable.
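The comparison hinges on the secrecy rate of the wiretap channel, which collapses as the eavesdropper's channel improves. A minimal numerical sketch (Gaussian channels with invented illustrative gains, not the letter's exact system model):

```python
import math

def secrecy_rate(p, g_rx, g_eve, n_rx=1.0, n_eve=1.0):
    """Secrecy rate of a Gaussian wiretap channel: capacity to the
    intended receiver minus capacity to the eavesdropper, floored at 0.
    p: transmit power, g_*: channel power gains, n_*: noise powers."""
    c_rx = math.log2(1 + p * g_rx / n_rx)
    c_eve = math.log2(1 + p * g_eve / n_eve)
    return max(c_rx - c_eve, 0.0)

# A near / low-noise eavesdropper (large g_eve relative to g_rx) drives
# the secrecy rate toward zero -- the regime where the letter finds
# covert communication preferable.
far_eve = secrecy_rate(1.0, g_rx=1.0, g_eve=0.1)
near_eve = secrecy_rate(1.0, g_rx=1.0, g_eve=0.9)
```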

[71]
Title: Dynamic Trajectory Model for Analysis of Traffic States using DPMM
Subjects: Computer Vision and Pattern Recognition (cs.CV)

Appropriate modeling of a surveillance scene is essential for analyzing and detecting anomalies in road traffic. Learning usual paths can provide much insight into the road traffic situation and help identify abnormal routes taken by commuters/vehicles in a traffic scene. If usual traffic paths are learned in a nonparametric way, the manual effort of marking the roads can be avoided. We propose an unsupervised and nonparametric method to learn frequently used paths from the tracks of moving objects in $\Theta(kn)$ time, where $k$ is the number of paths and $n$ represents the number of tracks. In the proposed method, the temporal correlation of the moving objects is taken into consideration to make the clustering meaningful, using the Temporally Incremental Gravity Model-Dynamic Trajectory Model (TIGM-DTM). In addition, the scene learning is based on distance, making it realistically intuitive in estimating the model parameters. Experimental validation reveals that the proposed method can learn a scene quickly without knowing the number of paths ($k$). We have compared the results with mean shift and DBSCAN. Further, we extend the model to represent states of a scene that can be used for taking timely actions. We have also applied the model to demonstrate its effectiveness in another domain.

[72]
Title: Feature Selection and Classification of Post-Graduation Income of College Students in the United States
Comments: 14 pages, 6 tables, 3 figures
Subjects: Computers and Society (cs.CY)

This study investigated the most important attributes of the 6-year post-graduation income of college graduates who used financial aid during their time at college in the United States. The latest data released by the United States Department of Education was used. Specifically, 1,429 cohorts of graduates from three years (2001, 2003, and 2005) were included in the data analysis. Three attribute selection methods, including filter methods, forward selection, and a Genetic Algorithm, were applied to select from 30 relevant attributes. Five groups of machine learning algorithms were applied to the dataset for classification using the best selected attribute subsets. Based on our findings, we discuss the roles of neighborhood professional degree attainment, parental income, SAT scores, and family college education in post-graduation incomes, and the implications for social stratification.

[73]
Title: Towards an Area-Efficient Implementation of a High ILP EDGE Soft Processor
Authors: Jan Gray, Aaron Smith
Subjects: Hardware Architecture (cs.AR)

In-order scalar RISC architectures have been the dominant paradigm in FPGA soft processor design for twenty years. Prior out-of-order superscalar implementations have not exhibited competitive area or absolute performance. This paper describes a new way to build fast and area-efficient out-of-order superscalar soft processors by utilizing an Explicit Data Graph Execution (EDGE) instruction set architecture. By carefully mapping the EDGE microarchitecture, and in particular, its dataflow instruction scheduler, we demonstrate the feasibility of an out-of-order FPGA architecture. Two scheduler design alternatives are compared.

[74]
Title: ShIFT: A Semi-haptic Interface for Flute Tutoring
Subjects: Human-Computer Interaction (cs.HC)

Traditional instrument learning is time-consuming. It begins with learning music notation and necessitates layers of sophistication and abstraction. Haptic interfaces open another door to the music world for the vast majority of beginners when traditional training methods are not effective. However, existing haptic interfaces can only deal with specially designed pieces, with great restrictions on performance duration and pitch range, because not all performance motions can be guided haptically for most instruments. Our system breaks these restrictions using a semi-haptic interface. For the first time, the pitch range of haptically learned pieces goes beyond an octave (with fingering motions covering most of the possible choices) and the duration of learned pieces covers a whole phrase. This significant change leads to a more realistic instrument-learning process. Experiments show that our semi-haptic interface is effective as long as learners are not "tone deaf." Using our prototype device, the learning rate is about 30% faster compared to learning from videos.

[75]
Title: The Automatic Identification of Butterfly Species
Comments: 9 pages, in Chinese, 8 figures
Subjects: Computer Vision and Pattern Recognition (cs.CV)

The available butterfly data sets comprise only a limited number of species, and their images are always standard patterns, lacking photographs of butterflies in their living environment. To overcome these limitations, we build a butterfly data set covering all species of butterflies in China, with 4270 standard-pattern images of 1176 butterfly species and 1425 living-environment images of 111 species. We propose to use the deep learning technique Faster-RCNN to train an automatic butterfly identification system that performs both butterfly position detection and species recognition. We remove the species with only one living-environment image from the data set, then partition the remaining living-environment images into a test subset and a training subset, the latter combined either with all standard-pattern butterfly images or with only the standard-pattern images of the species appearing in the living-environment images. To construct the training subset for Faster-RCNN, nine augmentation methods were adopted, including vertical and horizontal flipping, rotation at different angles, adding noise, blurring, and contrast adjustment. Three prediction models were trained. The mAP (mean average precision) criterion was used to evaluate the performance of the prediction models. The experimental results demonstrate that our Faster-RCNN-based butterfly identification system performs well: its worst mAP is 60%, and it can simultaneously detect the positions of multiple butterflies in a single living-environment image and recognize their species.

[76]
Title: Cross-modality image synthesis from unpaired data using CycleGAN: Effects of gradient consistency loss and training data size
Comments: 8 pages, 6 figures, under review as a conference paper at MICCAI 2018
Subjects: Computer Vision and Pattern Recognition (cs.CV)

CT is commonly used in orthopedic procedures. MRI is used along with CT to identify muscle structures and diagnose osteonecrosis due to its superior soft-tissue contrast. However, MRI has poor contrast for bone structures. Clearly, it would be helpful if a corresponding CT were available, as bone boundaries are seen more clearly and CT has standardized (i.e., Hounsfield) units. Therefore, we aim at MR-to-CT synthesis. While the CycleGAN was successfully applied to unpaired CT and MR images of the head, those images do not have as much variation of intensity pairs as images in the pelvic region, due to the presence of joints and muscles. In this paper, we extend the CycleGAN approach by adding a gradient consistency loss to improve the accuracy at the boundaries. We conducted two experiments. To evaluate image synthesis, we investigated the dependency of image synthesis accuracy on 1) the number of training data and 2) the gradient consistency loss. To demonstrate the applicability of our method, we also investigated segmentation accuracy on the synthesized images.

[77]
Title: A Guided FP-growth algorithm for fast mining of frequent itemsets from big data
Subjects: Databases (cs.DB)

In this paper we present the GFP-growth (Guided FP-growth) algorithm, a novel method for finding the counts of a given list of itemsets in large data. Unlike FP-growth, our algorithm is designed to focus on the specific multiple itemsets of interest, and hence its time and memory costs are lower. We prove that the GFP-growth algorithm yields the exact frequency counts for the required itemsets. We show that for a number of different problems, a solution can be devised that takes advantage of the efficient implementation of multi-targeted mining to boost performance. In particular, we study in detail the problem of generating minority-class rules from imbalanced data, a scenario that appears in many real-life domains such as medical applications, failure prediction, network and cyber security, and maintenance. We develop the Minority-Report Algorithm, which uses GFP-growth to boost performance. We prove some theoretical properties of the Minority-Report Algorithm and demonstrate its superior performance using simulations and real data.
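For reference, the quantity GFP-growth computes is the exact support of each target itemset. Below is the naive single-pass baseline for that task (pure illustration with a toy database; the paper's contribution is computing these counts efficiently with an FP-tree-style structure, which is not reproduced here):

```python
def count_itemsets(transactions, targets):
    """Exact support counts for a specific list of target itemsets --
    the output GFP-growth is designed to produce efficiently."""
    targets = [frozenset(t) for t in targets]
    counts = {t: 0 for t in targets}
    for txn in transactions:
        txn = set(txn)
        for t in targets:
            if t <= txn:          # target itemset contained in transaction
                counts[t] += 1
    return counts

# Toy transaction database and two target itemsets of interest.
db = [{"a", "b", "c"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c", "d"}]
counts = count_itemsets(db, [{"a", "c"}, {"b", "d"}])
```

This baseline scans every transaction against every target; the guided FP-tree traversal avoids that blow-up when the target list is large.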

[78]
Title: Zoom and Learn: Generalizing Deep Stereo Matching to Novel Domains
Subjects: Computer Vision and Pattern Recognition (cs.CV)

Despite the recent success of stereo matching with convolutional neural networks (CNNs), it remains arduous to generalize a pre-trained deep stereo model to a novel domain. A major difficulty is to collect accurate ground-truth disparities for stereo pairs in the target domain. In this work, we propose a self-adaptation approach for CNN training, utilizing both synthetic training data (with ground-truth disparities) and stereo pairs in the new domain (without ground-truths). Our method is driven by two empirical observations. By feeding real stereo pairs of different domains to stereo models pre-trained with synthetic data, we see that: i) a pre-trained model does not generalize well to the new domain, producing artifacts at boundaries and ill-posed regions; however, ii) feeding an up-sampled stereo pair leads to a disparity map with extra details. To avoid i) while exploiting ii), we formulate an iterative optimization problem with graph Laplacian regularization. At each iteration, the CNN adapts itself better to the new domain: we let the CNN learn its own higher-resolution output; meanwhile, a graph Laplacian regularization is imposed to discriminatively keep the desired edges while smoothing out the artifacts. We demonstrate the effectiveness of our method in two domains: daily scenes collected by smartphone cameras, and street views captured in a driving car.

[79]
Title: The Web as a Knowledge-base for Answering Complex Questions
Comments: accepted as a long paper at NAACL 2018
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Learning (cs.LG)

[80]
Title: Computing and Testing Pareto Optimal Committees
Subjects: Computer Science and Game Theory (cs.GT); Artificial Intelligence (cs.AI); Data Structures and Algorithms (cs.DS)

Selecting a set of alternatives based on the preferences of agents is an important problem in committee selection and beyond. Among the various criteria put forth for the desirability of a committee, Pareto optimality is a minimal and important requirement. As asking agents to specify their preferences over exponentially many subsets of alternatives is practically infeasible, we assume that each agent specifies a weak order on single alternatives, from which a preference relation over subsets is derived using some preference extension. We consider five prominent extensions (responsive, downward lexicographic, upward lexicographic, best, and worst). For each of them, we consider the corresponding Pareto optimality notion, and we study the complexity of computing and verifying Pareto optimal outcomes. We also consider strategic issues: for four of the set extensions, we present a linear-time, Pareto optimal and strategyproof algorithm that even works for weak preferences.

[81]
Title: Line Artist: A Multiple Style Sketch to Painting Synthesis Scheme
Subjects: Computer Vision and Pattern Recognition (cs.CV)

Drawing a beautiful painting is a dream many people have had since childhood. In this paper, we propose a novel scheme, Line Artist, to synthesize artistic-style paintings from freehand sketch images, leveraging the power of deep learning and advanced algorithms. Our scheme includes three models. The Sketch Image Extraction (SIE) model is applied to generate the training data; it includes smoothing of real images and pencil-sketch extraction. The Detailed Image Synthesis (DIS) model trains a conditional generative adversarial network to generate detailed real-world information. The Adaptively Weighted Artistic Style Transfer (AWAST) model is capable of combining multiple style images with a content image using the VGG19 network and the PageRank algorithm. The appealing artistic images are then generated by optimization iterations. Experiments are conducted on the Kaggle Cats dataset and the Oxford Buildings Dataset. Our synthesis results are shown to be artistic, beautiful and robust.

[82]
Title: Cubical Assemblies and Independence of the Propositional Resizing Axiom
Authors: Taichi Uemura
Subjects: Logic in Computer Science (cs.LO)

We construct a model of cubical type theory with a univalent and impredicative universe in a category of cubical assemblies. We show that the cubical assembly model does not satisfy the propositional resizing axiom.

[83]
Title: Ratio-Preserving Half-Cylindrical Warps for Natural Image Stitching
Subjects: Computer Vision and Pattern Recognition (cs.CV)

A novel warp for natural image stitching is proposed that exploits the properties of the cylindrical warp together with a horizontal pixel-selection strategy. The proposed ratio-preserving half-cylindrical warp is a combination of homography and cylindrical warps: it guarantees alignment via the homography and incurs less projective distortion via the cylindrical warp. Unlike previous approaches that apply the cylindrical warp before the homography, we use partition lines to divide the image into different parts and apply the homography in the overlapping region and a composition of homography and cylindrical warps in the non-overlapping region. The pixel-selection strategy then samples points horizontally and reconstructs the image via interpolation to further reduce horizontal distortion by maintaining the ratio as in a similarity transform. By applying the half-cylindrical warp and horizontal pixel selection, projective distortion in both the vertical and horizontal directions is mitigated simultaneously. Experiments show that our warp is efficient and produces more natural-looking stitched results than previous methods.

[84]
Title: Sdf-GAN: Semi-supervised Depth Fusion with Multi-scale Adversarial Networks
Subjects: Computer Vision and Pattern Recognition (cs.CV)

Fusing disparity maps from different algorithms to exploit their complementary advantages is still challenging. Uncertainty estimation and complex disparity relationships between neighboring pixels limit the accuracy and robustness of existing methods, and there is no common method for depth fusion of different kinds of data. In this paper, we introduce a method to incorporate supplementary information (intensity, gradient constraints, etc.) into a generative adversarial network to better refine each input disparity value. By adopting a multi-scale strategy, the disparity relationship in the fused disparity map becomes a better estimate of the real distribution. The approach includes a more robust objective function to avoid blurry edges, inpaints invalid disparity values, and requires much less ground-truth data to train. The algorithm can be generalized to different kinds of depth fusion. The experiments were conducted on simulated and real data, exploring different fusion opportunities: stereo-monocular fusion from coarse input, stereo-stereo fusion from moderately accurate input, and fusion from accurate binocular and Time-of-Flight sensor input. The experiments show the superiority of the proposed algorithm compared with the most recent algorithms on the public Scene Flow and SYNTH3 datasets. The code is available from https://github.com/Canpu999

[85]
Title: A View-based Programmable Architecture for Controlling and Integrating Decentralized Data
Comments: 14 pages, 2 figures, conference
Subjects: Databases (cs.DB)

The view and the view update are well-known mechanisms for controlling access to data and for integrating data of different schemas. Despite intensive and long-standing research on them in both the database community and the programming language community, they remain difficult to use in practice. The main reason is that we lack control over the view update strategy needed to deal with the inherent ambiguity of view updates for a given view.
This vision paper aims to provide a new language-based approach to controlling and integrating decentralized data based on the view, and establish a software foundation for systematic construction of such data management systems. Our key observation is that a view should be defined through a view update strategy rather than a query. In other words, the view definition should be extracted from the view update strategy, which is in sharp contrast to the traditional approaches where the view update strategy is derived from the view definition.
In this paper, we present the first programmable architecture with a declarative language for specifying update strategies over views, whose unique view definition can be automatically derived, and show how it can be effectively used to control data access, integrate data generally allowing coexistence of GAV (global as view) and LAV (local as view), and perform both analysis and updates on the integrated data. We demonstrate its usefulness through development of a privacy-preserving ride-sharing alliance system, discuss its application scope, and highlight future challenges.

[86]
Title: Code Vectors: Understanding Programs Through Embedded Abstracted Symbolic Traces
Subjects: Software Engineering (cs.SE)

With the rise of machine learning, there is a great deal of interest in treating programs as data to be fed to learning algorithms. However, programs do not start off in a form that is immediately amenable to most off-the-shelf learning techniques. Instead, it is necessary to transform the program to a suitable representation before a learning technique can be applied.
In this paper, we use abstractions of traces obtained from symbolic execution of a program as a representation for learning word embeddings. We trained a variety of word embeddings under hundreds of parameterizations, and evaluated each learned embedding on a suite of different tasks. In our evaluation, we obtain 93% top-1 accuracy on a benchmark consisting of over 19,000 API-usage analogies extracted from the Linux kernel. In addition, we show that embeddings learned from (mainly) semantic abstractions provide nearly triple the accuracy of those learned from (mainly) syntactic abstractions.
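The API-usage analogy benchmark can be pictured as standard word-vector arithmetic over the learned embeddings. The sketch below (toy, hand-made 2D vectors; the kernel identifiers are illustrative, not entries from the paper's benchmark) solves a:b :: c:? by cosine similarity:

```python
def analogy(emb, a, b, c):
    """Solve the analogy a:b :: c:? by taking the nearest neighbour of
    emb[b] - emb[a] + emb[c] under cosine similarity, excluding the query
    words themselves."""
    target = [bb - aa + cc for aa, bb, cc in zip(emb[a], emb[b], emb[c])]

    def cos(u, v):
        dot = sum(x * y for x, y in zip(u, v))
        nu = sum(x * x for x in u) ** 0.5
        nv = sum(x * x for x in v) ** 0.5
        return dot / (nu * nv)

    candidates = [w for w in emb if w not in (a, b, c)]
    return max(candidates, key=lambda w: cos(emb[w], target))

# Toy embeddings: the alloc/free offset mirrors the lock/unlock offset.
emb = {
    "kmalloc": [1.0, 0.0], "kfree": [1.0, 1.0],
    "mutex_lock": [3.0, 0.0], "mutex_unlock": [3.0, 1.0],
    "printk": [0.0, -1.0],
}
print(analogy(emb, "kmalloc", "kfree", "mutex_lock"))  # mutex_unlock
```

Top-1 accuracy on the benchmark is then simply the fraction of analogy queries for which this nearest neighbour is the expected identifier.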

[87]
Title: An Improved Welfare Guarantee for First Price Auctions
Comments: To appear in STOC 2018
Subjects: Computer Science and Game Theory (cs.GT)

This paper proves that the welfare of the first price auction in Bayes-Nash equilibrium is at least a $.743$-fraction of the welfare of the optimal mechanism assuming agents' values are independently distributed. The previous best bound was $1-1/e \approx .63$, derived in Syrgkanis and Tardos (2013) using smoothness, the standard technique for reasoning about welfare of games in equilibrium. In the worst known example (from Hartline et al. (2014)), the first price auction achieves a $\approx .869$-fraction of the optimal welfare, far better than the theoretical guarantee. Despite this large gap, it was unclear whether the $1-1/e \approx .63$ bound was tight. We prove that it is not. Our analysis eschews smoothness, and instead uses the independence assumption on agents' value distributions to give a more careful accounting of the welfare contribution of agents who win despite not having the highest value.

[88]
Title: TYDR - Track Your Daily Routine. Android App for Tracking Smartphone Sensor and Usage Data
Comments: Accepted for publication at the 5th IEEE/ACM International Conference on Mobile Software Engineering and Systems (MOBILESoft '18)
Subjects: Computers and Society (cs.CY); Human-Computer Interaction (cs.HC)

We present the Android app TYDR (Track Your Daily Routine), which tracks smartphone sensor and usage data and utilizes standardized psychometric personality questionnaires. With the app, we aim at collecting data for researching correlations between the tracked smartphone data and the user's personality in order to predict personality from smartphone data. In this paper, we highlight our approaches to addressing the challenges in developing such an app. We optimize the tracking of sensor data by assessing the trade-off between data size, battery consumption, and the granularity of the stored information. Our user interface is designed to incentivize users to install the app and fill out questionnaires. TYDR processes and visualizes the tracked sensor and usage data as well as the results of the personality questionnaires. When developing an app that will be used in psychological studies, requirements posed by ethics commissions / institutional review boards and data protection officials have to be met. We detail our approaches concerning those requirements regarding the anonymized storing of user data, informing users about the data collection, and enabling an opt-out option. We present our process for anonymized data storing while still being able to identify individual users who successfully completed a psychological study with the app.

[89]
Title: Detection under One-Bit Messaging over Adaptive Networks
Subjects: Multiagent Systems (cs.MA)

This work studies the operation of multi-agent networks engaged in binary decision tasks, and derives performance expressions and performance operating curves under challenging conditions with some revealing insights. One of the main challenges in the analysis is that agents are only allowed to exchange one-bit messages, and the information at each agent therefore consists of both continuous and discrete components. Due to this mixed nature, the steady-state distribution of the state of each agent cannot be inferred from direct application of central limit arguments. Instead, the behavior of the continuous component is characterized in integral form by using a log-characteristic function, while the behavior of the discrete component is characterized by means of an asymmetric Bernoulli convolution. By exploiting these results, the article derives reliable approximate performance expressions for the network nodes that match well with the simulated results for a wide range of system parameters. The results also reveal an important interplay between continuous adaptation under constant step-size learning and the binary nature of the messages exchanged with neighbors.

[90]
Title: Aggregating Strategies for Long-term Forecasting
Subjects: Learning (cs.LG); Probability (math.PR); Statistics Theory (math.ST); Machine Learning (stat.ML)

This article investigates the application of aggregating algorithms to the problem of long-term forecasting. We examine the classic aggregating algorithms based on exponential reweighting. For Vovk's general aggregating algorithm, we provide a generalization to long-term forecasting. For the special basic case of Vovk's algorithm, we provide two modifications for long-term forecasting. The first is theoretically close to an optimal algorithm and is based on the replication of independent copies. It provides a time-independent regret bound with respect to the best expert in the pool. The second is not optimal but is more practical and has an $O(\sqrt{T})$ regret bound, where $T$ is the length of the game.
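For context, the classic exponential-reweighting update that these algorithms build on can be sketched as follows (a minimal one-step-ahead version; the paper's long-term variants, including the replication of independent copies, are not reproduced here):

```python
import math

def aggregate(expert_losses, eta=0.5):
    """Exponentially reweighted aggregation: given per-round losses for each
    expert, multiply each weight by exp(-eta * loss) and renormalize.
    Returns the final normalized weights over the expert pool."""
    n_experts = len(expert_losses[0])
    weights = [1.0] * n_experts
    for round_losses in expert_losses:
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, round_losses)]
        total = sum(weights)
        weights = [w / total for w in weights]  # renormalize for numerical stability
    return weights

# Expert 0 consistently incurs lower loss, so its weight grows.
losses = [[0.1, 0.9], [0.2, 0.8], [0.0, 1.0]]
w = aggregate(losses)
print(w[0] > w[1])  # True
```

The aggregated forecast at each round is then the weight-mixture of the experts' predictions; the long-term setting differs in that losses are revealed with delay.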

[91]
Title: Discriminative Learning of Latent Features for Zero-Shot Recognition
Subjects: Computer Vision and Pattern Recognition (cs.CV)

Zero-shot learning (ZSL) aims to recognize unseen image categories by learning an embedding space between image and semantic representations. For years, the central task of existing works has been to learn the proper mapping matrices aligning the visual and semantic spaces, while the importance of learning discriminative representations for ZSL has been ignored. In this work, we revisit existing methods and demonstrate the necessity of learning discriminative representations for both visual and semantic instances in ZSL. We propose an end-to-end network that is capable of 1) automatically discovering discriminative regions via a zoom network; and 2) learning discriminative semantic representations in an augmented space introduced for both user-defined and latent attributes. Our proposed method is tested extensively on two challenging ZSL datasets, and the experimental results show that it significantly outperforms state-of-the-art methods.

[92]
Title: The Strategic LQG System: A Dynamic Stochastic VCG Framework for Optimal Coordination
Authors: Ke Ma, P. R. Kumar
Subjects: Systems and Control (cs.SY)

The classic Vickrey-Clarke-Groves (VCG) mechanism ensures incentive compatibility, i.e., that truth-telling of all agents is a dominant strategy, for a static one-shot game. However, in a dynamic environment that unfolds over time, the agents' intertemporal payoffs depend on the expected future controls and payments, and a direct extension of the VCG mechanism is not sufficient to guarantee incentive compatibility. In fact, it does not appear to be feasible to construct mechanisms that ensure the dominance of dynamic truth-telling for agents comprised of general stochastic dynamic systems. The contribution of this paper is to show that such a dynamic stochastic extension does exist for the special case of Linear-Quadratic-Gaussian (LQG) agents with a careful construction of a sequence of layered payments over time. For a set of LQG agents, we propose a modified layered version of the VCG mechanism for payments that decouples the intertemporal effect of current bids on future payoffs, and prove that truth-telling of dynamic states forms a dominant strategy if system parameters are known and agents are rational.
An important example of a problem needing such optimal dynamic coordination of stochastic agents arises in power systems where an Independent System Operator (ISO) has to ensure balance of generation and consumption at all time instants, while ensuring social optimality. The challenge is to determine a bidding scheme between all agents and the ISO that maximizes social welfare, while taking into account the stochastic dynamic models of agents, since renewable energy resources such as solar/wind are stochastic and dynamic in nature, as are consumptions by loads which are influenced by factors such as local temperatures and thermal inertias of facilities.

[93]
Title: Optimal control policies for evolutionary dynamics with environmental feedback
Comments: Initial submission version to CDC 2018
Subjects: Systems and Control (cs.SY)

We study a dynamical model of a population of cooperators and defectors whose actions have long-term consequences on environmental "commons" - what we term the "resource". Cooperators contribute to restoring the resource whereas defectors degrade it. The population dynamics evolve according to a replicator equation coupled with an environmental state. Our goal is to identify methods of influencing the population with the objective to maximize accumulation of the resource. In particular, we consider strategies that modify individual-level incentives. We then extend the model to incorporate a public opinion state that imperfectly tracks the true environmental state, and study strategies that influence opinion. We formulate optimal control problems and solve them using numerical techniques to characterize locally optimal control policies for three problem formulations: 1) control of incentives, and control of opinions through 2) propaganda-like strategies and 3) awareness campaigns. We show numerically that the resulting controllers in all formulations achieve the objective, albeit with an unintended consequence. The resulting dynamics include cycles between low and high resource states - a dynamical regime termed an "oscillating tragedy of the commons". This outcome may have desirable average properties, but includes risks to resource depletion. Our findings suggest the need for new approaches to controlling coupled population-environment dynamics.
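A minimal simulation of such coupled population-environment dynamics can be sketched as follows; the payoff and resource-growth terms here are illustrative assumptions, not the paper's model:

```python
def simulate(x0=0.5, n0=0.5, steps=1000, dt=0.01, incentive=0.0):
    """Forward-Euler integration of a toy replicator equation for the
    cooperator fraction x, coupled to a resource state n in [0, 1].
    Cooperators restore the resource, defectors degrade it; `incentive`
    shifts the payoff toward cooperation (an individual-level control)."""
    x, n = x0, n0
    for _ in range(steps):
        # Illustrative fitness difference: defection pays more when n is rich.
        delta_f = incentive - (2 * n - 1)
        dx = x * (1 - x) * delta_f           # replicator equation
        dn = n * (1 - n) * (2 * x - 1)       # resource grows iff cooperators dominate
        x, n = x + dt * dx, n + dt * dn
        x = min(max(x, 0.0), 1.0)            # keep states in their domains
        n = min(max(n, 0.0), 1.0)
    return x, n

x_end, n_end = simulate(incentive=0.5)
```

Even this toy system can cycle between low- and high-resource states rather than settle, which is the qualitative behavior ("oscillating tragedy of the commons") the controllers in the paper must contend with.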

[94]
Title: Neural Architecture Construction using EnvelopeNets
Subjects: Neural and Evolutionary Computing (cs.NE); Computer Vision and Pattern Recognition (cs.CV)

In recent years, advances in the design of convolutional neural networks have resulted in significant improvements on the image classification and object detection problems. One of the advances is networks built by stacking complex cells, seen in such networks as InceptionNet and NasNet. These cells are either constructed by hand, generated by generative networks, or discovered by search. Unlike conventional networks (where layers consist of a convolution block, sampling, and a non-linear unit), the new cells feature more complex designs consisting of several filters and other operators connected in series and parallel. Recently, several cells have been proposed or generated that are supersets of previously proposed custom or generated cells. Influenced by this, we introduce a network construction method based on EnvelopeNets. An EnvelopeNet is a deep convolutional neural network of stacked EnvelopeCells. EnvelopeCells are supersets (or envelopes) of previously proposed handcrafted and generated cells. We propose a method to construct improved network architectures by restructuring EnvelopeNets. The algorithm restructures an EnvelopeNet by rearranging blocks in the network. It identifies blocks to be restructured using metrics derived from the feature maps collected during a partial training run of the EnvelopeNet. The method requires fewer computational resources to generate an architecture than an optimized architecture search over the entire search space of blocks. The restructured networks have higher accuracy on the image classification problem on a representative dataset than both the generating EnvelopeNet and an equivalent arbitrary network.

[95]
Title: Sentiment Analysis of Code-Mixed Indian Languages: An Overview of SAIL_Code-Mixed Shared Task @ICON-2017
Subjects: Computation and Language (cs.CL)

Sentiment analysis is essential in many real-world applications such as stance detection, review analysis, recommendation systems, and so on. Sentiment analysis becomes more difficult when the data is noisy and collected from social media. India is a multilingual country; people use more than one language to communicate among themselves. Switching between languages is called code-switching or code-mixing, depending upon the type of mixing. This paper presents an overview of the shared task on sentiment analysis of code-mixed data pairs of Hindi-English and Bengali-English collected from different social media platforms. The paper describes the task, dataset, evaluation, baseline, and participants' systems.

[96]
Title: Experimental Verification of Rate Flexibility and Probabilistic Shaping by 4D Signaling
Comments: Presented at OFC'18, San Diego, CA, USA
Subjects: Information Theory (cs.IT)

The rate flexibility and probabilistic shaping gain of $4$-dimensional signaling is experimentally tested for short-reach, unrepeated transmission. A rate granularity of 0.5 bits/QAM symbol is achieved with a distribution matcher based on a simple look-up table.

[97]
Title: Scalar and Vectorial mu-calculus with Atoms
Subjects: Logic in Computer Science (cs.LO)

We study an extension of the modal mu-calculus to sets with atoms and investigate its basic properties. Model checking is decidable on orbit-finite structures, and a correspondence to parity games holds. On the other hand, satisfiability becomes undecidable. We also show expressive limitations of atom-enriched mu-calculi, and explain how their expressive power depends on the structure of atoms used and on the choice between basic and vectorial syntax.

[98]
Title: A Machine Learning Approach for Power Allocation in HetNets Considering QoS
Comments: 7 pages, 7 figures, IEEE ICC'18
Subjects: Information Theory (cs.IT)

The use of smaller cells, or femtocells, is increasing to improve the performance and coverage of next-generation heterogeneous wireless networks (HetNets). However, the interference caused by femtocells to neighboring cells is a limiting performance factor in dense HetNets. This interference is currently managed via distributed resource allocation methods. However, as the density of the network increases, so does the complexity of such resource allocation methods. Moreover, the unplanned deployment of femtocells requires an adaptable and self-organizing algorithm to make HetNets viable. As such, we propose a machine learning approach based on Q-learning to solve the resource allocation problem in such complex networks. By defining each base station as an agent, a cellular network is modelled as a multi-agent network. Subsequently, cooperative Q-learning can be applied as an efficient approach to manage the resources of a multi-agent network. Furthermore, the proposed approach considers the quality of service (QoS) for each user and fairness in the network. In comparison with prior work, the proposed approach can bring more than a four-fold increase in the number of supported femtocells while using cooperative Q-learning to reduce the resource allocation overhead.
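The single-agent tabular Q-learning update that the cooperative scheme builds on can be sketched as follows; the cooperative sharing of Q-tables between neighbouring base stations is omitted, and the toy reward and names are illustrative assumptions:

```python
import random

def q_learning(n_states, n_actions, reward_fn, episodes=500,
               alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning for a single agent (e.g., one base station choosing
    a power level).  reward_fn(state, action) returns (reward, next_state)."""
    rng = random.Random(seed)
    q = [[0.0] * n_actions for _ in range(n_states)]
    state = 0
    for _ in range(episodes):
        if rng.random() < epsilon:                       # epsilon-greedy exploration
            action = rng.randrange(n_actions)
        else:
            action = max(range(n_actions), key=lambda a: q[state][a])
        reward, next_state = reward_fn(state, action)
        best_next = max(q[next_state])
        # Standard temporal-difference update toward reward + discounted value.
        q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
        state = next_state
    return q

# Toy problem with one state: action 1 always yields reward 1, action 0 yields 0.
q = q_learning(1, 2, lambda s, a: (float(a), 0))
print(q[0][1] > q[0][0])  # True
```

In the cooperative variant, each agent would periodically mix its Q-table with those of its neighbours so that resource allocation knowledge is shared across the network.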

[99]
Title: Towards an Efficient Anomaly-Based Intrusion Detection for Software-Defined Networks
Subjects: Cryptography and Security (cs.CR); Learning (cs.LG)

Software-defined networking (SDN) is a new paradigm that allows developing more flexible network applications. The SDN controller, which represents a centralized controlling point, is responsible for running various network applications as well as maintaining different network services and functionalities. Choosing an efficient intrusion detection system helps in reducing the overhead of the running controller and creates a more secure network. In this study, we investigate the performance of well-known anomaly-based intrusion detection approaches in terms of accuracy, false positive rate, area under the ROC curve, and execution time. Precisely, we focus on supervised machine-learning approaches using the following classifiers: Adaptive Neuro-Fuzzy Inference System (ANFIS), Decision Trees (DT), Extreme Learning Machine (ELM), Naive Bayes (NB), Linear Discriminant Analysis (LDA), Neural Networks (NN), Support Vector Machines (SVM), Random Forest (RF), and K Nearest-Neighbor (KNN). Using the NSL-KDD benchmark dataset, we observe that KNN achieves the best testing accuracy. However, in terms of execution time, we conclude that ELM shows the best results for both training and testing stages.
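As a reminder of the mechanics behind the most accurate classifier, a minimal KNN sketch follows (pure Python, with toy two-feature records standing in for NSL-KDD features; not the study's implementation):

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points
    (squared Euclidean distance, which preserves the nearest-neighbour order).
    `train` is a list of (feature_vector, label) pairs."""
    def sq_dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    neighbours = sorted(train, key=lambda item: sq_dist(item[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Toy traffic records: two clusters of flows, labeled normal vs attack.
train = [((0, 0), "normal"), ((0, 1), "normal"),
         ((5, 5), "attack"), ((5, 6), "attack"), ((6, 5), "attack")]
print(knn_predict(train, (5, 5.5)))  # attack
```

KNN's accuracy comes at the cost of comparing each query against the whole training set, which is consistent with the study's finding that ELM is faster at test time.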

[100]
Title: Automated Localization for Unreproducible Builds
Subjects: Software Engineering (cs.SE)

Reproducibility is the ability to recreate identical binaries under pre-defined build environments. Due to the need for quality assurance and the benefit of better detecting attacks against build environments, the practice of reproducible builds has gained popularity in many open-source software repositories such as Debian and Bitcoin. However, identifying unreproducible issues remains a labour-intensive and time-consuming challenge, because of the lack of information to guide the search and the diversity of causes that may lead to unreproducible binaries.
In this paper we propose an automated framework called RepLoc to localize the problematic files for unreproducible builds. RepLoc features a query augmentation component that utilizes the information extracted from the build logs, and a heuristic rule-based filtering component that narrows the search scope. By integrating the two components with a weighted file ranking module, RepLoc is able to automatically produce a ranked list of files that are helpful in locating the problematic files for the unreproducible builds. We have implemented a prototype and conducted extensive experiments over 671 real-world unreproducible Debian packages in four different categories. By considering the topmost ranked file only, RepLoc achieves an accuracy rate of 47.09%. If we expand our examination to the top ten ranked files in the list produced by RepLoc, the accuracy rate becomes 79.28%. Considering that there are hundreds of source code, scripts, Makefiles, etc., in a package, RepLoc significantly reduces the scope of localizing problematic files. Moreover, with the help of RepLoc, we successfully identified and fixed six new unreproducible packages from Debian and Guix.
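The combination of component scores into a single ranked list can be sketched as follows; the weighting scheme and names here are illustrative assumptions, not RepLoc's actual formula:

```python
def rank_files(query_scores, rule_hits, weight=0.7):
    """Combine a query-relevance score per file with a binary heuristic-rule
    indicator into one weighted score, and return the files sorted best-first.
    `query_scores` maps file name -> relevance in [0, 1]; `rule_hits` is the
    set of files flagged by the filtering rules."""
    combined = {
        f: weight * score + (1 - weight) * (1.0 if f in rule_hits else 0.0)
        for f, score in query_scores.items()
    }
    return sorted(combined, key=combined.get, reverse=True)

# Toy package: build.sh matches the augmented build-log query strongly.
scores = {"Makefile": 0.4, "build.sh": 0.9, "README": 0.1}
ranked = rank_files(scores, rule_hits={"Makefile"})
print(ranked[0])  # build.sh
```

Top-1 and top-10 accuracy are then measured by whether a truly problematic file appears at the head of this list.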

[101]
Title: SybilFuse: Combining Local Attributes with Global Structure to Perform Robust Sybil Detection
Subjects: Cryptography and Security (cs.CR); Social and Information Networks (cs.SI)

Sybil attacks are becoming increasingly widespread and pose a significant threat to online social systems; a single adversary can inject multiple colluding identities in the system to compromise security and privacy. Recent works have leveraged social network-based trust relationships to defend against Sybil attacks. However, existing defenses are based on oversimplified assumptions about network structure, which do not necessarily hold in real-world social networks. Recognizing these limitations, we propose SybilFuse, a defense-in-depth framework for Sybil detection when the oversimplified assumptions are relaxed. SybilFuse adopts a collective classification approach by first training local classifiers to compute local trust scores for nodes and edges, and then propagating the local scores through the global network structure via weighted random walk and loopy belief propagation mechanisms. We evaluate our framework on both synthetic and real-world network topologies, including a large-scale, labeled Twitter network comprising 20M nodes and 265M edges, and demonstrate that SybilFuse outperforms state-of-the-art approaches significantly. In particular, SybilFuse achieves 98% of Sybil coverage among top-ranked nodes.
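The propagation of local trust scores through the global structure can be sketched with a personalized-PageRank-style weighted random walk (a minimal sketch of score propagation, not SybilFuse's exact procedure; graph and scores are toy values):

```python
def propagate_trust(adj, local_scores, steps=40, alpha=0.85):
    """Iteratively propagate local suspicion scores along graph edges:
    each node keeps a (1 - alpha) share of its own local score and absorbs
    an alpha-weighted share of its neighbours' scores, split by degree."""
    nodes = list(adj)
    scores = dict(local_scores)
    for _ in range(steps):
        new = {}
        for v in nodes:
            incoming = sum(scores[u] / len(adj[u]) for u in nodes if v in adj[u])
            new[v] = (1 - alpha) * local_scores[v] + alpha * incoming
        scores = new
    return scores

# Node "s" is flagged by the local classifier; its direct neighbour "x"
# inherits more suspicion than the two-hop node "y".
adj = {"s": ["x"], "x": ["s", "y"], "y": ["x"]}
scores = propagate_trust(adj, {"s": 1.0, "x": 0.0, "y": 0.0})
print(scores["x"] > scores["y"])  # True
```

SybilFuse additionally weights edges by a local edge classifier before propagating, so trust flows less readily across suspected attack edges.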

[102]
Title: Composable Deep Reinforcement Learning for Robotic Manipulation
Subjects: Learning (cs.LG); Artificial Intelligence (cs.AI); Robotics (cs.RO); Machine Learning (stat.ML)

Model-free deep reinforcement learning has been shown to exhibit good performance in domains ranging from video games to simulated robotic manipulation and locomotion. However, model-free methods are known to perform poorly when the interaction time with the environment is limited, as is the case for most real-world robotic tasks. In this paper, we study how maximum entropy policies trained using soft Q-learning can be applied to real-world robotic manipulation. The application of this method to real-world manipulation is facilitated by two important features of soft Q-learning. First, soft Q-learning can learn multimodal exploration strategies by learning policies represented by expressive energy-based models. Second, we show that policies learned with soft Q-learning can be composed to create new policies, and that the optimality of the resulting policy can be bounded in terms of the divergence between the composed policies. This compositionality provides an especially valuable tool for real-world manipulation, where constructing new policies by composing existing skills can provide a large gain in efficiency over training from scratch. Our experimental evaluation demonstrates that soft Q-learning is substantially more sample efficient than prior model-free deep reinforcement learning methods, and that compositionality can be performed for both simulated and real-world tasks.
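The compositionality result can be illustrated in a single-state setting: combining the Q-functions of two tasks induces a policy that trades off both. The sketch below uses made-up Q-values and averaged composition as an illustration, not the paper's robot experiments:

```python
import math

def soft_policy(q_values, temperature=1.0):
    """Boltzmann policy induced by a Q-function: pi(a) proportional to
    exp(Q(a) / T), as in maximum-entropy / soft Q-learning."""
    exps = [math.exp(q / temperature) for q in q_values]
    z = sum(exps)
    return [e / z for e in exps]

def compose(q1, q2):
    """Compose two soft-Q policies by averaging their Q-values; the composed
    policy favors actions that are good for both tasks."""
    return soft_policy([(a + b) / 2 for a, b in zip(q1, q2)])

# Task 1 prefers action 0, task 2 prefers action 2; the composed policy
# concentrates on action 1, which is acceptable for both tasks.
q_task1 = [2.0, 1.5, -2.0]
q_task2 = [-2.0, 1.5, 2.0]
pi = compose(q_task1, q_task2)
print(max(range(3), key=lambda a: pi[a]))  # 1
```

The paper's bound quantifies how far such a composed policy can be from the true optimum of the combined task, in terms of the divergence between the constituent policies.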

[103]
Title: White matter hyperintensity segmentation from T1 and FLAIR images using fully convolutional neural networks enhanced with residual connections
Comments: IEEE International Symposium on Biomedical Imaging (ISBI)
Subjects: Computer Vision and Pattern Recognition (cs.CV)

Segmentation and quantification of white matter hyperintensities (WMHs) are of great importance in studying and understanding various neurological and geriatric disorders. Although automatic methods have been proposed for WMH segmentation on magnetic resonance imaging (MRI), manual corrections are often necessary to achieve clinically practical results. Major challenges for WMH segmentation stem from their inhomogeneous MRI intensities, random location and size distributions, and MRI noise. The presence of other brain anatomies or diseases with enhanced intensities adds further difficulties. To cope with these challenges, we present a specifically designed fully convolutional neural network (FCN) with residual connections to segment WMHs by using combined T1 and fluid-attenuated inversion recovery (FLAIR) images. Our customized FCN is designed to be straightforward and generalizable, providing efficient end-to-end training due to its enhanced information propagation. We tested our method on the open WMH Segmentation Challenge MICCAI2017 dataset, and, despite our method's relative simplicity, results show that it performs amongst the leading techniques across five metrics. More importantly, our method achieves the best score for Hausdorff distance and average volume difference in testing datasets from two MRI scanners that were not included in training, demonstrating better generalization ability of our proposed method over its competitors.

[104]
Title: Low Rank Matrix Approximation for Geometry Filtering
Comments: 12 pages, 20 figures. Corresponding author: Xuequan Lu (xuequanlu@ntu.edu.sg)
Subjects: Graphics (cs.GR)

We propose a robust, anisotropic normal estimation method for both point clouds and meshes using a low rank matrix approximation algorithm. First, we compute a local feature descriptor for each point and find similar, non-local neighbors that we organize into a matrix. We then show that a low rank matrix approximation algorithm can robustly estimate normals for both point clouds and meshes. Furthermore, we provide a new filtering method for point cloud data to anisotropically smooth the position data to fit the estimated normals. We show applications of our method to point cloud filtering, point set upsampling, surface reconstruction, mesh denoising, and geometric texture removal. Our experiments show that our method outperforms current methods in both visual quality and accuracy.

[105]
Title: TOMAAT: volumetric medical image analysis as a cloud service
Authors: Fausto Milletari
Subjects: Computer Vision and Pattern Recognition (cs.CV); Distributed, Parallel, and Cluster Computing (cs.DC)

Deep learning has recently been applied to a multitude of computer vision and medical image analysis problems. Although recent research efforts have improved the state of the art, most of the methods cannot be easily accessed, compared, or used by either researchers or the general public. Researchers often publish their code and trained models on the internet, but this does not always enable these approaches to be easily used or integrated into stand-alone applications and existing workflows. In this paper we propose a framework which allows easy deployment and access of deep learning methods for segmentation through a cloud-based architecture. Our approach comprises three parts: a server, which wraps trained deep learning models and their pre- and post-processing data pipelines and makes them available on the cloud; a client, which interfaces with the server to obtain predictions on user data; and a service registry that informs clients about the prediction endpoints available in the cloud. These three parts constitute the open-source TOMAAT framework.

[106]
Title: The Optimal Compression Rate of Variable-to-Fixed Length Source Coding with a Non-Vanishing Excess-Distortion Probability
Subjects: Information Theory (cs.IT)

We consider the variable-to-fixed length lossy source coding (VFSC) problem. The optimal compression rate of the average length of variable-to-fixed source coding, allowing a non-vanishing probability of excess-distortion $\varepsilon$, is shown to be equal to $(1-\varepsilon)R(D)$, where $R(D)$ is the rate-distortion function of the source. In comparison to the related results of Koga and Yamamoto as well as Kostina, Polyanskiy, and Verd\'{u} for fixed-to-variable length source coding, our results demonstrate an interesting feature that variable-to-fixed length source coding has the same first-order compression rate as fixed-to-variable length source coding.

[107]
Title: Depth-aware CNN for RGB-D Segmentation
Subjects: Computer Vision and Pattern Recognition (cs.CV)

Convolutional neural networks (CNNs) are limited in their ability to handle geometric information due to their fixed grid kernel structure. The availability of depth data enables progress in RGB-D semantic segmentation with CNNs. State-of-the-art methods either use depth as additional images or process spatial information in 3D volumes or point clouds, and therefore suffer from high computation and memory costs. To address these issues, we present Depth-aware CNN, which introduces two intuitive, flexible and effective operations: depth-aware convolution and depth-aware average pooling. By leveraging depth similarity between pixels during information propagation, geometry is seamlessly incorporated into the CNN. Without introducing any additional parameters, both operators can be easily integrated into existing CNNs. Extensive experiments and ablation studies on challenging RGB-D semantic segmentation benchmarks validate the effectiveness and flexibility of our approach.
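A minimal single-channel sketch may make the depth-aware convolution idea concrete. The weighting function exp(-alpha * |depth difference|) and the value of alpha are illustrative assumptions here; the actual operator acts on multi-channel feature maps inside the network:

```python
import numpy as np

def depth_aware_conv(x, depth, w, alpha=1.0):
    """Sketch of depth-aware convolution on a single-channel image.

    Each kernel tap is re-weighted by exp(-alpha * |depth difference|)
    between the centre pixel and the tap's pixel, so pixels at similar
    depth contribute more. No extra learnable parameters are introduced.
    """
    kh, kw = w.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)), mode="edge")
    dp = np.pad(depth, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            patch = xp[i:i + kh, j:j + kw]
            sim = np.exp(-alpha * np.abs(dp[i:i + kh, j:j + kw] - depth[i, j]))
            out[i, j] = np.sum(w * sim * patch)
    return out

# With a constant depth map the similarity term is 1 everywhere, so the
# operator reduces to an ordinary convolution.
img = np.ones((4, 4))
print(depth_aware_conv(img, np.ones((4, 4)), np.ones((3, 3)) / 9.0))  # all 1.0
```

With a real depth map, taps that belong to a surface at a different depth are suppressed, which is how geometry enters the computation without adding parameters.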

[108]
Title: Nonlocal Low-Rank Tensor Factor Analysis for Image Restoration
Subjects: Computer Vision and Pattern Recognition (cs.CV); Machine Learning (stat.ML)

Low-rank signal modeling has been widely leveraged to capture non-local correlation in image processing applications. We propose a new method that employs low-rank tensor factor analysis for tensors generated from grouped image patches. The low-rank tensors are fed into the alternating direction method of multipliers (ADMM) to further improve image reconstruction. The motivating application is compressive sensing (CS), and a deep convolutional architecture is adopted to approximate the expensive matrix inversion in CS applications. An iterative algorithm based on this low-rank tensor factorization strategy, called NLR-TFA, is presented in detail. Experimental results on noiseless and noisy CS measurements demonstrate the superiority of the proposed approach, especially at low CS sampling rates.

[109]
Title: On Optimal Pricing of Services in On-demand Labor Platforms
Authors: Vijay Kamble
Subjects: Computer Science and Game Theory (cs.GT)

I consider the optimal pricing problem faced by a freelance worker on an on-demand labor platform. Service requests arriving while the worker is busy are lost forever, so an hourly (or, more generally, per-unit-time) pricing strategy appropriately captures the opportunity cost of accepting a job. With a view to improving capacity utilization in the face of arrival uncertainty, the worker may be tempted to subsidize longer jobs by offering them a lower hourly rate than shorter jobs, even if customers' hourly willingness to pay is believed to be statistically identical across jobs. I show that this intuition is unfounded: if the customer arrival process is Poisson, then the optimal policy charges an identical hourly rate for all jobs irrespective of their length. This result holds even when earnings are discounted over time. I derive the optimal pricing policy in this case under standard regularity assumptions.
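The flavor of this result can be checked numerically. The renewal-reward model below is my own illustrative setup (uniform willingness to pay, two job lengths), not the paper's proof: grid-searching over separate hourly rates for short and long jobs, the earning-rate-maximizing pair of rates comes out essentially identical.

```python
import numpy as np

# Toy renewal-reward model: a free worker waits Exp(lam) for the next
# request; the job is short (length L1) w.p. p or long (L2) w.p. 1-p, and
# is accepted iff the customer's hourly willingness to pay, assumed
# Uniform[0,1], exceeds the hourly rate r_i quoted for that job class.
lam, p, L1, L2 = 1.0, 0.5, 1.0, 4.0

def earning_rate(r1, r2):
    q1, q2 = max(0.0, 1.0 - r1), max(0.0, 1.0 - r2)      # acceptance probs
    reward = p * q1 * r1 * L1 + (1 - p) * q2 * r2 * L2    # pay per cycle
    length = 1.0 / lam + p * q1 * L1 + (1 - p) * q2 * L2  # cycle duration
    return reward / length                                # renewal reward

rates = np.linspace(0.0, 1.0, 101)
grid = np.array([[earning_rate(a, b) for b in rates] for a in rates])
i, j = np.unravel_index(np.argmax(grid), grid.shape)
print(rates[i], rates[j])  # the two optimal hourly rates coincide
```

The first-order conditions of this toy objective are identical for both job classes, independent of the job lengths, which is exactly the qualitative claim of the abstract.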

[110]
Title: Attention-GAN for Object Transfiguration in Wild Images
Subjects: Computer Vision and Pattern Recognition (cs.CV)

This paper studies the object transfiguration problem in wild images. The generative network in classical GANs for object transfiguration often undertakes a dual responsibility: to detect the objects of interest and to convert them from the source domain to the target domain. In contrast, we decompose the generative network into two separate networks, each dedicated to one particular sub-task. The attention network predicts spatial attention maps of images, and the transformation network focuses on translating objects. Attention maps produced by the attention network are encouraged to be sparse, so that major attention can be paid to the objects of interest, and they should remain constant both before and after object transfiguration. In addition, the attention network can receive further supervision when segmentation annotations of images are available. Experimental results demonstrate the necessity of investigating attention in object transfiguration, and that the proposed algorithm can learn accurate attention to improve the quality of generated images.
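One common way to realize such a decomposition (an illustrative assumption; the abstract does not spell out the combination rule) is to blend the transformation network's output with the input image using the predicted attention map:

```python
import numpy as np

def compose(x, transformed, attention):
    """Blend: attended regions come from the transformation network's
    output; everything else is copied unchanged from the input image."""
    return attention * transformed + (1.0 - attention) * x

x = np.zeros((2, 2))          # toy single-channel input image
t = np.ones((2, 2))           # transformation network output
a = np.array([[1.0, 0.0],     # sparse attention: only top-left is "object"
              [0.0, 0.0]])
print(compose(x, t, a))       # [[1. 0.] [0. 0.]]
```

Under this rule a sparse attention map guarantees that background pixels pass through untouched, which matches the abstract's requirement that attention stay constant across the transfiguration.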

[111]
Title: Revisiting RCNN: On Awakening the Classification Power of Faster RCNN
Subjects: Computer Vision and Pattern Recognition (cs.CV)

Recent region-based object detectors are usually built with separate classification and localization branches on top of shared feature extraction networks. In this paper, we analyze failure cases of state-of-the-art detectors and observe that most hard false positives result from classification rather than localization. We conjecture that: (1) the shared feature representation is not optimal due to the mismatched goals of feature learning for classification and localization; (2) multi-task learning helps, yet optimization of the multi-task loss may result in sub-optimal solutions for the individual tasks; and (3) a large receptive field for different scales leads to redundant context information for small objects. We demonstrate the potential of detector classification power with a simple, effective, and widely applicable Decoupled Classification Refinement (DCR) network. DCR samples hard false positives from the base classifier in Faster RCNN and trains an RCNN-styled strong classifier. Experiments show new state-of-the-art results on PASCAL VOC and COCO without any bells and whistles.

[112]
Title: Computational topology and the Unique Games Conjecture
Comments: Full version of a conference paper in 34th International Symposium on Computational Geometry (SoCG 2018)
Subjects: Computational Complexity (cs.CC); Computational Geometry (cs.CG); Discrete Mathematics (cs.DM); Algebraic Topology (math.AT)

Covering spaces of graphs have long been useful for studying expanders (as "graph lifts") and unique games (as the "label-extended graph"). In this paper we advocate for the thesis that there is a much deeper relationship between computational topology and the Unique Games Conjecture. Our starting point is Linial's 2005 observation that the only known problems whose inapproximability is equivalent to the Unique Games Conjecture - Unique Games and Max-2Lin - are instances of Maximum Section of a Covering Space on graphs. We then observe that the reduction between these two problems (Khot-Kindler-Mossel-O'Donnell, FOCS 2004; SICOMP, 2007) gives a well-defined map of covering spaces. We further prove that inapproximability for Maximum Section of a Covering Space on (cell decompositions of) closed 2-manifolds is also equivalent to the Unique Games Conjecture. This gives the first new "Unique Games-complete" problem in over a decade.
Our results partially settle an open question of Chen and Freedman (SODA 2010; Disc. Comput. Geom., 2011) from computational topology, by showing that their question is almost equivalent to the Unique Games Conjecture. (The main difference is that they ask for inapproximability over $\mathbb{Z}/2\mathbb{Z}$, and we show Unique Games-completeness over $\mathbb{Z}/k\mathbb{Z}$ for large $k$.) This equivalence comes from the fact that when the structure group $G$ of the covering space is Abelian - or more generally for principal $G$-bundles - Maximum Section of a $G$-Covering Space is the same as the well-studied problem of 1-Homology Localization.
Although our most technically demanding result is an application of Unique Games to computational topology, we hope that our observations on the topological nature of the Unique Games Conjecture will lead to applications of algebraic topology to the Unique Games Conjecture in the future.

[113]
Title: Alive Caricature from 2D to 3D
Subjects: Computer Vision and Pattern Recognition (cs.CV)

Caricature is an art form that expresses subjects in an abstract, simple and exaggerated way. While many caricatures are 2D images, this paper presents an algorithm for creating expressive 3D caricatures from 2D caricature images with a minimum of user interaction. The key idea of our approach is to introduce an intrinsic deformation representation with a capacity for extrapolation, enabling us to create from a standard face dataset a deformation space which maintains face constraints while being sufficiently large to produce exaggerated face models. Built upon the proposed deformation representation, an optimization model is formulated to automatically find the 3D caricature that captures the style of the 2D caricature image. The experiments show that our approach is better at expressing caricatures than fitting approaches that directly use classical parametric face models such as 3DMM and FaceWarehouse. Moreover, our approach is based on standard face datasets and avoids constructing a complicated 3D caricature training set, which provides great flexibility in real applications.

[114]
Title: Acoustic feature learning using cross-domain articulatory measurements
Subjects: Computation and Language (cs.CL)

Previous work has shown that it is possible to improve speech recognition by learning acoustic features from paired acoustic-articulatory data, for example by using canonical correlation analysis (CCA) or its deep extensions. One limitation of this prior work is that the learned feature models are difficult to port to new datasets or domains, and articulatory data is not available for most speech corpora. In this work we study the problem of acoustic feature learning in the setting where we have access to an external, domain-mismatched dataset of paired speech and articulatory measurements, either with or without labels. We develop methods for acoustic feature learning in these settings, based on deep variational CCA and extensions that use both source and target domain data and labels. Using this approach, we improve phonetic recognition accuracies on both TIMIT and Wall Street Journal and analyze a number of design choices.

[115]
Title: Centralized Caching with Unequal Cache Sizes
Subjects: Information Theory (cs.IT)

We address the centralized caching problem with unequal cache sizes. We consider a system with a server of files connected through a shared error-free link to a group of cache-enabled users, where one subgroup has a larger cache size than the rest. We investigate caching schemes with uncoded cache placement which minimize the worst-case load over the shared link. We propose a caching scheme which improves upon existing schemes by either achieving a lower worst-case load, or decreasing the complexity of the scheme while performing within a multiplicative factor of 1.1, as suggested by our numerical simulations.

[116]
Title: Categorical Buechi and Parity Conditions via Alternating Fixed Points of Functors
Subjects: Logic in Computer Science (cs.LO)

Categorical studies of recursive data structures and their associated reasoning principles have mostly focused on two extremes: initial algebras and induction, and final coalgebras and coinduction. In this paper we study their in-betweens. We formalize notions of alternating fixed points of functors using constructions similar to those of free monads. These find use in the categorical modeling of accepting run trees under the Buechi and parity acceptance conditions. This modeling abstracts away from the states of an automaton; it can thus be thought of as capturing the "behaviors" of systems with the Buechi or parity conditions, in a way that follows the tradition of coalgebraic modeling of system behaviors.

[117]
Title: Weakly Supervised Object Localization on grocery shelves using simple FCN and Synthetic Dataset
Subjects: Computer Vision and Pattern Recognition (cs.CV)

We propose a weakly supervised method that uses two algorithms to predict object bounding boxes given only an image classification dataset. The first algorithm is a simple Fully Convolutional Network (FCN) trained to classify object instances. We exploit the property that an FCN returns a mask for images larger than its training images: passing an image pyramid through it at test time yields a primary output segmentation mask. The second algorithm, a Convolutional Encoder-Decoder (ConvAE), refines the FCN output mask into the final output bounding boxes; it is trained to localize objects on an artificially generated dataset of output segmentation masks. We demonstrate the effectiveness of this method in localizing objects on grocery shelves, where annotating data for object detection is hard due to the variety of objects. The method can be extended to any problem domain where collecting images of objects is easy but annotating their coordinates is hard.

[118]
Title: ESPNet: Efficient Spatial Pyramid of Dilated Convolutions for Semantic Segmentation
Subjects: Computer Vision and Pattern Recognition (cs.CV)

We introduce a fast and efficient convolutional neural network, ESPNet, for semantic segmentation of high resolution images under resource constraints. ESPNet is based on a new convolutional module, the efficient spatial pyramid (ESP), which is efficient in terms of computation, memory, and power. ESPNet is 22 times faster (on a standard GPU) and 180 times smaller than the state-of-the-art semantic segmentation network PSPNet, while its category-wise accuracy is only 8% lower. We evaluated ESPNet on a variety of semantic segmentation datasets including Cityscapes, PASCAL VOC, and a breast biopsy whole slide image dataset. Under the same constraints on memory and computation, ESPNet outperforms all current efficient CNNs such as MobileNet, ShuffleNet, and ENet on both standard metrics and our newly introduced performance metrics that measure efficiency on edge devices. Our network can process high resolution images at a rate of 112 and 9 frames per second on a standard GPU and an edge device, respectively.
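The parameter savings behind an ESP-style module can be illustrated with back-of-the-envelope arithmetic. The decomposition below (a point-wise channel reduction followed by K parallel dilated convolutions whose outputs are concatenated) follows the reduce-split-transform-merge idea; the concrete channel counts are assumptions for illustration, not figures from the paper:

```python
def standard_conv_params(M, N, n):
    # An n x n convolution mapping M input channels to N output channels.
    return M * N * n * n

def esp_params(M, N, n, K):
    # Reduce: a 1x1 conv projects M channels down to d = N / K, then K
    # parallel dilated n x n convs (d -> d channels each) transform the
    # reduced tensor; concatenating the K outputs restores N channels.
    d = N // K
    return M * d + K * (d * d * n * n)

M, N, n, K = 128, 128, 3, 4
print(standard_conv_params(M, N, n))  # 147456
print(esp_params(M, N, n, K))         # 40960
```

With these illustrative sizes the factorized module uses roughly 3.6x fewer parameters than the dense convolution, and the enlarged dilation rates simultaneously grow the effective receptive field.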

[119]
Title: Swapping Colored Tokens on Graphs
Subjects: Data Structures and Algorithms (cs.DS)

We investigate the computational complexity of the following problem. We are given a graph in which each vertex has an initial and a target color. Each pair of adjacent vertices can swap their current colors. Our goal is to perform the minimum number of swaps so that the current and target colors agree at each vertex. When the colors are chosen from {1,2,...,c}, we call this problem c-Colored Token Swapping, since the current color of a vertex can be seen as a colored token placed on the vertex. We show that c-Colored Token Swapping is NP-complete for c = 3 even if input graphs are restricted to connected planar bipartite graphs of maximum degree 3. We then show that 2-Colored Token Swapping can be solved in polynomial time for general graphs and in linear time for trees. Besides, we show that the problem for complete graphs is fixed-parameter tractable when parameterized by the number of colors, while it is known to be NP-complete when the number of colors is unbounded.
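For intuition, the problem on a tiny instance can be solved by brute-force breadth-first search over colorings. This is only an illustration of the problem statement, not any of the paper's algorithms (the BFS is exponential in general):

```python
from collections import deque

def min_swaps(edges, start, target):
    """Minimum number of adjacent swaps turning coloring `start` into
    `target`; colorings are sequences of colors indexed by vertex.
    BFS over all reachable colorings -- fine only for tiny graphs."""
    start, target = tuple(start), tuple(target)
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        cur, dist = queue.popleft()
        if cur == target:
            return dist
        for u, v in edges:
            nxt = list(cur)
            nxt[u], nxt[v] = nxt[v], nxt[u]
            nxt = tuple(nxt)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # target uses a different multiset of colors

# Path 0-1-2: move the single token of color 2 from one end to the other.
print(min_swaps([(0, 1), (1, 2)], [2, 1, 1], [1, 1, 2]))  # 2
```

The paper's point is that this combinatorial explosion can be avoided entirely for c = 2, but (unless P = NP) not for c = 3.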

[120]
Title: Artificial Intelligence Enabled Software Defined Networking: A Comprehensive Overview
Subjects: Artificial Intelligence (cs.AI); Networking and Internet Architecture (cs.NI)

In recent years, the increased demand for dynamic management of network resources in modern computer networks in general, and in today's data centers in particular, has resulted in a new and promising architecture in which more flexible control functionality can be achieved with a high level of abstraction. In the software defined networking (SDN) architecture, central management of the forwarding elements (i.e. switches and routers) is accomplished by a central unit, which can be programmed directly to perform fundamental networking tasks or to implement any additional services. Combining central management and network programmability opens the door to employing more advanced techniques such as artificial intelligence (AI) in order to deal with high-demand and rapidly-changing networks. In this study, we provide a detailed overview of current efforts and recent advancements to include AI in SDN-based networks.

[121]
Title: Using a Model-driven Approach in Building a Provenance Framework for Tracking Policy-making Processes in Smart Cities
Comments: 15 pages, 5 figures, 2 tables, Proc of the 21st International Database Engineering & Applications Symposium (IDEAS 2017)
Subjects: Software Engineering (cs.SE)

The significance of provenance in various settings has emphasised its potential in the policy-making process for analytics in Smart Cities. At present, there exists no framework that can capture provenance in a policy-making setting. This research therefore aims to define a novel framework, the Policy Cycle Provenance (PCP) Framework, to capture the provenance of the policy-making process. However, designing such a provenance framework is not straightforward due to a number of associated policy design challenges. These challenges revealed the need for an adaptive system for tracking policies, so a model-driven approach has been adopted in designing the PCP framework. We also propose a networking approach for designing workflows that track the policy-making process.

[122]
Title: Music Style Transfer Issues: A Position Paper
Subjects: Sound (cs.SD)

Led by the success of neural style transfer on visual arts, there has recently been a rising effort toward music style transfer. However, "music style" is not yet a well-defined concept from a scientific point of view. The difficulty lies in the intrinsic multi-level and multi-modal character of music representation (which is very different from image representation). As a result, depending on their interpretation of "music style", current studies under the category of "music style transfer" are actually solving completely different problems that belong to a variety of sub-fields of Computer Music. Moreover, a vanilla end-to-end approach, which aims to deal with all levels of music representation at once by directly adopting the method of image style transfer, leads to poor results. Thus, we see a vital necessity to re-define music style transfer more precisely and scientifically, based on the uniqueness of music representation, and to connect different aspects of music style transfer with existing well-established sub-fields of computer music studies. Otherwise, the accumulating literature (all named after music style transfer) will lead to great confusion about the underlying problems, as well as neglect of the treasures in computer music from before the age of deep learning. In addition, we discuss the current limitations of music style modeling and its future directions, drawing inspiration from deep generative models, especially those using unsupervised learning and disentanglement techniques.

[123]
Title: Production Line Technique for Autonomous Vehicle Scheduling
Authors: Nasser Aloufi
Subjects: Systems and Control (cs.SY)

This paper considers the problem of scheduling autonomous vehicles at intersections. A new system is proposed that could be an additional choice alongside the recently introduced Autonomous Intersection Management (AIM) model. The proposed system is based on the production line technique, where the environment of the intersection and the vehicles' positions, speeds and turns are specified and determined in advance. The goal of the proposed system is to eliminate vehicle collisions and waiting time inside the intersection. Three different patterns of vehicle flow toward the intersection have been considered in the evaluation of the model. The system requires less waiting time (compared to the other models) in the random case, where the flow is unpredictable. The KNN algorithm is used to predict right-turning vehicles. The experimental results show that there is no chance of collision inside the intersection; however, the system requires more free space in the traffic lane.

[124]
Title: Linear-time geometric algorithm for evaluating Bézier curves
Authors: Paweł Woźny
Subjects: Numerical Analysis (cs.NA); Graphics (cs.GR)

A new algorithm for evaluating a point on a B\'ezier curve and on a rational B\'ezier curve is given. The method has a~geometric interpretation and a~convex hull property. The computational complexity of the new algorithm is linear with respect to the degree of the considered curve, and its memory complexity is $O(1)$.
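For context, the classical de Casteljau algorithm, the standard geometric evaluation scheme that also enjoys the convex hull property, takes O(n^2) operations for a degree-n curve; this is the cost the new linear-time method improves on. A textbook sketch of de Casteljau (not the paper's algorithm):

```python
def de_casteljau(points, t):
    """Evaluate a Bezier curve with the given control points at parameter t.

    Repeated linear interpolation between consecutive points: O(n^2)
    operations for degree n, and every intermediate point stays inside
    the convex hull of the control polygon.
    """
    b = [tuple(p) for p in points]
    for _ in range(len(points) - 1):
        b = [tuple((1 - t) * a + t * c for a, c in zip(b[i], b[i + 1]))
             for i in range(len(b) - 1)]
    return b[0]

# Quadratic Bezier with control points (0,0), (1,2), (2,0) at t = 0.5.
print(de_casteljau([(0, 0), (1, 2), (2, 0)], 0.5))  # (1.0, 1.0)
```

Each of the n elimination rounds touches all remaining points, hence the quadratic total; a linear-time evaluator must avoid recomputing these intermediate generations.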

[125]
Title: Cloud Provider Capacity Augmentation Through Automated Resource Bartering
Comments: 26 pages, 15 figures, 6 tables
Journal-ref: Future Generation Computer Systems Vol 81 pp 203-218 April 2018
Subjects: Software Engineering (cs.SE); Distributed, Parallel, and Cluster Computing (cs.DC); Multiagent Systems (cs.MA)

Growing interest in Cloud Computing places a heavy workload on cloud providers which is becoming increasingly difficult for them to manage with their primary datacenter infrastructures. Resource limitations can make providers vulnerable to significant reputational damage, and they often force customers to select services from the larger, more established companies, sometimes at a higher price. Funding limitations, however, commonly prevent emerging and even established providers from making continual investment in hardware on the speculative assumption of a certain level of growth in demand. As an alternative, they may strive to use the current inter-cloud resource sharing platforms, which mainly rely on monetary payments and thus put pressure on already stretched cash flows. To address such issues, we have designed and implemented a new multi-agent based Cloud Resource Bartering System (CRBS) that fosters the management and bartering of pooled resources without requiring costly financial transactions between providers. Agents in CRBS not only strengthen the trading relationship among providers but also enable them to handle surges in demand with their primary setup. Unlike existing systems, CRBS assigns resources by considering resource urgency, which improves customer satisfaction and the resource utilization rate by more than 50%. The evaluation results provide evidence that our system assists providers in acquiring additional resources in a timely manner and in maintaining sustainable service delivery. We conclude that the existence of such a system is economically beneficial for cloud providers and enables them to adapt to fluctuating workloads.

[126]
Title: An Adaptable System to Support Provenance Management for the Public Policy-Making Process in Smart Cities
Comments: 26 pages, 15 figures, 4 tables
Journal-ref: Informatics Vol5 No 3 pp 1-26 2018
Subjects: Databases (cs.DB); Computers and Society (cs.CY)

Government policies aim to address public issues and problems and therefore play a pivotal role in people's lives. The creation of public policies, however, is complex given the involvement of large and diverse groups of stakeholders, considerable human participation, lengthy processes, complex task specification and the non-deterministic nature of the process. These inherent complexities pose challenges for designing a computing system that assists in supporting and automating the business process pertaining to policy setup, and also raise concerns for setting up a tracking service in the policy-making environment. A tracking service informs how decisions have been taken during policy creation and can provide useful and intrinsic information regarding the policy process. At present, there exists no computing system that assists in tracking the complete process that has been employed for policy creation. To design such a system, it is important to consider the challenges of the policy environment; for this, a novel network- and goal-based approach has been framed and is covered in detail in this paper. Furthermore, smart governance objectives that include stakeholder participation and citizen involvement have been considered. Thus, the proposed approach has been devised by considering smart governance principles and the knowledge environment of policy making, where tasks are largely dependent on policy makers' decisions and on individual policy objectives. Our approach accounts for the human dimension in deciding and defining autonomous process activities at run time. Furthermore, the network-based approach employs so-called provenance data tracking, which enables the capture of the policy process.

[127]
Title: MONICA in Hamburg: Towards Large-Scale IoT Deployments in a Smart City
Subjects: Networking and Internet Architecture (cs.NI)

Modern cities and metropolitan areas all over the world face new management challenges in the 21st century, primarily due to increasing demands on living standards by the urban population. These challenges range from climate change, pollution, transportation, and citizen engagement, to urban planning and security threats. The primary goal of a Smart City is to counteract these problems and mitigate their effects by means of modern ICT to improve urban administration and infrastructure. Key ideas are to utilise network communication to inter-connect public authorities, but also to deploy and integrate numerous sensors and actuators throughout the city infrastructure - an approach widely known as the Internet of Things (IoT). Thus, IoT technologies will be an integral part and key enabler of many objectives of the Smart City vision.
The contributions of this paper are as follows. We first examine a number of IoT platforms, technologies and network standards that can help to foster a Smart City environment. Second, we introduce the EU project MONICA, which aims to demonstrate large-scale IoT deployments at public, inner-city events, and give an overview of its IoT platform architecture. Third, we provide a case-study report on Smart City activities by the City of Hamburg and provide insights from recent (on-going) field tests of a vertically integrated, end-to-end IoT sensor application.

[128]
Title: An improved isomorphism test for bounded-tree-width graphs
Subjects: Data Structures and Algorithms (cs.DS); Discrete Mathematics (cs.DM); Combinatorics (math.CO)

We give a new fpt algorithm testing isomorphism of $n$-vertex graphs of tree width $k$ in time $2^{k\operatorname{polylog} (k)}\operatorname{poly} (n)$, improving the fpt algorithm due to Lokshtanov, Pilipczuk, Pilipczuk, and Saurabh (FOCS 2014), which runs in time $2^{\mathcal{O}(k^5\log k)}\operatorname{poly} (n)$. Based on an improved version of the isomorphism-invariant graph decomposition technique introduced by Lokshtanov et al., we prove restrictions on the structure of the automorphism groups of graphs of tree width $k$. Our algorithm then makes heavy use of the group theoretic techniques introduced by Luks (JCSS 1982) in his isomorphism test for bounded degree graphs and Babai (STOC 2016) in his quasipolynomial isomorphism test. In fact, we even use Babai's algorithm as a black box in one place.
We also give a second algorithm which, at the price of a slightly worse running time $2^{\mathcal{O}(k^2 \log k)}\operatorname{poly} (n)$, avoids the use of Babai's algorithm and, more importantly, has the additional benefit that it can also be used as a canonization algorithm.

[129]
Title: Cloud Infrastructure Provenance Collection and Management to Reproduce Scientific Workflow Execution
Comments: 59 pages, 21 figures, 4 algorithms, 6 tables, Future Generation Computer Systems. March 2018
Subjects: Distributed, Parallel, and Cluster Computing (cs.DC)

The emergence of Cloud computing provides a new computing paradigm for scientific workflow execution. It provides dynamic, on-demand and scalable resources that enable the processing of complex workflow-based experiments. With the ever-growing size of experimental data and increasingly complex processing workflows, the need for reproducibility has also become essential. Provenance has been thought of as a mechanism to verify a workflow and to provide workflow reproducibility. One of the obstacles in reproducing an experiment's execution is the lack of information about the execution infrastructure in the collected provenance. This information becomes critical in the context of the Cloud, in which resources are provisioned on demand and by specifying resource configurations. Therefore, a mechanism is required that captures infrastructure information along with the provenance of workflows executing on the Cloud, to facilitate the re-creation of the execution environment. This paper presents a framework, ReCAP, along with proposed mapping approaches that aid in capturing Cloud-aware provenance information and help in re-provisioning execution resources on the Cloud with similar configurations. Experimental evaluation has shown the impact of different resource configurations on workflow execution performance, which justifies the need for collecting such provenance information in the context of the Cloud. The evaluation has also demonstrated that the proposed mapping approaches can capture Cloud information in various Cloud usage scenarios without causing performance overhead, and can also enable the re-provisioning of resources on the Cloud. Experiments were conducted using workflows from different scientific domains, such as astronomy and neuroscience, to demonstrate the applicability of this research for different workflows.

[130]
Title: Symbol-Level Precoding Design for Max-Min SINR in Multiuser MISO Broadcast Channels
Comments: Submitted to SPAWC 2018, 7 pages, 2 figures
Subjects: Information Theory (cs.IT); Signal Processing (eess.SP)

In this paper, we address the symbol-level precoding (SLP) design problem under the max-min SINR criterion in the downlink of multiuser multiple-input single-output (MISO) channels. First, we show that the distance preserving constructive interference regions (DPCIRs) are always polyhedral angles (shifted pointed cones) for any given constellation point with an unbounded decision region. Then we prove that any signal in a given unbounded DPCIR has a norm larger than the norm of the corresponding vertex if and only if the convex hull of the constellation contains the origin. Using these properties, we show that the power of the noiseless received signal lying on an unbounded DPCIR is a strictly increasing function of two parameters. This allows us to reformulate the originally non-convex SLP max-min SINR problem as a convex optimization problem. We discuss the loss due to our proposed convex reformulation and provide some simulation results.

[131]
Title: Multi-Codec DASH Dataset
Comments: 6 pages, submitted to ACM MMSys'18 (dataset track)
Subjects: Multimedia (cs.MM)

The number of bandwidth-hungry applications and services is constantly growing. HTTP adaptive streaming of audio-visual content accounts for the majority of today's internet traffic. Although internet bandwidth also increases constantly, audio-visual compression technology remains indispensable, and we currently face the challenge of dealing with multiple video codecs. This paper proposes a multi-codec DASH dataset comprising AVC, HEVC, VP9, and AV1, in order to enable interoperability testing and streaming experiments for the efficient usage of these codecs under various conditions. We adopt state-of-the-art encoding and packaging options and also provide basic quality metrics along with the DASH segments. Additionally, we briefly introduce a multi-codec DASH scheme and possible usage scenarios. Finally, we provide a preliminary evaluation of the encoding efficiency in the context of HTTP adaptive streaming services and applications.

[132]
Title: On the streaming complexity of fundamental geometric problems
Subjects: Computational Geometry (cs.CG)

In this paper, we focus on lower bounds and algorithms for some basic geometric problems in the one-pass (insertion only) streaming model. The problems considered are grouped into three categories:
(i) Klee's measure
(ii) Convex body approximation, geometric query, and
(iii) Discrepancy
Klee's measure is the problem of finding the area of the union of hyperrectangles. Under convex body approximation, we consider the problems of convex hull, convex body approximation, and linear programming in fixed dimensions. The results for convex body approximation imply a property-testing-type result for determining whether a query point lies inside a convex polyhedron. Under discrepancy, we consider both geometric and combinatorial discrepancy. For all the problems considered, we present (randomized) lower bounds on space. Most of our lower bounds are in terms of approximating the solution with respect to an error parameter $\epsilon$. We provide approximation algorithms that closely match the lower bound on space for most of the problems.
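As a toy illustration of the one-pass model (not an algorithm from the paper), the following sketch keeps only the input point that is extreme in each of k evenly spaced directions, a classic O(k)-space way to approximate the extent of a convex hull over a stream; all names and parameters here are our own.

```python
import math

def stream_hull(points, k=16):
    """One-pass sketch: remember the point extreme in each of k evenly
    spaced directions, an O(k)-space approximation of the convex hull."""
    dirs = [(math.cos(2 * math.pi * i / k), math.sin(2 * math.pi * i / k))
            for i in range(k)]
    best = [None] * k
    best_dot = [-math.inf] * k
    for x, y in points:                 # single pass over the stream
        for i, (dx, dy) in enumerate(dirs):
            d = x * dx + y * dy
            if d > best_dot[i]:
                best_dot[i], best[i] = d, (x, y)
    return best

pts = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]
hull = stream_hull(pts, k=8)
# the interior point (0.5, 0.5) is never retained; hull vertices like (1, 1) are
```

Finer direction grids (larger k) shrink the approximation error at the cost of linearly more space, which is the kind of space/error trade-off the lower bounds above quantify.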

[133]
Title: Parameterized complexity of fair deletion problems II
Subjects: Computational Complexity (cs.CC); Data Structures and Algorithms (cs.DS); Logic in Computer Science (cs.LO)

Vertex deletion problems are those where, given a graph G and a graph property \pi, the goal is to find a subset of vertices W such that G \ W satisfies property \pi. Typically, we want to minimize the size of the deletion set W. In contrast, fair vertex deletion problems change the objective: we minimize the maximum number of vertices deleted in the neighborhood of any vertex.
When the property \pi is expressible by an MSO formula, we refer to this special case as the MSO fair vertex deletion problem. We prove that there is an FPT algorithm for the MSO fair vertex deletion problem parameterized by the twin cover number.
We study the parameterized complexity of the Fair Vertex Cover (FairVC) problem. It turns out that the FairVC problem is among the simplest problems with respect to the property \pi (here \pi describes an edgeless graph). We prove that the FairVC problem is W[1]-hard when parameterized by both the treedepth and the feedback vertex set number of the input graph. On the positive side, we provide an FPT algorithm for the FairVC problem parameterized by modular width.
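To make the fair objective concrete, here is a minimal sketch (our own, not from the paper) computing the fair cost of a deletion set: the maximum number of deleted vertices found in any single vertex's neighborhood.

```python
def fair_cost(adj, deleted):
    """Fair objective: the maximum, over all vertices v, of the number of
    deleted vertices in the neighborhood of v."""
    W = set(deleted)
    return max(len(W & set(adj[v])) for v in adj)

# Star K_{1,3}: deleting the three leaves covers all edges but has fair
# cost 3 as seen from the center; deleting the center alone has cost 1.
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
fair_cost(star, [1, 2, 3])  # 3
fair_cost(star, [0])        # 1
```

The star example shows why fair deletion differs from ordinary minimization: both sets above are vertex covers, but their fair costs diverge.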

[134]
Title: A Mixture of Views Network with Applications to the Classification of Breast Microcalcifications
Subjects: Computer Vision and Pattern Recognition (cs.CV); Learning (cs.LG); Machine Learning (stat.ML)

In this paper we examine data fusion methods for multi-view data classification. We present a decision concept which explicitly takes into account the multi-view structure of the input, where a different subset of views is relevant for each case. This data fusion concept, which we dub Mixture of Views, is implemented by a special-purpose neural network architecture. It is demonstrated on the task of classifying breast microcalcifications as benign or malignant based on CC and MLO mammography views. The single-view decisions are combined into a global decision by a data-driven rule, according to the relevance of each view in the given case. The method is evaluated on a large multi-view dataset extracted from the standardized digital database for screening mammography (DDSM). The experimental results show that our method outperforms previously suggested fusion methods.
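A minimal sketch of the gating idea (the paper's actual combination rule is learned inside a neural network; the scores and logits below are placeholders): per-view scores are combined with data-driven softmax weights encoding each view's relevance for the case at hand.

```python
import math

def mixture_of_views(view_scores, gate_logits):
    """Combine per-view scores with a softmax gate whose logits encode
    how relevant each view is for the current case."""
    m = max(gate_logits)                       # stabilized softmax
    exps = [math.exp(g - m) for g in gate_logits]
    z = sum(exps)
    return sum((e / z) * s for e, s in zip(exps, view_scores))

# The CC view is down-weighted (logit -2), so the MLO score dominates.
score = mixture_of_views([0.2, 0.9], [-2.0, 2.0])   # ~0.887
```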

[135]
Title: Aerial LaneNet: Lane Marking Semantic Segmentation in Aerial Imagery using Wavelet-Enhanced Cost-sensitive Symmetric Fully Convolutional Neural Networks
Comments: Submitted to IEEE Transactions on Geoscience and Remote Sensing - IEEE is credited as copyright holder
Subjects: Computer Vision and Pattern Recognition (cs.CV)

Knowledge of the placement and appearance of lane markings is a prerequisite for creating high-precision maps, which are necessary for autonomous driving, infrastructure monitoring, lane-wise traffic management, and urban planning. Lane markings are one of the important components of such maps, and they convey the rules of the road to drivers. While humans learn these rules, an autonomous driving vehicle must also be taught them in order to localize itself. Therefore, accurate and reliable lane marking semantic segmentation in imagery of roads and highways is needed to achieve such goals. We use airborne imagery, which can capture a large area in a short period of time, and introduce an aerial lane marking dataset. In this work, we propose a Symmetric Fully Convolutional Neural Network enhanced by the Wavelet Transform in order to automatically carry out lane marking segmentation in aerial imagery. Because the problem is heavily unbalanced in terms of the number of lane marking pixels compared with background pixels, we use a customized loss function as well as a new type of data augmentation step. We achieve very high accuracy in pixel-wise localization of lane markings without using third-party information. The dataset used in our experiments, the first high-quality one of its kind, contains a broad range of situations and classes of lane markings representative of current transportation systems. This dataset will be publicly available and can hence be used as a benchmark dataset for future algorithms within this domain.
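The class-imbalance issue can be illustrated with a generic cost-sensitive binary cross-entropy (a stand-in, not the paper's exact loss), which up-weights the rare lane-marking pixels relative to background pixels.

```python
import math

def weighted_bce(preds, labels, pos_weight):
    """Cost-sensitive binary cross-entropy: the rare positive class
    (lane-marking pixels) is up-weighted by pos_weight."""
    total = 0.0
    for p, y in zip(preds, labels):
        p = min(max(p, 1e-7), 1 - 1e-7)        # clip for numerical safety
        w = pos_weight if y == 1 else 1.0
        total -= w * (y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(preds)

# With pos_weight set to the background/foreground pixel ratio, one missed
# lane pixel costs as much as many misclassified background pixels.
loss = weighted_bce([0.9, 0.1], [1, 0], pos_weight=10)
```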

[136]
Title: TBD: Benchmarking and Analyzing Deep Neural Network Training
Subjects: Learning (cs.LG); Machine Learning (stat.ML)

The recent popularity of deep neural networks (DNNs) has generated a lot of research interest in performing DNN-related computation efficiently. However, the primary focus is usually very narrow and limited to (i) inference -- i.e. how to efficiently execute already trained models and (ii) image classification networks as the primary benchmark for evaluation.
Our primary goal in this work is to break this myopic view by (i) proposing a new benchmark for DNN training, called TBD (TBD is short for Training Benchmark for DNNs), that uses a representative set of DNN models that cover a wide range of machine learning applications: image classification, machine translation, speech recognition, object detection, adversarial networks, reinforcement learning, and (ii) by performing an extensive performance analysis of training these different applications on three major deep learning frameworks (TensorFlow, MXNet, CNTK) across different hardware configurations (single-GPU, multi-GPU, and multi-machine). TBD currently covers six major application domains and eight different state-of-the-art models.
We present a new toolchain for performance analysis of these models that combines the targeted usage of existing performance analysis tools, careful selection of new and existing metrics and methodologies to analyze the results, and utilization of domain-specific characteristics of DNN training. We also build a new set of tools for memory profiling in all three major frameworks; these much-needed tools can finally shed light on precisely how much memory is consumed by different data structures (weights, activations, gradients, workspace) in DNN training. Using our tools and methodologies, we make several important observations and recommendations on where future research and optimization of DNN training should be focused.
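As a back-of-envelope illustration of the kind of accounting such memory profilers enable (a simplified fully-connected case; not the paper's tools), one can tally the bytes taken by weights, their gradients, and the activations retained for backpropagation:

```python
def training_memory_bytes(layer_sizes, batch, dtype_bytes=4):
    """Rough tally of training memory for a plain fully-connected network:
    weight matrices, matching gradients, and stored per-layer activations."""
    weights = sum(m * n for m, n in zip(layer_sizes, layer_sizes[1:]))
    activations = batch * sum(layer_sizes)
    return dtype_bytes * (2 * weights + activations)   # weights + grads + acts

training_memory_bytes([4, 3, 2], batch=10)   # 504 bytes for this tiny example
```

Real profilers additionally track framework workspace and allocator overhead, which this sketch deliberately ignores.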

[137]
Title: Table Based Detection of Degenerate Predicates in Free Space Construction
Comments: Full version of paper to appear in proceedings of Symposium on Computational Geometry 2018. 16 pages. 8 figures
Subjects: Computational Geometry (cs.CG)

The key to a robust and efficient implementation of a computational geometry algorithm is an efficient algorithm for detecting degenerate predicates. We study degeneracy detection in constructing the free space of a polyhedron that rotates around a fixed axis and translates freely relative to another polyhedron. The structure of the free space is determined by the signs of univariate polynomials, called angle polynomials, whose coefficients are polynomials in the coordinates of the vertices of the polyhedra. Every predicate is expressible as the sign of an angle polynomial $f$ evaluated at a zero $t$ of an angle polynomial $g$. A predicate is degenerate (the sign is zero) when $t$ is a zero of a common factor of $f$ and $g$. We present an efficient degeneracy detection algorithm based on a one-time factoring of every possible angle polynomial. Our algorithm is 3500 times faster than the standard algorithm based on greatest common divisor computation. It reduces the share of degeneracy detection in our free space computations from 90% to 0.5% of the running time.
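The degeneracy test reduces to detecting a common factor of two polynomials; a minimal sketch (our own) of the underlying GCD computation over the rationals follows, keeping in mind that the paper's contribution is precisely avoiding this per-query cost via a one-time factoring.

```python
from fractions import Fraction

def poly_divmod(f, g):
    """Long division of polynomials given as coefficient lists
    (highest degree first); returns (quotient, remainder)."""
    r = [Fraction(c) for c in f]
    g = [Fraction(c) for c in g]
    q = []
    while len(r) >= len(g):
        c = r[0] / g[0]
        q.append(c)
        pad = g + [Fraction(0)] * (len(r) - len(g))
        r = [a - c * b for a, b in zip(r, pad)][1:]
    while r and r[0] == 0:          # strip leading zeros of the remainder
        r = r[1:]
    return q, r

def poly_gcd(f, g):
    """Euclidean algorithm; a monic gcd of degree >= 1 certifies that f and
    g share a factor, the situation in which a predicate can be degenerate."""
    while g:
        _, r = poly_divmod(f, g)
        f, g = g, r
    return [c / f[0] for c in f]    # normalize to a monic polynomial

poly_gcd([1, 1, -2], [1, -4, 3])    # [1, -1]: common factor (t - 1)
poly_gcd([1, 0, -2], [1, -1])       # [1]: coprime, never degenerate
```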

[138]
Title: Unsupervised Semantic Deep Hashing
Authors: Sheng Jin
Subjects: Computer Vision and Pattern Recognition (cs.CV)

In recent years, deep hashing methods have proved to be efficient, since they employ convolutional neural networks to learn features and hashing codes simultaneously. However, these methods are mostly supervised, and in real-world applications annotating a large number of images is time-consuming and laborious. In this paper, we propose a novel unsupervised deep hashing method for large-scale image retrieval. Our method, unsupervised semantic deep hashing (\textbf{USDH}), uses semantic information preserved in the CNN feature layer to guide the training of the network. We enforce four criteria on hashing codes learned on top of the VGG-19 model: 1) preserving the relevant information of the feature space in the hashing space; 2) minimizing the quantization loss between binary-like codes and hashing codes; 3) improving the usage of each bit in the hashing codes via maximum information entropy; and 4) invariance to image rotation. Extensive experiments on CIFAR-10 and NUS-WIDE demonstrate that \textbf{USDH} outperforms several state-of-the-art unsupervised hashing methods for image retrieval. We also conduct experiments on the Oxford 17 dataset for fine-grained classification to verify its efficiency for other computer vision tasks.
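Criteria 2) and 3) can be sketched in a few lines (illustrative formulas under our own simplifying conventions, not the paper's exact objectives):

```python
import math

def quantization_loss(codes):
    """Criterion 2: squared distance between continuous codes and their
    binarizations sign(c) in {-1, +1}, averaged over images."""
    return sum((c - (1.0 if c >= 0 else -1.0)) ** 2
               for code in codes for c in code) / len(codes)

def bit_entropy(codes):
    """Criterion 3: mean entropy per bit position; it is maximal (1.0)
    exactly when each bit is positive on half of the images."""
    n_bits = len(codes[0])
    h = 0.0
    for b in range(n_bits):
        p = sum(1 for code in codes if code[b] >= 0) / len(codes)
        for q in (p, 1 - p):
            if q > 0:
                h -= q * math.log2(q)
    return h / n_bits

codes = [[0.9, -0.8], [-0.7, 0.95]]   # two images, 2-bit continuous codes
quantization_loss(codes)               # small: codes already near +/-1
bit_entropy(codes)                     # 1.0: every bit splits the set evenly
```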

[139]
Title: Newton: Gravitating Towards the Physical Limits of Crossbar Acceleration
Subjects: Learning (cs.LG); Hardware Architecture (cs.AR)

Many recent works have designed accelerators for Convolutional Neural Networks (CNNs). While digital accelerators have relied on near data processing, analog accelerators have further reduced data movement by performing in-situ computation. Recent works take advantage of highly parallel analog in-situ computation in memristor crossbars to accelerate the many vector-matrix multiplication operations in CNNs. However, these in-situ accelerators have two significant shortcomings that we address in this work. First, the ADCs account for a large fraction of chip power and area. Second, these accelerators adopt a homogeneous design where every resource is provisioned for the worst case. By addressing both problems, the new architecture, Newton, moves closer to achieving optimal energy-per-neuron for crossbar accelerators.
We introduce multiple new techniques that apply at different levels of the tile hierarchy. Two of the techniques leverage heterogeneity: one adapts ADC precision based on the requirements of every sub-computation (with zero impact on accuracy), and the other designs tiles customized for convolutions or classifiers. Two other techniques rely on divide-and-conquer numeric algorithms to reduce computations and ADC pressure. Finally, we place constraints on how a workload is mapped to tiles, thus helping reduce resource provisioning in tiles. For a wide range of CNN dataflows and structures, Newton achieves a 77% decrease in power, 51% improvement in energy efficiency, and 2.2x higher throughput/area, relative to the state-of-the-art ISAAC accelerator.

[140]
Title: Approximating Flexibility in Distributed Energy Resources: A Geometric Approach
Comments: accepted for presentation at the Power Systems Computations Conference 2018
Subjects: Systems and Control (cs.SY); Optimization and Control (math.OC)

With the increasing availability of communication and control infrastructure in distribution systems, it is expected that distributed energy resources (DERs) will take an active part in future power system operations. One of the main challenges associated with integrating DERs into grid planning and control is estimating the available flexibility in a collection of (heterogeneous) DERs, each of which may have local constraints that vary over time. In this work, we present a geometric approach for approximating the flexibility of a DER in modulating its active and reactive power consumption. The proposed method is agnostic to the type and model of the DERs, thereby facilitating a plug-and-play approach, and allows scalable aggregation of the flexibility of a collection of (heterogeneous) DERs at the distribution system level. Simulation results are presented to demonstrate the performance of the proposed method.
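In the simplest case, where each DER's flexibility is approximated by an axis-aligned (P, Q) box, aggregation is a Minkowski sum and stays a box; a hedged sketch with made-up numbers (the paper's geometric sets are more general):

```python
def aggregate_flexibility(ders):
    """Each DER's flexibility is an axis-aligned box ((p_min, p_max),
    (q_min, q_max)); the fleet's flexibility is the Minkowski sum,
    which for boxes is the coordinate-wise sum of the intervals."""
    p_lo = sum(p[0] for p, _ in ders)
    p_hi = sum(p[1] for p, _ in ders)
    q_lo = sum(q[0] for _, q in ders)
    q_hi = sum(q[1] for _, q in ders)
    return (p_lo, p_hi), (q_lo, q_hi)

fleet = [((-1.0, 2.0), (-0.5, 0.5)),   # e.g. a small battery (made-up limits)
         ((0.0, 3.0), (-1.0, 1.0))]    # e.g. a flexible load (made-up limits)
aggregate_flexibility(fleet)           # ((-1.0, 5.0), (-1.5, 1.5))
```

Because the aggregation is a simple sum of per-device descriptions, new DERs can be added without re-deriving the fleet model, which is the plug-and-play property highlighted above.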

[141]
Title: Cloud Workload Prediction based on Workflow Execution Time Discrepancies
Subjects: Distributed, Parallel, and Cluster Computing (cs.DC)

Infrastructure-as-a-service clouds hide the complexity of maintaining the physical infrastructure, with a slight disadvantage: they also hide their internal working details. Should users need knowledge about these details, e.g., to increase the reliability or performance of their applications, they would need solutions to detect behavioural changes in the underlying system. Existing runtime solutions for such purposes offer limited capabilities, as they are mostly restricted to revealing weekly or yearly behavioural periodicity in the infrastructure. This article proposes a technique for predicting generic background workload by means of simulations that are capable of providing additional knowledge of the underlying private cloud systems in order to support activities like cloud orchestration or workflow enactment. Our technique uses long-running scientific workflows and their behavioural discrepancies and tries to replicate these in a simulated cloud with known (trace-based) workloads. We argue that the better we can mimic the current discrepancies, the better we can predict the expected workloads on the real-life cloud in the near future. We evaluated the proposed prediction approach with a biochemical application on both real and simulated cloud infrastructures. The proposed algorithm has been shown to produce significantly (~20%) better workload predictions for the near future of simulated clouds than random workload selection.
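The matching idea can be sketched as selecting, among candidate background-workload traces, the one whose simulated workflow runtimes minimize the squared discrepancy from the observed runtimes (the simulator and all numbers below are toy stand-ins):

```python
def predict_background_load(observed, candidates, simulate):
    """Select the candidate background-workload trace whose simulated
    workflow runtimes best replicate the observed discrepancies."""
    def discrepancy(trace):
        return sum((s - o) ** 2 for s, o in zip(simulate(trace), observed))
    return min(candidates, key=discrepancy)

# Toy stand-in for the cloud simulator: runtimes grow linearly with load.
simulate = lambda load: [10 + 5 * load, 12 + 5 * load]
predict_background_load([20.2, 22.1], [0.0, 1.0, 2.0], simulate)  # 2.0
```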

[142]
Title: PyGOM - A Python Package for Simplifying Modelling with Systems of Ordinary Differential Equations
Subjects: Mathematical Software (cs.MS); Classical Analysis and ODEs (math.CA)

Ordinary Differential Equations (ODEs) are used throughout science wherever rates of change in states must be captured. While both commercial and open-source software exist to study such systems, their efficient and accurate usage frequently requires a deep understanding of mathematics and programming. The package we present here, PyGOM, seeks to remove these obstacles for models based on ODE systems. We provide a simple interface for the construction of such systems, backed by a comprehensive and easy-to-use toolbox. This toolbox implements functions to easily perform common operations for ODE systems, such as solving, parameter estimation, and stochastic simulation. The package source is freely available and organized in a way that permits easy extension. With both the algebraic and numeric calculations performed automatically (but still accessible), the end user is freed to focus on model development.
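For illustration only (this is not PyGOM's API), solving an ODE system such as the SIR epidemic model can be sketched with a fixed-step Runge-Kutta 4 integrator; PyGOM wraps production solvers and builds the model symbolically instead.

```python
def rk4(f, y0, t0, t1, n):
    """Fixed-step classic Runge-Kutta 4 integrator for y' = f(t, y)."""
    h = (t1 - t0) / n
    t, y = t0, list(y0)
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
        k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
        k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
    return y

# SIR epidemic model: S' = -beta*S*I, I' = beta*S*I - gamma*I, R' = gamma*I
beta, gamma = 0.5, 0.2                 # illustrative rates
sir = lambda t, y: [-beta * y[0] * y[1],
                    beta * y[0] * y[1] - gamma * y[1],
                    gamma * y[1]]
s, i, r = rk4(sir, [0.99, 0.01, 0.0], 0.0, 50.0, 500)
# s + i + r stays 1: the population is conserved by construction
```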

[143]
Title: Inverse Visual Question Answering: A New Benchmark and VQA Diagnosis Tool
Subjects: Computer Vision and Pattern Recognition (cs.CV)

In recent years, visual question answering (VQA) has become topical. The premise of VQA's significance as a benchmark in AI is that both the image and textual question need to be well understood and mutually grounded in order to infer the correct answer. However, current VQA models perhaps `understand' less than initially hoped, and instead master the easier task of exploiting cues given away in the question and biases in the answer distribution. In this paper we propose the inverse problem of VQA (iVQA). The iVQA task is to generate a question that corresponds to a given image and answer pair. We propose a variational iVQA model that can generate diverse, grammatically correct and content correlated questions that match the given answer. Based on this model, we show that iVQA is an interesting benchmark for visuo-linguistic understanding, and a more challenging alternative to VQA because an iVQA model needs to understand the image better to be successful. As a second contribution, we show how to use iVQA in a novel reinforcement learning framework to diagnose any existing VQA model by way of exposing its belief set: the set of question-answer pairs that the VQA model would predict true for a given image. This provides a completely new window into what VQA models `believe' about images. We show that existing VQA models have more erroneous beliefs than previously thought, revealing their intrinsic weaknesses. Suggestions are then made on how to address these weaknesses going forward.

[144]
Title: Deja Vu: Motion Prediction in Static Images
Comments: Published in the proceedings of the European Conference on Computer Vision (ECCV), 2014
Subjects: Computer Vision and Pattern Recognition (cs.CV)

This paper proposes motion prediction in single still images by learning it from a set of videos. The underlying assumption is that similar motion is characterized by similar appearance. The proposed method learns local motion patterns given a specific appearance and adds the predicted motion in a number of applications. This work (i) introduces a novel method to predict motion from appearance in a single static image, (ii) to that end, extends the Structured Random Forest to regression, derived from first principles, and (iii) shows the value of adding motion predictions in different tasks, such as weak frame proposals containing unexpected events, action recognition, and motion saliency. Illustrative results indicate that motion prediction is not only feasible, but also provides valuable information for a number of applications.

[145]
Title: Asymmetric kernel in Gaussian Processes for learning target variance
Comments: Accepted in Pattern Recognition Letters, 2018
Subjects: Learning (cs.LG); Computer Vision and Pattern Recognition (cs.CV); Machine Learning (stat.ML)

This work incorporates the multi-modality of the data distribution into a Gaussian Process regression model. We approach the problem from a discriminative perspective by learning, jointly over the training data, the target space variance in the neighborhood of a certain sample through metric learning. We start by using data centers rather than all training samples. Subsequently, each center selects an individualized kernel metric. This enables each center to adjust the kernel space in its vicinity in correspondence with the topology of the targets: a multi-modal approach. We additionally add descriptiveness by allowing each center to learn a precision matrix. We demonstrate empirically the reliability of the model.
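A per-center precision matrix amounts to a Mahalanobis-distance RBF kernel; a minimal 2-D sketch (the precision values below are hypothetical, not learned):

```python
import math

def center_kernel(x, center, precision):
    """Anisotropic RBF: k(x, c) = exp(-(x - c)^T A_c (x - c)), where each
    center c carries its own precision matrix A_c."""
    d = [xi - ci for xi, ci in zip(x, center)]
    quad = sum(d[i] * precision[i][j] * d[j]
               for i in range(len(d)) for j in range(len(d)))
    return math.exp(-quad)

# A precision stretched along the first axis penalizes that direction more,
# so the kernel decays faster there: the center reshapes its local metric.
aniso = [[4.0, 0.0], [0.0, 0.25]]      # hypothetical learned precision
center_kernel([1.0, 0.0], [0.0, 0.0], aniso)  # exp(-4): fast decay
center_kernel([0.0, 1.0], [0.0, 0.0], aniso)  # exp(-0.25): slow decay
```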

[146]
Title: AISC: Approximate Instruction Set Computer
Subjects: Hardware Architecture (cs.AR)

This paper makes the case for a single-ISA heterogeneous computing platform, AISC, where each compute engine (be it a core or an accelerator) supports a different subset of the very same ISA. An ISA subset may not be functionally complete, but the union of the (per compute engine) subsets renders a functionally complete, platform-wide single ISA. Tailoring the microarchitecture of each compute engine to the subset of the ISA that it supports can easily reduce hardware complexity. At the same time, the energy efficiency of computing can improve by exploiting algorithmic noise tolerance: by mapping code sequences that can tolerate (any potential inaccuracy induced by) the incomplete ISA-subsets to the corresponding compute engines.
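A hedged sketch of the mapping idea (engine names and ISA subsets below are invented): a code region runs on any engine whose ISA subset covers the region's opcodes, while the union of all subsets forms the complete platform ISA.

```python
def dispatch(region_ops, engines):
    """Run a code region on the first engine whose ISA subset covers all of
    the region's opcodes; the union of subsets is the full platform ISA."""
    need = set(region_ops)
    for name, isa in engines.items():
        if need <= isa:
            return name
    raise ValueError("no single engine covers this region; split it")

engines = {"little": {"add", "ld", "st"},               # invented subsets
           "big": {"add", "ld", "st", "mul", "div"}}
dispatch(["ld", "add", "st"], engines)   # 'little'
dispatch(["ld", "mul"], engines)         # 'big'
```

A noise-tolerant region could additionally be allowed to run on an engine with incomplete coverage, trading accuracy for energy, which is the exploitation of algorithmic noise tolerance described above.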

[147]
Title: Techniques for Shared Resource Management in Systems with Throughput Processors
Subjects: Hardware Architecture (cs.AR)

The continued growth of the computational capability of throughput processors has made throughput processors the platform of choice for a wide variety of high performance computing applications. Graphics Processing Units (GPUs) are a prime example of throughput processors that can deliver high performance for applications ranging from typical graphics applications to general-purpose data parallel (GPGPU) applications. However, this success has been accompanied by new performance bottlenecks throughout the memory hierarchy of GPU-based systems. We identify and eliminate performance bottlenecks caused by major sources of interference throughout the memory hierarchy.
We introduce changes to the memory hierarchy for systems with GPUs that allow the memory hierarchy to be aware of both CPU and GPU applications' characteristics. We introduce mechanisms to dynamically analyze different applications' characteristics and propose four major changes throughout the memory hierarchy. We propose changes to the cache management and memory scheduling mechanisms to mitigate intra-application interference in GPGPU applications. We propose changes to the memory controller design and its scheduling policy to mitigate inter-application interference in heterogeneous CPU-GPU systems. We redesign the MMU and the memory hierarchy in GPUs to be aware of address-translation data in order to mitigate the inter-address-space interference. We introduce a hardware-software cooperative technique that modifies the memory allocation policy to enable large page support in order to further reduce the inter-address-space interference at the shared Translation Lookaside Buffer (TLB). Our evaluations show that the GPU-aware cache and memory management techniques proposed in this dissertation are effective at mitigating the interference caused by GPUs on current and future GPU-based systems.

[148]
Comments: 30 pages, submitted to ICFP'18
Subjects: Programming Languages (cs.PL)

Good tools can bring mechanical verification to programs written in mainstream functional languages. We use hs-to-coq to translate significant portions of Haskell's containers library into Coq, and verify it against specifications that we derive from a variety of sources including type class laws, the library's test suite, and interfaces from Coq's standard library. Our work shows that it is feasible to verify mature, widely-used, highly optimized, and unmodified Haskell code. We also learn more about the theory of weight-balanced trees, extend hs-to-coq to handle partiality, and -- since we found no bugs -- attest to the superb quality of well-tested functional code.

[149]
Title: Featureless: Bypassing feature extraction in action categorization
Comments: Published in the proceedings of the International Conference on Image Processing (ICIP), 2016
Subjects: Computer Vision and Pattern Recognition (cs.CV)

This paper introduces an efficient way of learning action categories without the need for feature estimation. The approach starts from low-level values, in a similar style to the successful CNN methods. However, rather than extracting general image features, we learn to predict specific video representations from raw video data. The benefit of such an approach is that, at the same computational expense, it can predict 2D video representations as well as 3D ones, based on motion. The proposed model relies on discriminative Waldboost, which we enhance to a multiclass formulation for the purpose of learning video representations. The suitability of the proposed approach as well as its time efficiency are tested on the UCF11 action recognition dataset.

[150]
Title: Polyglot Semantic Parsing in APIs
Subjects: Computation and Language (cs.CL)

Traditional approaches to semantic parsing (SP) work by training individual models for each available parallel dataset of text-meaning pairs. In this paper, we explore the idea of polyglot semantic translation, or learning semantic parsing models that are trained on multiple datasets and natural languages. In particular, we focus on translating text to code signature representations using the software component datasets of Richardson and Kuhn (2017a,b). The advantage of such models is that they can be used for parsing a wide variety of input natural languages and output programming languages, or mixed input languages, using a single unified model. To facilitate modeling of this type, we develop a novel graph-based decoding framework that achieves state-of-the-art performance on the above datasets, and apply this method to two other benchmark SP tasks.

[151]
Title: The environmental footprint of a distributed cloud storage
Authors: Lorenzo Posani
Subjects: Distributed, Parallel, and Cluster Computing (cs.DC)

Every time we access a file on the cloud, a chain of routing servers is activated to transfer the payload from the storage facility to the user's device, and back. The data center, although densely packed and highly performant, is hardly optimized in terms of energy consumption and environmental impact. The ICT (information and communication technologies) ecosystem is estimated to be responsible for 10 percent of worldwide energy consumption, equivalent to the combined energy demand of Germany and Japan. Moreover, being a fast-inflating and almost unregulated market, the environmental footprint caused by data storage and transfer on the internet shows no signs of slowing down. In this paper, we analyze a reversed paradigm for cloud storage (implemented by Cubbit - www.cubbit.io) in which data are stored and distributed over a network of p2p-interacting single-board computers. We compare Cubbit to the traditional server-based solution in terms of environmental footprint and power/usage efficiency. First, being virtualized and distributed over small single-board computers, storage is far more efficient than data-center racks and does not need any additional cooling. Secondly, the distributed architecture can leverage the proximity of files to the user (e.g. within the same city) to minimize the consumption of the powered route from the virtualized storage facility to the user's device. We quantify both effects using the internet model published by Baliga et al. (Proceedings of the IEEE 99.1, 2011), and compare the estimates with the same model applied to a server-based cloud storage service (e.g. Dropbox). Results show a remarkable reduction in energy consumption for both storage (less than 1/10 of the consumption) and transfers (ca. 1/2 of the consumption).

[152]
Title: When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks
Subjects: Cryptography and Security (cs.CR); Learning (cs.LG)

Attacks against machine learning systems represent a growing threat as highlighted by the abundance of attacks proposed lately. However, attacks often make unrealistic assumptions about the knowledge and capabilities of adversaries. To evaluate this threat systematically, we propose the FAIL attacker model, which describes the adversary's knowledge and control along four dimensions. The FAIL model allows us to consider a wide range of weaker adversaries that have limited control and incomplete knowledge of the features, learning algorithms and training instances utilized. Within this framework, we evaluate the generalized transferability of a known evasion attack and we design StingRay, a targeted poisoning attack that is broadly applicable---it is practical against 4 machine learning applications, which use 3 different learning algorithms, and it can bypass 2 existing defenses. Our evaluation provides deeper insights into the transferability of poison and evasion samples across models and suggests promising directions for investigating defenses against this threat.

[153]
Title: Exact Distance Oracles Using Hopsets
Authors: Siddharth Gupta (UCI), Adrian Kosowski (GANG), Laurent Viennot (GANG)
Subjects: Data Structures and Algorithms (cs.DS)

For fixed $h \geq 2$, we consider the task of adding to a graph $G$ a set of weighted shortcut edges on the same vertex set, such that the length of a shortest $h$-hop path between any pair of vertices in the augmented graph is exactly the same as the original distance between these vertices in $G$. A set of shortcut edges with this property is called an exact $h$-hopset and may be applied in processing distance queries on graph $G$. In particular, a $2$-hopset directly corresponds to a distributed distance oracle known as a hub labeling. In this work, we explore centralized distance oracles based on $3$-hopsets and display their advantages in several practical scenarios. In particular, for graphs of constant highway dimension, and more generally for graphs of constant skeleton dimension, we show that $3$-hopsets require exponentially fewer shortcuts per node than any previously described distance oracle while incurring only a quadratic increase in the query decoding time, and actually offer a speedup when compared to simple oracles based on a direct application of $2$-hopsets. Finally, we consider the problem of computing a minimum-size $h$-hopset (for any $h \geq 2$) for a given graph $G$, showing a polylogarithmic-factor approximation for the case of unique shortest path graphs. When $h=3$, for a given bound on the space used by the distance oracle, we provide a construction of hopsets achieving polylog approximation both for space and query time compared to the optimal $3$-hopset oracle given the space bound.
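A 2-hopset corresponds to a hub labeling, whose distance query is easy to sketch (the graph and labels below are hypothetical, chosen so every pair of vertices shares a hub):

```python
def hub_distance(labels_u, labels_v):
    """Hub-labeling (2-hopset) query: d(u, v) = min over common hubs h of
    d(u, h) + d(h, v); each label maps hub -> distance from that vertex."""
    common = labels_u.keys() & labels_v.keys()
    return min(labels_u[h] + labels_v[h] for h in common)

# Path graph a - b - c - d: 'b' serves the left side, 'c' the right.
labels = {
    'a': {'a': 0, 'b': 1, 'c': 2},
    'd': {'b': 2, 'c': 1, 'd': 0},
}
hub_distance(labels['a'], labels['d'])   # 3, the true a-d distance
```

A 3-hopset query instead composes up to three labeled hops, which is where the space-versus-decoding-time trade-off discussed above arises.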

[154]
Title: Improving Transferability of Adversarial Examples with Input Diversity
Comments: Submitted to ECCV 2018, code available at this https URL
Subjects: Computer Vision and Pattern Recognition (cs.CV); Learning (cs.LG); Machine Learning (stat.ML)

[155]
Title: Synthesis of Logical Clifford Operators via Symplectic Geometry
Comments: Single column, main text: 20 pages, full length with appendices: 32 pages. Includes pseudo-codes for all algorithms. Part of this work has been submitted to the 2018 IEEE International Symposium on Information Theory
Subjects: Information Theory (cs.IT); Quantum Physics (quant-ph)

Quantum error-correcting codes can be used to protect qubits involved in quantum computation. This requires that logical operators acting on protected qubits be translated to physical operators (circuits) acting on physical quantum states. We propose a mathematical framework for synthesizing physical circuits that implement logical Clifford operators for stabilizer codes. Circuit synthesis is enabled by representing the desired physical Clifford operator in $\mathbb{C}^{N \times N}$ as a partial $2m \times 2m$ binary symplectic matrix, where $N = 2^m$. We state and prove two theorems that use symplectic transvections to efficiently enumerate all symplectic matrices that satisfy a system of linear equations. As an important corollary of these results, we prove that for an $[\![ m,m-k ]\!]$ stabilizer code every logical Clifford operator has $2^{k(k+1)/2}$ symplectic solutions. The desired physical circuits are then obtained by decomposing each solution as a product of elementary symplectic matrices. Our assembly of the possible physical realizations enables optimization over them with respect to a suitable metric. Furthermore, we show that any circuit that normalizes the stabilizer of the code can be transformed into a circuit that centralizes the stabilizer, while realizing the same logical operation. Our method of circuit synthesis can be applied to any stabilizer code, and this paper provides a proof of concept synthesis of universal Clifford gates for the $[\![ 6,4,2 ]\!]$ CSS code. We conclude with a classical coding-theoretic perspective for constructing logical Pauli operators for CSS codes. Since our circuit synthesis algorithm builds on the logical Pauli operators for the code, this paper provides a complete framework for constructing all logical Clifford operators for CSS codes. Programs implementing our algorithms can be found at https://github.com/nrenga/symplectic-arxiv18a.
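The binary symplectic representation can be checked directly; a small sketch of ours verifies M * Omega * M^T = Omega over GF(2) with Omega = [[0, I], [I, 0]], the defining property of the matrices enumerated above:

```python
def is_symplectic(M):
    """Check M * Omega * M^T = Omega over GF(2), with Omega = [[0, I], [I, 0]];
    2m x 2m binary matrices with this property represent Clifford operators."""
    n = len(M)
    m = n // 2
    omega = [[1 if j == (i + m) % n else 0 for j in range(n)] for i in range(n)]
    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(n)) % 2
                 for j in range(n)] for i in range(n)]
    Mt = [list(col) for col in zip(*M)]
    return matmul(matmul(M, omega), Mt) == omega

is_symplectic([[0, 1], [1, 0]])   # True: the single-qubit Hadamard swaps X and Z
is_symplectic([[1, 1], [0, 0]])   # False: singular, not a valid operator
```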

[156]
Title: Experimental Molecular Communication Testbed Based on Magnetic Nanoparticles in Duct Flow
Comments: 6 pages, 4 figures, 1 table, invited paper in IEEE International Workshop on Signal Processing Advances in Wireless Communications (SPAWC) 2018
Subjects: Emerging Technologies (cs.ET)

Simple and easy-to-implement testbeds are needed to further advance molecular communication research. To this end, this paper presents an in-vessel molecular communication testbed using magnetic nanoparticles dispersed in an aqueous suspension, as also used for drug targeting in biotechnology. The transmitter is realized by an electronic pump for injection via a Y-connector. A second pump provides a background flow for signal propagation. For signal reception, we employ a susceptometer, an electronic device containing a coil through which the magnetic nanoparticles move, generating an electrical signal. We present experimental results for the transmission of a binary sequence and the system response following a single injection. For this flow-driven particle transport, we propose a simple parameterized mathematical model for evaluating the system response.

[157]
Title: Specifying and Analyzing Virtual Network Services Using Queuing Petri Nets
Subjects: Networking and Internet Architecture (cs.NI)

For optimal placement and orchestration of network services, it is crucial that their structure and semantics are specified clearly and comprehensively and are available to an orchestrator. Existing specification approaches are either ambiguous or miss important aspects regarding the behavior of virtual network functions (VNFs) forming a service. We propose to formally and unambiguously specify the behavior of these functions and services using Queuing Petri Nets (QPNs). QPNs are an established method that makes it possible to express queuing, synchronization, stochastically distributed processing delays, and changing traffic volume and characteristics at each VNF. With QPNs, multiple VNFs can be connected to form complete network services of any structure, even bidirectional network services containing loops.
We discuss how management and orchestration systems can benefit from our clear and comprehensive specification approach, leading to better placement of VNFs and improved Quality of Service. Another benefit of formally specifying network services with QPNs is the diverse analysis options they enable, which yield valuable insights such as the distribution of the end-to-end delay. We propose a tool-based workflow that supports the specification of network services and the automatic generation of corresponding simulation code to enable an in-depth analysis of their behavior and performance.

[158]
Title: Multi-access Edge Computing: The driver behind the wheel of 5G-connected cars
Comments: submitted to IEEE Communications Standards Magazine
Subjects: Networking and Internet Architecture (cs.NI)

The automotive and telco industries have taken an investment bet on the connected car market, pushing for the digital transformation of the sector by exploiting recent Information and Communication Technology (ICT) progress. As ICT developments continue, it is expected that the technology advancements will be able to fulfill the sophisticated requirements for vehicular use cases, such as low latency and reliable communications for safety, high computing power to process large amounts of sensed data, and increased bandwidth for on-board infotainment.
The aforementioned requirements have received significant focus during the ongoing definition of the 3GPP 5G mobile standards, where there has been a drive to facilitate vertical industries such as automotive, in addition to providing the core aspects of the communication infrastructure. Of the technology enablers for 5G, Multi-access Edge Computing (MEC) can be considered essential: a cloud environment located at the edge of the network, in proximity to the end-users and coupled with the service provider's network infrastructure. Even before 5G is rolled out, current mobile networks can already target support for these challenging use cases using MEC technology. This is because MEC is able to fulfill low latency and high bandwidth requirements, and, in addition, it lends itself to deployment at the premises of vertical industrial sectors such as road infrastructure, air/sea ports, smart factories, etc., thus bringing computing power where it is needed most.
This work showcases the automotive use cases that are relevant for MEC, providing insights into the technologies specified and investigated by the ETSI MEC Industry Specification Group (ISG), which pioneered a standardized computing platform for network-edge use cases in advanced mobile networks.

[159]
Title: Radio bearing of sources with directional antennas in urban environment
Authors: Cezary Ziolkowski, Jan M. Kelner (Military University of Technology, Faculty of Electronics, Institute of Telecommunications, Warsaw, Poland)
Comments: 24 pages, 16 figures, 3 tables; Accepted in International Journal of Microwave and Wireless Technologies (from Cambridge University Press)
Subjects: Information Theory (cs.IT)

This paper focuses on assessing the limitations in the direction-finding process of radio sources with directional antennas in an urbanized environment, demonstrating how signal source antenna parameters, such as beamwidth and maximum radiation direction, affect bearing accuracy in non-line-of-sight (NLOS) conditions. These evaluations are based on simulation studies, which use measurement-tested signal processing procedures. These procedures are based on a multi-elliptical propagation model, the geometry of which is related to the environment by the power delay profile or spectrum. The probability density function of the angle of arrival for different parameters of the transmitting antenna is the simulation result. This characteristic allows assessing the effect of the signal source antenna parameters on bearing error. The obtained results provide a basis for practical correction of bearing error and show the possibility of improving the efficiency of radio source location in urbanized environments.

[160]
Title: Live Target Detection with Deep Learning Neural Network and Unmanned Aerial Vehicle on Android Mobile Device
Subjects: Computer Vision and Pattern Recognition (cs.CV)

This paper describes the stages faced during the development of an Android program which obtains and decodes live images from a DJI Phantom 3 Professional Drone and implements certain features of the TensorFlow Android Camera Demo application. Test runs were made and outputs of the application were noted. A lake was classified as seashore, breakwater and pier with the proximities of 24.44%, 21.16% and 12.96% respectively. The joystick of the UAV controller and laptop keyboard were classified with the proximities of 19.10% and 13.96% respectively. The laptop monitor was classified as screen, monitor and television with the proximities of 18.77%, 14.76% and 14.00% respectively. The computer used during the development of this study was classified as notebook and laptop with the proximities of 20.04% and 11.68% respectively. A tractor parked at a parking lot was classified with the proximity of 12.88%. A group of cars in the same parking lot were classified as sports car, racer and convertible with the proximities of 31.75%, 18.64% and 13.45% respectively, at an inference time of 851 ms.

[161]
Title: Solving coupled problems of lumped parameter models in a platform for severe accidents in nuclear reactors
Subjects: Computational Engineering, Finance, and Science (cs.CE); Computational Physics (physics.comp-ph)

This paper focuses on solving coupled problems of lumped parameter models. Such problems are of interest for the simulation of severe accidents in nuclear reactors: these coarse-grained models allow fast calculations for the statistical analyses used in risk assessment, and solutions of large problems when considering the whole severe accident scenario. However, this modeling approach has several numerical flaws. Besides, in this industrial context, computational efficiency is of great importance, leading to various numerical constraints. The objective of this research is to analyze the applicability of explicit coupling strategies to solve such coupled problems and to design implicit coupling schemes allowing stable and accurate computations. The proposed schemes are theoretically analyzed and tested within CEA's PROCOR platform on a problem of heat conduction solved with coupled lumped parameter models and coupled 1D models. Numerical results are discussed and allow us to emphasize the benefits of using the designed coupling schemes instead of the usual explicit coupling schemes.

[162]
Title: A black market for upvotes and likes
Authors: Mihály Héder
Subjects: Computers and Society (cs.CY)

Purpose: This research investigates controversial online marketing techniques that involve buying hundreds or even thousands of upvotes, likes, comments, etc.
Methodology: Observation and categorization of 7,426 campaigns posted on the crowdsourcing platform microworkers.com over a 365-day (i.e., yearlong) period were conducted. Hypotheses about the mechanics and effectiveness of these campaigns were established and evaluated.
Findings: The campaigns contained a combined 1,856,316 microtasks, 89.7% of which were connected to online promotion. Techniques for search engine manipulation, comment generation at the scale of tens of thousands, online vote manipulation, mass account creation, and covering tracks were discovered. The article presents an assessment of the effectiveness of such campaigns as well as various security challenges created by these campaigns.
Research limitations: The observed campaigns form only a small portion of the overall "black market". This is due to invite-only campaigns and the presence of alternative, unobservable platforms.
Practical implications: The findings of this article could be input for detecting and avoiding such online campaigns.
Social implications: The findings show that in some conditions tremendous levels of manipulation of an online discourse can be achieved with a limited budget.
Originality: While there is related work on "follower factories" and "click/troll farms", those entities offer complete "solutions" and their techniques are rather opaque. By investigating a crowdsourcing platform, this research unveils the underlying mechanics and organization of such campaigns. The research is based on a uniquely large number of observations. Small, cheap campaigns and the manipulation of less significant platforms are also included, whereas the related work tends to focus on mass, politically motivated efforts.

[163]
Title: Factorised spatial representation learning: application in semi-supervised myocardial segmentation
Subjects: Computer Vision and Pattern Recognition (cs.CV)

The success and generalisation of deep learning algorithms heavily depend on learning good feature representations. In medical imaging this entails representing anatomical information, as well as properties related to the specific imaging setting. Anatomical information is required to perform further analysis, whereas imaging information is key to disentangle scanner variability and potential artefacts. The ability to factorise these would allow for training algorithms only on the relevant information according to the task. To date, such factorisation has not been attempted. In this paper, we propose a methodology of latent space factorisation relying on the cycle-consistency principle. As an example application, we consider cardiac MR segmentation, where we separate information related to the myocardium from other features related to imaging and surrounding substructures. We demonstrate the proposed method's utility in a semi-supervised setting: we use very few labelled images together with many unlabelled images to train a myocardium segmentation neural network. Compared to fully supervised networks we achieve comparable performance using a fraction of labelled images in experiments on ACDC and a private data set.

[164]
Title: Embedding graphs into two-dimensional simplicial complexes
Comments: Extended abstract to appear in Proceedings of the 34th International Symposium on Computational Geometry (SoCG 2018)
Subjects: Computational Geometry (cs.CG)

We consider the problem of deciding whether an input graph G admits a topological embedding into a two-dimensional simplicial complex C. This problem includes, among others, the embeddability problem of a graph on a surface and the topological crossing number of a graph, but is more general.
The problem is NP-complete when C is part of the input, and we give a polynomial-time algorithm if the complex C is fixed.
Our strategy is to reduce the problem to an embedding extension problem on a surface, which has the following form: Given a subgraph H' of a graph G', and an embedding of H' on a surface S, can that embedding be extended to an embedding of G' on S? Such problems can be solved, in turn, using a key component in Mohar's algorithm to decide the embeddability of a graph on a fixed surface (STOC 1996, SIAM J. Discr. Math. 1999).

[165]
Title: High Speed and Low Power Sensing Schemes for STT-MRAM with IPMTJs
Subjects: Emerging Technologies (cs.ET)

STT-MRAM with interfacial-anisotropy-type perpendicular MTJs (IPMTJs) is a powerful candidate for low-switching-energy STT-MRAM design. To our knowledge, the read operation of STT-MRAM structured with IPMTJs has not previously been studied in the literature. We investigate the read operation of STT-MRAM structured with IPMTJs. The read operations performed with the NVSenseAmp achieve a 2.5X reduction in average power and a 13X increase in average speed compared with previous works.

[166]
Title: Controlling Decoding for More Abstractive Summaries with Copy-Based Networks
Subjects: Computation and Language (cs.CL)

Attention-based neural abstractive summarization systems equipped with copy mechanisms have shown promising results. Despite this success, it has been noticed that such a system generates a summary by mostly, if not entirely, copying over phrases, sentences, and sometimes multiple consecutive sentences from an input paragraph, effectively performing extractive summarization. In this paper, we verify this behavior using the latest neural abstractive summarization system - a pointer-generator network. We propose a simple baseline method that allows us to control the amount of copying without retraining. Experiments indicate that the method provides a strong baseline for abstractive systems looking to obtain high ROUGE scores while minimizing overlap with the source article, substantially reducing the n-gram overlap with the original article while keeping within 2 points of the original model's ROUGE score.

[167]
Title: Time-Domain Multi-Beam Selection and Its Performance Improvement for mmWave Systems
Comments: Accepted by IEEE International Conference on Communications (ICC), Kansas City, MO, USA, May 2018
Subjects: Information Theory (cs.IT)

Multi-beam selection is one of the crucial technologies in hybrid beamforming systems for frequency-selective fading channels. Addressing the problem in the frequency domain facilitates the procedure of acquiring observations for analog beam selection. However, it is difficult to improve the quality of the contaminated observations at low SNR. To this end, this paper exploits the idea that the significant observations are sparse in the time domain to further enhance the quality of the signals as well as the beam selection performance. By exploiting properties of channel impulse responses and circular convolutions in the time domain, we can reduce the size of a Toeplitz matrix in deconvolution to generate periodic true values of coupling coefficients plus random noise signals. An arithmetic mean of these signals yields refined observations with minor noise effects and provides more accurate sparse multipath delay information. As a result, only the refined observations associated with the estimated multipath delay indices have to be taken into account for the analog beam selection problem.

[168]
Title: Simple random search provides a competitive approach to reinforcement learning
Comments: 22 pages, 5 figures, 9 tables
Subjects: Learning (cs.LG); Artificial Intelligence (cs.AI); Optimization and Control (math.OC); Machine Learning (stat.ML)

A common belief in model-free reinforcement learning is that methods based on random search in the parameter space of policies exhibit significantly worse sample complexity than those that explore the space of actions. We dispel such beliefs by introducing a random search method for training static, linear policies for continuous control problems, matching state-of-the-art sample efficiency on the benchmark MuJoCo locomotion tasks. Our method also finds a nearly optimal controller for a challenging instance of the Linear Quadratic Regulator, a classical problem in control theory, when the dynamics are not known. Computationally, our random search algorithm is at least 15 times more efficient than the fastest competing model-free methods on these benchmarks. We take advantage of this computational efficiency to evaluate the performance of our method over hundreds of random seeds and many different hyperparameter configurations for each benchmark task. Our simulations highlight a high variability in performance in these benchmark tasks, suggesting that commonly used estimations of sample efficiency do not adequately evaluate the performance of RL algorithms.
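The basic random search idea underlying the method can be sketched on a toy objective (a hypothetical stand-in for an episodic return; the step size, perturbation scale, and quadratic reward are illustrative assumptions, not the paper's MuJoCo setup):

```python
import numpy as np

def basic_random_search(reward, theta, alpha=0.1, nu=0.1, n_dirs=8, iters=500, seed=0):
    """Basic random search in parameter space: estimate an ascent direction
    from symmetric (antithetic) perturbations of the policy parameters."""
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        deltas = rng.standard_normal((n_dirs, theta.size))
        r_plus = np.array([reward(theta + nu * d) for d in deltas])
        r_minus = np.array([reward(theta - nu * d) for d in deltas])
        # Finite-difference-style update along the sampled directions.
        theta = theta + (alpha / n_dirs) * (r_plus - r_minus) @ deltas
    return theta

# Toy stand-in for an episodic return: maximized at theta_star.
theta_star = np.array([1.0, -2.0, 0.5])
reward = lambda th: -np.sum((th - theta_star) ** 2)
theta = basic_random_search(reward, np.zeros(3))
print(np.round(theta, 2))  # converges toward theta_star
```

No gradients of the reward are ever computed; only reward evaluations at perturbed parameters are needed, which is what makes the approach so cheap to parallelize over random seeds.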

[169]
Title: Learning Region Features for Object Detection
Subjects: Computer Vision and Pattern Recognition (cs.CV)

While most steps in the modern object detection methods are learnable, the region feature extraction step remains largely hand-crafted, featured by RoI pooling methods. This work proposes a general viewpoint that unifies existing region feature extraction methods and a novel method that is end-to-end learnable. The proposed method removes most heuristic choices and outperforms its RoI pooling counterparts. It moves further towards fully learnable object detection.

[170]
Title: Setting up a Reinforcement Learning Task with a Real-World Robot
Comments: Submitted to 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Subjects: Learning (cs.LG); Artificial Intelligence (cs.AI); Robotics (cs.RO); Machine Learning (stat.ML)

Reinforcement learning is a promising approach to developing hard-to-engineer adaptive solutions for complex and diverse robotic tasks. However, learning with real-world robots is often unreliable and difficult, which resulted in their low adoption in reinforcement learning research. This difficulty is worsened by the lack of guidelines for setting up learning tasks with robots. In this work, we develop a learning task with a UR5 robotic arm to bring to light some key elements of a task setup and study their contributions to the challenges with robots. We find that learning performance can be highly sensitive to the setup, and thus oversights and omissions in setup details can make effective learning, reproducibility, and fair comparison hard. Our study suggests some mitigating steps to help future experimenters avoid difficulties and pitfalls. We show that highly reliable and repeatable experiments can be performed in our setup, indicating the possibility of reinforcement learning research extensively based on real-world robots.

[171]
Title: D$^2$: Decentralized Training over Decentralized Data
Subjects: Distributed, Parallel, and Cluster Computing (cs.DC); Learning (cs.LG); Machine Learning (stat.ML)

While training a machine learning model using multiple workers, each of which collects data from its own data sources, it would be most useful when the data collected from different workers are unique and different. Ironically, recent analysis of decentralized parallel stochastic gradient descent (D-PSGD) relies on the assumption that the data hosted on different workers are not too different. In this paper, we ask the question: can we design a decentralized parallel stochastic gradient descent algorithm that is less sensitive to the data variance across workers? We present D$^2$, a novel decentralized parallel stochastic gradient descent algorithm designed for large data variance among workers (imprecisely, "decentralized" data). The core of D$^2$ is a variance reduction extension of the standard D-PSGD algorithm, which improves the convergence rate from $O\left({\sigma \over \sqrt{nT}} + {(n\zeta^2)^{\frac{1}{3}} \over T^{2/3}}\right)$ to $O\left({\sigma \over \sqrt{nT}}\right)$, where $\zeta^{2}$ denotes the variance among data on different workers. As a result, D$^2$ is robust to data variance among workers. We empirically evaluated D$^2$ on image classification tasks where each worker has access to only the data of a limited set of labels, and find that D$^2$ significantly outperforms D-PSGD.
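The baseline D-PSGD iterate that D$^2$ extends can be sketched on a toy problem (hypothetical quadratic local objectives and a ring topology, chosen for illustration; D$^2$'s variance-correction term is not shown):

```python
import numpy as np

# Minimal D-PSGD sketch: each worker gossip-averages with its ring
# neighbours, then takes a gradient step on its own local objective.
n, d, gamma, iters = 4, 2, 0.1, 2000
rng = np.random.default_rng(1)
C = rng.standard_normal((n, d))   # worker i holds f_i(x) = 0.5 * ||x - C[i]||^2
X = np.zeros((n, d))              # one parameter vector per worker

# Doubly stochastic ring mixing matrix: weight 1/2 on self, 1/4 per neighbour.
W = np.eye(n) / 2 + (np.roll(np.eye(n), 1, axis=0)
                     + np.roll(np.eye(n), -1, axis=0)) / 4

for _ in range(iters):
    grads = X - C                 # exact local gradients for the toy quadratics
    X = W @ X - gamma * grads     # mix with neighbours, then step locally

# The worker average converges to the minimiser of (1/n) * sum_i f_i,
# i.e. C.mean(axis=0); individual workers stay within O(gamma) of consensus,
# with a gap that grows with the heterogeneity zeta across workers.
print(np.abs(X.mean(axis=0) - C.mean(axis=0)).max())
```

The residual disagreement between workers in this sketch is exactly the data-heterogeneity effect (the $\zeta^2$ term) that D$^2$'s variance-reduction correction is designed to remove.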

### Cross-lists for Tue, 20 Mar 18

[172]  arXiv:1712.10317 (cross-list from astro-ph.IM) [pdf, other]
Title: Non-negative Matrix Factorization: Robust Extraction of Extended Structures
Comments: 22 pages, 1 table, 12 figures, ApJ published. Updated reference and figure, fixed typos
Subjects: Instrumentation and Methods for Astrophysics (astro-ph.IM); Earth and Planetary Astrophysics (astro-ph.EP); Learning (cs.LG)

We apply the vectorized Non-negative Matrix Factorization (NMF) method to post-processing of direct imaging data for exoplanetary systems such as circumstellar disks. NMF is an iterative approach, which first creates a non-orthogonal and non-negative basis of components using given reference images, then models a target with the components. The constructed model is then rescaled with a factor to compensate for the contribution from a disk. We compare NMF with existing methods (classical reference differential imaging method, and the Karhunen-Loève image projection algorithm) using synthetic circumstellar disks, and demonstrate the superiority of NMF: with no need for prior selection of references, NMF can detect fainter circumstellar disks, better preserve low order disk morphology, and does not require forward modeling. As an application to a well-known disk example, we process the archival Hubble Space Telescope (HST) STIS coronagraphic observations of HD 181327 with different methods and compare them. NMF is able to extract some circumstellar material inside the primary ring for the first time. In the appendix, we mathematically investigate the stability of NMF components during iteration, and the linearity of NMF modeling.
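The non-negative factorization step at the core of such pipelines can be sketched with the classic Lee-Seung multiplicative updates (a generic illustration under assumed sizes and iteration counts, not the authors' vectorized implementation):

```python
import numpy as np

def nmf(V, rank, iters=500, seed=0):
    """Lee-Seung multiplicative updates for V ~= W @ H with W, H >= 0.
    Multiplicative updates preserve non-negativity by construction."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 1e-3
    H = rng.random((rank, n)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # update basis components
    return W, H

# Exact low-rank non-negative data is fitted with a small residual.
rng = np.random.default_rng(3)
V = rng.random((30, 4)) @ rng.random((4, 40))
W, H = nmf(V, rank=4)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # small relative residual
```

Unlike PCA-style decompositions, the basis produced this way is non-orthogonal and non-negative, which is the property the abstract credits for better preserving disk morphology.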

[173]  arXiv:1803.06375 (cross-list from physics.soc-ph) [pdf, other]
Title: Mobile phone records to feed activity-based travel demand models: MATSim for studying a cordon toll policy in Barcelona
Subjects: Physics and Society (physics.soc-ph); Social and Information Networks (cs.SI)

Activity-based models appeared as an answer to the limitations of the traditional trip-based and tour-based four-stage models. The fundamental assumption of activity-based models is that travel demand originates from people performing their daily activities. This is why they include a consistent representation of time, of the persons and households, time-dependent routing, and microsimulation of travel demand and traffic. In spite of their potential to simulate traffic demand management policies, their practical application is still limited. One of the main reasons is that these models require a huge amount of very detailed input data that is hard to obtain with surveys. However, the pervasive use of mobile devices has brought a valuable new source of data. The work presented here has a twofold objective: first, to demonstrate the capability of mobile phone records to feed activity-based transport models, and, second, to assess the advantages of using activity-based models to estimate the effects of traffic demand management policies. Activity diaries for the metropolitan area of Barcelona are reconstructed from mobile phone records. This information is then employed as input for building a MATSim transport model of the city. The model calibration and validation process proves the quality of the activity diaries obtained. The possible impacts of a cordon toll policy applied to two different areas of the city and at different times of the day are then studied. Our results show the way in which the modal share is modified in each of the considered scenarios. The possibility of evaluating the effects of the policy at both the aggregated and the traveller level, together with the ability of the model to capture policy impacts beyond the cordon toll area, confirms the advantages of activity-based models for the evaluation of traffic demand management policies.

[174]  arXiv:1803.06442 (cross-list from cond-mat.dis-nn) [pdf, other]
Title: Replica Symmetry Breaking in Bipartite Spin Glasses and Neural Networks
Subjects: Disordered Systems and Neural Networks (cond-mat.dis-nn); Learning (cs.LG)

Some interesting recent advances in the theoretical understanding of neural networks have been informed by results from the physics of disordered many-body systems. Motivated by these findings, this work uses the replica technique to study the mathematically tractable bipartite Sherrington-Kirkpatrick (SK) spin glass model, which is formally similar to a Restricted Boltzmann Machine (RBM) neural network. The bipartite SK model has been previously studied assuming replica symmetry; here this assumption is relaxed and a replica symmetry breaking analysis is performed. The bipartite SK model is found to have many features in common with Parisi's solution of the original, unipartite SK model, including the existence of a multitude of pure states which are related in a hierarchical, ultrametric fashion. As an application of this analysis, the optimal cost for a graph partitioning problem is shown to be simply related to the ground state energy of the bipartite SK model. As a second application, empirical investigations reveal that the Gibbs sampled outputs of an RBM trained on the MNIST data set are more ultrametrically distributed than the input data itself.

[175]  arXiv:1803.06449 (cross-list from physics.chem-ph) [pdf, other]
Title: Note: Variational Encoding of Protein Dynamics Benefits from Maximizing Latent Autocorrelation
Subjects: Chemical Physics (physics.chem-ph); Learning (cs.LG); Biological Physics (physics.bio-ph); Machine Learning (stat.ML)

As deep Variational Auto-Encoder (VAE) frameworks become more widely used for modeling biomolecular simulation data, we emphasize the capability of the VAE architecture to concurrently maximize the timescale of the latent space while inferring a reduced coordinate, which assists in finding slow processes according to the variational approach to conformational dynamics. We additionally provide evidence that the VDE framework (Hernández et al., 2017), which uses this autocorrelation loss along with a time-lagged reconstruction loss, obtains a variationally optimized latent coordinate in comparison with related loss functions. We thus recommend leveraging the autocorrelation of the latent space while training neural network models of biomolecular simulation data to better represent slow processes.

[176]  arXiv:1803.06510 (cross-list from stat.ML) [pdf, other]
Title: Hidden Integrality of SDP Relaxation for Sub-Gaussian Mixture Models
Subjects: Machine Learning (stat.ML); Information Theory (cs.IT); Learning (cs.LG); Optimization and Control (math.OC); Statistics Theory (math.ST)

We consider the problem of estimating the discrete clustering structures under Sub-Gaussian Mixture Models. Our main results establish a hidden integrality property of a semidefinite programming (SDP) relaxation for this problem: while the optimal solutions to the SDP are not integer-valued in general, their estimation errors can be upper bounded in terms of the error of an idealized integer program. The error of the integer program, and hence that of the SDP, are further shown to decay exponentially in the signal-to-noise ratio. To the best of our knowledge, this is the first exponentially decaying error bound for convex relaxations of mixture models, and our results reveal the "global-to-local" mechanism that drives the performance of the SDP relaxation.
A corollary of our results shows that in certain regimes the SDP solutions are in fact integral and exact, improving on existing exact recovery results for convex relaxations. More generally, our results establish sufficient conditions for the SDP to correctly recover the cluster memberships of $(1-\delta)$ fraction of the points for any $\delta\in(0,1)$. As a special case, we show that under the $d$-dimensional Stochastic Ball Model, SDP achieves non-trivial (sometimes exact) recovery when the center separation is as small as $\sqrt{1/d}$, which complements previous exact recovery results that require constant separation.

[177]  arXiv:1803.06523 (cross-list from math.OC) [pdf, ps, other]
Title: Stochastic model-based minimization of weakly convex functions
Subjects: Optimization and Control (math.OC); Learning (cs.LG)

We consider an algorithm that successively samples and minimizes stochastic models of the objective function. We show that under weak-convexity and Lipschitz conditions, the algorithm drives the expected norm of the gradient of the Moreau envelope to zero at the rate $O(k^{-1/4})$. Our result yields new complexity guarantees for the stochastic proximal point algorithm on weakly convex problems and for the stochastic prox-linear algorithm for minimizing compositions of convex functions with smooth maps. Moreover, our result also recovers the recently obtained complexity estimate for the stochastic proximal subgradient method on weakly convex problems.
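For context, the stationarity measure referred to above is the standard Moreau envelope (a sketch of the usual definitions, not the paper's exact notation):

```latex
% Moreau envelope and proximal map of f with parameter \lambda > 0:
f_{\lambda}(x) \;=\; \min_{y}\Big\{ f(y) + \tfrac{1}{2\lambda}\,\|y - x\|^{2} \Big\},
\qquad
\mathrm{prox}_{\lambda f}(x) \;=\; \operatorname*{arg\,min}_{y}\Big\{ f(y) + \tfrac{1}{2\lambda}\,\|y - x\|^{2} \Big\}.
% For \rho-weakly convex f and \lambda < 1/\rho, the envelope is smooth with
% \nabla f_{\lambda}(x) = \lambda^{-1}\big(x - \mathrm{prox}_{\lambda f}(x)\big),
% so a small \|\nabla f_{\lambda}(x)\| certifies that x is close to a point
% that is nearly stationary for f, even when f itself is nonsmooth.
```

Driving the expected norm of this gradient to zero at rate $O(k^{-1/4})$ is therefore a meaningful near-stationarity guarantee for the nonsmooth, nonconvex problems considered.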

[178]  arXiv:1803.06591 (cross-list from physics.soc-ph) [pdf, ps, other]
Title: Recognizing number of communities and detecting community structures in complex networks
Subjects: Physics and Society (physics.soc-ph); Social and Information Networks (cs.SI)

Recognizing the number of communities and detecting community structures in complex networks are discussed in this paper. As a visual and feasible algorithm, the block model has been successfully applied to detect community structures in complex networks. In order to measure the quality of the block model, we first define an objective function, the WQ value. To obtain the block model B of a network, the GSA algorithm is applied to optimize WQ with the help of random keys. After executing the processes AO (Adding Ones) and RO (Removing Ones) on the block model B, the number of communities of a network can be recognized distinctly. Furthermore, exploiting the property of the block model that its node ordering corresponds to the ordering of communities, a new fuzzy boundary algorithm for detecting community structures is proposed and successfully applied to some representative networks. Finally, experimental results demonstrate the feasibility of the proposed algorithm.

[179]  arXiv:1803.06622 (cross-list from q-bio.NC) [pdf, other]
Title: Learning recurrent dynamics in spiking networks
Subjects: Neurons and Cognition (q-bio.NC); Artificial Intelligence (cs.AI); Learning (cs.LG); Neural and Evolutionary Computing (cs.NE)

The spiking activity of neurons engaged in learning and performing a task shows complex spatiotemporal dynamics. While recurrent network models can learn to perform various tasks, the possible range of recurrent dynamics that emerge after learning remains unknown. Here we show that modifying the recurrent connectivity with a recursive least squares algorithm provides sufficient flexibility for the synaptic and spiking rate dynamics of spiking networks to produce a wide range of spatiotemporal activity. We apply the training method to learn arbitrary firing patterns, stabilize the irregular spiking activity of a balanced network, and reproduce the heterogeneous spiking rate patterns of cortical neurons engaged in motor planning and movement. We identify sufficient conditions for successful learning, characterize two types of learning errors, and assess the network capacity. Our findings show that synaptically-coupled recurrent spiking networks possess a vast computational capability that can support the diverse activity patterns in the brain.

[180]  arXiv:1803.06636 (cross-list from math.CO) [pdf, ps, other]
Title: Complexity problems in enumerative combinatorics
Authors: Igor Pak
Comments: 30 pages; an expanded version of the ICM 2018 paper (Section 4 added, refs expanded)
Subjects: Combinatorics (math.CO); Computational Complexity (cs.CC); Discrete Mathematics (cs.DM); History and Overview (math.HO); Probability (math.PR)

We give a broad survey of recent results in Enumerative Combinatorics and their complexity aspects.

[181]  arXiv:1803.06638 (cross-list from physics.data-an) [pdf, ps, other]
Title: Adaptive prior probabilities via optimization of risk and entropy
Subjects: Data Analysis, Statistics and Probability (physics.data-an); Statistical Mechanics (cond-mat.stat-mech); Computer Science and Game Theory (cs.GT)

An agent choosing between various actions tends to take the one with the lowest loss. But this choice is arguably too rigid (not adaptive) to be useful in complex situations, e.g., where the exploration-exploitation trade-off is relevant or in creative task solving. Here we study an agent that -- given a certain average utility invested into adaptation -- chooses its actions via probabilities obtained by optimizing the entropy. As we argue, entropy minimization corresponds to a risk-averse agent, whereas a risk-seeking agent will maximize the entropy. Entropy minimization can (under certain conditions) recover the epsilon-greedy probabilities known in reinforcement learning. We show that entropy minimization -- in contrast to its maximization -- leads to rudimentary forms of intelligent behavior: (i) the agent accounts for extreme events, especially when it did not invest much into adaptation; (ii) it chooses the action with the lesser loss (the lesser of two evils) when confronted with two actions of comparable losses; (iii) the agent is subject to effects similar to cognitive dissonance and frustration. None of these features is shown by the risk-seeking agent, whose probabilities are given by the maximum entropy. Mathematically, the difference between entropy maximization and minimization corresponds to maximizing a convex function on a convex domain (convex programming) versus minimizing it (concave programming).
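The epsilon-greedy probabilities mentioned in this abstract are easy to state concretely. A minimal sketch, assuming a finite action set with known losses (the function name is ours, and this shows only the target distribution, not the entropy optimization itself):

```python
def epsilon_greedy_probs(losses, epsilon):
    """Classical epsilon-greedy action probabilities: every action gets
    a uniform epsilon/n share, and the lowest-loss action additionally
    receives the remaining 1 - epsilon probability mass."""
    n = len(losses)
    best = min(range(n), key=lambda i: losses[i])
    probs = [epsilon / n] * n
    probs[best] += 1 - epsilon
    return probs
```

The paper's claim is that entropy minimization recovers this distribution only under certain conditions; in general the entropy-optimal probabilities differ.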

[182]  arXiv:1803.06705 (cross-list from math.OC) [pdf, other]
Title: Hierarchical Predictive Control Algorithms for Optimal Design and Operation of Microgrids
Comments: To appear in "Power Systems Computation Conference", Dublin, Ireland
Subjects: Optimization and Control (math.OC); Systems and Control (cs.SY)

In recent years, microgrids, i.e., disconnected distribution systems, have received increasing interest from power system utilities to support the economic and resiliency posture of their systems. The economics of long distance transmission lines prevent many remote communities from connecting to bulk transmission systems and these communities rely on off-grid microgrid technology. Furthermore, communities that are connected to the bulk transmission system are investigating microgrid technologies that will support their ability to disconnect and operate independently during extreme events. In each of these cases, it is important to develop methodologies that support the capability to design and operate microgrids in the absence of transmission over long periods of time. Unfortunately, such planning problems tend to be computationally difficult to solve, and those that are straightforward to solve often lack the modeling fidelity that inspires confidence in the results. To address these issues, we first develop a high fidelity model for design and operations of a microgrid that includes component efficiencies, component operating limits, battery modeling, unit commitment, capacity expansion, and power flow physics; the resulting model is a mixed-integer quadratically-constrained quadratic program (MIQCQP). We then develop an iterative algorithm, referred to as the Model Predictive Control (MPC) algorithm, that allows us to solve the resulting MIQCQP. We show, through extensive computational experiments, that the MPC-based method can scale to problems that have a very long planning horizon and provide high quality solutions that lie within 5\% of optimal.

[183]  arXiv:1803.06706 (cross-list from math.CO) [pdf, ps, other]
Title: Descent distribution on Catalan words avoiding a pattern of length at most three
Subjects: Combinatorics (math.CO); Discrete Mathematics (cs.DM)

Catalan words are particular growth-restricted words over the set of non-negative integers, and they represent yet another combinatorial class counted by the Catalan numbers. We study the distribution of descents on the sets of Catalan words avoiding a pattern of length at most three: for each such pattern $p$ we provide a bivariate generating function where the coefficient of $x^ny^k$ in its series expansion is the number of length $n$ Catalan words with $k$ descents avoiding $p$. As a byproduct, we enumerate the set of Catalan words avoiding $p$, and we provide the popularity of descents on this set. Some of the obtained enumerating sequences are not yet recorded in the On-line Encyclopedia of Integer Sequences.

[184]  arXiv:1803.06710 (cross-list from math.CO) [pdf, other]
Title: Almost all string graphs are intersection graphs of plane convex sets
Comments: This is the full version of a paper appearing in the proceedings of SoCG 2018
Subjects: Combinatorics (math.CO); Computational Geometry (cs.CG)

A {\em string graph} is the intersection graph of a family of continuous arcs in the plane. The intersection graph of a family of plane convex sets is a string graph, but not all string graphs can be obtained in this way. We prove the following structure theorem conjectured by Janson and Uzzell: The vertex set of {\em almost all} string graphs on $n$ vertices can be partitioned into {\em five} cliques such that some pair of them is not connected by any edge ($n\rightarrow\infty$). We also show that every graph with the above property is an intersection graph of plane convex sets. As a corollary, we obtain that {\em almost all} string graphs on $n$ vertices are intersection graphs of plane convex sets.

[185]  arXiv:1803.06718 (cross-list from eess.AS) [pdf, other]
Title: Directional emphasis in ambisonics
Subjects: Audio and Speech Processing (eess.AS); Sound (cs.SD)

We describe an ambisonics enhancement method that increases the signal strength in specified directions at low computational cost. The method can be used in a static setup to emphasize the signal arriving from a particular direction or set of directions, much like a spotlight amplifies the visibility of objects. It can also be used in an adaptive arrangement where it sharpens directionality and reduces the distortion in timbre associated with low-degree ambisonics representations. The emphasis operator has very low computational complexity and can be applied to time-domain as well as time-frequency ambisonics representations. The operator maps a low-degree ambisonics representation into a higher degree representation.

[186]  arXiv:1803.06768 (cross-list from math.CO) [pdf, ps, other]
Title: Gallai's path decomposition conjecture for triangle-free planar graphs
Subjects: Combinatorics (math.CO); Discrete Mathematics (cs.DM)

A path decomposition of a graph $G$ is a collection of edge-disjoint paths of $G$ that covers the edge set of $G$. Gallai (1968) conjectured that every connected graph on $n$ vertices admits a path decomposition of cardinality at most $\lfloor (n+1)/2\rfloor$. Gallai's Conjecture has been verified for many classes of graphs. In particular, Lov\'asz (1968) verified this conjecture for graphs with at most one vertex with even degree, and Pyber (1996) verified it for graphs in which every cycle contains a vertex with odd degree. Recently, Bonamy and Perrett (2016) verified Gallai's Conjecture for graphs with maximum degree at most $5$, and Botler et al. (2017) verified it for graphs with treewidth at most $3$. In this paper, we verify Gallai's Conjecture for triangle-free planar graphs.

[187]  arXiv:1803.06775 (cross-list from quant-ph) [pdf, other]
Title: Comparing and Integrating Constraint Programming and Temporal Planning for Quantum Circuit Compilation
Comments: 9 pages, 2 figures, Proceedings of the 28th International Conference of Automated Planning and Scheduling 2018 (ICAPS-18)
Subjects: Quantum Physics (quant-ph); Artificial Intelligence (cs.AI); Emerging Technologies (cs.ET); Systems and Control (cs.SY)

Recently, the makespan-minimization problem of compiling a general class of quantum algorithms into near-term quantum processors has been introduced to the AI community. The research demonstrated that temporal planning is a strong approach for a class of quantum circuit compilation (QCC) problems. In this paper, we explore the use of constraint programming (CP) as an alternative and complementary approach to temporal planning. We extend previous work by introducing two new problem variations that incorporate important characteristics identified by the quantum computing community. We apply temporal planning and CP to the baseline and extended QCC problems as both stand-alone and hybrid approaches. Our hybrid methods use solutions found by temporal planning to warm start CP, leveraging the ability of the former to find satisficing solutions to problems with a high degree of task optionality, an area that CP typically struggles with. The CP model, benefiting from inferred bounds on planning horizon length and task counts provided by the warm start, is then used to find higher quality solutions. Our empirical evaluation indicates that while stand-alone CP is only competitive for the smallest problems, CP in our hybridization with temporal planning out-performs stand-alone temporal planning in the majority of problem classes.

[188]  arXiv:1803.06788 (cross-list from math.CO) [pdf, other]
Title: The Cohomology for Wu Characteristics
Authors: Oliver Knill
Subjects: Combinatorics (math.CO); Discrete Mathematics (cs.DM); Algebraic Topology (math.AT)

While the Euler characteristic X(G)=sum_x w(x) super counts simplices, the Wu characteristics w_k(G) = sum_(x_1,x_2,...,x_k) w(x_1)...w(x_k) super count simultaneously pairwise interacting k-tuples of simplices in a finite abstract simplicial complex G. More general is the k-intersection number w_k(G_1,...G_k), where x_i in G_i. We define an interaction cohomology H^p(G_1,...,G_k) compatible with w_k and invariant under barycentric subdivision. It allows one to distinguish spaces which simplicial cohomology cannot: for example, it can algebraically distinguish the Moebius strip from the cylinder. The cohomology satisfies the Kuenneth formula: the Poincare polynomials p_k(t) are ring homomorphisms from the strong ring to the ring of polynomials in t. The Dirac operator D=d+d^* defines the block diagonal Hodge Laplacian L=D^2 which leads to the generalized Hodge correspondence b_p(G)=dim(H^p_k(G)) = dim(ker(L_p)) and the Euler-Poincare formula w_k(G)=sum_p (-1)^p dim(H^p_k(G)) for Wu characteristic. Also, as for traditional simplicial cohomology, the isospectral Lax deformation D' = [B(D),D], with B(t)=d(t)-d^*(t)-ib(t), D(t)=d(t)+d(t)^* + b(t), can deform the exterior derivative d. The Brouwer-Lefschetz fixed point theorem generalizes to all Wu characteristics: given an endomorphism T of G, the super trace of its induced map on the k'th cohomology defines a Lefschetz number L_k(T). The Brouwer index i_T,k(x_1,...,x_k) = product_j=1^k w(x_j) sign(T|x_j), attached to a simplex tuple which is invariant under T, leads to the formula L_k(T) = sum_T(x)=x i_T,k(x). For T=Id, the Lefschetz number L_k(Id) is equal to the k'th Wu characteristic w_k(G) of the graph G, and the Lefschetz formula reduces to the Euler-Poincare formula for Wu characteristic.

[189]  arXiv:1803.06852 (cross-list from stat.ML) [pdf, ps, other]
Title: Confounder Detection in High Dimensional Linear Models using First Moments of Spectral Measures
Subjects: Machine Learning (stat.ML); Learning (cs.LG)

In this paper, we study the confounder detection problem in the linear model, where the target variable $Y$ is predicted using its $n$ potential causes $X_n=(x_1,...,x_n)^T$. Based on an assumption of a rotation invariant generating process of the model, a recent study shows that the spectral measure induced by the regression coefficient vector with respect to the covariance matrix of $X_n$ is close to a uniform measure in purely causal cases, but differs from a uniform measure characteristically in the presence of a scalar confounder. Analyzing the spectral measure pattern could thus help detect confounding. In this paper, we propose to use the first moment of the spectral measure for confounder detection. We calculate the first moment of the regression-vector-induced spectral measure, and compare it with the first moment of a uniform spectral measure, both defined with respect to the covariance matrix of $X_n$. The two moments coincide in non-confounding cases, and differ from each other in the presence of confounding. This statistical causal-confounding asymmetry can be used for confounder detection. By avoiding the need to analyze the spectral measure pattern, our method sidesteps the difficulty of metric choice and multiple-parameter optimization. Experiments on synthetic and real data show the performance of this method.

[190]  arXiv:1803.06887 (cross-list from math.FA) [pdf, other]
Title: Lossless Analog Compression
Subjects: Functional Analysis (math.FA); Information Theory (cs.IT)

We establish the fundamental limits of lossless analog compression by considering the recovery of arbitrary m-dimensional real random vectors x from the noiseless linear measurements y=Ax with n x m measurement matrix A. Our theory is inspired by the groundbreaking work of Wu and Verdu (2010) on almost lossless analog compression, but applies to the nonasymptotic, i.e., fixed-m case, and considers zero error probability. Specifically, our achievability result states that, for almost all A, the random vector x can be recovered with zero error probability provided that n > K(x), where the description complexity K(x) is given by the infimum of the lower modified Minkowski dimensions over all support sets U of x. We then particularize this achievability result to the class of s-rectifiable random vectors as introduced in Koliander et al. (2016); these are random vectors of absolutely continuous distribution---with respect to the s-dimensional Hausdorff measure---supported on countable unions of s-dimensional differentiable manifolds. Countable unions of differentiable manifolds include essentially all signal models used in compressed sensing theory, in spectrum-blind sampling, and in the matrix completion problem. Specifically, we prove that, for almost all A, s-rectifiable random vectors x can be recovered with zero error probability from n>s linear measurements. This threshold is, however, found not to be tight as exemplified by the construction of an s-rectifiable random vector that can be recovered with zero error probability from n<s linear measurements. This leads us to the introduction of the new class of s-analytic random vectors, which admit a strong converse in the sense of n greater than or equal to s being necessary for recovery with probability of error smaller than one. The central conceptual tool in the development of our theory is geometric measure theory.

[191]  arXiv:1803.06914 (cross-list from math.CO) [pdf, ps, other]
Title: Mixing Time of Markov chain of the Knapsack Problem
Subjects: Combinatorics (math.CO); Data Structures and Algorithms (cs.DS); Probability (math.PR)

To find the number of assignments of zeros and ones satisfying a specific Knapsack Problem is $\#P$-hard, so only approximations are feasible. A Markov chain allowing uniform sampling of all possible solutions was given by Luby, Randall, and Sinclair. In 2005, Morris and Sinclair, using a flow argument, showed that the mixing time of this Markov chain is $\mathcal{O}(n^{9/2+\epsilon})$ for any $\epsilon > 0$. By using a canonical path argument on the distributive lattice structure of the set of solutions, we obtain an improved bound: the mixing time satisfies $\tau_{_{x}}(\epsilon) \leq n^{3} \ln (16 \epsilon^{-1})$.
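For readers unfamiliar with the chain analyzed here, the Luby-Randall-Sinclair dynamics can be sketched as a lazy single-item walk over feasible 0/1 assignments. This is an illustrative sketch only; details such as the laziness constant and function names are our own:

```python
import random

def chain_step(state, weights, capacity, rng):
    """One step of a lazy single-item Markov chain on knapsack
    solutions: with probability 1/2 stay put (laziness ensures
    aperiodicity); otherwise pick an item uniformly at random and
    toggle its membership if the total weight stays within capacity."""
    if rng.random() < 0.5:
        return state
    i = rng.randrange(len(weights))
    proposal = list(state)
    proposal[i] = 1 - proposal[i]
    if sum(w for w, x in zip(weights, proposal) if x) <= capacity:
        return proposal
    return state  # infeasible move rejected

def sample_solution(weights, capacity, steps, seed=0):
    """Run the chain from the empty assignment; after enough steps the
    state is approximately uniform over all feasible assignments."""
    rng = random.Random(seed)
    state = [0] * len(weights)
    for _ in range(steps):
        state = chain_step(state, weights, capacity, rng)
    return state
```

The mixing-time bounds quoted above concern how large `steps` must be for the chain's distribution to come within total-variation distance $\epsilon$ of uniform.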

[192]  arXiv:1803.06915 (cross-list from math.CO) [pdf, other]
Title: Exploiting symmetry in network analysis
Comments: Main Text (7 pages) plus Supplementary Information (24 pages)
Subjects: Combinatorics (math.CO); Social and Information Networks (cs.SI); Data Analysis, Statistics and Probability (physics.data-an); Physics and Society (physics.soc-ph)

Virtually all network analyses involve structural measures or metrics between pairs of vertices, or of the vertices themselves. The large amount of redundancy present in real-world networks is inherited by such measures, and this has practical consequences which have not yet been explored in full generality, nor systematically exploited by network practitioners. Here we develop a complete framework to study and quantify the effect of redundancy on arbitrary network measures, and explain how to exploit redundancy in practice, achieving, for instance, remarkable lossless compression and computational reduction ratios in several real-world networks against some popular measures.

[193]  arXiv:1803.06916 (cross-list from physics.soc-ph) [pdf]
Title: Simulating the future urban growth in Xiongan New Area: an upcoming big city in China
Authors: Xun Liang
Subjects: Physics and Society (physics.soc-ph); Artificial Intelligence (cs.AI); Computers and Society (cs.CY)

China announced the creation of the Xiongan New Area in Hebei on April 1, 2017. Thus a new megacity about 110 km southwest of Beijing will emerge. The Xiongan New Area is of great practical and historical significance for transferring Beijing's non-capital functions. Simulating the urban dynamics in the Xiongan New Area can help planners decide where to build the new urban areas and manage future urban growth. However, little research has focused on the future urban development of the Xiongan New Area. In addition, previous models are unable to simulate its urban dynamics, because there is no original high-density urban land from which these models can learn transition rules. In this study, we propose a C-FLUS model to solve these problems. The framework is implemented by coupling a modified Cellular Automata (CA) model. An elaborately designed random planted-seeds mechanism based on local maxima is embedded in the CA model to better simulate the emergence of new urban areas. Through an analysis of the current driving forces, C-FLUS can detect the potential starting zone and simulate urban development under different scenarios in the Xiongan New Area. Our study shows that new urban growth is most likely to occur in the northwest of Xiongxian, and that it will rapidly extend to Rongcheng and Anxin until it covers almost the entire northern part of the Xiongan New Area. Moreover, the method can help planners evaluate the impact of urban expansion in the Xiongan New Area.

[194]  arXiv:1803.06928 (cross-list from math.OC) [pdf]
Title: Optimization Based Solutions for Control and State Estimation in Non-holonomic Mobile Robots: Stability, Distributed Control, and Relative Localization
Comments: This a preprint of Mohamed Said's PhD thesis
Subjects: Optimization and Control (math.OC); Robotics (cs.RO)

Interest in designing, manufacturing, and using autonomous robots has been rapidly growing during the most recent decade. The main motivation for this interest is the wide range of potential applications these autonomous systems can serve in. The applications include, but are not limited to, area coverage, patrolling missions, perimeter surveillance, search and rescue missions, and situational awareness. In this thesis, the area of control and state estimation in non-holonomic mobile robots is tackled. Herein, optimization based solutions for control and state estimation are designed, analyzed, and implemented to such systems. One of the main motivations for considering such solutions is their ability of handling constrained and nonlinear systems such as non-holonomic mobile robots. Moreover, the recent developments in dynamic optimization algorithms as well as in computer processing facilitated the real-time implementation of such optimization based methods in embedded computer systems.

[195]  arXiv:1803.06937 (cross-list from cond-mat.dis-nn) [pdf, ps, other]
Title: Pearson's correlation coefficient in the theory of networks: A comment
Subjects: Disordered Systems and Neural Networks (cond-mat.dis-nn); Social and Information Networks (cs.SI)

In statistics, the Pearson correlation coefficient $r_{x,y}$ determines the degree of linear correlation between two variables, and it is known that $-1 \le r_{x,y} \le 1$. In the theory of networks, an interesting expression proposed in [PRL {\bf 89} 208701 (2002)] for the degree-degree correlation coefficient $r_{j,k}$ has been in use. We prove that the suggested expression is incorrect in general and that one should rather use the conventional form.
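The conventional form advocated here is just the standard Pearson coefficient applied to the degrees at the two ends of every edge. A minimal sketch for an undirected network, with each edge counted in both orientations so the measure is symmetric (function names are ours):

```python
from math import sqrt

def degree_correlation(edges):
    """Degree-degree correlation as a plain Pearson coefficient over
    the degree pairs at the endpoints of every edge."""
    # Count degrees from the edge list.
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    # Collect endpoint degree pairs in both orientations.
    xs, ys = [], []
    for u, v in edges:
        xs += [deg[u], deg[v]]
        ys += [deg[v], deg[u]]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)
```

Being a genuine Pearson coefficient, this value is guaranteed to lie in $[-1, 1]$; e.g., a star graph (hub connected to leaves only) gives exactly $-1$.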

[196]  arXiv:1803.06959 (cross-list from stat.ML) [pdf, other]
Title: On the importance of single directions for generalization
Subjects: Machine Learning (stat.ML); Learning (cs.LG); Neural and Evolutionary Computing (cs.NE)

Despite their ability to memorize large datasets, deep neural networks often achieve good generalization performance. However, the differences between the learned solutions of networks which generalize and those which do not remain unclear. Additionally, the tuning properties of single directions (defined as the activation of a single unit or some linear combination of units in response to some input) have been highlighted, but their importance has not been evaluated. Here, we connect these lines of inquiry to demonstrate that a network's reliance on single directions is a good predictor of its generalization performance, across networks trained on datasets with different fractions of corrupted labels, across ensembles of networks trained on datasets with unmodified labels, across different hyperparameters, and over the course of training. While dropout only regularizes this quantity up to a point, batch normalization implicitly discourages single direction reliance, in part by decreasing the class selectivity of individual units. Finally, we find that class selectivity is a poor predictor of task importance, suggesting not only that networks which generalize well minimize their dependence on individual units by reducing their selectivity, but also that individually selective units may not be necessary for strong network performance.

[197]  arXiv:1803.06969 (cross-list from stat.ML) [pdf, ps, other]
Title: Comparing Dynamics: Deep Neural Networks versus Glassy Systems
Subjects: Machine Learning (stat.ML); Disordered Systems and Neural Networks (cond-mat.dis-nn); Learning (cs.LG)

We analyze numerically the training dynamics of deep neural networks (DNN) by using methods developed in statistical physics of glassy systems. The two main issues we address are the complexity of the loss-landscape and of the dynamics within it, and to what extent DNNs share similarities with glassy systems. Our findings, obtained for different architectures and datasets, suggest that during the training process the dynamics slows down because of an increasingly large number of flat directions. At large times, when the loss is approaching zero, the system diffuses at the bottom of the landscape. Despite some similarities with the dynamics of mean-field glassy systems, in particular, the absence of barrier crossing, we find distinctive dynamical behaviors in the two cases, showing that the statistical properties of the corresponding loss and energy landscapes are different. In contrast, when the network is under-parametrized we observe a typical glassy behavior, thus suggesting the existence of different phases depending on whether the network is under-parametrized or over-parametrized.

[198]  arXiv:1803.06971 (cross-list from stat.ML) [pdf, other]
Title: What Doubling Tricks Can and Can't Do for Multi-Armed Bandits
Authors: Lilian Besson (IETR), Emilie Kaufmann (SEQUEL, CNRS)
Subjects: Machine Learning (stat.ML); Learning (cs.LG); Statistics Theory (math.ST)

An online reinforcement learning algorithm is anytime if it does not need to know in advance the horizon T of the experiment. A well-known technique to obtain an anytime algorithm from any non-anytime algorithm is the "Doubling Trick". In the context of adversarial or stochastic multi-armed bandits, the performance of an algorithm is measured by its regret, and we study two families of sequences of growing horizons (geometric and exponential) to generalize previously known results that certain doubling tricks can be used to conserve certain regret bounds. In a broad setting, we prove that a geometric doubling trick can be used to conserve (minimax) bounds in $R_T = O(\sqrt{T})$ but cannot conserve (distribution-dependent) bounds in $R_T = O(\log T)$. We give insights as to why exponential doubling tricks may be better, as they conserve bounds in $R_T = O(\log T)$, and are close to conserving bounds in $R_T = O(\sqrt{T})$.
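The geometric doubling trick can be sketched as a generic wrapper: restart a horizon-tuned algorithm from scratch on phases of geometrically growing length. The callback names `make_algo` and `play_round` are our own illustrative interface, not the paper's:

```python
def doubling_trick(make_algo, play_round, total_T, T0=2, factor=2):
    """Geometric doubling trick: run a horizon-dependent bandit
    algorithm on horizons T0, T0*factor, T0*factor^2, ..., restarting
    from scratch at the start of each phase, until total_T rounds
    have been played.

    make_algo(T)     -> fresh algorithm instance tuned for horizon T
    play_round(algo) -> reward obtained from one round played by algo
    """
    rewards = []
    phase_T = T0
    while len(rewards) < total_T:
        algo = make_algo(phase_T)  # restart: no statistics carried over
        for _ in range(phase_T):
            if len(rewards) >= total_T:
                break
            rewards.append(play_round(algo))
        phase_T *= factor          # geometric horizon growth
    return rewards
```

The exponential variant discussed in the abstract differs only in how the phase lengths grow (e.g., $T_i = b^{a^i}$ instead of $T_i = T_0 \cdot b^i$).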

[199]  arXiv:1803.06982 (cross-list from quant-ph) [pdf, other]
Title: Quantifying coherence with quantum addition
Subjects: Quantum Physics (quant-ph); Information Theory (cs.IT)

Quantum addition channels have been recently introduced in the context of deriving entropic power inequalities for finite dimensional quantum systems. We prove a reverse entropy power equality which can be used to analytically prove an inequality conjectured recently for arbitrary dimension and arbitrary addition weight. We show that the relative entropic difference between the output of such a quantum addition channel and the corresponding classical mixture quantitatively captures the amount of coherence present in a quantum system. This new coherence measure admits an upper bound in terms of the relative entropy of coherence and is utilized to formulate a state-dependent uncertainty relation for two observables. Our results may provide deep insights into the origin of quantum coherence for mixed states that truly comes from the discrepancy between quantum addition and the classical mixture.

[200]  arXiv:1803.06989 (cross-list from math.ST) [pdf, other]
Title: Numerical Integration on Graphs: where to sample and how to weigh
Subjects: Statistics Theory (math.ST); Learning (cs.LG); Numerical Analysis (math.NA); Machine Learning (stat.ML)

Let $G=(V,E,w)$ be a finite, connected graph with weighted edges. We are interested in the problem of finding a subset $W \subset V$ of vertices and weights $a_w$ such that $$\frac{1}{|V|}\sum_{v \in V}^{}{f(v)} \sim \sum_{w \in W}{a_w f(w)}$$ for functions $f:V \rightarrow \mathbb{R}$ that are `smooth' with respect to the geometry of the graph. The main applications are problems where $f$ is known to somehow depend on the underlying graph but is expensive to evaluate on even a single vertex. We prove an inequality showing that the integration problem can be rewritten as a geometric problem (`the optimal packing of heat balls'). We discuss how one would construct approximate solutions of the heat ball packing problem; numerical examples demonstrate the efficiency of the method.

[201]  arXiv:1803.06992 (cross-list from stat.ML) [pdf]
Title: Estimating the intrinsic dimension of datasets by a minimal neighborhood information
Subjects: Machine Learning (stat.ML); Learning (cs.LG)

Analyzing large volumes of high-dimensional data is an issue of fundamental importance in data science, molecular simulations, and beyond. Several approaches work on the assumption that the important content of a dataset belongs to a manifold whose Intrinsic Dimension (ID) is much lower than the crude large number of coordinates. Such a manifold is generally twisted and curved; in addition, points on it are non-uniformly distributed: two factors that make the identification of the ID and its exploitation really hard. Here we propose a new ID estimator using only the distances to the first and second nearest neighbors of each point in the sample. This extreme minimality enables us to reduce the effects of curvature and density variation, as well as the resulting computational cost. The ID estimator is theoretically exact for uniformly distributed datasets, and provides consistent measures in general. When used in combination with block analysis, it allows discriminating the relevant dimensions as a function of the block size. This allows estimating the ID even when the data lie on a manifold perturbed by high-dimensional noise, a situation often encountered in real-world data sets. We demonstrate the usefulness of the approach on molecular simulations and image analysis.
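To illustrate the idea behind a two-nearest-neighbor ID estimate: under local uniformity, the ratio of the second- to first-neighbor distance, $\mu_i = r_2/r_1$, is Pareto-distributed with shape parameter equal to the ID, which yields a simple maximum-likelihood estimate. This is a sketch of that principle only, not necessarily the paper's exact estimator:

```python
import math

def two_nn_id(points):
    """Two-nearest-neighbour intrinsic-dimension sketch: if mu_i = r2/r1
    is Pareto(d)-distributed, the maximum-likelihood estimate of the
    dimension d is N / sum_i ln(mu_i)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    log_mu_sum = 0.0
    for i, p in enumerate(points):
        # Distances from p to all other points; keep the two smallest.
        ds = sorted(dist(p, q) for j, q in enumerate(points) if j != i)
        r1, r2 = ds[0], ds[1]
        log_mu_sum += math.log(r2 / r1)
    return len(points) / log_mu_sum
```

On points drawn uniformly from a 2-D square embedded in 3-D space, this returns a value close to 2, even though the ambient dimension is 3. The brute-force neighbor search here is O(N^2); a real implementation would use a spatial index.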

[202]  arXiv:1803.07000 (cross-list from physics.soc-ph) [pdf, other]
Title: Generalized Rich-Club Ordering in Networks
Authors: Matteo Cinelli
Subjects: Physics and Society (physics.soc-ph); Social and Information Networks (cs.SI)

Rich-club ordering refers to the tendency of nodes with a high degree to be more interconnected than expected. In this paper we generalize the concept of rich-club ordering to structural measures other than the node degree and to non-structural measures (i.e., to node metadata). The differences in considering rich-club ordering (RCO) with respect to structural and non-structural measures are then discussed in terms of the employed coefficients and of appropriate null models (link rewiring vs. metadata reshuffling). Having defined a framework for the evaluation of generalized rich-club ordering (GRCO), we investigate this phenomenon in real networks provided with node metadata. By considering different notions of node richness, we compare structural and non-structural rich-club ordering, observing how external information about the network nodes can validate the presence of elites in networked systems.

[203]  arXiv:1803.07033 (cross-list from math.OC) [pdf, other]
Title: Natural gradient via optimal transport I
Subjects: Optimization and Control (math.OC); Information Theory (cs.IT)

We study a natural Wasserstein gradient flow on manifolds of probability distributions with discrete sample spaces. We derive the Riemannian structure for the probability simplex from the dynamical formulation of the Wasserstein distance on a weighted graph. We pull back the geometric structure to the parameter space of any given probability model, which allows us to define a natural gradient flow there. In contrast to the natural Fisher-Rao gradient, the natural Wasserstein gradient incorporates a ground metric on sample space. We discuss implementations following the forward and backward Euler methods. We illustrate the analysis on elementary exponential family examples.

[204]  arXiv:1803.07043 (cross-list from math.OC) [pdf, other]
Title: Projective Splitting with Forward Steps: Asynchronous and Block-Iterative Operator Splitting
Subjects: Optimization and Control (math.OC); Learning (cs.LG)

This work is concerned with the classical problem of finding a zero of a sum of maximal monotone operators. For the projective splitting framework recently proposed by Combettes and Eckstein, we show how to replace the fundamental subproblem calculation using a backward step with one based on two forward steps. The resulting algorithms have the same kind of coordination procedure and can be implemented in the same block-iterative and potentially distributed and asynchronous manner, but may perform backward steps on some operators and forward steps on others. Prior algorithms in the projective splitting family have used only backward steps. Forward steps can be used for any Lipschitz-continuous operators provided the stepsize is bounded by the inverse of the Lipschitz constant. If the Lipschitz constant is unknown, a simple backtracking linesearch procedure may be used. For affine operators, the stepsize can be chosen adaptively without knowledge of the Lipschitz constant and without any additional forward steps. We close the paper by empirically studying the performance of several kinds of splitting algorithms on the lasso problem.

[205]  arXiv:1803.07065 (cross-list from physics.app-ph) [pdf]
Title: Sensorless Resonance Tracking of Resonant Electromagnetic Actuator through Back-EMF Estimation for Mobile Devices
Authors: Youngjun Cho
Subjects: Applied Physics (physics.app-ph); Systems and Control (cs.SY); Signal Processing (eess.SP); Dynamical Systems (math.DS)

Resonant electromagnetic actuators have been widely used as vibration motors in mobile devices, owing to their ability to generate relatively fast, strong, and controllable vibration force at a given resonant frequency. However, because these actuators rely on mechanical resonance, their use is limited to situations where the resonant frequency is known and unshifted. In reality, many factors can alter the resonant frequency: for example, manufacturing tolerances, worn mechanical components such as a spring, and nonlinearity associated with different input voltage levels. Here, we describe a sensorless resonance tracking method that actuates the motor and automatically detects its unknown damped natural frequency by estimating the back electromotive force (EMF) and inner mass movements. We demonstrate the tracking performance of the proposed method through a series of experiments. This approach has the potential to control residual vibrations and thereby improve vibrotactile feedback, with possible applications in human-computer interaction and in cognitive and affective neuroscience research.
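The back-EMF estimation underlying the method can be sketched with the textbook coil model, in which the back EMF is the drive voltage minus the resistive and inductive drops. This is a generic voice-coil relation, not the paper's estimator, and the coil parameters below are hypothetical.

```python
# Textbook back-EMF relation for a voice-coil-type actuator (a generic
# sketch, not the paper's estimator): e = v - R*i - L_c * di/dt, where v
# is the applied voltage, i the coil current, R the coil resistance, and
# L_c the coil inductance. di/dt is approximated by a finite difference.
R, L_c, dt = 8.0, 1e-4, 1e-5        # hypothetical coil parameters and sample period

def back_emf(v, i_now, i_prev):
    """Estimate back EMF from one voltage sample and two current samples."""
    di_dt = (i_now - i_prev) / dt
    return v - R * i_now - L_c * di_dt

# sanity check: with zero current the estimate reduces to the applied voltage
assert back_emf(1.0, 0.0, 0.0) == 1.0
```

Since the back EMF is proportional to the inner mass velocity, tracking its oscillation frequency gives the damped natural frequency without a dedicated motion sensor, which is the core idea the abstract describes.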

### Replacements for Tue, 20 Mar 18

[206]  arXiv:1307.2783 (replaced) [pdf, ps, other]
Title: Coping with Unreliable Workers in Internet-based Computing: An Evaluation of Reputation Mechanisms
Subjects: Distributed, Parallel, and Cluster Computing (cs.DC); Computer Science and Game Theory (cs.GT)
[207]  arXiv:1408.1025 (replaced) [pdf, ps, other]
Title: Stable Throughput Region of Cognitive-Relay Networks with Imperfect Sensing and Finite Relaying Buffer
Authors: Ahmed M. Alaa
Subjects: Networking and Internet Architecture (cs.NI)
[208]  arXiv:1408.6771 (replaced) [pdf, other]
Title: Flat Foldings of Plane Graphs with Prescribed Angles and Edge Lengths
Journal-ref: J. Computational Geometry 9 (1): 71-91, 2018
Subjects: Computational Geometry (cs.CG); Data Structures and Algorithms (cs.DS)
[209]  arXiv:1505.03898 (replaced) [pdf, ps, other]
Title: Pinball Loss Minimization for One-bit Compressive Sensing: Convex Models and Algorithms
Subjects: Information Theory (cs.IT); Numerical Analysis (math.NA); Optimization and Control (math.OC); Machine Learning (stat.ML)
[210]  arXiv:1508.06269 (replaced) [pdf, other]
Title: A systematic process for evaluating structured perfect Bayesian equilibria in dynamic games with asymmetric information
Comments: 36 pages, 3 figures, IEEE Transactions on Automatic Control, vol. PP, no. 99, pp. 1-1, 2018
Subjects: Optimization and Control (math.OC); Computer Science and Game Theory (cs.GT); Systems and Control (cs.SY)
[211]  arXiv:1512.04234 (replaced) [pdf, ps, other]
Title: Two applications of the spectrum of numbers
Subjects: Number Theory (math.NT); Formal Languages and Automata Theory (cs.FL)
[212]  arXiv:1601.05661 (replaced) [pdf, other]
Title: Distortion Bounds for Source Broadcast Problems
Comments: Revision after the first round of review
Subjects: Information Theory (cs.IT)
[213]  arXiv:1601.06311 (replaced) [pdf, other]
Title: Incremental Maintenance of Maximal Cliques in a Dynamic Graph
Subjects: Data Structures and Algorithms (cs.DS); Databases (cs.DB)
[214]  arXiv:1604.05255 (replaced) [pdf, ps, other]
Title: Cascading Denial of Service Attacks on Wi-Fi Networks
Authors: Liangxiao Xin (1), David Starobinski (1), Guevara Noubir (2) ((1) Boston University, (2) Northeastern University)
Subjects: Networking and Internet Architecture (cs.NI)
[215]  arXiv:1606.07557 (replaced) [pdf, other]
Title: Dynamic Witnesses for Static Type Errors (or, Ill-Typed Programs Usually Go Wrong)
Subjects: Programming Languages (cs.PL)
[216]  arXiv:1606.09494 (replaced) [pdf, other]
Title: Matched Metrics to the Binary Asymmetric Channels
Subjects: Information Theory (cs.IT)
[217]  arXiv:1607.02970 (replaced) [pdf, other]
Title: The sequential functionals of type $(ι\rightarrow ι)^n \rightarrow ι$ form a dcpo for all $n \in \Bbb N$
Authors: Dag Normann
Subjects: Logic in Computer Science (cs.LO); Logic (math.LO)
[218]  arXiv:1609.03590 (replaced) [pdf, other]
Title: Survey and Taxonomy of Self-Aware and Self-Adaptive Autoscaling Systems in the Cloud
Comments: An extended and more comprehensive version of this paper has been accepted by ACM Computing Surveys (CSUR), please use the following citation information: T. Chen, R. Bahsoon and X. Yao, "A Survey and Taxonomy of Self-Aware and Self-Adaptive Cloud Autoscaling Systems," ACM Computing Surveys, vol. 51, no. 3, 2019
Journal-ref: ACM Computing Surveys, vol. 51, no. 3, 2019
Subjects: Software Engineering (cs.SE); Distributed, Parallel, and Cluster Computing (cs.DC)
[219]  arXiv:1609.04608 (replaced) [pdf, other]
Title: Recursive nearest agglomeration (ReNA): fast clustering for approximation of structured signals
Authors: Andrés Hoyos-Idrobo (PARIETAL, NEUROSPIN), Gaël Varoquaux (PARIETAL, NEUROSPIN), Jonas Kahn, Bertrand Thirion (PARIETAL)
Comments: IEEE Transactions on Pattern Analysis and Machine Intelligence, Institute of Electrical and Electronics Engineers, In press
Subjects: Machine Learning (stat.ML); Learning (cs.LG)
[220]  arXiv:1611.10268 (replaced) [pdf, other]
Title: On Equivalence of Binary Asymmetric Channels regarding the Maximum Likelihood Decoding
Comments: The presentation was improved with respect to the previous version. New examples, applications and references were included
Subjects: Information Theory (cs.IT)
[221]  arXiv:1701.03773 (replaced) [pdf, other]
Title: Model Theory and Proof Theory of Coalgebraic Predicate Logic
Subjects: Logic in Computer Science (cs.LO)
[222]  arXiv:1701.08006 (replaced) [pdf, other]
Title: Quasi-homography warps in image stitching
Subjects: Computer Vision and Pattern Recognition (cs.CV)
[223]  arXiv:1701.08524 (replaced) [pdf, other]
Title: An $ω$-Algebra for Real-Time Energy Problems
Subjects: Logic in Computer Science (cs.LO); Formal Languages and Automata Theory (cs.FL)
[224]  arXiv:1702.00131 (replaced) [pdf, ps, other]
Title: Optimal Caching in Content-Centric Mobile Hybrid Networks: A Variable Decoupling Approach
Comments: 19 pages, 7 figures, Part of this paper was presented at the IEEE International Symposium on Information Theory, Barcelona, Spain, July 2016
Subjects: Information Theory (cs.IT); Networking and Internet Architecture (cs.NI)
[225]  arXiv:1702.03062 (replaced) [pdf, other]
Title: Sparsity/Undersampling Tradeoffs in Anisotropic Undersampling, with Applications in MR Imaging/Spectroscopy
Subjects: Information Theory (cs.IT)
[226]  arXiv:1702.07619 (replaced) [pdf, other]
Title: Fast and robust curve skeletonization for real-world elongated objects
Comments: 47 pages; IEEE WACV 2018, main paper and supplementary material
Subjects: Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR)
[227]  arXiv:1702.08501 (replaced) [pdf, other]
Title: Formal Synthesis of Control Strategies for Positive Monotone Systems
Comments: To appear in IEEE Transactions on Automatic Control (TAC) (2018), 16 pages, double column
Subjects: Systems and Control (cs.SY); Optimization and Control (math.OC)
[228]  arXiv:1703.02382 (replaced) [pdf, other]
Title: Assessing the Privacy Cost in Centralized Event-Based Demand Response for Microgrids
Subjects: Systems and Control (cs.SY); Cryptography and Security (cs.CR); Optimization and Control (math.OC)
[229]  arXiv:1704.03132 (replaced) [pdf, other]
Title: Odd Yao-Yao Graphs are Not Spanners
Subjects: Computational Geometry (cs.CG)
[230]  arXiv:1704.04299 (replaced) [pdf, other]
Title: An Empirical Analysis of Traceability in the Monero Blockchain
Subjects: Cryptography and Security (cs.CR)
[231]  arXiv:1705.00429 (replaced) [src]
Title: Polynomial-Time Algorithms for Sliding Tokens on Cactus Graphs and Block Graphs
Comments: The algorithm for block graphs in this manuscript contains some flaws. More precisely, Proposition 20 is not correct. Therefore, we withdraw this manuscript
Subjects: Discrete Mathematics (cs.DM); Combinatorics (math.CO)
[232]  arXiv:1705.07222 (replaced) [pdf, other]
Title: Quadruplet Network with One-Shot Learning for Fast Visual Object Tracking
Subjects: Computer Vision and Pattern Recognition (cs.CV)
[233]  arXiv:1705.08210 (replaced) [pdf, other]
Title: Parallel Accelerated Vector Similarity Calculations for Genomics Applications
Subjects: Distributed, Parallel, and Cluster Computing (cs.DC); Data Structures and Algorithms (cs.DS); Performance (cs.PF)
[234]  arXiv:1705.08213 (replaced) [pdf, other]
Title: Parallel Accelerated Custom Correlation Coefficient Calculations for Genomics Applications
Subjects: Distributed, Parallel, and Cluster Computing (cs.DC); Data Structures and Algorithms (cs.DS); Performance (cs.PF)
[235]  arXiv:1705.08504 (replaced) [pdf, other]
Title: Interpreting Blackbox Models via Model Extraction
Subjects: Learning (cs.LG)
[236]  arXiv:1706.02499 (replaced) [pdf, other]
Title: SliceType: Fast Gaze Typing with a Merging Keyboard
Subjects: Human-Computer Interaction (cs.HC)
[237]  arXiv:1706.03459 (replaced) [pdf, other]
Title: Optimal Auctions through Deep Learning
Subjects: Computer Science and Game Theory (cs.GT); Artificial Intelligence (cs.AI); Learning (cs.LG)
[238]  arXiv:1706.04629 (replaced) [pdf, other]
Title: Verification Studies for the Noh Problem using Non-ideal Equations of State and Finite Strength Shocks
Comments: 14 pages, 7 figures, 19 images
Subjects: Fluid Dynamics (physics.flu-dyn); Computational Engineering, Finance, and Science (cs.CE); Mathematical Physics (math-ph)
[239]  arXiv:1706.05593 (replaced) [pdf]
Title: Closed-Form Mathematical Representations of Interval Type-2 Fuzzy Logic Systems
Comments: 15 pages, 10 figures, 3 tables
Subjects: Systems and Control (cs.SY)
[240]  arXiv:1706.09152 (replaced) [pdf, other]
Title: Generative Bridging Network in Neural Sequence Prediction
Subjects: Artificial Intelligence (cs.AI); Learning (cs.LG); Machine Learning (stat.ML)
[241]  arXiv:1707.05797 (replaced) [pdf]
Title: Low-complexity implementation of convex optimization-based phase retrieval
Subjects: Information Theory (cs.IT); Optimization and Control (math.OC)
[242]  arXiv:1708.02582 (replaced) [pdf, other]
Comments: 16 pages, 9 figures, International Conference on Learning Representations (ICLR) 2018
Subjects: Machine Learning (stat.ML); Learning (cs.LG)
[243]  arXiv:1708.05063 (replaced) [pdf, other]
Title: Communicating Timed Processes with Perfect Timed Channels
Subjects: Formal Languages and Automata Theory (cs.FL)
[244]  arXiv:1708.08046 (replaced) [pdf]
Title: Generalized Short Circuit Ratio for Multi Power Electronic based Devices Infeed Systems: Definition and Theoretical Analysis
Subjects: Systems and Control (cs.SY)
[245]  arXiv:1708.09757 (replaced) [pdf, other]
Title: Opportunities and challenges for quantum-assisted machine learning in near-term quantum computers
Comments: Contribution to the special issue of Quantum Science & Technology (QST) on "What would you do with 1000 qubits"
Subjects: Quantum Physics (quant-ph); Emerging Technologies (cs.ET)
[246]  arXiv:1708.09784 (replaced) [pdf, other]
Title: Quantum-assisted Helmholtz machines: A quantum-classical deep learning framework for industrial datasets in near-term devices
Comments: 11 pages, 2 figures. Minor revisions
Subjects: Quantum Physics (quant-ph); Emerging Technologies (cs.ET)
[247]  arXiv:1709.00483 (replaced) [pdf, other]
Title: Iteratively Linearized Reweighted Alternating Direction Method of Multipliers for a Class of Nonconvex Problems
Subjects: Numerical Analysis (cs.NA); Computer Vision and Pattern Recognition (cs.CV); Numerical Analysis (math.NA); Optimization and Control (math.OC); Machine Learning (stat.ML)
[248]  arXiv:1709.00727 (replaced) [src]
Title: Hand Gesture Real Time Paint Tool - Box
Comments: This paper needs to be properly written and its experiments need to be implemented; thus we are withdrawing this submission
Subjects: Computer Vision and Pattern Recognition (cs.CV)
[249]  arXiv:1709.04049 (replaced) [pdf, other]
Comments: 9 pages, 2 figures, In Proceedings of the 17th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2018)
Subjects: Artificial Intelligence (cs.AI); Computers and Society (cs.CY); Multiagent Systems (cs.MA)
[250]  arXiv:1709.05101 (replaced) [pdf, other]
Title: Time-Optimal Path Tracking via Reachability Analysis
Comments: 6 pages, 3 figures, ICRA 2018
Subjects: Robotics (cs.RO)
[251]  arXiv:1709.06196 (replaced) [pdf, other]
Title: Online algorithms for POMDPs with continuous state, action, and observation spaces
Comments: New experiments show that PFT-DPW is as effective as POMCPOW
Subjects: Artificial Intelligence (cs.AI); Robotics (cs.RO); Systems and Control (cs.SY)
[252]  arXiv:1709.08375 (replaced) [pdf]
Title: Shadow Detection and Geo-tagged Image Information Based Strategic Infrastructure Characterization
Comments: International Conference on Recent Innovations in Engineering and Technology (ICRIET), Netherlands, 9-10 March 2018
Subjects: Computational Geometry (cs.CG)
[253]  arXiv:1709.08902 (replaced) [pdf, ps, other]
Title: A New Cooperative Framework for Parallel Trajectory-Based Metaheuristics
Journal-ref: Applied Soft Computing 65 (2018) 374-386
Subjects: Distributed, Parallel, and Cluster Computing (cs.DC)
[254]  arXiv:1709.09283 (replaced) [pdf, other]
Title: Fast Shadow Detection from a Single Image Using a Patched Convolutional Neural Network
Comments: 6 pages, 5 figures, Submitted to IROS 2018
Subjects: Computer Vision and Pattern Recognition (cs.CV)
[255]  arXiv:1709.09590 (replaced) [pdf, ps, other]
Title: An attentive neural architecture for joint segmentation and parsing and its application to real estate ads
Comments: Preprint - Accepted for publication in Expert Systems with Applications
Journal-ref: Expert Systems with Applications, Volume 102, 15 July 2018, Pages 100-112, ISSN 0957-4174
Subjects: Computation and Language (cs.CL)
[256]  arXiv:1709.10512 (replaced) [pdf, other]
Title: Learning to Roam Free from Small-Space Autonomous Driving with A Path Planner
Comments: Changes to previous version: Added further evaluations. Added tests in a different environment. Added depth-input validation. Dropped the multi-task learning aspect. Currently under review for publication at ECCV 2018
Subjects: Robotics (cs.RO)
[257]  arXiv:1710.02238 (replaced) [pdf, other]
Title: How Much Chemistry Does a Deep Neural Network Need to Know to Make Accurate Predictions?
Comments: In Proceedings of 2018 IEEE Winter Conference on Applications of Computer Vision (WACV)
Subjects: Machine Learning (stat.ML); Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV); Learning (cs.LG)
[258]  arXiv:1710.03811 (replaced) [pdf, other]
Title: DeepSolarEye: Power Loss Prediction and Weakly Supervised Soiling Localization via Fully Convolutional Networks for Solar Panels
Comments: Accepted for publication at WACV 2018
Subjects: Computer Vision and Pattern Recognition (cs.CV)
[259]  arXiv:1710.03830 (replaced) [pdf, other]
Title: Inference on Auctions with Weak Assumptions on Information
Subjects: Econometrics (econ.EM); Computer Science and Game Theory (cs.GT); Learning (cs.LG); Statistics Theory (math.ST)
[260]  arXiv:1710.06462 (replaced) [pdf, other]
Title: S-Isomap++: Multi Manifold Learning from Streaming Data
Subjects: Machine Learning (stat.ML); Learning (cs.LG)
[261]  arXiv:1710.10570 (replaced) [pdf, other]
Title: Weight Initialization of Deep Neural Networks (DNNs) using Data Statistics
Comments: The work is currently in progress
Subjects: Learning (cs.LG); Machine Learning (stat.ML)
[262]  arXiv:1710.11070 (replaced) [pdf, ps, other]
Title: Convergence Rates of Latent Topic Models Under Relaxed Identifiability Conditions
Authors: Yining Wang
Comments: 26 pages, 1 table. Added significantly more expositions, and a numerical procedure to check the order of degeneracy. Proofs slightly altered with explicit constants given at various places
Subjects: Machine Learning (stat.ML); Learning (cs.LG)
[263]  arXiv:1710.11213 (replaced) [pdf, ps, other]
Title: Prophet Secretary for Combinatorial Auctions and Matroids
Comments: Preliminary version appeared in SODA 2018. This version improves the writeup on Fixed-Threshold algorithms
Subjects: Data Structures and Algorithms (cs.DS); Computer Science and Game Theory (cs.GT)
[264]  arXiv:1710.11439 (replaced) [pdf, other]
Title: Statistical Speech Enhancement Based on Probabilistic Integration of Variational Autoencoder and Non-Negative Matrix Factorization
Comments: 5 pages, 3 figures; version in which Eqs. (9), (19), and (20) of v2 (submitted to ICASSP 2018) are corrected. Samples available here: this http URL
Subjects: Sound (cs.SD); Learning (cs.LG); Audio and Speech Processing (eess.AS); Machine Learning (stat.ML)
[265]  arXiv:1711.01434 (replaced) [pdf]
Title: Transaction Fraud Detection Using GRU-centered Sandwich-structured Model
Subjects: Cryptography and Security (cs.CR); Learning (cs.LG)
[266]  arXiv:1711.04449 (replaced) [pdf, other]
Title: A General Framework for Covariance Matrix Optimization in MIMO Systems
Comments: 16 pages, 4 figures, IEEE Communications
Subjects: Information Theory (cs.IT)
[267]  arXiv:1711.06454 (replaced) [pdf, other]
Title: Separating Style and Content for Generalized Style Transfer
Subjects: Computer Vision and Pattern Recognition (cs.CV)
[268]  arXiv:1711.07172 (replaced) [pdf, ps, other]
Title: Emerging Privacy Issues and Solutions in Cyber-Enabled Sharing Services
Subjects: Computers and Society (cs.CY); Cryptography and Security (cs.CR)
[269]  arXiv:1711.07289 (replaced) [pdf, other]
Title: Learning Steerable Filters for Rotation Equivariant CNNs
Subjects: Learning (cs.LG); Computer Vision and Pattern Recognition (cs.CV)
[270]  arXiv:1711.07530 (replaced) [pdf, other]
Title: Online Resource Inference in Network Utility Maximization Problems
Subjects: Networking and Internet Architecture (cs.NI)
[271]  arXiv:1711.09181 (replaced) [pdf, ps, other]
Title: Towards Accurate Deceptive Opinion Spam Detection based on Word Order-preserving CNN
Subjects: Computation and Language (cs.CL)
[272]  arXiv:1711.09504 (replaced) [pdf, other]
Title: A Typology of Social Capital and Associated Network Measures
Subjects: Physics and Society (physics.soc-ph); Social and Information Networks (cs.SI)
[273]  arXiv:1711.10125 (replaced) [pdf, other]
Title: Learning to cluster in order to transfer across domains and tasks
Subjects: Learning (cs.LG); Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)
[274]  arXiv:1712.00434 (replaced) [pdf, other]
Title: On the treewidth of triangulated 3-manifolds
Comments: 23 pages, 3 figures, 1 table. An extended abstract of this paper is to appear in the Proceedings of the 34th International Symposium on Computational Geometry (SoCG 2018), Budapest, June 11-14 2018
Subjects: Geometric Topology (math.GT); Computational Geometry (cs.CG); Combinatorics (math.CO)
[275]  arXiv:1712.01975 (replaced) [pdf, other]
Title: Sparsity Regularization and feature selection in large dimensional data
Subjects: Learning (cs.LG); Numerical Analysis (cs.NA); Optimization and Control (math.OC)
[276]  arXiv:1712.02034 (replaced) [pdf, other]
Title: SMILES2Vec: An Interpretable General-Purpose Deep Neural Network for Predicting Chemical Properties
Subjects: Machine Learning (stat.ML); Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Learning (cs.LG)
[277]  arXiv:1712.02734 (replaced) [pdf, other]
Title: Using Rule-Based Labels for Weak Supervised Learning: A ChemNet for Transferable Chemical Property Prediction
Subjects: Machine Learning (stat.ML); Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV); Learning (cs.LG)
[278]  arXiv:1712.04886 (replaced) [pdf, ps, other]
Title: Closing in on Time and Space Optimal Construction of Compressed Indexes
Authors: Dominik Kempa
Subjects: Data Structures and Algorithms (cs.DS)
[279]  arXiv:1712.07734 (replaced) [pdf, ps, other]
Title: Sheaf-Theoretic Stratification Learning
Subjects: Computational Geometry (cs.CG); Algebraic Topology (math.AT)
[280]  arXiv:1712.08268 (replaced) [pdf, other]
Title: Beyond saliency: understanding convolutional neural networks from saliency prediction on layer-wise relevance propagation
Subjects: Computer Vision and Pattern Recognition (cs.CV)
[281]  arXiv:1712.09025 (replaced) [pdf, other]
Title: Domain Adaptation Meets Disentangled Representation Learning and Style Transfer
Comments: 23 pages, 10 figures, 2 tables
Subjects: Computer Vision and Pattern Recognition (cs.CV)
[282]  arXiv:1712.09203 (replaced) [pdf, ps, other]
Title: Algorithmic Regularization in Over-parameterized Matrix Sensing and Neural Networks with Quadratic Activations
Comments: clarified that the sensing matrices can be symmetric wlog and revised the quadratic neural nets section
Subjects: Learning (cs.LG); Data Structures and Algorithms (cs.DS); Optimization and Control (math.OC); Machine Learning (stat.ML)
[283]  arXiv:1712.09381 (replaced) [pdf, other]
Title: Ray RLlib: A Framework for Distributed Reinforcement Learning
Subjects: Artificial Intelligence (cs.AI); Distributed, Parallel, and Cluster Computing (cs.DC); Learning (cs.LG)
[284]  arXiv:1712.09691 (replaced) [pdf, other]
Title: Scalable Entity Resolution Using Probabilistic Signatures on Parallel Databases
Subjects: Databases (cs.DB)
[285]  arXiv:1801.01165 (replaced) [pdf, other]
Title: Automorphism groups and Ramsey properties of sparse graphs
Comments: 40 pages, 3 figures, new proof of Theorem 3.19 and other minor revisions
Subjects: Group Theory (math.GR); Discrete Mathematics (cs.DM); Combinatorics (math.CO); Dynamical Systems (math.DS); Logic (math.LO)
[286]  arXiv:1801.01967 (replaced) [pdf, other]
Title: Visual Text Correction
Subjects: Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL)
[287]  arXiv:1801.02455 (replaced) [pdf, other]
Title: Optimal Morphs of Planar Orthogonal Drawings
Subjects: Computational Geometry (cs.CG)
[288]  arXiv:1801.02716 (replaced) [pdf, other]
Title: The Android Update Problem: An Empirical Study
Comments: 11 pages, 4 figures, 4 tables
Subjects: Software Engineering (cs.SE)
[289]  arXiv:1801.02950 (replaced) [pdf, other]
Title: Adversarial Deep Learning for Robust Detection of Binary Encoded Malware
Comments: 1st Deep Learning and Security Workshop (co-located with the 39th IEEE Symposium on Security and Privacy)
Subjects: Cryptography and Security (cs.CR); Learning (cs.LG); Machine Learning (stat.ML)
[290]  arXiv:1801.03183 (replaced) [pdf, other]
Title: Discrete Stratified Morse Theory: A User's Guide
Subjects: Computational Geometry (cs.CG); Algebraic Topology (math.AT)
[291]  arXiv:1801.04486 (replaced) [pdf, other]
Title: Can Computers Create Art?
Authors: Aaron Hertzmann
Comments: Submitted to Arts special issue on Machine as Artist (21st Century)
Subjects: Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV); Graphics (cs.GR); Learning (cs.LG)
[292]  arXiv:1801.04590 (replaced) [pdf, other]
Title: Frame-Recurrent Video Super-Resolution
Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (stat.ML)
[293]  arXiv:1801.06519 (replaced) [pdf, other]
Subjects: Computer Vision and Pattern Recognition (cs.CV)
[294]  arXiv:1801.07337 (replaced) [pdf]
Title: Machine Learning and Finite Element Method for Physical Systems Modeling
Subjects: Computational Engineering, Finance, and Science (cs.CE); Computational Physics (physics.comp-ph)
[295]  arXiv:1801.07987 (replaced) [pdf, other]
Title: Near-lossless L-infinity constrained Multi-rate Image Decompression via Deep Neural Network
Authors: Xi Zhang, Xiaolin Wu
Subjects: Computer Vision and Pattern Recognition (cs.CV)
[296]  arXiv:1801.09665 (replaced) [pdf, ps, other]
Title: Cooperative repair: Constructions of optimal MDS codes for all admissible parameters
Comments: This version forms a substantive extension of Version 1. We provide a construction of MDS codes that support optimal cooperative repair for all admissible parameters. This resolves the general problem of constructing codes for cooperative repair with minimum repair bandwidth
Subjects: Information Theory (cs.IT)
[297]  arXiv:1801.10241 (replaced) [pdf, other]
Title: Data-Driven Search-based Software Engineering
Subjects: Software Engineering (cs.SE)
[298]  arXiv:1801.10321 (replaced) [pdf, other]
Title: Model-Free Error Detection and Recovery for Robot Learning from Demonstration
Subjects: Robotics (cs.RO)
[299]  arXiv:1802.00160 (replaced) [pdf, ps, other]
Title: Bipartite discrimination of independently prepared quantum states as a counterexample of a parallel repetition conjecture
Comments: In this version, we improve the proof of Theorem 2
Subjects: Quantum Physics (quant-ph); Computational Complexity (cs.CC)
[300]  arXiv:1802.00285 (replaced) [pdf, other]
Title: Virtual-to-Real: Learning to Control in Visual Semantic Segmentation
Comments: 7 pages, submitted to IJCAI-18
Subjects: Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO); Systems and Control (cs.SY)
[301]  arXiv:1802.00758 (replaced) [pdf, other]
Title: Strategy Representation by Decision Trees in Reactive Synthesis
Comments: Full version of the paper. To appear in TACAS 2018
Subjects: Logic in Computer Science (cs.LO)
[302]  arXiv:1802.03656 (replaced) [pdf, ps, other]
Title: TextZoo, a New Benchmark for Reconsidering Text Classification
Comments: a benchmark that needs to be completed
Subjects: Computation and Language (cs.CL)
[303]  arXiv:1802.05856 (replaced) [pdf, other]
Title: Algorithmic Complexity and Reprogrammability of Chemical Structure Networks
Subjects: Molecular Networks (q-bio.MN); Computational Engineering, Finance, and Science (cs.CE); Information Theory (cs.IT)
[304]  arXiv:1802.06204 (replaced) [pdf, ps, other]
Title: Approximate Set Union Via Approximate Randomization
Subjects: Data Structures and Algorithms (cs.DS); Computational Complexity (cs.CC); Computational Geometry (cs.CG); Discrete Mathematics (cs.DM)
[305]  arXiv:1802.06404 (replaced) [pdf]
Title: Using 3D Hahn Moments as A Computational Representation of ATS Drugs Molecular Structure
Subjects: Computer Vision and Pattern Recognition (cs.CV)
[306]  arXiv:1802.07444 (replaced) [pdf, other]
Title: Scaling-up Split-Merge MCMC with Locality Sensitive Sampling (LSS)
Subjects: Learning (cs.LG); Artificial Intelligence (cs.AI); Data Structures and Algorithms (cs.DS); Methodology (stat.ME); Machine Learning (stat.ML)
[307]  arXiv:1802.08735 (replaced) [pdf, other]
Title: A DIRT-T Approach to Unsupervised Domain Adaptation
Subjects: Machine Learning (stat.ML); Computer Vision and Pattern Recognition (cs.CV); Learning (cs.LG)
[308]  arXiv:1802.09734 (replaced) [pdf, other]
Title: To Stay or to Leave: Churn Prediction for Urban Migrants in the Initial Period
Subjects: Social and Information Networks (cs.SI)
[309]  arXiv:1802.10117 (replaced) [pdf, other]
Title: Fundamental Values of Cryptocurrencies and Blockchain Technology
Subjects: Pricing of Securities (q-fin.PR); Cryptography and Security (cs.CR); Economics (q-fin.EC)
[310]  arXiv:1802.10437 (replaced) [pdf]
Title: A Simple Method to improve Initialization Robustness for Active Contours driven by Local Region Fitting Energy
Subjects: Computer Vision and Pattern Recognition (cs.CV)
[311]  arXiv:1802.10467 (replaced) [pdf, other]
Title: Quantitative Separation Logic
Comments: A revised version that fixes a few technical issues in the appendix
Subjects: Logic in Computer Science (cs.LO)
[312]  arXiv:1803.00197 (replaced) [pdf, other]
Title: TSSD: Temporal Single-Shot Object Detection Based on Attention-Aware LSTM
Subjects: Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO)
[313]  arXiv:1803.01137 (replaced) [pdf, ps, other]
Title: Security issues in a group key establishment protocol
Authors: Chris J Mitchell
Subjects: Cryptography and Security (cs.CR)
[314]  arXiv:1803.01417 (replaced) [pdf]
Title: Efficient and Accurate MRI Super-Resolution using a Generative Adversarial Network and 3D Multi-Level Densely Connected Network
Comments: 10 pages, 2 figures, 2 tables. submitted to MICCAI 2018. v2 fixed network figures and some minor typos
Subjects: Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
[315]  arXiv:1803.02987 (replaced) [pdf, other]
Title: Instance Similarity Deep Hashing for Multi-Label Image Retrieval
Subjects: Computer Vision and Pattern Recognition (cs.CV)
[316]  arXiv:1803.03259 (replaced) [pdf, other]
Title: A reciprocal formulation of non-exponential radiative transfer. 1: Sketch and motivation
Authors: Eugene d'Eon
Comments: 14 pages, fixed typos, more clarity, notation summary table, tweaked abstract and title
Subjects: Computational Physics (physics.comp-ph); Graphics (cs.GR); Nuclear Theory (nucl-th)
[317]  arXiv:1803.03550 (replaced) [pdf, other]
Title: On Optimal Polyline Simplification using the Hausdorff and Fréchet Distance
Comments: Full version of the SoCG2018 paper
Subjects: Computational Geometry (cs.CG)
[318]  arXiv:1803.04051 (replaced) [pdf, other]
Title: Representation Learning over Dynamic Graphs
Subjects: Learning (cs.LG); Machine Learning (stat.ML)
[319]  arXiv:1803.04315 (replaced) [pdf, other]
Title: Power-Efficient Deployment of UAVs as Relays
Authors: Erdem Koyuncu
Subjects: Information Theory (cs.IT); Signal Processing (eess.SP)
[320]  arXiv:1803.04636 (replaced) [pdf, other]
Title: TOM-Net: Learning Transparent Object Matting from a Single Image
Comments: CVPR 2018. Project Page: this https URL
Subjects: Computer Vision and Pattern Recognition (cs.CV)
[321]  arXiv:1803.04792 (replaced) [pdf, other]
Title: Testing Deep Neural Networks
Subjects: Learning (cs.LG); Computer Vision and Pattern Recognition (cs.CV); Software Engineering (cs.SE)
[322]  arXiv:1803.04926 (replaced) [src]
Title: Active Reinforcement Learning with Monte-Carlo Tree Search
Comments: 11 pages, 10 figures. S. Schulze is in a sponsored PhD position, which stipulates that the sponsor must be notified 30 days in advance of any publication and give his consent. As this has not happened, we would like to withdraw the publication
Subjects: Learning (cs.LG); Machine Learning (stat.ML)
[323]  arXiv:1803.05768 (replaced) [pdf, ps, other]
Title: PAC-Reasoning in Relational Domains
Comments: Fixed typos ($l$ replaced by $k$ in Definition 4, a typo fixed in the appendix)
Subjects: Artificial Intelligence (cs.AI); Learning (cs.LG)
[324]  arXiv:1803.05790 (replaced) [pdf, other]
Title: Temporal Human Action Segmentation via Dynamic Clustering
Comments: Compared with the first version, only typos were corrected
Subjects: Computer Vision and Pattern Recognition (cs.CV)
[325]  arXiv:1803.05844 (replaced) [pdf, other]
Title: Joint Turbo Receiver for LDPC-Coded MIMO Systems Based on Semi-definite Relaxation
Authors: Kun Wang, Zhi Ding
Comments: 5 pages, 4 figures, conference
Subjects: Information Theory (cs.IT)
[326]  arXiv:1803.05859 (replaced) [pdf, other]
Title: Neural Network Quine
Subjects: Artificial Intelligence (cs.AI); Neural and Evolutionary Computing (cs.NE)
[327]  arXiv:1803.05860 (replaced) [pdf, other]
Title: Power Grid Decomposition Based on Vertex Cut Sets and Its Applications to Topology Control and Power Trading
Subjects: Systems and Control (cs.SY)
[328]  arXiv:1803.05863 (replaced) [pdf, other]
Title: Learned Iterative Decoding for Lossy Image Compression Systems
Subjects: Computer Vision and Pattern Recognition (cs.CV)
[329]  arXiv:1803.05982 (replaced) [pdf, other]
Title: Real-time Deep Registration With Geodesic Loss
Comments: This work has been submitted to TMI
Subjects: Computer Vision and Pattern Recognition (cs.CV)
[330]  arXiv:1803.06058 (replaced) [pdf, other]
Title: Constant-Time Predictive Distributions for Gaussian Processes
Subjects: Learning (cs.LG); Machine Learning (stat.ML)
[331]  arXiv:1803.06075 (replaced) [pdf, other]
Title: Formal Analysis of Non-functional Properties for a Cooperative Automotive System
Comments: 77 pages, 112 figures, technical report, reference of SAC2018 Conference
Subjects: Software Engineering (cs.SE)
[332]  arXiv:1803.06103 (replaced) [pdf, other]
Title: Model-based Verification and Validation of an Autonomous Vehicle System
Comments: 54 pages, 58 figures, technical report, reference of QRS2017 conference
Subjects: Software Engineering (cs.SE)
[333]  arXiv:1803.06192 (replaced) [pdf, other]
Title: Monocular Fisheye Camera Depth Estimation Using Semi-supervised Sparse Velodyne Data
Subjects: Computer Vision and Pattern Recognition (cs.CV)