Dataset: iyatomilab/SciGA-for-experiments-hf (auto-converted to Parquet)
Modalities: Image, Text
Formats: parquet
Libraries: Datasets, Dask
Dataset Viewer (first 5 GB shown)
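The dataset can be streamed with the 🤗 Datasets library. A minimal sketch, assuming the repo id and split name taken from the asset URLs in the preview below (`iyatomilab/SciGA-for-experiments-hf`, split `train`); the import is done lazily so the sketch degrades gracefully when `datasets` is not installed:

```python
def first_rows(split="train", n=5, repo="iyatomilab/SciGA-for-experiments-hf"):
    """Stream the first n rows of a split without downloading every Parquet shard."""
    from datasets import load_dataset  # lazy import; requires `pip install datasets`
    ds = load_dataset(repo, split=split, streaming=True)
    return [row for _, row in zip(range(n), ds)]

# Column names as listed in the viewer schema below.
COLUMNS = [
    "paper_id", "title", "abstract", "GA_figure", "GA_caption",
    "figures", "research_fields", "journal_ref", "conference",
]
```

Streaming mode reads rows on demand, which is useful here because the viewer only exposes the first 5 GB of the converted Parquet files.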
Schema (column, type, min-max):
  paper_id         string   10 - 10 chars
  title            string   16 - 163 chars
  abstract         string   0 - 3.11k chars
  GA_figure        image    123 - 1.98k px wide
  GA_caption       string   0 - 1.44k chars
  figures          dict
  research_fields  list     1 - 6 items
  journal_ref      string   75 classes
  conference       string   32 classes
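The length bounds in the schema can be checked mechanically against a row. A minimal sketch, with the bounds transcribed from the viewer (k-suffixed values expanded, e.g. 3.11k read as 3110) and a sample row copied from the first preview row:

```python
# Length bounds transcribed from the schema above (k-suffixed values expanded).
BOUNDS = {
    "paper_id":        (10, 10),
    "title":           (16, 163),
    "abstract":        (0, 3110),
    "GA_caption":      (0, 1440),
    "research_fields": (1, 6),
}

def out_of_bounds(row):
    """Names of fields whose string/list length violates the schema bounds."""
    return [f for f, (lo, hi) in BOUNDS.items()
            if f in row and not lo <= len(row[f]) <= hi]

# Sample values from the first preview row below.
sample = {
    "paper_id": "2003.00467",
    "research_fields": ["cs.RO"],
}
```

`out_of_bounds(sample)` returns an empty list for a conforming row; a field such as a 5-character `paper_id` would be reported by name.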
paper_id: 2003.00467
title: NeuroTac: A Neuromorphic Optical Tactile Sensor applied to Texture Recognition
abstract: Developing artificial tactile sensing capabilities that rival human touch is a long-term goal in robotics and prosthetics. Gradually more elaborate biomimetic tactile sensors are being developed and applied to grasping and manipulation tasks to help achieve this goal. Here we present the neuroTac, a novel neuromorphic ...
GA_caption: Transduction, encoding and decoding mechanisms for the neuroTac sensor. The sensor mimics biological processes by accumulating pixel events (Potentials) from an event-based camera and combining them into taxel events (Spikes).
figures: { "image": [ { "src": ... } ] }  (truncated signed asset URL omitted)
research_fields: [ "cs.RO" ]
journal_ref: null
conference: ICRA
paper_id: 2003.01936
title: Automatic Signboard Detection and Localization in Densely Populated Developing Cities
abstract: Most city establishments of developing cities are digitally unlabeled because of the lack of automatic annotation systems. Hence location and trajectory services such as Google Maps, Uber etc remain underutilized in such cities. Accurate signboard detection in natural scene images is the foremost task for error-free in...
GA_caption: Problem complexity scenario: (a): Signboard characteristics of a developed city. (b): Scenario of developing or less developed city which reveals the complexity of our case.
figures: { "image": [ { "src": ... } ] }  (truncated signed asset URL omitted)
research_fields: [ "cs.CV" ]
journal_ref: null
conference: null
paper_id: 2004.03378
title: Error-Corrected Margin-Based Deep Cross-Modal Hashing for Facial Image Retrieval
abstract: Cross-modal hashing facilitates mapping of heterogeneous multimedia data into a common Hamming space, which can be utilized for fast and flexible retrieval across different modalities. In this paper, we propose a novel cross-modal hashing architecture-deep neural decoder cross-modal hashing (DNDCMH), which uses a binar...
GA_caption: Cross modal hashing for facial image retrieval: a bald man wearing Sunglass.
figures: { "image": [ { "src": ... } ] }  (truncated signed asset URL omitted)
research_fields: [ "cs.CV" ]
journal_ref: null
conference: null
paper_id: 2004.09199
title: Generative Feature Replay For Class-Incremental Learning
abstract: Humans are capable of learning new tasks without forgetting previous ones, while neural networks fail due to catastrophic forgetting between new and previously-learned tasks. We consider a class-incremental setting which means that the task-ID is unknown at inference time. The imbalance between old and new classes typi...
GA_caption: Comparison of generative image replay and the proposed generative feature replay. Instead of replaying images x the proposed method uses a generator G to replay features u. To prevent forgetting in the feature extractor F we apply feature distillation. Featur...
figures: { "image": [ { "src": ... } ] }  (truncated signed asset URL omitted)
research_fields: [ "cs.CV", "cs.LG" ]
journal_ref: null
conference: CVPR (workshop)
paper_id: 2005.01351
title: Anchors Based Method for Fingertips Position Estimation from a Monocular RGB Image using Deep Neural Network
abstract: In Virtual, augmented, and mixed reality, the use of hand gestures is increasingly becoming popular to reduce the difference between the virtual and real world. The precise location of the fingertip is essential/crucial for a seamless experience. Much of the research work is based on using depth information for the est...
GA_caption: An illustration showing use of fingertips to interact with various object in virtual/augmented reality environment. (Source: https://www.wikipedia.org/).
figures: { "image": [ { "src": ... } ] }  (truncated signed asset URL omitted)
research_fields: [ "cs.CV", "cs.HC", "eess.IV" ]
journal_ref: null
conference: null
paper_id: 2006.05873
title: WasteNet: Waste Classification at the Edge for Smart Bins
abstract: Smart Bins have become popular in smart cities and campuses around the world. These bins have a compaction mechanism that increases the bins’ capacity as well as automated real-time collection notifications. In this paper, we propose WasteNet, a waste classification model based on convolutional neural networks that can...
GA_caption: Waste Categories
figures: { "image": [ { "src": ... } ] }  (truncated signed asset URL omitted)
research_fields: [ "cs.CV", "cs.CY" ]
journal_ref: null
conference: null
paper_id: 2006.09917
title: FISHING Net: Future Inference of Semantic Heatmaps In Grids
abstract: For autonomous robots to navigate a complex environment, it is crucial to understand the surrounding scene both geometrically and semantically. Modern autonomous robots employ multiple sets of sensors, including lidars, radars, and cameras. Managing the different reference frames and characteristics of the sensors, and...
GA_caption: FISHING Net Architecture: multiple neural networks, one for each sensor modality (lidar, radar and camera) take in a sequence of input sensor data from t = -2, -1.5, -1, -0.5, 0 s and output a sequence of shared top-down semantic grids at t = 0, 0.5, 1, 1.5, 2 s representing 3 object classes (Vulnerable...
figures: { "image": [ { "src": ... } ] }  (truncated signed asset URL omitted)
research_fields: [ "cs.CV", "cs.LG" ]
journal_ref: null
conference: null
paper_id: 2006.14787
title: On Equivariant and Invariant Learning of Object Landmark Representations
abstract: Given a collection of images, humans are able to discover landmarks by modeling the shared geometric structure across instances. This idea of geometric equivariance has been widely used for the unsupervised discovery of object landmark representations. In this paper, we develop a simple and effective approach by combin...
GA_caption: Equivariant and invariant learning. (a) Equivariant learning requires representations across locations to be invariant to a geometric transformation g while being distinctive across locations. (b) Invariant learning encourages the representations to be invariant to transformations while being distinctive...
figures: { "image": [ { "src": ... } ] }  (truncated signed asset URL omitted)
research_fields: [ "cs.CV" ]
journal_ref: null
conference: null
paper_id: 2007.10891
title: Representative-Discriminative Learning for Open-set Land Cover Classification of Satellite Imagery
abstract: Land cover classification of satellite imagery is an important step toward analyzing the Earth’s surface. Existing models assume a closed-set setting where both the training and testing classes belong to the same label set. However, due to the unique characteristics of satellite imagery with extremely vast area of vers...
GA_caption: Open-set land-cover classification: Data samples corresponding to ground truth categories are from the known class set (K). It is likely that some categories are not known during training and will be encountered at testing, i.e., samples from unknown class set (U). The goal is to identify pixels coming from (U), while ...
figures: { "image": [ { "src": ... } ] }  (truncated signed asset URL omitted)
research_fields: [ "cs.CV" ]
journal_ref: null
conference: ECCV
paper_id: 2009.08511
title: Smartphone Camera De-identification while Preserving Biometric Utility
abstract: The principle of Photo Response Non Uniformity (PRNU) is often exploited to deduce the identity of the smartphone device whose camera or sensor was used to acquire a certain image. In this work, we design an algorithm that perturbs a face image acquired using a smartphone camera such that (a) sensor-specific details pe...
GA_caption: The objective of our work. The original biometric image is modified such that the sensor classifier associates it with a different sensor, while the biometric matcher successfully matches the original image with the modified image.
figures: { "image": [ { "src": ... } ] }  (truncated signed asset URL omitted)
research_fields: [ "cs.CV", "eess.IV" ]
journal_ref: Proc. of 10th IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS), (Tampa, USA), September 2019
conference: null
paper_id: 2009.14719
title: Between Shapes, Using the Hausdorff Distance
abstract: Given two shapes A and B in the plane with Hausdorff distance 1, is there a shape S with Hausdorff distance 1/2 to and from A and B? The answer is always yes, and depending on convexity of A and/or ...
GA_caption: Hausdorff morphs between three shapes.
figures: { "image": [ { "src": ... } ] }  (truncated signed asset URL omitted)
research_fields: [ "cs.CG" ]
journal_ref: null
conference: null
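Each `figures` cell holds a dict with an `image` list of entries carrying a `src` URL, as far as the truncated previews above show. A minimal sketch of pulling the URLs out; the payload below is hypothetical and only mirrors that inferred shape:

```python
def figure_srcs(figures):
    """Collect the 'src' of each image entry in a row's `figures` dict."""
    return [e["src"] for e in figures.get("image", []) if "src" in e]

# Hypothetical payload shaped like the truncated cells in the preview.
payload = {"image": [{"src": "https://datasets-server.huggingface.co/assets/.../image.jpg"}]}
```

Entries without a `src` key, or rows whose `figures` dict is empty, simply yield an empty list rather than raising.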
End of preview.
README.md exists but content is empty.
Downloads last month: 24