
Which journals' papers should you read for computer vision?

CVPR has lost some of its flavor over the past couple of years.

Here are the nine most important recent papers in computer vision, according to the 学术范 (xueshufan) standard evaluation system. (Readers who have difficulty with English can use the site's translation feature after following the links.)

1. Deep Residual Learning for Image Recognition

Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers, 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

Full text: 文献全文 - 学术范 (xueshufan.com)
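To make the residual reformulation concrete, here is a minimal sketch of a single residual block in PyTorch. It is illustrative only: the channel count, block depth, and layer ordering are assumptions chosen for brevity, not the paper's exact configuration.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # Learns a residual function F(x); the identity shortcut adds the
    # input back, so the block outputs y = F(x) + x.
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))
        return self.relu(out + x)  # identity shortcut

block = ResidualBlock(64)
y = block(torch.randn(1, 64, 56, 56))  # output shape equals input shape

Because the shortcut is an identity, a stack of such blocks can always fall back to copying its input, which is the intuition behind the abstract's claim that residual nets remain easy to optimize at large depth.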
2. Very Deep Convolutional Networks for Large-Scale Image Recognition

Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.

Full text: 文献全文 - 学术范 (xueshufan.com)

3. U-Net: Convolutional Networks for Biomedical Image Segmentation

Abstract: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.

Full text: 文献全文 - 学术范 (xueshufan.com)

4. Microsoft COCO: Common Objects in Context

Abstract: We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 object types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.

Full text: 文献全文 - 学术范 (xueshufan.com)
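As a sketch of the contracting/expanding structure described in paper 3, here is a toy U-Net-style network in PyTorch with a single skip connection. The channel counts and depth are made up for brevity; the original network is considerably deeper.

import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # two 3x3 convolutions, each followed by ReLU
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc = double_conv(in_ch, 32)        # contracting path
        self.down = nn.MaxPool2d(2)
        self.mid = double_conv(32, 64)           # bottleneck
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)  # expanding path
        self.dec = double_conv(64, 32)           # 64 = 32 upsampled + 32 skip
        self.head = nn.Conv2d(32, n_classes, 1)  # per-pixel class scores

    def forward(self, x):
        e = self.enc(x)
        u = self.up(self.mid(self.down(e)))
        return self.head(self.dec(torch.cat([u, e], dim=1)))  # skip concat

net = TinyUNet()
out = net(torch.randn(1, 1, 64, 64))  # -> (1, 2, 64, 64) segmentation logits

The concatenation across the "U" is what lets the expanding path combine coarse context with fine localization, the trade-off the abstract highlights.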
5. Rethinking the Inception Architecture for Computer Vision

Abstract: Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we are exploring ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set, demonstrating substantial gains over the state of the art: 21.2% top-1 and 5.6% top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and with using less than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5% top-5 error and 17.3% top-1 error on the validation set and 3.6% top-5 error on the official test set.

Full text: 文献全文 - 学术范 (xueshufan.com)

6. Mask R-CNN

Abstract: We present a conceptually simple, flexible, and general framework for object instance segmentation. Our approach efficiently detects objects in an image while simultaneously generating a high-quality segmentation mask for each instance. The method, called Mask R-CNN, extends Faster R-CNN by adding a branch for predicting an object mask in parallel with the existing branch for bounding box recognition. Mask R-CNN is simple to train and adds only a small overhead to Faster R-CNN, running at 5 fps. Moreover, Mask R-CNN is easy to generalize to other tasks, e.g., allowing us to estimate human poses in the same framework. We show top results in all three tracks of the COCO suite of challenges, including instance segmentation, bounding-box object detection, and person keypoint detection. Without tricks, Mask R-CNN outperforms all existing, single-model entries on every task, including the COCO 2016 challenge winners. We hope our simple and effective approach will serve as a solid baseline and help ease future research in instance-level recognition. Code will be made available.

Full text: 文献全文 - 学术范 (xueshufan.com)

7. Feature Pyramid Networks for Object Detection

Abstract: Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But pyramid representations have been avoided in recent object detectors that are based on deep convolutional networks, partially because they are slow to compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.

Full text: 文献全文 - 学术范 (xueshufan.com)
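The top-down pathway with lateral connections from paper 7 fits in a few lines of PyTorch. The backbone channel widths and the number of pyramid levels below are assumptions for illustration; any set of multi-scale backbone features would do.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFPN(nn.Module):
    def __init__(self, in_channels=(64, 128, 256), out_channels=64):
        super().__init__()
        # 1x1 lateral convs project each backbone map to a common width
        self.laterals = nn.ModuleList(
            nn.Conv2d(c, out_channels, 1) for c in in_channels)
        # 3x3 convs smooth the merged maps
        self.smooth = nn.ModuleList(
            nn.Conv2d(out_channels, out_channels, 3, padding=1)
            for _ in in_channels)

    def forward(self, feats):
        # feats: backbone maps ordered fine -> coarse (e.g. strides 8/16/32)
        laterals = [lat(f) for lat, f in zip(self.laterals, feats)]
        # top-down: upsample the coarser map, add it to the finer lateral
        for i in range(len(laterals) - 2, -1, -1):
            laterals[i] = laterals[i] + F.interpolate(
                laterals[i + 1], size=laterals[i].shape[-2:], mode="nearest")
        return [s(l) for s, l in zip(self.smooth, laterals)]

fpn = TinyFPN()
c3 = torch.randn(1, 64, 32, 32)   # stand-ins for backbone stage outputs
c4 = torch.randn(1, 128, 16, 16)
c5 = torch.randn(1, 256, 8, 8)
p3, p4, p5 = fpn([c3, c4, c5])    # all 64-channel, at the original resolutions

Every output level ends up with the same channel width and carries high-level semantics, which is why a single detector head can be shared across scales.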
8. ORB: An efficient alternative to SIFT or SURF

Abstract: Feature matching is at the base of many computer vision problems, such as object recognition or structure from motion. Current methods rely on costly descriptors for detection and matching. In this paper, we propose a very fast binary descriptor based on BRIEF, called ORB, which is rotation invariant and resistant to noise. We demonstrate through experiments how ORB is two orders of magnitude faster than SIFT, while performing as well in many situations. The efficiency is tested on several real-world applications, including object detection and patch-tracking on a smartphone.

Full text: 文献全文 - 学术范 (xueshufan.com)

9. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs

Abstract: In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state-of-the-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU on the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.

Full text: 文献全文 - 学术范 (xueshufan.com)

Hope this helps!
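By the way, paper 8's descriptor ships with OpenCV, so it is easy to try yourself. Here is a small matching example; the image file names are placeholders, while the calls themselves are the standard opencv-python API.

import cv2

img1 = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)   # placeholder paths
img2 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)  # keypoints + binary descriptors
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching with Hamming distance, the natural metric for
# binary descriptors; crossCheck keeps only mutual best matches.
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)
if matches:
    print(len(matches), "matches; best Hamming distance:", matches[0].distance)

Because ORB descriptors are binary strings, matching reduces to XOR-and-popcount operations, which contributes to the large speed advantage over SIFT that the abstract reports.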


IEEE journals for computer vision papers

PAMI: IEEE Transactions on Pattern Analysis and Machine Intelligence
IJCV: International Journal of Computer Vision
TIP: IEEE Transactions on Image Processing
CVIU: Computer Vision and Image Understanding
PR: Pattern Recognition
PRL: Pattern Recognition Letters

CVPR papers can fairly be called world-class, top-level work.


CVPR stands for the IEEE Conference on Computer Vision and Pattern Recognition. It is the top conference in computer vision and pattern recognition, organized by the IEEE. It is held once a year and, to date, has never been held outside the United States. As the name suggests, besides vision papers the conference also features a good number of pattern recognition papers, and work combining the two is naturally a focal point.

Below are the CVPR paper acceptance statistics for the past few years:

[Figure: CVPR paper acceptance statistics for recent years; image sourced from the web]

CVPR's acceptance standards are quite strict: the overall acceptance rate is usually no more than 25%, and oral presentations account for less than 5% of submissions. The conference is organized by a rotating group of volunteers, whose members are generally selected about three years before a given edition takes place. Reviewing is double-blind, with neither reviewers nor authors knowing each other's identities; a paper is typically read by three reviewers, and the area chairs make the final acceptance decision.

In various rankings of academic conferences, CVPR is regarded as highly influential and highly placed. In the current China Computer Federation (CCF) list of recommended international academic conferences, CVPR is a Class A conference in the field of artificial intelligence.

Yes. To start with the four top representative journals in computer vision: IEEE Transactions on Pattern Analysis and Machine Intelligence and Computational Intelligence are upper-middle tier, with demanding requirements, and are journals of good standing.

Peking University core journals (北大核心) for computer vision

《计算机科学与应用》 (Computer Science and Application), which is an RCCSE core journal.

Are there any Chinese-language SCI journals for computer vision?

SCI does not include Chinese-language papers. The Science Citation Index (SCI) is a citation database founded in 1957 in Philadelphia by Eugene Garfield of the Institute for Scientific Information (ISI) in the United States.

SCI (Science Citation Index), EI (Engineering Index), and ISTP (Index to Scientific and Technical Proceedings) are the world's three renowned scientific and technical literature retrieval systems, and are internationally recognized as the primary tools for scientometric statistics and evaluation.

Coverage

The journals indexed by SCI mainly cover basic research in mathematics, physics, chemistry, agriculture, forestry, medicine, and biology. Its source journals come from more than 40 countries and over 50 languages; the main contributing countries include the United States, the United Kingdom, the Netherlands, Germany, Russia, France, Japan, and Canada, and some Chinese journals (including those from Hong Kong, Macao, and Taiwan) are also indexed.

Reference for the above: Baidu Baike, 科学引文索引 (Science Citation Index).

SCI journals are generally in English; there are essentially none in Chinese. If you want to publish in Chinese, you can choose a domestic core journal, which is relatively authoritative within China.

Further information:

The three generally recognized domestic core journal lists:

Yes, there are. The domestic journals indexed by SCI are mainly English-language, but 18 are in Chinese.

No Chinese-language journals have been newly added to SCI in the past few years. But now that the ESCI database, regarded as the "SCI waiting list", has begun adding Chinese-language journals, does that mean more Chinese-language journals will be indexed by SCI going forward?


In February 2020, the Ministry of Education and the Ministry of Science and Technology issued 关于规范高等学校SCI论文相关指标使用树立正确评价导向的若干意见 (Several Opinions on Regulating the Use of SCI Paper Metrics in Higher Education Institutions and Establishing a Correct Evaluation Orientation). The document calls for breaking the "SCI above all" mentality and, taking this as a breakthrough point, introducing targeted and practicable measures to move away from "paper-only" assessment and establish a correct evaluation orientation.

SCI journals are all in English; there are no Chinese-language ones, although some SCI-indexed journals are published in China.

Low-bar journals for publishing computer vision papers

《计算机科学与应用》 (Computer Science and Application)
