
Computer Vision Papers

2023-12-12 13:42 · Source: 学术参考网 · Author: unknown


Here are the nine most important papers in computer vision from recent years, selected according to 学术范's standard evaluation system:

(If you have difficulty reading English, you can use the translation feature after visiting the site.)

1. Deep Residual Learning for Image Recognition

Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to the ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

Full-text link: 学术范 (xueshufan.com)
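
To make the residual idea concrete, here is a minimal sketch of a residual block in PyTorch. It is an illustration only: the channel count and the use of batch normalization follow common practice, and the exact layer configuration is an assumption rather than the paper's precise architecture.

import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """Two conv layers learn a residual F(x); the block outputs F(x) + x."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # The identity shortcut: the layers fit a residual with reference to
        # the input instead of an unreferenced mapping, easing optimization.
        return self.relu(out + x)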

2. Very Deep Convolutional Networks for Large-Scale Image Recognition

Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.

Full-text link: 学术范 (xueshufan.com)
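
The paper's central design choice is easy to state in code: replace large filters with stacks of 3x3 convolutions. The PyTorch sketch below is illustrative only; the channel widths are assumptions, not the authors' released model.

import torch.nn as nn

def vgg_block(in_ch: int, out_ch: int, num_convs: int) -> nn.Sequential:
    """A stack of 3x3 convolutions followed by 2x2 max pooling.

    Two stacked 3x3 convolutions cover the same receptive field as one
    5x5 but use fewer parameters and add an extra non-linearity.
    """
    layers = []
    for _ in range(num_convs):
        layers += [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                   nn.ReLU(inplace=True)]
        in_ch = out_ch
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))  # halve resolution
    return nn.Sequential(*layers)

# First two stages of a VGG-16-style feature extractor (illustrative sizes):
features = nn.Sequential(vgg_block(3, 64, 2), vgg_block(64, 128, 2))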

3. U-Net: Convolutional Networks for Biomedical Image Segmentation

Abstract: There is large consensus that successful training of deep networks requires many thousands of annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.

Full-text link: 学术范 (xueshufan.com)
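
The contracting/expanding structure with skip connections can be sketched in a few lines. The toy PyTorch model below has a single level instead of U-Net's four, and its channel sizes, class count, and transposed-convolution upsampling are assumptions for illustration.

import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.down = double_conv(1, 64)                       # contracting path
        self.pool = nn.MaxPool2d(2)
        self.bottom = double_conv(64, 128)
        self.up = nn.ConvTranspose2d(128, 64, 2, stride=2)   # expanding path
        self.head = nn.Sequential(double_conv(128, 64), nn.Conv2d(64, 2, 1))

    def forward(self, x):
        d = self.down(x)
        u = self.up(self.bottom(self.pool(d)))
        # The skip connection: concatenate high-resolution features from the
        # contracting path with the upsampled features, which is what enables
        # the precise localization described in the abstract.
        return self.head(torch.cat([d, u], dim=1))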

4. Microsoft COCO: Common Objects in Context

Abstract: We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 object types that would be easily recognizable by a 4-year-old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.

Full-text link: 学术范 (xueshufan.com)
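
For readers who want to inspect the per-instance annotations, the dataset ships with the pycocotools API. A minimal sketch follows; the annotation-file path is a placeholder, and note that the released splits expose 80 of the 91 categories described in the paper.

from pycocotools.coco import COCO

coco = COCO("annotations/instances_val2017.json")  # placeholder path
img_id = coco.getImgIds()[0]
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_id))
for ann in anns:
    # Each annotation carries a category, a bounding box, and a
    # per-instance segmentation.
    cat = coco.loadCats(ann["category_id"])[0]["name"]
    print(cat, ann["bbox"])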

5. Rethinking the Inception Architecture for Computer Vision

Abstract: Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we explore ways to scale up networks that aim at utilizing the added computation as efficiently as possible through suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set, demonstrating substantial gains over the state of the art: 21.2% top-1 and 5.6% top-5 error for single-frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and using fewer than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5% top-5 error and 17.3% top-1 error on the validation set and 3.6% top-5 error on the official test set.

Full-text link: 学术范 (xueshufan.com)
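
The "suitably factorized convolutions" mentioned in the abstract have a compact expression: an n×n convolution is replaced by an n×1 followed by a 1×n convolution. The PyTorch sketch below is illustrative; the channel counts and the choice n=7 are assumptions.

import torch.nn as nn

def factorized_conv(in_ch: int, out_ch: int, n: int = 7) -> nn.Sequential:
    """Approximate an n x n convolution with an n x 1 then a 1 x n.

    This keeps the receptive field while cutting the per-position cost
    from n*n to 2*n multiplications per channel pair.
    """
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=(n, 1), padding=(n // 2, 0)),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=(1, n), padding=(0, n // 2)),
        nn.ReLU(inplace=True))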

6. Mask R-CNN

Abstract: We present a conceptually simple, flexible, and general framework for object instance segmentation. Our approach efficiently detects objects in an image while simultaneously generating a high-quality segmentation mask for each instance. The method, called Mask R-CNN, extends Faster R-CNN by adding a branch for predicting an object mask in parallel with the existing branch for bounding box recognition. Mask R-CNN is simple to train and adds only a small overhead to Faster R-CNN, running at 5 fps. Moreover, Mask R-CNN is easy to generalize to other tasks, e.g., allowing us to estimate human poses in the same framework. We show top results in all three tracks of the COCO suite of challenges, including instance segmentation, bounding-box object detection, and person keypoint detection. Without tricks, Mask R-CNN outperforms all existing single-model entries on every task, including the COCO 2016 challenge winners. We hope our simple and effective approach will serve as a solid baseline and help ease future research in instance-level recognition. Code will be made available.

Full-text link: 学术范 (xueshufan.com)
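
Independently of the authors' own release, torchvision ships a ready-to-run Mask R-CNN, which makes the parallel box/mask branches easy to see in practice. A minimal usage sketch, assuming a recent torchvision (the weights argument follows current versions; the input tensor is a placeholder, not a real image):

import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 480, 640)        # placeholder image tensor in [0, 1]
with torch.no_grad():
    pred = model([image])[0]
# One box, label, score, and mask per detected instance, mirroring the
# parallel box and mask branches described above.
print(pred["boxes"].shape, pred["labels"].shape, pred["masks"].shape)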

7. Feature Pyramid Networks for Object Detection

Abstract: Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But pyramid representations have been avoided in recent object detectors that are based on deep convolutional networks, partially because they are slow to compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.

Full-text link: 学术范 (xueshufan.com)
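
The top-down architecture with lateral connections is compact enough to sketch. In the toy PyTorch module below, 1×1 lateral convolutions bring each backbone stage to a common width, coarser maps are upsampled and added in, and a 3×3 convolution smooths each merged map. The backbone channel counts are assumptions (ResNet-style stages C3 to C5).

import torch.nn as nn
import torch.nn.functional as F

class TinyFPN(nn.Module):
    def __init__(self, in_channels=(512, 1024, 2048), out_ch=256):
        super().__init__()
        self.lateral = nn.ModuleList(
            nn.Conv2d(c, out_ch, kernel_size=1) for c in in_channels)
        self.smooth = nn.ModuleList(
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)
            for _ in in_channels)

    def forward(self, feats):
        # feats are backbone maps from fine to coarse, e.g. [C3, C4, C5].
        laterals = [lat(f) for lat, f in zip(self.lateral, feats)]
        for i in range(len(laterals) - 2, -1, -1):
            # Top-down pathway: upsample the coarser map and add the
            # lateral connection at the finer resolution.
            laterals[i] = laterals[i] + F.interpolate(
                laterals[i + 1], scale_factor=2, mode="nearest")
        return [s(l) for s, l in zip(self.smooth, laterals)]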

8. ORB: An efficient alternative to SIFT or SURF

Abstract: Feature matching is at the base of many computer vision problems, such as object recognition or structure from motion. Current methods rely on costly descriptors for detection and matching. In this paper, we propose a very fast binary descriptor based on BRIEF, called ORB, which is rotation invariant and resistant to noise. We demonstrate through experiments that ORB is two orders of magnitude faster than SIFT, while performing as well in many situations. The efficiency is tested on several real-world applications, including object detection and patch-tracking on a smartphone.

Full-text link: 学术范 (xueshufan.com)
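
ORB is implemented in OpenCV, so the descriptor is easy to try. A minimal matching sketch follows; the image file names are placeholders, and because ORB descriptors are binary strings, matching uses Hamming distance.

import cv2

img1 = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)   # placeholder files
img2 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching with Hamming distance suits binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(len(matches), "matches; best distance:", matches[0].distance)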

9. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs

Abstract: In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields of view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets a new state of the art on the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7% mIOU on the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.

Full-text link: 学术范 (xueshufan.com)
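
In modern frameworks, atrous convolution is simply the dilation argument of a standard convolution, which makes ASPP short to sketch in PyTorch. The rates (6, 12, 18, 24) follow the large field-of-view ASPP variant reported in the paper, while the channel sizes are assumptions.

import torch
import torch.nn as nn

class ASPPSketch(nn.Module):
    """Parallel atrous branches probe features at several sampling rates."""

    def __init__(self, in_ch=2048, out_ch=256, rates=(6, 12, 18, 24)):
        super().__init__()
        # dilation=r enlarges the field of view without adding parameters
        # or computation relative to an ordinary 3x3 convolution.
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r)
            for r in rates)
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        # Concatenate the multi-rate responses, then fuse with a 1x1 conv.
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))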

Hope this helps!

How to publish in the top computer vision conferences: CVPR, ICCV, ECCV

The three top conferences in computer vision are generally agreed to be ICCV, ECCV, and CVPR.
1. ICCV
ICCV stands for the IEEE International Conference on Computer Vision. It is one of the three top conferences in computer vision, is usually held every two years, and took place in Beijing in October 2005.
Topics covered by the conference include: low-level vision and perception; color, illumination, and texture processing; segmentation and grouping; motion and tracking; stereo vision and structure from motion; image-based modeling; physics-based modeling; statistical learning in vision; surveillance; recognition of objects, events, and scenes; vision-based graphics; image and video acquisition; performance evaluation; and specific applications.
ICCV is the highest-level conference in computer vision, and its proceedings represent the latest directions and standards in the field. Its acceptance rate is low: in 2007, for example, the conference received more than 1,200 submissions and accepted only 244 papers. ICCV papers are indexed by EI.
2. ECCV
ECCV stands for the European Conference on Computer Vision. Held every two years, it is one of the three top conferences in computer vision (the other two being ICCV and CVPR). As a European conference it has traditionally emphasized theory, but judging from the most recent edition the community is turning to applications as well: the oral sessions featured a remarkable number of impressive, polished demos. One drawback of European conferences is that speakers often have heavy English accents, and some are not comfortable speaking English at all, which can make the sessions and hallway conversations harder to follow.
3. CVPR
CVPR stands for the IEEE Conference on Computer Vision and Pattern Recognition. It is held annually and has historically almost always been held in the United States, so students hoping to visit the US should not miss it. As its name suggests, the conference accepts not only vision papers but also a good number of pattern recognition papers, and the combination of the two is naturally a focus.

What level is an ICCV paper?

An ICCV paper is a conference paper at the highest level in the field of computer vision.

Computer vision uses computers and related equipment to simulate biological vision. Its main task is to process captured images or video to recover three-dimensional information about the corresponding scene, much as humans and many other sighted creatures do every day.

Computer vision is the science of making machines "see": using cameras and computers in place of human eyes to recognize, track, and measure targets, and then processing the resulting imagery into a form better suited to human viewing or to inspection by instruments.

CVPR acceptance criteria

CVPR has fairly strict acceptance standards: the overall acceptance rate is usually below 30%, and oral presentations account for no more than 5% of papers. The organizing body is a rotating group of volunteers, typically selected about three years before a given edition of the conference. Reviewing at CVPR is generally double-blind, meaning that reviewers and authors do not know each other's identities.

Each paper is normally read by three reviewers, and the conference's area chairs then make the final decision on acceptance.

Can a computer vision thesis use the same method as someone else?

That is fine, as long as your conclusions differ.
Using the same idea and the same method but changing the material or the object of study, and obtaining different results, will not produce outstanding work, but it does not count as plagiarism.

How to publish in the top computer vision conferences: CVPR, ICCV, ECCV

A graduation thesis is one stage in the process of teaching and research, and an important way of assessing and grading academic performance. Its purpose is to consolidate what students have learned during their studies, to develop their ability to apply all of their professional knowledge and skills comprehensively and creatively to relatively complex problems, and to give them basic training in scientific research.

The title is the face of the article. Titles come in many styles, but whatever the form, a title should express the author's intent and the article's theme, either as a whole or from a particular angle. Thesis titles generally comprise a main title, a subtitle, and section titles. The main title expresses the overall content of the article. Common approaches include:
1. Stating the essence of the topic. Such a title condenses the whole article and is often its central thesis. It is highly explicit and helps readers grasp the core of the content. Titles of this kind are common, for example "On Models of Economic Systems", "The Theory of the Economic Center", and "My View on Reforming County-Level Administrative Institutions".
2. Posing a question. These titles take the form of a rhetorical question and withhold the answer; the author's view is in fact quite definite, only stated indirectly so that readers are prompted to think. Because the viewpoint is implicit, such titles readily attract readers' attention, for example "Is the Household Contract Responsibility System Just Going It Alone?" and "Is a Commodity Economy the Same as a Capitalist Economy?".
3. Delimiting the scope. Seen on its own, such a title reveals no particular viewpoint; it only bounds the content of the article. It is chosen either because the main argument is hard to compress into one short phrase, or because stating the scope draws the attention of readers working on the same topic. Examples include "On the Two-Tier Management System in China's Countryside", "Correctly Handling the Relations between the Center and the Localities, between Sectors and Regions", and "An Analysis of Post-War Western Trade Liberalization".
4. Using a judgment-style sentence. Such a title frames the content flexibly: the object of study is concrete and narrow, while the idea drawn from it must be strongly general and broad. Starting from the small and reaching toward the large in this way favors the development of scientific thinking and research, for example "The Rise of Township Enterprises as a Ray of Hope for Rural China", "Technological Progress and the Agricultural Economy", and "The Essence of Beauty as Seen from 'Labor Created Beauty'".
5. Using vivid, figurative phrasing, for example "An Inspiring System of Governance", "Dawn in the History of Science and Technology", and "A Theory of All-Illuminating Light".
