[References]
[1] 赵燕. 视觉传达设计概论纲要[J]. 包装世界, 1996(01).
[2] 杨屏. 视觉传达设计的现代化与国际化趋向[J]. 西北美术, 1999(04).
[3] 陈湘. 谈现代视觉传达设计的起源和发展[J]. 财贸研究, 2000(06).
[4] 何洁. 当代视觉传达设计散论[J]. 装饰, 1998(01).
[5] 鬲波飞. 网络媒体的视觉传达设计研究[D]. 湖南大学, 2002.
[6] 华佳. 视觉传达设计与视觉思维[D]. 苏州大学, 2004.
[7] 孙超英. 信息时代的视觉传达设计[N]. 中华读书报, 2002.

[References]
[1] 潜铁宇, 熊兴福. 视觉传达设计[M]. 武汉理工大学出版社.
[2] 杨伟. 关于网络传播中视觉传达设计的研究[J]. 电化教育研究.
[3] 李金辉. 简述多媒体中的视觉传达[J]. 理论观察, 2008(05).
[4] 葛中. 多媒体设计与视觉传达[J]. 考试周刊, 2007(23).
[5] 余义虎. 视觉传达设计多元特征与表现[J]. 甘肃联合大学学报(社会科学版).

[References]
[1] 刘丹. 视觉元素在网页设计中的运用[J]. 合肥工业大学学报, 2006(02).
[2] 杨西惠. 网页设计的视觉要素与编排创意[J]. 装饰, 2005(04).

[References]
[1] 何晓佑. 现代十大设计理念[M]. 江苏美术出版社, 2001.
[2] 靳埭强. 视觉传达设计实践[M]. 上海文艺出版社.
[3] 崔唯, 周钧. 视觉传达色彩设计[M]. 中国青年出版社.
[4] 靳埭强. 身度心道——中国文化为本的设计·绘画·教育[M]. 安徽美术出版社, 2008.

[References]
[1] 设计[EB/OL]. 百度百科.
[2] 尹定邦. 设计学概论[M]. 长沙: 湖南科学技术出版社: 54-55.
[3] 霍思比. 牛津高阶英汉双解词典(第四版)[Z]. 北京: 商务印书馆, 1997: 388-389.
[4] 朱彧. 设计艺术概论[M]. 长沙: 湖南大学出版社: 3-4, 158.
[5] 视觉传达设计[EB/OL]. 百度百科.
[6] 李砚祖. 视觉传达设计的历史与美学[M]. 北京: 中国人民大学出版社: 2-3.
[7] 曾宪楷. 视觉传达设计[M]. 北京: 北京理工大学出版社: 1-2.
[8] 吴鑫. 设计教育当随时代——论信息技术的发展与视觉传达设计学科的课程改革[D]. 中国优秀博硕学位论文全文数据库, 2006.
[9] 冯玉雪. 艺术设计与社会经济发展[J]. 美与时代, 2003(09): 51-52.
[10] 胡锦涛在全国科学技术大会上的讲话[C/OL]. 新华网.
[11] 黄梅荣. 艺术设计教学创新——数码媒体的重要性[J]. 装饰, 2004(12): 88-89.
[12] 贾凤兰. 文化创意产业的由来与发展[J]. 求是, 2009(24): 53.
[13] 张幼云. 美术教育大视野——中外高等美术教育比较研究[M]. 北京: 高等教育出版社: 4, 46, 150-153.
[14] 国内外创意产业的发展概况[EB/OL]. 金羊网.
[15] 杨德广, 王勤. 从经济全球化到教育国际化的思考[J]. 教学研究(河北), 2000, 23(4): 292.
[16] 张小鹭. 现代美术教育学[M]. 重庆: 西南师范大学出版社: 2, 175.
[17] 吕村. 艺术设计教育学科课程设置的回顾与现状[J]. 中州大学学报, 2009(02): 79.
[18] 陈晓英. 对高校艺术设计专业教育现状的分析与思索[J]. 电影评介, 2008(22): 85.
[19] 张梦. 日本武藏野美术大学视觉传达设计教育的考察[J]. 艺术教育, 2009(05): 14.
[20] 彭亮. 台湾高等设计教育特色及对大陆设计教育的启示[J]. 家具与室内装饰, 2003(12): 68-71.
[21] 曹田泉. 设计艺术概论[M]. 上海: 上海人民出版社, 2005: 17.
[22] 翁丽芬. 中国高校设计艺术教育发展的现状与改革对策研究[D]. 中国优秀博硕学位论文全文数据库, 2007.
[23] 汪晓春. 美国辛辛那提大学设计、建筑、艺术和规划学校的设计教学[J/OL]. 中国设计在线网站.
[24] 赵倩. 设计艺术教育与人文精神——论高校设计艺术专业教育中人文社科课程设置的必要性[D]. 中国优秀博硕学位论文全文数据库, 2007.
[25] 林采霖. 设计院校与设计业界建立双向多元化合作之研究[J]. 装饰, 2005(03): 77.
Here is a recommendation for the field of computer vision: the nine most important papers of recent years, according to 学术范's standard evaluation system. (Readers who find the English difficult can use the site's translation feature after opening a link.)

1. Deep Residual Learning for Image Recognition

Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to the ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

Full-text link: 文献全文 - 学术范

2. Very Deep Convolutional Networks for Large-Scale Image Recognition

Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.

Full-text link: 文献全文 - 学术范

3. U-Net: Convolutional Networks for Biomedical Image Segmentation

Abstract: There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at .

Full-text link: 文献全文 - 学术范
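To make the residual-learning idea in paper 1 concrete, here is a minimal PyTorch sketch of a residual block. This is my own illustration, not the authors' code; the channel count and layer layout are assumptions:

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        """y = F(x) + x: the stacked layers learn the residual F(x) = H(x) - x."""
        def __init__(self, channels):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            self.bn1 = nn.BatchNorm2d(channels)
            self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            self.bn2 = nn.BatchNorm2d(channels)
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            out = self.relu(self.bn1(self.conv1(x)))
            out = self.bn2(self.conv2(out))
            return self.relu(out + x)  # identity shortcut: gradients flow through "+ x"

The "out + x" shortcut is the whole trick: because the identity path is always available, adding more blocks cannot make the network worse in principle, which is what lets training scale to 100+ layers.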
4. Microsoft COCO: Common Objects in Context

Abstract: We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 object types that would be easily recognizable by a 4-year-old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.

Full-text link: 文献全文 - 学术范

5. Rethinking the Inception Architecture for Computer Vision

Abstract: Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we explore ways to scale up networks that aim at utilizing the added computation as efficiently as possible, through suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set and demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6% top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference, while using less than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5% top-5 error and 17.3% top-1 error on the validation set and 3.6% top-5 error on the official test set.

Full-text link: 文献全文 - 学术范

6. Mask R-CNN

Abstract: We present a conceptually simple, flexible, and general framework for object instance segmentation. Our approach efficiently detects objects in an image while simultaneously generating a high-quality segmentation mask for each instance. The method, called Mask R-CNN, extends Faster R-CNN by adding a branch for predicting an object mask in parallel with the existing branch for bounding box recognition. Mask R-CNN is simple to train and adds only a small overhead to Faster R-CNN, running at 5 fps. Moreover, Mask R-CNN is easy to generalize to other tasks, e.g., allowing us to estimate human poses in the same framework. We show top results in all three tracks of the COCO suite of challenges, including instance segmentation, bounding-box object detection, and person keypoint detection. Without tricks, Mask R-CNN outperforms all existing, single-model entries on every task, including the COCO 2016 challenge winners. We hope our simple and effective approach will serve as a solid baseline and help ease future research in instance-level recognition. Code will be made available.

Full-text link: 文献全文 - 学术范
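Papers 4 and 6 connect directly in practice: torchvision ships a Mask R-CNN pretrained on COCO. A minimal inference sketch, assuming a recent torchvision (the weights-enum API introduced in 0.13); the image path is a placeholder:

    import torch
    from torchvision.io import read_image
    from torchvision.models.detection import (
        maskrcnn_resnet50_fpn, MaskRCNN_ResNet50_FPN_Weights)

    weights = MaskRCNN_ResNet50_FPN_Weights.DEFAULT      # pretrained on COCO
    model = maskrcnn_resnet50_fpn(weights=weights).eval()

    img = read_image("example.jpg")                      # placeholder path
    batch = [weights.transforms()(img)]                  # preprocess to the model's expected input
    with torch.no_grad():
        pred = model(batch)[0]                           # dict: boxes, labels, scores, masks
    print(pred["boxes"].shape, pred["masks"].shape)

Each prediction carries a per-instance mask alongside the box and class score, which is exactly the parallel-branch design the abstract describes.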
7. Feature Pyramid Networks for Object Detection

Abstract: Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But pyramid representations have been avoided in recent object detectors that are based on deep convolutional networks, partially because they are slow to compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.

Full-text link: 文献全文 - 学术范

8. ORB: An efficient alternative to SIFT or SURF

Abstract: Feature matching is at the base of many computer vision problems, such as object recognition or structure from motion. Current methods rely on costly descriptors for detection and matching. In this paper, we propose a very fast binary descriptor based on BRIEF, called ORB, which is rotation invariant and resistant to noise. We demonstrate through experiments that ORB is two orders of magnitude faster than SIFT, while performing as well in many situations. The efficiency is tested on several real-world applications, including object detection and patch-tracking on a smart phone.

Full-text link: 文献全文 - 学术范
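ORB (paper 8) is built into OpenCV, so its speed/quality trade-off is easy to test yourself. A small matching sketch, assuming opencv-python is installed; the image paths are placeholders:

    import cv2

    img1 = cv2.imread("scene1.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder paths
    img2 = cv2.imread("scene2.jpg", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(img1, None)  # keypoints + 256-bit binary descriptors
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Binary descriptors are compared with Hamming distance, not L2,
    # which is a large part of ORB's speed advantage over SIFT/SURF.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    print(f"{len(matches)} matches; best distance = {matches[0].distance}")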
9. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs

Abstract: In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state-of-the-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.

Full-text link: 文献全文 - 学术范

(A short sketch of atrous convolution follows below.) Hope this helps!
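The "atrous" (dilated) convolution at the heart of paper 9 is a one-argument change in most frameworks. A PyTorch sketch showing that dilation enlarges the field of view without adding parameters or reducing resolution; the tensor sizes are arbitrary:

    import torch
    import torch.nn as nn

    x = torch.randn(1, 64, 64, 64)

    # Standard 3x3 conv: 3x3 receptive field.
    conv = nn.Conv2d(64, 64, kernel_size=3, padding=1)

    # Atrous/dilated 3x3 conv with rate 2: 5x5 effective field of view,
    # same number of weights, same output resolution (padding = dilation).
    atrous = nn.Conv2d(64, 64, kernel_size=3, padding=2, dilation=2)

    print(conv(x).shape, atrous(x).shape)  # both torch.Size([1, 64, 64, 64])
    print(sum(p.numel() for p in conv.parameters()),
          sum(p.numel() for p in atrous.parameters()))  # identical parameter counts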
1. Title: it should be accurate, concise, eye-catching, and fresh.

2. Table of contents: a brief outline of the paper's main sections. (Short papers need no table of contents.)

3. Abstract: an extract of the paper's main content; it should be short, precise, and complete. It may run to only a few dozen characters, and should not exceed about 300.

4. Keywords or subject terms: keywords are selected from the title, abstract, and body of the paper, and are words of real substance for expressing the paper's central content. They are used by computer retrieval systems to index the paper's content, so that information systems can collect them for readers to search. Each paper normally carries 3-8 keywords, placed on a new line below the lower left of the abstract. Subject terms are standardized vocabulary: to assign them, perform a subject analysis of the paper and convert the results into the standard terms of a thesaurus according to the indexing and combination rules.

5. Body of the paper:
(1) Introduction: also called the preface or foreword, it opens the paper. It should state the author's intent in general terms, explain the purpose and significance of the topic, and indicate the scope of the paper. It should be short, pithy, and on topic.
(2) Main text: the main text is the core of the paper and should contain the thesis, the evidence, the argumentation, and the conclusion. It covers: a. posing the problem (the thesis); b. analyzing the problem (evidence and argumentation); c. solving the problem (argument and steps); d. the conclusion.

6. References: the references list the main sources consulted or cited during the research and writing of the paper, placed at its end. The list should start on a new page, with entries formatted according to GB7714-87, "Rules for bibliographic references following the text". For Chinese sources: title -- author -- publication information (place, publisher, and date of publication); for foreign-language sources: author -- title -- publication information. Requirements for the listed references: (1) they should be formally published works, so that readers can verify them; (2) each entry should give its serial number, the title of the book or article, the author, and the publication information.
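For example, two illustrative entries following the GB7714-87 patterns described above (the authors, titles, and publishers are invented placeholders):

[1] 张三. 设计学导论[M]. 北京: 某出版社, 2000: 12-15.
[2] 李四. 论视觉传达[J]. 某学报, 2001, 5(2): 30-34.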