Wireless technology was little more than a distant idea for the majority of ordinary consumers ten years ago. However, it has exploded over recent years with the use of 3G phones and wireless home computing increasing, and it would be foolish to suggest that wireless communication has reached its peak. Whilst mobile phones and home computing will continue to be the major focus in the quest for ever-increasing sophistication within the technology, new applications are emerging. One company, Securecom Technologies, based in Ireland, has been at the forefront of harnessing wireless technologies in the area of personal safety. They already have a number of products in the marketplace designed to enable users to wirelessly activate an alarm signal to a remote emergency centre. Their Benefon range of applications is used by vulnerable elderly people, lone workers and VIPs to increase their sense of security and their ability to get in touch with help effortlessly at the touch of a button. They are now in the process of developing PERUSE, which stands for 'Personal Safety System Utilising Satellite combined with Emerging Technologies'. The Peruse project will develop a Wireless Personal Alarm (WPA) solution which will be carried by or worn on a person and will allow the user to summon help at the touch of a button. When the alarm has been activated, the WPA will transmit a low-power signal to a satellite communications handset, which will forward a message to an authorised number. This will include the identity of the person in distress, as well as their current location. However, the ingenuity of the technology goes further, as it will also have the potential to transmit the user's current state of health and local environmental conditions. It is envisaged that the recipient of the user's SOS signal will be a fully equipped Emergency Monitoring Centre, to whom the user will have previously given full instructions as to the steps they would wish the Centre to take on their behalf in the event of an emergency. There are two core components in the development phase: the Wireless Personal Alarm (WPA) and a 'dongle' which provides the handset for satellite communication and which will have a low-power wireless link to the WPA. The important issues here are that the two components will need to take into account size, cost, accuracy of location and battery autonomy. The main benefit will be that the device can be worn or carried discreetly. This makes it ideal for professions such as personal security, where the ability to communicate a message quickly and without fuss can often be of paramount importance. It will herald a new era in satellite communication: no longer will the user have to tap a keypad to enter a number, nor move the handset for optimal signal strength, prior to sending an emergency message. This technology will be invaluable to professions such as mountain rescue and will also be a tremendous benefit to those who enjoy hiking and climbing as leisure pursuits, where conventional mobile phone technology can often be rendered useless. There are currently no known competitors for this potentially life-saving technology, for which Securecom has filed both Irish and European patent applications. Prototypes have already been manufactured, and pilot programmes and laboratory tests are well under way. UWB (Ultra Wide Band) is another example of emerging wireless technology.
Alongside traditional wireless uses, UWB can also detect images through solid objects, such as people on the opposite side of a wall. This has led to an equal number of supporters and opponents. Whilst UWB can be used for consumer applications in a similar fashion to Bluetooth technology, such as cable elimination between a PC and its peripheral equipment, the more interesting applications focus on its radar-like imagery. These applications could be used to find people trapped in a burning building, to locate hostages and captors behind a thick wall, and to find objects such as those buried in the ground. Security screening at airports and other public buildings could use UWB technology to detect weapons on people and bombs in luggage and packages. In this age of heightened security, post 9/11, the benefits of this emerging technology should not be underestimated. A few companies have started to develop UWB products, including XtremeSpectrum, Time Domain and Aether Wire. XtremeSpectrum is developing products to enable the sending and receiving of multiple streams of digital audio and video for both battery-powered and other consumer devices such as digital cameras, DVDs, DVRs, camcorders, MP3 players and set-top boxes. Time Domain has developed a UWB chip set targeting three core technologies: wireless communication, precision location and tracking, and high-definition portable radar, whilst Aether Wire is working on miniature, distributed position-location and low data-rate communication devices. One of its goals is to develop coin-sized devices capable of localisation to centimetre accuracy over kilometre distances. However, privacy violation is one of the major concerns of the technology's opponents. Any technology that can 'see' through solid objects can be used for illegal purposes as well as legitimate ones. In theory, a UWB-enabled system could 'look through' the walls of a house to locate valuable objects and could detect when the occupants are not at home. Supporters, however, could rightly point out that this is a dilemma shared by many technologies that are used to enhance public safety - the juggling act between increased security and decreased personal freedom. It could be argued that baggage searches at airports via x-ray and metal detection are common examples of us giving up privacy for better security, a price most people are willing to pay. No other area is more at the forefront of innovation in wireless technology than space exploration. Future missions to nearby planets like Mars will require space communication technologies that can provide an interplanetary satellite and navigation infrastructure via space systems that are far more compact and efficient than anything seen before. A longer-term commitment will be necessary to resolve the challenges of efficient planetary communication, as the distances involved increase when space exploration ventures further out into the solar system. To support planetary exploration, techniques developed for Earth-bound usage will be transferred to other planets as well. Exploration of Mars, for example, will require a high-accuracy positioning capability, such as a 'Martian GPS', as an aid to exploratory roving vehicles. Every day, the 'Mars Spirit' space rover continues to send data back to Earth, almost 18 months after it touched down on the red planet, surviving more than four times its expected mission length. One day we may well see astronauts walking on Mars carrying wi-fi enabled PCs.
In a remote Arizona meteor crater, NASA has already begun testing a mobile wi-fi system that could enable those on a Mars mission to easily deploy wireless data connectivity at a transmission rate of just over a megabit per second across a 2-square-mile area, and then change that coverage area at will through the use of mobile access points, making it entirely feasible to explore different terrain on any given day. The technology NASA has adopted, developed by a mesh-networking company, would give the astronauts inter-connectivity via a three-node mesh network. They would first establish a base communications station near their spacecraft and then set up an Ethernet connection between that base and a main access point. Each node in the network would then pick up its wireless connectivity from the access point. The technology is still in its infancy, and there is some way to go before astronauts will be strutting their stuff on Mars and communicating wirelessly with one another and with mission control in this way. Meanwhile, the Mars Spirit space rover is still sending back images and data from the red planet today, relying heavily on wireless technology to do so. It may appear that these vehicles have been designed solely for the purpose of space exploration, but closer scrutiny reveals applications that could also be modified and used on Earth. Unlike, say, a car-manufacturing robot which knows where and when the engine or body appears on the assembly line, the Mars rovers are working in an unstructured and unknown environment. As a result, the rovers have had to learn about their new home through their own sensors, including a set of nine cameras on each rover. The rovers have two navigation cameras for a 3D view of their surroundings, two hazard-avoidance cameras for a 3D view of nearby terrain, and panoramic cameras to capture images of the planet's surface. However, the rovers cannot just look around them, process the images and know where to go. Neither can the mission controllers on Earth grab a joystick and start steering the rovers whilst watching images being beamed back from millions of miles away. A key reason is processing power: the central processor in each rover has a top speed of 20 MHz. Instead, during the Martian night, while a rover is 'asleep', a team on Earth with much more powerful computers programs its activities for the day ahead, and then sends basic instructions on where to go and how to get there. Along with taking pictures, each rover is examining the planet with several instruments on a robotic arm. The arms have 'shoulder', 'elbow' and 'wrist' joints for manoeuvrability and are equipped with four sensors: a microscopic camera for close-up pictures of rocks, an alpha-particle x-ray spectrometer for determining the mineral content of rocks, another spectrometer for detecting iron, and a rock abrasion tool for cutting through the layer of oxidation that forms on the surfaces of Martian rocks. As with the movement of the rovers, the arms are controlled mostly via prepared commands from mission control. Some observers have noted that these applications may prove useful here on Earth. For example, a robotic arm that doesn't require real-time human control might be good for disabled people who use wheelchairs and can't control a joystick with their hands. Using its own sensors, it could reach out and get things for the person in the wheelchair, for example. In addition, a robot that can deal with new and unknown environments might save manufacturers money.
In current factories with 'robotic' workers, when the company shifts to making a new product, the whole factory floor has to be reconfigured and the robots reprogrammed to deal with the new arrangement. A robot that could use feedback from sensors to figure out where things are could adapt to changes by itself, saving the company the time and effort of building a new structured environment and reprogramming the robots. With all the emerging technologies around, and inevitably more to come, the big hurdle will be one of convergence and integration as the IT industry seeks to develop the tools that will be most sought after. Inevitably, there will be winners and losers. However, there is no doubt that the wireless phenomenon is reshaping enterprise connectivity worldwide and is definitely here to stay. Business needs information mobility for better customer interaction. Employees will be even better equipped to perform their job functions from their workplace of choice and, though this sounds like utopia, a societal change from office-based working to 'wherever they feel like being' might conjure up a horrific vision of the future for company leaders who have enjoyed the tradition of having all their employees working under the same roof. A major issue has to be security. There are many issues when it comes to security over wireless networks. Wireless networks do not follow the rules of traditional wired networks: signals are often carried far beyond the physical boundaries they are meant to be contained within, making it easier to intercept them and capture data. There will also be the question of what happens to the 'have nots' - those people, and developing countries in particular, that don't have the resources to interact wirelessly with others. The same thing could be said about the Internet itself, but satellites could alleviate that problem far more quickly than the ability to put broadband connections in every office and home throughout the world. Another major hurdle has to be that business and society can only adapt at a certain pace. Technology evolves far more quickly, and many a product may be developed for which the demand is not yet there. But the mobile phone and PC market, driven by what the consumer wants, will determine what the future of wireless looks like. There is no question that wireless communication is here to stay and will grow even further. Examples of the new wireless technologies abound. Consumers are setting up wireless local area networks (WLANs) in their homes. These allow multiple computers to hook up to one fast internet connection, or laptop users to connect from the comfort of their sofa or back garden patio. Away from home, 'hotspots' that permit wireless connection to the internet are popping up everywhere: in book stores, coffee shops, airports and even pubs. Within the next year, airlines are expected to announce the availability of wi-fi during flights. However, until there is increased competition in the marketplace, this new epoch will be there for the privileged few as opposed to the mass market, who will still be relying solely on their mobile phones for wireless connectivity on the move.
It remains to be seen whether the new generation of 3G phones has arrived too late to push aside wi-fi, and it is even conceivable that mobile phone companies could one day find themselves obsolete unless they look for new ways to attract and retain customers. However, issues like security, along with the problems of cost, intrusion on privacy and identifying such things as hotspot locations, are not going to hold wireless communication and technology back. In the end, there will always be solutions to problems, and wi-fi is no different in this respect. Reed, an adjunct professor at MIT's Media Lab in Cambridge, Massachusetts, has been studying the future of wireless communications. He draws a comparison between the new wi-fi revolution and the 'paperless society' which was often mooted in offices and homes all over the world with the advent of the PC. He said, 'The market will push us towards a wireless future. People love paper, but I can't find a single person who can say that about wires.' As more wi-fi systems are developed, which will in turn drive the cost down, it will become an increasingly less disruptive way to communicate in the future, and it will become very difficult for anything else out there to compete with it.

The Internet is used by millions of people every minute. For many people the Internet is a 'room' situated somewhere behind their computer screens in cyberspace. Though the Internet has existed for only about a decade, it has become the medium of the new network society. The popular and commercial spread of the Internet has been exceedingly significant, promoting changes in almost every sphere of human activity and society. From the very beginning of the Internet in 1991, it has completely changed the way firms do business, as well as the way customers buy and use products and services. The Internet gives extra opportunities for marketing. The spread of the Internet has been so rapid that it has become the subject of serious analysis. Can the Internet, as a virtual reality, have negative effects on our culture and society? This paper concentrates on the Internet phenomenon, the spread of Internet culture and its effects on people. The first ideas appeared in the 1950s. In the 1980s, technologies that became the basis of the modern Internet began to spread worldwide. In the 1990s the World Wide Web came into use all over the world. The infrastructure of the Internet spread around the globe and the modern worldwide network of computers appeared. It spread first among the Western countries, then into the developing countries, creating both worldwide access to communications and data and a digital divide in access to this new infrastructure. By one count, the Internet had 30 million users on 10 million computers linked to over 240,000 networks in about 100 countries. More recent figures from International Data Corp estimate that 40 million people in the USA were home web users in 1999, about 15% of the population. Le Monde reported in 1998 that 100 million people worldwide used the Internet. Jupiter Communications estimated that 4 to 5 million US customers would shop regularly on the Internet by 2000, which represents around 3% of Internet users. The Internet is a very attractive marketing tool, with the possibility of customising pages as well as new promotional systems, giving firms the ability to communicate and promote effectively by adapting to consumers' preferences.
The interactive nature of the Internet permits firms to ask customers about their preferences and then adapt product offers and promotions accordingly. It also enables the effective recruitment of new customers. For instance, some car manufacturers ask Internet users for specific information and in return give potential customers a $1,000 discount coupon or a free CD player coupon.

There is a lot of material here, and I am not sure which specific aspect of communications you need, so have a look through and find what you want.
Social network analysis theory: the concept of the social network [63] was first proposed by the anthropologist Barnes, who used social network analysis to systematically study the relationships cutting across kinship and class in a small Norwegian fishing village. Social network analysis contains several classic theories, chiefly the six degrees of separation theory, weak tie theory, the rule of 150, small-world network theory and the Matthew effect. Research built on social networks plays a role in many different areas, for example social influence analysis, community detection, information diffusion models, link prediction and social-network-based recommendation.

The rule of 150 says that the upper limit on the number of stable social relationships one person can maintain is typically 150. This classic 'rule of 150' was proposed by the British professor Robin Dunbar and is also known as 'Dunbar's number' [64]. The rule shows up widely in everyday life: a SIM card can store only 150 contacts, and Microsoft's MSN allowed at most 150 contacts to be added to a user's list [64], and so on.

A small-world network is a complex network with a special structure: most nodes are not adjacent to one another, yet almost any two nodes are connected by a short path. The six degrees of separation theory is one manifestation of small-world network theory. In most real-world social networks, although the number of nodes is enormous and each node has relatively few neighbours, any two nodes can usually be linked through a very short chain. Six degrees of separation means that connecting any one person to any other requires at most six intermediaries. So even though Dunbar's number tells us we cannot maintain an especially large social circle, the six degrees theory tells us that through our existing contacts and the network we can extend our reach almost without limit and, when needed, get in touch with anyone on Earth we wish to contact.

Weak tie theory: a weak tie is a social connection between people that requires little or no emotional involvement and costs almost no personal time or energy to maintain, yet is very useful. The American sociologist Mark Granovetter, studying how people obtain job information when looking for work, found [65] that strong ties made up of family and close friends play only a limited role, whereas more distant contacts such as former classmates and ex-colleagues can actually provide more useful job-hunting information.

The Matthew effect can be understood as the survival-of-the-fittest idea from Darwinian evolution. The development of a social network resembles biological evolution: the strong get stronger and the weak get weaker. In other words, nodes near the core of a social network are very likely to become ever more central, while nodes at the periphery may become less and less important and may eventually disappear. Nodes with greater influence than their peers also exert a stronger effect on the network than weakly influential nodes do.

Node influence mining algorithms, viewed from different angles:
1. Methods based on neighbour centrality. These are the simplest and most intuitive; they assess a node's influence from its position in the network. Degree centrality [13] counts a node's direct neighbours, semi-local centrality [14] uses information about neighbours up to four hops away, and ClusterRank [15] considers both a node's degree and its clustering coefficient.
2. Methods based on path centrality. These measure a node's ability to control the flow of information and characterise its importance accordingly. They include subgraph centrality [16], betweenness centrality [17] (with evolved variants such as routing betweenness centrality [18], flow betweenness centrality [19], communicability betweenness centrality [20] and random-walk betweenness centrality [21]) and other path-based mining methods.
3. Iterative ranking methods. These consider not only how many neighbours a node has but also how the quality of those neighbours affects its importance; they include eigenvector centrality [13], cumulative nomination [22], and the PageRank algorithm [23] and its variants [24-32].
4. Ranking algorithms based on node position. Their distinguishing feature is that they do not define an explicit importance score; instead, a node's importance is determined by locating its position in the network. Nodes near the core of the network are relatively important, while nodes at the edge are less so. Position-based ranking, and recommendation algorithms for different application scenarios, are important research topics [34-37].

Node influence evaluation methods: approaches to evaluating node influence in social networks fall into three main classes - methods based on static statistics, methods based on link-analysis algorithms, and methods based on probabilistic models. Building on static statistics, researchers combine information specific to each social network, borrow ideas from link analysis and build probabilistic models so as to evaluate node influence more effectively [66].

1) Methods based on static statistics. These reflect a node's influence simply and directly through static attributes, but given the complexity of social network data and the differences between platforms they cannot measure influence effectively across networks. Degree centrality, for example, assumes that a node's importance is determined by the number of its connections: the more neighbours, the greater the influence. In a directed network, degree splits into in-degree and out-degree according to edge direction; in a weighted network, a node's degree can be read as its strength, i.e. the sum of its edge weights. Degree centrality captures direct influence and is simple, intuitive, cheap to compute and reasonably sound. However, its effectiveness is not guaranteed across different platforms and network structures; relationships between users form partly by chance, and tie strength varies between users. Degree centrality uses only a node's most local information: it describes influence directly but ignores the positions of surrounding nodes and higher-order neighbours. For these reasons, researchers combine static statistics with network-specific information, link-analysis methods and probabilistic models to evaluate node influence more effectively [66-67].

2) Methods based on link-analysis algorithms. Link analysis is used mainly on the World Wide Web to assess the popularity of pages. Hyperlinks join web pages into a network that itself shows small-world characteristics, and the follow/follower relations on a microblogging platform closely resemble a page's in-links and out-links, so link-analysis ideas have also been applied to evaluating node influence in microblog social networks. The classic algorithms are PageRank [68] and HITS (Hyperlink-Induced Topic Search) [69]. PageRank is the core algorithm Google uses to rank sites in its search results. Its central idea is to obtain a rough estimate of a site's importance from the number and quality of links to its pages: a node's score depends on how many nodes point to it and on those nodes' own scores, so the more high-quality nodes point to a node, the higher its score. HITS was proposed by Jon Kleinberg in 1997. The HITS model distinguishes two kinds of nodes: authority nodes, which carry high authority in the network, and hub nodes, which have many outgoing links. High-authority nodes are found by computing an authority value and a hub value for every node and iterating until the values converge: a node's authority value is the sum of the hub values of the nodes that point to it, and its hub value is the sum of the authority values of the nodes it points to, with normalisation after each round. Most researchers have found that combining link analysis with the characteristics of the social network gives a better evaluation of user influence; given rapid technological change and the variability of social networks, how to combine the complex data and user behaviour in social networks with these algorithms remains a direction for further research.

3) Methods based on probabilistic models. These build a probabilistic model to predict node influence. Many researchers treat user influence as a parameter in a probabilistic model of user behaviour in the social network, fit the model to existing user data and thereby estimate user influence. Reference [70] argues that the greater the influence between two users and the higher the influenced user's activity and willingness to retweet, the more likely that user is to retweet the other's information; it therefore builds a retweet-probability model from user influence, retweeting willingness and activity, computing these from the number of tweets a user posts, the number retweeted and the user's historical retweet behaviour, and from them the user's influence in the social network. Reference [71] folds the topic-generation process of users' posts into the influence measure, arguing that influence is stronger between users with similar interests or frequent contact and that a user's behaviour is shaped both by friends and by personal interests; on these assumptions it extends the LDA model with text information and network structure, builds the model on users' posts and solves it to obtain topic-based influence between users. Reference [72] holds that retweet probability likewise reflects influence between users and uses a Bayesian model to predict it from follow relations and historical retweet records. Reference [73] considers why follow relations are formed - a user may be followed because their interests match the follower's, or because of the user's influence - and combines user-level topic modelling with topic-based influence evaluation in a single generative model, proposing FLDA (Followship-LDA), an extension of the LDA model.
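As a small illustration of the ranking methods described above (degree centrality, PageRank and the HITS hub/authority iteration), here is a short Python sketch. The toy follower graph, function names and iteration counts are my own assumptions for illustration and are not taken from the cited papers.

```python
# Minimal sketch of three node-influence scores discussed above:
# in-degree centrality, PageRank, and HITS hub/authority values.

def pagerank(edges, damping=0.85, iters=50):
    """edges: list of (source, target) pairs; returns {node: score}."""
    nodes = {n for e in edges for n in e}
    out_links = {n: [] for n in nodes}
    for src, dst in edges:
        out_links[src].append(dst)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for src in nodes:
            targets = out_links[src]
            if targets:                      # spread this node's rank over its out-links
                share = damping * rank[src] / len(targets)
                for dst in targets:
                    new_rank[dst] += share
            else:                            # dangling node: spread its rank evenly
                for dst in nodes:
                    new_rank[dst] += damping * rank[src] / len(nodes)
        rank = new_rank
    return rank

def hits(edges, iters=50):
    """Iteratively compute hub and authority values until (approximate) convergence."""
    nodes = {n for e in edges for n in e}
    hub = {n: 1.0 for n in nodes}
    auth = {n: 1.0 for n in nodes}
    for _ in range(iters):
        # authority value: sum of hub values of nodes that point to me
        auth = {n: sum(hub[s] for s, d in edges if d == n) for n in nodes}
        norm = sum(v * v for v in auth.values()) ** 0.5 or 1.0
        auth = {n: v / norm for n, v in auth.items()}
        # hub value: sum of authority values of nodes I point to
        hub = {n: sum(auth[d] for s, d in edges if s == n) for n in nodes}
        norm = sum(v * v for v in hub.values()) ** 0.5 or 1.0
        hub = {n: v / norm for n, v in hub.items()}
    return hub, auth

if __name__ == "__main__":
    # hypothetical follow relations: an edge (a, b) means "a follows/links to b"
    toy_graph = [("A", "B"), ("A", "C"), ("B", "C"), ("D", "C"), ("C", "A")]
    in_degree = {n: sum(1 for _, d in toy_graph if d == n)
                 for n in {x for e in toy_graph for x in e}}
    print("in-degree :", in_degree)
    print("pagerank  :", pagerank(toy_graph))
    hub, auth = hits(toy_graph)
    print("hub       :", hub)
    print("authority :", auth)
```

Running the sketch prints the three rankings side by side; on this toy graph, node C, which every other node points to, comes out highest on in-degree, PageRank and authority, which matches the intuition behind all three measures.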
References:
[13] P. Bonacich. Factoring and weighting approaches to status scores and clique identification[J]. Journal of Mathematical Sociology, 1972, 2(1): 113-120
[14] Physica A, 2012, 391(4): 1777-1787
[15] D. B. Chen, H. Gao, L. Lü, et al. Identifying influential nodes in large-scale directed networks: the role of clustering[J]. PLoS One, 2013, 8(10): e77455
[16] Physical Review E, 2005, 71(5): 122-133
[17] Sociometry, 1977, 40(1): 35-41
[18] S. Dolev, Y. Elovici, R. Puzis. Routing betweenness centrality[J]. Journal of the ACM, 2010, 57(4): 710-710
[19] Y. Gang, H. Bo, et al. Efficient routing on complex networks[J]. Physical Review E, 2005, 73(4): 46108
[20] E. Estrada, D. J. Higham, N. Hatano. Communicability betweenness in complex networks[J]. Physica A, 2009, 388(5): 764-774
[21] Social Networks, 2005, 27(1): 39-54
[22] Social Networks, 2000, 22(3): 187-200
[23] S. Brin, L. Page. The anatomy of a large-scale hypertextual Web search engine[J]. Computer Networks & ISDN Systems, 1998, 30: 107-117
[24] P. Jomsri, S. Sanguansintukul, W. Choochaiwattana. CiteRank: combination similarity and static ranking with research paper searching[J]. International Journal of Internet Technology & Secured Transactions, 2011, 3(2): 161-177
[25] [D]. California: University of California, 2012
[26] J. Weng, E. P. Lim, J. Jiang, et al. TwitterRank: finding topic-sensitive influential twitterers[C]. Third International Conference on Web Search & Web Data Mining, ACM, 2010: 261-270
[27] Distinguishing between prestige and popularity[J]. New Journal of Physics, 2012, 14: 33033-33049
[28] J. Xuan, H. Jiang, et al. Developer prioritization in bug repositories[C]. International Conference on Software Engineering, 2012: 25-35
[29] Physica A, 2013, 404: 47-55
[30] L. Lü, Y. C. Zhang, C. H. Yeung, et al. Leaders in social networks, the Delicious case[J]. PLoS One, 2011, 6(6): e21202
[31] J. M. Kleinberg. Authoritative sources in a hyperlinked environment[J]. Journal of the ACM, 1999, 46(5): 604-632
[32] The stochastic approach for link-structure analysis (SALSA) and the TKC effect[J]. Computer Networks, 2000, 33(2): 387-401
[33] Physical Review E, 2014, 90(5): 052808
[34] A. Banerjee, A. G. Chandrasekhar, E. Duflo, et al. Gossip: identifying central individuals in a social network[R]. National Bureau of Economic Research, 2014
[35] Percolation in social networks[J]. arXiv preprint, 2015
[36] S. Y. Tan, J. Wu, L. Lü, et al. Efficient network disintegration under incomplete information: the "comic" effect of link prediction[J]. Scientific Reports, 2016, 6
[37] Ren Xiaolong, Lü Linyuan. A survey of methods for ranking important nodes in networks[J]. Chinese Science Bulletin (科学通报), 2014, 59(13): 1175-1197
[63] Baker. Winning with Social Capital: How to Tap the Hidden Resources in Personal and Enterprise Networks[M]. Shanghai Jiao Tong University Press, 2002
[64] Tianya. The six degrees of separation theory and the rule of 150[EB/OL]. [2010-07-14]
[65] M. Granovetter. The Strength of Weak Ties[J]. American Journal of Sociology, 1973: 1360-1380
[66] Wang Zi. Research on node influence evaluation algorithms in social networks[D]. Beijing University of Posts and Telecommunications, 2014
[67] Meeyoung Cha, Hamed Haddadi, Fabricio Benevenuto, et al. Measuring user influence in Twitter: the million follower fallacy[C]. Proceedings of the 4th International AAAI Conference on Weblogs and Social Media (ICWSM), 2010: 10-17
[68] L. Page, S. Brin, et al. The PageRank citation ranking: bringing order to the Web[C]. Stanford InfoLab, 1998: 1-14
[69] J. M. Kleinberg. Authoritative sources in a hyperlinked environment[J]. Journal of the ACM, 1999, 46(5): 604-632
[70] Zibin Yin, Ya Zhang. Measuring pair-wise social influence in microblog[C]. 2012 ASE/IEEE International Conference on Social Computing and 2012 ASE/IEEE International Conference on Privacy, Security, Risk and Trust, 2012: 502-507
[71] Lu Liu, Jie Tang, Jiawei Han, Meng Jiang, Shiqiang Yang. Mining topic-level influence in heterogeneous networks[C]. Proceedings of the 19th ACM International Conference on Information and Knowledge Management, 2010: 199-208
[72] Qianni Deng, Yunjing Dai. How your friends influence you: quantifying pairwise influences on Twitter[C]. International Conference on Cloud and Service Computing, 2012: 185-192
[73] Bin Bi, et al. Scalable topic-specific influence analysis on microblogs[C]. Proceedings of the 7th ACM International Conference on Web Search and Data Mining, 2014: 513-522
(Article 1) This gives a brief introduction to the TCP/IP protocol and may be useful for reference.

What is TCP/IP? TCP/IP (Transmission Control Protocol/Internet Protocol) is the basic communication language or protocol of the Internet. It can also be used as a communications protocol in a private network (either an intranet or an extranet). When you are set up with direct access to the Internet, your computer is provided with a copy of the TCP/IP program, just as every other computer that you may send messages to or get information from also has a copy of TCP/IP.

TCP/IP is a two-layer program. The higher layer, Transmission Control Protocol, manages the assembling of a message or file into smaller packets that are transmitted over the Internet and received by a TCP layer that reassembles the packets into the original message. The lower layer, Internet Protocol, handles the address part of each packet so that it gets to the right destination. Each gateway computer on the network checks this address to see where to forward the message. Even though some packets from the same message are routed differently than others, they will be reassembled at the destination.

TCP/IP uses the client/server model of communication, in which a computer user (a client) requests and is provided a service (such as sending a Web page) by another computer (a server) in the network. TCP/IP communication is primarily point-to-point, meaning each communication is from one point (or host computer) in the network to another point or host computer. TCP/IP and the higher-level applications that use it are collectively said to be "stateless" because each client request is considered a new request unrelated to any previous one (unlike ordinary phone conversations that require a dedicated connection for the call duration). Being stateless frees network paths so that everyone can use them continuously. (Note that the TCP layer itself is not stateless as far as any one message is concerned; its connection remains in place until all packets in a message have been received.)

Many Internet users are familiar with the even higher-layer application protocols that use TCP/IP to get to the Internet. These include the World Wide Web's Hypertext Transfer Protocol (HTTP), the File Transfer Protocol (FTP), Telnet, which lets you log on to remote computers, and the Simple Mail Transfer Protocol (SMTP). These and other protocols are often packaged together with TCP/IP as a "suite". Personal computer users with an analog phone modem connection to the Internet usually get to the Internet through the Serial Line Internet Protocol (SLIP) or the Point-to-Point Protocol (PPP). These protocols encapsulate the IP packets so that they can be sent over the dial-up phone connection to an access provider's modem.

Protocols related to TCP/IP include the User Datagram Protocol (UDP), which is used instead of TCP for special purposes. Other protocols are used by network host computers for exchanging router information. These include the Internet Control Message Protocol (ICMP), the Interior Gateway Protocol (IGP), the Exterior Gateway Protocol (EGP), and the Border Gateway Protocol (BGP).

(Article 2) This covers the development of TCP/IP.

Development of TCP/IP: the original research was performed in the late 1960s and early 1970s by the Advanced Research Projects Agency (ARPA), which is the research arm of the US Department of Defense (DOD). The DOD wanted to build a network to connect a number of military sites. The key requirements for the network were as follows:
* It must continue to function during nuclear war (development took place during the 'cold war'). The 7/8th rule required that the network should continue to function even when 7/8th of the network was not operational.
* It must be completely decentralized, with no key central installation that could be destroyed and bring down the whole network.
* It must be fully redundant and able to continue communication between A and B even though intermediate sites and links might stop functioning during the conversation.
* The architecture must be flexible, as the envisaged range of applications for the network was wide (anything from file transfer to time-sensitive data such as voice).

ARPA hired a firm called BBN to design the network. The prototype was a research network called ARPANET (first operational in 1972). This connected four university sites using a system described as packet switching.

Prior to this development, any two computers wanting to communicate had to open a direct channel (known as a circuit), and information was then sent. If this circuit were broken, the computers would stop communicating immediately, which the DOD specifically wanted to avoid. With packet switching, any computer could forward information to another, so it superseded circuit-switched networks. To ensure information reached the correct destination, each packet was addressed with a source and destination, and the packet was then transferred using any available pathway to the destination.

Information was divided into small chunks or packets (originally 1008 bits). Sending large chunks of information has always presented problems, often because the full message fails to reach its destination at the first attempt, and the whole message then has to be resent. The facilities within the new protocol to divide large messages into numerous small packets meant that a single packet could be resent if it was lost or damaged during transmission, rather than the whole message.

The new network was decentralized, with no one computer controlling its operation; the packet-switching protocol controlled most of the network's behaviour. TCP/IP is a very robust protocol and can automatically recover from any communication link failures. It re-routes data packets if transmission lines are damaged or if a computer fails to respond, utilizing any available network path. As an example, a packet being sent from Network A to Network F may be sent via Network D (the quickest route); if this route becomes unavailable, the packet is routed using an alternate route (for example, A B C E F).

Once ARPANET was proven, the DOD built MILNET (military installations in the US) and MINET (military installations in Europe). To encourage the wide adoption of TCP/IP, BBN and the University of California at Berkeley were funded by the US Government to implement the protocol in the Berkeley version of Unix. Unix was given freely to US universities and colleges, allowing them to network their computers. Researchers at Berkeley developed a program interface to the network protocol called sockets and wrote many applications using this interface.

In the early 1980s, the National Science Foundation (NSF) used Berkeley TCP/IP to create the Computer Science Network (CSNET) to link US universities. They saw the benefit of sharing information between universities, and ARPANET provided the infrastructure.
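The packet mechanism described above - a message chopped into small, individually addressed and numbered packets that may arrive out of order and can be resent one at a time - can be sketched in a few lines of Python. The packet size, field names and example network names below are invented for illustration; this is a toy model, not the actual ARPANET or TCP packet format.

```python
# Toy model of splitting a message into addressed, sequence-numbered packets
# and reassembling it at the receiver, whatever order the packets arrive in.
import random

PACKET_BYTES = 126  # stand-in for the original 1008-bit packets

def packetise(message: bytes, src: str, dst: str):
    """Split a message into addressed, sequence-numbered packets."""
    return [
        {"src": src, "dst": dst, "seq": i, "data": message[i:i + PACKET_BYTES]}
        for i in range(0, len(message), PACKET_BYTES)
    ]

def reassemble(packets):
    """Put packets back in order and rebuild the original message."""
    return b"".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

if __name__ == "__main__":
    msg = b"Any reasonably long message gets chopped into small packets. " * 20
    packets = packetise(msg, src="NetworkA", dst="NetworkF")
    random.shuffle(packets)      # packets may take different routes and arrive out of order
    late = packets.pop()         # one packet is lost or damaged in transit...
    packets.append(late)         # ...and only that packet is re-sent, not the whole message
    assert reassemble(packets) == msg
    print(f"{len(packets)} packets delivered and reassembled correctly")
```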
Meanwhile, a successor to ARPANET, named NSFNET, was later developed. This was based on a backbone of six supercomputers into which many regional networks were allowed to connect. The first stage in the commercial development of the Internet occurred in 1990, when a group of telecommunications and computer companies formed a non-profit-making organization called Advanced Networks and Services (ANS). This organization took over NSFNET and allowed commercial organizations to connect to the system. The commercial Internet grew from these networks.

Both of the articles above may be used for reference.

1. A brief introduction to the TCP/IP protocol suite. TCP/IP (Transmission Control Protocol/Internet Protocol) is a network communication protocol that governs all communicating devices on a network, in particular the format and transmission method of data passing between one host and another. TCP/IP is the foundation protocol of the Internet and a standard method of packaging and addressing computer data. Data transfer can be pictured as using two envelopes: TCP and IP act like envelopes. The information to be sent is divided into segments; each segment is put into a TCP envelope, on which the segment number is recorded, and the TCP envelope is then placed inside a larger IP envelope and sent onto the network. At the receiving end, the TCP software collects the envelopes, extracts the data, restores it to the order it was in before sending and checks it; if an error is found, TCP asks for a retransmission. In this way TCP/IP can transfer data across the Internet almost error-free.

On any physical network, every station has a machine-recognisable address called the physical address. Physical addresses have two characteristics: (1) their length, format and so on are part of the physical network technology, so different physical networks have different physical addresses; (2) stations on different networks of the same type may have the same physical address. These two points mean that physical addresses cannot be used for inter-network communication.

In networking terminology, a protocol is a standard agreed in advance for exchanging data between two computers. TCP/IP is not one protocol but many, which is why you often hear it referred to as a protocol suite, with TCP and IP just two of the basic protocols in it. The TCP/IP software installed on your computer provides a platform that includes TCP, IP and the other protocols of the suite. In particular, it includes some higher-level applications such as FTP (the File Transfer Protocol), which lets users transfer files over the network from the command line.

TCP/IP was a research outcome of the US-government-funded Advanced Research Projects Agency (ARPA) in the 1970s, intended to tie together research networks around the world into a single virtual network - the Internet. The original Internet was formed by converting existing networks, such as ARPAnet, to TCP/IP, and that Internet eventually became the backbone of today's global Internet. The reason TCP/IP is so important today is that it allows independent networks to join the Internet or be organised together into private intranets. The networks that make up an intranet are physically connected by devices called routers or IP routers: a router is a computer used to move data packets from one network to another. In an intranet that uses TCP/IP, information is transmitted in independent data units called IP packets or IP datagrams. TCP/IP software makes every computer attached to the network 'look' the same to every other computer; in effect it hides the routers and the underlying network architecture so that everything appears to be one large network. Just as connecting to an Ethernet requires a 48-bit Ethernet address, connecting to an intranet requires a 32-bit IP address, written in dotted decimal notation. Given the IP address of a remote computer, a local computer on an intranet or on the Internet can send data to it as if the two machines were on the same physical network.

TCP/IP provides a scheme for solving the problem of how two computers that belong to the same intranet, but to different physical networks, exchange data. The scheme has many parts, and each member of the TCP/IP suite solves part of the problem. For example, IP, the most basic protocol in the suite, is used to exchange data within the intranet and performs an important function: routing - choosing the path a datagram will take from host A to host B and using the appropriate routers to hop between networks. TCP is a higher-level protocol that allows applications running on different hosts to exchange data streams. TCP divides a data stream into small pieces called TCP segments and transmits them using IP. In most cases each TCP segment is sent inside a single IP datagram, but if necessary TCP will split a segment across multiple datagrams, and IP datagrams remain compatible with the physical data frames that carry bit and byte streams between hosts on the same network. Because IP cannot guarantee that datagrams are received in the order they were sent, TCP reassembles the segments at the receiving end to form an unbroken data stream. FTP and Telnet are two very popular TCP/IP applications that rely on TCP.

Another important member of the TCP/IP suite is the User Datagram Protocol (UDP), which is similar to TCP but much more primitive. TCP is a reliable protocol because error checking and handshake acknowledgements guarantee that data arrives at its destination intact. UDP is an 'unreliable' protocol because it cannot guarantee that datagrams are received in the order they were sent, or even that they all arrive; applications with reliability requirements avoid it. SNMP (the Simple Network Management Protocol), supplied alongside many TCP/IP tools, is one example of an application that uses UDP. Other TCP/IP protocols work behind the scenes in a TCP/IP network but play equally important roles. For example, the Address Resolution Protocol (ARP) translates IP addresses into physical network addresses such as Ethernet addresses, while its counterpart, the Reverse Address Resolution Protocol (RARP), does the opposite, translating physical network addresses into IP addresses. The Internet Control Message Protocol (ICMP) is a supporting protocol that uses IP to carry control and error information about IP datagrams in transit; for example, if a router cannot forward an IP datagram, it uses ICMP to tell the sender that something has gone wrong.

This is not a direct translation of the original, but it is close. -0-. Oh, you wanted the English version first... awkward. Maybe post another thread and ask someone to translate it.
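To make the client/server and reliable-stream ideas in the articles above concrete, here is a minimal Python sketch of a TCP exchange on one machine: a server answers a single request from a client over a socket. The port number, message and single-request handler are illustrative assumptions, not details from either article.

```python
# Minimal client/server sketch of the TCP model described above: the server
# receives one request over a reliable, ordered byte stream and sends a reply.
import socket
import threading

HOST, PORT = "127.0.0.1", 5050   # assumed free local port

def handle_one_client(srv):
    conn, _addr = srv.accept()            # block until one client connects
    with conn:
        request = conn.recv(1024)         # TCP hands us the bytes in order, already reassembled
        conn.sendall(b"HELLO " + request) # one request, one response, then the connection ends

if __name__ == "__main__":
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)                     # server is ready before the client starts
        t = threading.Thread(target=handle_one_client, args=(srv,))
        t.start()

        # the client side: open a connection, send a request, read the reply
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
            cli.connect((HOST, PORT))
            cli.sendall(b"world")
            print("reply:", cli.recv(1024).decode())   # prints: reply: HELLO world
        t.join()
```

A UDP version of the same exchange would use SOCK_DGRAM instead of SOCK_STREAM and, as the articles note, would offer no guarantee that the datagram arrives at all or arrives in order.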