Communications satellites are used for military communications applications, such as Global Command and Control Systems. Examples of military systems that use communication satellites are the MILSTAR, the DSCS, and the FLTSATCOM of the United States, NATO satellites, United Kingdom satellites, and satellites of the former Soviet Union. Many military satellites operate in the X-band, and some also use UHF radio links, while MILSTAR also utilizes the Ka band.
Submarine communications cable
The first submarine communications cables carried telegraphy traffic. Subsequent generations of cables carried first telephony traffic, then data communications traffic. All modern cables use optical fiber technology to carry digital payloads, which are then used to carry telephone traffic as well as Internet and private data traffic. They are typically 69 millimetres (2.7 in) in diameter and weigh around 10 kilograms per meter (7 lb/ft), although thinner and lighter cables are used for deep-water sections.[1]
As of 2003, submarine cables link all the world's continents except Antarctica.
Trials
After William Cooke and Charles Wheatstone had introduced their working telegraph in 1839, the idea of a submarine line across the Atlantic Ocean began to be thought of as a possible triumph of the future. Samuel Morse proclaimed his faith in it as early as the year 1840, and in 1842 he submerged a wire, insulated with tarred hemp and India rubber,[2][3] in the water of New York harbour, and telegraphed through it. The following autumn Wheatstone performed a similar experiment in Swansea bay. A good insulator to cover the wire and prevent the electric current from leaking into the water was necessary for the success of a long submarine line. India rubber had been tried by Moritz von Jacobi, the Prussian electrical engineer, as far back as the early 1800s.
Another insulating gum which could be melted by heat and readily applied to wire made its appearance in 1842. Gutta-percha, the adhesive juice of the Palaquium gutta tree, was introduced to Europe by William Montgomerie, a Scottish surgeon in the service of the British East India Company. Twenty years earlier he had seen whips made of it in Singapore, and he believed that it would be useful in the fabrication of surgical apparatuses. Michael Faraday and Wheatstone soon discovered the merits of gutta-percha as an insulator, and in 1845 the latter suggested that it should be employed to cover the wire which was proposed to be laid from Dover to Calais. It was tried on a wire laid across the Rhine between Deutz and Cologne. In 1849 C.V. Walker, electrician to the South Eastern Railway, submerged a wire coated with it, or, as it is technically called, a gutta-percha core, along the coast off Dover.
The first commercial cables
In August 1850, John Watkins Brett's Anglo-French Telegraph Company laid the first line across the English Channel. It was simply a copper wire coated with gutta-percha, without any other protection. The experiment served to keep alive the concession, and the next year, on November 13, 1851, a protected core, or true cable, was laid from a government hulk, the Blazer, which was towed across the Channel. The next year, Great Britain and Ireland were linked together. In 1852, a cable laid by the Submarine Telegraph Company linked London to Paris for the first time. In May, 1853, England was joined to the Netherlands by a cable across the North Sea, from Orford Ness to The Hague. It was laid by the Monarch, a paddle steamer which had been fitted for the work.
Transatlantic telegraph cable
Five attempts to lay a transatlantic cable were made over a nine-year period—one in 1857, two in 1858, one in 1865, and one in 1866—before lasting connections were finally achieved by the SS Great Eastern, captained by Sir James Anderson, with the 1866 cable and the repaired 1865 cable. Additional cables were laid between Foilhommerum and Heart's Content in 1873, 1874, 1880 and 1894. By the end of the 19th century, British-, French-, German- and American-owned cables linked Europe and North America in a sophisticated web of telegraphic communications.
Cyrus West Field was the force behind the first transatlantic telegraph cable, attempted unsuccessfully in 1857 and completed on August 5, 1858. Although not considered particularly successful or long-lasting, it was the first transatlantic cable project to yield practical results. The first official telegram to pass between two continents was a letter of congratulation from Queen Victoria of the United Kingdom to the President of the United States James Buchanan on August 16. The cable was destroyed the following month when Wildman Whitehouse applied excessive voltage to it while trying to achieve faster telegraph operation. The shortness of the period of use undermined public and investor confidence in the project, and delayed efforts to restore a connection. Another attempt was undertaken in 1865 with much-improved material and, following some setbacks, a connection was completed and put into service on July 28, 1866. This time the connection was more durable, and public confidence increased when the 1865 cable was repaired and put into service shortly afterwards.
Whereas previously a message between the continents could travel only by ship, the transatlantic telegraph cable sped up communication to within minutes, allowing messages to be exchanged within the same day. In the 1870s, duplex and quadruplex transmission and receiving systems were set up that could relay multiple messages over the cable. In cross-Atlantic currency trading, the pound sterling came to be referred to as "cable", and to this day a cable in financial jargon is one million pounds.[1] The great utility of the cable built on itself, and multiple cables were established soon afterward.
Cable to India, Singapore, Far East and Australasia
Submarine cable across the Pacific
This was completed in 1902–03, linking the US mainland to Hawaii in 1902 and Guam to the Philippines in 1903.[5] Canada, Australia, New Zealand and Fiji were also linked in 1902.[6]
The North Pacific Cable system was the first regenerative (repeatered) system to completely cross the Pacific from the US mainland to Japan. The US portion of NPC was manufactured in Portland, Oregon, from 1989 to 1991 at STC Submarine Systems (later Alcatel Submarine Networks). (The plant was shut down in 2001.) The system was laid by Cable & Wireless Marine on the CS Cable Venture in 1991.
Construction
Transatlantic cables of the 19th century consisted of an outer layer of iron and later steel wire, wrapping India rubber, wrapping gutta-percha, which surrounded a multi-stranded copper wire at the core. The portions closest to each shore landing had additional protective armor wires. Gutta-percha, a natural polymer similar to rubber, had nearly ideal properties for insulating submarine cables, with the exception of a rather high dielectric constant which made cable capacitance high. Gutta-percha was not replaced as a cable insulation until polyethylene was introduced in the 1930s. In the 1920s, the American military experimented with rubber-insulated cables as an alternative to gutta-percha, since American interests controlled significant supplies of rubber but no gutta-percha manufacturers.
Bandwidth problems
Early long-distance submarine telegraph cables exhibited formidable electrical problems. Unlike modern cables, the technology of the 19th century did not allow for in-line repeater amplifiers in the cable. Large voltages were used to attempt to overcome the electrical resistance of their tremendous length but the cables' distributed capacitance and inductance combined to distort the telegraph pulses in the line, severely limiting the data rate for telegraph operation. Thus, the cables had very limited bandwidth.
As early as 1823,[citation needed] Francis Ronalds had observed that electric signals were retarded in passing through an insulated wire or core laid underground, and the same effect was noticed by Latimer Clark (1853) on cores immersed in water, and particularly on the lengthy cable between England and The Hague. Michael Faraday showed that the effect was caused by capacitance between the wire and the earth (or water) surrounding it. Faraday had noted that when a wire is charged from a battery (for example when pressing a telegraph key), the electric charge in the wire induces an opposite charge in the water as it travels along. As the two charges attract each other, the exciting charge is retarded. The core acts as a capacitor distributed along the length of the cable which, coupled with the resistance and inductance of the cable, limits the speed at which a signal travels through the conductor of the cable.
Early cable designs failed to analyze these effects correctly. Famously, E.O.W. Whitehouse had dismissed the problems and insisted that a transatlantic cable was feasible. When he subsequently became electrician of the Atlantic Telegraph Company he became involved in a public dispute with William Thomson. Whitehouse believed that, with enough voltage, any cable could be driven. Because of the excessive voltages recommended by Whitehouse, Cyrus West Field's first transatlantic cable never worked reliably, and eventually short circuited to the ocean when Whitehouse increased the voltage beyond the cable design limit.
Thomson designed a complex electric-field generator that minimized current by resonating the cable, and a sensitive light-beam mirror galvanometer for detecting the faint telegraph signals. Thomson became wealthy on the royalties of these, and several related inventions. Thomson was elevated to Lord Kelvin for his contributions in this area, chiefly an accurate mathematical model of the cable, which permitted design of the equipment for accurate telegraphy. The effects of atmospheric electricity and the geomagnetic field on submarine cables also motivated many of the early polar expeditions.
Thomson had produced a mathematical analysis of propagation of electrical signals into telegraph cables based on their capacitance and resistance, but since long submarine cables operated at slow rates, he did not include the effects of inductance. By the 1890s, Oliver Heaviside had produced the modern general form of the telegrapher's equations which included the effects of inductance and which were essential to extending the theory of transmission lines to higher frequencies required for high-speed data and voice.
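For reference, the telegrapher's equations in their modern general form can be stated as follows (a standard textbook statement rather than a quotation from this article, with R, L, G, and C the series resistance, series inductance, shunt conductance, and shunt capacitance per unit length of line):

\[
\frac{\partial V}{\partial x} = -R\,I - L\,\frac{\partial I}{\partial t},
\qquad
\frac{\partial I}{\partial x} = -G\,V - C\,\frac{\partial V}{\partial t}.
\]

Dropping L and G recovers the purely resistive-capacitive model Thomson used, a diffusion-type equation in which the attainable signalling rate falls off roughly as the square of the cable length.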
Transatlantic telephony
In 1942, Siemens Brothers of Charlton, London in conjunction with the United Kingdom National Physical Laboratory, adapted submarine communications cable technology to create the world's first submarine oil pipeline in Operation Pluto during World War II.
TAT-1 (Transatlantic No. 1) was the first transatlantic telephone cable system. Between 1955 and 1956, cable was laid between Gallanach Bay, near Oban, Scotland and Clarenville, Newfoundland and Labrador. It was inaugurated on September 25, 1956, initially carrying 36 telephone channels.
In the 1960s, transoceanic cables were coaxial cables that transmitted frequency-multiplexed voiceband signals. A high voltage direct current on the inner conductor powered the repeaters. The first-generation repeaters are among the most reliable vacuum tube amplifiers ever designed.[7] Later ones were transistorized. Many of these cables are still usable, but abandoned because their capacity is too small to be commercially viable. Some have been used as scientific instruments to measure earthquake waves and other geomagnetic events.[8]
Optical telephone cables
Modern optical fiber repeaters use a solid-state optical amplifier, usually an Erbium-doped fiber amplifier. Each repeater contains separate equipment for each fiber. These comprise signal reforming, error measurement and controls. A solid-state laser dispatches the signal into the next length of fiber. The solid-state laser excites a short length of doped fiber that itself acts as a laser amplifier. As the light passes through the fiber, it is amplified. This system also permits wavelength-division multiplexing, which dramatically increases the capacity of the fiber.
Repeaters are powered by a constant direct current passed down the conductor near the center of the cable, so all repeaters in a cable are in series. Power feed equipment is installed at the terminal stations. Typically both ends share the current generation, with one end providing a positive voltage and the other a negative voltage. A virtual earth point exists roughly halfway along the cable under normal operation. The amplifiers or repeaters derive their power from the potential difference across them.
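To make the series power-feed arrangement concrete, the following sketch computes the total feed voltage and the split between the two terminals for assumed, purely illustrative values of feed current, conductor resistance, repeater spacing, and per-repeater voltage drop; none of these figures come from any particular system.

# Sketch: series power feed of submarine repeaters (illustrative values only).
# Both terminals feed a constant current; one end is driven positive, the
# other negative, so a virtual earth sits roughly mid-span.

FEED_CURRENT_A = 1.0          # assumed constant feed current (A)
CONDUCTOR_OHM_PER_KM = 0.8    # assumed conductor resistance (ohm/km)
REPEATER_DROP_V = 50.0        # assumed voltage drop across each repeater (V)
SPAN_KM = 75.0                # assumed repeater spacing (km)
NUM_REPEATERS = 80            # assumed repeater count

def terminal_voltages():
    """Total voltage the two power-feed terminals must supply between them."""
    cable_km = SPAN_KM * (NUM_REPEATERS + 1)
    ohmic_drop = FEED_CURRENT_A * CONDUCTOR_OHM_PER_KM * cable_km
    repeater_drop = REPEATER_DROP_V * NUM_REPEATERS
    total = ohmic_drop + repeater_drop
    # Shared between ends: +total/2 at one terminal, -total/2 at the other,
    # which places the virtual earth near the middle of the cable.
    return total, +total / 2, -total / 2

if __name__ == "__main__":
    total, pos_end, neg_end = terminal_voltages()
    print(f"total feed voltage ~ {total:.0f} V "
          f"({pos_end:+.0f} V and {neg_end:+.0f} V at the two ends)")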
The optic fiber used in undersea cables is chosen for its exceptional clarity, permitting runs of more than 100 kilometers between repeaters to minimize the number of amplifiers and the distortion they cause.
Originally, submarine cables were simple point-to-point connections. With the development of submarine branching units (SBUs), more than one destination could be served by a single cable system. Modern cable systems now usually have their fibers arranged in a self-healing ring to increase their redundancy, with the submarine sections following different paths on the ocean floor. One driver for this development was that the capacity of cable systems had become so large that it was not possible to completely back up a cable system with satellite capacity, so it became necessary to provide sufficient terrestrial back-up capability. Not all telecommunications organizations wish to take advantage of this capability, so modern cable systems may have dual landing points in some countries (where back-up capability is required) and only single landing points in countries where back-up capability is not required, where the capacity to the country is small enough to be backed up by other means, or where back-up is regarded as too expensive.
A further redundant-path development over and above the self-healing rings approach is the "Mesh Network", whereby fast switching equipment is used to transfer services between network paths with little to no effect on higher-level protocols if a path becomes inoperable. The more paths that are available between two points, the less likely it is that one or two simultaneous failures will prevent end-to-end service.
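To put a number on that intuition, here is a minimal sketch (assuming the paths fail independently, with a purely illustrative per-path failure probability) of how the chance of a total outage shrinks as diverse paths are added:

# Probability that every one of k independent, diverse paths is down at once,
# assuming each path fails independently with probability p (illustrative value).

def total_outage_probability(p: float, k: int) -> float:
    return p ** k

p_single_path = 0.01  # assumed per-path failure probability
for k in range(1, 5):
    print(f"{k} path(s): total outage probability = "
          f"{total_outage_probability(p_single_path, k):.2e}")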
Cable repair
Cables can be broken by fishing trawlers, anchoring, undersea avalanches and even shark bites. Breaks were common in the early cable laying era due to the use of simple materials and the laying of cables directly on the ocean floor rather than burying the cables in trenches in more vulnerable areas. Cables were also sometimes cut by enemy forces in wartime. Cable breaks are by no means a thing of the past, with more than 50 repairs a year in the Atlantic alone,[9] and significant breaks in 2006 and 2008.
To effect repairs on deep cables, the damaged portion is brought to the surface using a grapple. Deep cables must be cut at the seabed and each end separately brought to the surface, whereupon a new section is spliced in. The repaired cable is longer than the original, so the excess is deliberately laid in a 'U' shape on the seabed. A submersible can be used to repair cables that lie in shallower waters.
A number of ports near important cable routes became homes to specialised cable repair ships. Halifax, Nova Scotia was home to a half dozen such vessels for most of the 20th century, including long-lived vessels such as the CS Cyrus West Field, CS Minia and CS Mackay-Bennett. The latter two were contracted to recover victims from the sinking of the RMS Titanic. The crews of these vessels developed many new techniques and devices to repair and improve cable laying, such as the "plough".
Intelligence gathering
Underwater cables, which cannot be kept under constant surveillance, have tempted intelligence-gathering organizations since the late 19th century. Frequently at the beginning of wars, nations have cut the other side's cables in order to redirect the information flow into cables that were being monitored. The most ambitious efforts occurred in World War I, when British and German forces systematically attempted to destroy each other's worldwide communications systems by cutting their cables with surface ships or submarines.[10] During the Cold War the United States Navy and National Security Agency (NSA) succeeded in placing wire taps on Soviet underwater communication lines in Operation Ivy Bells.
Notable events
The Newfoundland earthquake of 1929 broke a series of trans-Atlantic cables by triggering a massive undersea avalanche. The sequence of breaks helped scientists chart the progress of the avalanche.
In July 2005, a portion of the SEA-ME-WE 3 submarine cable located 35 kilometres (22 mi) south of Karachi that provided Pakistan's major external communications became defective, disrupting almost all of Pakistan's communications with the rest of the world and affecting approximately 10 million Internet users.[11][12][13]
The Hengchun earthquake of December 26, 2006, rendered numerous cables near Taiwan inoperable.
In March 2007, pirates stole an 11-kilometre (6.8 mi) section of the T-V-H submarine cable that connected Thailand, Vietnam, and Hong Kong, leaving Vietnam's Internet users with far slower speeds. The thieves attempted to sell the 100 tons of illicit cargo as scrap.[14]
The 2008 submarine cable disruption was a series of cable outages affecting two of the three Suez Canal cables, two cables in the Persian Gulf, and one in Malaysia. It caused massive communications disruptions to India and the Middle East.
Ring Networks
Ring networks operate much like bus networks, except that there is no terminating computer. In this configuration, the computers in the ring link to a main communication cable. The network receives information via a "token" containing information requested by one or more computers on the network. The token passes around the ring until the requesting computer(s) have received the data. The token uses a packet of information that serves as an address for the computer that requested the information. The computer then "empties" the token, which continues to travel the ring until another computer requests information to be put into the token. Figure 5 illustrates this topology.
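The token-passing behaviour just described can be sketched in a few lines of Python. This is a simplified illustration of the idea (a token circulating around the ring until the requesting node empties it), not an implementation of any specific ring protocol; the node names and data are made up for the example.

# Simplified illustration of token passing on a ring (not a real protocol).

class Token:
    def __init__(self):
        self.destination = None   # address of the computer that requested data
        self.payload = None       # data carried around the ring

def circulate(ring, token, source, destination, data):
    """Pass the token around the ring from `source` until `destination` empties it."""
    token.destination = destination
    token.payload = data
    i = ring.index(source)
    while True:
        i = (i + 1) % len(ring)            # token moves on to the next node
        node = ring[i]
        if node == token.destination:      # requesting computer receives the data
            print(f"{node} received: {token.payload}")
            token.destination = None       # node "empties" the token
            token.payload = None
            return

ring = ["A", "B", "C", "D"]
circulate(ring, Token(), source="A", destination="C", data="requested record")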
An advanced version of the ring network uses two communication cables sending information in both directions. Known as a counter-rotating ring, this creates a fault-tolerant network that will redirect transmission in the other direction should a node on the network detect a disruption. This network uses fiber optic transceivers: one controlling unit set in "master" mode along with several nodes set as "remote" units. The first remote data transceiver receives the transmission from the master unit and retransmits it to the next remote unit as well as transmitting it back to the master unit. An interruption in the signal line on the first ring is bypassed via the second ring, allowing the network to maintain integrity. Figure 6 illustrates this configuration as it might be used in an ITS installation.
Star Network
Star networks incorporate multiport star couplers to achieve the topology. Once again, a main controlling computer or computer server interconnects with all the other computers in the network. As with the bus topology with a backbone, the failure of one computer node does not cause a failure in the network. Figure 4 illustrates a star network topology. Both the bus and the star network topologies use a central computer that controls the system inputs and outputs. Also called a server, this computer has external connections, to the Internet for example, as well as connections to the computer nodes in the network.
Bus Network
A bus network topology, also called a daisy-chain topology, has each computer directly connected on a main communication line. One end has a controller, and the other end has a terminator. Any computer that wants to talk to the main computer must wait its turn for access to the transmission line. In a simple (straight) bus topology, only one computer can communicate at a time. When a computer uses the network, the information is sent to the controller, which then sends the information down the line of computers until it reaches the terminating computer. Each computer in the line receives the same information. Figure 2 illustrates a bus network topology. A bus network with a backbone operates in the same fashion, but each computer has an individual connection to the network. A bus network with a backbone offers greater reliability than a simple bus topology. In a simple bus, if one computer in the network goes down, the network is broken. A backbone adds reliability in that the loss of one computer does not disrupt the entire network. Figure 3 illustrates this topology with a backbone.
Fiber Optic Network Topologies for ITS and Other Systems
All networks involve the same basic principle: information can be sent to, shared with, passed on, or bypassed within a number of computer stations (nodes) and a master computer (server). Network applications include LANs, MANs, WANs, SANs, intrabuilding and interbuilding communications, broadcast distribution, intelligent transportation systems (ITS), telecommunications, supervisory control and data acquisition (SCADA) networks, etc. In addition to its oft-cited advantages (i.e., bandwidth, durability, ease of installation, immunity to EMI/RFI and harsh environmental conditions, long-term economies, etc.), optical fiber better accommodates today's increasingly complex network architectures than copper alternatives. Figure 1 illustrates the interconnection between these types of networks. Networks can be configured in a number of topologies: a bus, with or without a backbone; a star network; a ring network, which can be redundant and/or self-healing; or some combination of these. Each topology has its strengths and weaknesses, and some applications are better served by one network type than another. Local, metropolitan, or wide area networks generally use a combination, or "mesh", topology.
Dispersion vs. Wavelength
Single-mode fiber dispersion varies with wavelength and is controlled by fiber design (see Figure 11). The wavelength at which dispersion equals zero is called the zero-dispersion wavelength (λ₀). This is the wavelength at which the fiber has its maximum information-carrying capacity. For standard single-mode fibers, this is in the region of 1310 nm. The units for dispersion are also shown in the figure.
Dispersion
Dispersion is the time distortion of an optical signal that results from the time of flight differences of different components of that signal, typically resulting in pulse broadening (see Figure 10). In digital transmission, dispersion limits the maximum data rate, the maximum distance, or the information-carrying capacity of a single-mode fiber link. In analog transmission, dispersion can cause a waveform to become significantly distorted and can result in unacceptable levels of composite second-order distortion (CSO).
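As a rough worked example of how dispersion limits a digital link, the sketch below uses the common first-order estimate that chromatic-dispersion pulse broadening is Δt ≈ |D| · L · Δλ, together with a widely used rule of thumb that broadening should stay below about a quarter of the bit period. The dispersion coefficient, source linewidth, and link length are illustrative assumptions, not values taken from this text.

# Rough estimate of chromatic-dispersion pulse broadening and the bit rate
# it supports. All numeric values are illustrative assumptions.

D_PS_PER_NM_KM = 17.0    # assumed dispersion coefficient near 1550 nm (ps/(nm*km))
LINEWIDTH_NM = 0.1       # assumed source spectral width (nm)
LENGTH_KM = 100.0        # assumed link length (km)

broadening_ps = D_PS_PER_NM_KM * LINEWIDTH_NM * LENGTH_KM   # delta-t ~ |D| * L * delta-lambda
# Rule of thumb: keep broadening below ~1/4 of the bit period.
max_bit_rate_gbps = 1.0 / (4.0 * broadening_ps * 1e-12) / 1e9

print(f"pulse broadening ~ {broadening_ps:.0f} ps")
print(f"dispersion-limited bit rate ~ {max_bit_rate_gbps:.1f} Gb/s")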
Attenuation
Attenuation is the reduction of signal strength or light power over the length of the light-carrying medium. Fiber attenuation is measured in decibels per kilometer (dB/km).
Optical fiber offers superior performance over other transmission media because it combines high bandwidth with low attenuation. This allows signals to be transmitted over longer distances while using fewer regenerators or amplifiers, thus reducing cost and improving signal reliability.
Attenuation of an optical signal varies as a function of wavelength (see Figure 9). Attenuation is very low, as compared to other transmission media (i.e., copper, coaxial cable, etc.), with a typical value of 0.35 dB/km at 1300 nm for standard single-mode fiber. Attenuation at 1550 nm is even lower, with a typical value of 0.25 dB/km. This gives an optical signal, transmitted through fiber, the ability to travel more than 100 km without regeneration or amplification.
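The figures quoted above translate directly into a simple loss-budget calculation. The sketch below reuses the attenuation values from the text but assumes an illustrative launch power, receiver sensitivity, and system margin; those three numbers are placeholders, not specifications of any real system.

# Simple unrepeatered span-length estimate from fiber attenuation and a power budget.
# Attenuation values follow the text; power figures are illustrative assumptions.

ATTEN_DB_PER_KM = {"1310 nm": 0.35, "1550 nm": 0.25}
LAUNCH_POWER_DBM = 0.0        # assumed transmitter launch power (dBm)
RECEIVER_SENS_DBM = -28.0     # assumed receiver sensitivity (dBm)
MARGIN_DB = 3.0               # assumed margin for splices, connectors, aging

budget_db = LAUNCH_POWER_DBM - RECEIVER_SENS_DBM - MARGIN_DB
for wavelength, alpha in ATTEN_DB_PER_KM.items():
    max_km = budget_db / alpha
    print(f"{wavelength}: ~{max_km:.0f} km unrepeatered with a {budget_db:.0f} dB budget")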
Attenuation is caused by several different factors, but primarily scattering and absorption. The scattering of light from molecular level irregularities in the glass structure leads to the general shape of the attenuation curve (see Figure 9). Further attenuation is caused by light absorbed by residual materials, such as metals or water ions, within the fiber core and inner cladding. It is these water ions that cause the “water peak” region on the attenuation curve, typically around 1383 nm. The removal of water ions is of particular interest to fiber manufacturers as this “water peak” region has a broadening effect and contributes to attenuation loss for nearby wavelengths. Some manufacturers now offer low water peak single-mode fibers, which offer additional bandwidth and flexibility compared with standard single-mode fibers. Light leakage due to bending, splices, connectors, or other outside forces is another factor resulting in attenuation.
How to Choose Optical Fiber
Single-Mode Fiber Performance Characteristics
The key optical performance parameters for single-mode fibers are attenuation, dispersion, and mode-field diameter.
Optical fiber performance parameters can vary significantly among fibers from different manufacturers in ways that can affect your system's performance. It is important to understand how to specify the fiber that best meets system requirements.
Business Voice and Multimedia Demonstrations
Nortel's business voice and multimedia solutions, including Nortel's IP Powered Business solution, demonstrate how small and medium sized businesses (SMBs) and enterprises can stay connected while on-the-go with applications that increase productivity. The demonstration will highlight Nortel Mobile Extension, a fixed mobile convergence (FMC) application that allows business subscribers to turn their mobile phone into a business extension that uses a single number, single voicemail and call grab capabilities for both their desk phone and mobile phones.
Optical and WDM-PON Solution Presentations
Nortel's 40G/100G Adaptive Optical Engine is a plug, play and evolve technology that is deployable over any fiber, allowing operators to reduce engineering, eliminate equipment and upgrade quickly and cost-effectively from 10G to 40G - and ultimately, all the way to 100G. The demonstration will also showcase Nortel's WDM-PON based Ethernet Access solution that eliminates bottlenecks and enables the delivery of triple play and high performance business and residential applications such as video streaming and VoIP.
The Cable Show 2009: Nortel Showcases Optical Technology, Voice and Multimedia Applications for Consumer and Business
WASHINGTON - Nortel [TSX: NT, OTC: NRTLQ] invites attendees of The Cable Show 2009 (April 1-3, 2009 in Washington) to visit with our executives and other experts to discuss how 40G/100G optical solutions as well as business and consumer voice and multimedia applications can advance the delivery of information, entertainment and communications, while creating new business opportunities for the cable industry.
As the worldwide leader in carrier VoIP and an expert in next-generation optical technologies, Nortel is in a unique position to provide cable and multi-service operators (MSOs) network solutions that can reduce operational costs, support quadruple the network traffic and help drive new revenues through innovative service offerings.
At The Cable Show, Nortel experts in carrier VoIP and applications and optical network technology will be available to show how:
- Business voice and multimedia solutions keep busy professionals connected while on-the-go
Optical Scattering Instrument Characterization:
Integrated light scatter instruments can be characterized with respect to their ability to measure microroughness on different length scales. A methodology and computer program have been developed which allow instrument manufacturers to determine the transfer functions for their instruments. See Spatial Frequency Response Function.
Characterization of light scattering methodologies, such as determining instrument signature functions, plays an important role in our work. For example, the BRDF that an instrument measures for a perfectly flat and defectless surface is dominated by the Rayleigh scatter in the air within the field of view of the instrument. This Rayleigh-equivalent polarized BRDF has been calculated and experimentally verified.
Resources:
A laser-based goniometric optical scatter instrument (GOSI) is available for measuring the bidirectional reflectance distribution function (BRDF), its polarization counterpart (Mueller matrix BRDF), or other light scattering ellipsometry parameters, from a variety of samples or surfaces. Another instrument, the Scanning Optical Scatter Instrument, is being developed to yield the scattering distribution in multiple directions at once, with partial polarimetric capabilities. These facilities are housed in clean environments to maintain sample integrity. See Bidirectional Optical Scattering Facility for details. Other instruments exist within the division under Spectrophotometry.
Model Software:
SCATMECH: Polarized Light Scattering C++ Class Library -- A C++ object class library has been developed to distribute models for polarized light scattering from surfaces. It is the intent of this library to allow researchers in the light-scattering community to fully utilize the models described in the publications found below. Included in the library are also a number of classes that may be useful to anyone working with polarized light. The library is constructed so that it can easily be expanded to include new models.
MIST: Modeled Integrated Scatter Tool -- The MIST program has been developed to provide users with a general application to model an integrated scattering system. The program performs an integration of the bidirectional reflectance distribution function (BRDF) over solid angles specified by the user and allows the dependence of these integrals on model parameters to be investigated. The models are provided by the SCATMECH library of scattering codes.
Light Scattering Ellipsometry:
The polarization of scattered light can often indicate the source of that scattered light. Using Light Scattering Ellipsometry, whereby the polarization of light scattered into directions out of the plane of incidence is measured for a fixed incident polarization, scattering from microroughness, subsurface defects, and particulate contamination can be distinguished. Experimental measurements and theoretical modeling have been carried out to demonstrate this effect in a variety of systems:
- Roughness of a single material (silicon, glass, steel, and titanium nitride)
- Subsurface defects (fused silica, glass ceramic, and subsurface defects in silicon)
- Roughness of a dielectric layer (SiO2 and polymer films on silicon)
- Particles above a single interface (polystyrene, copper, and gold spheres on silicon)
- Particles above a thin film (polystyrene spheres on polystyrene films on silicon)
- Special-effect pigmented coatings (metallic and pearlescent flakes)
- Overlay structures
Placing the technique on a firm metrological basis, so that it is quantitatively accurate, is a high priority of the program. Polarized light scattering in the Stokes-Mueller representation is also studied.
Optical Scattering From Surfaces
We study how material properties, surface topography, and contaminants affect the distribution of light scattered from surfaces, with an aim toward
- Developing standard measurement methods and standard artifacts for use in industry, and
- Providing a basis for interpreting scattered light distributions so that industry can optimize their use of optical scatter methods.
Topologies
LightPointe's optical wireless products, based on the latest in FSO technology, are designed and engineered to work in any network topology, including point-to-point, mesh, point-to-multipoint, and ring with spurs. This simple approach provides Enterprises and Mobile Carriers the ability to easily build and extend networks that deliver fiber-optic speeds for customers. FSO-based products enable cost-effective deployment and the highest throughput with same-day connections possible from roof-to-roof, roof-to-window and window-to-window — all without tearing up streets and sidewalks.
Light scattering
The propagation of light through the core of an optical fiber is based on total internal reflection of the lightwave. Rough and irregular surfaces, even at the molecular level of the glass, can cause light rays to be reflected in many random directions. We refer to this type of reflection as “diffuse reflection”, and it is typically characterized by a wide variety of reflection angles. Most of the objects that you see with the naked eye are visible due to diffuse reflection. Another term commonly used for this type of reflection is “light scattering”. Light scattering from the surfaces of objects is our primary mechanism of physical observation. [16] [17]
Light scattering depends on the wavelength of the light being scattered. Thus, limits to spatial scales of visibility arise, depending on the frequency of the incident lightwave and the physical dimension (or spatial scale) of the scattering center, which is typically in the form of some specific microstructural feature. Since visible light has a wavelength of the order of one micron (one millionth of a meter), scattering centers will have dimensions on a similar spatial scale.
Thus, attenuation results from the incoherent scattering of light at internal surfaces and interfaces. In (poly)crystalline materials such as metals and ceramics, in addition to pores, most of the internal surfaces or interfaces are in the form of grain boundaries that separate tiny regions of crystalline order. It has recently been shown that when the size of the scattering center (or grain boundary) is reduced below the size of the wavelength of the light being scattered, the scattering no longer occurs to any significant extent. This phenomenon has given rise to the production of transparent ceramic materials.
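The wavelength dependence described in the preceding paragraphs is strongest for scattering centers much smaller than the wavelength: in that (Rayleigh) regime the scattered power scales roughly as 1/λ⁴. The tiny sketch below assumes only that standard scaling; the comparison wavelengths are illustrative choices of common fiber operating windows.

# Relative Rayleigh scattering strength at a few common fiber wavelengths,
# assuming only the standard 1/lambda^4 scaling.

def rayleigh_relative(lambda_nm: float, reference_nm: float = 1550.0) -> float:
    return (reference_nm / lambda_nm) ** 4

for nm in (850.0, 1310.0, 1550.0):
    print(f"{nm:.0f} nm scatters ~{rayleigh_relative(nm):.1f}x as strongly as 1550 nm")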
Similarly, the scattering of light in optical quality glass fiber is caused by molecular level irregularities (compositional fluctuations) in the glass structure. Indeed, one emerging school of thought is that a glass is simply the limiting case of a polycrystalline solid. Within this framework, "domains" exhibiting various degrees of short-range order become the building blocks of both metals and alloys, as well as glasses and ceramics. Distributed both between and within these domains are microstructural defects which provide the most ideal locations for the occurrence of light scattering. This same phenomenon is seen as one of the limiting factors in the transparency of IR missile domes.
Multi-mode fiber
Fiber with large core diameter (greater than 10 micrometers) may be analyzed by geometric optics. Such fiber is called multi-mode fiber, from the electromagnetic analysis (see below). In a step-index multi-mode fiber, rays of light are guided along the fiber core by total internal reflection. Rays that meet the core-cladding boundary at a high angle (measured relative to a line normal to the boundary), greater than the critical angle for this boundary, are completely reflected. The critical angle (minimum angle for total internal reflection) is determined by the difference in index of refraction between the core and cladding materials. Rays that meet the boundary at a low angle are refracted from the core into the cladding, and do not convey light and hence information along the fiber. The critical angle determines the acceptance angle of the fiber, often reported as a numerical aperture. A high numerical aperture allows light to propagate down the fiber in rays both close to the axis and at various angles, allowing efficient coupling of light into the fiber. However, this high numerical aperture increases the amount of dispersion as rays at different angles have different path lengths and therefore take different times to traverse the fiber. A low numerical aperture may therefore be desirable.
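A small numerical illustration of the relations just described (the critical angle from the ratio of cladding to core index, and the numerical aperture and acceptance angle from the two indices); the refractive-index values are typical illustrative numbers, not values given in the text.

import math

# Critical angle and numerical aperture of a step-index multi-mode fiber.
# Index values are typical illustrative numbers.

n_core = 1.48
n_clad = 1.46

critical_angle_deg = math.degrees(math.asin(n_clad / n_core))            # measured from the normal
numerical_aperture = math.sqrt(n_core**2 - n_clad**2)
acceptance_half_angle_deg = math.degrees(math.asin(numerical_aperture))  # launch cone in air (n = 1)

print(f"critical angle        ~ {critical_angle_deg:.1f} degrees")
print(f"numerical aperture    ~ {numerical_aperture:.3f}")
print(f"acceptance half-angle ~ {acceptance_half_angle_deg:.1f} degrees")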
Other uses of optical fibers
Fibers are widely used in illumination applications. They are used as light guides in medical and other applications where bright light needs to be shone on a target without a clear line-of-sight path. In some buildings, optical fibers are used to route sunlight from the roof to other parts of the building (see non-imaging optics). Optical fiber illumination is also used for decorative applications, including signs, art, and artificial Christmas trees. Swarovski boutiques use optical fibers to illuminate their crystal showcases from many different angles while only employing one light source. Optical fiber is an intrinsic part of the light-transmitting concrete building product, LiTraCon.
Optical fiber is also used in imaging optics. A coherent bundle of fibers is used, sometimes along with lenses, for a long, thin imaging device called an endoscope, which is used to view objects through a small hole. Medical endoscopes are used for minimally invasive exploratory or surgical procedures (endoscopy). Industrial endoscopes (see fiberscope or borescope) are used for inspecting anything hard to reach, such as jet engine interiors.
In spectroscopy, optical fiber bundles are used to transmit light from a spectrometer to a substance which cannot be placed inside the spectrometer itself, in order to analyze its composition. A spectrometer analyzes substances by bouncing light off of and through them. By using fibers, a spectrometer can be used to study objects that are too large to fit inside, or gases, or reactions which occur in pressure vessels.[13][14][15]
An optical fiber doped with certain rare-earth elements such as erbium can be used as the gain medium of a laser or optical amplifier. Rare-earth doped optical fibers can be used to provide signal amplification by splicing a short section of doped fiber into a regular (undoped) optical fiber line. The doped fiber is optically pumped with a second laser wavelength that is coupled into the line in addition to the signal wave. Both wavelengths of light are transmitted through the doped fiber, which transfers energy from the second pump wavelength to the signal wave. The process that causes the amplification is stimulated emission.
Optical fibers doped with a wavelength shifter are used to collect scintillation light in physics experiments.
Optical fiber can be used to supply a low level of power (around one watt) to electronics situated in a difficult electrical environment. Examples of this are electronics in high-powered antenna elements and measurement devices used in high voltage transmission equipment.
Fiber optic sensors
Fibers have many uses in remote sensing. In some applications, the sensor is itself an optical fiber. In other cases, fiber is used to connect a non-fiberoptic sensor to a measurement system. Depending on the application, fiber may be used because of its small size, or the fact that no electrical power is needed at the remote location, or because many sensors can be multiplexed along the length of a fiber by using different wavelengths of light for each sensor, or by sensing the time delay as light passes along the fiber through each sensor. Time delay can be determined using a device such as an optical time-domain reflectometer.
Optical fibers can be used as sensors to measure strain, temperature, pressure and other quantities by modifying a fiber so that the quantity to be measured modulates the intensity, phase, polarization, wavelength or transit time of light in the fiber. Sensors that vary the intensity of light are the simplest, since only a simple source and detector are required. A particularly useful feature of such fiber optic sensors is that they can, if required, provide distributed sensing over distances of up to one meter.
Extrinsic fiber optic sensors use an optical fiber cable, normally a multi-mode one, to transmit modulated light from either a non-fiber optical sensor, or an electronic sensor connected to an optical transmitter. A major benefit of extrinsic sensors is their ability to reach places which are otherwise inaccessible. An example is the measurement of temperature inside aircraft jet engines by using a fiber to transmit radiation into a radiation pyrometer located outside the engine. Extrinsic sensors can also be used in the same way to measure the internal temperature of electrical transformers, where the extreme electromagnetic fields present make other measurement techniques impossible. Extrinsic sensors are used to measure vibration, rotation, displacement, velocity, acceleration, torque, and twisting.
Optical fiber communication
Optical fiber can be used as a medium for telecommunication and networking because it is flexible and can be bundled as cables. It is especially advantageous for long-distance communications, because light propagates through the fiber with little attenuation compared to electrical cables. This allows long distances to be spanned with few repeaters. Additionally, the per-channel light signals propagating in the fiber can be modulated at rates as high as 111 gigabits per second,[12] although 10 or 40 Gb/s is typical in deployed systems.[citation needed] Each fiber can carry many independent channels, each using a different wavelength of light (wavelength-division multiplexing (WDM)). The net data rate (data rate without overhead bytes) per fiber is the per-channel data rate reduced by the FEC overhead, multiplied by the number of channels (usually up to eighty in commercial dense WDM systems as of 2008).
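The net-throughput arithmetic in the last sentence can be written out directly. In the sketch below the per-channel line rate and FEC overhead are illustrative assumptions; only the "up to eighty" channel count comes from the text.

# Net fiber throughput: per-channel line rate reduced by FEC overhead,
# multiplied by the number of WDM channels. Rate and overhead are illustrative.

line_rate_gbps = 10.0      # assumed per-channel line rate (Gb/s)
fec_overhead = 0.07        # assumed ~7% FEC overhead
channels = 80              # "up to eighty" channels, per the text

net_per_channel = line_rate_gbps * (1 - fec_overhead)
net_total_gbps = net_per_channel * channels
print(f"net per channel: {net_per_channel:.2f} Gb/s, total per fiber: {net_total_gbps:.0f} Gb/s")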
Over short distances, such as networking within a building, fiber saves space in cable ducts because a single fiber can carry much more data than a single electrical cable. Fiber is also immune to electrical interference; there is no cross-talk between signals in different cables and no pickup of environmental noise. Non-armored fiber cables do not conduct electricity, which makes fiber a good solution for protecting communications equipment located in high voltage environments such as power generation facilities, or metal communication structures prone to lightning strikes. They can also be used in environments where explosive fumes are present, without danger of ignition. Wiretapping is more difficult compared to electrical connections, and there are concentric dual core fibers that are said to be tap-proof.
Although fibers can be made out of transparent plastic, glass, or a combination of the two, the fibers used in long-distance telecommunications applications are always glass, because of the lower optical attenuation. Both multi-mode and single-mode fibers are used in communications, with multi-mode fiber used mostly for short distances, up to 550 m (600 yards), and single-mode fiber used for longer distance links. Because of the tighter tolerances required to couple light into and between single-mode fibers (core diameter about 10 micrometers), single-mode transmitters, receivers, amplifiers and other components are generally more expensive than multi-mode components.
Examples of applications are TOSLINK, Fiber Distributed Data Interface (FDDI), and Synchronous Optical Networking (SONET).
History
Fiber optics, though used extensively in the modern world, is a fairly simple and old technology. Guiding of light by refraction, the principle that makes fiber optics possible, was first demonstrated by Daniel Colladon and Jacques Babinet in Paris in the early 1840s. John Tyndall included a demonstration of it in his public lectures in London a dozen years later.[1] Tyndall also wrote about the property of total internal reflection in an introductory book about the nature of light in 1870: "When the light passes from air into water, the refracted ray is bent towards the perpendicular... When the ray passes from water to air it is bent from the perpendicular... If the angle which the ray in water encloses with the perpendicular to the surface be greater than 48 degrees, the ray will not quit the water at all: it will be totally reflected at the surface.... The angle which marks the limit where total reflexion begins is called the limiting angle of the medium. For water this angle is 48°27', for flint glass it is 38°41', while for diamond it is 23°42'."[2][3]
Practical applications, such as close internal illumination during dentistry, appeared early in the twentieth century. Image transmission through tubes was demonstrated independently by the radio experimenter Clarence Hansell and the television pioneer John Logie Baird in the 1920s. The principle was first used for internal medical examinations by Heinrich Lamm in the following decade. In 1952, physicist Narinder Singh Kapany conducted experiments that led to the invention of optical fiber. Modern optical fibers, where the glass fiber is coated with a transparent cladding to offer a more suitable refractive index, appeared later in the decade.[1] Development then focused on fiber bundles for image transmission. The first fiber optic semi-flexible gastroscope was patented by Basil Hirschowitz, C. Wilbur Peters, and Lawrence E. Curtiss, researchers at the University of Michigan, in 1956. In the process of developing the gastroscope, Curtiss produced the first glass-clad fibers; previous optical fibers had relied on air or impractical oils and waxes as the low-index cladding material. A variety of other image transmission applications soon followed.
Jun-ichi Nishizawa, a Japanese scientist at Tohoku University, was the first to propose the use of optical fibers for communications in 1963.[4] Nishizawa invented other technologies that contributed to the development of optical fiber communications as well.[5] Nishizawa invented the graded-index optical fiber in 1964 as a channel for transmitting light from semiconductor lasers over long distances with low loss.[6]
In 1965, Charles K. Kao and George A. Hockham of the British company Standard Telephones and Cables (STC) were the first to promote the idea that the attenuation in optical fibers could be reduced below 20 decibels per kilometer, allowing fibers to be a practical medium for communication.[7] They proposed that the attenuation in fibers available at the time was caused by impurities, which could be removed, rather than fundamental physical effects such as scattering. The crucial attenuation level of 20 dB/km was first achieved in 1970, by researchers Robert D. Maurer, Donald Keck, Peter C. Schultz, and Frank Zimar working for American glass maker Corning Glass Works, now Corning Incorporated. They demonstrated a fiber with 17 dB/km attenuation by doping silica glass with titanium. A few years later they produced a fiber with only 4 dB/km attenuation using germanium dioxide as the core dopant. Such low attenuations ushered in optical fiber telecommunications and enabled the Internet. In 1981, General Electric produced fused quartz ingots that could be drawn into fiber optic strands 25 miles (40 km) long.[8]
Attenuations in modern optical cables are far less than those in electrical copper cables, leading to long-haul fiber connections with repeater distances of 50–80 kilometres (31–50 mi). The erbium-doped fiber amplifier, which reduced the cost of long-distance fiber systems by reducing or even in many cases eliminating the need for optical-electrical-optical repeaters, was co-developed by teams led by David N. Payne of the University of Southampton, and Emmanuel Desurvire at Bell Laboratories in 1986. The more robust optical fiber commonly used today utilizes glass for both core and sheath and is therefore less prone to aging processes. It was invented by Gerhard Bernsee in 1973 of Schott Glass in Germany.[9]
In 1991, the emerging field of photonic crystals led to the development of photonic-crystal fiber[10] which guides light by means of diffraction from a periodic structure, rather than total internal reflection. The first photonic crystal fibers became commercially available in 2000.[11] Photonic crystal fibers can be designed to carry higher power than conventional fiber, and their wavelength dependent properties can be manipulated to improve their performance in certain applications.
Optical fiber
An optical fiber (or fibre) is a glass or plastic fiber that carries light along its length. Fiber optics is the overlap of applied science and engineering concerned with the design and application of optical fibers. Optical fibers are widely used in fiber-optic communications, which permits transmission over longer distances and at higher bandwidths (data rates) than other forms of communications. Fibers are used instead of metal wires because signals travel along them with less loss, and they are also immune to electromagnetic interference. Fibers are also used for illumination, and are wrapped in bundles so they can be used to carry images, thus allowing viewing in tight spaces. Specially designed fibers are used for a variety of other applications, including sensors and fiber lasers.
Light is kept in the core of the optical fiber by total internal reflection. This causes the fiber to act as a waveguide. Fibers which support many propagation paths or transverse modes are called multi-mode fibers (MMF), while those which can only support a single mode are called single-mode fibers (SMF). Multi-mode fibers generally have a larger core diameter, and are used for short-distance communication links and for applications where high power must be transmitted. Single-mode fibers are used for most communication links longer than 550 metres (1,800 ft).
Joining lengths of optical fiber is more complex than joining electrical wire or cable. The ends of the fibers must be carefully cleaved, and then spliced together either mechanically or by fusing them together with an electric arc. Special connectors are used to make removable connections.
Fiber fuse
At high optical intensities, above 2 megawatts per square centimeter, when a fiber is subjected to a shock or is otherwise suddenly damaged, a fiber fuse can occur. The reflection from the damage vaporizes the fiber immediately before the break, and this new defect remains reflective so that the damage propagates back toward the transmitter at 1–3 meters per second (4−11 km/h, 2–8 mph).[25][26] The open fiber control system, which ensures laser eye safety in the event of a broken fiber, can also effectively halt propagation of the fiber fuse.[27] In situations, such as undersea cables, where high power levels might be used without the need for open fiber control, a "fiber fuse" protection device at the transmitter can break the circuit to prevent any damage.
Free-space coupling
It often becomes necessary to align an optical fiber with another optical fiber or an optical device such as a light-emitting diode, a laser diode, or an optoelectronic device such as a modulator. This can involve either carefully aligning the fiber and placing it in contact with the device to which it is to couple, or can use a lens to allow coupling over an air gap. In some cases the end of the fiber is polished into a curved form that is designed to allow it to act as a lens.
In a laboratory environment, the fiber end is usually aligned to the device or other fiber with a fiber launch system that uses a microscope objective lens to focus the light down to a fine point. A precision translation stage (micro-positioning table) is used to move the lens, fiber, or device to allow the coupling efficiency to be optimized.
Termination and splicing
Optical fibers are connected to terminal equipment by optical fiber connectors. These connectors are usually of a standard type such as FC, SC, ST, LC, or MTRJ.
Optical fibers may be connected to each other by connectors or by splicing, that is, joining two fibers together to form a continuous optical waveguide. The generally accepted splicing method is arc fusion splicing, which melts the fiber ends together with an electric arc. For quicker fastening jobs, a "mechanical splice" is used.
Fusion splicing is done with a specialized instrument that typically operates as follows: The two cable ends are fastened inside a splice enclosure that will protect the splices, and the fiber ends are stripped of their protective polymer coating (as well as the more sturdy outer jacket, if present). The ends are cleaved (cut) with a precision cleaver to make them perpendicular, and are placed into special holders in the splicer. The splice is usually inspected via a magnified viewing screen to check the cleaves before and after the splice. The splicer uses small motors to align the end faces together, and emits a small spark between electrodes at the gap to burn off dust and moisture. Then the splicer generates a larger spark that raises the temperature above the melting point of the glass, fusing the ends together permanently. The location and energy of the spark is carefully controlled so that the molten core and cladding don't mix, and this minimizes optical loss. A splice loss estimate is measured by the splicer, by directing light through the cladding on one side and measuring the light leaking from the cladding on the other side. A splice loss under 0.1 dB is typical. The complexity of this process makes fiber splicing much more difficult than splicing copper wire.
Mechanical fiber splices are designed to be quicker and easier to install, but there is still the need for stripping, careful cleaning and precision cleaving. The fiber ends are aligned and held together by a precision-made sleeve, often using a clear index-matching gel that enhances the transmission of light across the joint. Such joints typically have higher optical loss and are less robust than fusion splices, especially if the gel is used. All splicing techniques involve the use of an enclosure into which the splice is placed for protection afterward.
Fibers are terminated in connectors so that the fiber end is held at the end face precisely and securely. A fiber-optic connector is basically a rigid cylindrical barrel surrounded by a sleeve that holds the barrel in its mating socket. The mating mechanism can be "push and click", "turn and latch" ("bayonet"), or screw-in (threaded). A typical connector is installed by preparing the fiber end and inserting it into the rear of the connector body. Quick-set adhesive is usually used so the fiber is held securely, and a strain relief is secured to the rear. Once the adhesive has set, the fiber's end is polished to a mirror finish. Various polish profiles are used, depending on the type of fiber and the application. For single-mode fiber, the fiber ends are typically polished with a slight curvature, such that when the connectors are mated the fibers touch only at their cores. This is known as a "physical contact" (PC) polish. The curved surface may be polished at an angle, to make an "angled physical contact" (APC) connection. Such connections have higher loss than PC connections, but greatly reduced back reflection, because light that reflects from the angled surface leaks out of the fiber core; the resulting loss in signal strength is known as gap loss. APC fiber ends have low back reflection even when disconnected.
Optical fiber cables
In practical fibers, the cladding is usually coated with a tough resin buffer layer, which may be further surrounded by a jacket layer, usually plastic. These layers add strength to the fiber but do not contribute to its optical wave guide properties. Rigid fiber assemblies sometimes put light-absorbing ("dark") glass between the fibers, to prevent light that leaks out of one fiber from entering another. This reduces cross-talk between the fibers, or reduces flare in fiber bundle imaging applications.[20][21]
Modern cables come in a wide variety of sheathings and armor, designed for applications such as direct burial in trenches, high voltage isolation, dual use as power lines,[22][not in citation given] installation in conduit, lashing to aerial telephone poles, submarine installation, and insertion in paved streets. The cost of small fiber-count pole-mounted cables has greatly decreased due to the high Japanese and South Korean demand for fiber to the home (FTTH) installations.
Fiber cable can be very flexible, but traditional fiber's loss increases greatly if the fiber is bent with a radius smaller than around 30 mm. This creates a problem when the cable is bent around corners or wound around a spool, making FTTX installations more complicated. "Bendable fibers", targeted towards easier installation in home environments, have been standardized as ITU-T G.657. This type of fiber can be bent with a radius as low as 7.5 mm without adverse impact. Even more bendable fibers have been developed.[23] Bendable fiber may also be resistant to fiber hacking, in which the signal in a fiber is surreptitiously monitored by bending the fiber and detecting the leakage.
Process
Standard optical fibers are made by first constructing a large-diameter preform, with a carefully controlled refractive index profile, and then pulling the preform to form the long, thin optical fiber. The preform is commonly made by three chemical vapor deposition methods: inside vapor deposition, outside vapor deposition, and vapor axial deposition.[19]
With inside vapor deposition, the preform starts as a hollow glass tube approximately 40 centimetres (16 in) long, which is placed horizontally and rotated slowly on a lathe. Gases such as silicon tetrachloride (SiCl4) or germanium tetrachloride (GeCl4) are injected with oxygen into the end of the tube. The gases are then heated by an external hydrogen burner, bringing the temperature of the gas up to 1900 K (1600 °C, 3000 °F), where the tetrachlorides react with oxygen to produce silica or germania (germanium dioxide) particles. When the reaction conditions are chosen to allow this reaction to occur in the gas phase throughout the tube volume, in contrast to earlier techniques where the reaction occurred only on the glass surface, this technique is called modified chemical vapor deposition.
The oxide particles then agglomerate to form large particle chains, which subsequently deposit on the walls of the tube as soot. The deposition is due to the large difference in temperature between the gas core and the wall causing the gas to push the particles outwards (this is known as thermophoresis). The torch is then traversed up and down the length of the tube to deposit the material evenly. After the torch has reached the end of the tube, it is then brought back to the beginning of the tube and the deposited particles are then melted to form a solid layer. This process is repeated until a sufficient amount of material has been deposited. For each layer the composition can be modified by varying the gas composition, resulting in precise control of the finished fiber's optical properties.
In outside vapor deposition or vapor axial deposition, the glass is formed by flame hydrolysis, a reaction in which silicon tetrachloride and germanium tetrachloride are oxidized by reaction with water (H2O) in an oxyhydrogen flame. In outside vapor deposition the glass is deposited onto a solid rod, which is removed before further processing. In vapor axial deposition, a short seed rod is used, and a porous preform, whose length is not limited by the size of the source rod, is built up on its end. The porous preform is consolidated into a transparent, solid preform by heating to about 1800 K (1500 °C, 2800 °F).
The preform, however constructed, is then placed in a device known as a drawing tower, where the preform tip is heated and the optical fiber is pulled out as a string. The resultant fiber width is measured, and the tension on the fiber is controlled to maintain the fiber thickness.
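The diameter feedback described above can be pictured as a simple control loop. The sketch below is purely illustrative: the linear "more tension means thinner fiber" model, the gain, and the numbers are assumptions, not a real drawing-tower controller:

    def draw_control(target_um=125.0, steps=20, gain=0.01):
        """Toy proportional loop: nudge draw tension from the measured diameter."""
        tension = 1.0                        # arbitrary units
        diameter = None
        for _ in range(steps):
            diameter = 130.0 / tension       # toy model: harder pull -> thinner fiber
            error = diameter - target_um     # positive if the fiber is too thick
            tension += gain * error          # pull harder when too thick
        return diameter, tension

    d, t = draw_control()
    print(f"settled near {d:.2f} um at tension {t:.3f}")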
Manufacturing
Materials
Glass optical fibers are almost always made from silica, but some other materials, such as fluorozirconate, fluoroaluminate, and chalcogenide glasses, are used for longer-wavelength infrared applications. Like other glasses, these glasses have a refractive index of about 1.5. Typically the difference between core and cladding is less than one percent.
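That sub-one-percent index difference is what sets the fiber's numerical aperture, i.e. the cone of light it will accept. A worked example with assumed (but typical) index values:

    import math

    n_core, n_clad = 1.4500, 1.4446      # assumed values, roughly 0.4% relative difference

    na = math.sqrt(n_core**2 - n_clad**2)          # numerical aperture
    half_angle = math.degrees(math.asin(na))       # acceptance half-angle in air
    print(f"NA ~ {na:.3f}, acceptance half-angle ~ {half_angle:.1f} degrees")
    # About 0.125 and 7.2 degrees: only light entering within this narrow cone is guided.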
Plastic optical fibers (POF) are commonly step-index multi-mode fibers with a core diameter of 0.5 millimeters or larger. POF typically have higher attenuation coefficients than glass fibers, 1 dB/m or higher, and this high attenuation limits the range of POF-based systems.
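To see how a 1 dB/m attenuation coefficient limits reach, a quick power-budget estimate; the 13 dB budget is an assumed illustrative figure, not a product specification:

    def max_reach_m(link_budget_db, attenuation_db_per_m):
        """Longest run that still closes the link, ignoring connector losses."""
        return link_budget_db / attenuation_db_per_m

    budget_db = 13.0                         # assumed transmitter power minus receiver sensitivity
    print(max_reach_m(budget_db, 1.0))       # POF at 1 dB/m            -> 13 m
    print(max_reach_m(budget_db, 0.0002))    # low-loss silica, ~0.2 dB/km -> 65,000 m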
Commercialization:
LightPointe is not the only U.S. provider of FSO equipment, but it does have a product line, a product development plan, customers, partners for field testing, adequate funding, a multinational presence, and a reasonably strong intellectual property position with multiple patents. A direct descendant of BMDO SBIR-sponsored technology is now commercially available for beta trials: the 2.5 Gb/s FlightSpectrum™ product, which operates with multiple beams and RF out-of-band management at the 1550 nm wavelength over distances of up to 1,000 meters. The company anticipates the general release of this product in the first quarter of 2002.
The current Flight™ product line features three models (FlightLite™, FlightPath™, and FlightSpectrum) and two networking tools (FlightManager™ and FlightNavigator™) that enable LightPointe to meet customer demands for communication equipment directly and also indirectly through service providers (who have their own networks and customers). The capacity offered ranges from 10 Mb/s up to 1.25 Gb/s, operating at the 850 nm wavelength and lower power. The company is beginning to introduce the higher powered 1550 nm wavelength lasers. All of these products operate at layer one, accommodate any protocol, connect to existing network equipment, and require no licensing. Costs vary with transmitter power and management software, but run from roughly $5,000 to $50,000 per pair of transceiver units. The company is aggressively pricing its FlightLite 1550 product, which provides 155 Mb/s over a distance of up to 500 meters, at $8,000. This compares favorably with the cost of laying fiber-optic cable, which in U.S. metropolitan areas can run between $100,000 and $200,000 per kilometer.
LightPointe has one patent pending on the RF-backup hybrid system work directly funded by BMDO.
In 2000, LightPointe obtained more than $1 million in revenue from the sale of FSO products and services. For 2001, it anticipates a much higher total revenue figure, but below $10 million. Existing customers include Rockefeller Group Telecommunications Services, Inc.; the Smithsonian Institution; Barclays Bank; Dain Rauscher; and New School University. The company has also established working relationships with more than a dozen carriers located in 31 nations to field- and beta-test new equipment and to obtain lifecycle information on existing equipment.
In September 2000, LightPointe received $12 million in venture capital funding from Sevin Rosen Funds, Ampersand Ventures, and Telecom Partners. In January 2001, it obtained $6.5 million in additional working capital and debt financing from Silicon Valley Bank and GATX Ventures Inc. Later in 2001, the first-round venture capital firms, together with Cisco Systems, Inc. and Corning Innovation Ventures, invested an additional $33 million.
In general, LightPointe intends to capture a significant share of the international FSO market by emphasizing its ability to meet different customer needs. It can provide a range of capacity and price accordingly; it can supply service quickly (within days); and because its equipment requires no permanent infrastructure, the company can enter into leasing arrangements with service providers. LightPointe’s business strategy relies heavily on the continuing evolution of telecommunications to all-optical configurations (avoiding or eliminating electro-optic conversion).
Technology Description:
LightPointe’s patented technology uses a combination of adaptive power control techniques, active tracking systems, spatial diversity for both transmitters and receiving lenses, microwave radio frequency out-of-band management, higher powered lasers operating at 1550nm wavelength, and protocol-independent physical-layer (layer one) equipment. For carrier-grade reliability (one bad bit out of every ten billion carried) at a data transfer rate of OC-48 (2.5Gb/s) through dry air, one kilometer is the current maximum distance between LightPointe transceivers. If an active tracking system is employed, that range might be doubled. A newer version currently undergoing “beta testing” will transmit four separate wavelengths that could provide either four OC-48 signals or the capacity of one OC-192 (10 Gb/s).
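Two of the numbers in this paragraph can be sanity-checked directly: a bit error rate of one in ten billion at OC-48 speed, and the aggregate capacity of four OC-48 wavelengths. A quick back-of-the-envelope calculation:

    bit_rate = 2.488e9        # OC-48 line rate, bits per second
    ber = 1e-10               # one bad bit out of every ten billion carried

    errors_per_s = bit_rate * ber
    print(f"~{errors_per_s:.2f} errored bits/s, "
          f"one every {1 / errors_per_s:.1f} s on average")

    wavelengths = 4           # the beta unit carries four separate wavelengths
    print(f"aggregate ~{wavelengths * bit_rate / 1e9:.2f} Gb/s (about OC-192)")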
LightPointe’s solution to problems of scintillation (atmospheric turbulence) and Mie scattering (dense fog) is an approach called “spatial diversity”. A transceiver actually houses three laser transmitters separated by approximately 200mm. By sending three beams simultaneously, it is highly probable that at least one will get through unperturbed. Likewise, the use of multiple, spatially separated, large-aperture receiving lenses also reduces problems associated with scintillation.
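A rough way to see why three spatially separated beams help: if each beam is momentarily blocked independently with some probability, all three fail together only with that probability cubed. The 10% per-beam figure and the independence assumption below are illustrative, not measured values:

    p_blocked = 0.10                    # assumed momentary outage per beam
    beams = 3

    p_all_blocked = p_blocked ** beams  # assumes independent fades across beams
    print(f"single beam out: {p_blocked:.0%}, "
          f"all {beams} out together: {p_all_blocked:.1%}")   # 10% per beam -> 0.1% overall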
FSO: Optical or Wireless?
Speed of fiber — flexibility of wireless.
Optical wireless, based on FSO technology, is an outdoor wireless product category that provides the speed of fiber with the flexibility of wireless. It enables optical transmission at speeds of up to 1.25 Gbps and, in the future, will be capable of 10 Gbps using WDM. This is not possible with any fixed wireless or RF technology. Optical wireless also eliminates the need to buy expensive spectrum (it requires no FCC or municipal license approvals worldwide), which further distinguishes it from fixed wireless technologies. Moreover, FSO technology's narrow beam produces a footprint of typically two meters, versus 20 meters or more for traditional radio-based technologies, even newer ones such as millimeter-wave radio. Optical wireless products' similarities with conventional wired optical solutions enable the seamless integration of access networks with optical core networks and help realize the vision of an all-optical network.
How it Works
FSO technology is surprisingly simple. It's based on connectivity between FSO-based optical wireless units, each consisting of an optical transceiver with a transmitter and a receiver to provide full-duplex (bi-directional) capability. Each optical wireless unit uses an optical source, plus a lens or telescope that transmits light through the atmosphere to another lens receiving the information. At this point, the receiving lens or telescope connects to a high-sensitivity receiver via optical fiber.
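One way to quantify the lens-to-lens path just described is a simple geometric (beam-spreading) loss estimate. The divergence, range, and aperture below are assumed illustrative values; real links also see atmospheric attenuation and pointing error, which this sketch ignores:

    import math

    def geometric_loss_db(divergence_rad, range_m, rx_aperture_m):
        """Loss from beam spreading alone: the receiving lens captures only the
        fraction of the beam footprint it covers."""
        beam_diameter = divergence_rad * range_m            # footprint at the receiver
        captured = (rx_aperture_m / beam_diameter) ** 2     # area ratio
        return -10 * math.log10(captured)

    # Assumed: 2 mrad divergence (a ~2 m spot at 1 km) and a 10 cm receiving lens.
    print(f"geometric loss ~ {geometric_loss_db(2e-3, 1000, 0.10):.1f} dB")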
This FSO technology approach has a number of advantages:
- Requires no RF spectrum licensing.
- Is easily upgradeable, and its open interfaces support equipment from a variety of vendors, which helps enterprises and service providers protect their investment in embedded telecommunications infrastructures.
- Requires no security software upgrades.
- Is immune to radio frequency interference or saturation.
- Can be deployed behind windows, eliminating the need for costly rooftop rights.
History
Originally developed by the military and NASA, FSO has been used for more than three decades in various forms to provide fast communication links in remote locations. LightPointe has extensive experience in this area: its chief scientists were in the labs developing prototype FSO systems in Germany in the late 1960s, even before the advent of fiber-optic cable. The original FSO white paper, written in German and published in the journal Nachrichtentechnik in Berlin in June 1968, was authored by Dr. Erhard Kube, LightPointe's Chief Scientist and widely regarded as the "father of FSO technology". While fiber-optic communications gained worldwide acceptance in the telecommunications industry, FSO communications is still considered relatively new. FSO technology enables bandwidth transmission capabilities that are similar to fiber optics, using similar optical transmitters and receivers and even enabling WDM-like technologies to operate through free space.
The Technology at the Heart of Optical Wireless
Imagine a technology that offers full-duplex Gigabit Ethernet throughput. A technology that can be installed license-free worldwide in less than a day. A technology that offers a fast, high return on investment (ROI).
That technology is free-space optics (FSO).
This line-of-sight technology approach uses invisible beams of light to provide optical bandwidth connections. It's capable of sending up to 1.25 Gbps of data, voice, and video communications simultaneously through the air — enabling fiber-optic connectivity without requiring physical fiber-optic cable. It enables optical communications at the speed of light. And it forms the basis of a new category of products — optical wireless products from LightPointe, the recognized leader in outdoor wireless bridging communications.
This site is intended to provide background and resource information on FSO technology. Whether you're a student, an engineer, an account manager, a partner, or a customer, this site provides the FSO insight you may require. And for providing high-speed connections across enterprises and between cell-site towers, it is the best technology available.
FSO is a line-of-sight technology that uses invisible beams of light to provide optical bandwidth connections that can send and receive voice, video, and data. Today, FSO technology, the foundation of LightPointe's optical wireless offerings, has enabled the development of a new category of outdoor wireless products that can transmit voice, data, and video at bandwidths up to 1.25 Gbps. This optical connectivity requires neither expensive fiber-optic cable nor the spectrum licenses needed for radio frequency (RF) solutions. FSO technology requires only light. The concept is similar to optical transmission over fiber-optic cables; the only difference is the medium. Light travels through air faster than it does through glass, so it is fair to classify FSO technology as optical communications at the speed of light.