T H E   W H I T E   H O U S E

4. Information and Communication

Perhaps more than any other technical area, information and communication technologies are what make our society "modern." The ability to rapidly access and share vast amounts of information has been the driving force in economic growth and improved quality of life in the latter part of the twentieth century. Accordingly, information and communication technologies are essential to meeting national goals in economic growth and national security, and in helping other technical areas to realize their full potential. The technologies identified as critical include those contributing to the leading edge of:

Components
Communications
Computer Systems
Information Management
Intelligent Complex Adaptive Systems
Sensors
Software and Toolkits

Of these areas, components, computer systems, communications, information management, and software and toolkits have the highest potential to contribute to economic growth. Computer systems (particularly high performance systems), intelligent complex adaptive systems and sensors have significant potential to contribute to national security.

With the exception of high-definition displays and high-resolution scanning, the U.S. is ahead, or at least at parity, in almost all the fields comprising the Information and Communication technology category. The U.S. invented and widely deployed such technologies as UNIX, the Ethernet, the Internet, LANs, and most of the field of artificial intelligence; U.S.-developed operating systems for personal computers are the world standard; our digital HDTV plans lead the world. The National Display Initiative will help to fill the gap in high-definition displays, and in most areas where we have some weakness, U.S. firms are forming alliances with other firms in Japan and Europe, leading to multinational initiatives. The specifics of the assessment are presented in the text of the section. The summary of the U.S. relative position and trends from 1990 to 1994 is shown in Figure 4.1.


Components

Components comprise two general areas of technology. Data storage--the ability to retain information for future retrieval--is at the heart of most computing and networking applications. Peripherals--components of a computer concerned with input and output--that remain critical to future development include displays and scanners.

High-density data storage

High-density data storage comprises three main technologies: random access memory (RAM), high-density magnetic storage, and optical storage. (Biomolecular electronics is part of the Living Systems section.) High-density random access memory is contained on semiconducting materials lined with electric transistor circuits making up an integrated circuit. Advances in random access memory are simply the extension of current trends toward more complex integrated circuits for the purpose of supplying more functionality at reduced cost. RAM's relatively simple circuit designs serve as an excellent testbed for advanced semiconductor manufacturing technology, as well as for creating and sustaining the information infrastructure.

High-density magnetic storage is characterized by the ability to store information as a pattern of magnetic domains on a thin layer of ferromagnetic material on the surface of a disk or tape. To meet the demands of the computer, the recorded information must have very high density (that is, each bit must occupy a very small area), and reading and writing must be done at very high speed. The highest densities available commercially are 20 megabits per square centimeter for tape and 10 megabits per square centimeter for rigid disks. The theoretical limit to magnetic recording density is very high--about 16 gigabits per square centimeter for media based on iron. However, advances in other technologies such as recording heads are required before engineers can approach these densities. High-density magnetic storage is critical to the success of the National Information Infrastructure and the National Electronics Manufacturing Initiative. It also has a significant array of applications in national defense.
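To put these densities in perspective, a quick back-of-the-envelope calculation converts the figures above into the average area available to a single bit:

```python
# Rough bit-cell areas implied by the storage densities cited above.
# Density figures are taken from the text; units are bits per square centimeter.

def bit_cell_area_nm2(bits_per_cm2):
    """Average area available to one bit, in square nanometers."""
    cm2_in_nm2 = (1e7) ** 2          # 1 cm = 1e7 nm, so 1 cm^2 = 1e14 nm^2
    return cm2_in_nm2 / bits_per_cm2

rigid_disk = 10e6      # 10 megabits/cm^2, best commercial rigid disks
tape       = 20e6      # 20 megabits/cm^2, best commercial tape
iron_limit = 16e9      # ~16 gigabits/cm^2, theoretical limit for iron media

for name, density in [("rigid disk", rigid_disk),
                      ("tape", tape),
                      ("iron-media limit", iron_limit)]:
    print(f"{name:>16}: {bit_cell_area_nm2(density):,.0f} nm^2 per bit")
```

At the iron-media limit each bit gets roughly 6,250 nm^2, about 1,600 times less area than a commercial rigid-disk bit of the period, which illustrates why recording-head advances are the gating factor.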

Optical storage systems offer high information density per unit area. Compared to magnetic storage systems, optical systems have generally had slower access times, but they offer significant advantages, for example the prospect of three-dimensional storage. Optical storage media also offer significant advantages for military applications, particularly those where magnetic storage would be considered a vulnerability or where high temperatures would disable systems. Applications include radar and other detection/targeting systems. Storage in optical form enables writing data to, and reading data from, storage media without physical contact with the media, thus reducing wear and providing opportunities for the use of a number of different media. CD-ROM is becoming increasingly popular for the input of text, images, and data that need only be written once. Optical and magneto-optical storage remain active basic research areas, with research efforts in the U.S., Japan, and Europe in particular.

Parallel disk storage can theoretically occur on any of the storage media currently available or in development, including diamagnetic and paramagnetic materials, ferromagnetic materials, and antiferromagnetic materials. The challenges to parallel disk storage development come in increasing the density and speed of recording activities. Currently, there is a tremendous challenge in the design of recording heads. Competing magnetic and magneto-optic technologies are both still in development. In addition, advances in software will need to accompany hardware developments in order to realize the full potential of this technology.

Parallel disk storage can significantly enhance warfighting capabilities. Military applications, including ballistic missile controls and multi-theater troop management, pose considerable challenges for rapid data storage and retrieval. Health applications, particularly biomedical research, often require similarly high-density, rapid-access disk storage and could benefit from advances in this technology.

Enhanced storage capacity will contribute to job creation in the information sector, and will help improve the competitiveness of the manufacturing sector. It will be critical to improved health and education of the U.S. population. Among other things, improved delivery of health care depends on improved high-density storage as the volume of information about health care treatments continues to grow, and as the need to maintain patient information increases. Patient information will need to be stored in three dimensional images in the future, greatly increasing the need for enhanced data storage and high resolution displays.

Overall, the United States has a slight technology lead in data storage technology, including a lead in rigid magnetic disk drives, the largest segment of the $65 billion worldwide computer peripheral equipment market. Producers such as IBM and Fujitsu, building disk units for their own equipment, account for nearly half of total production, with the remainder, the so-called merchant market, supplied primarily by five U.S. vendors--Seagate, Conner Peripherals, Quantum, Maxtor, and Western Digital. Leadership in rigid disk drive technology and fierce price competition have enabled these five U.S. manufacturers to dominate world markets. These firms are clear technology leaders in such advanced magnetic disk storage developments as high-performance magnetoresistive head assemblies and glass substrates. Japanese companies are forming alliances with U.S. firms, exchanging their production know-how for U.S.-made designs.

Japanese, Korean, European, and U.S. firms are approximately equal in advanced DRAM technology--a significant change from the situation several years ago when Japan appeared in position to use its market share dominance to also attain a strong lead in technology. Leading firms from all regions are now producing 16M DRAMs, have developed 64M devices, and are well along in research on 256M designs. Japanese firms frequently dominate at technical conferences with the latest DRAM technology, but competitors generally have introduced DRAMs to the market at about the same time as the Japanese. The South Koreans, by concentrating government and industry efforts on the DRAM sector, are approaching--and, in the case of Samsung, exceeding--the Japanese in some aspects of DRAM device technology and manufacturing productivity. Some industry experts have rated Samsung's 16M DRAM as the world's most advanced and noted that Samsung also has the most advanced processes for producing them. In addition, Samsung beat the Japanese in moving to eight-inch wafer technology and equipment from the older six-inch technology for the production of its 16M DRAMs. As a result of its strength in device and production technology, Samsung became the first company to produce a million 16M DRAMs per month--a striking contrast to its two and a half year lag behind the Japanese leaders in reaching production of a million 256K DRAMs per month in the mid-1980s. The challenges of developing technology for future DRAMs and the billion dollar costs of fabrication facilities are forcing most firms into international research and production alliances. These alliances are resulting in a transfer and sharing of technology, which should keep DRAM technology roughly equal in all major geographic regions. 
Hitachi and Texas Instruments are cooperating on 64M DRAM R&D; IBM, Germany's Siemens, and Toshiba on 64M DRAMs and 256M DRAMs; Hitachi and South Korea's Goldstar on 16M DRAMs; and South Korea's Samsung and Japan's NEC on 256M DRAMs.

High-definition displays

"High-definition displays" is the term applied to a range of technologies that provide simultaneous presentation of high-resolution images, full color depth, and smooth motion to take full advantage of the capabilities of the human eye. Many promising alternatives are emerging for display technology. Color liquid-crystal displays with diagonal sizes greater than 25 centimeters are currently in production. Emerging flat panel displays that will be superior in resolution and weight to cathode ray tubes are being researched. Plasma displays, which use photoluminescent phosphors excited by an ultraviolet gas discharge, offer significant possibilities for application in the near term. Researchers are also exploring the possibilities of full color, active matrix liquid crystal displays for portable computers. Another promising technology is liquid crystal based on ferroelectric materials, which allow displays to have a wider viewing angle.

Warfare in the future will become increasingly dependent on these technologies. In particular, flat-panel displays will be widely used to update outmoded technologies. For example, much of the sensor and imaging information used on military transports is presented in an extremely complex fashion. Flat-panel displays will provide the capability to integrate these complex interfaces into simple, large screens. Flat-panel displays are also far more reliable than current CRTs, consume less power, and are much lighter and more compact. All of these factors translate into large cost savings in the maintenance of military systems requiring displays. Advanced display systems will be used in fighter planes, helicopters, and by individual soldiers carrying personal digital systems.

High-definition displays also promise to have many applications in the commercial world, ranging from medical imaging, to computers, to avionics displays in commercial aircraft, to high definition television (HDTV). High-resolution displays are critical to meeting the President's goals for the interface between the Next Generation Vehicle and the Intelligent Vehicle Highway System. Success in manufacturing an efficient and cost-competitive display could create a large number of new jobs in both the display industry and in spill-over sectors, including other high-volume electronic manufacturing.

Japan has used its dominant market position to acquire a substantial technology lead over U.S. firms in manufacturing of flat panel displays (FPDs). The primary computer application of FPDs is currently the notebook computer, but most industry experts believe FPDs will capture a portion of the desktop workstation market as production technology improves. Japanese firms dominate the FPD market with 90 percent of the $4.5 billion market in 1993, and 95 percent of the $2.2 billion market for active matrix liquid crystal displays (AMLCDs)--the most advanced FPD currently available for computer applications. U.S. and European firms are minor players in both markets. The technology strengths of Japanese companies are primarily in production know-how gained through experience in high-volume manufacturing, and these strengths are formidable. Japanese firms were the first to commit to full scale manufacturing of AMLCDs in the 8-10 inch sizes needed for notebook computer applications. Since 1991, these firms have improved yields from 5 percent to nearly 80 percent, as production rates increased from pilot levels to 100,000 per month at some facilities. The top producer and technology leader, Sharp, demonstrated a number of advanced prototypes in 1993, and is constructing an advanced large-scale production facility.

In recent years U.S. firms have made substantial advances in manufacturing equipment and process technology, in part, through R&D sponsored by the Advanced Research Projects Agency. For example, U.S. strengths in next generation photolithographic steppers for AMLCDs, excimer laser annealing equipment, advanced chemical vapor deposition machines, and in-line inspection, test, and repair technology are narrowing the AMLCD technology gap with Japan. If firms in the United States pursue large scale AMLCD production either independently or through the National Display Initiative, they will probably achieve technological parity with Japan by the turn of the century. European firms are currently at technological parity with U.S. firms, but have a limited degree of production experience. They also significantly lag U.S. capabilities in next generation production equipment and are therefore likely to continue to lag.

High-resolution scanning technologies

Laser scanning is an emerging application in high-resolution scanning technology. Laser scanners are capable of digitizing virtually any black and white or color image into a format that is readily archived and processed by image processing/enhancement techniques. Work in this area eliminates film and the associated processing chemicals, substituting an appropriate detector and digitizing chain.

While it has significant potential for both commercial and military applications, laser scanner technology is at least five years from having a significant impact in the marketplace because of problems with software. High-resolution scanning technology would have a direct and highly significant impact on the ability of health care workers to process medical information.

Japanese government and industry researchers have long been interested in image processing technology, in large part because of requirements for working with the Japanese language. In addition, Japanese firms have leading-edge capability in optoelectronics and other technologies related to scanning. As a result, Japan has a slight lead over the United States in advanced laser scanning image processing techniques. NEC, one of the leading Japanese firms in this area, is pursuing medical and other applications.


Communications

In communications, global access to increasing amounts of information, as exemplified by the World Wide Web on the Internet, will contribute to the increasingly rapid diffusion of knowledge to more and more people. Network-based finance and commerce are giving users a decided competitive advantage in an increasingly global marketplace. Advances in the ability to provide networked services will contribute particularly to the goal of harnessing information technology for economic growth, but the potential of networking as a tool for competitiveness, security, and education is nearly endless.

Data compression

Current compression technology falls into four categories: data (numerous techniques), audio (ADPCM), image (JPEG), and motion video (MPEG). Virtually any time information is transmitted or stored, there is an opportunity to pack it into a more efficient format than the one used for presentation or application. The cost of bandwidth, power, memory, and storage media, and the competition for them, require that they be treated as valuable resources.

For example, one key to the ability of high definition TV (HDTV) to send twice as many lines of video per image frame and, correspondingly, twice as many pixels per line (four times as many pixels per image), is compression. The United States is finalizing its own unique HDTV standards, in part to limit foreign competition domestically. However, recent mathematical breakthroughs in wavelet theory, which yields transforms with more compact support than the JPEG cosine transform, indicate that we can achieve far greater compression ratios, depending on the application. MPEG2 is likely to become the U.S. HDTV standard.
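The arithmetic behind that four-fold pixel increase, and the raw bit rate that makes compression unavoidable, can be sketched as follows (the baseline frame size and the 30 frames-per-second, 24 bits-per-pixel figures are illustrative assumptions, not taken from the text):

```python
# Doubling both the line count and the pixels per line quadruples
# the pixel count per frame. Baseline figures below are assumptions
# chosen to resemble a conventional broadcast frame.

base_lines, base_pixels = 480, 720          # assumed conventional frame
hd_lines, hd_pixels = 2 * base_lines, 2 * base_pixels

base_frame = base_lines * base_pixels       # pixels per conventional frame
hd_frame = hd_lines * hd_pixels             # pixels per HDTV frame
print(hd_frame // base_frame)               # -> 4: four times the pixels

# Raw bit rate at an assumed 30 frames/s and 24 bits per pixel:
raw_bps = hd_frame * 30 * 24
print(f"{raw_bps / 1e6:.0f} Mbit/s uncompressed")
```

Under these assumptions the uncompressed stream is roughly 995 Mbit/s, far beyond a broadcast channel, so compression ratios on the order of 50:1 or more are required.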

Data compression is essential to harnessing information technology since it enables the enormous data flows involved. It also supports other national goals, such as enabling advances in health care and education technologies and allowing U.S. companies to create communication and video equipment which is competitive in world markets.

Data compression has extensive applications for national security and warfighting. The enormous volume of imagery required for our advanced strategic and tactical planning requires that the imagery be appropriately compressed at each transmission. The initial transmissions from the sensor must be lossless, to facilitate photo-interpreter processing and annotating. Conversely, the imagery provided to those carrying out the mission, e.g., gunners, can withstand significant lossy compression without reducing its utility as long as they know what target they are looking at and where that target is located.
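The lossless/lossy distinction drawn above can be illustrated with a toy sketch: run-length encoding standing in for lossless compression, and coarse quantization standing in for lossy compression (both chosen for brevity; neither is a fielded military codec):

```python
# Run-length encoding is lossless: the original data are recovered exactly,
# as required for photo-interpreter processing of sensor imagery.
def rle_encode(data):
    out = []
    for x in data:
        if out and out[-1][0] == x:
            out[-1] = (x, out[-1][1] + 1)   # extend the current run
        else:
            out.append((x, 1))              # start a new run
    return out

def rle_decode(pairs):
    return [x for x, n in pairs for _ in range(n)]

scanline = [0, 0, 0, 255, 255, 0, 0, 0, 0]
assert rle_decode(rle_encode(scanline)) == scanline   # exact round trip

# Quantization is lossy: fine detail is discarded and cannot be recovered,
# but the result can remain serviceable for a mission display.
def quantize(data, step=16):
    return [(x // step) * step for x in data]

print(quantize([0, 7, 130, 255]))   # -> [0, 0, 128, 240]
```

The round trip through `rle_encode`/`rle_decode` is bit-exact, while `quantize` maps many inputs to the same output, which is precisely why lossy compression achieves much higher ratios.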

The United States has a slight lead over Japan and Europe in data compression components and algorithms. The U.S. lead is evident from such original work as a high-definition television digital compression standard that leap-frogged heavily funded foreign efforts to develop analog HDTV systems. The international research community in this area is very tightly connected (especially in academia) so that progress is very quickly shared, resulting in a short time between original result and its use by other organizations. China and Russia generally lag the U.S. by several years, although some outstanding theoretical work has been done in Russia.

Signal conditioning and validation

Virtually every signal, analog or digital, is acted upon by its environment--the signal is contaminated to some degree, and that contamination can have major impacts. Signal conditioning is the application of technology prior to, during, and after the exposure of a signal to its environment. In the analog domain, signal conditioning includes output line drivers that impress far more power on the pristine signal than can be countered by a noisy environment. Input line receivers are matched to the line which conducts the signal from its source. They perform the matching operation which minimizes distortion and optimizes coupling with successive circuits. In the digital domain, signal conditioning includes adding error detection codes to signals which are about to be sent through a contaminating medium. Decoding circuits, at the receiver, determine if specific portions of the signal have been changed. Damaged data are usually discarded and re-requested. Error correcting codes add more overhead at both the transmitter and receiver, but they enable the reconstruction of the damaged data. Additionally, an entire field of reconstruction, compensation, enhancement, etc., exists whereby contaminated data can be made more usable.
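A minimal sketch of the digital error-detection path described above, using a single even-parity bit (far simpler than deployed error detection codes, but it shows the detect-and-re-request pattern):

```python
# Digital signal conditioning, minimal form: append an even-parity bit
# before transmission; the receiver can then detect (but not correct)
# any single-bit error and discard/re-request the damaged frame.

def add_parity(bits):
    """Append a bit that makes the total number of 1s even."""
    return bits + [sum(bits) % 2]

def check_parity(frame):
    """True if the frame passes the parity check (no odd number of flips)."""
    return sum(frame) % 2 == 0

frame = add_parity([1, 0, 1, 1, 0, 0, 1])
assert check_parity(frame)            # clean channel: frame accepted

corrupted = frame.copy()
corrupted[2] ^= 1                     # the channel flips one bit
assert not check_parity(corrupted)    # receiver detects damage, re-requests
```

Error *correcting* codes extend this idea with more redundant bits so the receiver can also locate and repair the flipped bit instead of re-requesting, trading overhead for fewer retransmissions.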

The primary contribution of signal conditioning and validation is to harnessing information technology. Electronic transactions of all kinds--including banking, stock and bond trading, medical diagnoses, and distributed interactive simulation--require some appropriate form of conditioning, depending on the required fidelity of the data. Crashes on the information superhighway would result in bad decisions, wrong answers, incorrect numerical data, and incorrect names and destinations.

Errors in the national security domain could result in similar "crashes," but these would carry a much higher cost: casualties, materiel, infrastructure. This is why signal conditioning is very important for national security and warfighting. In general, military doctrine overlays a set of checks and balances because of the great potential losses involved. Digitization of the battlefield, increased use of imagery and data, routing via commercial space, etc., all put a greater emphasis on signal conditioning to prevent erroneous information from being applied to key decisions.

The United States is on a par with Europe and Japan in signal conditioning and validation technologies. The literature and codes are open to the market in general. Virtually every chip, manufactured anywhere, has embedded conditioning circuits which enable it to interface with the other chips on the circuit board. Therefore, other countries are able to follow the example of the leaders, and compete by virtue of their cost-effective production capabilities. With the proliferation of electronics, these essential capabilities have become rather commonplace geographically.

Telecommunications and data routing

The function of routing data using telecommunications involves a number of technologies and applications. The first three applications--long-haul terrestrial, subscriber loop, and undersea transmission--are generally associated with telephone services. Two others--local-area network and metropolitan area network--are generally associated with computer services. The differences between these applications are blurring, however, in the face of intense worldwide efforts to achieve interoperability (compatible operation) across diverse network architectures and diverse hardware implementations of networking equipment. Network transmission technologies include signal carriers such as conducted electronic signals and radio frequency waves, and transmission media such as twisted pair wires, coax, free space, and optical fibers. In the past, most of this information used analog techniques. Digital techniques are being employed increasingly in almost all new network systems, posing significant challenges to the applications and technologies of communication routing.

There are active research programs in the U.S. and overseas providing the framework for the higher-speed networking services that will be needed as demand grows. There is a robust industry providing the necessary infrastructure, both for local and wide-area services. There are concerns about the structure of the protocols that will be needed for higher-speed networking, but the international standards community is actively studying this area. There is increasing use of digital (as opposed to analog) circuitry for networking, with fiber optics the dominant technology for exploiting this. Wireless networking is also a very active developmental area at the present time. Regulatory issues (such as the allocation of channels) are often as important in this area as the technical issues.

Networking is most important for harnessing information technology which will, in turn, enable the nation to realize various other goals. It will enable the sharing of vast amounts of information and advanced software available for improved health care and education; it will enable efficient use of computer-aided design and manufacturing techniques for greater competitiveness and productivity; and it will allow the strategic and tactical decision-makers to get military information in a timely manner, which will increase U.S. national security and improve warfighting capabilities.

Europe and Japan both lag slightly behind the United States in switching and transmission technology for public telecommunications networks. Two technologies for broadband networks are having a revolutionary impact on switching and transmission--asynchronous transfer mode (ATM) switching and new high-speed transmission systems called synchronous optical networks (SONET) in the United States and synchronous digital hierarchy (SDH) in Europe. These technologies are blurring lines between switching and transmission, hardware and software, private and public networks, and telecommunications and computer technology.

AT&T was the first to develop ATM technology and the United States remains the technology leader, with a number of trials underway and some commercial ATM services already available. The U.S. ATM lead is underscored by strong capabilities in ATM semiconductors and technology leadership in network operational support systems. European firms are slightly behind but ATM technology has been a major focus of research conducted under the European RACE program. An ATM trial involving 18 operators in 15 countries is giving European firms valuable test data with which to improve their capabilities. Although AT&T and Canada's Northern Telecom are participating, European vendors are the main suppliers and could catch up as they build on their experience and their huge installed base worldwide. European vendors rely on U.S. software developers for some important network operating systems, however.

Japanese ATM capabilities also lag slightly, despite some notable achievements. Fujitsu was the first to introduce a commercial ATM switch, followed shortly thereafter by NEC; Fujitsu has a contract with a U.S. operator for volume sales of its ATM switch. NTT has committed considerable R&D effort to ATM technology for its proposed broadband network and has begun limited deployment of ATM switches. These early achievements in the emerging U.S. ATM public switch market are in sharp contrast to Japanese failure in the U.S. market for the current generation of digital switches. However, as stronger switching manufacturers introduce more capable ATM switches, limitations of the Japanese switches--particularly in software--are more apparent. Future Japanese technology improvements are expected to lag the already stronger U.S. and European ATM leaders. Japan's technology position could continue to improve, however, as both Fujitsu and NEC have moved production of ATM switches to U.S. facilities, and are improving software capabilities through active participation in U.S. standards organizations, and cooperation with U.S. software firms.

Foreign firms now are only slightly behind the United States in longhaul transmission technology, with some research equal to the best U.S. efforts. The United States, by 1985, had the first standardization effort for synchronous digital transmission system technology; but by the following year, the CCITT began work on a competing standard, which is now called SDH. Many of the key conflicts between the two standards were resolved in 1988, resulting in a surge of interest by both equipment manufacturers and public telephone operators. Japanese and European firms now produce equipment with features and speeds similar to U.S. manufactured equipment. AT&T's long distance division has even bought some Alcatel SDH digital cross-connects in preference to equipment made by AT&T's Network Systems. United States firms retain a slight lead because European manufacturers often rely on U.S. sources for some critical components such as microprocessor designs, optoelectronic devices, and system software. Because differences in SDH/SONET technology are only slight, sales are often determined by availability, price, and added features.

U.S. firms have a slight lead in subscriber loop technologies such as digital loop carriers, fiber-to-the-curb, and asymmetric digital subscriber line technologies. Japanese companies continue to have a price advantage in component manufacturing, however; and European firms are improving their subscriber technologies as a result of several RACE sponsored programs and the financial incentives provided by Germany's massive infusion of fiber optics into its eastern states.

Foreign firms could face difficulties in the transmission arena as software becomes an increasingly important determinant of technological leadership. U.S. firms have taken the lead in developing highly efficient operating and network management software for networks with gigabit per second data streams, control software for integrating equipment from different vendors, and complex routing algorithms to handle system failures in bi-directional rings. Foreign firms are gaining access to some U.S. software expertise to offset weakness; Fujitsu and Alcatel currently base their SONET network management systems on U.S. software.

Foreign firms trail in mobile data such as cellular digital packet data and other wireless systems. U.S.-developed CDMA technology, proposed for both cellular and satellite-based systems, may have advantages over the European alternative for data. The United States was also the first to develop wireless LANs, including radio-frequency and infrared systems. Europe lags slightly in mobile data and is close to parity in wireless PBXs and wireless LANs.

The United States is far ahead of Japan and Europe in local area network (LAN) technologies, which include PC LAN network interface cards or adapters, network operating systems, intelligent wiring hubs, and internetworking devices such as bridges and routers. European vendors have some strengths in systems integration to the end user--which is gaining in importance as LAN interconnect becomes a larger market. Japan has a clear lag in LAN technology. In addition to the more limited use of PCs and lack of connectivity, Japanese manufacturers underestimated the early user demand, allowing foreign manufacturers the opportunity to position themselves strongly as the leading providers of these technologies to Japanese end users. Firms in the United States are introducing new technologies so rapidly that Japanese firms cannot keep up. European companies, Canada's Newbridge, and several U.S. firms are at overall technological parity in wide area network (WAN) technologies, which include high-speed transmission lines (up to 1.544 Mbits/s), multiplexers to combine independent data streams, packet switching, and modems. Japanese firms lag significantly in WAN technology, in large part because of limited consumer demand for public data communications services. Most observers believe that demand growth has been slowed by the Japanese telecommunications regulatory environment. In general, rapidly evolving markets are best served by firms that are in very close communication with customer requirements--thus, it is difficult to develop technology leads using an export-led strategy.


Computer Systems

Computer systems are now pervasive. The most critical class of these is the high-performance computing systems, which by definition are the most powerful computing capabilities available at any given time. High-performance computing is the enabling technology for many important commercial and defense-related activities, particularly the management, simulation, and design of complex systems and processes. Advanced aircraft and spacecraft design, for example, use computer simulation rather than physical models for hardware development. The challenges being addressed by the High Performance Computing and Communications Initiative, such as weather prediction and the human genome project, require the most powerful computers possible. Defense use of high-performance computers for weapon design and intelligence analysis continues to give the U.S. a considerable advantage over potential adversaries.

Interoperability

A number of problems arise when computer systems must be made to interoperate with each other. The problem is particularly acute for large-scale integrated systems. Given the custom nature of such systems, open standards have rarely been used in the past for their data management tasks, even where such standards existed. As a result, it is a monumental task to interconnect such systems, as the defense modeling and simulation community has discovered.

The majority of data exchange standards, as well as product data exchange mechanisms, in widespread use today in the computer industry did not evolve through the coordinated efforts of international standard-setting bodies. Instead, most of these standards started as vendor-specific specifications that eventually became widespread de facto standards due to the strong market success of particular products. In many cases, vendors of these products, seeking to capitalize on the economic advantages of owning an industry de facto standard, made their particular specifications widely available, primarily through licensing agreements. Usually, only after a de facto standard has been absorbed throughout the industry do standard-making bodies attempt to define and control such a standard.

Because of the importance of interoperability to military systems, the Defense Modeling and Simulation Office (DMSO) has invested significantly during the past few years to develop appropriate standards. The continuation of such standardization efforts is a critical requirement for the foreseeable future. It is also increasingly important in commercial systems as larger and larger computer systems are used for design, manufacturing, and education.

The bulk of de facto standards in widespread use throughout the commercial computer industry were defined by U.S. computer makers. For example, one of the most widely standardized operating systems, Unix, which includes a rich set of hardware-independent data transfer protocols, was initially developed and widely licensed by the U.S. vendor AT&T. In another case, the standard operating system DOS, and later Windows, provided a widespread, hardware-independent software platform on which a number of applications software vendors base their products. This enabled vendors of different systems to communicate easily, using the operating system's communications facilities as a common language.

Both Europe and Japan have lagged the U.S. in developing their own standards in this area. Neither Japan nor Europe has the worldwide market presence needed to establish a de facto standard. Japanese and European computer makers are still primarily limited to serving their domestic markets and lack strong capabilities to market products successfully internationally. For its part, Japan has concentrated on protecting its company-specific proprietary hardware and software base and has for the most part avoided interoperability issues until only recently. In Europe, most vendors have struggled toward standardized products, but they have been frustrated by the lack of a unified customer base. More so than their U.S. counterparts, European firms face wide-ranging and often divergent user requirements that cut across customer segments--such as the financial industry, manufacturing, and government--as well as across country boundaries. Both Japanese and European firms recognize their lag in this area. Closing the gap will be difficult, as the ability to establish standards arises not from any specific technology capability but as an offshoot of being a major, or at least significant, international supplier in a particular market sector. That status has eluded suppliers from both regions in the past and will continue to be a formidable barrier.

Parallel processing

Parallel processing is the capability to conduct a large number of computing functions simultaneously, offering significant advantages in speed and capacity. Several companies now market so-called massively parallel processors (MPP), consisting of tens to thousands of individual processors and memories interconnected by a variety of methods within the same machine. Successful exploitation of particular multiple-processor architectures, such as systolic arrays and hypercubes, remains a real challenge to the U.S. research community. In addition to technical difficulties with memory capacity, another difficulty facing the MPP field is the lack of appropriate software. These systems are still very difficult to program efficiently, in spite of considerable research investment in the relevant software over the past decade or more. This lack may limit the use of such machines to a small (but important) niche market. Even so, massively parallel processing offers considerable promise, and research in this area continues to flourish both in universities and in commercial companies.

Among the specific parallel processing computing technologies, the one that has drawn the most interest over the past few years has been the so-called MIMD (multiple instruction, multiple data stream) machine, in which each processor executes its own instruction stream on its own data. Many startup companies, often with federal government support, have been formed to exploit this technology. It has the advantage over the traditional vector supercomputer that it is cheaper to build and scales more easily to larger capacity. In particular, there is a consensus that this is the only technology with the potential to reach teraflop speeds (a million million floating point operations per second) in the foreseeable future.
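The division of labor that MIMD machines exploit can be sketched with a brief, modern illustration (the function names and values here are invented for the example, not drawn from this report): a computation is partitioned into independent chunks, each chunk is handed to its own processor, and the partial results are combined.

```python
# Illustrative sketch only: each worker process runs its own
# instruction stream on its own slice of the data, the essence of MIMD.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    """One worker's share of the job: sum of squares over [lo, hi)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    """Split range(n) into chunks and combine the partial results."""
    step = -(-n // workers)  # ceiling division
    chunks = [(i, min(i + step, n)) for i in range(0, n, step)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    assert parallel_sum_of_squares(1000) == sum(i * i for i in range(1000))
```

The appeal noted above--cheaper scaling--comes from adding workers rather than building a faster single processor; the programming difficulty comes from partitioning real problems, whose pieces are rarely this independent.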

Massively parallel computing has direct applications in national defense warfighting and weapons control, in war gaming, in the Partnership for a New Generation of Vehicles, and in the Global Climate Change and Human Genome research programs. While commercial applications are further off than research applications, parallel processing can have a secondary impact on the ability of U.S. science and engineering to maintain world-class status.

The U.S. has a technology lead in nearly all aspects of high-performance computers. Japanese computer firms continue to lag their U.S. counterparts in parallel computer technology and are facing mixed prospects. Japanese computer firms--primarily Fujitsu, NEC, and Hitachi--possess strong capabilities in some key supporting technologies. These firms have, for example, outstanding semiconductor component and circuit interconnection capabilities that could give them a distinct advantage over many of their U.S. counterparts in the hardware area. Japanese capabilities in components and board-level interconnection designs give them tremendous freedom to design innovative architectures. However, Japanese firms must overcome some tough technical hurdles before they can develop commercially successful parallel computer systems. Their most important task will be to close the large gap that exists with leading U.S. firms in parallel operating systems and applications software. Because the capabilities of a parallel system are largely determined by the capabilities of the systems software, any serious attempt by the Japanese to challenge U.S. dominance in parallel computers will require significant improvements in the software area.

The United States and Europe have pursued parallel computing through increasing integration of processors, whereas Japanese efforts have emphasized peak vector processor performance. As a consequence of this difference in strategy, Japan has not produced massively parallel machines on a par with those in the United States. While Japanese multi-processor computers have a much higher theoretical peak performance than do their U.S. or European counterparts, U.S. systems can generally sustain higher performance for important applications. Most Japanese development projects started out emphasizing hardware systems tailored to specialized uses--such as image processing and simulation--but are now moving to more general applications in order to respond to market requirements.

Europe also generally lags the United States in parallel processing computer technology. Individual European countries have the necessary R&D talent but lack the capability to build a marketable product in a timely fashion. Despite numerous government programs to develop computational hardware technology, European firms remain far behind companies in the United States and Japan. Although many organizations within academia and government, as well as in the private sector, conduct high-quality research in this area, most of these efforts will never move beyond the research stage. Many European governments recognize parallel processing as a critical technology for the next generation of high performance computers and have initiated a number of research programs to address Europe's shortcomings. Many of these programs are highly speculative in nature and likely will not result in any significant technology or competitive advantage.


Information Management

The ability to control and manage the increasing amounts of information available to us today remains a major technological challenge. The elements that are particularly critical are: data fusion, i.e., the ability to integrate data from a variety of different sources into a meaningful form; the design and manufacture of increasingly complicated information systems to manipulate and analyze the available information; health information systems and services; and integrated navigation systems that control more and more of the nation's transportation system.

Data fusion

Data fusion involves forming useful relationships between data from different sources to provide salient information which is more readily assimilated. Examples of data fusion are:

  • Sensor fusion: an image is synthesized that combines the key information from disparate sensors, such as infrared and radar. In this case, the key information would emphasize objects that have both the desired thermal and range (or scatterer) characteristics.

  • Visualization: complicated relationships between otherwise abstract or multi-dimensional variables are displayed in some metaphor space. For instance, a three-dimensional space could be constructed that clarifies the "volume" of acceptable combinations of output, cost, time, and risk.
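One common fusion rule can make the sensor-fusion idea concrete. In this sketch (an assumed textbook rule, not a method prescribed by this report), two independent range estimates--say, one from an infrared sensor and one from radar--are combined by inverse-variance weighting, so the less noisy measurement carries more weight:

```python
# Minimal sketch of one data-fusion rule (illustrative assumption):
# fuse two independent estimates of the same quantity by weighting each
# by the inverse of its variance. The fused estimate leans toward the
# less noisy sensor, and its variance is smaller than either input's.
def fuse(x1, var1, x2, var2):
    """Inverse-variance weighted fusion of two scalar estimates."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * x1 + w2 * x2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

# An infrared sensor and a radar estimate the same range (in meters);
# the radar (variance 1.0) is trusted more than the IR (variance 4.0):
estimate, variance = fuse(102.0, 4.0, 98.0, 1.0)
```

Here the fused range lands near the radar's estimate, and the fused variance (0.8) is below both inputs--exactly the "more readily assimilated" salient information the definition above describes.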

On one hand, the information age imposes a certain degree of information overload; on the other, it can provide the filters, "digesters," and human interfaces both to highlight key relationships and to suppress extraneous data. These have direct application to economic optimization by decision-makers and their staffs, and by organizations which provide input to decision-makers. They also provide the tools for modeling and synthesizing new products, such as pharmaceuticals, exotic materials, and lower-cost common materials. Data fusion is therefore an important part of harnessing information technology and of obtaining better information for other endeavors.

Data fusion also has important applicability to national security and warfighting. Highlighting key relationships while simultaneously suppressing extraneous data can directly support tactical and strategic decisions made by commanders, who would be presented with a digitized battlefield plus its companion decision support system. Real-time data concerning locations of friendly and enemy forces, casualties, and resource expenditures (such as ammunition, fuel, food, and water) can be presented in a manner customized for the particular style of each commander who receives it. Other key applications are in reducing casualties and in force multiplication. Automatic Target Recognizers, the Pilot's Associate, and other automated aids help both to reduce the vulnerability and to increase the lethality of our numerically limited forces.

The United States is probably at the forefront of this immature field. Japan is next, with Europe close behind. Because of its history of leadership in supercomputing, remote sensing, and movie making, the U.S. has the most experience in operating on complex data to put it into a useful form, be it displaying turbulent flow, multispectral satellite imagery, or complex, imaginary worlds on the silver screen. The new NASA EOSDIS (Earth Observing System Data and Information System) will be the biggest user of data fusion in the world. However, discriminating salient information from a morass of multi-dimensional data, and then operating on it to produce cognitively transparent relationships, remains a challenge. The understanding of how to do this will likely evolve in response to parallel evolution in algorithm development, new interfaces such as virtual realities, and an improved understanding of the cognitive process itself.

Large-scale information systems

Large-scale information systems contain millions of lines of code, usually distributed over many computers and geographic sites. Price tags for such systems in the tens to hundreds of millions of dollars are not uncommon. They are needed for a variety of complex tasks upon which the U.S. economy is based, such as airline ticket reservation systems; the Federal Aviation Administration's (FAA) flight control systems; national and international banking networks exchanging billions of dollars of transactions per day; control software for nuclear power plants; and thousands of other such applications. These systems tend to be produced in small quantities, ranging from a single copy to tens of copies, and in each case the system is customized. Their development is routinely plagued by cost and schedule overruns, and not infrequently by the abandonment of entire multi-million-dollar development projects (e.g., the recent attempt by the California Department of Motor Vehicles to integrate driver's license records with vehicle registrations).

Such systems are vital to the U.S. economy, since they are the basis of critical systems upon which many of our transactions depend. These large-scale systems are also needed to operate and control U.S. defense logistics, command and control, intelligence gathering and dissemination, and a variety of other military operations. The ability to create these systems effectively, and to meet schedule and cost goals, is critical to their future development.

Large-scale systems tend to have a strong dual-use potential. For example, a system for Olympics-level sporting competition management would require the integration of high-speed, redundant, computers, local area networks, large-scale graphics displays, communications protocols, database management techniques, and so on. The same capabilities would be required in a system used for military logistics.

Both Japan and Europe lag the United States in the underlying software technology needed to design and build large-scale information systems. In database software, Japanese dependence on custom software has retarded development because such development has proven too expensive for the single-client products that comprise the bulk of Japanese software development. The larger Japanese computer firms have traditionally concentrated their development activities on mainframe platforms and have poor programming skills for the PC and distributed computing platforms that form the basis for distributed information applications. In network software, Japan's slower development of the PC market forced Japanese users to rely on foreign networked products. As a result, Japanese firms are attempting to catch up, primarily by forming alliances with leading U.S. network software firms. European users, for their part, have developed capabilities in interconnection software for their systems. Most of the firms supplying products, however, look to U.S. counterparts to define the technology and implement the first products.

Health information systems and services

Integrated health information systems will be at the heart of the decision support systems needed to enable physicians, payers, and patients to choose among an array of evolving alternatives. They will also be essential to the decision support systems needed to enable community, state, and national public health officials to detect emerging health threats and to allocate resources among competing public health problems that affect populations as they age. Capitalizing on the natural experiments occurring in health care settings, and identifying and evaluating potentially cost-effective alternatives for improving the personal and public health care systems, is an ongoing challenge.

Clinical decision making must be improved; computer-aided diagnostic systems could enable efficient selection by physicians and patients among an increasingly complex range of alternatives in diagnosis and treatment. Improved outcome data on comparable and competing approaches will provide the basis for informed patient management and the evaluation of alternative service structures. While many isolated systems have been developed and demonstrated in well-bounded settings, the availability of an integrated system of information on clinical practice, patient management, and outcomes is still on the far horizon. Because most databases are administrative in nature, few contain any meaningful clinical or outcome information; and because of the nature of our health insurance system, they do not contain population-based or longitudinal data. Emphasis must be placed on linking the ultimate outcomes and health status changes for many patients over time with data regarding the specific preventive, clinical, and treatment interventions, as well as the effects of financing structures, organization, demographics, procedures, guidelines, and care processes. Cost-effective public health surveillance requires the ability to link aggregate data obtained from the personal care system to regional, state, and local data on environmental pollution, occupational hazards, disease vectors, etc.

Such work is dependent upon development of data systems, agreement on standardized data elements to be collected for computer-based patient records, and administrative data files as well as consumer surveys. A major focus of activity also needs to be the linkage of existing personal care and public health data systems and incorporation of meaningful clinical and outcomes data (measures and reporting formats) that can be electronically exchanged. The results should improve the ability of physicians to stay abreast of state-of-the-art treatments and outcomes in specific circumstances, and enable patients to take a more actively informed role in their own health care. The broader acceptance of telemedicine and the emergence of virtual medical groups may enable more fully informed decisions to be made in remote clinics or community hospitals. These systems would also provide increased national security by allowing the military to minimize battlefield casualties by facilitating out-of-theater support for limited forward medical teams.

The U.S. has developed a broad technology base at the sub-systems level in both hardware and software, but has not yet been able to benefit from the synergy of decision support systems, networking and communications, and large-scale data storage and retrieval capabilities. The development of standards for data exchange is in progress to provide the commonality of definitions, messaging, and data formats that will be necessary to link large, presently independent systems. As part of the High Performance Computing and Communications (HPCC) program, health care related systems include testbed networks and collaborative applications to link remote and urban patients and providers to the information they need. This includes database technologies to collect and share patient health records in secure, privacy-assured environments; advanced biomedical devices and sensors; and the system architectures to build and maintain the complex health information network. Virtual reality technology is being used to simulate operations for medical training, and is being combined with teleoperator technology for remote surgical procedures. Graphic image reconstruction software and visualization techniques are being combined with high-resolution serial sections and CT and MRI imagery to develop a virtual atlas of human anatomy for training and education.

Yet the history of further development into commercial products with multi-institutional adoption has not been encouraging, encountering a host of non-technical barriers unrelated to a system's ability to improve patient management and outcomes or to reduce length of stay and related health care costs. The barriers include physician resistance, cultural differences, and conflicts between societal values. Confidentiality, privacy, and security issues may be an ongoing challenge to system implementors because a patient's right to privacy has been defined by the U.S. Court of Appeals as a constitutional right, which is more compelling than "just" a statutory right, and administrative convenience was not deemed a defense against compromising patient privacy. To benefit fully from the capabilities of integrated health care information systems, these systems will have to bridge a multiplicity of medical institutions, third-party payer reporting requirements, and even widely varying state-by-state public health reporting requirements. Telemedicine and the interstate transfer of medical decision support software have already met regulatory challenges demonstrating that 60-year-old policies need to be revised.

The breadth of U.S. AI/knowledge-based support systems, often developed at universities with close working relationships to major medical research centers, has given the U.S. an unsurpassed base of experienced researchers and demonstration projects. Networking and communications infrastructure will play a significant role in widespread application, again an area benefiting from U.S. leadership. European interest in Medical Informatics has also been strong for over 20 years, with a history of involvement with AI/decision support research. While the Japanese have shown a strong interest in quantitative medicine and the application of their biosensor technologies, they have not had the extensive development of AI/decision support systems, and have been further limited by user interface, networking, and institutional issues. Universal solutions are unlikely, as an integrated system must encompass patient and physician education, medical diagnosis and patient management, third-party payers, and national public health data acquisition needs while complying with individual privacy rights. Implementation will be incremental: modules will be developed, certified, and integrated locally and regionally. While there will be strong economic incentives for commonality, cultural differences will play an enormous role in actual utilization.

Integrated navigation systems

Navigation systems are good examples of large-scale integrated systems. Each commercial airliner and large seagoing vessel includes a system for this purpose. Many such systems are customized for the particular vessel. In other words, although the individual components may be similar, the whole system is one-of-a-kind. Land-based systems for the command and control of inland waterways are another example.

The greatest change in the past few years in this area has been the increasing use of the global positioning system (GPS) for navigation. By now, even the smallest seagoing vessels rely on a GPS receiver for this purpose, as do increasing numbers of airplanes. GPS use is projected to bring about major changes in the commercial air traffic control system in the next few years, which will require redesign of most of the airline and FAA navigational systems now in use.
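The geometric principle behind GPS positioning can be illustrated with a simplified two-dimensional sketch (real receivers solve in three dimensions and must also estimate a clock offset; the beacon positions and ranges below are invented for the example): given ranges to beacons at known locations, subtracting one circle equation from the others leaves a linear system for the receiver's position.

```python
# Hypothetical 2-D trilateration sketch (not a real GPS algorithm):
# from |p - b_i|^2 = r_i^2 for three beacons b_i, subtracting the first
# equation from the other two gives two linear equations in (x, y).
import math

def trilaterate(beacons, ranges):
    """Solve for (x, y) from three beacons [(x_i, y_i)] and ranges [r_i]."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    r1, r2, r3 = ranges
    # Linear system A @ [x, y] = c, obtained by subtracting circle 1
    # from circles 2 and 3 (the quadratic terms cancel).
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    c1 = r1**2 - r2**2 + x2**2 + y2**2 - x1**2 - y1**2
    c2 = r1**2 - r3**2 + x3**2 + y3**2 - x1**2 - y1**2
    # Solve the 2x2 system by Cramer's rule.
    det = a11 * a22 - a12 * a21
    return (c1 * a22 - c2 * a12) / det, (a11 * c2 - a21 * c1) / det

# A receiver at (3, 4) measured against beacons at known positions:
beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
truth = (3.0, 4.0)
ranges = [math.dist(truth, b) for b in beacons]
x, y = trilaterate(beacons, ranges)
```

With noisy ranges and extra beacons the same idea becomes an overdetermined least-squares problem, which is closer to what integrated navigation systems actually solve.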

Given the specialized nature and the large scale of the systems required for integrated navigation, there are a limited number of enterprises that produce them. Although there are more such enterprises in the U.S. than elsewhere, bidding in this area is very competitive, and organizations in both Japan and Europe have been equally successful in competitions against U.S. firms. For example, Mitsui in Japan has been successfully building maritime navigation systems for many years, and Siemens in Germany has built major control systems for a variety of applications.


Intelligent Complex Adaptive Systems

One of the most critical challenges facing the information and communications community is the ability to include "intelligence" in the systems needed to manage many of the more onerous and difficult tasks, particularly those that involve hazardous materials or situations.

Autonomous robotic devices

Robotics, in its full meaning, refers to an autonomous system, capable of responding to much greater uncertainty in its environment than the flexible manufacturing systems currently in use. As robots become more autonomous, it becomes possible to substitute them for people in particularly hazardous situations such as fire fighting, mine removal, processing of hazardous materials and in space.

The competitiveness of U.S. industry depends critically on increasing automation of many of its manufacturing processes in order to reduce labor costs. For such devices to be most effective, they must also be very flexible so that they can be easily reprogrammed for another task. The importance of such development has been recognized by NIST in its support of automated manufacturing systems. Machine tool companies, particularly those in Japan and Germany, are also making considerable investments in robotic devices for manufacturing. Automobile companies such as Mercedes-Benz have announced research programs in autonomous on-road vehicles.

Autonomous robotic device technology contributes to job creation and economic growth in many ways. It is an important part of future automated factories which will include autonomous robots, as well as currently available robotic machine tools. It also allows greater automation of building and construction, especially in dangerous situations. Development of robotics also contributes to harnessing information technology by stimulating research into software algorithms for combining theoretical part dimensions with material characteristics, sensor outputs, and information about motion in space.

There are important applications of robotics in improving national security. It contributes to the improvement of the defense manufacturing base in the same ways as it contributes to the civilian manufacturing base. In addition, robots have the potential to significantly improve U.S. warfighting capabilities and reduce casualties. They can safely remove mines, handle and dispose of hazardous materials, and perform construction under fire. They can also provide real-time reconnaissance information and target spotting without exposing humans to danger.

Europe, Japan, and the United States are at rough parity in robotics technology in terms of repeatability, accuracy, mobility, and speed of motion, but have strengths in different technology areas. For example, Europeans have what some industry experts believe are the best vision-system recognition algorithms, Japan is generally strong in robotics hardware such as manipulator and servomotor technology, and the United States excels in systems integration, robot control, sensors, and vision software. Despite the overall technology parity enjoyed by the United States, the world market for robots is dominated by European and Japanese firms. The current world leader is Asea Brown Boveri, followed closely by the Japanese firms Fanuc, Yaskawa, Matsushita, and Kawasaki. Japanese firms are far ahead of the rest of the world in the number of industrial robots installed, but most of these are low- or medium-technology products. Japanese market success has been based on high-volume, less sophisticated robots, while European firms have been more successful in advanced robots.

Artificial intelligence

The aim of the discipline of artificial intelligence (AI) is to permit computers to act in such a manner that, if a human acted similarly, his or her actions would be considered intelligent behavior. It embraces such fields as voice recognition; pattern and image recognition; "expert systems," which encode numerous rules of behavior and act according to those rules; control of robots and similar devices; game playing; and neural nets and genetic algorithms, which learn patterns of behavior from examples, feedback, and (in the case of genetic algorithms) mutation to produce new solution strategies or possibilities.
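A toy example can make the "expert system" notion concrete (the rules here are invented for illustration, not drawn from any fielded system): knowledge is stored as if-then rules, and a forward-chaining engine repeatedly fires any rule whose conditions are satisfied until no new conclusions can be drawn.

```python
# Toy rule-based expert system (illustrative only). Each rule is a pair:
# a set of conditions that must all be known facts, and one conclusion.
RULES = [
    ({"has_feathers"}, "is_bird"),
    ({"is_bird", "cannot_fly"}, "check_penguin"),
    ({"check_penguin", "swims"}, "is_penguin"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions are all satisfied."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Starting facts about an observed animal; the engine chains through
# three rules to reach "is_penguin".
facts = forward_chain({"has_feathers", "cannot_fly", "swims"}, RULES)
```

Fielded expert systems of the period differed mainly in scale (hundreds or thousands of rules) and in machinery for uncertainty and explanation, not in this basic inference loop.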

The definition of AI is constantly evolving. Computer behavior that used to appear intelligent, once understood and more commonplace, tends to become regarded as merely programming; examples include list processing and logic systems capable of deduction. Therefore, in practice, AI is often regarded (at least by its aficionados) as the cutting edge of computer science, where programming of computer behaviors is attempted that has not been done before, or whose logic and structure are as yet poorly understood.

Artificial intelligence is important to U.S. economic goals precisely because it embodies much of advanced computer science--pushing the limits of what computers are capable of. Within AI, novel programming and computer architecture techniques are discovered that, in turn, can lead to export and patent advantages. (In recognition of this, Japan has embarked on a variety of advanced computing initiatives, such as the "Fifth Generation" program, which had very explicit artificial intelligence-related goals and aspirations.)

AI is vital to U.S. security interests because computer-based intelligence is needed to process the huge volumes of satellite photography and other signals intelligence, looking "intelligently" for patterns of interest within vast signals databases. Advanced pattern recognition and interpretation is also vital to the smart weapons that can provide pinpoint accuracy and thus save the lives, sorties, and munitions required to deliver "dumber" weapons.

A study of knowledge-based systems (KBS) in Japan, sponsored by ARPA and NSF and published in May 1993, concluded that the U.S. was ahead of or about even with Japan in about as many areas as those in which it was lagging. The areas of knowledge-based research and application in which Japan was ahead of the U.S., with a trend that was level or gaining, were:

  • quantity and quality of fuzzy logic systems initiatives
  • quality of parallel symbolic computation initiatives
  • quality of applied R&D in advanced KBS research in industry
  • the support structure for applications of expert systems (ES)
  • applications of ES in consumer products
  • integration of ES with other software

The Japanese are strongest in applying fuzzy logic concepts to consumer products and control systems (e.g., for railroads, elevators). This assessment still represents the current relative state of development.
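The fuzzy-logic idea behind such consumer and control applications can be shown in a few lines (an assumed illustration, not taken from the study): rather than classifying an input against a hard threshold, a membership function assigns it a degree of membership between 0 and 1, which downstream control rules then blend.

```python
# Sketch of the fuzzy-logic building block (illustrative assumption):
# a triangular membership function. An input belongs to a category such
# as "warm" to a degree between 0 (not at all) and 1 (fully).
def triangular(x, lo, peak, hi):
    """Triangular membership: 0 outside [lo, hi], rising to 1 at peak."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= peak:
        return (x - lo) / (peak - lo)
    return (hi - x) / (hi - peak)

# Degree to which 22 degrees C is "warm" on a 15-25-35 degree triangle:
degree = triangular(22.0, 15.0, 25.0, 35.0)
```

A fuzzy controller, such as those in the elevators and railroads mentioned above, evaluates many such memberships against overlapping categories ("cool," "warm," "hot") and blends the corresponding control actions in proportion to the degrees, which is what yields its characteristically smooth behavior.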

European AI research at several top institutions (University of Edinburgh, the Turing Institute in Glasgow) is comparable to that at top U.S. institutions, but does not appear to be ahead of the U.S. in any area, and in Europe as a whole it has less depth and breadth than within the U.S.


Sensors

Sensor systems record and disseminate information about their immediate environment. Sensors are important components in information systems because they provide input data that, depending on the circumstances, can be used for either imaging or non-imaging applications. Beyond this support role, certain sensors could also play exciting roles in the very mechanisms by which computation and storage are performed. When transducers are combined with processing circuits and actuators, microelectromechanical systems (MEMS) can be created that, in addition to detecting and identifying a wide range of phenomena, can serve as control devices that respond to the observed phenomena. The field of microsensors is closely related to that of transducers and actuators, such as those found in MEMS production.

Physical devices

Sensors can be divided into categories in different ways. One common distinction is between active and passive sensors, i.e., between sensors which send out a signal and react to the response (like radar) and sensors which simply process information about the ambient environment (like thermometers). Another way to sub-divide sensors is into imaging, i.e., those that produce a "picture" of the physical object they sense, and non-imaging, which do not. As an example of imaging sensors, charge-coupled devices (CCD) are used in cameras to give high-resolution images limited by the number of pixels used. Sensor arrays containing millions of pixels, each a few microns across, are possible. As pixel size has shrunk and data available to the system has grown, processing has gained importance. Because of their particular importance, imaging, non-imaging, and passive sensors are singled out in this discussion.
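
The scale of such imaging arrays, and the reason processing gains importance as pixel counts grow, is easy to work out. The figures below are hypothetical (a 4-million-pixel array at a 5-micron pitch, 8 bits per pixel at video rates), chosen only to illustrate the arithmetic:

```python
# Back-of-envelope figures for a hypothetical CCD imaging array.
pixels_x, pixels_y = 2000, 2000      # 4 million pixels total
pixel_pitch_um = 5.0                 # each pixel ~5 microns across
bits_per_pixel = 8
frames_per_second = 30

# Physical size of the active sensor area.
width_mm = pixels_x * pixel_pitch_um / 1000.0   # 10 mm on a side
height_mm = pixels_y * pixel_pitch_um / 1000.0

# Raw data rate the downstream processing must absorb.
raw_mbits_per_s = (pixels_x * pixels_y * bits_per_pixel
                   * frames_per_second) / 1e6
```

Even this modest hypothetical array produces nearly a gigabit of raw data per second, which is why the processing behind the sensor, not the sensor itself, has become the dominant design problem.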

A variety of civilian and military applications are dependent on imaging sensors. Imaging sensors are also critical in remote sensing from space, scanning microscopy, and machine vision (an important area of robotics). Imaging sensors contribute to a number of national goals, including healthy and educated citizenry, job creation and economic growth, harnessing information technology, improved environmental quality, and enhanced national security.

Although imaging sensors are very important in providing "visual" information, a collection of non-imaging sensors can be used to measure a vast range of phenomenology. Examples include devices that measure temperature, pressure, humidity, radiation, voltage, current, or presence of a particular chemical or biological material. In addition to passive sensors, there are active sensors such as laser or radar altimeters. Specialized microsensors can be used to detect particular chemical or biological agents.

Non-imaging sensors are used in a variety of industries and applications. Environmental monitoring and hazardous-site characterization are important applications for non-imaging sensors, including biosensors and chemical sensors. Miniaturization of biosensors is important to medical diagnostics, food process control, and quality assurance. Small, and inexpensive if produced in large volumes, biosensors can detect small changes, permitting earlier treatment with smaller doses of medication. Chronically implanted devices employing a microbiosensor can be therapeutic as well as diagnostic. For example, a smart device employing a microbiosensor could respond to changes in metabolic rates and circulating biochemicals and, adapting to the patient's present physiological status, automatically release the proper dosage of a therapeutic medication.
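
The closed-loop device described above can be sketched as a sense-compute-release cycle. The proportional dosing rule, thresholds, and units below are invented purely for illustration; they are not a medical protocol:

```python
def dose(glucose_mg_dl, target=100.0, gain=0.02, max_units=10.0):
    """Hypothetical proportional dosing rule: release medication only
    when the sensed level exceeds the target, capped at a safe maximum."""
    error = glucose_mg_dl - target
    return min(max(gain * error, 0.0), max_units)

# One pass of the sense-compute-release cycle over a series of readings.
readings = [95.0, 180.0, 260.0]
doses = [dose(r) for r in readings]
```

The essential point is that the release decision is computed continuously from the sensor reading, rather than on a fixed schedule, with a hard cap standing in for the safety interlocks a real device would require.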

The transportation industries are finding uses for microsensors and MEMS. Microsensors have uses in the automotive industry for system controls and diagnostics. Non-imaging sensors also have a role in remote sensing. Laser profilers (LIDARs) and radar altimeters can be used to measure range, and hence altitude above a given surface. These can be used as navigation aids, and are related to imaging systems to the extent that surface profiles are being probed. Soil-sounding radars are an important recent development. Robotics is an important area for non-imaging sensors. Tactile sensors are often considered second in importance only to machine vision. Sensors are also important for balance and kinematics. Military applications of biosensors, chemical sensors, and microsensor variants include detection of, and warning against, the use of chemical or biological agents in warfare.

Given the wide range of applications to which non-imaging sensor technologies contribute, they can be said to contribute to meeting almost all of the President's goals.

Passive sensors have a special importance in military applications because they do not reveal their location or characteristics to an adversary. Several important applications can be emphasized for passive sensors. For example, they can provide warning of an adversary's active sensors, enhance night vision, or be used for thermal imaging to identify and target military assets and then perform damage assessment. Radar-guided missiles, laser designation systems, and laser range finders are a few examples of offensive systems that detectors could search for, alerting friendly forces to imminent threats upon detection. Damage assessment is vital to planning follow-up strikes and, with the advent of new weapon types, remains particularly critical.

Europe and Japan have lost their leads in chemical sensor and biosensor technologies over the last several years, although the Japanese are involved in a very broad range of biosensor development for both biomedical and bioprocess-control applications. U.S. firms have stepped up R&D efforts as a result of environmental monitoring needs, a rekindled interest in developing better chemical and biological defense detection capability, and the marketing success of some biosensor-based medical diagnostic kits, e.g., "consumer-friendly" home pregnancy and blood sugar tests. But there are specific areas in which Europe and Japan continue to excel. Sweden has made impressive strides in the development and successful commercialization of surface plasmon resonance-based biosensing systems. In Japan there has been excellent work in food quality testing (e.g., flavor sensors) and human fatigue sensing (e.g., a prototype wristwatch-like device that measures fatigue). Likewise, in chemical sensor technology there are areas of excellence around the world. Germany, for example, has made notable gains in mass spectroscopy-based sensors and is one of the world leaders in this area.

The United States and Europe are the current leaders in machine vision research. As charge-coupled device (CCD) cameras for capturing images have become readily available, technology advantages in machine vision are accruing to those firms most adept at processing the images produced. Europe is on a par with the United States in machine vision technology, with the Dutch company Delft, for example, having vision systems ranking among the best in the world. Another example of European excellence is Germany's Siemens AG, which has developed a unique three-dimensional recognition system using lasers instead of cameras. The Japanese have made advances in machine vision over the last decade, but still lag, primarily because of weaknesses in the design and implementation of software to perform the functions required in sophisticated vision systems. Much of the work on vision processing in Japan has sought to reduce computational requirements through new algorithms, the application of fuzzy logic and neural networks, and the use of special purpose processors--all areas in which Japan lags. Japanese companies have some notable achievements. Yaskawa Electric Mfg. Co., Ltd., developed "Moto-Eye," a vision system with improved image processing designed for use on assembly lines. Fujitsu developed a video sensor eye that can distinguish objects almost as clearly as the human eye; the robot eye operates at 30 frames per second, compared with human eyes, which can capture 60 images per second.

The Japanese have advanced to rough parity with the United States and lead the Europeans in the development of tactile sensor technology. Tactile sensors provide information about the contact between the workpiece and industrial robots. They are critical to many assembly activities, because vision systems cannot supply all the information needed for delicate assembly operations, such as determining whether an object is being grasped properly or whether there is any friction between the fit of two parts. Japanese R&D has already yielded some commercial applications. The Daihen Corporation has a touch sensor for arc welding application used to find mispositioned workpieces and to adjust welds for deviations in the workpiece size. The Tokyo Sensor Company manufactures tactile sensors for use in precision robots with an incremental sensitivity of 10 grams. These sensors were among the first commercialized that were small enough and had the sensitivity to be used at the fingertips of robot hands. Despite some leading edge research, Europe lags both Japan and the United States.

Integrated signal processing

Signal processing technologies enable the extraction of relevant information from the signals received from sensors. Signal processing is present whenever a signal, or combination of signals (electrical, optical, fluidic, etc.), is intentionally acted upon to increase its overall usefulness or value. Signal processing can be applied to monitoring and measuring, as when an image of a slice through a person's brain is formed (magnetic resonance imaging) by combining numerous non-invasive measurements taken around the head. Signal processing can also be used to influence, or control, dynamic processes. For example, some fighter aircraft are only conditionally stable. It is the task of a control system, incorporating signal processing, to keep the multi-dimensional state of the aircraft within its performance envelope. Many systems additionally "push back" on the pilot's controls to give him a "feel" for the maneuver, because he is flying a computer while the computer is flying the plane.
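
One of the simplest examples of intentionally acting on a signal to increase its value is smoothing a noisy sensor stream. A minimal moving-average (finite-impulse-response) filter in Python, included only as an illustrative sketch of the idea:

```python
def moving_average(signal, window=5):
    """Smooth a sampled signal: each output sample is the mean of the
    most recent `window` input samples (fewer at the start)."""
    out = []
    for i in range(len(signal)):
        lo = max(0, i - window + 1)
        out.append(sum(signal[lo:i + 1]) / (i + 1 - lo))
    return out
```

Averaging suppresses high-frequency noise at the cost of a slight lag, the basic trade-off that more sophisticated filters (Kalman, matched, adaptive) manage with greater finesse.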

Signal processing is a vast enabling technology whose boundaries sometimes overlap those of other fields, such as software, integrated circuits, communication, imaging, and display. Signal processing technologies include microelectronics, specific hardware designs, software correlation techniques, neural networks, and algorithm development. Advances in signal processing support reconnaissance and surveillance systems, machine vision, robotics, and autonomous systems. They also have applications in law enforcement. Signal processing is a key element of the manufacturing, test, diagnosis, and repair process. As signal processing technologies advance, decision-making processes can be automated. Signal processing is also increasingly integrated into the very products being manufactured, tested, diagnosed, and repaired.

Signal processing holds a key to both economic advantage (by providing superior, or previously nonexistent, performance) and also to military advantage (for the very same reasons).

Integrated signal processing is the field wherein the processing is physically integrated (often within the same microcircuit substrate) with the sensor. MEMS (micro electro-mechanical systems) technology best illustrates this field at the current time. The same technology which is used to etch and deposit layers of materials to form microcircuits is also used to micro-machine structures.

An important commercial application of sensor-integrated processing is being pursued by Caltech's Carver A. Mead, who is developing a silicon retina that possesses both the photosensor and the rudimentary neural capabilities of an eye. The silicon retina reads cursive writing on checks automatically and then inputs the information into a database. Mead and his colleagues have developed several additional generations of neuromorphic vision systems, which are now being developed through a venture-capital-financed start-up. Future applications include automated manufacturing, operator aids to reduce workload, and improved precision and uniformity in virtually any manual operation. They also lead to manufactured products that perform their functions in a more nearly optimal, and more unsupervised, manner. Thus, integrated signal processing could make a significant contribution to job creation and economic growth by improving productivity in U.S. industry and enabling new manufacturing capabilities.

National defense/security applications include missile guidance, unmanned air vehicle autopilots, and engine monitoring and control systems.

The United States is the world leader in digital signal processing (DSP) technologies, driven by a variety of applications including the home entertainment market. In particular, the strong U.S. position in multimedia computing is a major asset. In addition, the military, though no longer the primary technology driver, is still funding important DSP R&D. The Japanese produce many DSP components including DSP "cores" around which other functional components can be added to construct an application-specific DSP device. Japanese firms are world leaders in the technologies to actually fabricate DSP devices but not in their architectures and implementation. Europe, like Japan, trails the United States on most theoretical aspects of DSP. European commercial firms also lag in producing state-of-the-art components.

While the U.S. leads in the underlying technology, Japan leads in exploiting it in the commercial marketplace. Europe is closer than Japan to challenging much of our military technology, while simultaneously an active competitor to Japan in many commercial venues. The U.S. may lead in military applications such as synthetic aperture radar technology, but Japanese camcorder processes such as image motion compensation, digital zoom, and automatic light control have virtually no current competition in the U.S. Furthermore, because of the profit-driven need to demonstrate leadership, the U.S. often creates its own competition by revealing costly lessons in exchange for free publicity. The trends are not in favor of the U.S. in this technology.


Software and Toolkits

Software

Computer software is one of a few key technologies that daily affect almost every aspect of our lives. The instructions embodied in software run the telephone switching system; make our automobile transmissions shift smoothly by reacting to dozens of sampled factors many times a second; encode and route electronic funds transfers among the nation's--and the world's--banks; provide the displays and communications vital to our air traffic control system and the control systems of individual planes; guide machine tools in forming complex parts; and hundreds of thousands of other such applications, including such routine but vital business functions as word processing, spreadsheet calculations, and electronic mail.

Software is critical to U.S. economic prosperity for the following reasons:

  • It is so ubiquitous in consumer products, manufacturing processes, and the provision of services that, if other countries were to leapfrog the U.S. in the ability to design and deliver effective tested software, the result would be a relative increase in manufacturing costs and services delivery costs that would seriously affect the U.S. export position and trade balance;

  • National infrastructure (utilities, monetary flows and control, traffic control, telecommunications) increasingly depends on software processes. If the U.S. cannot better assure the security and safety of these systems, both from intentional attack--ranging from hacking to terrorist or government-sponsored intrusions--and from inadvertent embedded errors not caught during testing, then the nation faces risks that are fundamental to our safety and well-being;

  • Software production is a key industry in the U.S. It accounted for $2.26 billion in exports in 1993.

  • Software is an enabling technology in the development of other technologies. Most other scientific and engineering progress is directly dependent on software. In many cases, software is the limiting factor on how fast the other technologies can evolve.

Software is critical to U.S. national security for at least the following reasons:

  • Ultra-smart weapons, capable of pinpoint accuracy, are critically dependent on advanced software technologies. Such weapons will be dominant factors in future conflicts, with the potential--illustrated in Desert Storm operations--of greatly reducing the number of sorties required, the tonnage of ammunition required, and lives lost.

  • Intelligence analysis and dissemination requires ever more advanced software technologies. With gigabytes of photographic and other signals intelligence becoming available each minute from satellite systems, the volume of raw data to be culled for interesting developments is overwhelming traditional manual photo interpretation and signal interpretation methods. Highly advanced--perhaps even self-adaptive--pattern recognition and interpretation software is increasingly required to perform first-level automatic scanning of incoming data for interesting patterns and changes, which are then marked for subsequent human review and analysis.

  • Command and control systems distribute information, as it becomes available, to those who need it. Information from diverse sources undergoes "fusion" to create an integrated, consistent picture of events. On today's distributed and joint-operations battlefields, linked to stateside (CONUS) intelligence and command systems, highly advanced software is critical in routing, fusing, and disseminating voluminous information from its sources to those commanders and support units who need it, when they need it, and in a form in which it can be assimilated despite battlefield distractions.

  • As defense budgets continue to shrink, it is possible to envisage significant reductions in training costs, with a concomitant increase in effectiveness, through training based heavily on simulations and virtual-reality-type environments. Recent demonstrations using a Distributed Interactive Simulation testbed, and plans for a Defense Simulation Network, show promising early results in this direction.

  • Software, in the form of database management systems, communication network routing instructions, transaction-based systems, etc., makes logistics management possible in an environment where military roles and missions have never been so numerous and diverse. Next steps in adopting common data object formats among services, and redesigning many "stovepiped" systems for true interoperability, are critical software issues upon which logistics management depends in the coming decade.

International differences tend to be common across the various classes of software, so all technology sub-areas in this technology area will be addressed as a group.

The United States is the world leader in software development across a wide range of products, including network and systems software, applications packages, and software production tools. Japanese software developers are trying to respond to recent demands for packaged software in their home market, but face an uphill battle against developers from the United States. Because unique Japanese language requirements have limited Japanese interface software development, most Japanese computer makers are increasingly trying to adapt U.S.-designed programs. Added to this, the strategic error by Japanese firms of pushing proprietary operating systems has left them far behind their U.S. competitors in the market for standard operating systems such as DOS, OS/2, and Unix. The slow acceptance of PCs in Japan also stunted the growth of network software. Japanese firms are attempting to catch up, primarily by forming alliances with leading U.S. network software firms. Developing competitive packaged applications software, such as database management systems, has proven much too expensive for the large Japanese firms, whose tradition has been customized, single-client products. Smaller, more entrepreneurial firms have never been as significant in the Japanese software sector as they have been in the United States.

While Japan remains substantially behind the United States, its relative position has improved somewhat over the last several years. The Japanese have come to fully realize their weakness in software and have undertaken measures to improve their position. Japan is emphasizing management of the software development process and initiatives commonly known as "Software Factories" that emphasize quality, make great efforts to reuse code, and rely more on programmer experience than on computer science education. The Japanese use a variety of conventional software tools that, although different from those used by U.S. programmers, appear to be comparable in sophistication. To cope with a shortage of software professionals, the Japanese Ministry of International Trade and Industry recently implemented several programs specifically designed to improve Japanese capabilities to produce software and expand production capacity, but these efforts have not been as successful as hoped. Japan is less capable in expert system technology, but the Japanese have developed strong programs and devoted considerable resources to develop and improve fuzzy logic software and continue neural network R&D.

European software development skills, although superior to those of their Japanese counterparts, still generally lag behind those of the United States. There are a large number of skilled applications software developers in Europe, but most of them are employed by U.S. firms. The European applications software market tends to be fragmented into specific industries and regions, making it difficult for any single firm to have a major impact. European software vendors have focused on customizing software for particular industries or regions, while following the standards and programming paradigms defined by leading U.S. software houses. As a result they are vulnerable to the major technological shifts that frequently occur in this sector. For example, despite their best efforts to take a technology leadership role, most European software houses had to wait for U.S. firms to define and implement the early open system products--like Unix operating systems and related network software--before they could begin development of their own products. European users have been in the forefront in demanding interconnection software for their systems, but most vendors look to U.S. counterparts to define the technology and implement the first products. There are some areas in which European research work is quite good. A German software firm, SAP, has done excellent work in network software and the French have artificial intelligence technologies (primarily expert and knowledge-based systems) that are only slightly behind comparable U.S. efforts. The leading European firms are also among the world leaders in many areas of software for automated manufacturing systems.

Europe is at parity with the United States in software packages for computational structural mechanics. European firms have produced fewer general purpose finite element codes but have some notable successes. Dassault (France) is regarded as the technology leader with the integrated CATIA/ELFINI analysis and optimization code. Dassault has successfully implemented a module in CATIA to more accurately distribute aero loads, which dramatically improves performance of the analysis and optimization code ELFINI. Some experts regard this as a convincing advantage over any U.S.-origin aerostructural code. Germany, Sweden, and the United Kingdom also have active programs in analysis and optimization, and have reached maturity with use on the most recent commercial and military aircraft design efforts. European capabilities in hydrocodes for impact dynamics are state-of-the-art. A French firm, ESI, originated the PAM-CRASH code for automotive crash analysis. In weapons-related R&D, the French Ministry of Defense has investigated ballistic analysis for armor materials, projectile/target interaction, and the constitutive laws for materials at very high strain rates. (See discussions of CIM Support Software in Manufacturing and Modeling/Simulation Tools.)

Education/training software

Software for education and training permits a wide variety of learning modes, from repetitive drill with an infinitely patient computer to advanced simulations and "virtual realities" that allow users to immerse themselves in, and interact with, rich environments representing aspects of the world with which they wish to become more familiar or more skilled.

Examples of education and training software might be simple games to teach arithmetic or typing; information retrieval software for accessing large databases of relevant information; commercially available simulations such as SimCity; or just software that lets teachers, students, tutors, and resource experts exchange electronic mail or "chat" regarding homework assignments, schedules, useful peer-provided information, etc.

The use of simulators has become more prevalent for job training that involves interaction between humans and complex systems, reflecting the need for realistic re-creation of the systems' possible failure modes so that responses can be developed, studied, and practiced. For example, simulation training enables flight crews to train for and become familiar with threatening situations well before the events occur, and to maintain their proficiency in responding to otherwise rare events. Coupling human factors engineering (see "Transportation" and "Living Systems") with high-fidelity simulation has proven its worth in commercial and military aircraft as well as in tanks and advanced highway vehicle systems.

The lifelong education and training of U.S. citizens is vital to our economic and social health. If relevant information can be accessed by people where and when they need it, in order to improve job skills or political and social awareness, the entire U.S. economy benefits. With decreasing trade barriers, the U.S. niche in the world economy will increasingly involve high-technology, information-intensive jobs; continuing access to education and training software is therefore vital to keep the U.S. at the forefront of new services, technologies, and manufacturing techniques.

U.S. defense forces conduct perhaps the most intensive education and training programs anywhere under one authority. They have tremendous need for continual training programs in advanced weapon systems, obtaining and retaining pilot skills, and thousands of other skilled jobs and tasks. Much of this training now involves live bullets, tanks, ships, and planes--at great taxpayer expense. If a significant portion of this training can be replaced with software creating sufficiently realistic conditions for effective learning, there is the potential for greatly reduced expense and improved training (for example, because alternatives can be explored that would be excessively dangerous or costly in real life). Recent initiatives in distributed interactive simulations via networks and the Defense Simulation Network show promise in this area.

Network and system software

Network software controls routing and allocation of network nodes, lines and information packets that increasingly carry the transactions of our society. As more of our economy moves into "cyberspace," the network software that allows the smooth operation of these networks becomes critical to our nation. Network software exists in a variety of data networks tying computers together: local area networks (LANs), metropolitan area networks (MANs), wide area networks (WANs)--as well as many other specialized applications, such as nationwide and international telephone switching systems.
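
The routing function described above typically reduces to a shortest-path computation over a table of link costs. A compact sketch of Dijkstra's algorithm; the adjacency-dictionary representation and function name are choices made for this illustration:

```python
import heapq

def shortest_route(graph, src, dst):
    """Dijkstra's algorithm over a link-cost table.
    graph[node] = {neighbor: cost, ...}. Returns (total_cost, path)."""
    pq = [(0, src, [src])]      # priority queue ordered by cost so far
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(pq, (cost + c, nxt, path + [nxt]))
    return float("inf"), []     # destination unreachable
```

Production routing protocols layer much more on top (topology discovery, failure recovery, policy), but the cost-minimizing core is essentially this computation run continuously across the network.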

It is vital that the U.S. retain control of its voice, cable, satellite, and other networks, since the transactions upon which our economy and national defense depend are increasingly carried on these nets. We must not become dependent on network software created by others that may contain flaws (unintentional or deliberate) that we might have avoided. The security of these systems against attacks by hackers, terrorists, foreign commercial enterprises, or even other nations will be of increasing concern as our economy depends increasingly on transactions carried out in this "cyberspace."

System software controls the operation of computers. It ranges from rather simple operating systems such as DOS (Disk Operating System) on personal computers, to software that allocates tasks among various computing units accessible via network, and perhaps allocates computer resources in real time to keep up with data flows emanating from processes occurring in the external world. Among the many important services provided by system software are ones dedicated to securing the system from unauthorized access, or modification or destruction of information.

System software is the essential underpinning to our nation's information systems. The availability and security of these systems are only as good as the system software on which they are based. The U.S. has traditionally led the world in operating system software (DOS, Macintosh Operating System, Multics, Unix, IBM mainframe operating systems, etc.). By controlling this fundamental software component, we have been well positioned to create application software, such as word processors, spreadsheets, database transaction systems, etc., capitalizing on the operating system's features and facilities. Continued U.S. strength in this area is vital to maintaining that critical advantage, and thereby supporting the U.S. export advantage in software systems.

National security is equally dependent on effective and trustworthy system software. DOD will increasingly use commercial off-the-shelf (COTS) system software for its needs, rather than developing and maintaining specialized systems. Our defense systems are therefore dependent on an economically healthy and advanced system software development industry within the U.S.

Modeling and simulation software

Modeling and simulation software allows the creation of software models of a variety of physical, social, communication, and other systems, and their subsequent exploration through simulation. Simulations may be time-stepped or event-based (or some combination of both). They may be localized, or distributed over a variety of cooperating "nodes" on a network. Simulations may represent the formation of galaxies, molecular interactions, the complexities of social interactions, portions of our economy or specific markets, the operation of a specific aircraft for pilot training, the operation of an anthill, and a boundless set of other alternatives.
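
The event-based style mentioned above can be captured in a few lines: pending events wait in a priority queue ordered by timestamp, and each event's handler may schedule further events. A minimal illustrative sketch (class and method names are invented for the example):

```python
import heapq

class EventSimulator:
    """Minimal discrete-event loop: events fire in timestamp order,
    and a handler may schedule additional future events."""
    def __init__(self):
        self.clock = 0.0
        self.queue = []   # heap of (time, name, handler)
        self.log = []     # record of (time, name) as events fire

    def schedule(self, time, name, handler=None):
        heapq.heappush(self.queue, (time, name, handler))

    def run(self):
        while self.queue:
            time, name, handler = heapq.heappop(self.queue)
            self.clock = time          # jump directly to the event time
            self.log.append((time, name))
            if handler:
                handler(self)
```

Unlike a time-stepped simulation, the clock jumps from event to event, so quiet periods cost nothing; that efficiency is one reason event-based engines dominate network and logistics modeling.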

The nation that can simulate better can analyze more deeply, predict more accurately, train more thoroughly, and provide a more substantive education in the development and operation of complex systems. As such, it is a critical technology for analysis, training, education, and much of the advance of science and technology.

U.S. national security depends critically on modeling and simulation software; this is emphasized by the recent creation of DOD's Defense Modeling and Simulation Office (DMSO) to coordinate the thousands of models and simulations, and the databases on which they depend, within our Defense establishment. Among the many goals of DMSO is better verification (assuring that the software is a faithful representation of the algorithms that it encodes), validation (assuring the simulation accurately represents the portion of reality it seeks to represent), and accreditation (assuring that the model and its data are appropriate to the task for which they are to be used).

Software engineering tools

Software engineering (SE) involves the creation of software systems. Tools for SE are software programs that enable and facilitate that process, and help assure the timeliness and accuracy of the resulting software. Such tools often use the rubric CASE (computer-aided software engineering).

Perhaps the greatest weakness in computing and communication systems worldwide is in software engineering: software engineers are often unable to create complex software systems on time and on budget, and with assurance that they will perform as required or desired. Advances in software engineering are vital if we are to continue developing software of the complexity required to manage and control our networks, transaction systems, simulations, and computers themselves. Integrated circuit (IC) design is also increasingly dependent on the effectiveness of design software. If any nation achieves a breakthrough in its effectiveness at creating complex software, that breakthrough will give it a tremendous advantage in the world economy--especially since software development is one of those symbolic jobs that can easily migrate (via data networks and telecommunications) to any country or locale on earth.

The U.S. Department of Defense is highly dependent on the development and maintenance of large-scale software systems. Better tools for software development are of vital interest to the Department, as software development is the critical element (pacing schedule and cost) in many new weapon and control systems. It is also important that our critical defense systems continue to be developed by U.S.-based teams whose integrity and trustworthiness can be assured. It is therefore important that the U.S. lead in software development tools, so that we can maintain control of critical systems development and not be dependent on more advanced tools and techniques available elsewhere.

Pattern recognition

Pattern recognition is of critical importance in many commercial and military applications. Computationally intensive processes such as speech recognition, image understanding, and handwriting recognition depend on pattern matching techniques for their success. Although much progress has been made during the past few years, considerably more research is necessary before these techniques are widely available. The approaches being considered involve both hardware and software solutions. Several parallel processing architectures are very effective at pattern recognition but are currently expensive to produce. New software techniques such as neural networks promise effective solutions on more conventional hardware.
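The neural-network approach mentioned above can be illustrated with its simplest form: a single perceptron trained to separate two classes of feature vectors, running on entirely conventional hardware. The data and parameters below are illustrative, not drawn from the report.

```python
# Minimal sketch of neural-network pattern recognition: a perceptron
# learns a linear boundary between two classes of 2-D feature vectors.
# Data, learning rate, and class meanings are hypothetical.

def train_perceptron(samples, labels, epochs=20, rate=0.1):
    """Learn weights w and bias b so that sign(w.x + b) predicts the label."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):        # y is +1 or -1
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * score <= 0:                   # misclassified: adjust
                w = [wi + rate * y * xi for wi, xi in zip(w, x)]
                b += rate * y
    return w, b

def classify(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Two clusters of feature vectors (e.g., measurements from a scanner).
data   = [(0.0, 0.2), (0.3, 0.1), (1.0, 0.9), (0.8, 1.1)]
labels = [-1, -1, 1, 1]
w, b = train_perceptron(data, labels)
print(classify(w, b, (0.1, 0.0)))   # -1 (near the first cluster)
print(classify(w, b, (0.9, 1.0)))   #  1 (near the second cluster)
```

Real applications such as handwriting recognition use many such units arranged in layers, but the principle of learning a decision boundary from labeled examples is the same.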

Much of the recent work on pattern recognition has been conducted as a subfield of artificial intelligence. Further examples can be found in that section.

Software production

Despite the trends in packaged software, there is a continuing strong demand for special, custom-developed software to integrate systems of hardware and software, often connected by wide area networks. Examples of such systems include air traffic control systems, environmental monitoring systems, marine navigation systems, factory automation systems, and military-related command and control systems. Also included are smaller embedded systems such as medical scanners and life-support systems. These systems often involve processes that are controlled by computers more or less in real time, with actions occurring at rates too fast for human intervention. In addition to performing conventional computing tasks, such systems must often be capable of effectively integrating and responding to sensor information. The growth in systems integration reflects the increasing sophistication and complexity of many larger computing systems.
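The real-time pattern described above reduces to a control loop that maps each incoming sensor reading to an action without waiting for a human operator. The sketch below is illustrative only; the thresholds, readings, and action names are hypothetical, not from any fielded system.

```python
# Illustrative sketch of a sensor-driven control step, the kind of
# logic embedded in monitoring and life-support systems. Thresholds
# and action names are hypothetical.

def control_step(reading, high=80.0, low=20.0):
    """Map one sensor reading to a control action."""
    if reading > high:
        return "ALARM"       # out of range: demand immediate action
    if reading < low:
        return "HEAT_ON"     # below the setpoint band: actuate
    return "NOMINAL"         # within the band: no intervention

# One pass over a buffered stream of temperature samples.
samples = [25.0, 19.5, 55.0, 81.2]
actions = [control_step(s) for s in samples]
print(actions)   # ['NOMINAL', 'HEAT_ON', 'NOMINAL', 'ALARM']
```

A production system would add the hard-deadline scheduling, redundancy, and fault handling that make such software genuinely difficult to build, which is precisely the complexity the software production tools discussed here aim to manage.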

Software production tools allow rapid prototyping and testing of complex code, helping overcome the greatest barriers to efficient creation of software. By speeding production and reducing development costs, software production tools contribute to improved productivity, job creation, and economic growth in the software industry and in other industries that require customized software. They also contribute to the goal of harnessing information technology for enhanced national security.

