Friday, February 09, 2007

- Input Media -
1st and 2nd Generation
Punch Cards







A punch card or punched card (also called a Hollerith card or IBM card) is a piece of stiff paper that contains digital information represented by the presence or absence of holes in predefined positions. Now a nearly obsolete recording medium, punched cards were widely used throughout the nineteenth century for controlling textile looms and through the twentieth century in unit record machines for input, processing, and data storage. Digital computers used punched cards, read by card readers, as the primary medium for input of both computer programs and data, with offline data entry performed on key punch machines. Some voting machines have also used punch cards.
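As a rough illustration of how a column of holes encodes a character, here is a minimal Python sketch that models an 80-column, 12-row card as a grid and punches a few characters using a small subset of the classic Hollerith zone-and-digit code. The table and helper names are invented for the example and are not tied to any particular card reader.

```python
# Minimal sketch: an 80-column, 12-row punched card as a grid of punched rows,
# with a small subset of the Hollerith zone-and-digit code. Digits punch a
# single row; letters A-I add the 12-zone punch above a digit punch.
# Simplified for illustration only.

ROWS = ["12", "11", "0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]

# Partial code table: character -> rows punched in its column.
HOLLERITH_SUBSET = {
    "0": ["0"], "1": ["1"], "5": ["5"], "9": ["9"],
    "A": ["12", "1"], "B": ["12", "2"], "I": ["12", "9"],   # 12-zone letters
    "J": ["11", "1"], "S": ["0", "2"],                       # 11-zone / 0-zone letters
    " ": [],                                                  # blank column: no punches
}

def punch_card(text):
    """Return an 80-column card: card[col] is the set of punched rows."""
    card = [set() for _ in range(80)]
    for col, ch in enumerate(text[:80]):
        card[col] = set(HOLLERITH_SUBSET.get(ch.upper(), []))
    return card

def show_column(card, col):
    """Print a column the way a reader 'sees' it: hole or no hole per row."""
    for row in ROWS:
        print(f"row {row:>2}: {'[hole]' if row in card[col] else '[    ]'}")

card = punch_card("AB 19")
show_column(card, 0)   # column holding 'A': holes in rows 12 and 1
```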

Punched cards were first used around 1725 by Basile Bouchon and Jean-Baptiste Falcon as a more robust form of the perforated paper rolls then in use for controlling textile looms in France. This technique was greatly improved by Joseph Marie Jacquard in his Jacquard loom of 1801. Herman Hollerith developed punched card data processing technology for the 1890 US census and founded the Tabulating Machine Company (1896), which was one of three companies that merged to form the Computing Tabulating Recording Corporation (CTR), later renamed IBM. IBM manufactured and marketed a variety of unit record machines for creating, sorting, and tabulating punched cards, even after expanding into computers in the late 1950s. IBM developed punch card technology into a powerful tool for business data processing and produced an extensive line of general purpose unit record machines. By 1950, the IBM card and IBM unit record machines had become ubiquitous in industry and government. The warning often printed on cards that were to be individually handled, "Do not fold, spindle or mutilate," became a motto for the post-World War II era (even though many people had no idea what "spindle" meant).

3rd Generation
Tapes and Disks







Magnetic tape was invented for recording sound by Fritz Pfleumer in 1928 in Germany, building on the magnetic wire recording invented by Valdemar Poulsen in 1898. Pfleumer's invention used an oxide powder coating on a long strip of paper. The invention was further developed by the German electronics company AEG, which manufactured the recording machines, and BASF, which manufactured the tape. An important discovery made in this period was the technique of AC biasing, which dramatically improved the fidelity of the recorded audio signal.

Magnetic disk and magnetic storage are engineering terms referring to the storage of data on a magnetised medium. Magnetic storage uses different patterns of magnetization on a magnetically coated surface to store data and is a form of non-volatile memory. The information is accessed using one or more read/write heads. Since a read/write head covers only part of the surface at any moment, the device must seek (reposition the head), wait for the medium to cycle past, or both before the desired data can be reached. As of 2006, magnetic storage media, primarily hard disks and tape cartridges, are widely used to store computer data as well as audio and video signals. In the field of computing the term magnetic storage is preferred, while in the field of audio and video production the term magnetic recording is more commonly used; the distinction is less technical and more a matter of preference.
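To make the seek-versus-stream point concrete, here is a small back-of-the-envelope sketch. The timing numbers and function names are illustrative assumptions only, not measurements of any real drive or tape system.

```python
# Illustrative sketch (made-up but plausible numbers): a disk read pays a head
# seek plus rotational latency before transfer; a tape must wind past all the
# data in between before it can transfer.

def disk_read_ms(avg_seek_ms=9.0, rpm=7200, transfer_mb_s=60.0, size_mb=1.0):
    rotational_latency_ms = (60_000 / rpm) / 2      # wait half a revolution on average
    transfer_ms = size_mb / transfer_mb_s * 1000
    return avg_seek_ms + rotational_latency_ms + transfer_ms

def tape_read_ms(distance_mb=500.0, wind_mb_s=100.0, transfer_mb_s=10.0, size_mb=1.0):
    wind_ms = distance_mb / wind_mb_s * 1000        # fast-forward past intervening data
    transfer_ms = size_mb / transfer_mb_s * 1000
    return wind_ms + transfer_ms

print(f"disk: {disk_read_ms():.1f} ms")   # tens of milliseconds, wherever the block sits
print(f"tape: {tape_read_ms():.1f} ms")   # seconds if the block is far down the reel
```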

Magnetic recording was first suggested by Oberlin Smith in 1888. The first working magnetic recorder was invented by Valdemar Poulsen in 1898; Poulsen's device recorded a signal on a wire wrapped around a drum. In 1928, Fritz Pfleumer developed the first magnetic tape recorder. Early magnetic storage devices were designed to record analog audio signals, whereas modern magnetic storage devices are designed for recording digital data.

In early computers, magnetic storage was also used for primary storage, in the form of magnetic drum memory, core memory, core rope memory, thin film memory, twistor memory, or bubble memory. Also, unlike in modern computers, magnetic tape was often used for secondary storage.




4th Generation
Keyboard, Pointing Devices, and Optical Scanning





Most of the more common keyboard layouts (QWERTY-based and similar) were designed in the era of mechanical typewriters, so their ergonomics had to be slightly compromised to work around the typewriters' technical limitations: the letters were attached to levers that needed to move freely, and jamming would result if commonly used letters were placed too close to one another. With the advent of modern electronics, this is no longer necessary. QWERTY layouts and their brethren had been a de facto standard for decades prior to the introduction of the very first computer keyboards, and were adopted for electronic keyboards primarily for this reason. Alternative layouts do exist, such as the Dvorak Simplified Keyboard, but they have yet to gain mainstream popularity.
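Because the layout is now a software matter, switching to an alternative such as Dvorak amounts to remapping physical key positions to characters. The sketch below illustrates the idea with a partial, hypothetical lookup table covering only the home row.

```python
# Minimal sketch: layouts as a software lookup from the physical
# (QWERTY-labelled) key to the character the chosen layout assigns to that
# position. Only part of the Dvorak home row is shown.

QWERTY_TO_DVORAK = {
    # QWERTY home row -> Dvorak home row (left hand, then right hand)
    "a": "a", "s": "o", "d": "e", "f": "u",
    "j": "h", "k": "t", "l": "n", ";": "s",
}

def remap(keystrokes, table=QWERTY_TO_DVORAK):
    """Translate physical key presses into layout characters."""
    return "".join(table.get(k, k) for k in keystrokes)

# Pressing the physical keys labelled 'a s d f' under Dvorak yields 'a o e u'.
print(remap("asdf"))   # -> aoeu
```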

The number of keys on a keyboard varies from the original standard of 101 keys to the 104-key Windows keyboards and all the way up to 130 keys or more, with many of the additional keys being symbol-less programmable keys that can trigger functions such as starting a web browser or e-mail client. There were also "Internet keyboards," sold in America in the late 1990s, that replaced the function keys with pre-programmed Internet shortcuts; pressing a shortcut key would launch a browser and open the corresponding website.

A pointing device is any computer hardware component (specifically, a human interface device) that allows a user to input spatial (i.e., continuous and multi-dimensional) data to a computer. CAD systems and graphical user interfaces (GUIs) allow the user to control and provide data to the computer using physical gestures - point, click, and drag - typically by moving a hand-held mouse across the surface of the physical desktop and activating switches on the mouse. Movements of the pointing device are echoed on the screen by movements of the mouse pointer (or cursor) and other visual changes.
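A minimal sketch of that echoing loop, assuming a mouse that reports relative motion: each report is accumulated into an absolute cursor position and clamped to the screen bounds. The class name, report values, and screen size are made up for the illustration.

```python
# Minimal sketch of how relative motion reports from a mouse become an absolute
# on-screen cursor position: each (dx, dy) step is accumulated and clamped to
# the display bounds.

SCREEN_W, SCREEN_H = 1024, 768

class Cursor:
    def __init__(self, x=0, y=0):
        self.x, self.y = x, y

    def move(self, dx, dy):
        """Apply one relative motion report and keep the cursor on screen."""
        self.x = min(max(self.x + dx, 0), SCREEN_W - 1)
        self.y = min(max(self.y + dy, 0), SCREEN_H - 1)

cursor = Cursor(512, 384)
for dx, dy in [(10, 0), (0, -5), (300, 300)]:   # stream of device reports
    cursor.move(dx, dy)
print(cursor.x, cursor.y)   # -> 822 679
```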

A "pointing device" can also refer to a special "stick" (sometimes telescopic, to reduce the length when not in use), or a lamp with a narrow light beam that is pointed at a map, blackboard, slide screen, movie screen, etc.; sometimes the light is in the form of an arrow.

Optical character recognition, usually abbreviated to OCR, is a type of computer software designed to translate images of handwritten or typewritten text (usually captured by a scanner) into machine-editable text, or to translate pictures of characters into a standard encoding scheme representing them (e.g. ASCII or Unicode). OCR began as a field of research in pattern recognition, artificial intelligence and machine vision. Though academic research in the field continues, the focus on OCR has shifted to implementation of proven techniques.

Optical character recognition (using optical techniques such as mirrors and lenses) and digital character recognition (using scanners and computer algorithms) were originally considered separate fields. Because very few applications survive that use true optical techniques, the optical character recognition term has now been broadened to cover digital character recognition as well.

Early systems required training (the provision of known samples of each character) to read a specific font. "Intelligent" systems with a high degree of recognition accuracy for most fonts are now common. Some systems are even capable of reproducing formatted output that closely approximates the original scanned page including images, columns and other non-textual components.
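The "training on known samples" approach can be caricatured as template matching: store one known bitmap per character and classify an unknown glyph by the template it agrees with best. The toy sketch below uses made-up 3x3 bitmaps purely to illustrate the idea; real OCR engines use far richer features and classifiers.

```python
# Toy sketch of early "trained" OCR: one known bitmap (template) per character,
# and an unknown glyph is classified by whichever template it overlaps best.
# 1 = ink. Templates would come from the training samples.

TEMPLATES = {
    "I": [(0, 1, 0),
          (0, 1, 0),
          (0, 1, 0)],
    "L": [(1, 0, 0),
          (1, 0, 0),
          (1, 1, 1)],
}

def match_score(glyph, template):
    """Count pixels where the scanned glyph and the template agree."""
    return sum(g == t for grow, trow in zip(glyph, template)
                      for g, t in zip(grow, trow))

def recognize(glyph):
    return max(TEMPLATES, key=lambda ch: match_score(glyph, TEMPLATES[ch]))

scanned = [(0, 1, 0),
           (0, 1, 0),
           (0, 1, 1)]          # a slightly noisy 'I'
print(recognize(scanned))      # -> I
```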



5th Generation
Touch Devices and Handwriting Recognition







Multi-touch is the name of a human-computer interaction technique and of the hardware devices that implement it. It is a kind of touch screen or touch tablet/touchpad that recognizes multiple simultaneous touch points, frequently including the pressure or degree of each independently, as well as position. This allows gestures and interaction with multiple fingers or hands, chording, and can provide rich interaction, including direct manipulation, through intuitive gestures. Depending largely on their size, some multi-touch devices support more than one user on the same device simultaneously. One salient aspect of this technique is that it makes it easy to zoom in or out in a Zooming User Interface with two fingers, for example, thereby providing a more direct mapping than with a single-point device like a mouse or stylus.
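The two-finger zoom gesture mentioned above boils down to a simple ratio: the current distance between the two touch points divided by their distance when the gesture started. A minimal sketch, with made-up coordinates:

```python
# Minimal sketch of the two-finger "pinch to zoom" mapping: the zoom factor is
# the ratio of the current finger separation to the separation when the
# gesture began.

from math import hypot

def separation(p1, p2):
    return hypot(p1[0] - p2[0], p1[1] - p2[1])

def zoom_factor(start_touches, current_touches):
    """Each argument is a pair of (x, y) touch points."""
    return separation(*current_touches) / separation(*start_touches)

start = [(100, 100), (200, 100)]     # fingers 100 px apart
now = [(80, 100), (230, 100)]        # fingers spread to 150 px apart
print(zoom_factor(start, now))       # -> 1.5, i.e. zoom in by 50%
```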

FingerWorks produced a line of keyboards that incorporated multi-touch gestures. FingerWorks has since been purchased by Apple, which has incorporated the technology into its iPhone. The firm Tactex Controls is one supplier of multi-touch pads.

Multi-touch has at least a 25-year history, beginning in 1982, with pioneering work done at the University of Toronto (multi-touch tablets) and Bell Labs (multi-touch screens).



Handwriting recognition is commonly used as an input method for PDAs. The first PDA to provide written input was the Apple Newton, which exposed the public to the advantages of a streamlined user interface. However, the device was not a commercial success, owing to the unreliability of the software, which tried to learn a user's writing patterns. By the time Newton OS 2.0 was released, in which the handwriting recognition was greatly improved, including unique features still not found in current recognition systems such as modeless error correction, the largely negative first impression had already been made. Another effort was GO Corporation's PenPoint operating system, used on tablet computers manufactured by various hardware makers such as NCR and IBM. IBM's ThinkPad tablet computer was based on the PenPoint operating system and used IBM's handwriting recognition. This recognition system was later ported to Microsoft's Windows for Pen Computing and IBM's Pen for OS/2. None of these were commercially successful.



6th Generation
Speech or Voice Recognition




In terms of technology, most technical textbooks nowadays emphasize the use of Hidden Markov Models as the underlying technique. The dynamic programming approach, neural network-based approaches, and knowledge-based learning approaches were studied intensively in the 1980s and 1990s.
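For a flavour of the dynamic-programming decoding at the heart of HMM-based recognition, here is a toy Viterbi decoder over a two-state model. The states, observations, and probabilities are invented for the example and bear no relation to a real acoustic model, which would operate over phoneme states and acoustic feature vectors.

```python
# Toy sketch of Viterbi decoding: find the most likely hidden state sequence
# for a short observation sequence under a two-state HMM with made-up
# probabilities.

STATES = ["silence", "speech"]
START = {"silence": 0.8, "speech": 0.2}
TRANS = {"silence": {"silence": 0.7, "speech": 0.3},
         "speech":  {"silence": 0.2, "speech": 0.8}}
EMIT  = {"silence": {"quiet": 0.9, "loud": 0.1},
         "speech":  {"quiet": 0.2, "loud": 0.8}}

def viterbi(observations):
    # prob[s] = probability of the best path ending in state s; path[s] = that path
    prob = {s: START[s] * EMIT[s][observations[0]] for s in STATES}
    path = {s: [s] for s in STATES}
    for obs in observations[1:]:
        new_prob, new_path = {}, {}
        for s in STATES:
            best_prev = max(STATES, key=lambda p: prob[p] * TRANS[p][s])
            new_prob[s] = prob[best_prev] * TRANS[best_prev][s] * EMIT[s][obs]
            new_path[s] = path[best_prev] + [s]
        prob, path = new_prob, new_path
    best = max(STATES, key=lambda s: prob[s])
    return path[best]

print(viterbi(["quiet", "loud", "loud", "quiet"]))
# -> ['silence', 'speech', 'speech', 'silence'] (most likely state sequence)
```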

The performance of a speech recognition system is usually specified in terms of accuracy and speed. Accuracy is measured with the word error rate, whereas speed is measured with the real-time factor.
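A minimal sketch of how those two metrics are typically computed, assuming the common definitions: word error rate as substitutions plus deletions plus insertions divided by the number of reference words (via a word-level edit distance), and real-time factor as processing time divided by audio duration. The example sentences and timings are made up.

```python
# Word error rate = (substitutions + deletions + insertions) / reference words,
# computed with a standard word-level edit distance.
# Real-time factor = processing time / audio duration.

def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between first i reference words and first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

def real_time_factor(processing_seconds, audio_seconds):
    return processing_seconds / audio_seconds

print(word_error_rate("please call the office now", "please call office now"))  # -> 0.2
print(real_time_factor(15.0, 60.0))   # -> 0.25, i.e. four times faster than real time
```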

Most speech recognition users would tend to agree that dictation machines can achieve very high performance in controlled conditions. Much of the confusion comes from the mixed usage of the terms speech recognition and dictation.

Speaker-dependent dictation systems requiring a short period of training can capture continuous speech with a large vocabulary at a normal pace with very high accuracy. Most commercial companies claim that recognition software can achieve between 98% and 99% accuracy (getting one to two words out of one hundred wrong) if operated under optimal conditions. These optimal conditions usually mean that 1) the test subjects' speaker characteristics match the training data, 2) proper speaker adaptation has been performed, and 3) the environment is clean (e.g., a quiet office). This explains why some users, especially those with accents, may find the recognition rate perceptually much lower than the expected 98% to 99%.

Other, limited vocabulary, systems requiring no training can recognize a small number of words (for instance, the ten digits) from most speakers. Such systems are popular for routing incoming phone calls to their destinations in large organizations.
