Understanding the World Wide Web
By Chelse Benham

010505

In 1969, computer scientist Ken Thompson and mathematician Dennis Ritchie worked together at Bell Telephone Laboratories (BTL) on Multics (Multiplexed Information and Computing Service) – a collaboration between BTL, General Electric and the Massachusetts Institute of Technology – to create an operating system for a large computer that could accommodate up to a thousand simultaneous users, a requirement for its use in telephony. The result was a system called UNICS (Uniplexed Information and Computing Service), a name later changed to UNIX.

“The most important thing about the development of UNIX was allowing programmers the opportunity to experiment. Before UNIX, mainframe operations required huge computers that didn’t allow programmers easy access,” said Graham Toal, information technology security officer for the University of Texas-Pan American. “UNIX was a very productive environment in which a programmer could experiment much more than before. In the long term UNIX is still serving as the operating system of choice for servers and research and development programming.”

During the formation of UNIX, other United States computer scientists began researching computer networking. This research was funded by the Defense Advanced Research Projects Agency (DARPA), which was established as a separate defense agency under the Office of the Secretary of Defense and which gave the Internet its first name, ARPANET, in January 1969.

ARPANET was used to test “packet-switched networks” – computer networks that transfer information in the form of small packets that move independently of each other through various networks until they reach their final destination. (www.bilk.ac.uk)
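The idea can be illustrated with a small sketch (hypothetical code, not actual ARPANET software): a message is broken into numbered packets that may travel and arrive out of order, and the destination uses the numbers to reassemble the original message.

```python
# Toy illustration of packet switching: split a message into small,
# numbered packets, let them arrive out of order, then reassemble.

def split_into_packets(message, size=4):
    """Split a message into (sequence_number, chunk) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Reorder packets by sequence number and rebuild the message."""
    return "".join(chunk for _, chunk in sorted(packets))

packets = split_into_packets("Hello, ARPANET!")
packets.reverse()                 # simulate out-of-order arrival
print(reassemble(packets))        # Hello, ARPANET!
```

Real packet-switched networks add addressing, error checking and retransmission on top of this basic split-and-reassemble idea.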

The Department of Defense realized this network was a robust communication system, ideal for wartime, and began using it, but despite its military background, the ARPANET eventually became a primary means of instant communication between computer researchers and academics across the country.

In 1983, ARPANET was divided into two networks, reserving ARPANET for civilian use and creating MILNET for military use.

In an interview at The Book & Computer.com in 2004, Alan Kay, the father of the idea of “linking” computers together, explained his invention. Kay took a position at Xerox PARC (Palo Alto Research Center) in 1972. He was instrumental in developing the graphical user interface and the personal computer while at Xerox.

Well before his arrival at Xerox, however, Kay had envisioned the Dynabook, which he described as “a portable interactive personal computer, as accessible as a book. The Dynabook would be linked to a network and offer users a synthesis of text, visuals, animation and audio.” Ultimately, this idea of “linking” information was the basis upon which the World Wide Web was founded.

“Alan Kay was the first person to consider the idea of linking machines to other machines. Before that, computers ran information internally without communicating with other computers. Because of his access to the ARPANET, Tim Berners-Lee developed Kay’s idea and constructed a system that implemented ‘linking,’ integrating computers into one big network,” Toal explained.

According to MIT’s Magazine of Innovation, Technology Review (vol. 107, no. 8), in 1980 the English computer scientist Berners-Lee invented an information retrieval program called Enquire at the European Organization for Nuclear Research (CERN) in Geneva, Switzerland.

Ten years after Enquire’s release, Berners-Lee finished writing the tools for a larger information retrieval system he called the World Wide Web. He released the program to CERN in March 1991. (livinginnet.com) The early 1990s marked the beginning of a great technological fascination sweeping the world. Several major events occurred simultaneously, ultimately burgeoning into one of the greatest explosions of the technological age.

Home computers became more capable and less expensive. The Web information retrieval and communication system was released, and navigational software was made available at no cost to computer users.

The first Web browser was NCSA Mosaic. Mosaic’s programming team then developed the first commercial Web browser called Netscape Navigator, later renamed Communicator, then renamed back to just Netscape. (livinginnet.com)

“Initially, you couldn’t really find any information that was useful. It was primarily restricted to academic papers at the time,” Toal said. “The Web reached critical mass when people began putting their own servers online. Anyone who was a specialist in their area would put their material on the Web for everyone to find. Most of these early servers were UNIX based because the Microsoft Windows environment wasn’t very friendly to programming.”

The last event to shape the Internet, and make it one of the greatest informational, promotional and communication tools in the world, was opening it up to commercial traffic. Since then, the Internet has experienced rapid growth and is no longer used solely by academics and the military. Ironically, the Internet is neither owned nor controlled by any government or body.

The Internet is a worldwide system permanently connecting millions of computers together to form a global network. The Web uses the medium of the Internet to access information stored on servers around the globe.

Moreover, the Web is just one way of sharing information. Information can also be shared using e-mail or Simple Mail Transfer Protocol (SMTP), instant messaging and FTP (File Transfer Protocol). Because of the Internet, information can be moved around the planet at great speed and minimal cost.

Browsers – navigational software that enables you to “surf” the Web – such as Netscape and Microsoft Internet Explorer, access Web pages. Information or documents are stored in “pages.” Most Web servers have a starting page, known as the home page, which contains links to other pages on the same server and to other servers. Netscape Navigator is a graphical browser, a program that can read these documents and display both text and pictures on your computer screen. These pages are interconnected by means of hypertext links.

Hypertext is what makes the Web so intuitively navigable. Hypertext is a type of document that contains links (called hyperlinks) that point your Web browser to another resource on the Internet. A hyperlink can be in the form of a word, several words, or even an image. When you select a hyperlink with your Web browser (usually by clicking the link with your mouse), your browser automatically loads whatever the selected link indicates.
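As a rough illustration, that link-following behavior can be sketched in a few lines of Python. The page names, text and link labels below are invented for the example; a real browser resolves links to addresses on remote servers rather than to entries in a dictionary.

```python
# Hypothetical in-memory "web": each page has some text and a set of
# named hyperlinks pointing at other pages.
pages = {
    "home":  {"text": "Welcome!",     "links": {"About us": "about"}},
    "about": {"text": "Our history.", "links": {"Home": "home"}},
}

def click(current_page, link_label):
    """Follow a hyperlink: load whatever the selected link points to."""
    target = pages[current_page]["links"][link_label]
    return pages[target]["text"]

print(click("home", "About us"))   # Our history.
```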

A Web address such as The University of Texas-Pan American’s http://www.panam.edu is very specific and allows any computer, with navigational software and access to the Internet, to go directly to the site.

HTTP stands for Hypertext Transfer Protocol, the protocol (access rules) used by computers on the Web. This acronym gives us the common Internet address prefix, http://, which always precedes addresses for Web pages. The Web uses the HTTP protocol, but it is just one of the languages spoken over the Internet.
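What a browser actually sends using this protocol is plain text. The sketch below shows, as an illustration only, the general shape of an HTTP GET request for a page on www.panam.edu; the request is built as a string but not actually transmitted here.

```python
# The rough shape of an HTTP/1.1 request a browser would send for
# http://www.panam.edu/ — header lines end in CRLF, and a blank line
# marks the end of the request.
request = (
    "GET / HTTP/1.1\r\n"
    "Host: www.panam.edu\r\n"
    "Connection: close\r\n"
    "\r\n"
)
print(request)
```

The server replies in the same plain-text style: a status line (such as “HTTP/1.1 200 OK”), headers, a blank line, and then the page itself.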

With an address that ends with the acronym HTML, such as www.utpabroncs.com/sports/m-baskbl/recaps/010305aaa.html, the HTML stands for Hypertext Markup Language. This is the script language used to create Web pages. This language uses HTML tags to tell your Web browser how to display each document. HTML tags are not only responsible for determining what gets bolded or italicized in a document, but they are also responsible for making hyperlinks.
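As an illustration, Python’s built-in html.parser module can show these tags being recognized as a document is read; the snippet of HTML fed to it below is invented for the example.

```python
from html.parser import HTMLParser

# A tiny parser that reports the tags it encounters, illustrating how
# <b>, <i>, and <a href="..."> instruct the browser what to do.
class TagReporter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.events = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":                         # a hyperlink
            self.events.append(("link", dict(attrs).get("href")))
        elif tag in ("b", "i"):                # bold / italic styling
            self.events.append(("style", tag))

reporter = TagReporter()
reporter.feed('<b>Bold</b>, <i>italic</i>, and a '
              '<a href="http://www.panam.edu">hyperlink</a>.')
print(reporter.events)
# [('style', 'b'), ('style', 'i'), ('link', 'http://www.panam.edu')]
```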

In an address such as http://www.panam.edu, the last part of a Web address indicates the type of site it is. Many computer addresses in the United States end with one of the following abbreviations, indicating the categories the sites are divided into:

edu – educational site
mil – military site
com – corporate or company site
gov – government site
net – administrative organization for a network
org – private organizations that don’t fit the above categories

The full Web address is called a Uniform Resource Locator (URL). It enables your browser to know which of the millions of sites on the Internet you are requesting, and from which server to fetch it.
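Python’s standard urllib.parse module can take a URL apart into exactly these pieces; the page path in this example is hypothetical.

```python
from urllib.parse import urlparse

# Splitting a URL into the pieces described above: the protocol,
# the server to fetch from, and the path of the page on that server.
url = "http://www.panam.edu/index.html"
parts = urlparse(url)

print(parts.scheme)                     # http  (the protocol)
print(parts.netloc)                     # www.panam.edu  (the server)
print(parts.path)                       # /index.html  (the page)
print(parts.netloc.rsplit(".", 1)[-1])  # edu  (the site-type suffix)
```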

A search engine is “a tool that enables users to locate information on the Internet. Search engines use keywords entered by users to find Web sites which contain the information sought.” (www.getnetwise.org/glossary)
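Very loosely, that keyword matching can be sketched as follows; the pages and URLs here are invented for the example, and real search engines use far more sophisticated indexing and ranking.

```python
# A toy keyword search over a handful of hypothetical pages,
# mimicking (very loosely) how a search engine matches query words.
pages = {
    "http://example.edu/careers": "career advice and job opportunities",
    "http://example.edu/health":  "health advice for students",
    "http://example.edu/basics":  "computer basics tutorial",
}

def search(query):
    """Return the URLs of pages containing every word of the query."""
    words = query.lower().split()
    return [url for url, text in pages.items()
            if all(w in text.lower().split() for w in words)]

print(search("advice"))
# ['http://example.edu/careers', 'http://example.edu/health']
```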

Search engines are the powerhouses of Internet information retrieval. By knowing how to navigate the Web, you have resources for mining information, thus giving weight to the adage “knowledge is power.” There are numerous search engines. The following are some of the most widely used: AltaVista; Google; Lycos; Dogpile; Ask Jeeves; Yahoo!; HotBot; Scirus (scientific information); ViVisimo; Education World; MSN Search; Teoma; Artcyclopedia (fine art information); Northern Light and Infoseek.

An important thing to remember is that the Internet is huge; it is unstructured, and its content is as diverse as the people and cultures of the world. The Web is an invaluable tool in today’s business environment. You can find a seemingly endless amount of information on the Web: products and services, pay scales, job opportunities, career advice and professional information. You can browse information from health advice to computer basics. The key is not to be afraid of the technology. Embrace it and put power at your fingertips.

“Science and technology multiply around us. To an increasing extent they dictate the languages in which we speak and think. Either we use those languages, or we remain mute.” – J. G. Ballard (b. 1930), English novelist and writer