1.2 Web Applications
In this book, by Web application we mean a distributed application based on Web technology: HTTP, HTML, and the family of XML technologies.
1.2.1 From Static Contents to Dynamic Contents
The Web was originally designed as a mechanism to deliver static pages to clients on the Internet. When a browser sends a GET request to a Web server, the server fetches the requested file from its file system and returns it through the HTTP connection, as shown in Figure 1.2.
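On the wire, this exchange is a plain-text request and response (a simplified sketch; real messages carry more headers):

```
GET /index.html HTTP/1.0

HTTP/1.0 200 OK
Content-Type: text/html

<html><body>...the stored page...</body></html>
```

The server simply maps the requested path to a file and streams its contents back.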
The response to a GET request, however, need not be a stored page created by a human author. The Common Gateway Interface, or CGI, is a mechanism by which a Web server invokes an external program upon receiving a request and returns the program's output as the response, as if it had been stored as a file. This powerful idea allows a page to be created dynamically when a request arrives. For example, when a real-time stock quote service receives a request, it executes a database retrieval program to fetch the latest stock prices and returns them as an HTML page.
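The mechanism can be sketched in a few lines. The following Python program (the stock symbols and prices are made up; a real service would query a database) emits an HTTP content-type header followed by an HTML page, which is exactly what a CGI program hands back to the Web server:

```python
#!/usr/bin/env python3
# A minimal CGI-style program (a sketch). The Web server runs this
# program and returns its standard output to the browser as if the
# output were a stored file.

def latest_quotes():
    # In a real service this would query a database; here we fake it.
    return {"ABC": 101.25, "XYZ": 47.50}

def render_page():
    rows = "".join(
        "<tr><td>%s</td><td>%.2f</td></tr>" % (sym, price)
        for sym, price in sorted(latest_quotes().items())
    )
    body = "<html><body><table>%s</table></body></html>" % rows
    # A CGI program must emit the response headers itself.
    return "Content-Type: text/html\r\n\r\n" + body

if __name__ == "__main__":
    print(render_page())
```

Each request produces a freshly generated page; nothing is fetched from the file system.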
As the Web and browsers rapidly became popular through the efforts of Netscape Communications Corporation and others, people started to think about using the browser as a universal user interface. Almost all modern application programs have a graphical user interface (GUI) so that they can be used without learning an application-specific command language. However, programming against a Windows API or an X Window API has a cost and requires skills that are not available everywhere. The idea of the three-tier model was, instead of writing a GUI against these APIs, to use HTTP and HTML to interact with the user. Two major merits of this idea are that (1) any client platform can be used as a user terminal and (2) users interact through the look and feel of the Web, to which they are already accustomed.
In the three-tier model, an application program is built as a CGI program or one of its variations and is invoked by a request from a browser. Therefore, application programs run on a Web server, which in most cases is on the UNIX or Windows platform. For these applications to be useful, they are usually connected to existing database systems or transaction systems that usually run on large host systems. These are called backend systems.
Thus, three-tier applications, consisting of the following three functional units, gained popularity as the standard way of building a distributed application program in both the Internet and intranet environments:
1. A presentation tier: the browser, which interacts with the user through HTTP and HTML
2. An application tier: the application logic, running on the Web server as a CGI program or one of its variations
3. A backend tier: the existing database and transaction systems to which the application connects
1.2.2 From B2C to B2B: From Web for Eyeballs to Web for Programs
One of the essential points of the three-tier model is the use of a browser for a human interface. The protocols designed for a human interface (that is, HTTP and HTML) are used for this purpose. Isn't it possible to use the same protocols for communications between two applications?
Let us consider an example in which a program, instead of a human user, accesses a Web site and uses the result in its computation. Assume there is a weather report service on the Web and that we can get the current temperature in White Plains, New York, at the URL http://www.example.com/White_Plains_NY_US.html. When this URL is accessed, an HTML page such as that shown in Figure 1.3 is returned.
Suppose that we are asked by a site manager of a shopping mall in White Plains to develop an application program that monitors the temperature and issues a power overload warning if the city's temperature is above 100 degrees for more than three consecutive hours. The information necessary to develop this application is already on the Web at the previous URL.
The following code shows the HTML page that is returned from that URL. We need to obtain this page every hour, extract the temperature information from it, and test whether the temperature has exceeded 100 degrees for more than three consecutive hours.
<html>
<head>
<title>Weather Report</title>
</head>
<body>
<h2>Weather Report -- White Plains, NY</h2>
<table border=1>
 <tr><td>Date/Time</td><td align=center>11 AM EDT Sat Jul 25 1998</td></tr>
 <tr><td>Current Temp.</td><td align=center>70&deg;</td></tr>
 <tr><td>Today's High</td><td align=center>82&deg;</td></tr>
 <tr><td>Today's Low</td><td align=center>62&deg;</td></tr>
</table>
</body>
</html>
A quick and dirty way to extract the current temperature of White Plains is to use the following strategy: the value starts at line 9, column 45 of the page and continues until the next ampersand. Any experienced software engineer can point out problems with this strategy. For example, a slight change in the page, such as inserting a blank line or removing the whitespace to the left of the <tr> tags, will prevent our program from functioning properly, even though these changes do not affect the page's usefulness to human readers.
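A short Python sketch of this strategy (the page text is reproduced from the example above) makes the fragility concrete:

```python
# A sketch of the line-and-column scraping strategy: take line 9 of the
# page, start at column 45, and read up to the next ampersand.

PAGE = """<html>
<head>
<title>Weather Report</title>
</head>
<body>
<h2>Weather Report -- White Plains, NY</h2>
<table border=1>
 <tr><td>Date/Time</td><td align=center>11 AM EDT Sat Jul 25 1998</td></tr>
 <tr><td>Current Temp.</td><td align=center>70&deg;</td></tr>
</table>
</body>
</html>"""

def scrape_current_temp(page):
    line = page.split("\n")[8]        # line 9, counting from 1
    rest = line[44:]                  # column 45, counting from 1
    return rest[:rest.index("&")]     # up to the next ampersand

print(scrape_current_temp(PAGE))                               # -> 70
# Removing the whitespace to the left of the <tr> tags shifts every
# column by one, and the program silently returns the wrong answer.
print(scrape_current_temp(PAGE.replace("\n <tr>", "\n<tr>")))  # -> 0
```

Note that the second call does not fail loudly; it returns a plausible-looking but wrong value, which is the worst kind of breakage.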
HTML was originally designed to represent the logical structure of a document, whose natural components are headers, titles, paragraphs, and so on, so tags are defined for expressing these. Notions such as current_temperature are defined nowhere in HTML. If we need to express data in an HTML page, we have to embed it in document-oriented tags. That is good enough for human readers, but it makes it hard for programs to treat the contents as data.
With XML, we can define a structure directly representing data. For example, the weather service URL might return the following XML document, instead of an HTML page, for the HTTP request for the URL http://www.example.com/White_Plains_NY_US.xml.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE WeatherReport SYSTEM "http://www.example.com/WeatherReport.dtd">
<WeatherReport>
 <City>White Plains</City>
 <State>NY</State>
 <Date>1998-07-25</Date>
 <Time>11:00:00-04:00</Time>
 <CurrTemp unit="Fahrenheit">70</CurrTemp>
 <High unit="Fahrenheit">82</High>
 <Low unit="Fahrenheit">62</Low>
</WeatherReport>
With this XML document, we can use a much more robust strategy: the current temperature is in the <CurrTemp> tag. This XML document is a logical representation of the weather data; that is, it is independent of how the page is displayed for human users. Of course, we need to have agreed with the weather service provider on the use of the <CurrTemp> tag in a weather report, and this is done by publishing the specification of the XML document. The published specification can be considered a sort of contract; it asserts the service provider's commitment that the required information can consistently be found at the stated position in a retrieved document. One way to describe the specification is a DTD, a syntactic specification of a class of XML documents. In our example, the DTD published by www.example.com would assert that a <CurrTemp> tag is always present as a direct child of the root element of a retrieved document.
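The extraction logic becomes trivial once the data is XML. A sketch using Python's standard xml.etree parser (the DOCTYPE is omitted for brevity):

```python
# Extract the current temperature by asking for the element by name
# instead of by line and column, so layout changes are harmless.

import xml.etree.ElementTree as ET

REPORT = """<?xml version="1.0" encoding="UTF-8"?>
<WeatherReport>
 <City>White Plains</City>
 <State>NY</State>
 <CurrTemp unit="Fahrenheit">70</CurrTemp>
</WeatherReport>"""

def current_temp(xml_text):
    # Parse as UTF-8 bytes; ElementTree rejects str input that carries
    # an encoding declaration.
    root = ET.fromstring(xml_text.encode("utf-8"))
    elem = root.find("CurrTemp")          # direct child of the root
    return int(elem.text), elem.get("unit")

print(current_temp(REPORT))   # -> (70, 'Fahrenheit')
```

Inserting blank lines, reindenting, or reordering sibling elements leaves this program unaffected, which is exactly the robustness the line-and-column strategy lacks.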
Now our PowerWarning application has a more direct connection with the WeatherReport application. This is the first of many things that we can achieve by using XML. Suppose our PowerWarning system works well and becomes popular. We could decide to provide this service to other customers through our own Web site at www.powerwarning.com. Our subscribers (for example, a site management application) would connect to our site and set parameters such as the city to be monitored, the conditions for issuing a warning (for example, temperature and duration), and the method of notification (for example, pager or e-mail). Our generalized program running on our Web server would periodically poll the weather services of specified cities and issue a warning if a preset condition is met.
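The warning condition itself (above 100 degrees for more than three consecutive hours) comes down to a few lines of code. A sketch, with illustrative names:

```python
# Decide whether to issue a power overload warning given hourly
# temperature readings, oldest first. The threshold and duration
# mirror the requirement stated in the text.

def should_warn(hourly_temps, threshold=100, hours=3):
    streak = 0
    for t in hourly_temps:
        streak = streak + 1 if t > threshold else 0
        if streak > hours:        # more than `hours` consecutive hours
            return True
    return False

print(should_warn([98, 101, 103, 104, 102]))  # -> True (4-hour streak)
print(should_warn([98, 101, 103, 99, 104]))   # -> False (streak broken)
```

The generalized service would run this check once per hour against the latest poll of the weather site for each subscribed city.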
Our system, which is another Web application because it provides services through HTTP, can be called by other Web applications, as shown in Figure 1.4. This is no longer the traditional three-tier model; it is more like a "Web of Web applications." Whereas the three-tier model connects human users to backend systems, the general Web application model may connect one Web application to another. XML is the data format of choice for such communication.
1.2.3 Interoperability Is Everything
Today, few new computer applications are developed from scratch; most are built by integrating existing applications. The applications to be integrated frequently run on different platforms and operating systems, so integration requires connecting them over a network. Some applications may even reside in a different company, because many companies today seek to outsource business processes that are not their core competency. Outsourced business processes must then be integrated with other business processes by connecting the applications over intercompany networks.
What are the requirements for intercompany integration? First and foremost, the applications must be able to talk to each other. In other words, they must be interoperable with each other. This means that they must agree on the common specifications of protocols and data formats. These agreements must be concise and clear. They must be based on open standards as much as possible. They must not be dependent on particular platforms, particular programming languages, and particular middleware. Can XML be a basis for such agreements?
The greatest advantage of XML as a data format is that it is text-based and human-readable. If you use a binary data format instead, and the specification reads something like "The value of parameter X is represented as a 4-byte integer in network byte order, beginning at the twelfth octet from the top of the message," you have to pore over a hexadecimal dump of the binary message and decode its content bit by bit when debugging, a tedious and time-consuming task.
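For comparison, here is what the hypothetical binary specification quoted above looks like in code. Extracting the value is easy for a program but opaque to a human reading a hex dump:

```python
import struct

# Build a message whose parameter X is a 4-byte integer in network
# byte order beginning at the twelfth octet (offset 11, counting
# from 0). The padding and trailer stand in for the rest of the
# message format.
message = bytes(11) + struct.pack("!i", 70) + b"trailing data"

(value,) = struct.unpack_from("!i", message, 11)
print(value)            # -> 70
print(message.hex())    # the hex dump a human debugger must decode
```

The program finds the value instantly; a person staring at the hex dump must count octets by hand to do the same.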
By contrast, XML is text-based and therefore human-readable. XML messages can easily be read, created, and modified with standard tools such as text editors and UNIX string-search utilities such as grep. All this makes understanding and analyzing XML messages much easier than with any binary format.
In XML, tags can be named with text strings composed of meaningful words. Suppose you have developed a decent Web application that you want to promote. As a Web application, it receives an HTTP request and returns an XML document as a response (rather than relying on more efficient methods such as Internet Inter-ORB Protocol, or IIOP). You provide your data in XML, so you also publish a DTD or a schema that describes its syntax. Assume that a potential partner would like to do business with you. By accessing your Web site, it learns that you provide an XML-based Web application for automatic B2B messaging. By studying your schema, it finds that it includes tags in plain English, such as <purchaseOrder> and <date>. Taking advantage of XML's simplicity, the company's engineers can develop "glue code" that connects its own business application to your Web application just by using off-the-shelf XML and Web application tools.
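Such glue code can be small indeed. A sketch (the <purchaseOrder> and <date> tags and the document shape are hypothetical, standing in for whatever the partner's published schema defines):

```python
# Glue code: convert an XML response from a partner's Web application
# into an internal data structure. The tag names here are illustrative.

import xml.etree.ElementTree as ET

RESPONSE = """<purchaseOrder>
 <date>2001-05-14</date>
 <item sku="4711" quantity="2"/>
</purchaseOrder>"""

def to_internal_order(xml_text):
    po = ET.fromstring(xml_text)
    return {
        "date": po.findtext("date"),
        "items": [(i.get("sku"), int(i.get("quantity")))
                  for i in po.findall("item")],
    }

print(to_internal_order(RESPONSE))
```

Because the tag names are plain English and the format is text, an engineer can write and debug this adapter with nothing more than the published schema and an off-the-shelf XML parser.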
XML's simplicity also extends to its ability to represent tree-structured data. But why use XML for tree-structured data instead of an established open standard such as Abstract Syntax Notation One (ASN.1)? ASN.1 is a binary representation scheme for tree-structured data and is used in many communication protocols (one of its uses is the X.509 digital certificate, which we discuss in Chapter 14).
ASN.1 is carefully designed to minimize the size of data. A very efficient protocol handler can be realized by combining an ASN.1 message specification with a good ASN.1 parser implementation. In short, ASN.1 is optimized for efficiency. However, because of this, it has a complicated bit arrangement that makes it hard to understand. You can use automatic protocol generation tools to save some of this effort, but good tools are expensive.
By contrast, XML has a similar ability to represent tree-structured data, but it is much simpler and is easier to understand and work with.
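The trade-off can be made concrete with a one-line comparison. In ASN.1's DER encoding, the integer 70 occupies three octets (tag 0x02 for INTEGER, length 0x01, value 0x46), while the XML element from the weather example spells the same value out as readable text many times that size:

```python
# DER encoding of INTEGER 70: three octets, unreadable without a decoder.
der = bytes([0x02, 0x01, 0x46])

# The XML form of the same value, readable at a glance.
xml = b'<CurrTemp unit="Fahrenheit">70</CurrTemp>'

print(len(der), len(xml))   # -> 3 41
```

ASN.1 wins on the wire by an order of magnitude, but the XML form tells a human reader what the number means without any out-of-band documentation.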
One lesson stands out from the history of the Internet: "Simplicity wins, efficiency loses." On the Internet, the name of the game is openness, that is, accessibility and availability to the public. Even an unparalleled technology will not win in the market without wide support from most of the affected community. A technology that is less efficient than its competitors but open and easy to understand has a better chance of winning on the Internet.
The second substantial benefit of using XML, which is frequently overlooked, is its capability to handle international character sets. Even if you are designing a simple message format, this point alone can be reason enough to adopt XML.
Today, business is often international in scope. This is especially true for Web applications because the Internet easily leaps national borders. It is only natural that business transactions will contain, for example, street names in Chinese and people's names in Arabic. The XML 1.0 Recommendation is defined based on the ISO 10646 (Unicode) character set, so virtually all the characters that are used daily all over the world are legal characters. We discuss character sets and internationalization more in Chapter 3.
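Because XML 1.0 is defined over ISO 10646, such text needs no escaping or special handling. A sketch: a document containing a Chinese street name survives a UTF-8 encode-and-parse round trip with a standard parser (the address is illustrative):

```python
import xml.etree.ElementTree as ET

# An address fragment containing a Chinese street name (made-up data).
doc = "<Address><Street>南京东路100号</Street></Address>"

# Encode to UTF-8 as it would travel over HTTP, then parse it back.
parsed = ET.fromstring(doc.encode("utf-8"))
print(parsed.findtext("Street"))   # -> 南京东路100号
```

No character escaping, code-page negotiation, or custom decoding is involved; the parser handles the full Unicode repertoire by default.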
All the major IT vendors support XML. Many middleware technologies have enabled integration between applications over a network, such as Common Object Request Broker Architecture (CORBA) and DCOM, but none of them won support from all the major IT vendors: IBM, Microsoft, Sun, Oracle, BEA, HP, and so on. This is probably the biggest reason why XML is so popular now.
1.2.4 From Distributed Applications to Decentralized Applications
XML defines a data format but does not specify the middleware that processes the data. Any application that can read and write XML data can participate in an integration, regardless of the middleware it uses. This contrasts with other distributed programming technologies, such as CORBA and DCOM, which require particular middleware, a particular programming language, or a particular operating system in each component application. Applications that use XML for data exchange therefore have a better chance of being integrated with other applications, because they depend less on their running environment. An integrated application whose components do not require particular middleware, a particular programming language, or a particular operating system in order to connect to other components is called loosely coupled. Components of a loosely coupled application typically run on a variety of operating systems and middleware in widely scattered locations.
Integrating multiple Web applications over a network can be seen as reusing existing applications as software components. The history of software engineering is one of abstraction and reuse of software components. In the early days, we had subroutines so that one implementation of an algorithm could be reused many times. The idea of object-oriented programming, which originated with Smalltalk-80, is to encapsulate the details of both algorithms and data structures in an object so that the object remains reusable even if its internals change. Software component technologies, such as JavaBeans and Microsoft's Component Object Model (COM), enable binary components to be reused without recompilation.
Using a Web application as a software component encapsulates not only the details of the implementation of the Web application, but also the middleware, the programming language, and the operating system that the Web application uses. More importantly, it also hides the details of how the application is managed, that is, installation, configuration, updates, backups, and so on. This is particularly important when the cost of managing the component is high.
Consider a software component that calculates the optimum travel route from one city to another. This component is provided as a class library along with a set of header files and a database of the latest flight schedules of airline companies. The application developer uses the header files to compile the application, and the administrator installs and configures the component in the running environment. The problem here is that the flight schedule database must be updated whenever its contents change. The administrator must receive timely updates from the component vendor and incorporate them into the database. By contrast, if this component is provided as a Web application, this management problem is hidden from the administrator's view entirely.
Encapsulating management is also desirable if the component requires special expertise to maintain. A component that requires high security, such as one that manages customers' private data, is a good example.
1.2.5 The World of Web Services: More Dynamic Integration
The idea of integrating Web applications as very loosely coupled software components is now leading us to the next wave of software development, called Web services. Because these components are provided as services available from anywhere on the Internet, with no installation or compilation needed to use them, the idea of integrating them dynamically at runtime (as opposed to at design time or compile time) is becoming realistic. A suite of new standards that makes this idea possible is emerging. In Chapter 13, we discuss this movement in more detail.