Managing Knowledge


Questions

  1. What is the relationship between information work and productivity in contemporary organisations?
  2. Describe the roles of the office in organisation. What are the major activities that take place in offices?
  3. Define an expert system and describe how it can help organisations use their knowledge assets?
  4. Describe three problems and limitations of expert systems.
  5. Define and describe fuzzy logic. What kinds of applications is it suited for?
  6. What are intelligent agents? How can they be used to benefit businesses?

Answer: Question 1

BENEFITS OF USING IT

 

The term Information Work is defined as work performed using information technology (IT). Today’s computer and telecommunications technologies support the automation of many business processes, and information systems have been developed to improve many business operations. For example, Office Automation (OA) supports communication between office personnel (as well as customers), both near and far, through e-mail, voice mail and fax. This allows employees and customers to communicate with one another effectively. It also allows the electronic scheduling of appointments and meetings.

 

Computer, video and teleconferencing facilities allow business partners in different places to discuss business issues and make decisions easily, without having to travel and converge in one physical location.

 

IT can also facilitate the setting up of virtual universities and colleges. Instead of students travelling to far-away places, at great expense, to attend classes, the college can now come to their doorsteps. They can be taught by eminent professors drawn from academia. All this is possible with the introduction of high-capacity transmission media (e.g. optical fibres) and ISDN.

 

IT can also provide entertainment. It is possible to receive video on demand (VOD) from distant places without having to go to a video shop to rent or buy a video tape.

 

Business and research information from around the world can also be obtained through the Internet. Businesses can use Electronic Data Interchange (EDI) to transact their business; for example, they can place their orders and pay for their purchases using EDI.

 

Businesses can use Computer-Aided Design and Computer-Aided Manufacturing (CAD/CAM) and Manufacturing Resource Planning (MRP II) to plan, design, produce and control inventories.

 

Managers can use Executive and Decision Support Systems (ESS and DSS ) to help them in their decision-making. Similarly, marketing, sales, financial and human resource managers can use the Market Intelligence, Sales Monitoring, Financial and Human Resource Information Systems to help them in their respective tasks.

 

Auditors can use audit packages to help them in their audit work. They permit auditors to perform compliance and substantive tests.

 

Doctors can use expert systems (ES) to diagnose and treat patients ( or at least to get a second opinion). Similarly, bank officers can use expert systems to approve loans.

 

Thus, the use of IT can benefit a firm in many ways:

 

·        It can raise productivity

 

·        It can increase sales

 

·        It can reduce costs.

 

·        It can use resources more efficiently

 

·        It can improve customer service.

 

·        It can help in better planning and decision-making.

A firm can use IT in practically every area to improve its business operations. It can also help an organisation to be competitive in today’s increasingly global market place.


Answer: Question 2

The following are basic to office work :

 

a)    receiving or producing information;

b)    processing information (or creating information from data, arranging it for its intended recipient’s purposes);

c)    storing or communicating information - the most traditional and widely recognised office functions of typing, copying, telephoning, mailing, etc.;

d)    control - inspection, audit, etc.

 

 

  The overall objective of the office is to provide efficient, effective service to management and other information users throughout the organisation.

 

Effectiveness is measured by the extent to which the activity fulfils its purpose and its users’ requirements. Efficiency is measured by the extent to which resources are utilised without wastage in the pursuit of effectiveness. Resources include finance, materials and equipment, as well as human time, effort, skills and knowledge. Achieving and maintaining efficiency will usually involve the elimination of delays in the provision of office services, and adherence to the resource budgets set for departments.


Answer: Question 3

An Expert System is interactive software that mimics human experts in a specialised field (e.g. loan approval) and assists the user in the decision-making process. It prompts the user for additional information or further clarification. Based on the information supplied and the information stored internally in the knowledge base (discussed below), it makes recommendations and thus assists the user in arriving at a decision.

 

Expert systems have been developed to mimic doctors, financial analysts, tax experts, engineers, lawyers and geologists. Users consult these built-in “software experts” ( using PCs  or workstations) for expert advice, opinions, recommendations, or solutions. An expert system thus enables the user to obtain the services of one or more real human experts without having to actually meet and consult them!

 

An ES has several benefits :

 

1)   It enables the knowledge of experts to be captured and stored on computer. That means valuable knowledge is still available even if the experts are unavailable (they may go on leave or retire) or their services are too expensive (e.g. those of a medical specialist or a professional tax consultant).

 

2)   It enables users to seek expert advice or opinions without having to actually meet the experts (i.e., it is readily available).

 

3)   It enables decisions to be made in a consistent manner. Thus it can be an excellent training tool.

 

4)   It is inexpensive compared to the charges of human experts.

 

5)   It raises the productivity of decision-makers, as they can obtain instant answers to questions. This gives them more time for other tasks.

 

 

An expert system shell is normally used to develop an expert system. This is a framework that helps a developer to build and use an expert system application. It consists of a knowledge acquisition facility, a knowledge base, an inference engine, an explanation facility and a user interface.

 

 

KNOWLEDGE ACQUISITION FACILITY

 

 

The Knowledge Acquisition Facility  of an expert system shell allows a knowledge engineer to capture the knowledge  of one or more human experts in a particular area of expertise and store that knowledge in a knowledge base in the form of facts and rules.

 

The knowledge engineer is like a systems analyst. He must extract the relevant information from one or more domain experts (i.e., the specialists). This task is by no means trivial: he must interact and spend a considerable amount of time with the experts in order to obtain the necessary information. The information collected is stored in the form of the facts and rules that the experts use to arrive at a solution. The knowledge engineer may use interview and/or observation techniques to obtain the information from the experts.
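
The facts-and-rules structure described above can be sketched as a tiny forward-chaining inference engine. The loan-approval facts and rule names below are invented for illustration only, not taken from any real system:

```python
# A minimal forward-chaining inference engine: a knowledge base of facts and
# if-then rules (as captured by a knowledge engineer) drives the conclusions.
# Fact names and rules are hypothetical examples.

facts = {"income_high", "credit_clean"}

# Each rule: (set of required facts, fact to conclude)
rules = [
    ({"income_high", "credit_clean"}, "low_risk"),
    ({"low_risk"}, "approve_loan"),
]

def infer(facts, rules):
    """Repeatedly fire rules whose conditions hold until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer(facts, rules))  # the derived facts include 'approve_loan'
```

A real shell adds an explanation facility (why a rule fired) and a user interface on top of this core loop.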

   


Answer: Question 4

a)        Only certain kinds of problems can be solved by expert systems.

   

b)        Most expert systems require major development efforts, which are costly and time-consuming.

   

c)        Most expert systems need to be updated regularly as technologies change.

   


Answer: Question 5

Fundamentals of fuzzy sets and fuzzy logic

Henrik Legind Larsen

Aalborg University Esbjerg

 

A new theory extending our capabilities in modeling uncertainty

Fuzzy set theory provides a major newer paradigm for modeling and reasoning with uncertainty. Though there were several forerunners in science and philosophy, in particular in the areas of multivalued logics and vague concepts, Lotfi A. Zadeh, a professor at the University of California at Berkeley, was the first to propose a theory of fuzzy sets and an associated logic, namely fuzzy logic (Zadeh, 1965). Essentially, a fuzzy set is a set whose members may have degrees of membership between 0 and 1, as opposed to classical sets, where each element must have either 0 or 1 as its membership degree: if 0, the element is completely outside the set; if 1, the element is completely in the set. As classical logic is based on classical set theory, fuzzy logic is based on fuzzy set theory.
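
A minimal sketch of such a membership function, using an invented "tall" concept with a piecewise-linear membership curve:

```python
# Fuzzy vs. classical set membership. 'tall' is a fuzzy set: heights between
# 160 and 190 cm get intermediate membership degrees, whereas a classical set
# would force every height to be either in (1) or out (0).
def tall(height_cm):
    """Piecewise-linear membership: 0 below 160 cm, 1 above 190 cm."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30.0

print(tall(150))  # 0.0 -> completely outside the set
print(tall(175))  # 0.5 -> a partial member
print(tall(195))  # 1.0 -> completely in the set
```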

Major industrial application areas

The first wave: Process control

The first industrial application of fuzzy logic was in the area of fuzzy controllers. It was carried out by two Danish civil engineers, L.P. Holmblad and J.J. Østergaard, who around 1980, at the company F.L. Smidth, developed a fuzzy controller for cement kilns. Their results were published in 1982 (Holmblad & Østergaard, 1982). They did not receive much notice in the West, but they certainly did in Japan. The Japanese caught the idea and applied it in an automatic-drive fuzzy control system for subway trains in Sendai City. The final product was extremely successful and was generally praised as superior to comparable systems based on classical control. This success encouraged a rapid increase in Japanese interest in fuzzy controllers during the late 1980s, which led to applications in other areas, such as elevator control systems and air-conditioning systems. In the early 1990s, the Japanese began to apply fuzzy controllers in consumer products such as camcorders, washing machines, vacuum cleaners and cars. The Japanese success led to increased interest in fuzzy controller techniques in Europe and the US.
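
The rule-firing and defuzzification steps of such a controller can be shown in miniature. The membership functions and rule outputs below are invented for illustration, not taken from the systems described:

```python
# A toy fuzzy controller: fuzzify the input temperature, fire each rule to the
# degree its condition holds, and defuzzify by a weighted average of the rule
# outputs. All curves and rule outputs here are hypothetical.

def cold(t):   return max(0.0, min(1.0, (18 - t) / 8))
def comfy(t):  return max(0.0, 1 - abs(t - 20) / 5)
def hot(t):    return max(0.0, min(1.0, (t - 22) / 8))

def heater_power(t):
    """Rules: if cold -> power 1.0; if comfortable -> 0.3; if hot -> 0.0."""
    degrees = [(cold(t), 1.0), (comfy(t), 0.3), (hot(t), 0.0)]
    total = sum(d for d, _ in degrees)
    return sum(d * out for d, out in degrees) / total if total else 0.0

print(round(heater_power(10), 2))  # 1.0 -> heats strongly when cold
print(round(heater_power(20), 2))  # 0.3 -> mild output when comfortable
```

Because the output varies smoothly with the input, such controllers avoid the abrupt switching of classical on/off control.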

 

The second wave: information systems

The second wave of fuzzy logic systems started in Europe in the early 1990s, in the area of information systems, in particular databases and information retrieval. The first fuzzy logic based search engine was developed by the author in collaboration with professor R.R. Yager, Machine Intelligence Institute, US. It was aimed at application in net-based commerce systems, namely the French Minitel, at that time the only one in the world. It was first demonstrated to the public at the International Joint Conference on Artificial Intelligence in Chambery, France. In 1999, the technique was adopted by the Danish search engine Jubii. The ideas and results applied in the technique were published in several places; see, for instance, (Larsen & Yager, 1993, 1997). The Internet and the Web gave new interest to the application of fuzzy logic technology. In the net-based society we have an enormous amount of information and knowledge electronically accessible to decision makers and humans in general. Much of this information is inherently uncertain: it lacks precision, uses vague concepts, is more or less reliable, etc. On the other hand, to be useful, users must be able to utilize it despite the uncertainties. Certainly, imperfect information and knowledge cannot be ignored; for instance: “He said that his department had commissioned a study on the effect of requiring higher mileage cars, which would be smaller and could be less safe.” (New York Times, February 22, 1991; source: E.H. Ruspini, AI Center, SRI International). This is an example of information that is inherently imprecise or vague and therefore not well suited to representation and processing by classical binary logic or probability-based techniques. Much information is too valuable to be ignored, but it requires a human to identify and utilize it. With this huge amount of information, we need computer-based tools to find it, evaluate it, extract its meaning, and partly utilize it for decision support.

Here fuzzy logic is beginning to play, and will in the future play, a major role as a tool allowing us to model and reason with such information and knowledge; in fact, fuzzy logic allows us to properly utilize the information in the uncertainty. A newer application area in this line is data mining (text mining, web mining, …) for the discovery of knowledge. Hence, the second wave is, in particular due to the Internet and the Web, likely to be much greater than the first wave in the control area.

 

How fuzzy sets extend our modeling capabilities

With fuzzy set theory we can provide exact representations of concepts and relations that are vague, that is, with no sharp yes/no borderline between the cases covered, and the cases not covered, by the concept or relation. This allows us to represent, for instance, that a document deals with a topic T1 to some degree (between 0 and 1), that a user is interested in a topic T2 to some degree, and that a topic T3 implies a topic T4 to some degree. With fuzzy set theory and fuzzy logic, we can not only represent such knowledge but also utilize it to its full extent, taking the kind and the form of the uncertainties into account. This does not mean that fuzzy logic renders classical logic and probability theory obsolete. On the contrary: though fuzzy sets and fuzzy logic extend membership degrees and truth values from {0, 1} to the real interval from 0 to 1, the definition of the fuzzy logic formalism still relies on classical logic. Furthermore, we apply statistics based on probability theory in fuzzy data mining of knowledge, the main difference being that probabilities are now associated with fuzzy sets. Another advantage of fuzzy logic is that it allows fast processing of large bodies of complex knowledge, since processing is performed by numerical computation rather than symbolic unification as in, e.g., logic programming formalisms. As opposed to neural nets, fuzzy logic has the advantage that it supports explicit representation of knowledge, as in symbolic formalisms, allowing us to combine knowledge in a controlled way.
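
The document/topic and interest degrees just described can be combined with min (fuzzy AND) and max (aggregation) to rank documents. The topics and degrees below are invented for illustration:

```python
# Fuzzy information retrieval in miniature: each document covers topics to a
# degree, the user is interested in topics to a degree. Score a document by
# min (fuzzy AND) of interest and coverage per topic, then max over topics.
docs = {
    "d1": {"fuzzy logic": 0.9, "control": 0.2},
    "d2": {"fuzzy logic": 0.3, "control": 0.8},
}
interest = {"fuzzy logic": 1.0, "control": 0.4}

def score(doc):
    return max(min(interest.get(t, 0.0), deg) for t, deg in doc.items())

ranked = sorted(docs, key=lambda d: score(docs[d]), reverse=True)
print(ranked)  # d1 ranks above d2
```

Note how a document strongly covering a weakly-interesting topic (d2's "control") cannot outrank one strongly covering a strongly-interesting topic, which a crisp 0/1 match could not express.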

 

2. Outline of the course

The direction taken in the course

By knowledge modeling in the framework of fuzzy sets we introduce a new, powerful basis for the development of advanced information systems. It should be mentioned that although a lot of research has been done in fuzzy logic theory, which has been much further developed since Zadeh’s seminal paper of 1965, work on fuzzy logic software and knowledge engineering, and on efficient algorithms for fuzzy knowledge processing, has been very limited; the latter in particular because the well-known classical algorithms assume classical binary logic. I hope with this course in fuzzy logic (which has also been referred to as Fuzzy Logic Information Technology, Fuzzy Systems, and Fuzzy Logic Engineering) to contribute to an improvement of this situation. With respect to application frameworks, we shall in particular consider information access in the broad sense, including database querying, information retrieval, and object recognition, which are essentially solved through some variant of classificatory problem solving, and, further, data mining, where “hidden” knowledge is retrieved from the information base, which essentially represents a form of inductive problem solving. Though the focus of this course is fuzzy logic, we shall cover central aspects of information retrieval, database querying, and search engines, as well as knowledge representation and algorithms, as related to the engineering part. As opposed to traditional approaches to teaching these topics (courses and textbooks), we adopt fuzzy logic as the basic logical framework and emphasize algorithms for fuzzy knowledge processing.


Answer: Question 6

 

Intelligent Agents and XML - A method for accessing webportals in both B2C and B2B E-Commerce

 

Abstract. In E-Commerce today, webportals are important, and intelligent agents also grow in significance. Our approach is to combine them in designing webportals and interfaces for both users and agents. In this paper we discuss the problems in automatically accessing portals and possible solutions for them through using OOM methods. The solution we selected, using an XML-based standard and dynamically reconfigurable protocols, is described afterwards, and the methods used are shown. Afterwards we briefly present an example, a webportal for sports information.

Keywords: Agents, OOM, OOP, XML, E-Commerce, Webportals

1 Introduction

Both web portals and intelligent agents are important factors on the Internet and of growing importance, especially in E-Commerce. However, combining these two is not that easy. The main problem is how agents retrieve information from a webpage, which is formatted for reading by people. Through the combination of using ebXML [2] and applying methods of object-oriented design to all parts involved, this gap can at least be ameliorated. We propose to use object-oriented modeling techniques not only for the implementation of the software, but also for the design of the data structure to be exchanged, including the dynamic aspects of protocols.

2 Problems of automatically accessing webportals

Webportals provide unified access to a large set of information on certain topics. However, different portals contain different content data and different methods of access. It is therefore hard for customers to locate and buy the information or goods they are interested in, as the methods and relative locations change with each supplier. At the same time, a unified portal or organization is unrealistic (and probably not suited for all types of content). This is partly done by providers on purpose to avoid competition, by complicating comparisons and thereby binding customers (if they can handle a portal, they will not move to another one, where they would have to learn its use anew); both are especially important in B2C E-Commerce. In B2B E-Commerce a difficulty is the automation of procurement: many goods can be bought cheaper or faster on the WWW, but the work has to be done completely manually every time (in contrast, in conventional procurement pre-created forms and standardized procedures can be used). Therefore the need arises for a) a unified method for locating, accessing and buying information, which can be b) automated to a large degree, even including payment [11]. Using the combination of storing data in XML [4] for presentation on webpages and including the possibility of access by intelligent agents, both difficulties can be overcome, leading to an enhancement of E-Commerce.

Specific problems of the current design are:

·        Webpages are designed for different user groups with distinct interests and varying habits, and according to their content. The behavior of the users desiring information also differs: shoppers often want detailed information on a specific product in a fast and easy way (B2B), while visitors of, e.g., sports portals just want to browse or become generally informed (B2C).

·        Because of these diverging interests, automatic access to webportals through programs is complicated, as there is no standardization of how data is presented, where data is located on a webpage, or which categories define the organization of the website.

·        The HTTP protocol is not ideally suited for automatic access, because the connection is closed after each request, the passing of parameters is complicated, and negotiations are not possible.

·        The content data of the communication introduces difficulties: both the syntax and especially the semantics (more of a problem in automatic access than for humans) must be the same on both sides of the communication link to allow meaningful interaction.

Providers may not necessarily be delighted by this, as they will probably face more competition [3]. However, they can also profit, e.g. by reaching more interested users or by being able to present data in a special way not only for one specific group (agents) but for different user groups as well (personalization: different preferences or interests, people with disabilities, etc.).

3 Steps towards solutions

A possible solution would be specifying and using a custom protocol for accessing data. On the one hand, the advantage of this solution is that all parties need only understand and implement one – the then common – protocol. On the other hand, this solution has the disadvantage that a standard must be specified that is suitable for all areas and all users. In addition, providers must agree upon it. Moreover, a single standard coping with everything would be either very complicated, or usable for special tasks only with difficulty.

Using a binary protocol for the exchange of data is another possibility. This could be a protocol based on a certain method of serializing data, like Java object serialization, serializable MFC objects, or a specific program library. The advantage is gained speed in development and processing, and a relatively low required bandwidth for communication. But the disadvantage is too serious: this works only on one type of system and one platform – it is not platform-independent.

As another possibility we consider mobile agents [1], [9] for searching and retrieving data [12]. They possess some advantages concerning protocols: communication with other partners (either other agents or webportals) can be automatically adapted by the agents for a meaningful interaction with their surroundings. They are therefore a good choice for retrieving data when negotiations are necessary. Also, the mobility of agents saves bandwidth, because the bulk of the communication is handled locally and only the agent (with compressed and filtered results) needs to be transferred [10].

Agents are also relevant in connection with protocols for payment: although these are standardized (e.g. SET), unique systems like vouchers, debiting or private E-Cash exist (see [15] for an overview of different payment systems in connection with agents). They usually have many things in common (like identification and the transfer of some data), but the actual data objects and the sequence of messages differ. Agents can adapt themselves to these protocols and in this way allow a wider diversity, also improving E-Commerce.

Using XML for the representation of data would be a good basis for the retrieval of data by agents, and also for its provider: an agent can easily extract information from XML, as it includes the concept of an explicit definition of the data structure. So no additional transformation before the extraction of information (see [5] for an overview of products employing this technique) is required. And using XSL (eXtensible Stylesheet Language) [6] allows different views on, and presentations of, the same data, a benefit for providers. Even when modeling the content data alone, an object-oriented representation is appropriate. The reuse of components allows agents to understand the data at least partially, while this is impossible when unique and proprietary definitions are used.

4 Reconfigurable protocols for information retrieval

Protocols are an important factor in communicating with agents and should always be adapted to the business processes, and not the other way round. Our way to provide adaptable state-based protocols is based on a two-fold object-oriented approach: we build a static implementation hierarchy, as well as a dynamic hierarchy in which protocols call other protocols as elements of themselves. The latter can be stacked to any depth, but no interaction across different levels is possible (subprotocols must terminate before the parent protocol can resume).

This object-oriented approach also allows extensions of existing protocols in two ways: first, the protocol is implemented in an object-oriented fashion and can be extended through subclasses, which override methods (i.e. state transitions) of superclasses. Secondly, it can be provided with different “plug-in protocols” as subprotocols at runtime to change its behavior in some details (user-defined or negotiated with the partner).
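
Under assumed class and method names (not taken from the paper's implementation), the two extension mechanisms might be sketched as:

```python
# Sketch of the two extension mechanisms: subclassing to override state
# transitions (static hierarchy) and plugging in subprotocols at runtime
# (dynamic hierarchy). Subprotocols run to completion before the parent
# resumes, mirroring the no-cross-level-interaction rule. Names are invented.

class Protocol:
    def __init__(self, subprotocols=None):
        self.subprotocols = subprotocols or []   # dynamic "plug-in" hierarchy

    def identify(self):                          # one state transition
        return "anonymous"

    def run(self):
        steps = [self.identify()]
        for sub in self.subprotocols:
            steps.extend(sub.run())              # each subprotocol terminates
        return steps                             # before the parent continues

class AuthenticatedProtocol(Protocol):           # static extension: subclass
    def identify(self):                          # overrides a transition
        return "authenticated"

class DiscountSubprotocol(Protocol):             # a runtime plug-in
    def identify(self):
        return "discount-negotiation"

p = AuthenticatedProtocol(subprotocols=[DiscountSubprotocol()])
print(p.run())  # ['authenticated', 'discount-negotiation']
```

Leaving the subprotocol list empty corresponds to the "rudimentary transaction" fallback described below: the parent protocol still runs, just without the optional steps.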

An advantage of this approach is that if an agent does not understand a certain subprotocol, in some cases another one can be substituted (e.g. a known superclass of the unknown one), or the subprotocol can simply be left out. This allows an agent to do transactions at least in a rudimentary way, e.g. without reliable identification of the partner or without using special discounts or options. Another advantage, for the developer, is that creating protocols dynamically in a hierarchy allows using an object-oriented modeling approach for the protocol itself and not only for its implementation.

5 Modeling content data as a class hierarchy with OOM techniques

For modeling the content data, OOM techniques [14] are appropriate too. With object-oriented analysis one can easily analyze the content data for possible classes and attributes. These classes and attributes can be transformed into an XML representation without the necessity of an additional encoding.

Even though (pure) XML does not offer the concepts of object-oriented programming (OOP), with XML Schema [16] it is possible to work with inheritance and datatypes (like string, float, etc.). Different namespaces are also supported. Including data types further improves reliability, as the agent can then (in some cases) retrieve information even from unknown data types (e.g. retrieving the price by searching for the only element of the type “Currency”).
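
As a hypothetical sketch of such type-driven extraction (the schema fragment and helper function are invented for illustration), an agent could scan a schema for the element declared with the type "Currency":

```python
# Locate elements by declared type in an (invented) XML Schema fragment:
# an agent that knows the type "Currency" can find the price element
# without understanding the rest of the vocabulary.
import xml.etree.ElementTree as ET

XSD = "http://www.w3.org/2001/XMLSchema"
schema = ET.fromstring(f"""
<xsd:schema xmlns:xsd="{XSD}">
  <xsd:element name="Description" type="xsd:string"/>
  <xsd:element name="Price" type="Currency"/>
</xsd:schema>""")

def elements_of_type(schema, type_name):
    """Return the names of all element declarations with the given type."""
    return [e.get("name") for e in schema.iter(f"{{{XSD}}}element")
            if e.get("type") == type_name]

print(elements_of_type(schema, "Currency"))  # ['Price']
```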

The advantage of this object-oriented modeling is that agents can retrieve at least some information from the data even though they cannot interpret the specification of the actual object: understanding one of the superclasses may often suffice. For the owner of the agent this means that he at least gets feedback and can decide whether it makes sense to give the agent more specific information (or abilities) or not.

Another advantage of OOM [8] is that older classes can be used as components if new data types are required. The specification of classes is also written in XML and can therefore be distributed easily (e.g. in addition to the content data, so that the recipient can manually interpret them through comments or names, even though the agent cannot).

6 Sample implementation

This system was implemented in Java, based on an agent system [13] developed at the institute, which has a special focus on security [7]. The communication is based on ebXML messages (a set of specifications for using XML for a modular E-Commerce framework with a focus on business processes), but also includes local broadcasts, which are not provided for in the standard. The data transmitted is also modeled in XML. Currently, as a research project, a web portal for sports is under development, which will also be accessible by agents.

As an example, the definition of the content data specified for a member of a portal (called PortalMember) is presented in Fig. 1. First of all, a namespace is declared, in which all XML Schemas and XML files are included. Furthermore, in this XML Schema for the data of portal members, the schema for members in general (e.g. portals, clubs, etc.), called MemberData, is included; it is used as a “base class” for PortalMember. PortalMember inherits from MemberData and is extended with additional elements and attributes that are specifically needed for members of portals.

<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema targetNamespace="http://www.fim.uni-linz.ac.at"
            xmlns="http://www.fim.uni-linz.ac.at"
            xmlns:xsd="http://www.w3.org/2000/10/XMLSchema"
            elementFormDefault="qualified">
  <xsd:include schemaLocation="MemberDataDoc.xsd"/>
  <xsd:element name="PortalMember" type="PortalMember" minOccurs="0" maxOccurs="unbounded"/>
  <xsd:complexType name="PortalMember">
    <xsd:complexContent>
      <xsd:extension base="MemberData">
        <xsd:sequence>
          <xsd:element name="Fee" minOccurs="0" maxOccurs="unbounded">
            <xsd:complexType>
              <xsd:sequence>
                <xsd:element name="Amount" type="xsd:double"/>
                <xsd:element name="Currency" type="xsd:string"/>
              </xsd:sequence>
            </xsd:complexType>
          </xsd:element>
          <xsd:element name="DEntrance" type="xsd:date"/>
          <xsd:element name="DWithDrawal" minOccurs="0" maxOccurs="1" type="xsd:date"/>
          <xsd:element name="UserID" type="xsd:string"/>
          <xsd:element name="Password" type="xsd:string"/>
        </xsd:sequence>
        <xsd:attribute name="FeePaid" use="required" value="No">
          <xsd:simpleType>
            <xsd:restriction base="xsd:string">
              <xsd:enumeration value="Yes"/>
              <xsd:enumeration value="No"/>
            </xsd:restriction>
          </xsd:simpleType>
        </xsd:attribute>
      </xsd:extension>
    </xsd:complexContent>
  </xsd:complexType>
</xsd:schema>

Figure 1: Schema for members of portals (Extension of general members)

The example above is used for retrieving information on a portal member (which is only allowed after identification). The agent can use the information provided, e.g. for autonomously paying the recurring fee or for verifying the personal data.
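
A sketch of how an agent might read the fee from a PortalMember instance; the instance document below is an invented example that merely follows the shape of the schema in Fig. 1:

```python
# An agent extracting the recurring fee from a (hypothetical) PortalMember
# instance document. The structure mirrors Fig. 1: Fee with Amount and
# Currency, plus the FeePaid attribute.
import xml.etree.ElementTree as ET

NS = {"p": "http://www.fim.uni-linz.ac.at"}
member = ET.fromstring("""
<PortalMember xmlns="http://www.fim.uni-linz.ac.at" FeePaid="No">
  <Fee><Amount>25.0</Amount><Currency>EUR</Currency></Fee>
  <DEntrance>2001-03-01</DEntrance>
  <UserID>jdoe</UserID><Password>secret</Password>
</PortalMember>""")

amount = float(member.findtext("p:Fee/p:Amount", namespaces=NS))
currency = member.findtext("p:Fee/p:Currency", namespaces=NS)
if member.get("FeePaid") == "No":          # attribute drives the agent's action
    print(f"pay {amount} {currency}")      # pay 25.0 EUR
```

Because the element names and types come from the shared schema, the same extraction code works against any portal publishing PortalMember data.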

7 Conclusion

As explained, using XML and a hierarchical object-oriented modeling of the data is not sufficient, because the semantics must also be specified: different implementers should not only create compatible programs, they must also adhere to the same semantics, which is of special importance in open systems like the Internet. ebXML, with its specification of both syntax and semantics and their storage in a public repository, is an important step in this direction.

However, even though ebXML is simpler than EDI, it is not trivial (and cannot be, because of the complicated requirements it should fulfill). It is therefore sensible to use intelligent agents to support it, providing even more interoperability through automatic adaptation, and flexibility through fast adjustment to new requirements and platforms. But agents also benefit from using XML: analyzing and interpreting the data gets much easier compared to HTML or text files.

The combination of using XML for the data and intelligent agents for the processing therefore seems ideal, as each benefits from the other. To fully realize this, however, all aspects, static and dynamic, need to be modeled and implemented with a view to object orientation. This combination allows extending the user groups of web portals to include agents with little work, with the benefit of automatic information retrieval. It also addresses concerns about information robbery by competitors, through the possibility of identification and/or payment.

 

References

1. Brenner, W., Zarnekow, R., Wittig, H.: Intelligente Softwareagenten. Grundlagen und Anwendungen [Intelligent Software Agents: Foundations and Applications]. Springer, Berlin (1998)

2. ebXML, http://www.ebxml.org (March 2001)

3. Glushko, R. J., Tenenbaum, J. M., Meltzer, B.: An XML Framework for Agent-based E-Commerce. Communications of the ACM, Vol. 42, No. 3. ACM Press, New York (1999)

4. Goldfarb, C. F., Prescod, P.: The XML Handbook. Prentice-Hall, New York (1998)

5. Guttman, R., Moukas, A., Maes, P.: Agents as Mediators in Electronic Commerce. In: Klusch, M. (Ed.): Intelligent Information Agents. Agent-Based Information Discovery and Management on the Internet. Springer, Berlin (1999)

6. Holzner, S.: XML Complete. McGraw-Hill, New York (1998)

7. Hörmanseder, R., Sonntag, M.: Mobile agent security based on payment. ACM SIGOPS Operating Systems Review, Vol. 34, No. 4, New York (2000)

8. Jacobson, I., Ericsson, M., Jacobson, A.: The Object Advantage: Business Process Reengineering with Object Technology. Addison-Wesley, New York (1995)

9. Jennings, N. R.: An agent-based approach for building complex software systems. Communications of the ACM, Vol. 44, No. 4. ACM Press, New York (2001)

10. Kotz, D., Gray, R. S.: Mobile Agents and the Future of the Internet. ACM SIGOPS Operating Systems Review, Vol. 33, No. 3, New York (1999)

11. Mühlbacher, J. R., Sonntag, M.: Teaching Software Engineering and Encouraging Entrepreneurship through E-Commerce. In: Proceedings of the 2nd International Conference on Innovation Through E-Commerce (IeC99). Manchester (1999)

12. Papazoglou, M. P.: Agent-oriented technology in support of e-business. Communications of the ACM, Vol. 44, No. 4. ACM Press, New York (2001)

13. POND – Agent System: http://www.fim.uni-linz.ac.at/Research/Agenten/index.htm

14. Rumbaugh, J.: Object-Oriented Modeling and Design. Prentice Hall, New York (1991)

15. Vogler, H., Moschgath, M., Kunkelmann, T.: Enhancing Mobile Agents with Electronic Commerce Capabilities. In: Klusch, M., Weiß, G. (Eds.): Cooperative Information Agents II. Learning, Mobility and Electronic Commerce for Information Discovery on the Internet. Springer, Berlin (1998)

16. W3C: XML Schema: http://www.w3.org/XML/Schema (March 2001)


Copyright © 2003  [Tan Liat Wee]. All rights reserved.
Revised: April 04, 2003.