16 July 2005

Ontology editor comparison

Ontology editor comparison

This is a nice chart comparing features of a large number of potential Ontology editors. Unfortunately, it is a little dated, but not too much.

It is part of an article on such things, found here.

Nice article, nice list. I just wish that someone would step up and generate one a little more modern - comparing, for instance, Protege 3.0 and 3.1, and others. Wish I had the time to do it myself.

14 July 2005

Components of Ontology: Rules

We believe that the application of relationships between entities must be based on rules for those relationships to create semantically correct communications. There are several reasons for this, but the first (and most obvious) is that without rules governing which objects may fill the slots in a sentence (or other collection of entities for communication), we are open to building semantically nonsensical collections of entities. As an example, consider the basic structure of an American English sentence, which follows the form SVO (Subject, Verb, Object - as in "Jack carries the ball"). Even if you follow the correct syntax, you can come up with "Banana repairs mountain", which is syntactically correct, but semantically it is nonsense. There have to be rules - rules that state that whatever the verb implies the subject is doing, the object must be able to match that pairing both syntactically and sensibly. Another reason for rules is making sense out of the number of choices. This is especially true of automated systems that are attempting to make semantic sense, as they cannot hope to make sense of the rapidly approaching infinite number of possible choices, even when working with a rather small and limited taxonomy.
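
To make the idea concrete, here is a rough Python sketch (not anything from a published method) of rules that gate an SVO triple on the properties of its entities. The property names ("animate", "portable", "can_repair", and so on) are invented purely for illustration; a real formal ontology would define them explicitly as property-exhibiting concepts.

    # Entities and their (invented) property-exhibiting concepts.
    ENTITY_PROPERTIES = {
        "jack":     {"animate", "can_carry", "can_repair"},
        "ball":     {"portable"},
        "banana":   {"portable", "edible"},
        "mountain": {"terrain_feature"},
    }

    # Each verb states which properties its subject and object slots require
    # before the SVO triple is semantically admissible.
    VERB_RULES = {
        "carries": {"subject": {"can_carry"}, "object": {"portable"}},
        "repairs": {"subject": {"can_repair"}, "object": {"repairable"}},
    }

    def semantically_admissible(subject, verb, obj):
        """True if the SVO triple satisfies the verb's property rules."""
        rule = VERB_RULES[verb]
        return (rule["subject"] <= ENTITY_PROPERTIES[subject]
                and rule["object"] <= ENTITY_PROPERTIES[obj])

    print(semantically_admissible("jack", "carries", "ball"))        # True
    print(semantically_admissible("banana", "repairs", "mountain"))  # False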

Grammars and Ontology

Just as we have seen earlier that both taxonomies and knowledge bases are separate from a formal ontology, so too is a grammar separate from a formal ontology.
Grammar is defined as “containing the morphologic, syntactic, and semantic rules for a specific language” [American heritage dictionary reference]. The three elements of that definition can be specifically defined as follows:

  • morphology is the study of the rules for forming admissible words

  • syntax is the study of rules for forming admissible sentences

  • semantics is the study of language meaning


The grammar of a system of communications to be used for an information system is very similar to this definition. We propose that instead of forming words and sentences, we are forming data elements and meaningful packets of data elements, making up more complex ideas than the individual elements are capable of (just as sentences are able to convey a lot more than individual words). A grammar for a data system is concerned with the meaning of the data elements, the formulation of admissible (able to be transmitted, and able to be received) packets of communicating information based on those elements, and finally the meaningfulness of the packets of data to both systems (the transmitter and the receiver).

A formal ontology contains these same elements, and perhaps goes further in ensuring the specific definition of the concepts, property-exhibiting concepts, and property-values of those concepts that underlie all of the elements being addressed by the grammar. A formal ontology provides a specific attempt at satisfying the needs of the semantic portion of the grammar.

Rules from Three Sources

Within a linguistics system, these rules can come from one of three different places. They can come from the transmitter imparting the communication, or speaker. They can come from the perceiver, or listener. Or they can be dictated by the system in question. With spoken language, the only discriminating source is the perceiver, as that is what finally determines the meaning of what is spoken, regardless of what the speaker had in mind. The listener can ask the speaker to restate something, and they can slowly come to an understanding (perhaps the speaker can eventually ensure that the listener perceives what the speaker is saying in the manner that the speaker intended, but this is not automatically so). However, in the case of computer systems, we have an advantage. That advantage has been hinted at in the previous section, and we will go over it some more here, as it has a great deal of bearing on the system of rules and on evaluating them.

The Benefit of Not Understanding

The advantage that an information system has when considering the rules of a system, and the "semantic" meaning of a system-to-system communication method, is this - the information system doesn't really understand what is being communicated. That may seem a bit pedantic and overstated at first, but it has many implications. When a person hears a sentence, they "understand" the language. They have pre-conceived opinions about words, patterns, data. An automated system does not understand the words. At best, it has an ontological map that it can use to draw patterns, and to formulate ingest and processing based on those patterns (and the understood implications of order and syntax). But those rules for ingest and processing can be delivered (or be in place) before the communication takes place. The systems can agree to share, explicitly and without pre-conceived opinion or prejudice, exactly where in the ontology of the universe of discourse the data elements being exchanged fit, and what they mean. In short, the rules for communication and formulation of semantic ideas can come from the "system" of communication, and don't rely on either the transmitter or the perceiver alone.

This has been, so far, the introduction we felt necessary to establish the need for rules, and how they can benefit communications. Now we must consider where they reside. Even though we have suggested that the rules are part of the "system", that still leaves a big question: are they a part of the formal ontology, or should they be left up to the collection of systems relying on the ontology for communications? We believe that the answer is both, and this is an explanation of how we see that.

When considering rules, we believe that there is the possibility for two different types:

  • Rules that apply to the possible

  • Rules that apply to the actual


The former is necessarily a superset of the latter, and should be all-encompassing and non-limiting. As the formal ontology will have uses and users that its original architects cannot conceive of at the time of inception, all of the possible eventualities that can exist between entities should be catered to.

As systems begin to use the formal ontology for specific purposes, there will necessarily be the demands of context and occasion, and the requirements for more detail and explicitness in some areas, and less in other areas. These comprise what are commonly called business rules. These business rules limit the possible down to a set of the actual, but it is a limitation with a purpose. It should eliminate ambiguities and redundancy, and it should make the operation of the overall system of communications more efficient for the purpose at hand.

We see that we have rules divided into the possible and the actual. The first set, the possible, is driven almost entirely by the total matrix of what is possible within the syntax of the system, and these rules should reside within the formal ontology. The second set, the actual, is based partly on the business rules and use cases of the systems employing the formal ontology, but also on the existent and changing states of the entities and relationships within the systems communicating; as the states of those items change, the rules determining the actual will change. It is very use dependent, yes, but it is also very much state dependent. These rules cannot exist within the formal ontology without there being an insurmountably large number of such rules covering all possibilities, so they should instead exist within the systems making use of the formal ontology. Partially, what is actual can be derived from the changing state of the entities, through implications. For instance, if you have a rule governing the possible that states all wheeled vehicle entities are capable of road movement, then the changing state of the location of the wheeled vehicles (sometimes on a road, sometimes not) will be an implicit limitation on the application of that rule (the actual rule).
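
As a rough sketch of that split (using the wheeled-vehicle example, with invented entity and state names), the possible rule can live with the formal ontology while the actual rule is derived at run time from the changing state of an instance:

    # Sketch of "possible" vs "actual" rules. The possible rule lives with the
    # formal ontology; the actual rule is derived from an instance's state.
    POSSIBLE_RULES = {
        # class-level rule: every wheeled vehicle is capable of road movement
        "road_movement": lambda entity: "wheeled_vehicle" in entity["classes"],
    }

    def actual_rules(entity):
        """Restrict the possible rules by the instance's current state."""
        allowed = set()
        for name, is_possible in POSSIBLE_RULES.items():
            if not is_possible(entity):
                continue
            # state constraint: road movement only applies while on a road
            if name == "road_movement" and not entity["state"].get("on_road", False):
                continue
            allowed.add(name)
        return allowed

    truck = {"classes": {"wheeled_vehicle"}, "state": {"on_road": True}}
    print(actual_rules(truck))          # {'road_movement'}
    truck["state"]["on_road"] = False
    print(actual_rules(truck))          # set()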

Rules must be based on Properties

As can be seen from the aspects of this description and definition, the rules that are applied are all based on the properties of the entities and relationships affected. Without properties, there can be no application of rules, and all relationships would be equally applicable to all entities (a system of universality that results in no system at all). There must be properties, the application of rules must be based on properties, and then the connecting of entities via relationships must be based on these rules.

It is enough, based on the section above, to allow that all of the actual limitations of the rules will take place from within the formal ontology, based on the changing state of properties and property-values; however, it MAY be that some of the limitations on the complete set of possible rules will be made from outside business rules. These are not for the formal ontology to define.

Method for Evaluation

The method for evaluation therefore falls, in this case, not to the use cases, but rather to the potential for all possible rules. All possible rules are based on properties, which are exhibited via the range of concepts that apply to an entity or relationship (and the property-values applied to those concepts). The set of all possible rules should therefore be evaluated against the ranges of concepts that exist to limit those rules. Are the ranges sufficiently limited to allow for the precise application of rules where necessary? And are the ranges sufficiently broad to allow for a finite and understandable set of rules to be in place? If the ranges are not sufficiently broad, then too many small rules are in place, and no system can hope to have sufficient understanding and functional cataloging of all of them.

While use cases are too specific to form the basis of an evaluation concerning the inherent rules of a formal ontology, it is true that the universe of discourse that the ontology is intended to support must be considered. If the types of communications require (generally) a great deal of precision, then the ranges of concepts can be appropriately smaller. On the other hand, if the universe of discourse is itself quite large, and the number of entities to be considered is correspondingly very large, then the rules should be broader, and the evaluation of rules should consider broad ranges of concepts in a favorable light.


Tags:

Components of Ontology: Relationships

Relationships within a formal ontology are the means that allow entities to modify and combine with each other. It is important to realize that the relationship itself does not affect an entity; rather, the entities that it relates provide the effects or changes. For instance, relating the entities "truck" and "movement" (truck HAS movement) gives the idea of a truck moving somewhere. The relationship HAS does nothing for the truck, but it does allow for the semantic idea of the truck having movement (or "moving", using the correct syntactic form).

Concepts of Relationships

Relationships are comprised of concepts, just as the other compounds (entities and non-atomic concepts) are. These concepts are a little more abstract and difficult to see than they are in the other compounds, but they are present nevertheless. The reason they are difficult to see is that we are not accustomed to thinking of them consciously. When we communicate, the rules of our language and semantics are definitely bounded by the concepts that comprise relationships. For an example, let us consider the binary relationship "tank has crew". There are two entities, tank (the subject of the relationship) and crew (the object of the relationship), brought together by the relationship HAS. These two entities each have a number of concepts, some of which should be apparent.

Consider the relationship "has". In this case, it is being used to define that an entity has as part of itself a number of other entities. To put it simply, think of it in terms of the taxonomical hypernym classes that exist outside of both "tank" and "crew". Those hypernyms are "vehicle" (in the case of "tank"), and "component" (in the case of "crew"). This then becomes "vehicle has component", for a tank is a vehicle, and crew is a component of a vehicle.

At this point, "has" now has a few interesting concepts. First, it has a time subjectivity concept, by which I mean this - if we say a tank has crew, we mean two things.
  1. A tank has the CAPACITY to contain 4 crew members (and needs 4 to function fully).

  2. A tank has the POTENTIALITY of carrying 0, 1, 2, 3, or 4 crew members, subject to its current state (is it in storage, is it in the field, has it been damaged, etc.).


Second, the relationship "has" can imply the concept of specificity, semi-specificity (or class specificity), or non-specificity. By this I mean that the tank can have

  • INSTANCE-SPECIFIC crew (Carol, Bob, Ted, and Alice)

  • CLASS-SPECIFIC crew (gunner, loader, driver, commander)

  • NON-SPECIFIC crew (4 bodies)



These are all concepts of the relationship, which allow it to be redefined, or to have its properties defined.

When looked at that way, the relationship "has" can be divided up into two more precisely defined relationships of "has capacity of" and "has currently".
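
A minimal Python sketch of that split, with invented structures and the crew size of 4 taken from the example above - "has capacity of" behaves as a persistent, class-level concept, while "has currently" is state-dependent and instance-level:

    # Class-level, persistent concept: what the tank is capable of holding.
    TANK_CLASS = {
        "has_capacity_of": {"crew": 4},            # CAPACITY (class-specific)
    }

    # Instance-level, state-dependent concept: what it holds right now.
    tank_instance = {
        "class": TANK_CLASS,
        "has_currently": {
            "crew": ["Carol", "Bob", "Ted"],       # INSTANCE-SPECIFIC crew
        },
    }

    def crew_report(tank):
        capacity = tank["class"]["has_capacity_of"]["crew"]
        current = tank["has_currently"]["crew"]
        return (f"capacity {capacity}, "
                f"currently {len(current)} ({', '.join(current)})")

    print(crew_report(tank_instance))  # capacity 4, currently 3 (Carol, Bob, Ted)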

Aspects of Relationships

Just as entities can have both a real/non-real aspect as well as a tangible/abstract aspect, so relationships have a number of different aspects, some of which were hinted at in the previous example.

One of the aspects of a relationship is the choice between a relationship showing an actuality or a potentiality. This is often a defining factor as to whether the relationship applies to persistent properties (or property values) of an entity (which are often associated with the class of an entity), or whether the relationship applies to the non-persistent properties of an entity (which are often associated with the instance of an entity). A class-entity has the entities affecting its potentiality related to it (which indicates a persistent property), and an instance-entity has the entities affecting its actuality related to it (again, this indicates a non-persistent property).

The nature of relationships (and their application to entities) is sometimes based on the changing state of the entity. This gives a temporal basis to the relationship. As with so many other aspects affecting the other components of our formal ontology, this temporal basis is grounded in the use of the ontological definition. Temporal basis, as it is based on the changing state of entities through time, is very much related to events and phenomena (as events have a time component, and phenomena are concerned with changing state).

Finally, some relationships can be redefined based on qualification, as opposed to quantification. It is fine to say that "tank HAS crew", which is a qualification relationship (giving some definition to the tank entity, by relating it to a defining entity), but it is a quantification relationship to list the number of crew that the tank has (whether potential or actual). Quantification and qualification relationships are often used to define the state (current or otherwise) of the subject entity.

The inherent supporting needs for a relationship to exist between two entities are often very easy to accommodate. For instance, if a relationship exhibiting quantification is to exist, then the only structural (syntax) requirement is that both the object entity and the subject entity of the relationship must each support the idea of a number of something being related to something else. Thus, syntactically, it is perfectly fine to say that "tank HAS 2 wings". This is syntactically correct, however it is not semantically correct (unless we are working within a universe of discourse that allows for winged tanks). The semantics of relationships are dealt with in the next component (rules).
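
As a hypothetical illustration of that syntax/semantics split (the concept lists are invented), a quantification statement can pass a purely structural check and still fail the semantic rule:

    # Invented ranges of concepts for two entity classes.
    ONTOLOGY_CONCEPTS = {
        "tank":     {"crew", "tracks", "main_gun"},
        "aircraft": {"crew", "wings", "engines"},
    }

    def syntactically_valid(subject, count, component):
        # the only structural requirement: a subject, a number, and a component
        return isinstance(count, int) and bool(subject) and bool(component)

    def semantically_valid(subject, count, component):
        # the semantic rule: the subject's range of concepts must include the component
        return component in ONTOLOGY_CONCEPTS.get(subject, set())

    print(syntactically_valid("tank", 2, "wings"))      # True
    print(semantically_valid("tank", 2, "wings"))       # False
    print(semantically_valid("aircraft", 2, "wings"))   # True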

Method for Evaluating Relationships

Our method for evaluating relationships should be simple and easy to define; however, it (as with the other methods discussed above) must be based in the intended use of the ontological definition.

The types of relationships (potential/actual, qualification/quantification, temporally based) that exist must each be considered to see whether all of the combinations of entities that are required to satisfy the universe of discourse can be assembled.

The syntactic requirements for the relationships, which are based on the properties (or concepts) of the entities being related, must exist for all the sorts of relationships that need to be supported for the universe of discourse.

Finally, the relationships must be accessible enough via definition that they can be the objects of relationship-controlling rules.


Tags:

Ontological Components: Entities

The ontology of an information system (whether for computer science, cognitive science, information management, or other areas) differs somewhat from the Ontology referred to by philosophy. In philosophy, Ontology is the theory of being [reference]; it is the theoretical description of all that is, and how it relates to each other, within a universe of discourse. There is a subtle difference existing within the study of ontology for purposes of enabling an information system - a difference that is helpful to keep in mind when considering entities. That difference is this - in an information system, ontology is the study of the REPRESENTATION of all that exists within the universe of discourse.

Taxonomies, Knowledge Bases, and Ontology

An interesting side topic that is of value to consider when discussing entities and the method for evaluating them is this: an ontology is not a taxonomy, nor is it a knowledge base, but it can include both of those things.

A taxonomical structure is a hierarchical classification of all the data or linguistic objects (words, data, elements, etc.) to be found within a universe of discourse [reference]. A knowledge base is a taxonomy (which is just a hierarchy of classes) that has been populated with enumerations matching the various classes [reference]. A formal ontology includes the former, among other things, but where the elements of a taxonomy differ when they are included in a formal ontology is this - an ontological view of a taxonomy also includes a definition of all the underlying assumptions and properties that define each of the classes in the taxonomy. A formal ontology need not contain the enumerations that exist within a knowledge base (and it seems difficult to imagine such an enumeration being complete in any but the most simple of universes of discourse). As I mentioned, this is a side topic, but it is interesting to know what we conceive of when we mention the terms taxonomy and knowledge base.

The entities of a formal ontology include the classified and defined representations of objects, events, and phenomena. Each of these is classified in a hierarchy similar to a taxonomy. They are defined by their component concepts, for all entities are compounds (as defined in the previous section). Some of the characteristic properties of entities derive from their place in the taxonomical hierarchy, and these should be captured as concepts of the entity.

Entity Class and Entity Instance

We see therefore that entities within a formal ontology are classes, in the taxonomical sense, but with a formal definition of how those classes are defined (the exhibition of concepts related to the entity). This (the class) is one of two possible views of entities within a formal ontology. The second possible view of an entity is the potential for enumerations within that class.

It is possible for an enumerated entity to begin to have a separate identity from that of its parent class. This is through the introduction of non-persistent properties. Each enumerated entity shares the same persistent properties, and these are defined by the taxonomical class, as well as all the ontological concepts that define that class. All enumerations deriving from that class inherit those concepts. These are the persistent properties of those enumerations. However, the nature of an entity to exist within space and time, and an entity's nature to change state as it is related to other entities (particularly phenomena), determine that it will have some properties that are non-persistent, and these change not only from enumeration to enumeration, but also over time.

As an example, let us consider the entity "GP40 Diesel Electric Locomotive". This entity is a class within a formal ontology that considers (as part of its universe of discourse) railroad assets. Some of the persistent properties of this class are these - 3000hp engine, 40 inch wheel diameter, rated weight 257,000 lbs, etc. These are the properties common to the whole class. Now, if we take a particular enumeration of this class, say "CSX engine number 6936", this enumeration takes on a number of non-persistent properties: its current location (Louisville KY?), its consist of freight cars, its driver, its current paint job, its maintenance state, its actual rate. All of these things are properties that can define such an entity, if the ontological need exists. The values of these properties may change (or perhaps not exist) when considering other enumerations of the entity "GP40 Diesel Electric Locomotive". Property exhibiting concepts might exist across the various enumerations, but the values of those properties might change (such as location - each existing instance of a GP40 has the concept location, but the value of that concept changes from instance to instance, and over time).
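
A rough sketch of the class/instance split in Python, using the GP40 example; the persistent values come from the text above, while the non-persistent state values are invented placeholders:

    # Persistent properties, common to the whole class.
    GP40_CLASS = {
        "horsepower": 3000,
        "wheel_diameter_in": 40,
        "rated_weight_lbs": 257000,
    }

    class LocomotiveInstance:
        def __init__(self, designation, entity_class):
            self.designation = designation
            self.entity_class = entity_class   # persistent (class) properties
            self.state = {}                    # non-persistent (instance) properties

        def property_value(self, name):
            # look up a non-persistent (state) value first,
            # then fall back to the persistent class value
            if name in self.state:
                return self.state[name]
            return self.entity_class.get(name)

    csx_6936 = LocomotiveInstance("CSX 6936", GP40_CLASS)
    csx_6936.state["location"] = "Louisville, KY"        # changes over time
    csx_6936.state["maintenance_state"] = "in service"   # invented value

    print(csx_6936.property_value("horsepower"))  # 3000 (persistent, from the class)
    print(csx_6936.property_value("location"))    # Louisville, KY (non-persistent)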

It is important to make the distinction between the enumerations of a particular taxonomical class and separate, but related, classes. This distinction becomes especially important as we consider hypernyms and hyponyms within a taxonomical hierarchy. Hypernyms and hyponyms are redefinitions of classes, into other classes, at different resolutions. This is easier to see through example than it is to explain. For instance, the taxonomical class "vehicle" is a hypernym for the classes "chariot", "helicopter", "locomotive" and others. Those other classes are also vehicles, but each more precise class is considered at a higher resolution of detail. Likewise, the taxonomical classes "hydrocodone", "penicillin", and "ritalin" are all hyponyms of the taxonomical class "drug". The class "drug" is similar to the others, but considered at a lower resolution of detail.

In a graphical representation showing the hierarchy of the classes of a taxonomy, the hypernyms are parents and the hyponyms are children. The failing of this sort of representation is that it becomes very complicated very quickly when you have classes that are hyponyms of multiple "parents", which happens very easily within any but the most simple of taxonomies (or ontologies).
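
One way to cope with that failing is to treat the taxonomy as a directed graph rather than a strict tree, so a class can be a hyponym of more than one parent. A small sketch (the extra class names beyond the example above are invented):

    # Each class maps to its direct hypernyms ("parents"); more than one is allowed.
    HYPERNYMS = {
        "chariot":    {"vehicle"},
        "helicopter": {"vehicle", "aircraft"},   # two parents: why trees break down
        "locomotive": {"vehicle"},
        "vehicle":    set(),
        "aircraft":   set(),
    }

    def ancestors(term):
        """All hypernyms of a class, direct and inherited."""
        found = set()
        frontier = set(HYPERNYMS.get(term, set()))
        while frontier:
            parent = frontier.pop()
            if parent not in found:
                found.add(parent)
                frontier |= HYPERNYMS.get(parent, set())
        return found

    print(ancestors("helicopter"))   # {'vehicle', 'aircraft'}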

The Categories of Entity

Within a formal ontology, there is room for both aspects of an entity, the class and the instance (or, enumeration). The class comes from the roots that a formal ontology has in a taxonomy, and the presence of instances allows the formal ontology to serve as a knowledge base. Both of these are required, as both have separate property issues that need to be addressed by the ontological concept structure. The same is true for all three aspects of entity - object, event, and phenomena.

  1. Objects are easy to comprehend, as they are the objects and things of the world (universe of discourse) for which the formal ontology is describing representations. What is important to understand about objects is that they have several aspects that might not be immediately apparent.

    1. First, objects can be tangible or abstract. By naming an object tangible, I mean that it can be a physical object (such as a truck, or a country). Equally valid, however, is that an object can be an abstract thing - something that has definite properties but is not physical. Some examples of abstract objects could include culture, organizations, or decisions.


    2. A second aspect to consider is that when discussing either tangible or abstract objects, those objects do not necessarily need to be "real". It is perfectly valid to have a formal ontology define representation of an entity (object or otherwise) that is non-real, so long as the entity has an "essence" that can be defined through a combination of property exhibiting concepts. This idea of non-real objects can include future objects, nominal objects, or potential objects (and in all cases, they can be tangible or abstract). The specific description of the representation is what is important for a formal ontology to be sound, not necessarily that all of the objects represented be soundly based in reality.


  2. In addition to objects, there is also the sub-category of entity, which we refer to as events. We define an event as "an entity with a time component". An event-entity is similar to an object-entity in that it can be tangible or abstract, but it has a period of time (which, as above with object-entities, need not be a real period of time, but could be future, past, nominal, or potential) during which it represents an entity with defined properties. An example of a tangible event is a rainstorm (with a start time, a stop time, and a tangible object related to it - rainfall). An example of an abstract event is a meeting (an ad hoc organization structure with a time component).


  3. The final sub-category of entity is phenomena. Phenomena are the entities that, when related to other entities through the relationship component of a formal ontology, change the state of the entity they are related to. This is done by affecting the properties or property-values of the affected entity. Phenomena share the aspects of entities in that they can be real or non-real, they can be tangible or abstract, and they can exist over time, similar to an event. Phenomena include the elements of linguistics that we think of as verbs and modifiers. Anything that implies action or change is a phenomena-entity, and some examples are damage, movement, unloading, growth, decay and others. They are related through relationships, and have properties (as all entities do) so that rules can be formed about their applicability for these relationships.



A Method for Entity Evaluation

Our method for evaluating entities must, as with concepts, be based in the intended use of the formal ontology. From that starting basis, we can move to the various aspects required for consideration in our evaluation.


  1. First, entities must have properties (more accurately, property exhibiting concepts). The concepts defining entities, and giving them accessible properties, must be apparent and accessible. In fact, for ontological purposes, they must be defined. This is true not only of entity classes, but also of entity instances. It is equally true not only for properties, but also for property values. In the previous section on concepts, we defined the range of concepts of a compound to be all of the concepts that define that compound - in the case of an entity (which has two possible states of existence - as a class and as an instance), this range can exist in several different states. All of them must be explicitly addressable and apparent for the entity component to be evaluated as adequate.

  2. The second consideration in our evaluation is the consideration of all the possible entities (objects, events, and phenomena). Are all of the requirements of communications within the universe of discourse satisfied by the enumerated list of all possible entity classes? Are the definitions of the entity instances sufficient to accommodate the needs of the universe of discourse?


  3. The final consideration is the depth of definition that the ranges of concepts provide in defining the entities. Are the entities defined to enough (and not too much) detail to afford the sorts of use they will be put to in the universe of discourse? If we have a universe of discourse that is discussing the movement of cargo through a supply system, then it is necessary for the entity "truck" to have the concepts of capacity, reliability, speed, ease of use, etc., as property exhibiting concepts within the formal ontology. The concept of "what color is the seat inside the truck" is probably too much detail, but the concept of "how easy is the cargo bed to access" might be necessary.


Entities within a Fractal Ontology

A word about this final consideration is in order. There exists the idea of a fractal ontology (or "fractology", as a colleague has suggested recently), which implies that the level of examination of the entities and relationships within the formal ontology might change in resolution, depending on the use it is being put to. To support this sort of idea, the attendant concepts of entities must exist to appropriately support the highest and lowest resolutions of consideration, and all levels in between that may be adopted. At that point, we have a formal ontology that has a dynamic resolution, but if the properties and concepts existing at those different levels are compounds or components of each other as the scale of consideration shifts, then the formal ontology becomes a fractal ontology.

Earlier article on Entities

Tags:

Components of Ontology: Concepts

Earlier, in Evaluation of the C2IEDM as an Interoperability-Enabling Ontology we defined a concept as "anything that has addressable properties". Although this is a loose definition, it fits our usage. Within our formal ontology, concepts exist for several purposes - giving definition and identity to other concepts (which may be compounded of several entailing concepts), giving definition and identity to entities, and finally giving definition and identity to relationships. As these three purposes are very similar (all involve the endowment of definition and identity), they should be considered as part of a class. We will call members of that class compounds. Compounds are defined explicitly as any of the components within our ontology that are composed of several concepts in identity. This includes all of our ontology, with the exception of atomic concepts, and of course the rules (which are not composed of concepts, but are instead applied against concepts and compounds of concepts).

We have earlier mentioned that the various characteristics of concepts are defined as "properties". This is true, but when examined further it is easily seen that these "properties" are actually other concepts. To continue, without confusing the earlier term, we will call these concepts "property exhibiting concepts", or "properties" for short.

It is important to see that although a concept that exhibits a property might be attached to a compound, it is not necessary that the property have the same “value” at all times. For instance, if the concept “location” were to be correctly attributed to an entity, that concept is always linked to that entity, although the value of that concept may change, over time. This changing aspect of a “property exhibiting concept” is called, for convenience, a property value.

The Dimensions of Concepts

For evaluating the "concepts" component of a formal ontology, it becomes helpful to think of concepts in two dimensions.

  1. First, each concept has a horizontal applicability, which consists of all the compounds to which a particular concept can possibly belong. This, we call the domain of a concept. The domain of a concept also contains all of the possible property values that it might exhibit. Very specific concepts will have a small domain (they apply to only a small number of compounds). Broadly applicable concepts will have a very large domain. As an example of this idea, think of the concept "red coloring". There are many, many red things, so red is a concept with a very large domain. On the other hand, consider the concept "comprised of antimony". There are not too many things that we can think of that are comprised of antimony, hence that concept has a small domain.


  2. Second, each concept has a vertical applicability, where the concept and a collection of other concepts together define a compound. This we consider the range of concepts that define a compound. The range of concepts that a compound has also contains all of the possible property values that the concepts in that range might have in relation to the compound. Any compound that is even moderately complex, and is non-trivially defined, will have a large range (meaning, that it will take a large number of defining concepts to describe all of the aspects of such a compound). A compound that is either non-complex, or defined in a non-complex manner, will have a small range of concepts defining it.
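
Both dimensions can be sketched from a single table that assigns concepts to compounds; the entries below are invented, but they show the domain of a concept and the range of a compound as two views of that one assignment:

    # Each compound maps to the set of concepts that define it (invented entries).
    CONCEPT_ASSIGNMENTS = {
        "fire_truck": {"red_coloring", "wheeled", "carries_water"},
        "apple":      {"red_coloring", "edible"},
        "stibnite":   {"comprised_of_antimony", "mineral"},
    }

    def domain(concept):
        """Horizontal applicability: every compound the concept belongs to."""
        return {c for c, concepts in CONCEPT_ASSIGNMENTS.items() if concept in concepts}

    def concept_range(compound):
        """Vertical applicability: every concept defining the compound."""
        return CONCEPT_ASSIGNMENTS[compound]

    print(domain("red_coloring"))           # {'fire_truck', 'apple'} (large domain)
    print(domain("comprised_of_antimony"))  # {'stibnite'}            (small domain)
    print(concept_range("fire_truck"))      # the compound's range of concepts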


A Method for Concept Evaluation

For this component (concepts) of a formal ontology to be evaluated for completeness, both aspects have to be considered.

  1. First, the domain of concepts must be considered. There should be a mix of both general concepts (those with a large domain) and specific concepts (those with a small domain) for an ontology to be effective. If there are too many concepts that have a very small domain, then it will be difficult to compose rules for the formation of relationships between entities (all of the rules will be based on very small concept domains, and therefore be very specific in nature, and not easy to quantify or analyze). On the other hand, if there are not enough concepts with a small domain, then it might become difficult to identify very specific entities; composing rules becomes very easy, but very difficult to apply with precision.


    As an example of this, suppose that the only concepts describing the size of vehicles were these: "Motorcycle sized, or smaller" and "Larger than a motorcycle". With this simple set of concepts describing the characteristic size, it becomes very difficult to determine rules about relating a "vehicle" with "the ability to cross a certain bridge". As the number of vehicles "larger than a motorcycle" is very large, and there is a great range of weights and widths among those vehicles, it can be seen that basing "bridge crossing ability" rules on this concept (with any sort of precision) is not possible without further subdivision of the size concept.


  2. Second, the ranges of concepts have to be considered. We have described the range of concepts that an entity can have in two broad terms - those concepts exhibiting internal properties, and those exhibiting external properties. These two terms can apply to all compounds, but they are of particular interest in the area of entities. The terms deserve greater explanation, though they are easy enough to define. Internal properties are those properties that give the compound self-identity. External properties are those properties that define how the entity affects, and is affected by, other entities within the ontology (via relationships).


For an example showing the applicability of internal and external properties, consider the entity granny smith apple. A granny smith apple is an entity that has (among others) the internal properties of being the fruit of a certain tree, having height and weight within a certain range, having a certain color and taste, etc. It also has a number of external properties, such as being an edible source of nutrition for herbivores and omnivores. If we have the n-ary tuple -

school boy => eating => granny smith apple


We see the entity school boy related, through the entity eating, to the entity granny smith apple. As granny smith apple has the external property listed above, school boy has the internal characteristic of omnivore, and the entity eating has the property of being an act describing the ingesting of edibles, it becomes clear how all these properties work together to allow for the ontological description of a school boy eating a granny smith apple.
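
A rough sketch of that rule in Python - the property names are invented stand-ins for the internal and external properties described above:

    # Invented internal/external properties for the three entities in the tuple.
    ENTITIES = {
        "school_boy": {"internal": {"omnivore"}, "external": set()},
        "granny_smith_apple": {
            "internal": {"fruit", "green_coloring"},
            "external": {"edible_for_omnivores"},
        },
        "eating": {"internal": {"act_of_ingesting_edibles"}, "external": set()},
    }

    def tuple_is_licensed(subject, phenomenon, obj):
        """The rule: an omnivore may be related through eating to something edible."""
        return ("omnivore" in ENTITIES[subject]["internal"]
                and "act_of_ingesting_edibles" in ENTITIES[phenomenon]["internal"]
                and "edible_for_omnivores" in ENTITIES[obj]["external"])

    print(tuple_is_licensed("school_boy", "eating", "granny_smith_apple"))  # True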

In evaluating the ranges of concepts that exist for the compounds of a formal ontology, it is important to understand the purpose to which the ontological description will be put. If it is to support all possible communications within a universe of discourse, then the sorts of interactions between entities, defined by the possible relationships and the rules governing their application, must be considered. At that point, each range of concepts must be evaluated to ensure that it has sufficient coverage of external properties to accommodate all of the possibly required relationships between entities. Secondly, for each level of detail resolution at which a formal ontology requires its entities to be considered, the range of concepts must contain sufficient internal properties for the entities to be considered adequately at that level. If a formal ontology is to support the consideration of entities at several (or many) different levels of resolution, then the requirements of the range of concepts for the entities supporting those levels of resolution must be present.

It is apparent, now, that the evaluation of the concepts of a formal ontology is based heavily in the use that the ontology is to serve within the universe of discourse. This is unavoidable, unless the entire domain and range of all possible concepts were to be described as part of the formal ontology. We also see the basing of evaluation within the realm of "intended use" as being a good thing - it allows an application of detail to exist where it is needed.

Earlier article on concepts

Tags:

The concept of an agglutinative language

I have been considering the possibility of an Agglutinative Language. Some examples from linguistics would be German, Dutch, or Esperanto.

Can the same concept, whereby entities are created as needed from existing entities, apply to data engineering? Is there a way for data entities to (with agility) combine and form new data entities? My gut feeling tells me that if there is an understood (unambiguous) set of metadata describing each data element, then there should be no problem in combining several together to form complex entities. This is what I am proposing can be done in a formal ontology: a collection of concepts brought together to form a compound (complex concept, entity, or relationship).
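
As a very rough sketch of what I mean (the element names and metadata keys are invented), agglutination could be little more than composing existing, unambiguously described elements into a new compound definition:

    # Atomic data elements, each with unambiguous descriptive metadata.
    ELEMENTS = {
        "latitude":  {"unit": "degrees", "datatype": float},
        "longitude": {"unit": "degrees", "datatype": float},
        "timestamp": {"unit": "iso8601", "datatype": str},
    }

    def agglutinate(name, element_names):
        """Combine existing elements into a new compound entity definition."""
        missing = [e for e in element_names if e not in ELEMENTS]
        if missing:
            raise ValueError(f"unknown elements: {missing}")
        return {"name": name, "components": {e: ELEMENTS[e] for e in element_names}}

    position_report = agglutinate("position_report",
                                  ["latitude", "longitude", "timestamp"])
    print(list(position_report["components"]))  # ['latitude', 'longitude', 'timestamp']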

The idea of class inheritance intrigues me here.

Further thought warranted . . .

Tags: agglutinative language, ontology, linguistics

Human-Robot Interaction Conference

Human-Robot Interaction Conference

They have a track on Cognitive Science, and a track on Cognitive Modeling. Perhaps it would be of benefit to take a look.

13 July 2005

XML Topic Maps 1.0

XML Topic Maps (XTM) 1.0

Nice website with the standards and a good explanation of XML topic maps. Groovy.

Tags: xml, topic maps, taxonomy

12 July 2005

Advances in Modal Logic - volume 4

AiML: Volume 4

Very interesting; it could possibly be used to form the basis of a temporal ontological description, or at least a temporal predicate logic system.

Tags: ontology, modal logic, predicate logic

11 July 2005

Formal Ontology = Taxonomy + Knowledge Base + Grammar + ??

If you take the ideas behind a taxonomy (taxonomical classes, hypernyms, hyponyms, related ideas, hierarchy of concepts), plus the components of a knowledge base (a long list of enumerations satisfying the various classes of the taxonomy), plus a grammar (rules defining morphology, syntax, and semantics) you have a good start as to what an ontology is. But what is missing?

My current working definition for the missing part is this: The explicit and rigorous definition of the elements of the taxonomy, the knowledge base, and the grammar, as well as rules defining the relationships among the various components.

Tags: , , ,

10 July 2005

Handbook of Semantic Restructuring

Handbook of Semantic Restructuring

Definitely calls for some further study. Just when I thought I had a handle on generative grammars, generative semantics, and generative conceptualizations . . . what's next, talking dogs?

09 July 2005

CETIS-The semantic web: How RDF will change learning technology standards

08 July 2005

Dave Beckett's Resource Description Framework Resource Guide

Dave Beckett's Resource Description Framework (RDF) Resource Guide

Very nice - loads of links (take a look) and documents, papers, discussion boards, etc etc etc

Tags: semantic web, rdf

MindRaider - Semantic Web Outliner

MindRaider - Semantic Web Outliner

Darn - the more I look at it, the more I like it. Maybe I should just learn to live with the $0 price tag....

Tags: semantic web

The semantic web

The semantic web

Interesting (if slightly dated) article from the UK newspaper, the Guardian. It has some very nice things to say about the Semantic Web, and also about some other future-looking projects.

Tags: semantic web

Practical RDF

Practical RDF

Shelley Powers' website about RDF. She wrote the book on practical RDF, and has a nice blog discussing a number of different related issues.

I like RDF, but I think that I like the additions to OWL as well.

I'm just not settled yet on my decision as to which flavor of OWL. I mean, I really enjoy the idea of having executable ontological information (as in OWL-Lite, or even OWL-DL), but I also like the idea of having a full-blown ontological description of a system done using OWL-Full.

Tags: rdf, OWL

Metadata Interoperability

Metadata Interoperability

Nice to see the Library community, with its own struggles in the area of metadata, is potentially having an effect on the semantic web.

Tags: interoperability, metadata

05 July 2005

What is an ontology and why we need it

What is an ontology and why we need it

Nice paper by Noy and McGuinness - it is on the Stanford Protege site, so of course it presents the different components of ontology in terms of the Protege structures.

Still, good stuff, and helpful to me for delimiting concepts, entities, and taxonomies/knowledge bases/ontologies.

Tags: ,

The Semantic Conception of Truth

The Semantic Conception of Truth

This paper from Alfred Tarski (Berkeley, 1944) is very interesting in defining semantic concepts, linguistic structure, metalanguage, object language, and a lot of other cool ideas. Good bedtime reading for linguistics geeks. Like me.

Tags: ,

04 July 2005

Semantics of Prostitution

Prostitution

This is a very interesting semantic map of the different concepts that relate to the idea of prostitution.

I like this chart, because it shows how different concepts, and properties of concepts, can be applied to the various entities. And then, the rules based on categorization of those properties drive the relationships. Groovy.

PECULIARITIES OF VAN CYBERSPACE - SEMANTIC WEB

PECULIARITIES OF VAN CYBERSPACE - SEMANTIC WEB

Interesting take on what the Semantic Web is, and where it is going.

Extending UML for Ontology Development

Extending UML for Ontology Development

Very interesting paper; it gives a number of what appear to be very good examples of how to extend the various diagramming schemes of UML to support the definition of a formal ontology.

Tags: ,

03 July 2005

Workshop Proceedings - Core Ontologies in Ontology Engineering

Core Ontologies in Ontology Engineering 2004

A nice collection of papers and presentations from the 2004 Workshop on Core Ontologies in Ontology Engineering.

Mostly presents a selection of sample domain ontologies, and some of the rules for generating and exploiting those ontologies.

Components of Ontology: Entities

Entities, as defined in 05E-SIW-045, are divided up into several possible classes - objects, actions, events (objects with a time component), and phenomena.

[I think that the interesting thing here is phenomena. After all, entities relate to each other through relations, yet phenomena are the objects of relations (or perhaps the subjects) that denote and describe change of state. Whether it is an action, an emotion, an environmental change, or an affectation on the nature of some other entity - phenomena are almost always defined within the context of the effects they bring about on other entities. This intrigues me, from the point of view of phenomena being part of a system. Are they, in essence, each special-case relations that relate an entity to the new state that the phenomena describes? Not sure. I'm not even sure if I have stated that correctly, but it still interests me.]

Again, from the description in the Toulouse paper, entities are comprised of concepts. I find it interesting how, if looked at in an object oriented fashion, the structure of Entities constructed of Concepts allows for some Concepts to be part of many different Entities. What is more interesting (and making for some challenges) is that some entities, if defined within two separate systems that are modeling the same real life concept, might be composed of not only a divergent list of concepts, but possibly a completely separate list of concepts. Ouch. That certainly makes the goal of semantic interoperability much harder to accomplish. It also makes the role of a central referential data model - one that allows for translation between separate ontologies - much more difficult.


Tags: , ,

Vocabulary and Ontology Workshop

First EDOC Workshop on Vocabularies, Ontologies, and Rules for the Enterprise

Looks good - wish I could travel to the Netherlands in September. Sadly, I'll be in Orlando at SIW (most likely)

Semantic Web Science Association

International Semantic Web Science Association

This organization looks very interesting. I have found copies of their journal (at least the first two years) online.

They have an annual symposium - this year's is in Galway, Ireland. I wish I could find the $$ to attend. I am thinking seriously about submitting a paper for next year (2006), as it will be in Georgia.

Tags:

02 July 2005

Agent communication

Jacques Ferber has this to say when defining an agent . . .

An agent is a physical or virtual entity. . .
  1. which is capable of acting in an environment.
  2. which can communicate directly with other agents.
  3. which is driven by a set of tendencies (in the form of individual objectives or of a satisfaction/survival function which it tries to optimize).
  4. which possesses resources of its own.
  5. which is capable of perceiving its environment (but to a limited extent).
  6. which has only a partial representation of its environment (and perhaps none at all).
  7. which possesses skills and can offer services.
  8. which may be able to reproduce itself.
  9. whose behaviour tends towards satisfying its objectives, taking account of the resources and skills available to it and depending on its perception, its representation and the communications it receives.

The part that gets me interested, of course, is item number 2 - the ability to communicate with other agents. Does this imply only agents that communicate using the same system? Or should there be, in an environment supporting the agile combination of agents from a variety of sources, the means for semantic communication between agents using a general system?

I'm betting that somewhere down the road, someone will be interested in the latter. I sure hope so.


Tags: , ,

What are the differences between a vocabulary, a taxonomy, a thesaurus, an ontology, and a meta-model?

What are the differences between a vocabulary, a taxonomy, a thesaurus, an ontology, and a meta-model?

From the website www.metamodel.com, this is a nice concise little article on defining and giving the borders between these five important concepts.

There is a nice comment (from Michael Uschold of Boeing) that summarizes the commonalities, as well as the differences. Good commenting. But then again, Mr. Uschold has quite a nice name going for himself in the Semantics and Semantic web community. See this paper for a good example.

Tags: , , , ,

Singularity Institute for Artificial Intelligence

Singularity Institute for Artificial Intelligence

AI is certainly a driver for the ability to semantically mark and understand data, and also for conceptual interoperability (is it probable that an artificial intelligence system will be one monolithic system? or is it more likely to be a system of systems?).

The singularity folks propose that a singularity event will occur in the next few decades - a time when mankind is capable of artificially creating intelligence greater than his own (through augmentation, or original creation). I'm not quite sure what I think of this yet, more thoughts to follow . . .

Tags:

Metadata? Thesauri? Taxonomies? Topic Maps!

Metadata? Thesauri? Taxonomies? Topic Maps!

This is a great introduction to the concept of topic maps, and how they fit in with other sorts of digital marking for the purpose of capturing an ontological description of information.

As I see it from my reading, I understand a topic map to be similar to a formal ontological description, with a few main exceptions:

  • Topic Maps (TM) are largely visually oriented in presentation
  • TM are more concerned with just the taxonomy
  • Formal Ontologies (FO) are more concerned (than TM) with capturing information about properties of concepts, and rules governing relationships
  • TM seem to be much less rigorous than FO


Tags: ,

Semantic Tagging, an extension to a Group's Thesaurus

Semantic Tagging, an extension to a Group's Thesaurus

Interesting look at how Haiko Hebig is playing with the idea of "facets" on tags.

A facet is linked to the term in the tag, to give context or a sense of perspective to the tag.

For instance, take the tag-word "fish". As it is interpreted by different perspectives, it can mean different things (and necessarily has different semantic lineage). If you mean "fish" as an animal, that is one view. If you mean "fish" as a food, then that is another. If you mean "fish" as a verb, then that is something totally different.
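
A tiny sketch of the idea, with invented facet names and items - the facet rides along with the tag and narrows its sense:

    # Each item carries a (tag, facet) pair; the facet gives the tag its perspective.
    TAGGED_ITEMS = [
        {"title": "Trout photos",          "tag": ("fish", "animal")},
        {"title": "Grilled salmon recipe", "tag": ("fish", "food")},
        {"title": "Fly casting tips",      "tag": ("fish", "verb")},
    ]

    def items_for(tag, facet=None):
        """Return item titles matching the tag, optionally narrowed by facet."""
        return [i["title"] for i in TAGGED_ITEMS
                if i["tag"][0] == tag and (facet is None or i["tag"][1] == facet)]

    print(items_for("fish"))          # all three, different senses
    print(items_for("fish", "food"))  # ['Grilled salmon recipe']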

Tags: , ,