International Journal
INFORMATION THEORIES & APPLICATIONS ISSN 1310-0513
Volume 10 / 2003, Number 2
Editor in chief: Krassimir Markov (Bulgaria)

International Editorial Staff
Chairman: Victor Gladun (Ukraine)
Adil Timofeev (Russia), Aleksey Voloshin (Ukraine), Alexander Eremeev (Russia), Alexander Kleshchev (Russia), Alexander Kuzemin (Ukraine), Alexander Palagin (Ukraine), Anatoliy Shevchenko (Ukraine), Arkady Zakrevskij (Belarus), Avram Eskenazi (Bulgaria), Boicho Kokinov (Bulgaria), Constantine Gaindric (Moldavia), Dimitar Shishkov (Bulgaria), Galina Rybina (Russia), Ilia Mitov (Bulgaria), Krassimira Ivanova (Bulgaria), Larissa Zainutdinova (Russia), Neonila Vashchenko (Ukraine), Nikolay Zagorujko (Russia), Peter Barnev (Bulgaria), Peter Stanchev (Bulgaria), Rumyana Kirkova (Bulgaria), Tatyana Gavrilova (Russia), Valery Koval (Ukraine), Vitaliy Lozovskiy (Ukraine), Vladimir Jotsov (Bulgaria), Zinoviy Rabinovich (Ukraine)

IJ ITA is the official publisher of the scientific papers of the members of the Association of Developers and Users of Intellectualized Systems (ADUIS).
IJ ITA welcomes scientific papers connected with any information theory or its application.
Original and non-standard ideas will be published with preference.
Papers must be written in English.
Responsibility for papers published in IJ ITA belongs to authors.
Please get permission to reprint any copyrighted material before you send it to IJ ITA.
IJ ITA rules for preparing the manuscripts are compulsory.
The rules for the papers for IJ ITA, as well as the subscription fees, are given at www.foibg.com/ijita . The camera-ready copy of the paper should be sent by e-mail to [email protected]
International Journal “INFORMATION THEORIES & APPLICATIONS”
Vol.10, Number 2, 2003 Printed in Bulgaria
Edited by the Institute of Information Theories and Applications FOI ITHEA, Bulgaria Publisher: FOI-COMMERCE - Sofia, 1000, P.O.B. 775, Bulgaria
www.foibg.com e-mail: [email protected]
® "Information Theories and Applications" is a trademark of Krassimir Markov Copyright © 2003 FOI-COMMERCE, Publisher
Copyright © 2003 For all authors in the issue.
All rights reserved. ISSN 1310-0513
SELECTION OF THEMATIC NL-KNOWLEDGE FROM THE INTERNET
V.Gladun, A.Tkachev, V.Velichko, N.Vashchenko
Abstract: The paper deals with methods of selecting from the INTERNET natural-language textual fragments relevant to a given theme. Relevance is estimated on the basis of semantic analysis of sentences.
Syntactic and semantic connections between words of the text are recognized by analyzing combinations of inflections and prepositions, without using the categories and rules of traditional grammar.
Selection of thematic information from the INTERNET is organized cyclically, with automatic formation of a new key at every cycle of addressing the INTERNET.
Keywords: semantic analysis, information search, INTERNET.
1. The purposes and base ideas
Among the various practical uses of repositories of textual information, the need to find information having thematic unity prevails. Such are the needs of a scientist, a journalist, a politician, an official, a writer, a student. Usually the theme arises as one or several concepts, an initial situation having a number of blank valences and situational roles which serve as reference points for the search of new relevant information. The new information gives rise to new directions of search. This complex, sometimes psychologically painful, creative process requires automated support. Thematic search involves laborious work with the texts stored in libraries, archives, the INTERNET, and textual databases. The difficulty of this work consists, in particular, in the necessity to select not whole texts but the fragments of texts relevant to the theme.
The contents of many texts interlace a number of themes. Thus, the problem arises of searching inside textual documents for fragments relevant to a given theme.
The paper considers methods, software and results of selection of thematic textual information.
The research presented in the paper continues the works published in [1-3].
Solving the problem unites the following actions:
1) selection of texts or fragments of texts relevant to the investigated theme;
2) selection, from the relevant information, of the most important part, first of all that which defines and connects the most essential terminology of the theme;
3) representation of the chosen information in a user-friendly form.
Implementation of the specified actions is based on the following ideas:
1) to focus the technique of selection of thematic textual information on the INTERNET, as the most complete repository of textual data;
2) to combine search by key words with the semantic analysis of NL-texts;
3) to use semantic criteria for selection of the most important thematic information;
4) to organize an automatic cyclic process of key word formation in order to investigate the theme as completely as possible.
2. A technique
The initial stage of thematic information selection consists in searching for textual documents in the INTERNET using a given key. Existing methods of information search in the INTERNET return a lot of "garbage" information unnecessary for the user, whose filtration takes too much time. The way out consists in using semantic criteria that provide selection of the most essential characteristics of the concepts about which the information is gathered.
The offered method is based on the assumption that the most important user information is contained in the kernel constructions of sentences. The term "kernel construction" is used in transformational grammar to designate the simple base judgment by whose transformation the sentence as a whole is formed. In our case the kernel construction consists of a subject, a predicate and a link.
The method represents a cyclically repeating sequence of the following operations:
1. Selection of a given quantity (a parameter) of texts using a key. The set of usable search systems is unlimited. At present the program can use the following search systems: Yandex, Rambler, Meta-Ukraine, Aport, Google.
2. Selection, in the found texts, of the sentences containing the given key.
3. Selection, in the set of sentences chosen in item 2, of the sentences containing kernel constructions. For performing item 3 the natural-language semantic analyzer is used.
4. Formation of n-step expansions of the kernels of the selected sentences. The n-step expansion of the kernel is the part of the sentence containing its kernel, together with the words connected in the tree of dependencies with elements of the kernel by paths whose length does not exceed n. Here n is a user-given parameter. Item 4 is performed on the basis of the semantic analysis of the sentence.
5. Selection, in the set of sentences chosen in item 3, of those sentences in which the n-step expansions of the kernel contain the given key.
6. Formation of a new key on the basis of the analysis of the semantic representations of previously selected sentences. Transfer to item 1.
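Item 4 can be sketched as a breadth-first walk over the dependency tree (an illustrative sketch; representing the tree as (head, dependent) pairs is our assumption, not the authors' actual data structure):

```python
from collections import deque

def n_step_expansion(dep_edges, kernel, n):
    """Collect the n-step expansion of a sentence kernel.

    dep_edges : iterable of (head, dependent) pairs from the sentence's
                dependency tree (hypothetical representation).
    kernel    : set of words forming the kernel (subject, predicate, link).
    n         : maximal path length from a kernel element (user parameter).
    """
    # Treat the dependency tree as an undirected graph for measuring path length.
    adjacent = {}
    for head, dep in dep_edges:
        adjacent.setdefault(head, set()).add(dep)
        adjacent.setdefault(dep, set()).add(head)

    expansion = set(kernel)
    frontier = deque((word, 0) for word in kernel)
    while frontier:
        word, dist = frontier.popleft()
        if dist == n:          # paths longer than n are not part of the expansion
            continue
        for neighbour in adjacent.get(word, ()):
            if neighbour not in expansion:
                expansion.add(neighbour)
                frontier.append((neighbour, dist + 1))
    return expansion
```

Item 5 then reduces to checking whether the key occurs in the returned set of words.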
The initial key word is given by the user. New keys for the subsequent cycles of the algorithm are chosen among terms, i.e. significant words used only within the limits of the investigated domains. The terms are marked in the dictionary.
When choosing a new key, the degree of its relevance to the given theme is taken into account. The relevance is defined on the basis of the results of semantic analysis of sentences. At the following cycle of the algorithm, the term that was not used earlier and has the greatest relevance coefficient is chosen as the key.
After the choice of a new key, actions 1-6 are repeated.
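The whole cycle of operations 1-6 may be sketched as follows (a sketch only; search, analyze and relevance are hypothetical stand-ins for the search engines, the semantic analyzer and the relevance coefficients described in the paper):

```python
def thematic_selection(initial_key, search, analyze, relevance, max_cycles=5):
    """Cyclic selection of thematic sentences with automatic key formation.

    search(key)       -> sentences containing the key (operations 1-2)
    analyze(sentence) -> None if the sentence is rejected (no kernel, or the
                         key lies outside the n-step expansion), otherwise an
                         object with a .terms attribute listing its
                         dictionary-marked terms (operations 3-5)
    relevance(term)   -> relevance coefficient of the term (operation 6)
    """
    used_keys, selected = set(), []
    key = initial_key
    for _ in range(max_cycles):
        used_keys.add(key)
        candidate_terms = {}
        for sentence in search(key):
            result = analyze(sentence)
            if result is None:
                continue
            selected.append(sentence)
            for term in result.terms:
                candidate_terms[term] = max(candidate_terms.get(term, 0.0),
                                            relevance(term))
        # Operation 6: the unused term with the greatest relevance becomes the new key.
        fresh = {t: r for t, r in candidate_terms.items() if t not in used_keys}
        if not fresh:
            break
        key = max(fresh, key=fresh.get)
    return selected
```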
3. The semantic analysis
The basic operation of the semantic analysis of natural-language texts is recognition of the syntactic and semantic relations connecting words of the text. Recognition of relations is carried out on the basis of their descriptions (models). Such models are necessarily present in all methods of analysis, though this is not always obvious. In the majority of analysis methods, the process of recognition of relations is preceded by translation of the initial natural-language representation of the relations to be recognized into the language of categories of traditional grammar (gender, case, tense, etc.). Rules of recognition of syntactic and semantic relations operate with grammatical descriptions of words. Binding to grammatical descriptions of elements of the text results in the following imperfections: heterogeneity of ways of processing separate words and word combinations; bulkiness of processing; complexity of adaptation to changes of lexicon and a user's domain; laboriousness of the research.
Meanwhile, transition to grammatical descriptions is not an obligatory condition for performing the semantic analysis of natural-language texts. The information necessary for recognition of syntactic and semantic relations is contained directly in the text. A proof of this is provided by the "human" processes of analysis of natural-language texts, which are not connected with grammatical categories and rules. Therefore another approach is valid, based on the use of the correspondence between relations and the means of their expression in natural-language texts. Recognition of syntactic and semantic connections between words is carried out by analyzing combinations of inflections and prepositions, without using the categories and rules of traditional grammar. By virtue of its basic features, such an approach allows the imperfections named above to be excluded.
Models of relations in which elements of natural-language texts are used for recognition of syntactic and semantic relations we shall refer to as lexical models of relations. The algorithm of semantic analysis of natural-language sentences on the basis of lexical models of relations is described in [1-3].
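A lexical model of relations can be pictured as a table mapping surface combinations directly to relation names (a toy sketch; the prepositions, endings and relation names below are illustrative and are not the authors' actual lexical models):

```python
# A lexical model: a combination of (preposition, inflection of the dependent
# word) is matched directly with a relation name, with no intermediate
# grammatical categories such as case or gender. All entries are invented
# for illustration.
LEXICAL_MODELS = {
    (None, "a"): "direct-object",   # bare accusative-like ending, no preposition
    ("v",  "e"): "location",        # preposition "v" + prepositional-like ending
    ("iz", "a"): "source",          # preposition "iz" + genitive-like ending
}

def recognize_relation(preposition, inflection):
    """Recognize a syntactic/semantic relation from surface evidence only.

    Returns the relation name, or None if no lexical model matches.
    """
    return LEXICAL_MODELS.get((preposition, inflection))
```

The point of the sketch is that the lookup key is built from the raw text (inflections and prepositions), so adapting to a new lexicon means editing the table, not rewriting grammatical rules.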
4. Implementation and results
The program complex realizing the processes of thematic knowledge formation consists of programs carrying out the following actions:
1. Selection in the INTERNET of textual fragments containing a given key.
2. Formation of semantic representations of sentences (the linguistic processor).
3. Selection of sentences relevant to the theme, on the basis of analysis of the semantic representations of sentences.
4. Choice of a new key.
At present, the lexical data and knowledge bases of the complex are created for the Russian language.
As a result of the work of the program complex, a text is formed which consists of separate sentences relevant to the theme designated by the initial key given by the user. For each sentence, the address of the corresponding document is indicated. The set of sentences selected from one document allows forming a conception of its thematic relevance as a whole. A high level of relevance of the document may induce the user to choose this document for detailed study. The set of all selected sentences throws light on the investigated theme as a whole. The degree of completeness of the selected information on a theme depends on the efficiency of the search machine used and on the quantity of texts chosen in the INTERNET. Experience of exploitation of the complex shows that the set of sentences selected by the program on the basis of the thematic analysis correlates well with the result of "manual" selection of "useful" sentences by an end user. The complex provides a high degree of elimination of information that is unnecessary for the user.
Conclusion
The above-described method of thematic selection of information can be used for information search not only in the INTERNET, but in any textual databases. We also consider it as an instrument for the creation of ontologies. The merit of the method is effective filtration of information on the basis of criteria of relevance to the given theme, which is obtained at the cost of semantic analysis of sentences and a cyclic process of automatic selection of a new key at every cycle. The method allows comparatively simple adaptation to changes of the text language.
References
1. Gladun V.P. Processes of Formation of New Knowledge. Sofia: СД "Педагог", 1994. 192 p. (in Russian).
2. Gladun V.P. Planning of Decisions. Kiev: Наукова думка, 1987. 168 p. (in Russian).
3. Gladun V.P. Natural Language in Purposeful Systems // DIALOG-2000. Applied Problems. 2000, pp. 99-102. (in Russian).
Author information
Victor Gladun - V.M. Glushkov Institute of Cybernetics of NAS of Ukraine, Prospekt akad. Glushkova 40, 03680 Kiev, Ukraine; e-mail: [email protected]
Alexander Tkachev - V.M. Glushkov Institute of Cybernetics of NAS of Ukraine, Prospekt akad. Glushkova 40, 03680 Kiev, Ukraine; e-mail: [email protected]
Vitaly Velichko - V.M. Glushkov Institute of Cybernetics of NAS of Ukraine, Prospekt akad. Glushkova 40, 03680 Kiev, Ukraine; e-mail: [email protected]
Neonila Vashchenko - V.M. Glushkov Institute of Cybernetics of NAS of Ukraine, Prospekt akad. Glushkova 40, 03680 Kiev, Ukraine; e-mail: [email protected]
PROCESSING OF KNOWLEDGE ABOUT OPTIMIZATION OF CLASSICAL OPTIMIZING TRANSFORMATIONS
Irene L. Artemjeva, Margarita A. Knyazeva, Oleg A. Kupnevich
Abstract: The article describes the structure of an ontology model for optimization of a sequential program.
The components of an intellectual modeling system for program optimization are described, and the functions of the intellectual modeling system are defined.
Keywords: Knowledge based system; Program optimization; Domain ontology
Developing knowledge-based systems for any domain requires constructing its ontology [Kleshchev, 2002].
An ontology is an explicit description of domain notions; it contains terms for describing reality and knowledge, and agreements restricting the interpretations of these terms. The ontology of a domain defines the structure of knowledge and the structure of domain reality.
The problems in the "Program optimization" domain mainly group around the unification of the notion system for describing program schemes, in terms of which one could describe optimizing transformations, and the standardization of the notion system for describing optimizing transformations. The other set of problems has to do with how to effectively use the accumulated knowledge in the problem area, i.e. it is connected with special training in program optimization and the development of skills in putting theoretical knowledge about program optimization into practice.
This work presents the ontology model of the knowledge domain "Sequential program optimization" and its use in developing computer knowledge banks on program optimization.
The ontology model of the "sequential program optimization" domain
The formal description of the terminology of a knowledge domain, together with the definition of the meanings of its terms, is called an "ontology model" [Kleshchev, 2001]. The terms of the knowledge domain "Program optimization (classical optimizing transformations)" can be divided into two groups: (i) the terms for describing programs (the terms of this group will be called the terms for describing the optimization objects), and (ii) the terms for describing the optimization process. Therefore the ontology model of this knowledge domain also consists of two parts.
The optimization object is a program. The characteristics of a program are always formulated in terms of a mathematical model of this program. One characteristic of a program is the language (a set of programs) this program belongs to. The characteristics of the language are also formulated in terms of a mathematical model of this language. Thus, the terms for describing the object of optimization can be divided into two groups: (i) the terms for describing the language model, and (ii) the terms for describing the program model.
Before optimizing a program, the language this program belongs to must be determined. Consequently, the terms for describing the language model are parameters of the ontology model, and the terms for describing the program model are the unknowns of the ontology model. Then the language model is represented by the values of the parameters, and the program model by the values of the unknowns.
It is evident that the program model describes not one program but a set of programs that have the same characteristics; likewise, the language model describes a set of languages that have the same characteristics.
Therefore the following requirements are imposed on the language model: it must allow presenting the basic constructs of imperative programming languages essential for describing sequential OTs; it must be flexible and extensible in order to expand the class of modeled programs, if necessary; and the form of presenting program models must be convenient for analyzing both information flows and control flows in the program.
The ontology model of the knowledge domain "Sequential program optimization" consists of two modules. The first module contains the terms for describing the optimization object, the second one – the terms for describing the optimization process.
The first module is an unenriched system of logical relationships with parameters, written in sentences of a many-sorted language of applied logic. Any program consists of fragments that in their turn consist of other fragments [Kasyanov, 1988; Knyazeva, 1999], i.e. any program has a syntactic structure that is reflected by a mathematical model of this program.
Each fragment, as an element of the program, has a number of characteristics. First of all, it has the address of the fragment, the unique characteristic unambiguously defining each fragment in the program.
All the fragments can be divided into three groups: declarative statements, imperative statements, and entries of statements. Each fragment is characterized by its class; e.g. a fragment can have the class of assignment statement, iteration statement, declarative statement of a function, etc. The function FragClass returns the class of a fragment for the indicated fragment address. A set of names of fragment classes (of each group) in the program assigns the values of the parameters Declarative statements, Imperative statements, and Entries of statements.
Control arcs assign the syntactic structure of the program. Each control arc connects two fragments of the program and has a label identifying the type of connection between the fragments. The function whose name coincides with the arc label is used to assign the control arc. The value of the parameter Names of control arcs assigns what labels the control arcs in the program may have. A control arc area is defined for each control arc. This area is assigned by the value of the functional parameter Control arc area, a function that matches each arc label with a pair consisting of two sets of names of fragment classes: the first set determines what fragment classes can be arguments, and the second determines what fragment classes can be results of a control arc with this label.
There are always a number of various identifiers in the program. Identifiers can be of different types, e.g. identifiers of variables, functions, constants, and data types. In the program, the identifiers of each type make up a set whose name coincides with the name of the type. The value of the parameter Types of identifiers assigns what types of identifiers can be present in the program.
Functions and relationships identifying the structure of the program and some of its characteristics are defined on a set of fragments and identifiers of the program.
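The program-model structure described above may be sketched as a data structure (a minimal sketch: the name FragClass is taken from the text, while the class and field names are our assumptions, not the authors' implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Fragment:
    address: int        # unique address unambiguously defining the fragment
    frag_class: str     # e.g. "assignment", "iteration", "function declaration"

@dataclass
class ProgramModel:
    fragments: dict = field(default_factory=dict)      # address -> Fragment
    control_arcs: dict = field(default_factory=dict)   # label -> set of (arg, result) address pairs
    identifiers: dict = field(default_factory=dict)    # type name -> set of identifiers

    def frag_class(self, address):
        """Analogue of FragClass: the class of the fragment at this address."""
        return self.fragments[address].frag_class

    def add_arc(self, label, arg_addr, result_addr):
        """Add a labelled control arc connecting two fragments."""
        self.control_arcs.setdefault(label, set()).add((arg_addr, result_addr))
```

The parameters of the ontology model (Names of control arcs, Control arc area, Types of identifiers, etc.) would then constrain which labels, classes and identifier types may appear in such a structure.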
Functions with one argument (a fragment or an identifier) are called attributes. Each attribute has its name. The value of the parameter Names of attributes assigns what attributes can be used for describing the characteristics of identifiers or fragments in the program. The definitional domain and the range of attribute values are established for each attribute; they are assigned by the values of the functional parameters Definitional domain and Range of attribute values.
The value of the parameter Names of functions assigns what functions with two or more arguments can be used for describing the characteristics of fragments or identifiers of the program. The definitional domain and the range of values are established for each function; they are assigned by the values of the parameters Definitional function domain and Range of function values.
The value of the parameter Names of relationships assigns what relationships can be defined on the set of fragments and identifiers of the program. The value of the parameter Determination of relationship for each relationship name assigns a formula determining the truth of the relationship between fragments and identifiers of the program.
Correctness is another characteristic of the fragment. The value of correctness is assigned by the predicate Correctness, which matches the address of a fragment with truth if the fragment has all the control arcs and attributes necessary for it. The value of the parameter Determination of correctness assigns the control arcs and attributes for each class of fragments.
The value of the parameter Elementary types assigns, for the language, the set of identifiers of data types that are basic for all the other data types of this language.
The value of the parameter Modes of generation assigns the set of names of constructors of data types in the language. The values of two characteristics, a base type and the method of constructing a type from the base one, are set for each identifier of a data type in the program. The value of the first characteristic is assigned by the function BaseType, which matches each identifier of a type with a chain of identifiers of the types used when constructing this data type. The value of the second characteristic is assigned by the function ConstructMethod, which matches each identifier of a type with the name of a constructor. The value of the parameter Compatibility of types is a predicate defining the possibility of implicit transformation of one type to another.
The value of the parameter Reserved names assigns a set of names of roles that can be played by fragments or identifiers of the program. The value of the parameter Area of reserved name value matches each name of a role with a predicate defining the properties of a fragment or an identifier playing this role in the program.
The value of the parameter Operators assigns the set of symbols used in the program for identifying operations in expressions (arithmetic, logical, transformational, etc.). The functional parameters Arity and Priority match each symbol of an operation with its number of arguments and its priority.
Optimization is understood as a chain of steps, at each of which one transformation is applied to the optimized program. Each step will be called a step of the optimization history. The program model written in terms of the language model is the optimization object. The program model changes at each step of optimization: the transformation rule established by the optimizing transformation used at the current step is applied to it. Thus, at each step of optimization there is its own version of the program model, with its own set of fragment addresses, its own set of identifiers, etc. Therefore all the terms defined in the work [Artemjeva, 2002] are functions whose first argument unambiguously identifies the current program model (i.e. the current set of fragment addresses, the current set of identifiers, etc.). The number of a step of the optimization history plays the role of this argument.
Let us define the terms for describing the optimization process. Each optimizing transformation has the following characteristics: the saving block, the context condition, the indicative function, and the application strategy.
Normally an optimizing transformation (OT) is applied not to the whole program but to one of its blocks, not necessarily continuous. A set of program fragments necessary for making a decision about optimization will be called a candidate for saving block, and a set of fragments for which the context condition is true will be called a saving block. Thus the saving block is a candidate for which the context condition of an optimizing transformation is true.
An ordinary saving block is a fixed set of program fragments for which the number of fragments, their types and their mutual location in the program are known. In general, the saving block consists of two parts. The first part is a tuple of fragments in which each element's type and location in the saving block are known.
The second part is a set of tuples of fragments. The number of tuples belonging to the second part of the saving block varies for different saving blocks of one program, but the number of elements of each tuple, and the types and mutual location of the fragments included in it, are fixed; i.e. these tuples of fragments have certain identical characteristics. When an optimizing transformation is applied, all the tuples of fragments of the second part of the saving block change in the same way. For example, for certain optimizing transformations of procedures (functions) the saving block contains both the declarative statement of a procedure and all its invocations. The declarative statement of a procedure is always single, but the invocation operators can be numerous. All the elements of the set of invocations are assigned by the declarative statement of the procedure. Thus, after applying such an optimizing transformation, all operators of invocation of the procedure (function) change in the same way.
The saving block in the model is represented as a pair whose first element will be called the simple part of the saving block and whose second element will be called the multiple part of the saving block. The first element of the pair is a tuple of addresses of fragments; the second element is a set of tuples of addresses of fragments. For the example given above, the fragments of the declarative statement of the function make up the simple part of the saving block, and the set of tuples of fragments of the invocation operators makes up the multiple part.
Each optimizing transformation has simple and multiple parameters. Each fragment of the simple part of the saving block will be called the value of the corresponding simple parameter of the saving block.
The set of fragments included in the set of tuples of fragments of the multiple part of the saving block will be called the value of the multiple parameter of the saving block. The function Number of simple parameters of OT assigns the number of elements of the tuple of the simple part of the saving block. The function Number of multiple parameters of OT assigns the number of elements of each tuple of the multiple part of the saving block. The functions Classes of simple parameters of OT and Classes of multiple parameters of OT assign the chains of classes of fragments forming the simple part of the saving block and the chains of classes of fragments forming each tuple of the multiple part of the saving block.
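The pair representation of the saving block, together with the arity constraints assigned by the functions Number of simple parameters of OT and Number of multiple parameters of OT, may be sketched as follows (an illustrative sketch under the representation just described, not the authors' implementation):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SavingBlock:
    """A saving block as a pair: the simple part (one tuple of fragment
    addresses) and the multiple part (a set of such tuples)."""
    simple_part: tuple        # tuple of fragment addresses, fixed length
    multiple_part: frozenset  # frozenset of tuples of fragment addresses

def check_arity(block, n_simple, n_multiple):
    """Check a block against the numbers of simple and multiple parameters
    that the OT prescribes (Number of simple/multiple parameters of OT)."""
    return (len(block.simple_part) == n_simple
            and all(len(t) == n_multiple for t in block.multiple_part))
```

For the procedure example above, the simple part would hold the addresses of the declaration fragments and the multiple part one tuple per invocation operator.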
The context condition of an optimizing transformation describes the characteristics of the saving block this optimizing transformation is applied to. In this ontology model the context condition is presented as a predicate whose arguments are the number of a step of the optimization history and a candidate for saving block.
There can be several saving blocks at each step of the optimization process. Therefore it is necessary to have a criterion for choosing one block from many. The indicative function (IF) is a formula whose arguments are fragments from the saving block. It matches each saving block with its estimate, a rational number.
The optimization strategy is a formula helping to choose one number from the set of rational numbers that are the results of the indicative function for the various saving blocks. Correspondingly, the saving block with the chosen estimate undergoes optimization at this step of the history.
Parameters with identical context are a set of pairs of numbers of parameters of the saving block before and after optimization whose contexts must coincide.
The chain of application of OTs assigns the order of OT application. Each OT is applied until there are no valid saving blocks left for it.
All the terms defined above are considered as terms for describing knowledge about program optimization: the optimization strategy, the indicative function and the chain of application of OTs allow assigning the parameters of optimization, while the context conditions and transformation formulas assign static knowledge about optimizing transformations.
Let us further define the terms for describing situations of the knowledge domain. These terms are mainly meant for a complete record of the modeled process of application of optimizing transformations to the program.
The situation consists of a chain of optimization steps (history steps). The sets of fragment addresses, identifiers, and values of all attributes, functions and relations in the program are defined for each step. Some fragments, attributes, arcs and identifiers are defined while constructing the program model and are known at the first step of optimization, but some are recomputed before the beginning of each next step with the help of a special function, Enrichment of SMP. The work of this function results in all the DSCH class fragments being added to the program model; the Begin, End, Parent and SCH arcs are defined for them, all relations and functions are defined, and the values of all the attributes from the set of Computable attributes are also computed.
At each history step there is a chain of current OTs. At the beginning of optimization the chain of current OTs coincides with the chain of OT application.
The first OT from the chain of current ones for which the set of candidates for the SB is not empty is called the applied OT, and its number in the application chain is written in Number of Applied OT. A set of estimates is formed for all candidates for SB of the applied OT with the help of the indicative function; one estimate is selected from this set with the help of the optimization strategy and becomes the estimate of the chosen SB. The saving block corresponding to this estimate becomes the chosen SB. Apart from this, the optimized saving block is defined at each step: it is a combination of fragments from the next history step for which the transformation formula becomes true when it is substituted into the formula together with the chosen SB. The optimized block appears as a result of OT application to the chosen saving block.
As a result of an applied optimizing transformation, a number of fragments of the source saving block can be deleted, a number of fragments can be changed, new fragments (with new addresses) can be added, and existing identifiers can be deleted or new ones added. For each optimizing transformation it is known how many elements will be included in the simple part of the optimized saving block and how many elements will be included in each element of the multiple part. This information is assigned by the functions Number of elements of the simple part of optimized SB and Number of elements of the multiple part of optimized SB.
The transformation formula is a predicate whose arguments are two saving blocks from two consecutive history steps; this predicate is true if the second SB is the result of OT application to the first SB.
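The choice of the applied OT and of the chosen SB on one history step can be sketched as follows (all the callables are hypothetical stand-ins for the corresponding ontology-model terms):

```python
def choose_saving_block(current_ots, candidates_of, context_ok, indicative, strategy):
    """Pick the applied OT and the chosen SB on one history step.

    current_ots  : chain of current OTs, in application order
    candidates_of: OT -> candidates for saving blocks on this step
    context_ok   : (ot, candidate) -> bool, the context condition
    indicative   : (ot, saving_block) -> rational estimate
    strategy     : picks one estimate from a collection (e.g. max)
    """
    for number, ot in enumerate(current_ots):
        # Saving blocks: candidates for which the context condition is true.
        blocks = [c for c in candidates_of(ot) if context_ok(ot, c)]
        if not blocks:
            continue                     # this OT has no valid saving block
        estimates = {b: indicative(ot, b) for b in blocks}
        chosen_value = strategy(estimates.values())
        chosen = next(b for b, e in estimates.items() if e == chosen_value)
        return number, ot, chosen        # Number of Applied OT, applied OT, chosen SB
    return None                          # no OT is applicable: optimization stops
```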
Tasks given in terms of ontology model
As follows from the previous section, the ontology model contains groups of parameters defining the Programming Language (PL), the Optimizing Transformation Description Language (OTDL), the Optimizing Transformations (OT) and the Optimization Strategy, and groups of unknowns defining the characteristics of the program before optimization, the complete protocol of the optimization process, and the characteristics of the optimized program. Besides, it is necessary to introduce the parameter Estimating function. This function allows comparing various programs and thus estimating optimization results.
Let us list the main classes of tasks that can be specified in terms of the ontology model:
1. given a PL, an OTDL, an optimizing transformation and a program, it is required to obtain an optimized program and to check the correctness of the OT application;
2. given a program, a PL, an OTDL, a strategy and an estimating function, it is required to obtain an optimized program, estimate its optimality, check the correctness of the OT and the strategy, and analyze the protocol;
3. given a set of programs, a PL, an OTDL, a strategy and an estimating function, it is required to obtain a set of optimized programs and estimates of their optimality, and to study the dependence of a program's optimality estimate on its characteristics;
4. given a set of programs, a PL, an OTDL, a set of strategies and an estimating function, it is required to obtain a set of optimized programs and their optimality estimates, and to study the dependence of a program's optimality estimate on its characteristics and the applied strategy;
5. given a set of programs, a PL, an OTDL, and sets of strategies and estimating functions, it is required to obtain a set of optimized programs and their optimality estimates, and to study the dependence of a program's optimality estimate on its characteristics and the applied strategy for various estimating functions;
6. given a set of programs, a PL, an OTDL, a set of strategies and an optimization criterion, it is required to obtain a set of optimized programs and their optimality estimates, and to study the dependence of a program's optimality estimate on its characteristics, the applied strategy and the programming language.
The list above shows that all these tasks are special cases of one task: given a set of PLs and OTDLs, input programs in these PLs, a set of optimizing transformations, strategies of their application and optimality-estimating functions, it is required to build a set of optimized programs, the protocols of their optimization and a set of optimality estimates for every program at each optimization step.
A method of solving the task of modelling the optimization process
Obviously, solving any of the tasks listed above reduces to solving the following one: with the PL and OTDL fixed, a single input program, a set of OTs, a strategy of their application and an optimality-estimating function are given; it is required to build an optimized program, obtain the protocol of the optimization process and an optimality estimate.
The algorithm to solve the task is given below.
BEGIN
  Step of history = 1
  Current OT(Step of history) = Chain of OT application
  Number of applied OT(Step of history) = 1
  Last step = False
  REPEAT
    First OT application = True
    To analyze SMP(Step of history)
    REPEAT
      IF Not First OT application THEN
        Number of applied OT(Step of history) = Number of applied OT(Step of history) + 1
      Applied OT = Chain of OT application[Number of applied OT(Step of history)]
      Candidates to SB(Step of history) = To find saving blocks(Step of history, Applied OT)
      First OT application = False
    UNTIL (Candidates to SB(Step of history) ≠ ∅) OR
          (Number of applied OT(Step of history) = Length(Chain of OT application))
    IF Candidates to SB(Step of history) ≠ ∅ THEN
      FOR SB in Candidates to SB(Step of history) DO
        SB characteristic(Step of history, SB) =
          To build SB characteristics(Indicative function(Step of history, SB))
      Characteristic of chosen SB(Step of history) =
        To realize Strategy(Strategy of OT(SB characteristic(Step of history)))
      Chosen SB = To choose SB(Step of history, Characteristic of chosen SB(Step of history))
      Optimized SB(Step of history) = To build optimized SB(Step of history, Chosen SB, Applied OT)
      To build new SMP(Step of history + 1), in which there exists the only
        Optimized SB(Step of history) such that Transformation Formula(Step of history,
        Chosen SB(Step of history), Step of history + 1, Optimized SB(Step of history))
        is true and the rest of the context coincides
      Step of history = Step of history + 1
    ELSE
      Last step = True
  UNTIL Last step = True
  Number of Optimization Steps = Step of history
END.
This algorithm is simple and transparent enough to serve as the kernel of an instrumental system for program optimization. However, the functions used in it (To analyze SMP, To find saving blocks, To build SB characteristics, To realize Strategy, To build optimized SB and To build new SMP) are far less obvious and can be considered as separate subtasks.
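For illustration, the driver loop above can be sketched in Python. The concrete analysis routines (finding saving blocks, the indicative function, the strategy, applying an OT) are hypothetical stand-ins passed as parameters, not the actual I_MESPO functions; the program model is treated abstractly as a list of blocks.

```python
def optimize(smp, ot_chain, find_saving_blocks, indicative, strategy, apply_ot):
    """Drive the optimization loop; return the final model and the protocol."""
    history = [smp]          # history of structured program models
    protocol = []            # (applied OT, chosen SB, optimized SB) per step
    step = 0
    while True:
        # Walk the OT application chain until some OT yields candidates.
        applied_ot, candidates = None, []
        for ot in ot_chain:
            candidates = find_saving_blocks(history[step], ot)
            if candidates:
                applied_ot = ot
                break
        if not candidates:   # no OT is applicable: the last step is reached
            break
        # Estimate every candidate SB and let the strategy choose one.
        estimates = {sb: indicative(history[step], sb) for sb in candidates}
        chosen = strategy(estimates)
        optimized = apply_ot(history[step], chosen, applied_ot)
        # New model: the chosen SB is replaced by the optimized SB,
        # the rest of the context coincides.
        history.append([optimized if b == chosen else b for b in history[step]])
        protocol.append((applied_ot, chosen, optimized))
        step += 1
    return history[step], protocol
```

A toy run with a single hypothetical "decrement" transformation shows the loop terminating as soon as no saving block remains.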
The structure of an intelligent system for program optimization
The developed ontology, the tasks stated in its terms and the proposed methods of solving them provide the basis for an instrumental system for program optimization. The Instrumental Modeling Expert System for Program Optimization (I_MESPO) is intended to support the teaching of classical optimizing transformations.
The system allows the user to describe optimizing transformations, to set their application conditions and transformation rules, to form various sets of optimizing transformations, to assign application chains, and to trace the program optimization history.
The input data of the system are knowledge about optimizing transformations and a test program in a high-level algorithmic language.
The result of the system's work is the optimization history protocol, which shows, for each optimization step, which transformation has been applied, which saving blocks have been found at this step, which block has been chosen and what it has been replaced by.
Since this task involves complicated processing of the given knowledge and the generation of new knowledge, the subsystem was implemented as an expert system that models the program optimization process.
The expert system includes a subsystem for visual input of knowledge about program optimization, a translator of the knowledge base description language (the Synthesizer), a translator from a high-level language into the structured program model, an estimating and result-visualizing subsystem, and an integrated shell providing the interface among these subsystems.
Work with the I_MESPO system begins with the researcher defining the system of optimizing transformations: assigning a list of OTs and a chain of their application, and defining for each OT its context conditions, transformation formulas, indicative functions and optimization strategies. After the values of these parameters have been assigned, the Synthesizer translates the knowledge base about program optimization into an executable module, thus creating an application expert system (AES).
The functioning of the created application expert system begins with the work of a translator included in the system, which transforms a structured program written in a high-level language into the structured program model (SPM). The SPM is an internal relational representation of programs that is convenient for analysis and optimization. Using this model, the application expert system makes an inference and builds the optimization history protocol of the program. Once this protocol has been obtained, the estimating and visualizing subsystem makes it possible to estimate the optimality of the program before and after optimization with the help of the assigned estimating function, to compare the high-level texts of the program at each optimization step, and to analyze the changes made.
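As a hedged illustration of what a relational program model might look like, the sketch below stores a toy program as relations (tables) over statement identifiers, so that analyses become simple queries. The relation names, attributes and the liveness-style query are our assumptions for this sketch, not the actual SPM schema.

```python
# A toy "relational" program model: each relation is a plain Python
# dict or set keyed by statement identifiers.
spm = {
    # statement id -> (kind, operands)
    "statements": {1: ("assign", ("x", "1")),
                   2: ("assign", ("y", "x")),
                   3: ("print",  ("y",))},
    # control-flow successor relation
    "succ": {(1, 2), (2, 3)},
    # definition and use relations for data-flow queries
    "defs": {(1, "x"), (2, "y")},
    "uses": {(2, "x"), (3, "y")},
}

def used_later(spm, var, stmt):
    """Is `var` used in any statement reachable after `stmt`? (toy query)"""
    reachable, frontier = set(), {stmt}
    while frontier:
        s = frontier.pop()
        for (a, b) in spm["succ"]:
            if a == s and b not in reachable:
                reachable.add(b)
                frontier.add(b)
    return any(s in reachable for (s, v) in spm["uses"] if v == var)
```

Queries of this kind are exactly what an optimizing transformation's context conditions need when looking for saving blocks.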
Conclusion and Acknowledgements:
Knowledge processing in the field of program optimization makes it possible to use this knowledge in industry, science and education.
Describing optimizing transformations in the terms of one model should facilitate the unification of different transformations within the framework of one system of OTs. Thus, specialists can spend much less effort studying and using optimizing transformations to solve program optimization problems.
The use of this knowledge gives an opportunity to train highly qualified specialists in solving program optimization tasks.
Access to the knowledge through the Internet will attract all specialists interested in exchanging knowledge on this problem.
References:
[Artemjeva, 2002] Artemjeva I.L., Knyazeva M.A., Kupnevich O.A. Domain ontology model for the domain "Sequential program optimization". Part 1. The terms for optimization object description. In Scientific and Technical Information, 2002. (In Russian).
[Kasyanov, 1988] Kasyanov V.N. Optimizing transformations of programs. Moscow: Nauka, 1988.
[Kleshchev, 2001] Kleshchev A.S., Orlov V.A. Requirements on a computer bank of knowledge. In Proceedings of the Pacific Asian Conference on Intelligent Systems 2001, Seoul, Korea, 2001.
[Kleshchev, 2002] Kleshchev A.S., Chernyakhovskaya M.Yu. The present state of computer knowledge processing. http://www.dvo.ru/iacp/es/publ/kpe.htm
[Knyazeva, 1999] Knyazeva M.A., Kupnevich O.A. Expert system for simulation of program optimization. Joint NCC&IIS Bull., Comp. Science, 12 (1999), 24-28.
Author information:
Irene L. Artemjeva: [email protected]
Margarita A. Knyazeva: [email protected]
Oleg A. Kupnevich: [email protected]
Institute for Automation & Control Processes, Far Eastern Branch of the Russian Academy of Sciences, 5 Radio Street, Vladivostok, Russia
A KNOWLEDGE-ORIENTED TECHNOLOGY OF SYSTEM-OBJECTIVE ANALYSIS AND MODELLING OF BUSINESS-SYSTEMS
M. Bondarenko, V. Matorin, S. Matorin, N. Slipchenko, E. Solovyova
Abstract: A new original method and CASE-tool for system analysis and modelling are presented. They are, for the first time, consistent with the requirements of the object-oriented technology of information systems design. They essentially facilitate the construction of models of organisational systems and increase the quality of organisational design and of the basic technological processes of object application development.
Keywords: Knowledge, systemology, natural classification, objects modelling, conceptual knowledge.
The sustainable development of civilization is based on the formation of the informational society as the first stage of the noosphere. Both the transition to the informational society and economic activity within it become based on knowledge. This knowledge represents the "informational resource": it directly influences the material factors of progress and ensures the "phase transition of knowledge into power", i.e. the efficiency of business, production and any administrative decisions.
The notion of a tendency towards knowledge-oriented development of living nature was introduced into scientific practice by V.I. Vernadskiy under the title of "Dana's principle". Knowledge-oriented development should be considered a universal tendency embracing not only biological but all other complicated systems. Social (organizational) and information systems also develop in the direction of an increasing role of knowledge in their sustainable functioning.
This tendency manifests itself in the unprecedented growth of knowledge and scientific information; in the increasing role of inclusive, deep knowledge; in the rapid development of methods and means of knowledge processing and analytical activity; in the acute need for the corresponding experts; and in the influence of informational resources on all sides of human activity. The technologies and methods of acquisition, extraction, representation and processing of knowledge (knowledge management, knowledge engineering) also develop, in their substantial aspect, in the knowledge-oriented direction (data mining, text mining, knowledge discovery, knowledge mining, object modelling, ontological engineering). In the opinion of foreign experts, the development of these directions is hindered by the absence of effective methodologies despite the availability of developed technologies.
The experience and potential accumulated in these spheres show ever more acutely the necessity of taking into account deep knowledge and objective factors, and of a system and simultaneously object approach to the modelling of complicated systems. The role of the genuine human resource, "conceptual knowledge", has grown: it becomes the core of informational resources, knowledge bases and ontology models. A similar necessity has appeared in such new spheres of scientific and practical activity as business process reengineering, decision support and the object paradigm; it has already been expressed in the expediency of defining the organization's mission, accounting for context, and analysing systems first of all from the point of view of their functionality and their correspondence to higher-level requests.
Systemology [1] can become the unique scientific basis of such research. Systemology is a system approach of the new noospheric stage of science development, which replaces the differentiation of sciences characteristic of the analytical paradigm; it is only the second such stage in the whole history of science after the antique one.
Systemology makes it possible to work successfully with complicated systems of the first nature, i.e. not created by humans, and with open systems. At the same time, in contrast to other system approaches, it allows one to consider as a system not only objects but also classes of objects (systems-classes). The development of the systemology of systems-classes has allowed us to synthesize system and classification analysis for solving problems of conceptual modelling of weakly formalized problem areas [2, 3].
For complicated systems of any nature, and at any level of detail, systemology makes it possible:
⎯ to understand the reasons for their origin and the dynamics of their formation and development;
⎯ to define their influence on other systems;
⎯ to explain the outcomes of adaptation and interaction;
⎯ to predict their development in various conditions;
⎯ to draw conclusions about the measures necessary for stable development;
⎯ to prevent crisis situations and to reduce risks;
⎯ to take into account the main properties and priorities.
Systemology genuinely takes into account the system effect: for the first time it considers the system as a qualitatively new essence instead of reducing it to the sum of its component parts. This is ensured by the fact that, for the first time:
⎯ the system is considered as an integral object, not as a set;
⎯ the main properties of a system are explained proceeding from the properties of its super system;
⎯ the system is considered as a functional object;
⎯ the "substance" of a system is taken into account, i.e. the "material" from which it is made;
⎯ the shaping and operation of a system of any level is considered from "above" and is determined by the "request" of its super system.
Systemology represents an exposition, oriented towards methodological use, of the concepts and principles of dialectics, which can be interpreted in terms of any concrete science. Besides, it is the only system approach that agrees at the conceptual level not only with formal logic and the object-oriented ideology, but also with a complex of modern scientific and practical disciplines dealing with the problems of studying and improving organizational systems (the theory of organization, logistics, and business engineering).
Systemological methods can be applied in the cognitive direction of research, which is a major component of knowledge-oriented technologies. This is connected, first of all, with the orientation towards the human, which is now most necessary for maintaining harmonious interaction between computer systems and humans.
The development and application of systemology, together with cognitive methods, in the scientific-educational Knowledge Acquisition Laboratory (NUL PZ) has allowed us to solve the fundamental problem of "natural classification" (NC), posed more than 150 years ago, and to reveal and formalize its regularities and criteria [3]. NC (systematization), as the ideal of classification, is considered a privileged system chosen by nature: it takes into account the essential properties and relations of objects, serves the maximum number of purposes and can form the basis of the most objective models of knowledge, adequate to reality. The features of such a classification were studied by many scientists, because it has the greatest value, cognitive and prognostic force, and forms the basis of the scientific picture of the universe; but only with the help of the systemological approach did we succeed in discovering its laws. The rules of NC can be taken into account in any problem area and allow creating effective methods of knowledge systematization and conceptual classification modelling (systemological classification analysis).
Methods of system analysis and the instrumental program CASE-tools supporting them are widely used at present for solving business, administrative and production problems. However, the methods and means of traditional system-structural analysis (SADT, DFD, BPwin, etc.) that are used for business-process modelling are historically based on the procedure-oriented programming paradigm. Therefore the results of their application cannot be used immediately in the development of object-oriented software.
Most modern program systems, especially large ones, are now created within the framework of the object-oriented approach. However, object-oriented analysis (OOA) and the UML language were originally intended for software development. Therefore they are poorly adapted to solving the problems of business analysis and modelling. At the same time, such problems necessarily arise, especially during the creation of complex program applications. Moreover, the standard process of object-oriented software development (Rational Unified Process, RUP) begins with the technological process of business modelling.
This discrepancy makes the integration of system and object-oriented methodologies topical.
Research in this direction, carried out in NUL PZ, has allowed us to develop a new original system-object (systemological) approach and an object-oriented systemological methodology of analysis and design (OMSAD) [4, 5], resolving the contradiction noted above. The analytic methods and instrumental means of this approach make it possible to automate a considerable part of analytic work and to raise its effectiveness essentially.
Let us consider the basic peculiarities of the system-object approach and the systemological methodology, and also the procedures and possibilities of the new method of system analysis, which is for the first time consistent with the requirements of object-oriented design.
Procedural (functional) system decomposition is peculiar to the traditional system approach (analysis), while object system decomposition is peculiar to the object approach. Specialists both in system analysis and in the object approach consider these decompositions orthogonal. In its turn, the system-object (systemological) approach makes it possible to combine the processes of revealing the functional and the object structure of the analysed system. Thus, the basic peculiarity of the given approach is that it provides the unity of the decomposition of the analysed system along both the functional and the objective (substantive) dimension. This is achieved by considering any system not as a set but as a functional «flowing» object [1, 2]. Granting the system the status of such an object provides a simultaneous account of the structural, functional and substantial aspects of the system's existence.
First, any system is a component part of the system structure of a higher level (the super system), because any system is connected with and cooperates with other systems. Any link between systems is a process of mutual exchange of elements of definite deep layers of the connected systems. A feature of a system is thus understood as a manifestation of its activity of being included in links, in exchange flows with other systems, in the super system structure. Consequently, from the structural point of view a system is a crossroad of incoming and outgoing links (streams), i.e. a unit (node).
Secondly, the functioning (activity, work, behaviour) of any system provides or supports the functioning of the super system, to which this support is necessary. The functioning of the system, as support of the functional ability of the super system, consists in providing the balance of "influx" and "outflow" on the incoming and outgoing links. Consequently, from the functional point of view the system is a function that balances the streams coming into and going out of the system, in accordance with the unit where this system is at the given moment.
Thirdly, any system is not only a unit and a function but also a substance, which plays the role of a definite unit in the structure of the super system and provides its functional balance. Consequently, from the substantial point of view a system is an object realising a function set by a unit in the structure of the super system.
The above reasoning allows any system to be represented as a three-element construction, the UFO-element (figure 1) [6], i.e. at the same time:
⎯ as a structural element of the super system, a unit: a crossroad of relations with other systems;
⎯ as a functional element, a function: performing a definite role of supporting the super system by balancing the given unit;
⎯ as a substantial element, an object: realising the given function in the form of some material formation having constructive, operational and other characteristics.
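The triad above can be expressed, for illustration, as a small data structure: a unit (the sets of incoming and outgoing link types), a function balancing them, and an object's substantial properties. The field names and the balance check are our assumptions for this sketch, not part of the OMSAD formalism.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class UFOElement:
    name: str
    inputs: frozenset        # incoming link types at the unit, e.g. {"V1","E","C"}
    outputs: frozenset       # outgoing link types, e.g. {"V2","D"}
    function: Callable       # maps inflow values to outflow values
    properties: dict = field(default_factory=dict)  # the object's "substance"

    def balanced(self, inflow):
        """The function must yield a value on every outgoing link of the unit."""
        out = self.function(inflow)
        return set(out) == set(self.outputs)

# The substance generator of figure 1: V2 = f1(V1, E, C), D = f2(V1, E).
gen = UFOElement(
    name="substance generator",
    inputs=frozenset({"V1", "E", "C"}),
    outputs=frozenset({"V2", "D"}),
    function=lambda flow: {"V2": flow["V1"] + flow["E"], "D": flow["V1"]},
    properties={"mass": 10, "cost": 100},  # illustrative substantial traits
)
```

The same structure covers all three aspects at once: `inputs`/`outputs` are the unit, `function` the functional aspect, `properties` the substantial one.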
The basic peculiarity of the OMSAD methodology is the formal-semantic adaptive alphabet of UFO-elements, together with the categorical principle used during the analysis and design of systems. The alphabet is a collection of units (crossroads of system links), a collection of functions balancing these units, and a collection of objects realising these functions. For the collection of units we use a facet classification defined by the taxonomic categorical classification of the kinds of system links (figure 2).
The classification of links provides a parametric classification of units and a constructive determination of the symbol semantics of these units. Naturally, the classifications of links and units (and hence of functions and objects) can be specialised to any degree of accuracy for any concrete domain. The use of classifications for forming the alphabetical collection of UFO-elements, and the possibility of specialising them, turns this collection into a formal-semantic adaptive alphabet.
The parametric taxonomic classification of UFO-elements is a classification in which objects are systematized depending on the functions they realize, functions depending on the units they balance, and units are determined by the crossroads of links they represent. This is a conceptual model of the application domain in terms of units, functions and objects, which plays the role of a "categorical" grid through which the analyst looks at the domain. The specialization of such a categorical classification model should be carried out in accordance with the recommendations of the systemological classification analysis offered in [3], directed at the construction of classifications that take into account the properties and regularities of the natural classification. According to these recommendations, a good, natural classification will be obtained during the construction and specialization of the classification if a definite sequence of operations is observed.
Figure 1. The «Unit – Function – Object» approach. The figure depicts: a unit (junction) with incoming links of substance (V1), energy (E) and management (C) and outgoing links of substance (V2) and data (D); a function V2 = f1(V1, E, C), D = f2(V1, E) balancing the «inflow» and «outflow» streams of the links to support the functioning of the super system in whose structure the given unit is; and an object (a substance generator «V2») realising the system as a concrete substance (mass, size, cost, reliability, etc.) in accordance with the functional requirements. The function (or system) is selected in accordance with the unit, i.e. the super system's inquiry; the unit–function and function–object correspondences are «interface – realisation» relations.
Figure 2. Basic taxonomic classification of system links. A link (L) is divided into material (M) and informational (I) links; material links are substantial (V) or energetical (E); informational links are links by data (D) or by control (C); each kind admits further subdivision.
Any units obtained by combining links from the classification can be considered alphabetical elements. However, from the practical point of view it is expedient to consider not all possible combinations, but only those that correspond to actual physical laws (for example, the conservation laws). The point is that energy does not exist without a material bearer, information does not exist without a material bearer, and control does not exist without data transmission. This leads to a relatively small number of variants at the level of the links of the base classification (figure 2). In the tables below we use brief notation for data on a material (VD = D) or power (VED = G) bearer, and for control data on a material (VDC = C) or power (VEDC = Q) bearer. This reflects the fact that, at present, only paper (D and C) and electronic (G and Q) information bearers are in wide use.
The use of the alphabet (libraries) of UFO-elements allows us to formulate rules, naturally following from the systemological approach, for combining these elements into UFO-configurations. We propose to call them the rules of system decomposition:
1. The rule of association: elements should be linked together according to the qualitative and quantitative characteristics of the links inherent in them;
2. The rule of balance: when elements are connected to each other (according to rule 1), the qualitative and quantitative balance of the inflow and outflow of the input and output functional links must be observed at the units of the system structure;
3. The rule of realisation: when elements are connected to each other (according to rules 1 and 2), the accordance of the interfaces and the accordance of the objective and functional characteristics must be observed;
4. The rule of closeness: the internal (supporting) links (streams) of the elements of the system must be reserved.
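For illustration, the first two rules can be sketched as checks over a UFO-configuration. The representation (elements as named records of input/output link types, connections as directed triples) is an assumption made for this sketch, and the balance check assumes a closed configuration with no external links.

```python
def check_association(elements, connections):
    """Rule of association: a connection may only join an output link of one
    element to an input link of another, with the same link type."""
    return all(link in elements[src]["out"] and link in elements[dst]["in"]
               for (src, link, dst) in connections)

def check_balance(elements, connections):
    """Rule of balance: at every element, each declared input and output link
    type must be matched by some connection (inflow/outflow balance).
    Assumes a closed configuration (no links crossing its boundary)."""
    for name, el in elements.items():
        fed = {l for (_, l, dst) in connections if dst == name}
        drained = {l for (src, l, _) in connections if src == name}
        if fed != el["in"] or drained != el["out"]:
            return False
    return True

# Two elements joined by a substantial stream "V" satisfy both rules.
elements = {"source": {"in": set(), "out": {"V"}},
            "sink":   {"in": {"V"}, "out": set()}}
connections = [("source", "V", "sink")]
```

The rules of realisation and closeness would add checks on objective characteristics and on internal streams, which depend on domain-specific element properties.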
The offered alphabet and the named rules form a formal-semantic normative system of systemological analysis and modelling, which is formalised by means of Grenander's pattern theory.
Table 1. Categories of business systems: entries (a production input plus providing streams) and exits (product, informational output, wastes). The mark –''– repeats the value of the cell above.

System                  | Production entry | Providing: Substantial / Energetic / Informational / Administrative | Product exit     | Informational exit | Wastes
Business system         | V, E, D(G), C(Q) | V / E / D(G) / C(Q)                                                 | V, E, D(G), C(Q) | D(G)               | V, E
Production: Substance   | Vin              | Vpr / Epr / D(G)pr / C(Q)pr                                         | Vout             | D(G)out            | Vwst, Ewst
Production: Energy      | Vin, Ein         | –''–                                                                | Eout             | –''–               | –''–
Production: Information | D(G)in           | –''–                                                                | D(G)out, C(Q)out | –''–               | Vwst
Transport: Substance    | V                | –''–                                                                | V                | –''–               | –''–
Transport: Energy       | E                | –''–                                                                | E                | –''–               | Vwst, Ewst
Transport: Information  | D(G), C(Q)       | –''–                                                                | D(G), C(Q)       | –''–               | Vwst
Allocation: Substance   | V                | –''–                                                                | V                | –''–               | –''–
Allocation: Information | D(G)             | –''–                                                                | D(G)             | –''–               | –''–
Besides, the OMSAD methodology uses a categorical principle during the construction of models. This principle postulates the necessity of a prior assignment (definition) of the synthesised (designed) system from a categorical classification of such systems. This principle, in fact, explicitly fixes the common sense used in practical analytic work. The point is that decomposition (analysis) and aggregation (synthesis) procedures can be realised successfully only if they are directed by the final result.
When realising a synthesis operation it is necessary to know at least something about the kind of the synthesised system, and when realising an analysis operation it is necessary to know something about the types of the parts into which the analysed system can be decomposed. Thus, the alphabet mentioned above is a realisation of the categorical principle from the point of view of the system analysis procedure. For solving the problems of modelling and designing organisational systems, the OMSAD methodology uses the system categories represented in table 1.
The experience of the practical use of the systemological methodology has shown that the context model of any organisation (business system), as well as of any of its subdivisions, can be represented as a unit from table 1. For example, workshops and model and tool shops are naturally represented as systems of material production. The department of the chief designer, the office of production technical training, the economic planning department, accountancy, the labour and salary department, the marketing department, etc. are represented as systems of information production. The department of technical control, the department of the chief mechanic, the department of the chief technologist, the provision department, the sales department, the department of technical documentation, etc. are represented as distributive systems.
The formal-semantic normative system of analysis and modelling and the business-system categories can be considered, in particular, as a development of and an addition to, for example, the popular SADT technology. As is well known, this technology grants formal universal possibilities for constructing functional structures of business processes. However, it does not take into account the semantics of the domain and does not give the analyst information about the concrete interactions between the analysed systems and their possible filling. So context modelling and system decomposition with the SADT means are heuristic procedures and have no support at the substantial level from the corresponding CASE-tools (for example, BPwin).
The systemological «Unit – Function – Object» approach and the OMSAD methodology have allowed us to develop a new method of business-system analysis and modelling (UFO-analysis), whose means can be adapted to a concrete data domain, i.e. its semantics can be taken into account [6, 7]. Besides, representing systems by this method as configurations of UFO-elements ensures the concordance of the derived models with the requirements of object-oriented design.
Briefly, the procedures of UFO-analysis can be represented by the following main steps:
⎯ revealing the units of links in the structure of the modelled (designed) system, based on the functional links of the system as a whole, defined by the customer or by the problem being solved;
⎯ revealing the functionality supporting (providing, balancing) the found units;
⎯ determining the objects corresponding to the revealed functionality, i.e. realising it.
A specific peculiarity of this analysis method is the possibility of automating these steps. Automation is achieved by using the formal-semantic adaptive alphabet. At the same time it is necessary to take into account a classification of UFO-elements (a UFO-library) prepared beforehand, which contains elements suitable for the given problem (data domain). In this case the first step may be identified with the stage of system analysis, the second with its design, and the third with its implementation.
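The automated part of these steps, matching a revealed unit against a prepared UFO-library, can be sketched as follows. The library format and the example elements are hypothetical, introduced only to illustrate the lookup.

```python
def match_unit(unit, library):
    """Return the names of library elements whose link interface (incoming
    and outgoing link types) realises the given unit."""
    ins, outs = unit
    return [name for name, (el_in, el_out) in library.items()
            if el_in == ins and el_out == outs]

# A hypothetical UFO-library: element name -> (input links, output links),
# using the link types of figure 2 (V, E, D, C).
library = {
    "lathe":      ({"V", "E", "C"}, {"V", "D"}),  # material production
    "accounting": ({"D", "C"}, {"D"}),            # information production
}
```

Given a unit revealed at the first step of the analysis, such a lookup supplies candidate functions and objects for the second and third steps.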
For the automated application of the UFO-analysis method we have developed the program complex «UFO-toolkit», a CASE-tool that uses a knowledge base of special configuration to provide a component approach to modelling, to use the semantics of the domain and to intellectualise the interaction with the user [7]. The tool is intended for the construction of object and simulation models of complex dynamic (organisational) systems. It has the following features:
⎯ it noticeably reduces the labour intensity of design owing to intensified automation of analytic activity;
⎯ it increases the objectivity of the analysis and the adequacy of the modelling;
⎯ it automates the model creation process through the use of ready-made (alphabetical, library) functional objects, presented in the knowledge base of the tool in the form of UFO-elements;
⎯ it provides «intelligent» interaction with the user, acquainting him with the ready-made components (UFO-elements).
If the alphabetical elements happen to be program objects realised as ready-made classes, then UFO-analysis can be regarded as part of component technologies and of the CORBA business-objects technology (Business Object Facility, BOF). In this case the program CASE-tool automating the UFO-analysis procedures can function within the framework of the component business-objects architecture (Business Object Component Architecture, BOCA). It will then play the role of an organiser (Framework) which, integrating business objects into the functioning system, gives them working places for realising their tasks. If the alphabetical elements are considered as engineering elements, then UFO-analysis conforms to CALS technology.
Thus, the UFO-analysis method develops and concretises the OMSAD methodology. It allows one to use formalised rules for revealing classes and objects of the application domain during the OOA process and to carry out system analysis of events of different nature, considering them as functional flowing objects. Consequently, UFO-analysis can be considered a method of system-objective analysis and modelling.
On the whole, the considered method and tool (UFO-technology) provide the user with:
⎯ objectivity of the procedures of analysis and synthesis of organisational systems;
⎯ economy of man-hours for analysis and modelling, because both procedures come down to constructing a single model;
⎯ simplicity and accessibility of business-process analysis and modelling for specialists without special training;
⎯ a uniform presentation of the external and internal models of the business system, described in the same modelling language;
⎯ ease of adapting models to a concrete domain (taking the domain semantics into account);
⎯ the possibility of creating and using libraries (repositories) of model components for different application fields.
In addition, they have the following merits:
⎯ they reconcile the results of system analysis with the requirements of object-oriented design, which were previously considered orthogonal;
⎯ they make the results of system analysis immediately usable in the creation of object-oriented software;
⎯ they raise the level of formality and automation of the modelling and analysis procedures;
⎯ they guarantee the concordance of all system characteristics by unifying the different aspects of system consideration in one model;
⎯ they facilitate the construction of visual models at different levels of abstraction, representing at the same time the functional and object structure of the system;
⎯ they make it possible to model functional characteristics of a system that have no mathematical interpretation or are hard to interpret by mathematical means, and also to simulate system functioning without any special modelling algorithm.
The presented analysis and modelling technology is used for maintaining information-analytic business systems and provides an essential rise in the effectiveness of their activity. The developed method and program tool substantially facilitate the construction of models of organisational systems and raise the quality of organisational design and of the initial technological processes of developing object applications.
Bibliography
1. Melnikov G.P. Systemology and Linguistic Aspects of Cybernetics. New York, Paris, Montreal, Tokyo, Melbourne: Gordon and Breach, 1988. 440 p.
2. Bondarenko M.F., Matorin S.I., Solovyova E.A. Analysis of systemological tools for conceptual modelling of application fields // Automatic Documentation and Mathematical Linguistics. New York: Allerton Press Inc., 1997. V. 30, No. 2. P. 33-45.
3. Solovyova E.A. Natural Classification: Systemological Foundations (in Russian). Kharkov: KhTURE, 1999. 222 p.
4. Matorin S.I. A new technology of system-object analysis and its application for business-systems modelling // EEJET. 2003. No. 1(1). P. 15-20.
5. Matorin S.I. A New Method of Systemological Analysis Co-ordinated with the Object-Oriented Design Procedure. I // Cybernetics and Systems Analysis. Plenum Publishing Corporation, 2001. V. 37, No. 4. P. 562-572.
6. Matorin S.I. A New Method of Systemological Analysis Co-ordinated with the Object-Oriented Design Procedure. II // Cybernetics and Systems Analysis. Plenum Publishing Corporation, 2002. V. 38, No. 1. P. 100-109.
7. Matorin S.I. Analysis and Modelling of Business Systems: A Systemological Object-Oriented Technology (in Russian) / Ed. by M.F. Bondarenko; foreword by E.V. Popov. ISBN 966-659-049-2. Kharkov: KhTURE, 2002. 322 p.
Author information
Michael Bondarenko – Rector of Kharkov National University of Radioelectronics (KhNURE), professor; Vasiliy Matorin – KhNURE, student; Sergei Matorin – KhNURE, doctoral candidate; Nikolay Slipchenko – KhNURE, head of the scientific department, professor; Ekatherina Solovyova – KhNURE, head of the Knowledge Acquisition Laboratory, professor; 14 Lenin Avenue, Kharkov, Ukraine 61166, [email protected]