Based on the maximal compatible classes induced by a compatibility relation, the present dissertation builds several new extended rough set models using granules such as meet granules and join granules, defines new upper and lower approximations and precision measures, explores their interactions, and designs algorithms for computing these granules and the corresponding upper and lower approximations.

It studies an improved variable limited tolerance rough set model and its rule mining, proposes the concepts of attribute dependency and dependency degree in incomplete information systems, and presents a new calculation formula for the dependency degree.

It discusses the relationships between the tolerance relation and the conflict relation, as well as those between complete coverings and general coverings, together with their properties; obtains necessary and sufficient conditions for a general covering to become a complete covering; and designs three algorithms for conflict-free set partitioning, which provide a valuable reference for optimal set partitioning.

It proposes the concepts of weak consistency and of limited default and definite decision rules, and discusses the conditions under which rules can be reduced to optimal form using discernibility matrices and discernibility functions, enriching the diversity of decision rule formulations.

It derives updating formulas for the cases where an attribute is removed from or added to the system under the tolerance relation, and proposes an incremental learning approach for decision rules, including limited default and definite decision rules.

Using decompositions of the different upper and lower approximations, in the sense of the granule concepts proposed here, under the dominance relation, it implements document query expansion and thereby enhances the flexibility of information retrieval.

It investigates target identification approaches in incomplete data fusion systems by combining granules, rough sets, and Dempster-Shafer evidence theory, establishing several mass assignment functions, belief functions, plausibility functions, and combination formulas; this supplies a free dimension of choice for determining belief intervals.

Finally, it analyses the time complexity of the algorithms designed in the thesis and verifies the validity and feasibility of each study presented here.
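As background for the tolerance-based models discussed above, the following is a minimal sketch of the standard tolerance relation on incomplete information systems (two objects are tolerant when they agree on every attribute for which neither value is missing), together with the classical lower and upper approximations built from tolerance classes. The data layout and function names are illustrative, not the dissertation's own constructions:

```python
# Illustrative sketch: tolerance relation for incomplete data
# (missing values, encoded as None, match anything) and the rough
# lower/upper approximations derived from tolerance classes.
MISSING = None

def tolerant(x, y):
    """x, y: attribute-value tuples; a missing value matches any value."""
    return all(a == b or a is MISSING or b is MISSING for a, b in zip(x, y))

def tolerance_class(obj, universe, table):
    """All objects tolerant with `obj` (always contains `obj` itself)."""
    return {o for o in universe if tolerant(table[obj], table[o])}

def lower_approx(target, universe, table):
    # objects whose entire tolerance class lies inside the target set
    return {o for o in universe if tolerance_class(o, universe, table) <= target}

def upper_approx(target, universe, table):
    # objects whose tolerance class meets the target set
    return {o for o in universe if tolerance_class(o, universe, table) & target}
```

For example, with `table = {1: (0, 1), 2: (0, None), 3: (1, 1)}` and target set `{1, 3}`, the lower approximation is `{3}` and the upper approximation is the whole universe, so the target is rough with a nonempty boundary.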
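The classical rough-set dependency degree, which the dissertation's new formula for incomplete systems generalizes, can be sketched as follows. This reproduces only the textbook definition gamma(C, D) = |POS_C(D)| / |U|, not the dissertation's new formula:

```python
# Sketch of the classical dependency degree of decision attributes D
# on condition attributes C: the fraction of the universe whose
# condition class falls wholly inside a single decision class.
def dependency_degree(condition_classes, decision_classes, universe):
    pos = set()  # positive region POS_C(D)
    for c in condition_classes:
        if any(c <= d for d in decision_classes):
            pos |= c
    return len(pos) / len(universe)
```

For instance, with condition classes `[{1, 2}, {3}, {4}]` and decision classes `[{1, 3}, {2, 4}]` over a four-object universe, only objects 3 and 4 lie in the positive region, giving a dependency degree of 0.5.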
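The Dempster-Shafer machinery used for target identification rests on mass functions combined by Dempster's rule, with belief and plausibility bounding a belief interval. A minimal sketch of these standard definitions (the dissertation's own mass assignment and merging formulas are not reproduced here):

```python
# Sketch of Dempster's rule of combination plus belief/plausibility.
# Mass functions are dicts mapping frozenset hypotheses to masses.
from itertools import product

def combine(m1, m2):
    """Combine two mass functions, renormalizing away the conflict mass."""
    combined = {}
    conflict = 0.0
    for (a, p), (b, q) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + p * q
        else:
            conflict += p * q  # mass assigned to the empty set
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}

def belief(m, hypothesis):
    # total mass of all subsets of the hypothesis
    return sum(v for s, v in m.items() if s <= hypothesis)

def plausibility(m, hypothesis):
    # total mass of all sets intersecting the hypothesis
    return sum(v for s, v in m.items() if s & hypothesis)
```

The interval [belief(m, H), plausibility(m, H)] is the belief interval for hypothesis H; the abstract's "free dimension of choice" refers to how such intervals may be fixed in the fusion system.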