First-order theorem proving with large knowledge bases makes it necessary to select the parts of the knowledge base that are relevant to proving the theorem at hand. We propose to extend syntactic axiom-selection procedures like SInE to exploit the semantics of symbol names: not only exact occurrences of symbol names but also similar names are taken into account. To this end we use a similarity measure based on word embeddings such as ConceptNet Numberbatch. We evaluate this similarity-based SInE on problem sets from the TPTP CSR problem class and from Adimen-SUMO, using two very different systems: the HYPER tableau prover and the saturation-based prover E.
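The core idea can be sketched as follows. Everything concrete here is invented for illustration: the three-dimensional toy embeddings, the threshold of 0.7, and the axiom set; real Numberbatch vectors have hundreds of dimensions, and actual SInE additionally weighs symbol rarity, which is omitted.

```python
from math import sqrt

# Toy "word embeddings" for symbol names; in the actual approach these
# would come from ConceptNet Numberbatch (assumption: invented vectors,
# chosen only so that 'car' and 'automobile' end up close).
EMB = {
    "car":        (0.9, 0.1, 0.0),
    "automobile": (0.8, 0.2, 0.1),
    "banana":     (0.0, 0.9, 0.4),
}

def cos(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(a * a for a in v)))

def select_axioms(goal_symbols, axioms, threshold=0.7):
    """Select every axiom containing a symbol whose embedding is
    cosine-similar (>= threshold) to some symbol of the goal.
    Purely syntactic selection corresponds to requiring identical names."""
    selected = []
    for name, symbols in axioms:
        if any(cos(EMB[g], EMB[s]) >= threshold
               for g in goal_symbols for s in symbols
               if g in EMB and s in EMB):
            selected.append(name)
    return selected

# Hypothetical axiom set: (axiom name, symbols occurring in it).
axioms = [("ax1", {"automobile"}), ("ax2", {"banana"})]
print(select_axioms({"car"}, axioms))  # ['ax1'] — similarity links car ~ automobile
```

A purely syntactic selector would pick nothing here, since the goal symbol `car` never occurs in an axiom; the similarity measure bridges the naming gap.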
In this talk I will present ongoing work on computing the inconsistency of a planning problem. Analogous to an inconsistency measure, a function maps the planning problem to a number between 0 and 1. If the planning problem is solvable, and hence consistent, it is mapped to 0; an unsolvable planning problem is regarded as inconsistent. When comparing two inconsistent planning problems, one can turn out to be more inconsistent than the other, so planning problems can be measured as with an inconsistency measure and ordered by their values. The measure can be computed either by stepwise removal of the inconsistency or by calculating directly on the outcome of the search on the planning problem.
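As one illustrative instantiation of such a mapping (my own toy construction, not necessarily the measure presented in the talk): map a STRIPS-style problem to the fraction of goal atoms that are unreachable even under the delete-free relaxation, so every solvable problem gets value 0 and "more unreachable" problems get larger values.

```python
def reachable(init, actions):
    """Delete-free reachability fixpoint: the set of atoms derivable by
    repeatedly applying actions (precondition set, add set) whose
    preconditions already hold."""
    facts = set(init)
    changed = True
    while changed:
        changed = False
        for pre, add in actions:
            if pre <= facts and not add <= facts:
                facts |= add
                changed = True
    return facts

def inconsistency(init, actions, goal):
    """Toy measure in [0, 1]: fraction of goal atoms that are unreachable.
    0 means every goal atom is reachable (a necessary condition for
    solvability); 1 means no goal atom is reachable."""
    facts = reachable(init, actions)
    missed = [g for g in goal if g not in facts]
    return len(missed) / len(goal)

actions = [({"a"}, {"b"}), ({"b"}, {"c"})]
print(inconsistency({"a"}, actions, {"b", "c"}))  # 0.0 — both goals reachable
print(inconsistency({"a"}, actions, {"c", "x"}))  # 0.5 — 'x' is unreachable
```

Note that reachability is only a necessary condition for solvability, so this sketch can under-report inconsistency; it merely illustrates how a graded value in [0, 1] can arise from the outcome of a search on the planning problem.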
This talk will introduce WeST's new DFG project "Open Argument Mining". The intent is to look for potential overlaps with other projects and for places where collected data could be shared. The project's goal is to advance the state of the art in the research fields of argument mining and knowledge-graph construction by (1) implementing a knowledge-aware lifelong-learning approach, (2) aligning incomplete arguments with known ones and enriching them with background knowledge, and (3) automatically acquiring background knowledge by combining contemporary semantic knowledge bases with focused knowledge expansion.
Generative Adversarial Networks (GANs) belong to the family of generative models, which aim to produce samples that follow the probability distribution of the real dataset. The architecture is called adversarial because it trains two neural networks against each other. To make the training process more intuitive, the author of GANs, Ian Goodfellow, compared one network to a counterfeiter trying to create fake money and the other to the police trying to distinguish counterfeit money from real money. The architecture has gained a lot of interest in the computer-vision field, in areas such as image-to-image translation, image super-resolution, and text-to-image generation.
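The counterfeiter/police game corresponds to a minimax objective: the discriminator D ascends on log D(x) + log(1 − D(G(z))) while the generator pulls the other way (in practice G ascends on log D(G(z)), the non-saturating loss). A deliberately tiny sketch of one alternating update, with a one-parameter generator, a logistic discriminator, and hand-derived gradients; all numbers are invented for illustration:

```python
from math import exp

def sigmoid(t):
    return 1.0 / (1.0 + exp(-t))

# Real "data" is the constant 3.0; the counterfeiter G(z) = theta*z (z = 1)
# starts at theta = 0, and the police D(x) = sigmoid(w*x + b) starts
# undecided (w = b = 0, so D outputs 0.5 everywhere).
real, z, lr = 3.0, 1.0, 0.1
theta, w, b = 0.0, 0.0, 0.0

fake = theta * z
d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)

# One ascent step for D on log D(real) + log(1 - D(fake)),
# with gradients worked out by hand for this 1-D toy case:
w += lr * ((1.0 - d_real) * real - d_fake * fake)
b += lr * ((1.0 - d_real) - d_fake)

# One ascent step for G on log D(fake) (the non-saturating generator loss):
d_fake = sigmoid(w * theta * z + b)
theta += lr * (1.0 - d_fake) * w * z

print(w, theta)  # w > 0: D now scores larger inputs as more "real";
                 # theta > 0: G moved its fake toward the real value 3.0
```

Iterating these two steps is GAN training in miniature; in real GANs both players are deep networks and the gradients come from backpropagation rather than hand derivation.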
Wikipedia is the largest free online encyclopaedia and can be expanded by anyone. The users who create content on a specific Wikipedia language edition form a social network in which users fall into different roles: normal users, administrators, and functional bots. Within the network, a user can post reviews, suggestions, or simple messages to the "talk page" of another user. Each language edition of Wikipedia has this type of social network. In this thesis, characteristics of the three roles are analysed in order to learn how they function in one Wikipedia language network and to apply them to another Wikipedia network to identify bots.
Rumor detection and analysis is a complex and multi-dimensional problem in which the content, the users' communication strategies, or the diffusion of rumors can be taken into account. Several studies considering one of these dimensions have been conducted. However, semantics remains an important gap in existing studies.
With the growing usage of ontologies in many knowledge-intensive sectors, not only has the number of available ontologies increased considerably, but they are also growing in size and becoming more complex to manage. Moreover, modelling domain knowledge in the form of ontologies is labour-intensive work that is expensive from an implementation perspective. There is therefore a strong demand for technologies and automated tools for creating restricted views of ontologies, so that existing ontologies can be reused to their full potential. Forgetting is a non-standard reasoning service that seeks to create restricted views of ontologies by eliminating some terms from the ontologies in such a way that all logical consequences are preserved up to the remaining terms.
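In the propositional case, forgetting has a classical closed form, forget(φ, p) = φ[p/⊤] ∨ φ[p/⊥], which keeps exactly the consequences over the remaining vocabulary. A small self-contained check (the formula is an invented example; forgetting in description logics is considerably harder than this):

```python
from itertools import product

# Formulas as Python predicates over an assignment dict.
# phi = (p or q) and (not p or r)
phi = lambda v: (v["p"] or v["q"]) and ((not v["p"]) or v["r"])

def forget(f, var):
    """Propositional forgetting: forget(f, var) = f[var/True] or f[var/False].
    The result mentions only the remaining variables and preserves all
    consequences formulated over them."""
    return lambda v: f({**v, var: True}) or f({**v, var: False})

psi = forget(phi, "p")

# Sanity check: over q, r the forgotten formula is equivalent to (q or r),
# the resolvent of (p or q) and (not p or r) on p.
for q, r in product([False, True], repeat=2):
    v = {"q": q, "r": r}
    print(v, psi(v))
    assert psi(v) == (q or r)
```

The check makes the "preserved up to the remaining terms" guarantee concrete: every entailment of φ that mentions only q and r, such as q ∨ r, is still an entailment of the forgetting result, even though p is gone.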
Research is becoming increasingly digital, interdisciplinary, and data-driven, and it affects environments beyond academia, such as industry and government. Research-output representation, publication, mining, analysis, and visualization are being taken to a new level, driven by the increased use of Web standards and digital scholarly-communication initiatives. The heterogeneity of scholarly artifacts and of their metadata, spread over different Web data sources, poses a major challenge for researchers with regard to search, retrieval, and exploration. In this talk, I present a vision of Open Research achieved through community involvement in creating and curating metadata about scholarly events and publications.
In this talk I will present ongoing work on improving hands-free interaction. It is an extension of the discussion we had during my last Oberseminar. The aim of this work is to improve hands-free interaction with content on the web. We discuss the different types of navigation on the web and our approach to solving the challenge associated with "hops of interaction". We present an early-stage prototype of our implementation and discuss future directions for improvement.
In this talk I will present ongoing work on concept contraction in the description logic EL. The aim of this work is to model concept change (how does a concept change due to new input?) as a reformulation of the well-known AGM contraction model. We discuss an explicit construction of a concept contraction operator, as well as a set of postulates for such an operator. We then present a representation theorem showing that these two definitions are equivalent.
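For reference, the classical AGM postulates that a contraction operator ÷ on a belief set K is expected to satisfy; the EL concept-contraction postulates in the talk are presumably adaptations of these to concepts rather than belief sets:

```latex
% Classical AGM contraction postulates for a belief set K, sentences
% \varphi, \psi, and consequence operator Cn:
\begin{align*}
&\text{(Closure)}        && K \div \varphi = Cn(K \div \varphi)\\
&\text{(Success)}        && \varphi \notin Cn(\emptyset) \Rightarrow \varphi \notin Cn(K \div \varphi)\\
&\text{(Inclusion)}      && K \div \varphi \subseteq K\\
&\text{(Vacuity)}        && \varphi \notin Cn(K) \Rightarrow K \div \varphi = K\\
&\text{(Recovery)}       && K \subseteq Cn\big((K \div \varphi) \cup \{\varphi\}\big)\\
&\text{(Extensionality)} && Cn(\varphi) = Cn(\psi) \Rightarrow K \div \varphi = K \div \psi
\end{align*}
```

A representation theorem in the AGM tradition then shows that an explicitly constructed operator satisfies exactly these postulates, which is the shape of result announced above.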