Text predictions play an important role in improving the performance of gaze-based text entry systems. However, visual search, scanning, and selection of text predictions require a shift of the user's attention away from the keyboard layout. The spatial positioning of predictions therefore becomes a crucial aspect of the end-user experience. In this work, we investigate the role of spatial positioning by comparing the performance of three keyboards with different positions for text predictions. The experimental results show no significant differences in text entry performance: displaying suggestions closer to the visual fovea did not improve participants' text entry rate; however, they used more keystrokes and more backspaces.
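Keystroke counts of the kind reported here are commonly normalized as keystrokes per character (KSPC); the following sketch illustrates the metric with hypothetical numbers, not data from this experiment:

```python
def kspc(keystrokes, transcribed_text):
    """Keystrokes per character: total input actions (letter keys,
    backspaces, prediction selections) divided by the length of the
    final transcribed text. Lower is better; 1.0 means one action
    per character, values below 1.0 mean predictions saved work."""
    return keystrokes / len(transcribed_text)

# Hypothetical session: "hello world" (11 characters) entered with
# 6 actions thanks to two prediction selections, versus 11 actions
# when every character is typed individually.
with_predictions = kspc(keystrokes=6, transcribed_text="hello world")
without_predictions = kspc(keystrokes=11, transcribed_text="hello world")
print(with_predictions, without_predictions)
```

A condition in which participants press more backspaces shows up directly as a higher KSPC, even when the entry rate in words per minute is unchanged.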

13.06.2019 - 10:15 h
Teresa Krämer

Subtasks within commonsense reasoning are the derivation of new information from existing knowledge (forward reasoning) and checking whether a statement (conjecture) can be derived from the given knowledge (backward reasoning). For the latter task, a small subset of the knowledge is often sufficient to prove the conjecture. The difficulty is to identify and select this subset in an automated way. Selection methods like SInE [1] and Similarity SInE [2] can be used to preselect knowledge that might be relevant. However, those methods do not always select enough knowledge, or the right knowledge, to find a proof.
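The two subtasks can be sketched on a toy propositional knowledge base (this is a minimal illustration with made-up rules, not SInE itself):

```python
# Forward reasoning: derive all facts reachable from the given facts
# via Horn rules. Backward reasoning: check whether a conjecture is
# among the derivable facts. Rules are (premises, conclusion) pairs.
def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                changed = True
    return derived

def proves(facts, rules, conjecture):
    """Does the selected knowledge entail the conjecture?"""
    return conjecture in forward_chain(facts, rules)

rules = [({"bird"}, "has_feathers"), ({"has_feathers"}, "can_molt")]
print(proves({"bird"}, rules, "can_molt"))       # True: both rules suffice
print(proves({"bird"}, rules[:1], "can_molt"))   # False: selection too small
```

The second call illustrates the failure mode described above: a premise-selection method that drops a rule needed for the proof makes the conjecture unprovable, however good the prover is.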

13.06.2019 - 10:15 h

In the CUTLER project [1], our aim is to assist policy-makers through data analysis methods for environmental, economic, and social data. However, effective data analytics needs visualization tools in order to reveal insights and make sense to domain experts or even to users without technical knowledge. For the development of the social data visualization, we follow CUTLER's representation interface and integrate our framework with the Kibana dashboard. Using this framework, I will describe the visualization widgets for tweet, news-comment, and event data, covering different query methods and interactive visualizations such as tag clouds, geographic mapping, temporal characteristics, and sentiment classification. I will use a sample of crawled data from the CUTLER pilot cities to illustrate the visualizations.
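A tag cloud like those mentioned above is driven by term frequencies; a minimal sketch with toy tweets (hypothetical data, and a deliberately naive tokenizer without stop-word filtering) looks like this:

```python
from collections import Counter
import re

def tag_cloud_weights(texts, top_n=5):
    """Term frequencies used to size words in a tag cloud.
    A real pipeline would also remove stop words and normalize terms."""
    tokens = re.findall(r"[a-z]+", " ".join(texts).lower())
    return Counter(tokens).most_common(top_n)

tweets = ["Air quality in the city", "city traffic and air pollution"]
print(tag_cloud_weights(tweets, top_n=2))
```

The returned (term, count) pairs map directly onto font sizes in the rendered widget.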

06.06.2019 - 10:15 h
Moiz Rauf

The impact of intercultural exposure can be observed in multilingual societies, where bilingual speakers often switch or mix the grammar and lexicon of more than one language. This phenomenon is an inevitable outcome of language contact, and the switch can occur between sentences (inter-sentential), within a sentence (intra-sentential), or even at the word level.
Recent research in code-switching (CS) has turned towards neural networks and word embeddings for these low-resourced languages. The standard approach for a given CS corpus combines information from existing distributional representations pre-trained on the source languages with neural network models, achieving varying degrees of success.
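One simple baseline for combining monolingual pre-trained representations is to look a code-switched token up in each language's embedding table and concatenate the results; the tables and vectors below are toy values for illustration only:

```python
# Toy two-dimensional embedding tables standing in for real pre-trained
# monolingual embeddings (e.g., English and Spanish).
emb_en = {"good": [0.9, 0.1]}
emb_es = {"dia": [0.2, 0.8]}

def cs_embedding(token, tables, dim=2):
    """Concatenate the token's vector from each monolingual table,
    falling back to a zero vector where the token is out of vocabulary."""
    vec = []
    for table in tables:
        vec.extend(table.get(token, [0.0] * dim))
    return vec

print(cs_embedding("good", [emb_en, emb_es]))  # English hit, Spanish OOV
```

The concatenated vector is then fed to a downstream neural model; more elaborate approaches learn a shared space instead of concatenating.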

23.05.2019 - 10:15 h
Stefan Müller

In the past decade, quantitative text analysis has established itself as a frequently used method in political science for studying political actors and processes. Typical research questions include the identification of concepts and topics, text classification, and the measurement of latent policy positions. While quantitative text analysis reduces the costs of analysing texts, and while various user-friendly open-source libraries have been developed recently, several challenges remain. In this presentation, I will describe popular text-as-data methods from the perspective of implementation in R, one of the most commonly used statistical programming languages in the social sciences.
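The common starting point for the text-as-data methods mentioned here is a document-feature matrix (in R this is typically built with packages such as quanteda; the language-agnostic sketch below uses Python and made-up example documents):

```python
from collections import Counter

def dfm(documents):
    """Document-feature matrix: one row per document, one column per
    vocabulary term, cells holding raw term counts."""
    vocab = sorted({w for doc in documents for w in doc.lower().split()})
    rows = []
    for doc in documents:
        counts = Counter(doc.lower().split())
        rows.append([counts[term] for term in vocab])
    return vocab, rows

vocab, matrix = dfm(["tax policy", "policy debate on tax policy"])
print(vocab)   # ['debate', 'on', 'policy', 'tax']
print(matrix)  # [[0, 0, 1, 1], [1, 1, 2, 1]]
```

Topic models, classifiers, and scaling methods for policy positions all consume a matrix of this shape.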

23.05.2019 - 10:15 h

Research at WeST now covers a wider range of politics-related topics than before, for several reasons: first, politics drives behaviors on the Web that trouble society as a whole; second, politics influences content in news and social media; third, democracy online raises the question of what should and what can be regulated. "Politics" in this sense is broader than the level of politicians and includes the audience level. From a data perspective, politics provides a mapping of expected behaviors by groups and positions within text. My talk briefly surveys the following application areas, with some snippets of ongoing work: misinformation, partisanship, discourses, platforms, and retro-theories.

23.05.2019 - 10:15 h

The objective of this thesis is the system identification of ships for multistep prediction, i.e., simulation, with deep learning methods. First-principles modeling of ships is a challenging and expensive task, as it requires complex numerical computations, model tests, sea trials, and expert knowledge in marine engineering. The collection of sensor data during the routine operation of ships enables system identification methods for deriving mathematical models of the vessel dynamics.
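Multistep prediction is typically realized by rolling a learned one-step model forward on its own outputs; the sketch below uses a hand-set linear map as a stand-in for a trained network, with invented coefficients purely for illustration:

```python
def one_step(state, control):
    """One-step dynamics surrogate: x_{t+1} = 0.9 * x_t + 0.1 * u_t.
    In the thesis setting this would be a trained deep network."""
    return 0.9 * state + 0.1 * control

def simulate(x0, controls):
    """Recursive rollout: feed each prediction back in as the next state."""
    states = [x0]
    for u in controls:
        states.append(one_step(states[-1], u))
    return states

# Free decay from x0 = 1.0 with zero control input:
print(simulate(1.0, [0.0, 0.0]))  # states approximately [1.0, 0.9, 0.81]
```

Because prediction errors compound over the rollout, multistep (simulation) accuracy is a much stricter test of an identified model than one-step-ahead accuracy.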

09.05.2019 - 10:15 h

In 2013, property paths were introduced with the release of SPARQL 1.1. Property paths allow complex queries to be described in a more concise and comprehensive way. The W3C provides a formal specification of the semantics of property paths, to which implementations should adhere. Most commonly used RDF stores claim to support property paths. To give insight into how well current implementations of property paths work, we have developed BeSEPPI, a benchmark for the semantics-based evaluation of property path implementations. BeSEPPI measures execution times of queries containing property paths and checks whether RDF stores follow the W3C's semantics by testing the correctness and completeness of query result sets.
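As an illustration of the semantics such a benchmark checks, consider one path operator: `p+` (one or more repetitions of predicate `p`) denotes the transitive closure of that predicate over the graph. A minimal reference evaluation on a toy edge set (not BeSEPPI itself) looks like this:

```python
# Toy graph: subject-object pairs for a single predicate, e.g. ex:knows.
edges = {("a", "b"), ("b", "c"), ("c", "d")}

def one_or_more(edges, start):
    """All nodes reachable from `start` via one or more predicate edges,
    i.e., the result of the property path `ex:knows+` per SPARQL 1.1."""
    reached, frontier = set(), {start}
    while frontier:
        frontier = {o for (s, o) in edges if s in frontier} - reached
        reached |= frontier
    return reached

print(sorted(one_or_more(edges, "a")))  # ['b', 'c', 'd']
```

A store whose `p+` implementation misses `d` here returns an incomplete result set, which is exactly the kind of deviation from the W3C semantics the benchmark is designed to expose.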

09.05.2019 - 10:15 h

Nowadays, given the increasing threat that fake news poses to the trustworthiness of online information, recognizing the truthfulness of news can help to minimize its potential harm to society. However, finding truthful information in social media contexts, which cover a large number of subjects, is a very complex task. Fake news detection is more than a simple keyword-spotting task: the truth of statements cannot be assessed from the news content alone, and it is necessary to automatically understand human behavior and sentiment in social media, which are usually vague and subject-dependent and must be interpreted and represented in different ways.

25.04.2019 - 10:15 h
Jan Dillenberger

When processing natural language on a computer, vector representations of words have many fields of application. They enable a computer to score the similarity between two words and to determine the missing word in an analogy. In 2013, Mikolov et al. [2013a] published an algorithm called word2vec for creating such vector representations. It produced vector representations that far exceeded the performance of earlier methods. This thesis explains the algorithm's popular skip-gram model from the perspective of neural networks.
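The skip-gram model is trained on (center, context) word pairs extracted from running text; a minimal sketch of how such pairs are generated for a given window size (toy sentence, training itself omitted):

```python
def skipgram_pairs(tokens, window=2):
    """Generate (center, context) training pairs for the skip-gram model:
    every word within `window` positions of a center word is a context."""
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

print(skipgram_pairs(["the", "cat", "sat"], window=1))
# [('the', 'cat'), ('cat', 'the'), ('cat', 'sat'), ('sat', 'cat')]
```

The network is then trained to predict the context word from the center word, and the learned input weights become the word vectors.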

18.04.2019 - 10:15 h