## List of publications and preprints by Tom Hanika

### 2021

- Koopmann, T., Stubbemann, M., Kapa, M., Paris, M., Buenstorf, G., Hanika, T., Hotho, A., Jäschke, R., Stumme, G.: Proximity dimensions and the emergence of collaboration: a HypTrails study on German AI research. Scientometrics (2021). DOI: 10.1007/s11192-021-03922-1. Creation and exchange of knowledge depends on collaboration. Recent work has suggested that the emergence of collaboration frequently relies on geographic proximity. However, being co-located tends to be associated with other dimensions of proximity, such as social ties or a shared organizational environment. To account for such factors, multiple dimensions of proximity have been proposed, including cognitive, institutional, organizational, social and geographical proximity. Since they strongly interrelate, disentangling these dimensions and their respective impact on collaboration is challenging. To address this issue, we propose various methods for measuring different dimensions of proximity. We then present an approach to compare and rank them with respect to the extent to which they indicate co-publications and co-inventions. We adapt the HypTrails approach, which was originally developed to explain human navigation, to co-author and co-inventor graphs. We evaluate this approach on a subset of the German research community, specifically academic authors and inventors active in research on artificial intelligence (AI). We find that social proximity and cognitive proximity are more important for the emergence of collaboration than geographic proximity.
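The core computation HypTrails rests on can be sketched compactly: each proximity hypothesis is expressed as Dirichlet pseudo-counts over transitions, and hypotheses are ranked by the marginal likelihood (Bayesian evidence) they assign to the observed transition counts of a first-order Markov chain. A minimal sketch of that generic evidence computation — the counts and pseudo-counts below are invented toy data, not from the study:

```python
from math import lgamma

def log_evidence(transitions, alpha):
    """Log marginal likelihood of a first-order Markov chain under
    Dirichlet priors -- the quantity HypTrails compares across hypotheses.
    transitions[i][j]: observed count of moves i -> j.
    alpha[i][j]: pseudo-counts elicited from one proximity hypothesis."""
    total = 0.0
    for n_i, a_i in zip(transitions, alpha):
        # Per-state Dirichlet-multinomial evidence, written with log-gammas.
        total += lgamma(sum(a_i)) - lgamma(sum(a_i) + sum(n_i))
        for n, a in zip(n_i, a_i):
            total += lgamma(a + n) - lgamma(a)
    return total

# Toy example with two states: hypothesis A expects self-loops,
# hypothesis B expects switching; the data clearly favours A.
counts = [[8, 2], [1, 9]]
hyp_a = [[5.0, 1.0], [1.0, 5.0]]
hyp_b = [[1.0, 5.0], [5.0, 1.0]]
better = "A" if log_evidence(counts, hyp_a) > log_evidence(counts, hyp_b) else "B"
```

The paper's adaptation to co-author and co-inventor graphs adds the elicitation of pseudo-counts from the proximity measures themselves; this sketch only shows the shared evidence machinery.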
- Schäfermeier, B., Stumme, G., Hanika, T.: Topic Space Trajectories: A case study on machine learning literature. Accepted for publication in Scientometrics (2021). arXiv: 2010.12294. The annual number of publications at scientific venues, for example, conferences and journals, is growing quickly. Hence, it becomes harder and harder even for researchers to keep track of research topics and their progress. In this task, researchers can be supported by automated publication analysis. Yet, many such methods result in uninterpretable, purely numerical representations. As an attempt to support human analysts, we present *topic space trajectories*, a structure that allows for the comprehensible tracking of research topics. We demonstrate how these trajectories can be interpreted based on eight different analysis approaches. To obtain comprehensible results, we employ non-negative matrix factorization as well as suitable visualization techniques. We show the applicability of our approach on a publication corpus spanning 50 years of machine learning research from 32 publication venues. Our novel analysis method may be employed for paper classification, for the prediction of future research topics, and for the recommendation of fitting conferences and journals for submitting unpublished work.
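The non-negative matrix factorization at the heart of this approach can be illustrated with the classic Lee–Seung multiplicative updates: a document–term matrix is factored into non-negative paper–topic and topic–term matrices. A minimal sketch on an invented toy matrix (the paper's actual pipeline, corpus, and the trajectory construction on top of the factors are of course far richer):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy document-term matrix: 8 "papers" x 6 "terms" (invented data).
V = rng.random((8, 6))
k = 2                         # number of latent topics
W = rng.random((8, k)) + 0.1  # paper-topic weights ("topic space" coordinates)
H = rng.random((k, 6)) + 0.1  # topic-term weights

err0 = np.linalg.norm(V - W @ H)
for _ in range(200):
    # Lee--Seung multiplicative updates for the Frobenius objective;
    # they preserve non-negativity and do not increase the error.
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
err1 = np.linalg.norm(V - W @ H)
```

Averaging the rows of `W` per venue and time slice would give one (simplified) reading of a "trajectory" through topic space.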
- Hanika, T., Schneider, F.M., Stumme, G.: Intrinsic Dimension of Geometric Data Sets. Accepted for publication in Tohoku Mathematical Journal (2021). arXiv: 1801.07985. The curse of dimensionality is a phenomenon frequently observed in machine learning (ML) and knowledge discovery (KD). There is a large body of literature investigating its origin and impact, using methods from mathematics as well as from computer science. Among the mathematical insights into data dimensionality, there is an intimate link between the dimension curse and the phenomenon of measure concentration, which makes the former accessible to methods of geometric analysis. The present work provides a comprehensive study of the intrinsic geometry of a data set, based on Gromov's metric measure geometry and Pestov's axiomatic approach to intrinsic dimension. In detail, we define a concept of geometric data set and introduce a metric as well as a partial order on the set of isomorphism classes of such data sets. Based on these objects, we propose and investigate an axiomatic approach to the intrinsic dimension of geometric data sets and establish a concrete dimension function with the desired properties. Our mathematical model for data sets and their intrinsic dimension is computationally feasible and, moreover, adaptable to specific ML/KD-algorithms, as illustrated by various experiments.
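For context only — this is *not* the axiomatic dimension function developed in the paper — a widely used empirical way to estimate the intrinsic dimension of a point cloud is the two-NN estimator, which infers dimension from the ratio of each point's second- to first-nearest-neighbour distance:

```python
import random
from math import dist, log

def two_nn_dimension(points):
    """Two-NN intrinsic dimension estimate: d is approximately
    n / sum(log(r2/r1)) over all points, where r1, r2 are the distances
    to the first and second nearest neighbour. A generic empirical
    estimator, not the paper's axiomatic dimension function."""
    ratios = []
    for i, p in enumerate(points):
        d1, d2 = sorted(dist(p, q) for j, q in enumerate(points) if j != i)[:2]
        ratios.append(log(d2 / d1))
    return len(points) / sum(ratios)

# Random points on a line embedded in 3-D: the ambient dimension is 3,
# but the intrinsic dimension estimate should come out near 1.
random.seed(0)
ts = [random.random() for _ in range(200)]
line = [(t, 2 * t, 3 * t) for t in ts]
d_est = two_nn_dimension(line)
```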
- Hanika, T., Hirth, J.: Exploring Scale-Measures of Data Sets. Accepted for publication at ICFCA 2021 (2021). Measurement is a fundamental building block of numerous scientific models and their creation. This is in particular true for data driven science. Due to the high complexity and size of modern data sets, the necessity for the development of understandable and efficient scaling methods is at hand. A profound theory for scaling data is scale-measures, as developed in the field of formal concept analysis. Recent developments indicate that the set of all scale-measures for a given data set constitutes a lattice and does hence allow efficient exploring algorithms. In this work we study the properties of said lattice and propose a novel scale-measure exploration algorithm that is based on the well-known and proven attribute exploration approach. Our results motivate multiple applications in scale recommendation, most prominently (semi-)automatic scaling.
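The central notion can be made concrete: a scale-measure is a map from the objects of a formal context into a "scale" context such that the preimage of every extent of the scale is an extent of the original context. A brute-force sketch of that check — the toy context is invented, and the enumeration of extents is exponential in the number of attributes, so this is illustrative only:

```python
from itertools import combinations

def extents(objects, incidence):
    """All extents of a formal context, via the derivation operators:
    for every attribute set B, collect the objects having all of B.
    incidence[g] is the set of attributes of object g."""
    attrs = set().union(*incidence.values()) if incidence else set()
    result = set()
    for r in range(len(attrs) + 1):
        for B in combinations(sorted(attrs), r):
            result.add(frozenset(g for g in objects if set(B) <= incidence[g]))
    return result

def is_scale_measure(sigma, objs, inc, s_objs, s_inc):
    """sigma maps context objects into the scale; it is a scale-measure
    iff every extent preimage is an extent of the original context."""
    ctx_ext = extents(objs, inc)
    return all(
        frozenset(g for g in objs if sigma[g] in ext) in ctx_ext
        for ext in extents(s_objs, s_inc)
    )

# Toy context: three objects, attributes a and b (invented data).
objs, inc = [1, 2, 3], {1: {'a'}, 2: {'a', 'b'}, 3: {'b'}}
# Coarser "scale": object x carries attribute p, object y carries none.
s_objs, s_inc = ['x', 'y'], {'x': {'p'}, 'y': set()}
ok = is_scale_measure({1: 'x', 2: 'x', 3: 'y'}, objs, inc, s_objs, s_inc)
bad = is_scale_measure({1: 'x', 2: 'y', 3: 'x'}, objs, inc, s_objs, s_inc)
```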

### 2020

- Hanika, T., Hirth, J.: On the Lattice of Conceptual Measurements (2020). arXiv: 2012.05267. We present a novel approach for data set scaling based on scale-measures from formal concept analysis, i.e., continuous maps between closure systems, and derive a canonical representation. Moreover, we prove said scale-measures are lattice ordered with respect to the closure systems. This enables exploring the set of scale-measures through the use of meet and join operations. Furthermore, we show that the lattice of scale-measures is isomorphic to the lattice of sub-closure systems that arises from the original data. Finally, we provide another representation of scale-measures using propositional logic in terms of data set features. Our theoretical findings are discussed by means of examples.
- Stubbemann, M., Hanika, T., Stumme, G.: Orometric Methods in Bounded Metric Data. In: Berthold, M.R., Feelders, A., Krempl, G. (eds.) Advances in Intelligent Data Analysis XVIII (IDA 2020, Konstanz, Germany, April 27-29, 2020), LNCS 12080, pp. 496--508. Springer (2020). DOI: 10.1007/978-3-030-44584-3_39.
- Felde, M., Hanika, T., Stumme, G.: Null Models for Formal Contexts. Information 11(3), 135. MDPI (2020). https://www.mdpi.com/2078-2489/11/3/135.
- Hanika, T., Hirth, J.: Knowledge Cores in Large Formal Contexts (2020). arXiv: 2002.11776. Knowledge computation tasks are often infeasible for large data sets. This is in particular true when deriving knowledge bases in formal concept analysis (FCA). Hence, it is essential to come up with techniques to cope with this problem. Many successful methods are based on random processes to reduce the size of the investigated data set. This, however, makes them hardly interpretable with respect to the discovered knowledge. Other approaches restrict themselves to highly supported subsets and omit rare and interesting patterns. An essentially different approach is used in network science, called *k*-cores. These are able to reflect rare patterns if they are well connected in the data set. In this work, we study *k*-cores in the realm of FCA by exploiting the natural correspondence to bipartite graphs. This structurally motivated approach leads to a comprehensible extraction of knowledge cores from large formal context data sets.
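The *k*-core idea borrowed from network science is easy to sketch on the bipartite incidence graph of a formal context: repeatedly peel away objects and attributes whose degree falls below *k* until everything that remains has degree at least *k*. This sketch uses a single threshold for both sides; the paper's cores for formal contexts may be parameterized differently:

```python
def k_core(edges, k):
    """k-core of a bipartite graph, by iterative peeling.
    edges: set of (object, attribute) incidences of a formal context."""
    edges = set(edges)
    while True:
        deg = {}
        for g, m in edges:  # tag nodes by side so names cannot collide
            deg[('g', g)] = deg.get(('g', g), 0) + 1
            deg[('m', m)] = deg.get(('m', m), 0) + 1
        drop = {v for v, d in deg.items() if d < k}
        if not drop:
            return edges
        edges = {(g, m) for g, m in edges
                 if ('g', g) not in drop and ('m', m) not in drop}

# Toy context (invented): a dense 2x2 block plus one sparse incidence.
ctx = {(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b'), (3, 'a')}
core = k_core(ctx, 2)  # the sparse object 3 is peeled away
```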
- Borchmann, D., Hanika, T., Obiedkov, S.: Probably approximately correct learning of Horn envelopes from queries. Discrete Applied Mathematics 273, 30--42 (2020). DOI: 10.1016/j.dam.2019.02.036. We propose an algorithm for learning the Horn envelope of an arbitrary domain using an expert, or an oracle, capable of answering certain types of queries about this domain. Attribute exploration from formal concept analysis is a procedure that solves this problem, but the number of queries it may ask is exponential in the size of the resulting Horn formula in the worst case. We recall a well-known polynomial-time algorithm for learning Horn formulas with membership and equivalence queries and modify it to obtain a polynomial-time probably approximately correct algorithm for learning the Horn envelope of an arbitrary domain.
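The target object of this learning task can be illustrated directly: since Horn theories are exactly those whose model sets are closed under intersection, the models of the Horn envelope of a domain are the closure of the domain's models under componentwise intersection. A sketch on an invented toy domain (the paper's contribution is the query-based PAC algorithm, not this closure computation):

```python
def horn_envelope_models(models):
    """Model set of the Horn envelope of a domain: the smallest
    intersection-closed superset of the given models.
    Models are frozensets of the variables that are true."""
    closed = set(models)
    frontier = closed
    while frontier:
        # Intersect everything new against everything known.
        new = {a & b for a in frontier for b in closed} - closed
        closed |= new
        frontier = new
    return closed

# Toy domain over variables x, y, z (invented): three models, pairwise
# intersections yield the singletons, which in turn yield the empty model.
domain = {frozenset({'x', 'y'}), frozenset({'y', 'z'}), frozenset({'x', 'z'})}
env = horn_envelope_models(domain)
```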

### 2019

- Hanika, T., Koyda, M., Stumme, G.: Relevant Attributes in Formal Contexts. In: Endres, D., Alam, M., Sotropa, D. (eds.) ICCS 2019, LNCS 11530, pp. 102--116. Springer (2019). DOI: 10.1007/978-3-030-23182-8_8.
- Hanika, T., Kibanov, M., Kropf, J., Laser, S.: Ich denke, es ist wichtig zu verstehen, warum die Netzwerkanalyse jetzt populär und besonders interessant für die Forschung geworden ist [I think it is important to understand why network analysis has now become popular and particularly interesting for research]. In: Kropf, J., Laser, S. (eds.) Digitale Bewertungspraktiken, pp. 165--188. Springer (2019).
- Hanika, T., Herde, M., Kuhn, J., Leimeister, J.M., Lukowicz, P., Oeste-Reiß, S., Schmidt, A., Sick, B., Stumme, G., Tomforde, S., Zweig, K.A.: Collaborative Interactive Learning - A clarification of terms and a differentiation from other research fields. CoRR abs/1905.07264 (2019).
- Hanika, T., Marx, M., Stumme, G.: Discovering Implicational Knowledge in Wikidata. In: Cristea, D., Le Ber, F., Sertkaya, B. (eds.) ICFCA 2019, LNCS 11511, pp. 315--323. Springer (2019). DOI: 10.1007/978-3-030-21462-3_21.
@inproceedings{conf/icfca/Hanika0S19,

author = {Hanika, Tom and Marx, Maximilian and Stumme, Gerd},

booktitle = {ICFCA},

crossref = {conf/icfca/2019},

editor = {Cristea, Diana and Ber, Florence Le and Sertkaya, Baris},

keywords = {2019 concept formal implications itegpub kde kdepub myown publist},

pages = {315-323},

publisher = {Springer},

series = {Lecture Notes in Computer Science},

title = {Discovering Implicational Knowledge in Wikidata.},

volume = 11511,

doi = {10.1007/978-3-030-21462-3_21},

isbn = {978-3-030-21462-3},

url = {http://dblp.uni-trier.de/db/conf/icfca/icfca2019.html#Hanika0S19},

year = 2019

}

- Dürrschnabel, D., Hanika, T., Stumme, G.: Drawing Order Diagrams Through Two-Dimension Extension, http://arxiv.org/abs/1906.06208, (2019). Order diagrams are an important tool to visualize the complex structure of ordered sets. Favorable drawings of order diagrams, i.e., easily readable for humans, are hard to come by, even for small ordered sets. Many attempts were made to transfer classical graph drawing approaches to order diagrams. Although these methods produce satisfying results for some ordered sets, they unfortunately perform poorly in general. In this work we present the novel algorithm DimDraw to draw order diagrams. This algorithm is based on a relation between the dimension of an ordered set and the bipartiteness of a corresponding graph.
@misc{durrschnabel2019drawing,

abstract = {Order diagrams are an important tool to visualize the complex structure of ordered sets. Favorable drawings of order diagrams, i.e., easily readable for humans, are hard to come by, even for small ordered sets. Many attempts were made to transfer classical graph drawing approaches to order diagrams. Although these methods produce satisfying results for some ordered sets, they unfortunately perform poorly in general. In this work we present the novel algorithm DimDraw to draw order diagrams. This algorithm is based on a relation between the dimension of an ordered set and the bipartiteness of a corresponding graph.},

author = {Dürrschnabel, Dominik and Hanika, Tom and Stumme, Gerd},

keywords = {diagramm fca kdepub myown order preprint publist},

note = {cite arxiv:1906.06208 Comment: 16 pages, 12 Figures},

title = {Drawing Order Diagrams Through Two-Dimension Extension},

url = {http://arxiv.org/abs/1906.06208},

year = 2019

}

- Hanika, T., Hirth, J.: Conexp-Clj - A Research Tool for FCA. In: Cristea, D., Ber, F.L., Missaoui, R., Kwuida, L., and Sertkaya, B. (eds.) ICFCA (Supplements). pp. 70-75. CEUR-WS.org (2019).
@inproceedings{conf/icfca/HanikaH19,

author = {Hanika, Tom and Hirth, Johannes},

booktitle = {ICFCA (Supplements)},

crossref = {conf/icfca/2019suppl},

editor = {Cristea, Diana and Ber, Florence Le and Missaoui, Rokia and Kwuida, Léonard and Sertkaya, Baris},

keywords = {2019 clojure conexp fca itegpub kde kdepub myown publist},

pages = {70-75},

publisher = {CEUR-WS.org},

series = {CEUR Workshop Proceedings},

title = {Conexp-Clj - A Research Tool for FCA.},

url = {http://dblp.uni-trier.de/db/conf/icfca/icfca2019suppl.html#HanikaH19},

volume = 2378,

year = 2019

}

- Felde, M., Hanika, T.: Formal Context Generation Using Dirichlet Distributions. In: Endres, D., Alam, M., and Sotropa, D. (eds.) ICCS. pp. 57-71. Springer (2019).
@inproceedings{conf/iccs/FeldeH19,

author = {Felde, Maximilian and Hanika, Tom},

booktitle = {ICCS},

crossref = {conf/iccs/2019},

editor = {Endres, Dominik and Alam, Mehwish and Sotropa, Diana},

keywords = {2019 context fca generation itegpub kde kdepub myown publist random},

pages = {57-71},

publisher = {Springer},

series = {Lecture Notes in Computer Science},

title = {Formal Context Generation Using Dirichlet Distributions.},

doi = {10.1007/978-3-030-23182-8_5},

isbn = {978-3-030-23182-8},

url = {http://dblp.uni-trier.de/db/conf/iccs/iccs2019.html#FeldeH19},

volume = 11530,

year = 2019

}

- Dürrschnabel, D., Hanika, T., Stumme, G.: DimDraw - A Novel Tool for Drawing Concept Lattices. In: Cristea, D., Ber, F.L., Missaoui, R., Kwuida, L., and Sertkaya, B. (eds.) ICFCA (Supplements). pp. 60-64. CEUR-WS.org (2019).
@inproceedings{conf/icfca/DurrschnabelHS19,

author = {Dürrschnabel, Dominik and Hanika, Tom and Stumme, Gerd},

booktitle = {ICFCA (Supplements)},

crossref = {conf/icfca/2019suppl},

editor = {Cristea, Diana and Ber, Florence Le and Missaoui, Rokia and Kwuida, Léonard and Sertkaya, Baris},

keywords = {2019 drawing fca itegpub kde kdepub myown publist},

pages = {60-64},

publisher = {CEUR-WS.org},

series = {CEUR Workshop Proceedings},

title = {DimDraw - A Novel Tool for Drawing Concept Lattices.},

url = {http://dblp.uni-trier.de/db/conf/icfca/icfca2019suppl.html#DurrschnabelHS19},

volume = 2378,

year = 2019

}

- Hanika, T.: Discovering Knowledge in Bipartite Graphs with Formal Concept Analysis. PhD thesis, University of Kassel, Germany (2019).
@phdthesis{phd/dnb/Hanika19,

author = {Hanika, Tom},

keywords = {dissertation fca itegpub kde kdepub myown publist tag},

school = {University of Kassel, Germany},

title = {Discovering Knowledge in Bipartite Graphs with Formal Concept Analysis.},

doi = {10.17170/kobra-20190213189},

year = 2019

}

- Schaefermeier, B., Hanika, T., Stumme, G.: Distances for wifi based topological indoor mapping. Proceedings of the 16th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services. ACM (2019).
@inproceedings{Schaefermeier_2019,

author = {Schaefermeier, Bastian and Hanika, Tom and Stumme, Gerd},

booktitle = {Proceedings of the 16th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services},

keywords = {distance itegpub kde kdepub metric myown publist wifi},

month = {nov},

publisher = {ACM},

title = {Distances for wifi based topological indoor mapping},

doi = {10.1145/3360774.3360780},

year = 2019

}

- Dürrschnabel, D., Hanika, T., Stubbemann, M.: FCA2VEC: Embedding Techniques for Formal Concept Analysis. CoRR (accepted as a chapter of the Springer volume titled "Complex Data Analytics with Formal Concept Analysis"). abs/1911.11496, (2019).
@article{journals/corr/abs-1911-11496,

author = {Dürrschnabel, Dominik and Hanika, Tom and Stubbemann, Maximilian},

journal = {CoRR (accepted as a chapter of the Springer volume titled "Complex Data Analytics with Formal Concept Analysis")},

keywords = {2020 embedding fca kdepub learning machine ml myown preprint publist},

title = {FCA2VEC: Embedding Techniques for Formal Concept Analysis.},

url = {http://dblp.uni-trier.de/db/journals/corr/corr1911.html#abs-1911-11496},

volume = {abs/1911.11496},

year = 2019

}

### 2018

- Doerfel, S., Hanika, T., Stumme, G.: Clones in Graphs. In: Ceci, M., Japkowicz, N., Liu, J., Papadopoulos, G.A., and Ras, Z.W. (eds.) ISMIS. pp. 56-66. Springer (2018).
@inproceedings{conf/ismis/DoerfelHS18,

author = {Doerfel, Stephan and Hanika, Tom and Stumme, Gerd},

booktitle = {ISMIS},

crossref = {conf/ismis/2018},

editor = {Ceci, Michelangelo and Japkowicz, Nathalie and Liu, Jiming and Papadopoulos, George A. and Ras, Zbigniew W.},

keywords = {2018 clones fca graphs kdepub myown publist},

pages = {56-66},

publisher = {Springer},

series = {Lecture Notes in Computer Science},

title = {Clones in Graphs.},

volume = 11177,

doi = {10.1007/978-3-030-01851-1_6},

isbn = {978-3-030-01851-1},

url = {http://dblp.uni-trier.de/db/conf/ismis/ismis2018.html#DoerfelHS18},

year = 2018

}

- Hanika, T., Zumbrägel, J.: Towards Collaborative Conceptual Exploration. In: Chapman, P., Endres, D., and Pernelle, N. (eds.) ICCS. pp. 120-134. Springer (2018).
@inproceedings{conf/iccs/HanikaZ18,

author = {Hanika, Tom and Zumbrägel, Jens},

booktitle = {ICCS},

crossref = {conf/iccs/2018},

editor = {Chapman, Peter and Endres, Dominik and Pernelle, Nathalie},

keywords = {2018 attribute collaborative exploration fca itegpub myown publist},

pages = {120-134},

publisher = {Springer},

series = {Lecture Notes in Computer Science},

title = {Towards Collaborative Conceptual Exploration.},

volume = 10872,

doi = {10.1007/978-3-319-91379-7_10},

isbn = {978-3-319-91379-7},

url = {http://dblp.uni-trier.de/db/conf/iccs/iccs2018.html#HanikaZ18},

year = 2018

}

### 2017

- Borchmann, D., Hanika, T.: Individuality in Social Networks. In: Missaoui, R., Kuznetsov, S.O., and Obiedkov, S. (eds.) Formal Concept Analysis of Social Networks. pp. 19--40. Springer International Publishing, Cham (2017). We consider individuality in bi-modal social networks, a facet that has not been considered before in the mathematical analysis of social networks. We use methods from formal concept analysis to develop a natural definition for individuality, and provide experimental evidence that this yields a meaningful approach for additional insights into the nature of social networks.
@inbook{Borchmann2017,

abstract = {We consider individuality in bi-modal social networks, a facet that has not been considered before in the mathematical analysis of social networks. We use methods from formal concept analysis to develop a natural definition for individuality, and provide experimental evidence that this yields a meaningful approach for additional insights into the nature of social networks.},

address = {Cham},

author = {Borchmann, Daniel and Hanika, Tom},

booktitle = {Formal Concept Analysis of Social Networks},

editor = {Missaoui, Rokia and Kuznetsov, Sergei O. and Obiedkov, Sergei},

keywords = {2017 fca itegpub kde kdepub myown network publist social},

pages = {19--40},

publisher = {Springer International Publishing},

title = {Individuality in Social Networks},

doi = {10.1007/978-3-319-64167-6_2},

isbn = {978-3-319-64167-6},

url = {https://doi.org/10.1007/978-3-319-64167-6_2},

year = 2017

}

- Borchmann, D., Hanika, T., Obiedkov, S.: On the Usability of Probably Approximately Correct Implication Bases. In: Bertet, K., Borchmann, D., Cellier, P., and Ferré, S. (eds.) Formal Concept Analysis - 14th International Conference, ICFCA 2017, Rennes, France, June 13-16, 2017, Proceedings. pp. 72-88. Springer (2017).
@inproceedings{conf/icfca/BorchmannHO17,

author = {Borchmann, Daniel and Hanika, Tom and Obiedkov, Sergei},

booktitle = {Formal Concept Analysis - 14th International Conference, ICFCA 2017, Rennes, France, June 13-16, 2017, Proceedings},

crossref = {conf/icfca/2017},

editor = {Bertet, Karell and Borchmann, Daniel and Cellier, Peggy and Ferré, Sébastien},

keywords = {2017 base canonical fca itegpub kde kdepub myown pac publist},

pages = {72-88},

publisher = {Springer},

series = {Lecture Notes in Computer Science},

title = {On the Usability of Probably Approximately Correct Implication Bases.},

volume = 10308,

doi = {10.1007/978-3-319-59271-8_5},

isbn = {978-3-319-59271-8},

url = {http://dblp.uni-trier.de/db/conf/icfca/icfca2017.html#BorchmannHO17},

year = 2017

}

### 2016

- Atzmueller, M., Hanika, T., Stumme, G., Schaller, R., Ludwig, B.: Social Event Network Analysis: Structure, Preferences, and Reality. Proc. IEEE/ACM ASONAM. IEEE Press, Boston, MA, USA (2016).
@inproceedings{langeNachtFCA,

address = {Boston, MA, USA},

author = {Atzmueller, Martin and Hanika, Tom and Stumme, Gerd and Schaller, Richard and Ludwig, Bernd},

booktitle = {Proc. IEEE/ACM ASONAM},

keywords = {2016 bipartite fca itegpub kde kdepub myown publist},

publisher = {IEEE Press},

title = {Social Event Network Analysis: Structure, Preferences, and Reality},

url = {https://www.kde.cs.uni-kassel.de/atzmueller/paper/atzmueller-social-event-analysis-asonam16-preprint.pdf},

year = 2016

}

- Borchmann, D., Hanika, T.: Some Experimental Results on Randomly Generating Formal Contexts. In: Huchard, M. and Kuznetsov, S. (eds.) CLA. pp. 57-69. CEUR-WS.org (2016).
@inproceedings{conf/cla/BorchmannH16,

author = {Borchmann, Daniel and Hanika, Tom},

booktitle = {CLA},

crossref = {conf/cla/2016},

editor = {Huchard, Marianne and Kuznetsov, Sergei},

keywords = {2016 context fca formal generating itegpub myown publist random stego},

pages = {57-69},

publisher = {CEUR-WS.org},

series = {CEUR Workshop Proceedings},

title = {Some Experimental Results on Randomly Generating Formal Contexts.},

volume = 1624,

url = {http://dblp.uni-trier.de/db/conf/cla/cla2016.html#BorchmannH16},

year = 2016

}