ECO
Exact Computing

The joint ECO/ESCAPE seminar takes place on Wednesdays at 1:30 pm in the LIRMM seminar room (building 4).

Topics:

Contacts : Bruno Grenet, Andrei Romashchenko



Spectral properties of pseudo-random graphs

Geoffroy Caillat-Grenier, Université de Montpellier
Wednesday, July 7, 2021

Random graphs have properties that are interesting for applications. These properties can be characterised by the spectrum of the adjacency matrix of the graph. While deterministic, explicit ways of producing graphs with such properties exist, these procedures are often extremely complicated. Moreover, using a purely random graph would require prohibitive resources to produce and store it. We have therefore explored an intermediate approach in which we use pseudo-random number generators to generate the graphs in question.

In this talk, I will present the procedures we devised to generate such graphs, the means used to test them, and the results obtained. To this end, I will explain several notions that help understand the link between the combinatorial and the spectral properties of graphs. I will also discuss some applications of this type of graph that allowed us to test our constructions.

Self-stabilisation of cellular automata on tilings

Siamak Taati, American University of Beirut
Wednesday, June 30, 2021

I will discuss the following notion of self-stabilisation in the setting of cellular automata (CA) and shift spaces of finite type (SFT). We say that a CA F self-stabilises on an SFT X if, starting from any finite perturbation of a configuration in X, the CA is able to correct the perturbation and return to X in finitely many steps. We require all the configurations in X to be fixed points of F. I will present efficient solutions for some families of SFTs, but also provide an example of an SFT for which the self-stabilisation problem is inherently hard. Using the notion of sparseness, it can be shown that if the CA is able to self-stabilise from finite perturbations in sub-quadratic time, then it can also self-stabilise starting from a Bernoulli random perturbation with a sufficiently small density of errors. This is joint work with Nazim Fatès and Irène Marcovici. [arXiv:2101.12682]
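As a toy illustration (not taken from the paper), the simplest SFT one can self-stabilise on is the one containing only the all-zero configuration: the CA computing the AND of each cell's neighbourhood fixes it and erases any finite island of errors in time linear in the island's diameter.

```python
def step(config, width=201):
    # one step of the AND rule: a cell stays 1 only if it and both of
    # its neighbours are 1 (boundary cells are treated as 0)
    return [
        config[i - 1] & config[i] & config[i + 1] if 0 < i < width - 1 else 0
        for i in range(width)
    ]

# start from the all-zero configuration with a finite perturbation
config = [0] * 201
for i in range(90, 110):
    config[i] = 1          # an island of 20 erroneous cells

steps = 0
while any(config):
    config = step(config)
    steps += 1
print(steps)  # 10: the island shrinks by one cell on each side per step
```

The all-zero configuration is a fixed point, and any finite perturbation dies out, which is exactly the self-stabilisation requirement (here in linear time).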

Polynomial and cyclotomic identity testing

Sylvain Perifel, IRIF, Université de Paris
Wednesday, June 23, 2021

Polynomial identity testing (PIT) is the problem of deciding whether an arithmetic circuit computes the zero polynomial. It is certainly the main natural problem that has not yet been derandomized. Despite this failure (or rather thanks to it), many restrictions and variants have been successfully studied. In this talk, after an introduction to PIT, I will survey some of these results and focus in particular on a variant called cyclotomic identity testing (CIT): does a polynomial vanish at a primitive (complex) root of unity? In joint work with Nikhil Balaji, Mahsa Shirmohammadi and James Worrell, we show that CIT is in coNP, and can even be solved in probabilistic polynomial time if we assume the generalized Riemann hypothesis.
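The standard randomized algorithm for PIT, the one the derandomization question asks to replace, is the Schwartz–Zippel test: evaluate the circuit at a random point and compare with zero. A minimal sketch, with hand-rolled black-box polynomials standing in for circuits:

```python
import random

def is_zero_poly(circuit, nvars, degree_bound, trials=20, prime=(1 << 61) - 1):
    """Randomized polynomial identity test (Schwartz-Zippel).

    `circuit` is any black box evaluating the polynomial.  If the
    polynomial is nonzero of total degree <= degree_bound, a uniformly
    random point mod `prime` vanishes with probability <= degree_bound/prime,
    so repeated independent trials make a wrong "zero" answer vanishingly rare.
    """
    for _ in range(trials):
        point = [random.randrange(prime) for _ in range(nvars)]
        if circuit(point) % prime != 0:
            return False          # a witness point: definitely nonzero
    return True                   # identically zero, with high probability

# (x + y)^2 - x^2 - 2xy - y^2 is identically zero...
zero = lambda v: (v[0] + v[1]) ** 2 - v[0] ** 2 - 2 * v[0] * v[1] - v[1] ** 2
# ...while (x + y)^2 - x^2 - y^2 = 2xy is not
nonzero = lambda v: (v[0] + v[1]) ** 2 - v[0] ** 2 - v[1] ** 2
print(is_zero_poly(zero, 2, 2), is_zero_poly(nonzero, 2, 2))
```

Derandomizing PIT amounts to replacing the random points above by a deterministically computable hitting set, which is exactly what is open in general.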

On multiplication in finite fields with Chudnovsky-type algorithms on the projective line

Bastien Pacifico, I2M, Aix-Marseille Université
Wednesday, June 16, 2021

We propose a generic construction of interpolation algorithms on the projective line for multiplication in an extension of arbitrary degree of a finite field. These algorithms coincide with the usual polynomial interpolation techniques in small extensions, and are defined recursively as the degree of the extension grows. We will see that their bilinear complexity is competitive, and that they can be constructed in polynomial time.
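The classical evaluation–interpolation scheme that such algorithms generalise can be sketched as follows (a naive quadratic-time illustration over a small prime field, not the Chudnovsky-type construction itself):

```python
def poly_mul_interp(f, g, p=101):
    """Multiply polynomials f, g (coefficient lists, low degree first)
    over GF(p) by evaluation-interpolation: evaluate both at enough
    points, multiply pointwise, then Lagrange-interpolate back."""
    n = len(f) + len(g) - 1           # number of coefficients of the product
    xs = list(range(n))               # n distinct evaluation points (needs n <= p)
    ev = lambda poly, x: sum(c * pow(x, i, p) for i, c in enumerate(poly)) % p
    ys = [ev(f, x) * ev(g, x) % p for x in xs]
    # Lagrange interpolation: rebuild the coefficients from the values
    coeffs = [0] * n
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        basis, denom = [1], 1         # basis = prod_{m != j} (X - x_m)
        for m, xm in enumerate(xs):
            if m != j:
                basis = [(a - xm * b) % p
                         for a, b in zip([0] + basis, basis + [0])]
                denom = denom * (xj - xm) % p
        scale = yj * pow(denom, -1, p) % p    # divide by prod (x_j - x_m)
        coeffs = [(c + scale * b) % p for c, b in zip(coeffs, basis)]
    return coeffs

print(poly_mul_interp([1, 2], [3, 4], 101))  # (1+2X)(3+4X) = [3, 10, 8]
```

The bilinear cost is the n pointwise products; Chudnovsky-type algorithms keep this idea but take evaluation points on algebraic curves (here, recursively on the projective line) when the field is too small to supply enough points.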

An Alternative Approach for SIDH Arithmetic

Laurent Imbert, LIRMM, CNRS, Université de Montpellier
Wednesday, May 19, 2021

In this talk, I will present new algorithms for the field arithmetic layers of supersingular isogeny Diffie-Hellman (SIDH), one of the remaining candidates in the NIST post-quantum standardization process. Our approach uses a polynomial representation of the field elements together with mechanisms to keep the coefficients within bounds during the arithmetic operations. We present timings and comparisons for SIKEp503 and suggest a novel 736-bit prime that offers a $1.17\times$ speedup compared to SIKEp751 for a similar level of security.

Enumerative combinatorics problems for cryptographic primitives based on cellular automata

Luca Mariot
Monday, May 17, 2021

Cellular Automata (CA) provide an interesting computational model for designing several cryptographic primitives, especially in the symmetric-key setting. Indeed, during the past decades CA have been employed to define pseudorandom generators for stream ciphers, as in the seminal work by Wolfram in the 80s, or S-boxes for block ciphers, such as the rule χ used by Daemen et al. in the design of the Keccak sponge construction. In this talk, we will survey recent results on combinatorial problems related to cryptographic primitives based on CA, focusing on the examples of orthogonal arrays, bent functions and S-boxes. We will also present counting and enumeration problems that are still open for each of these examples, proposing some approaches to address them in future research.

[Note: non-standard day and time]
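The rule χ mentioned in the abstract acts on a cyclic row of bits as b[i] = a[i] ⊕ (¬a[i+1] ∧ a[i+2]); the sketch below implements it and checks by brute force the standard fact that χ is a bijection on the odd-length rows Keccak uses (rows of 5 bits):

```python
def chi(row):
    """Keccak's chi step on a cyclic row of bits:
    b[i] = a[i] XOR (NOT a[i+1] AND a[i+2]), indices mod len(row).
    It is the only nonlinear step of the Keccak-f permutation and is
    invertible whenever the row length is odd (5 in Keccak)."""
    n = len(row)
    return [row[i] ^ ((row[(i + 1) % n] ^ 1) & row[(i + 2) % n]) for i in range(n)]

# chi is a bijection on 5-bit rows: all 32 inputs give distinct outputs
images = {tuple(chi([b >> i & 1 for i in range(5)])) for b in range(32)}
print(len(images))  # 32
```

Counting and enumerating CA rules with such properties (bijectivity, nonlinearity, balancedness) is exactly the kind of combinatorial question the talk surveys.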

Introduction to Isogeny-Based Cryptography

Cyril Bouvier, LIRMM, CNRS, Université de Montpellier
Wednesday, May 12, 2021

Cryptography using isogenies between elliptic curves was proposed as a way to implement post-quantum cryptography. In 2006, the first cryptosystem whose security is based on the hardness of computing isogenies between two given elliptic curves was described. In 2017, another cryptosystem was submitted to the NIST competition for the standardization of post-quantum cryptography. In this talk, I will introduce the concepts behind isogeny-based cryptography and describe three key-exchange protocols that use isogenies between elliptic curves.

A counting argument for combinatorics on words

Matthieu Rosenfeld, LIRMM.
Wednesday, April 28, 2021

The Lovász Local Lemma [Lovász, 1975] is one of the central tools of Erdős' probabilistic method. This rather simple lemma has been applied to SAT formulas, graph colorings, hypergraph colorings, combinatorics on words, geometry, and countless other topics. The Lemma essentially states that, given a set of "bad events", under the right conditions the probability that no event occurs is nonzero. It implies that there is at least one coloring, or one assignment of the variables, with the desired properties. There are many versions of the Lovász Local Lemma that apply to different situations. Many other techniques that apply to similar problems have emerged in the last 20 years (entropy compression, cluster expansion, the local cut lemma...). In [1], I described a new counting argument that can be applied to the same kind of problems. One of the main advantages of this counting argument is that it provides similar (or slightly better) bounds, but is really simple. In [2], Wanless and Wood provided a general framework for this counting argument. In the context of combinatorics on words, this counting argument can be pushed further, as in [3].

In this talk, I will present this counting argument on a simple example. I will then present how it can be pushed further in the context of combinatorics on words.

[1] Another approach to non-repetitive colorings of graphs of bounded degree. M. Rosenfeld. The Electronic Journal of Combinatorics (2020)

[2] A general framework for hypergraph colouring. I.M. Wanless and D.R. Wood. Arxiv (2020).

[3] Avoiding squares over words with lists of size three amongst four symbols. M. Rosenfeld. Arxiv (2021)
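A minimal example of the kind of avoidance problem behind [1] and [3] is counting square-free words (words containing no factor of the form xx) over a three-letter alphabet; the naive backtracking below is a sketch for illustration, not the counting argument itself:

```python
def has_square_suffix(w):
    # after appending one letter, a new square can only be a suffix xx
    for L in range(1, len(w) // 2 + 1):
        if w[-L:] == w[-2 * L:-L]:
            return True
    return False

def count_square_free(n, alphabet="abc"):
    """Count square-free words of length n by backtracking: each new
    letter only requires checking the suffixes that end at it."""
    def extend(w):
        if len(w) == n:
            return 1
        return sum(extend(w + c) for c in alphabet
                   if not has_square_suffix(w + c))
    return extend("")

print([count_square_free(n) for n in range(1, 8)])  # [3, 6, 12, 18, 30, 42, 60]
```

The counting argument gives lower bounds on how fast such counts grow, which in particular proves that square-free words exist at every length.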

Recognizable morphisms and a decision algorithm for substitutive languages

Revekka Kyriakoglou
Wednesday, April 14, 2021

The concept of recognizability of morphisms originates in the paper of Martin [1] under the term "determinization". The term was first used by Host in his paper on the ergodic theory of dynamical systems [2]. The notion of recognizability came into full bloom after the interest shown by many scientists, owing to its numerous theoretical applications in various topics, from combinatorics on words to symbolic dynamics. A similar notion is that of circularity. The two terms are often, but not always, used as synonyms, and this lack of consistency in the literature can lead to confusion. In this seminar, I will present my work on the different notions of recognizability, with the main goal of proving the equivalences, and indicating the differences, between the different definitions.

In the second part of this seminar, I will present an algorithm that allows us to describe all bispecial words of a substitutive language of a recognizable morphism, together with the set of their left and right extensions. More precisely, given a set of words S, one can associate with every word w ∈ S its extension graph, which describes the possible left and right extensions of w in S. Families of sets can be defined from the properties of the extension graphs of their elements: acyclic sets, dendric sets, neutral sets, etc. In the specific case of the set of words of a substitutive language of a recognizable morphism, we show that it is decidable whether these properties hold or not.

[1] John C. Martin. Minimal flows arising from substitutions of non-constant length. Math. Systems Theory, 7:72–82, 1973.
[2] B. Host. Valeurs propres des systèmes dynamiques définis par des substitutions de longueur variable. Ergodic Theory Dynam. Systems, 6(4):529–540, 1986.

An algebraic theory for state complexity

Edwin Hamel, IRIF
Wednesday, April 7, 2021

Operational state complexity is an active topic of research in theoretical computer science. However, there is no general framework for computing state complexities. A usual method is to compute an upper bound by resorting to some ingenious tricks, and then to provide a family of languages, called a witness, whose image under the operation matches this upper bound. Furthermore, witnesses are chosen in an ad hoc manner, and no general results are known.

In 2013, Brzozowski introduced a heuristic for finding witnesses that consists in choosing them to be "complex" in a certain sense. To make this heuristic precise and formal, we build a general framework designed for studying the state complexities of a large class of regular operations, which we call $1$-uniform. Since state complexities are easier to compute by reasoning directly on automata, we build a counterpart to $1$-uniform operations in the space of operations over complete, deterministic, finite automata, which we call modifiers. We show that, for any $1$-uniform regular operation, we can choose a witness in a small pool of DFAs with a large alphabet.

From these results, we derive a method to compute the state complexity of any $1$-uniform operation. This method can be used to recover some well-known state complexity results, such as the exact state complexity of the Kleene star, catenation, or any boolean operation. In addition, we briefly outline some ideas we used to compute the exact state complexity of the Kleene star composed with symmetric difference, which is a new result.

Finally, we study a class of modifiers defined by simple algebraic properties, called friendly modifiers. We give the regular operations associated with friendly modifiers, and we give their maximal state complexity.

A parallel programming method: from a logical formula to a cellular automaton program

Théo Grente, Université de Caen Normandie
Wednesday, March 31, 2021

Cellular automata are the model of parallel and local computation par excellence. The literature thus contains many algorithms for this computation model. These algorithms are often described in terms of signals and collisions, without detailing the resulting cellular automaton program.

In this talk, I will present a method for obtaining this program automatically. The method first consists in programming, in logic, the induction that solves a problem (for example, a scheme of signals and collisions), and then in applying an automatic process that yields the cellular automaton program solving this problem.

Beyond this method, I will also present the logical characterisations, obtained through it, of each of the three minimal-time complexity classes of cellular automata, dwelling on the most general one.

Finally, time permitting, I will discuss a recognition result for conjunctive languages (an extension of context-free languages) obtained with our logic-based programming method.

(Fully) Homomorphic Encryption over Finite Fields

Vincent Zucca, Équipe DALI, LIRMM, UPVD
Wednesday, March 17, 2021

Homomorphic encryption is a particular kind of encryption that allows computations to be performed directly over encrypted data. In this talk, we will be interested in two of the currently most used homomorphic encryption schemes, BGV and BFV. Although these two schemes are two different instantiations of the same idea, there are significant differences in their performance and usability. We will start by presenting the two schemes and then see how we can reduce the gap between them. We will finish this talk by presenting a simple, yet non-trivial, practical use case of these schemes: integer comparison. In particular, we will see how the study of certain functions over finite fields makes it possible to speed up computations.

Trade-offs between size and degree in Polynomial Calculus (2nd part)

Guillaume Lagarde, LaBRI
Wednesday, March 10, 2021

Introduced by Clegg et al. (STOC'96) to capture Gröbner basis computations, Polynomial Calculus Resolution (PCR) is an algebraic proof system for certifying the unsatisfiability of CNF formulas. Impagliazzo et al. (CC'99) established that if an unsatisfiable k-CNF formula over n variables has a refutation of small size in PCR (that is, polynomial size), then this formula also has a refutation of small degree, i.e., O(sqrt(n log n)). A natural question is whether we can achieve both small size and small degree in the same refutation. In this talk I will present (and give some ideas of the proof of) our main result: the decrease in degree necessarily comes at the expense of an exponential blow-up in size. Joint work with Jakob Nordström, Dmitry Sokolov, Joseph Swernofsky.

This will be the second (more technical) part of the presentation; see the recording of the first part of the talk on https://webconf.math.cnrs.fr/playback/presentation/2.0/playback.html?meetingId=25a62376586e525af72432cbf9ad7d8aadc2352a-1614772565299 .

Trade-offs between size and degree in Polynomial Calculus

Guillaume Lagarde
Wednesday, March 3, 2021

Introduced by Clegg et al. (STOC'96) to capture Gröbner basis computations, Polynomial Calculus Resolution (PCR) is an algebraic proof system for certifying the unsatisfiability of CNF formulas. Impagliazzo et al. (CC'99) established that if an unsatisfiable k-CNF formula over n variables has a refutation of small size in PCR (that is, polynomial size), then this formula also has a refutation of small degree, i.e., O(sqrt(n log n)). A natural question is whether we can achieve both small size and small degree in the same refutation. In this talk I will present (and give some ideas of the proof of) our main result: the decrease in degree necessarily comes at the expense of an exponential blow-up in size. Joint work with Jakob Nordström, Dmitry Sokolov, Joseph Swernofsky.

Some results about tilings of groups

Etienne Moutot, LIS, Université Aix-Marseille
Wednesday, February 24, 2021

In this talk I will present tilings of groups (or tilings of their Cayley graphs). I will give an overview of what is known about these tilings, with a particular focus on questions of periodicity (or aperiodicity) and the decidability of their domino problem.

The Surprise Examination Paradox and the Second Incompleteness Theorem

Alexander Shen, LIRMM
Wednesday, December 16, 2020

The Berry paradox speaks of "the minimal natural number that cannot be defined by a phrase of fewer than one million words." It seems that this phrase defines some specific natural number, yet the phrase itself is shorter than one million words, which looks contradictory. Once formalized, this paradox becomes Chaitin's proof of Gödel's first incompleteness theorem.

There is another famous paradox. Assume we get a message saying that next week we are going to have an unexpected security exercise at noon, and that nobody will know on which day the exercise will happen before it actually happens. Then the exercise cannot be on Sunday (as the day before we would know for sure when it should happen). Having excluded Sunday, we see that the exercise cannot happen on Saturday (again, we would know the date of the exercise the day before), and so on, down to Monday. Kritchman and Raz noted that this paradox can also be formalized and yields a proof of Gödel's second incompleteness theorem (formal arithmetic cannot prove its own consistency). We will discuss this funny argument.

Multiparty Karchmer-Wigderson Games and Threshold Circuits.

Alexander Kozachinskiy, University of Warwick
Wednesday, December 9, 2020

The first part of the talk will be devoted to the following problem about the circuit complexity of the majority function. It is classical that for the majority function there exists a logarithmic-depth circuit constructed solely from 3-bit majorities. This fact goes back to a paper of Valiant from 1984. A drawback of Valiant's construction is that it uses the probabilistic method and is thus not explicit. Can one derandomize it? More precisely, can one deterministically, in polynomial time, print a logarithmic-depth circuit computing the majority function and consisting, as in Valiant's construction, only of 3-bit majorities? The answer to this question is positive, and in the talk we will give a full proof of it. Besides circuit complexity, this also has applications in cryptography (Cohen et al., CRYPTO 2013).

In fact, our argument is applicable not only to the majority function. It actually establishes an equivalence (from the circuit size/depth viewpoint) between monotone circuits and circuits consisting only of 3-bit majorities, for monotone self-dual functions. A more elaborate version of our argument also establishes a new connection between circuit complexity and multi-party communication complexity (in the spirit of the classical Karchmer-Wigderson games). This allows us to obtain new upper bounds on the circuit complexity of threshold functions (motivated again by cryptographic applications). If time permits, we will discuss these questions in the second part of the talk.

Based on a joint work with V. Podolskii.

Thermal states quantum cryptography

Anne Ghesquiere
Wednesday, December 2, 2020

In recent decades, quantum cryptography has begun playing a role in safeguarding communications against eavesdroppers, with large-scale experimental implementations of quantum key distribution (QKD) being deployed over dark-fibre networks and the commercialisation of quantum random number generators. Traditional QKD follows the BB84 model and rests on the no-cloning of non-orthogonal states. Thermal-states QKD, on the other hand, is a novel approach to QKD, aiming for security over short distances but over a wider spectrum of frequencies, allowing for more flexibility in sources and infrastructures. In particular, the goal of thermal-states quantum cryptography is to enable perfectly secure key exchange with low-power, low-memory devices. In this talk, I will briefly describe the BB84 protocol and the physics its security relies upon. I will then describe thermal-states quantum key distribution and present evidence of the security of the protocol.

Secret sharing schemes for ports of matroids

Oriol Farràs Ventura, Universitat Rovira i Virgili
Wednesday, November 25, 2020

A secret sharing scheme is a cryptographic method to protect a secret. Given a secret and some randomness, the scheme generates pieces of information, called shares, in such a way that the secret can only be recovered from certain subsets of shares. Considering that each share is held by a participant, the access structure of the scheme is defined as the family of coalitions of participants that can recover the secret. In a perfect scheme, coalitions that are not in the access structure cannot get any information about the secret (in the information-theoretic sense). Secret sharing schemes have many applications in cryptography, such as secure multiparty computation and distributed cryptography.

A perfect secret sharing scheme is ideal if the size of each share is equal to the size of the secret, which is the optimal situation. Brickell and Davenport showed that the access structure of an ideal secret sharing scheme is determined by a matroid. Namely, the minimal authorized subsets of an ideal secret sharing scheme are in correspondence with the circuits of a matroid containing a fixed point. In this case, we say that the access structure is a matroid port. It is known that, for an access structure, being a matroid port is not a sufficient condition to admit an ideal secret sharing scheme.

In this talk we will see the main known connections between secret sharing and matroids, and we will present the results of the work "Secret sharing schemes for ports of matroids of rank 3".

Record of the talk: https://webconf.math.cnrs.fr/playback/presentation/2.0/playback.html?meetingId=25a62376586e525af72432cbf9ad7d8aadc2352a-1606301566964

Inequalities for space-bounded complexity

Alexander Shen, LIRMM, Escape
Wednesday, October 21, 2020

Many laws of information theory have the form of inequalities (for example, the amount of information in a pair (X,Y) does not exceed the sum of the amounts of information in X and in Y). These inequalities can be translated from the language of Shannon entropy to the complexity language and vice versa. Complexity is noncomputable, and one may prefer to consider resource-bounded complexity. It was noted (Longpré) that some inequalities remain valid for space-bounded complexity (with appropriate changes). We improve this bound using an old trick of Sipser, and the resulting bound is good enough to show that all the inequalities that are true for unbounded complexity have a resource-bounded counterpart.

Joint work with Peter Gacs and Andrei Romashchenko.

The talk will be given online.
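The subadditivity example from the abstract, H(X,Y) ≤ H(X) + H(Y) for Shannon entropy, can be checked numerically on any joint distribution (a toy sketch of the entropy side only, unrelated to the space-bounded machinery):

```python
import itertools
import math
import random

def H(dist):
    # Shannon entropy (in bits) of a distribution {outcome: probability}
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def marginals(joint):
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return px, py

# a random joint distribution of (X, Y) on a 3x3 grid
random.seed(0)
w = [random.random() for _ in range(9)]
total = sum(w)
joint = {(x, y): w[3 * x + y] / total
         for x, y in itertools.product(range(3), range(3))}

px, py = marginals(joint)
# subadditivity: H(X,Y) <= H(X) + H(Y), with equality iff X, Y independent
print(H(joint) <= H(px) + H(py) + 1e-12)  # True
```

The talk's point is that the Kolmogorov-complexity analogue of such inequalities survives, with appropriate corrections, when the complexity is measured with a space bound.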

The communication complexity of secret key agreement.

Emirhan Gürpınar, LIRMM, Escape
Wednesday, June 17, 2020

Romashchenko and Zimand showed in their 2019 paper that two parties (Alice and Bob), each given a personal word, can establish a secret key as large as the mutual information of their words by communicating over a public channel. We investigated the communication complexity of this problem.

We study examples with different properties. We use Kolmogorov complexity and spectral arguments, via bipartite graphs defined from the initial settings. The communication protocols considered are randomised (with private random bits).

No prerequisites except linear algebra.

Algorithmic Aspects of Information Theory

Dan Suciu, University of Washington
Tuesday, May 19, 2020

Constraints on entropies are considered to be the laws of information theory. Even though the pursuit of their discovery has been a central theme of research in information theory, the algorithmic aspects of constraints on entropies remain largely unexplored. It is open whether even the simplest decision problem, checking if a linear inequality of entropies is valid, is decidable. This talk discusses several decision problems about constraints on entropies. It starts by presenting upper bounds on several types of decision problems, placing them in the arithmetic hierarchy. It then discusses "islands of decidability": special restrictions on information inequalities that are decidable. Finally, we present a tight connection between max-information inequalities and the domination problem between finite structures. A structure B "dominates" a structure A if for any other structure the number of homomorphisms from B is greater than or equal to the number of homomorphisms from A. We show that checking a max-information inequality is Turing-equivalent to checking the domination problem, where B is acyclic.

Joint work with:
Mahmoud Abo Khamis (RelationalAI),
Phokion G. Kolaitis (University of California at Santa Cruz),
Hung Q. Ngo (RelationalAI)

The talk begins at 17h CET (at 8am PST)

Video recording of the seminar (available online until June 5 2020).

Algebra, Tilings and Nivat

Etienne Moutot, ENS de Lyon
Wednesday, May 13, 2020

To what extent can complex structures be created from elementary patterns? Nivat's conjecture states that any configuration (colouring of the grid Z^2) of low complexity (one in which the number of patterns appearing is "small") is necessarily periodic. In other words, it is impossible to create "complex" (non-periodic) configurations from a small number of basic patterns.

In 2015, Michal Szabados and Jarkko Kari published their first paper using an algebraic approach to attack this conjecture. Their idea is to represent a configuration as a formal power series and to study certain structures attached to it (such as well-chosen polynomial ideals). In doing so, they manage to exploit several theorems from algebra to approach Nivat's conjecture from new angles.

In this talk, I will present the work I carried out with Jarkko Kari in the continuation of Michal Szabados' thesis. I will discuss new theorems that use these algebraic tools to get yet closer to Nivat's conjecture. One consequence of these results is that the domino problem is decidable for low-complexity subshifts. This problem is undecidable in general, and if Nivat's conjecture is true, its decidability in this case would follow immediately. One interest of our result is that it proves this decidability without needing Nivat's conjecture, only a weakened version of it.

Faster cofactorization with ECM using mixed representations

Cyril Bouvier & Laurent Imbert, LIRMM, CNRS, U. Montpellier
Wednesday, April 29, 2020

The Elliptic Curve Method (ECM) is the asymptotically fastest known method for finding medium-size prime factors of large integers. It is also a core ingredient of the Number Field Sieve (NFS) for integer factorization and its variants for computing discrete logarithms. In NFS and its variants, ECM is extensively used in the cofactorization step (a subroutine of the sieving phase), which consists of breaking into primes billions of composite integers of a hundred-ish bits.

In this talk, we propose a novel implementation of ECM in the context of NFS that requires fewer modular multiplications than any other publicly available implementation. The main ingredients are: Dixon and Lenstra's idea of combining primes, fast point tripling, optimal double-base decompositions and Lucas chains, and a good mix of Edwards and Montgomery representations.

A note on the spectral argument in communication problems

Andrei Romashchenko, LIRMM, CNRS, U. Montpellier
Wednesday, April 15, 2020

From communication complexity to graphs, from graphs to linear algebra, and the full way back.

Quasi-linear product of sparse polynomials

Armelle Perret du Cray, LIRMM
Wednesday, February 26, 2020

A peculiarity of sparse polynomial algorithms is the potentially large gap between the size of the output and that of the inputs. While the output size can be bounded a priori, this bound may prove far too large in practice. For the product of two sparse polynomials with T monomials each, the natural bound on the number of monomials of the output is T², whereas in some cases it can be much smaller. The naive algorithm is linear in this bound. A finer bound can be computed in quasi-linear time: the size of the structural support, that is, the number of monomials the product would have if all coefficients were equal to 1. Using this computation, Andrew Arnold and Daniel S. Roche [1, 2] devised a probabilistic algorithm that is quasi-linear in the size of the structural support. However, this bound too can be quadratic in the actual size of the output. I will present the first probabilistic algorithm for the product of sparse polynomials with integer coefficients whose complexity is quasi-linear in the size of the input plus the actual size of the output. It relies on two ingredients. The first is an adaptation of Qiao-Long Huang's sparse interpolation algorithm [3], which takes as input the size of the support of the product; this algorithm is called repeatedly with increasing sizes. The second is a new algorithm that verifies a product of sparse polynomials in quasi-linear time.
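The gap between the T² bound and the actual output size is easy to observe with the naive dictionary-based product (a sketch for illustration, not the quasi-linear algorithm of the talk):

```python
def sparse_mul(f, g):
    """Product of sparse polynomials given as {exponent: coefficient} dicts.
    The loop is quadratic in the number of input terms, but the result
    may have far fewer than T^2 terms when exponent sums collide or
    coefficients cancel."""
    h = {}
    for ef, cf in f.items():
        for eg, cg in g.items():
            e = ef + eg
            h[e] = h.get(e, 0) + cf * cg
            if h[e] == 0:
                del h[e]             # drop cancelled terms
    return h

f = {0: 1, 100: 1, 200: 1}           # 1 + x^100 + x^200
g = {0: 1, 100: -1}                  # 1 - x^100
print(sparse_mul(f, g))              # {0: 1, 300: -1}: 2 terms, not 6
```

Here the structural support (all coefficients set to 1) would have 4 monomials, while cancellations leave only 2: the actual output size, which the new algorithm's complexity depends on, can be strictly smaller than every a-priori bound.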

The Power Error Locating Pairs algorithm

Isabella Panaccione, LIX, Inria - École polytechnique
Wednesday, February 5, 2020

In this talk, we will focus on particular decoding algorithms for Reed-Solomon codes. It is known that for these codes, it is possible to correct a number of errors equal to or even greater than half the minimum distance. In general, though, beyond this bound the uniqueness of the solution is no longer guaranteed. Hence, given a vector y ∈ GF(q)^n, one can either use list decoding algorithms, like the Sudan and Guruswami-Sudan algorithms, which return the list of all codewords closest to y, or unique decoding algorithms, like the Power Decoding algorithm, which return the codeword closest to y if it is unique, at the price of some failure cases.

In this scenario, I will present a new unique decoding algorithm, the Power Error Locating Pairs algorithm. Based on Pellikaan's Error Correcting Pairs algorithm, it has, for Reed-Solomon codes, the same decoding radius and complexity as the Power Decoding algorithm. Moreover, like the Error Correcting Pairs algorithm, it can be applied to all codes endowed with a special structure (among them, Reed-Solomon codes, algebraic geometry codes and cyclic codes).

Joint work with Alain Couvreur

Complexity of Euclid's algorithm

Bruno GRENET, ECO, LIRMM
Wednesday, December 18, 2019

Euclid's algorithm for computing the GCD of two integers is one of the oldest known algorithms. In this short talk, I will present a new proof of its complexity, based on amortized analysis, that is even simpler than the usual proofs. The bound it yields is nevertheless optimal.

Joint work with Ilya Volkovich (U. Michigan).
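For reference, here is the algorithm in question, instrumented to count division steps; the fact that consecutive Fibonacci numbers are the worst case (Lamé's theorem, so the step count is logarithmic in the inputs) is classical and not specific to the new proof:

```python
def gcd_steps(a, b):
    # Euclid's algorithm, returning (gcd, number of division steps)
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return a, steps

# worst case: consecutive Fibonacci numbers, where each division step
# only moves one position down the Fibonacci sequence
fib = [1, 1]
while len(fib) < 30:
    fib.append(fib[-1] + fib[-2])
print(gcd_steps(fib[29], fib[28]))  # (1, 28)
```

Each step maps the pair (F(n+1), F(n)) to (F(n), F(n-1)), so the 30th and 29th Fibonacci numbers take 28 steps, matching the logarithmic bound.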

An LLL Algorithm for Module Lattices

Alice Pellet--Mary, LIP, École Normale Supérieure de Lyon
Wednesday, November 13, 2019

A lattice is a discrete subgroup (i.e., ℤ-module) of ℝⁿ (where ℤ and ℝ are the sets of integers and real numbers). The LLL algorithm is a central algorithm to manipulate lattice bases. It takes as input a basis of a Euclidean lattice, and, within a polynomial number of operations, it outputs another basis of the same lattice but consisting of rather short vectors.

Lattices are used in post-quantum cryptography, as finding a shortest vector of a lattice, given an arbitrary basis, is believed to be intractable even with a quantum computer. From a practical point of view, many lattice-based constructions use structured lattices in order to improve the efficiency of the schemes. Most of the time, these structured lattices are R-modules of small dimension, where R is the ring of integers of some number field. As an example, only 1 scheme among the 12 lattice-based candidates in the NIST post-quantum standardization process uses non-structured lattices; all the others use module lattices. It is then tempting to try to adapt the LLL algorithm, which works over lattices (i.e., ℤ-modules), to these R-modules, as this would allow us to find rather short vectors of these module lattices.

All the previous works trying to extend the LLL algorithm to other rings of integers R focused on rings R that were Euclidean, as the LLL algorithm over ℤ crucially relies on the Euclidean division. In this talk, I will describe the first LLL algorithm which works in any ring of integers R. This algorithm is heuristic and runs in quantum polynomial time if given access to an oracle solving the closest vector problem in a fixed lattice, depending only on the ring of integers R.

This is a joint work with Changmin Lee, Damien Stehlé and Alexandre Wallet
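
To give a flavor of what "reducing a basis to short vectors" means, here is the two-dimensional ancestor of LLL (Lagrange/Gauss reduction) as a toy sketch; this is only an illustration, not the module-LLL algorithm of the talk:

```python
def lagrange_reduce(u, v):
    # Lagrange/Gauss reduction: the 2D analogue of LLL. Repeatedly subtract
    # the best integer multiple of the shorter vector from the longer one.
    def norm2(w):
        return w[0] ** 2 + w[1] ** 2
    if norm2(u) > norm2(v):
        u, v = v, u
    while True:
        # nearest-integer quotient of the projection (float rounding suffices here)
        m = round((u[0] * v[0] + u[1] * v[1]) / norm2(u))
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if norm2(v) >= norm2(u):
            return u, v  # u is now a shortest nonzero vector of the lattice
        u, v = v, u
```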

Polynomial multiplication without memory

Bruno Grenet, LIRMM
mercredi 15 mai 2019

The problem of polynomial multiplication has been extensively studied since the 1960s, and various algorithms have been proposed to reach the best time complexities. More recently, some of these algorithms have been studied from the point of view of their space complexity, and modified so as to use no extra space beyond the inputs and outputs, while keeping the same asymptotic time complexity.

In this work, we extend these results in two directions. On the one hand, we ask whether every polynomial multiplication algorithm generically admits an "in-place" variant, that is, one using no extra space. On the other hand, we consider two important variants of the problem that produce only part of the result, the so-called short and middle products, and ask whether these operations can also be performed in place.

To answer both questions in the (essentially) affirmative, we propose a series of reductions whose starting point is any multiplication algorithm with linear space complexity. For the full and short products, these reductions yield in-place variants of the algorithms with the same asymptotic time complexity. For the middle product, the reduction incurs an extra logarithmic factor in the time complexity, when the latter is quasi-linear.

Joint work with Pascal Giorgi and Daniel S. Roche (US Naval Academy).
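
For contrast with the in-place variants discussed above, here is the schoolbook full product written so that it uses only the inputs and a caller-provided output buffer; a toy illustration of the space-complexity viewpoint, not the reductions of the talk:

```python
def mul_into(a, b, out):
    # Schoolbook full product of two coefficient lists (low degree first),
    # written into a preallocated buffer: no auxiliary space is used beyond
    # the inputs and the output.
    for k in range(len(out)):
        out[k] = 0
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out
```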

Half-duplex communication complexity

Alexander Smal, St.Petersburg, Department of Steklov Mathematical Institute of Russian Academy of Sciences
mercredi 10 avril 2019

Suppose Alice and Bob are communicating bits to each other in order to compute some function $f$, but instead of a classical communication channel they have a pair of walkie-talkie devices. They can use some classical communication protocol for $f$ where in each round one player sends a bit and the other one receives it. The question is whether talking via walkie-talkie gives them more power. Using walkie-talkies instead of a classical communication channel allows the players two extra possibilities: to speak simultaneously (but in this case they do not hear each other) and to listen at the same time (but in this case they do not transfer any bits). We show that for some definitions this non-classical communication model is in fact more powerful than the classical one, as it allows computing some functions in a smaller number of rounds. We also introduce a round elimination technique for proving lower bounds in this setting and use it to prove lower bounds for some Boolean functions.

Joint work with Kenneth Hoover, Russell Impagliazzo and Ivan Mihajlin
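
The three possible round types of the walkie-talkie model can be sketched as follows; this is a hypothetical encoding of my own, just to fix the semantics described above:

```python
def half_duplex_round(alice_action, bob_action, alice_bit=None, bob_bit=None):
    # One walkie-talkie round; each action is 'send' or 'receive'.
    # Returns (bit heard by Alice, bit heard by Bob); None means nothing heard.
    if alice_action == 'send' and bob_action == 'receive':
        return (None, alice_bit)
    if alice_action == 'receive' and bob_action == 'send':
        return (bob_bit, None)
    if alice_action == 'send' and bob_action == 'send':
        return (None, None)   # both talk simultaneously: the bits are lost
    return (None, None)       # both listen: no bits are transferred
```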

Finite state dimension and its characterizations

Alexander Shen, Equipe Escape, LIRMM
mercredi 3 avril 2019

(joint work with Alexander Kozachinsky)

Normal sequences (where all $k$-bit blocks appear equally often) may be considered as "random" sequences in a very weak sense (incompressible by finite automata). One may measure how close a sequence is to a random one: this is captured by the notion of "finite state dimension".

We review basic results about normal numbers and finite state dimension, and give new characterizations of finite state dimension both in terms of Shannon entropy and in terms of (upper bounds for) Kolmogorov complexity.

(The talk can be considered as an extension to the talk https://info-web.lirmm.fr/collorg/bd096263-41e0-4cfa-9f6b-63f900e1a2f4, but self-contained)
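
The block frequencies that define normality can be estimated empirically. The sketch below computes the per-symbol empirical entropy of aligned $k$-bit blocks of a prefix (for a normal sequence this tends to 1 as the prefix grows), in the spirit of the Shannon-entropy characterization mentioned above; the function name is my own choice, for illustration only:

```python
from collections import Counter
import math

def block_entropy_rate(bits, k):
    # Empirical Shannon entropy per symbol of the aligned k-bit blocks of a
    # string: count each block, compute the entropy of the block distribution,
    # and normalize by the block length k.
    blocks = [bits[i:i + k] for i in range(0, len(bits) - k + 1, k)]
    counts = Counter(blocks)
    n = len(blocks)
    h = -sum(c / n * math.log2(c / n) for c in counts.values())
    return h / k
```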

Approximating Kolmogorov complexity is as difficult as computing it

Ruslan Ishkuvatov, Equipe Escape, LIRMM (Projet Racaf)
mercredi 20 mars 2019

It is well known that the Kolmogorov complexity function (the length of the shortest program producing a given string) is not computable. A natural question arises: how difficult is it to compute some approximation to it -- say, a function that differs from Kolmogorov complexity by at most a factor of 2, or a function that coincides with Kolmogorov complexity on strings of length $n$ except for a $1/n$-fraction? We prove that this is as difficult as exact computation: given an oracle that performs any of these tasks, we can solve the halting problem (and therefore compute the Kolmogorov complexity function exactly).

Aperiodic points in $Z^2$-subshifts

Benjamin Hellouin, LRI
mercredi 20 février 2019

We consider the structure of aperiodic points in $Z^2$-subshifts, and in particular the positions at which they fail to be periodic. We prove that if a $Z^2$-subshift contains points whose smallest period is arbitrarily large, then it contains an aperiodic point. This lets us characterise the computational difficulty of deciding if a $Z^2$-subshift of finite type contains an aperiodic point. Another consequence is that $Z^2$-subshifts with no aperiodic point have a very strong dynamical structure and are almost topologically conjugate to some $Z$-subshift.

New lower bounds for the secret sharing on the Vamos matroid

Emirhan Gürpinar, ENS de Lyon
mercredi 13 février 2019

Since the late 2000s it has been known that non-classical information inequalities (so-called non-Shannon-type inequalities for entropy) can be used to prove lower bounds on the information ratio of secret sharing schemes. Recently Farras et al. observed that such bounds can be proven even without deriving new information inequalities explicitly (that is, we can find corollaries of some unknown non-Shannon-type inequalities without proving or even formulating them). We extend this technique and improve the known lower bound on the information ratio of secret sharing with an access structure on the Vamos matroid.

Secret sharing: methods of information theory and matroids theory

Andrei Romashchenko, Equipe Escape, LIRMM
mercredi 6 février 2019

Secret sharing is a model of distributing a secret among a group of participants. Each participant is given a "share" of the secret. The secret can be reconstructed only when a sufficient number of shares (an "authorized group" of participants) are combined together. The shares of non-authorized groups of participants should give no information about the secret.

The notion of secret sharing was independently introduced in 1979 by Adi Shamir and George Blakley. Nowadays secret sharing has become an important primitive in several cryptographic protocols.

In this talk we discuss the notion of efficiency (information ratio) of a secret sharing scheme and the connections between secret sharing, matroid theory, and Shannon's information theory. The aim of the talk is to recall the basic definitions and several classic results, and to motivate a more technical forthcoming talk on this subject.
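
As background for the discussion above, here is a minimal sketch of Shamir's threshold scheme (polynomial interpolation over a prime field); the choice of modulus and the function names are mine, for illustration only:

```python
import random

PRIME = 2 ** 61 - 1  # a Mersenne prime; all arithmetic is modulo PRIME

def share(secret, t, n):
    # Shamir's scheme: hide the secret in the constant term of a random
    # degree-(t-1) polynomial, and hand out n point evaluations as shares.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Any t shares determine the polynomial; Lagrange interpolation at 0
    # recovers the constant term, i.e. the secret.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

Any 3 of the 5 shares below recover the secret; modular inversion via `pow(den, -1, PRIME)` requires Python 3.8+.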

On the complexity of modular composition of generic polynomials

Vincent Neiger, XLIM, Université de Limoges
mercredi 30 janvier 2019

This talk is about algorithms for modular composition of polynomials, and for the related problems of inverse modular composition and modular power projection. For two univariate polynomials a and g over a commutative field, modular composition asks to compute b = h(a) mod g for some given h; inverse modular composition asks to compute h such that h(a) = b mod g for some given b; and modular power projection asks to compute the list of projections ell(1), ell(a mod g), ..., ell(a^{n-1} mod g) for some given linear functional ell over the polynomials modulo g, with n the degree of g. For generic g and a, we propose algorithms whose complexity bound improves upon previous algorithms, which come from Brent and Kung's 1978 approach; the new complexity bound is subquadratic in the degree of g and a even when using cubic-time matrix multiplication. Our improvement comes from the fast computation of specific bases of bivariate ideals and operations with these bases thanks to fast univariate polynomial matrix algorithms.
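
The problem statement can be made concrete with the naive baseline: Horner evaluation of h at a, reducing modulo g after each multiplication. A sketch over a prime field F_p, with low-degree-first coefficient lists and monic g (a toy illustration, not the subquadratic algorithm of the talk):

```python
def polymulmod(a, b, g, p):
    # Multiply two polynomials (coefficient lists, low degree first) and
    # reduce modulo the monic polynomial g, all over F_p.
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % p
    while len(res) >= len(g):
        c = res[-1]  # leading coefficient; g is monic, so subtract c * x^k * g
        for k in range(len(g)):
            res[len(res) - len(g) + k] = (res[len(res) - len(g) + k] - c * g[k]) % p
        res.pop()
    return res

def modcomp(h, a, g, p):
    # Horner evaluation of h at a, all arithmetic modulo g: returns h(a) mod g.
    b = [0]
    for c in reversed(h):
        b = polymulmod(b, a, g, p)
        b = [(b[0] + c) % p] + b[1:] if b else [c % p]
    return b
```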

Parity games and finite automata

Alexander Kozachinsky, Moscow State University, Faculty of Mechanics and Mathematics
mercredi 16 janvier 2019

All known quasi-polynomial time algorithms for parity games are based on the existence of a small finite automata for a certain problem on infinite words. Thus it is interesting to obtain lower bounds on the state complexity of this problem. Some results which make a modest progress towards such lower bounds will be presented.

Sieving algorithms for the Shortest Vector Problem

Elena Kirshanova, LIP, ENS Lyon
mercredi 21 novembre 2018

In this talk I give an overview of the complexity of algorithms for the Shortest Vector Problem (SVP), with a focus on a particular family: sieving algorithms. First, I explain why this problem is important and how the complexity of SVP algorithms impacts the choice of parameters for cryptographic protocols. Then, I present recent advances in memory-efficient versions of sieving algorithms, and explain locality-sensitive techniques for this type of algorithm. This part of the talk is based on joint works with Gottfried Herold and Thijs Laarhoven. Finally, I present recent advances in the practical aspects of sieving algorithms for SVP. I describe the technical challenges that arise when one tries to make sieving algorithms practical, and how some of them can be overcome. This part of the talk is ongoing work with Martin R. Albrecht, Leo Ducas, Gottfried Herold, Eamonn W. Postlethwaite, and Marc Stevens.

On the Ring-LWE and Polynomial-LWE problems

Alexandre Wallet, LIP, École Normale Supérieure de Lyon
mercredi 14 novembre 2018

The Ring Learning With Errors problem (RLWE) comes in various forms. Vanilla RLWE is the decision dual-RLWE variant, consisting in distinguishing from uniform a distribution depending on a secret belonging to the dual of the ring of integers O_K of a specified number field K. In primal-RLWE, the secret instead belongs to O_K. Both decision dual-RLWE and primal-RLWE enjoy search counterparts. Also widely used is (search/decision) Polynomial Learning With Errors (PLWE), which is not defined using a ring of integers O_K of a number field K but a polynomial ring Z[x]/f for a monic irreducible f in Z[x]. We show that there exist reductions between all of these six problems that incur limited parameter losses. More precisely: we prove that the (decision/search) dual to primal reduction from Lyubashevsky et al. [EUROCRYPT 2010] and Peikert [SCN 2016] can be implemented with a small error rate growth for all rings (the resulting reduction is nonuniform polynomial time); we extend it to polynomial-time reductions between (decision/search) primal RLWE and PLWE that work for a family of polynomials f that is exponentially large as a function of deg f (the resulting reduction is also non-uniform polynomial time); and we exploit the recent technique from Peikert et al. [STOC 2017] to obtain a search to decision reduction for RLWE. The reductions incur error rate increases that depend on intrinsic quantities related to K and f.

Polynomial Linear System Solving with Errors by Simultaneous Polynomial Reconstruction of Interleaved Reed Solomon Codes

Ilaria Zappatore, LIRMM
mercredi 3 octobre 2018

A classical method for solving systems of linear equations A(x)y(x) = b(x) with polynomial coefficients over Fq[x] consists in evaluating the system at L points and then recovering the solution by interpolation. We consider a scenario where the evaluations are done by black boxes that can output erroneous values, the number of errors being bounded by an integer E. In a recent work, Boyer and Kaltofen provided a deterministic algorithm that uses a number L1 of points depending on the error bound E. In this work we reexamine this problem as a generalization of the Simultaneous Polynomial Reconstruction of interleaved Reed–Solomon codes. We provide a probabilistic algorithm that recovers the solution using L < L1 evaluation points.
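
The error-free evaluation–interpolation skeleton described above can be sketched with exact rational arithmetic; for brevity the "system" here is 1×1, i.e. a(x)·y(x) = b(x) with a polynomial solution y. This is a toy illustration of the classical method, not the error-correcting algorithm of the talk:

```python
from fractions import Fraction

def interpolate(xs, ys):
    # Lagrange interpolation: coefficient list (low degree first) of the
    # unique polynomial of degree < len(xs) through the points (xs, ys).
    n = len(xs)
    coeffs = [Fraction(0)] * n
    for i in range(n):
        basis = [Fraction(1)]   # expanded coefficients of the i-th basis poly
        denom = Fraction(1)
        for j in range(n):
            if j == i:
                continue
            shifted = [Fraction(0)] + basis       # multiply basis by x ...
            for k, c in enumerate(basis):
                shifted[k] -= xs[j] * c           # ... minus xs[j] * basis
            basis = shifted
            denom *= xs[i] - xs[j]
        for k, c in enumerate(basis):
            coeffs[k] += Fraction(ys[i]) * c / denom
    return coeffs

def solve_by_evaluation(a, b, degree_bound):
    # Evaluate y(e) = b(e)/a(e) at enough points, then interpolate y.
    pts, vals = [], []
    e = 0
    while len(pts) < degree_bound + 1:
        e += 1
        ae = sum(c * e ** i for i, c in enumerate(a))
        if ae == 0:      # skip evaluation points where the system degenerates
            continue
        be = sum(c * e ** i for i, c in enumerate(b))
        pts.append(Fraction(e))
        vals.append(Fraction(be, ae))
    return interpolate(pts, vals)
```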

Polynomial identity testing for bounded depth circuits

Alexey Milovanov, Moscou
mercredi 22 août 2018

Polynomial identity testing (PIT) is the problem of efficiently determining whether a multivariate polynomial is identically equal to zero. The polynomial is given as an arithmetic circuit or an algebraic formula.

There exists a simple polynomial-time randomized algorithm for solving PIT: just test equality to zero at a ``random'' point. However, no deterministic polynomial-time algorithm solving PIT is known. Designing an efficient deterministic algorithm is one of the most challenging open problems in computational complexity (there exists a connection between derandomization of PIT and proving circuit lower bounds).
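
The randomized test mentioned above is an application of the Schwartz–Zippel lemma: a nonzero polynomial of total degree d vanishes at a uniformly random point of S^n with probability at most d/|S|. A minimal sketch, with the polynomial given as an evaluation black box (names mine):

```python
import random

def is_zero_poly(poly, num_vars, trials=20, p=2 ** 31 - 1):
    # Randomized PIT: evaluate at random points modulo a large prime.
    # By Schwartz-Zippel, a nonzero low-degree polynomial rarely vanishes.
    for _ in range(trials):
        point = [random.randrange(p) for _ in range(num_vars)]
        if poly(point) % p != 0:
            return False  # certainly nonzero
    return True  # identically zero with high probability
```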

Although no deterministic polynomial-time algorithm for PIT on general algebraic circuits is known, for some restricted classes of circuits it is possible to derandomize PIT. One unexpected tool for proving such results is the Sylvester–Gallai theorem: if finitely many distinct points in the real plane are such that, for every pair of distinct points, the line through them also contains a third point, then they all lie on the same line.

I will talk about the latest achievements in this area (including my results for 4-depth circuits).

Construction of an effective, non-sofic 2D shift with a sub-exponential number of patterns

Julien Destombes, Escape, LIRMM
mercredi 4 juillet 2018

While the characterization of sofic shifts is well understood in dimension 1, it is much less clear in higher dimensions. The proofs of non-soficness for the shifts found in the literature rely on the existence of an exponential number of patterns. We propose here a construction of an effective shift with a polynomial number of patterns.

Kolmogorov complexity and the secret key agreement problem

Andrei Romashchenko, Escape, LIRMM
mercredi 20 juin 2018

We show that the mutual information, in the sense of Kolmogorov complexity, of any pair of strings $X$ and $Y$ is equal to the length of the longest shared secret key that two parties, one having $X$ and the other one having $Y$, can establish via a probabilistic protocol with interaction on a public channel. This result is also extended to the protocols with more than two parties. (A joint work with Marius Zimand.)

Algorithms for Structured Linear Systems Solving and their Implementation

Romain Lebreton, LIRMM
mercredi 6 juin 2018

There exists a vast literature dedicated to algorithms for structured matrices, but relatively few descriptions of actual implementations and their practical performance in symbolic computation. In this talk, we consider the problem of solving Cauchy-like systems, and its application to mosaic Toeplitz systems, in two contexts: first over finite fields, where basic operations have unit cost, then over Q. We introduce new variants of previous algorithms, describe an implementation of these techniques, and report on its practical behavior. We pay special attention to particular cases such as the computation of algebraic approximants.
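
The structure exploited by such algorithms can be made concrete on the Cauchy case: for C with entries 1/(x_i − y_j), the displacement diag(x)·C − C·diag(y) is the all-ones matrix, hence rank one. A small sketch (illustration only, not the solver of the talk):

```python
from fractions import Fraction

def cauchy_matrix(xs, ys):
    # C[i][j] = 1 / (x_i - y_j): the prototypical displacement-structured matrix.
    return [[Fraction(1, x - y) for y in ys] for x in xs]

def displacement(xs, ys):
    # diag(xs)*C - C*diag(ys) has every entry (x_i - y_j) / (x_i - y_j) = 1,
    # so this matrix is all-ones: C has displacement rank 1.
    C = cauchy_matrix(xs, ys)
    return [[xs[i] * C[i][j] - C[i][j] * ys[j] for j in range(len(ys))]
            for i in range(len(xs))]
```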

General purpose 2D computing medium

Frédéric Gruau, LRI
vendredi 6 avril 2018

The theoretical model of our computers, the Turing machine, offers UNBOUNDED MEMORY. The Von Neumann computer architecture is its efficient implementation in real hardware: it does not A PRIORI BOUND the memory. In fact, memory has become so huge that we tend to think of it as an INEXHAUSTIBLE SOURCE.
In this talk, we propose an alternative theoretical-model/architecture synergy, offering UNBOUNDED COMPUTING POWER.
The theoretical model consists of interconnected agents executing instructions that can create other agents. During the execution of a program, a network of agents grows from a single ancestor. The computing power therefore also grows, without limit.
The architecture should not A PRIORI BOUND the number of Processing Elements (PEs): there can be a thousand, or ... a thousand billion! For such a scalable architecture, we achieve performance on two levels: 1- theoretical: the time and space complexity of a parallel algorithm grows optimally as the number of PEs grows; 2- practical: the constant factor behind the complexity formula is small enough that it can be demonstrated for concrete algorithms and significantly large data, on a standard CPU. We do it for sorting and matrix multiplication, with a thousand scalars.

Because PEs come as an INEXHAUSTIBLE SOURCE, we have to give up on programming each PE individually. Instead, we consider this arbitrarily large ensemble of PEs as a ``2D computing medium'': PEs are fine-grained, behave identically, and communicate only with their nearest neighbors in physical space. For simplicity, we use the simplest kind of computing medium, which is Cellular Automata (CA): PEs are CA cells, with a small state, north, south, east and west connections, and synchronized updates. More generally, one could use amorphous computing media, which are simply homogeneous, isotropic, and quasi-synchronous.

We map the unbounded-power model onto the unbounded-power hardware. In other words, we simulate the development of an agent network on a CA, using a (very complex) CA transition rule. That is the most original and central feature of the approach.
The memory of each PE stores the same dozen boolean flags representing spatially extended, physics-like structures, among which are ``membranes'', ``waves'' and ``channels''.
By associating colors to structures, each PE gets a specific color; the state of the whole hardware is drawn as a 2D picture, with one pixel for each PE. When the CA iterates, the state, and therefore the pixel color, changes. We can watch the ongoing computation as a film. ``Membranes'', ``channels'' and ``waves'' move in 2D space. What really happens is resetting the structure flag on one PE and setting it on an adjacent PE.
The movements follow physics-like forces of repulsion or attraction, with two main goals:

1- homogenize the density of membranes, and

2- divide membranes by pinching at the equator.

Each membrane defines the support of one agent, so dividing a membrane really means creating a new agent.
Homogenization acts constantly. In contrast, division is triggered by instructions sent by the host.
A division instruction optionally inserts a ``channel'' between the two offspring membranes, which represents a connection between agents. The movement of channels, membranes and waves is constrained, to enforce the topological invariant corresponding to their role: membranes remain closed loops, channels remain adjacent to the two connected membranes, waves sweep the entire CA. The computation of each structure's final movement is all the more complex as structures are defined with respect to one another, in a hierarchical way. Constraints and moves need to be propagated up and down this hierarchy, for cross-coherence.

A sequence of division instructions develops a specific network of agents; it represents a program. It is loaded from the host onto the CA by propagating a train of ``waves'' carrying instructions. The CA is a ``General Purpose'' CA (GPCA), because when fed with a different sequence of instructions, it will develop a different network. Furthermore, developing networks is a genuinely expressive parallel programming technique. For example, unlike a static circuit, a dynamic circuit can capture algorithms whose dependence graph is data-dependent.

The GPCA is unusually complex, with 64 bits of state and 20,000 gates. We use a CA platform, with a language, a compiler, an optimizer and a debugger, that allows one to program and execute very rapidly CAs of unprecedented complexity. The CA platform opens a new avenue to start exploring the wide and intricate world of complex CAs.
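
The basic substrate described above, identical fine-grained cells with north/south/east/west neighbors and synchronized updates, can be sketched in a few lines. This is a generic toy CA step of my own, not the 20,000-gate GPCA rule:

```python
def ca_step(grid, rule):
    # One synchronous update of a 2D cellular automaton with a von Neumann
    # neighborhood (north, south, west, east) and wrap-around borders.
    h, w = len(grid), len(grid[0])
    return [[rule(grid[i][j],
                  grid[(i - 1) % h][j], grid[(i + 1) % h][j],
                  grid[i][(j - 1) % w], grid[i][(j + 1) % w])
             for j in range(w)] for i in range(h)]
```

For instance, with the parity rule (XOR of the four neighbors), a single live cell spreads to its four neighbors in one step.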

Integer Polynomial Sparse Interpolation with Near-Optimal Complexity

Dan Roche, Computer Science Dept. US Naval Academy
mercredi 14 mars 2018

We investigate algorithms to discover the nonzero coefficients and exponents of an unknown sparse polynomial, provided a way to evaluate the polynomial over any modular ring. This problem has been of interest to the computer algebra community for decades, and its uses include multivariate polynomial GCD computation, factoring, and sparse polynomial arithmetic. Starting with the early works of Zippel, Ben-Or and Tiwari, and Kaltofen, one line of investigation has a key advantage in achieving the minimal number of evaluations of the polynomial, and has received considerable attention and improvements over the years. It is closely related to problems in coding theory and exponential analysis. The downside, however, is that these methods are not polynomial-time over arbitrary fields. A separate line of work starting with Garg and Schost has developed a different approach that works over any finite field. After years of improvements, the complexity of both approaches over ZZ[x] is currently the same. They scale well in most aspects except for the degree; the bit complexity in both cases is currently cubic in the bit-lengths of the exponents. By careful combination of the techniques in both approaches and a few new tricks, we are now able to overcome this hurdle. We present an algorithm whose running time is softly-linear in the size of the output and performs nearly the minimal number of evaluations of the unknown polynomial. Preliminary implementation results indicate good promise for practical use when the unknown polynomial has a moderate number of variables and/or large exponents.

Some algorithms for matrices with displacement structure

Claude-Pierre Jeannerod, LIP, ENS Lyon
mercredi 7 mars 2018

The notion of displacement rank provides a uniform way to deal with various classical matrix structures (such as Toeplitz, Hankel, Vandermonde, Cauchy) and allows for various generalizations. In this talk, we will review some recent progress in the design of asymptotically fast algorithms for multiplying and inverting such matrices. In particular, for n by n matrices of displacement rank alpha, we will see how to reduce the cost of such operations from about alpha^2*n (up to log factors) down to alpha^(w-1)*n, where w < 2.373 is the exponent of (dense, unstructured) matrix multiplication. (Joint work with A. Bostan, C. Mouilleron, and E. Schost.)

Fast computation of Puiseux series

Adrien Poteaux, LIFL, Université Lille I
mercredi 28 février 2018

Puiseux series are an essential tool for studying singularities of plane algebraic curves. This talk will present improvements to the Newton–Puiseux algorithm, which computes such series. We will describe a symbolic-numeric strategy to compute a numerical approximation of these series together with their exact structure. We will then give the main ideas that allowed us to improve the arithmetic complexity of the algorithm. We will conclude by showing the limits reached by this algorithm, and discuss directions that could overcome them in the future.

qDSA: Efficient, compact key exchange and signatures for IoT from elliptic curves and Kummer surfaces

Benjamin Smith, LIX, Inria Saclay
mercredi 14 février 2018

The Internet of Things (IoT) means that an exploding number of small connected devices are increasingly pervading almost every aspect of our daily lives. Securing these devices is extremely important, but it is also a great technical challenge: most of our state-of-the-art cryptographic algorithms are simply too "heavy" to run on these tiny devices with such limited computational and energy resources. This is especially true for asymmetric techniques like key exchange and digital signatures, which by their very nature require intensive algebraic computations.

This talk will describe the qDSA signature scheme, and explain how we have applied the classic geometric theory of Kummer varieties to produce compact, low-memory, high-speed software for high-security key exchange and digital signatures on microcontrollers with very limited memory.

(Joint work with Joost Renes, Radboud University Nijmegen, NL)

Multidimensional SFTs with countably many configurations

Ilkka Törmä, University of Turku, Finland
mercredi 18 octobre 2017

A recurring theme in the theory of tilings is the construction of tile sets whose associated shift spaces are in some sense as restricted as possible. Aperiodic SFTs are one example: all configurations are constrained to have no global period. We consider another restriction, namely that the set of valid configurations is countable. Such tile sets cannot be aperiodic, but the geometry of their configurations is not as constrained as one might think. In particular, it is possible to carry out some hierarchical constructions.

Mealy automata: dynamics of the action, singular points and Wang tilings

Thibault Godin, IRIF
mercredi 11 octobre 2017

Mealy automata are a powerful tool to generate (semi)groups. Since the 80's, they have been used to solve several major conjectures in (semi)group theory. However, people started to study them from a (theoretical) computer science perspective only recently. This approach is rich and has brought many interesting results and questions.
In this talk, we focus on the dynamics of the action of the group on the regular tree, and we show that classical tools from automata theory can be used to simplify and generalize known results and to provide information on this action. Then, we study a closely related problem and use a connection between Mealy automata and Wang tilings to obtain an undecidability theorem.

This is based on a joint work with D. D'Angeli, I. Klimann, M. Picantin, and E. Rodaro.

Stopping time complexity

Alexander Shen, LIRMM, Escape
mercredi 4 octobre 2017

Last year Vovk and Pavlovic suggested a definition of stopping time complexity: how many bits of information are needed to specify when to stop while reading an infinite sequence of bits. (They had some philosophical reasons to introduce this definition, but technically their suggestion can be described in this way.) It turns out that this notion of complexity is a special case of a general scheme of conditional complexities (different topological structures on conditions, programs and descriptions lead to different complexities): the stopping time complexity equals $C(x|x*)$, where the condition $x*$ is understood as information about a prefix of a sequence. We will also consider some other equivalent definitions of stopping time complexity.

On some interesting ternary formulas

Pascal Ochem, ALGCO, LIRMM
mercredi 27 septembre 2017

We show that, up to renaming of the letters, the only infinite ternary words avoiding the formula $ABCAB.ABCBA.ACB.BAC$ (or $ABCAB.BCB.AC$) have the same set of recurrent factors as the fixed point of
$0 \to 012$
$1 \to 02$
$2 \to 1$.
Also, we show that the formula $ABAC.BACA.ABCA$ is 2-avoidable.
This leads to disproving a conjecture of Grytczuk.
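
The fixed point mentioned above can be generated by iterating the morphism; a small generic sketch:

```python
def fixed_point(morphism, seed='0', length=50):
    # Iterate the morphism on the seed; since seed is a prefix of its image,
    # successive iterates share longer and longer prefixes of the fixed point.
    word = seed
    while len(word) < length:
        word = ''.join(morphism[c] for c in word)
    return word[:length]
```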

A Characterization of Infinite LSP Words

Gwenaël Richomme, LIRMM, Escape
mercredi 12 juillet 2017

G. Fici proved that a finite word has a minimal suffix automaton if and only if all its left special factors occur as prefixes. He called LSP all finite and infinite words having this property. We characterize here infinite LSP words in terms of $S$-adicity. More precisely, we provide a finite set of morphisms $S$ and an automaton ${\cal A}$ such that an infinite word is LSP if and only if it is $S$-adic and all its directive words are recognizable by ${\cal A}$.
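
For finite words, the defining property can be checked naively: a factor f is left special if af and bf both occur for two distinct letters a and b, and the LSP property requires every left special factor to be a prefix. A quadratic brute-force sketch (names mine):

```python
def is_lsp(word):
    # Collect all factors, compute the left extensions of each factor's
    # internal occurrences, and check that every left special factor
    # (two or more distinct left extensions) is a prefix of the word.
    factors = {word[i:j] for i in range(len(word)) for j in range(i + 1, len(word) + 1)}
    for f in factors:
        exts = {word[i - 1] for i in range(1, len(word)) if word[i:i + len(f)] == f}
        if len(exts) >= 2 and not word.startswith(f):
            return False
    return True
```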

Factoring integers with ECM on the Kalray MPPA-256 processor

Jérémie Detrey, LORIA, inria Nancy
mercredi 28 juin 2017

The Kalray MPPA-256 is a recent low-power chip which embeds 256
32-bit cores. As such, it can be used as an efficient co-processor
for accelerating computations, bridging the gap between usual CPUs
and the more throughput-oriented GPUs, all the while consuming far
less power.

In this talk, we will present a fast and energy-efficient
implementation of the Elliptic Curve Method (ECM, an algorithm for
factoring integers) on this platform. After a brief presentation of
the ECM and of its important role in cryptanalysis, especially in
the context of factoring RSA keys, we will glance at some of the key
architectural features of the MPPA-256. We will then present an
optimized library for multiprecision modular arithmetic, on top of
which the ECM implementation was built, before concluding with a few
benchmarks and comparisons with the state of the art.

This is a joint work with Masahiro Ishii, Pierrick Gaudry, Atsuo
Inomata, and Kazutoshi Fujikawa.
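
As a rough illustration of why multiprecision modular arithmetic is the core ingredient of ECM, here is a toy stage-1 trial of Lenstra's method: a modular inverse that fails during point arithmetic exposes a factor through a gcd. Curve selection, bounds and names are simplified choices of mine, far from an optimized MPPA-256 implementation:

```python
import math
import random

def ecm_trial(n, bound=100):
    # One random-curve trial of Lenstra's ECM (stage 1 only, toy version):
    # multiply a random point by 2, 3, ..., bound-1 on a random curve
    # y^2 = x^3 + a*x + b over Z/nZ (b is implied by the random point).
    a, x, y = (random.randrange(n) for _ in range(3))

    def add(P, Q):
        # Point addition/doubling mod n; raises ValueError(g) on a found factor g.
        if P is None:
            return Q
        if Q is None:
            return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % n == 0:
            return None  # point at infinity
        if P != Q:
            num, den = y2 - y1, x2 - x1
        else:
            num, den = 3 * x1 * x1 + a, 2 * y1
        g = math.gcd(den % n, n)
        if g > 1:
            raise ValueError(g)  # non-invertible denominator: factor found
        lam = num * pow(den, -1, n) % n
        x3 = (lam * lam - x1 - x2) % n
        return (x3, (lam * (x1 - x3) - y1) % n)

    P = (x, y)
    try:
        for k in range(2, bound):
            Q, R = None, P
            while k:                 # double-and-add: Q = k * P
                if k & 1:
                    Q = add(Q, R)
                R = add(R, R)
                k >>= 1
            P = Q
            if P is None:
                return None          # reached infinity without finding a factor
    except ValueError as e:
        return e.args[0]
    return None
```

A single trial may fail; in practice one retries with fresh random curves.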

On the direct product of periodic and quasiperiodic sequences

Andrei Romashchenko, LIRMM, Escape
mercredi 7 juin 2017

It is known (Avgustinovich, Fon-Der-Flaass, Frid) that the direct product of any periodic bi-infinite sequence $X$ with any quasiperiodic (weakly or strongly recurrent) sequence $Y$ is also quasiperiodic (weakly or strongly recurrent, respectively). We will present several generalizations of this combinatorial lemma and discuss its applications in the study of multidimensional SFTs.

Probabilistic analysis of randomness: Sturmian words and continued fractions

Eda Cesaratto, Universidad Nacional de General Sarmiento, Buenos Aires, Argentine
mercredi 24 mai 2017

(Séminaire commun avec le séminaire du Pôle Algo-Calcul)

Sturmian words and continued fractions are representations of numbers that share the fact of being codifications of orbits of dynamical systems.
Probabilistic analysis of parameters, such as the frequency of digits, for the continued fraction expansion of real numbers is now a classical subject of research.

Sturmian words are related to the celebrated Kronecker sequence K(x) of the fractional
parts of the multiples of a real x, and contrary to the continued fraction context, probabilistic analyses are not frequent in the literature.

We perform a probabilistic study of the well-known discrepancy and of the Arnold measure, which can be viewed as measures of randomness of the sequence K(x). This study gives insight into the pseudo-randomness of the sequence K(x) for a random input x. We consider two main cases, the case where the ``input'' x is a random real and the case where x is a random rational, and we exhibit similarities between them.

Joint work with Brigitte Vallée (CNRS, U. de Caen)
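
The sequence K(x) and a crude discrepancy estimate can be computed directly; a sketch of my own (the Arnold measure is not included):

```python
import bisect

def kronecker(x, n):
    # First n terms of the Kronecker sequence: fractional parts of k*x.
    return [(k * x) % 1.0 for k in range(1, n + 1)]

def star_discrepancy_grid(seq, grid=1000):
    # Crude grid estimate of the star discrepancy: the maximum over thresholds
    # t of |#{s in seq : s < t}/n - t|, a standard measure of equidistribution.
    n = len(seq)
    s = sorted(seq)
    return max(abs(bisect.bisect_left(s, t / grid) / n - t / grid)
               for t in range(grid + 1))
```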

Randomness and Kolmogorov extractors with multiple sources

Daniil MUSATOV, Moscow Institute of Physics and Technology
mercredi 17 mai 2017

A randomness extractor is a function that produces "better" or "new" randomness from weak random sources. Extractors with multiple sources get two (or more) independent random variables with sufficiently high min-entropy and output a random variable close to uniform. A Kolmogorov extractor is a function that formalizes the same notion in terms of Kolmogorov complexity. Namely, it gets two strings with sufficiently high Kolmogorov complexity and sufficiently small mutual information and produces a string with complexity close to its length. Hitchcock, Pavan and Vinodchandran have shown that randomness and Kolmogorov extractors are essentially the same objects: each extractor of one type is also an extractor of the other type with slightly worse parameters. Recently, new breakthrough explicit constructions of randomness extractors were invented by Chattopadhyay, Zuckerman and Li. We discuss their applications to the theory of Kolmogorov extraction. In particular, the new extractor for three sources appears to be a Kolmogorov extractor for time-bounded Kolmogorov complexity. As far as the speaker knows, the case of three sources was not previously considered explicitly. For the more conventional case of two sources we probably need even more sophisticated constructions.
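
For intuition, the oldest explicit two-source construction, the Chor–Goldreich inner-product extractor, which outputs one almost-uniform bit from two independent sources of min-entropy rate above 1/2, fits in one line:

```python
def inner_product_extractor(x, y):
    # Chor-Goldreich two-source extractor: inner product mod 2 of two bit strings.
    return sum(a * b for a, b in zip(x, y)) % 2
```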

The point decomposition problem in hyperelliptic Jacobian varieties

Alexandre Wallet, LIP, ENS Lyon
mercredi 17 mai 2017

The Discrete Logarithm Problem (DLP) in abelian varieties defined over finite fields is believed to be hard in general. For instance, its instantiation on elliptic curves underlies the security of well-known cryptographic primitives. Indeed, no efficient method is currently known for computing elliptic discrete logarithms, except for very restricted classes of elliptic curves. Another interesting family of abelian varieties is that of the Jacobian varieties associated with algebraic curves. On the one hand, more efficient algorithms are known for solving the DLP on these structures; on the other hand, it is sometimes possible to transfer an elliptic DLP to a DLP in a Jacobian variety, which can have implications for the choice of cryptographic parameters.

The talk will present results improving the algorithms for computing discrete logarithms in the Jacobian varieties of hyperelliptic curves, which are the curves closest to elliptic curves. In particular, the improvements obtained would allow the complete realisation of a significant discrete logarithm computation (184 bits) on a "binary" genus-2 curve. Such a computation was previously out of reach, and compares to recent records (EuroCrypt 2017) for DLP over finite fields. I will also present some more theoretical work aimed at exploiting additional structural properties, as well as its limits. Time permitting, I will also present a more general contribution to discrete logarithm computation for arbitrary algebraic curves.

Reconstruction algorithms for sums of affine powers

Timothée Pecatte, LIP, Lyon
mercredi 19 avril 2017

A sum of affine powers is an expression of the form $f(x) = a_1 (x - b_1)^{e_1} + \dots + a_s (x - b_s)^{e_s}$.
Although quite simple, this model is a generalization of two well-studied models: Waring decomposition and sparsest shift.
For these three models there are natural extensions to several variables,
but this talk will be focused on univariate polynomials.
We will present efficient algorithms for reconstructing the smallest expression for an input polynomial $f$.
These algorithms are built on techniques developed for proving lower bounds on the number of terms $s$ needed to represent a polynomial.
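
As a hedged illustration of the model only (not of the reconstruction algorithms from the talk), the sketch below expands a sum of affine powers into the monomial basis: an expression with few terms $s$ can have many nonzero monomial coefficients, which is what makes recovering the smallest expression nontrivial. The function names are ours.

```python
from math import comb

def affine_power_coeffs(a, b, e):
    """Dense coefficients of a*(x - b)^e, lowest degree first."""
    return [a * comb(e, k) * (-b) ** (e - k) for k in range(e + 1)]

def sum_affine_powers(terms):
    """Dense coefficients of sum_i a_i*(x - b_i)^{e_i}, given (a_i, b_i, e_i) triples."""
    out = [0] * (max(e for _, _, e in terms) + 1)
    for a, b, e in terms:
        for k, c in enumerate(affine_power_coeffs(a, b, e)):
            out[k] += c
    return out

# f(x) = 2*(x-1)^3 + 5*(x+2)^2: only s = 2 terms in the model, but four
# nonzero coefficients once expanded in the monomial basis.
f = sum_affine_powers([(2, 1, 3), (5, -2, 2)])   # [18, 26, -1, 2]
```

The reconstruction problem goes in the opposite direction: from the dense coefficients back to the shortest list of triples.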

Machines à temps infini et ordinaux admissibles

Bruno Durand, LIRMM, Escape
mercredi 8 mars 2017

To be announced.

Incompressibility, Kolmogorov complexity and normal numbers

Alexander Shen, LIRMM, Escape
mercredi 1 mars 2017

A normal number is a real number in whose binary representation every
block of $k$ bits has the same limit frequency $2^{-k}$. Normal numbers
have a characterization (V. Becher, P. Heiber, 2013) in terms of
incompressibility by finite automata. However, this characterization
does not follow the standard scheme for algorithmic complexity. We
explain that it can be restated (and simplified) by considering
automata as decompressors, not compressors; this also gives simple
proofs of classical results about normal numbers (a normal number
remains normal when multiplied by a rational, etc.)
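
As a quick empirical sketch of the definition (not of the automata characterization), one can count overlapping $k$-blocks in a prefix of a known normal sequence; the helper names below are ours.

```python
from collections import Counter

def block_frequencies(bits, k):
    """Empirical frequency of each overlapping k-bit block in a 0/1 string."""
    n = len(bits) - k + 1
    counts = Counter(bits[i:i + k] for i in range(n))
    return {block: c / n for block, c in counts.items()}

# The binary Champernowne word 1 10 11 100 101 ... is a classical normal
# sequence; for a normal number every k-block tends to frequency 2^{-k}.
champ = "".join(format(i, "b") for i in range(1, 2000))
freqs = block_frequencies(champ, 2)   # each value drifts toward 0.25
```

Convergence is slow (the concatenated leading 1s bias short prefixes), which is consistent with normality being an asymptotic property.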

A simple proof of (deterministic) #P-completeness of counting roots in finite fields

Aleksei Milovanov, Moscow state university
mercredi 25 janvier 2017

A counting problem is #P-complete if every problem in #P (counting objects that have some easily decidable property) can be reduced to it. This definition makes sense for different reduction notions. It was known for a long time (J. von zur Gathen, M. Karpinski and I. Shparlinski, Counting curves and their projections, Computational Complexity, 6, 64--99, 1996) that counting solutions of a polynomial equation P(x,y)=0 in F_{p^n} is #P-complete with respect to probabilistic reductions if the polynomial is given in sparse representation (zero coefficients are omitted). We provide a simple proof that improves this result (the polynomials are univariate, and the reductions are deterministic).
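
For concreteness only, here is the counting problem itself in its most naive form, brute force over $F_p$ with the polynomial in sparse representation (the talk is about the hardness of doing much better; the function name is ours).

```python
def count_roots_mod_p(sparse_poly, p):
    """Count the x in F_p with P(x) = 0, where sparse_poly is a list of
    (coefficient, exponent) pairs, i.e. the sparse representation."""
    def value(x):
        return sum(c * pow(x, e, p) for c, e in sparse_poly) % p
    return sum(1 for x in range(p) if value(x) == 0)

# x^2 - 1 over F_7: the two roots are 1 and 6.
n_roots = count_roots_mod_p([(1, 2), (-1, 0)], 7)
```

Note that the sparse representation makes the input size logarithmic in the degree, so this exhaustive loop is exponential in the input size.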

Are there non-stochastic objects?

Nikolay K. Vereshchagin, Moscow State University
mercredi 25 janvier 2017

An object (encoded by a binary string x) is called stochastic if there is a simple plausible statistical explanation for it. As explanations we consider probabilistic Turing machines M whose running time is bounded by a polynomial of the length of the output. We call such a machine M a plausible explanation for x if every property of x recognized by a polynomial time deterministic Turing machine with a short program is shared by a non-negligible fraction of M's outputs. We call x stochastic if there is a plausible explanation M for x such that the program of M is short, i.e., the explanation is simple.

Are all objects stochastic? Perhaps not. For instance, structured objects like paintings might be non-stochastic. We will prove that under some complexity-theoretic assumption there are infinitely many objects that are not stochastic. The assumption states that there is a unary language in NP that is not recognized by any polynomial time probabilistic machine. (Joint work with A. Milovanov.)

An Algebraic Geometric Approach to Multidimensional Symbolic Dynamics

J. Kari, Univ. Turku, Finlande
mercredi 7 décembre 2016

We study low complexity multidimensional words and subshifts using tools of algebraic geometry. We express the word as a multivariate formal power series over integers and investigate the setup when there is an annihilating polynomial: a polynomial whose product with the power series is zero. We prove that the word must then be a sum of periodic words over integers, possibly with unbounded values. As a specific application of the method we obtain an asymptotic version of the well-known Nivat's conjecture.
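
As a hedged one-dimensional illustration of the annihilator idea (the talk works with multivariate series), a $p$-periodic integer word, read as a power series, is annihilated by $x^p - 1$: every product coefficient at degree $n \ge p$ is $s_{n-p} - s_n = 0$. The checker below is ours.

```python
def annihilates(seq, poly):
    """Check that multiplying the power series sum_n seq[n]*x^n by the
    polynomial with coefficient list poly (lowest degree first) kills every
    product coefficient from degree deg(poly) up to len(seq)-1."""
    d = len(poly) - 1
    prod = [sum(poly[k] * seq[n - k] for k in range(d + 1) if 0 <= n - k < len(seq))
            for n in range(len(seq))]
    return all(c == 0 for c in prod[d:])

# A 3-periodic integer word is annihilated by x^3 - 1: the product
# coefficient at degree n >= 3 is seq[n-3] - seq[n] = 0.
periodic = [2, 0, 1] * 50
ok = annihilates(periodic, [-1, 0, 0, 1])   # coefficients of x^3 - 1
```

The talk's direction is the converse and much harder: from the existence of an annihilating polynomial, deduce that the word decomposes into periodic pieces.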

Subtraction-free computations and cluster algebras

Dima Grigoriev, Laboratoire Paul Painlevé, Université Lille 1
mercredi 7 septembre 2016

Using cluster transformations we design
subtraction-free algorithms for computing Schur polynomials and for
generating spanning trees and arborescences polynomials. The latter
provides an exponential complexity gap between circuits admitting
arithmetic operations $+, \, \times, \, /$ versus $+,\, \times$. In
addition, we establish an exponential complexity gap between
circuits admitting $+,\, -,\, \times,\, /$ versus $+, \, \times, \,
/$. Together with V. Strassen's result on "Vermeidung von
Divisionen" this closes a long-standing problem on comparative
complexity power between all possible subsets of operations $+,\,
-,\, \times,\, /$.

(Joint work with S. Fomin and G. Koshevoy.)

Determining sets of quasiperiods of infinite words

Guilhem Gamard, Escape, LIRMM
mercredi 13 juillet 2016

A word is quasiperiodic if it can be obtained by concatenations and
overlaps of a smaller word, called a quasiperiod. Based on links
between quasiperiods, right special factors and square factors, we
introduce a method to determine the set of quasiperiods of a given
right infinite word. Then we study the structure of the sets of
quasiperiods of right infinite words and, using our method, we
provide examples of right infinite words with extremal sets of
quasiperiods (no quasiperiod is quasiperiodic, all quasiperiods
except one are quasiperiodic, \ldots). Our method is also used to
provide a short proof of a recent characterization of quasiperiods
of the Fibonacci word. Finally we extend this result to a new
characterization of standard Sturmian words using a property of
their sets of quasiperiods.
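
On finite words the definition is easy to make executable. The hedged sketch below (our naming, finite-word analogue of the talk's infinite setting) uses the fact that any quasiperiod of $w$ must be a prefix of $w$, since it has to cover position 0:

```python
def is_quasiperiod(q, w):
    """True if w is covered by (possibly overlapping) occurrences of q."""
    occ = [i for i in range(len(w) - len(q) + 1) if w.startswith(q, i)]
    return (bool(occ) and occ[0] == 0 and occ[-1] + len(q) == len(w)
            and all(b - a <= len(q) for a, b in zip(occ, occ[1:])))

def quasiperiods(w):
    """All quasiperiods of a finite word w (w itself excluded)."""
    return [w[:k] for k in range(1, len(w)) if is_quasiperiod(w[:k], w)]

# A prefix of the Fibonacci word and its quasiperiods:
qs = quasiperiods("abaababaaba")   # ['aba', 'abaaba']
```

Consecutive occurrences may overlap or be exactly adjacent, which is the `b - a <= len(q)` condition.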

Tight upper bound on splitting by linear combinations for pigeonhole principle

Vsevolod Oparin, St. Petersburg University of the Russian Academy of Sciences.
mercredi 29 juin 2016

SAT is the well-known problem of deciding whether a given Boolean formula has a satisfying assignment. The main technique for solving SAT is DPLL algorithms. A usual DPLL algorithm chooses a single variable and an order of values to substitute, and runs recursively on the simplified formulas, seeking a satisfying assignment. We consider an extension of the algorithm which chooses not a single variable but a linear form modulo 2. It is known that formulas encoding the pigeonhole principle with $n$ pigeons and $m$ holes ($PHP_n^m$) remain hard even for linear splitting and require $2^{\Omega(n)}$ running time. The known bound for a usual DPLL algorithm without linear splitting is $2^{\Theta(n \log n)}$. In this talk we show that $PHP_n^m$ can be solved in $2^{O(n)}$ time with linear splitting and, as a corollary, that formulas encoding the perfect matching principle on graphs with $n$ vertices can also be solved in $2^{O(n)}$ time.
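
For readers unfamiliar with the baseline, here is a minimal sketch of the usual variable-splitting DPLL (not the linear-form extension the talk studies); the encoding and names are ours.

```python
def dpll(clauses):
    """Minimal DPLL on a CNF given as a list of tuples of nonzero integers
    (literal -v means the negation of variable v).  Returns a satisfying
    set of literals, or None if the formula is unsatisfiable."""
    if not clauses:
        return set()                     # no constraints left: satisfied
    if any(len(c) == 0 for c in clauses):
        return None                      # empty clause: dead branch
    v = abs(clauses[0][0])               # split on a single variable ...
    for lit in (v, -v):                  # ... trying both truth values
        simplified = [tuple(l for l in c if l != -lit)
                      for c in clauses if lit not in c]
        model = dpll(simplified)
        if model is not None:
            return model | {lit}
    return None

# (x1 or x2) and (not x1 or x2) and (not x2 or x3) is satisfied by x1=x2=x3=1.
model = dpll([(1, 2), (-1, 2), (-2, 3)])
```

The extension in the talk replaces the line "split on a single variable" by a split on the two values of a linear form over GF(2), which strictly generalizes branching on one variable.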

A Linear Acceleration Theorem for 2D Cellular Automata on all Complete Neighborhoods

Anaël Grandjean, Escape, LIRMM
mercredi 29 juin 2016

Linear acceleration theorems are known for most computational models. Although such results have been proved for two-dimensional cellular automata working on specific neighborhoods, no general construction was known. We present here a technique of linear acceleration for all two-dimensional languages recognized by cellular automata working on complete neighborhoods.

Avoidability of formulas with two variables

Pascal Ochem, LIRMM, AlGCo
mercredi 22 juin 2016

In combinatorics on words, a word $w$ over an alphabet $\Sigma$ is said to avoid a pattern $p$ over an alphabet $\Delta$ of variables if there is no
factor $f$ of $w$ such that $f = h(p)$ where h : $\Delta^* \to \Sigma^*$ is a non-erasing
morphism. A pattern $p$ is said to be $k$-avoidable if there exists an infinite
word over a $k$-letter alphabet that avoids $p$. We consider the patterns
such that at most two variables appear at least twice, or equivalently, the
formulas with at most two variables. For each such formula, we determine
whether it is $2$-avoidable.
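
The definition can be checked by brute force on short words: try every factor of $w$ and every assignment of nonempty images to the variables of $p$. This hedged sketch (our naming, exponential time, illustration only) does exactly that:

```python
def matches(pattern, word):
    """True if some factor of word equals h(pattern) for a non-erasing
    morphism h mapping the variables of pattern to nonempty words."""
    def fit(p, f, h):
        if not p:
            return not f                 # pattern and factor consumed together
        v = p[0]
        if v in h:                       # variable already assigned
            return f.startswith(h[v]) and fit(p[1:], f[len(h[v]):], h)
        return any(fit(p[1:], f[k:], {**h, v: f[:k]})  # try every nonempty image
                   for k in range(1, len(f) + 1))
    return any(fit(pattern, word[i:j], {})
               for i in range(len(word)) for j in range(i + 1, len(word) + 1))

def avoids(word, pattern):
    return not matches(pattern, word)

# A prefix of the Thue-Morse word: it contains squares (pattern AA)
# but, being overlap-free, no cubes (pattern AAA).
tm = "0110100110010110"
```

Avoidability asks for an infinite word with this property, which of course cannot be certified by such a finite check.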

Logic for Algorithms

Rod Downey, Victoria University, Wellington, New Zealand
mercredi 15 juin 2016

Graphs and similar objects that can be encoded in reasonable parse
languages allow algorithmic metatheorems to establish parameterized
tractability. This talk will review some of the high points of this area. It
will also discuss recent applications of this idea to low-dimensional topology
by the author and Benjamin Burton.

An introduction to uniform computation: leader election on cellular automata with periodic input

Nicolas Bacquey , GREYC, Caen
mercredi 11 mai 2016

All the usual models of computation feature a "singularity" that makes it possible to start a computation: the head of a Turing machine, an initial state, a cell at the border of a cellular automaton configuration... Is it possible to define a notion of computation on a model stripped of as many of these singularities as possible? How should the input be presented, and how should a result be read? How should complexity be defined?

We will try to answer these questions in the model of cellular automata on periodic configurations, a model in which both the data and the program are uniform in space. More precisely, we will present a family of algorithms performing leader election on these periodic configurations. Electing a single cell of the configuration is impossible by definition; the algorithms presented use techniques that turn spatial differences into temporal differences, inducing symmetry breaking. In this way, one manages to elect a single equivalence class of cells of the configuration.

On Machines and Melodies - An Invitation to Infinitary Computability

Merlin Carl, Université de Constance Allemagne
mercredi 27 avril 2016

Turing computability provides an intuitive and attractive formal
framework for the analysis of the question which mathematical objects
and functions can be generated by applying a certain mechanical
procedure finitely many times. However, in mathematical practice,
we often encounter examples of objects generated by infinite processes
involving limits, such as adding reals, computing with limits of real
sequences, forming direct limits of directed systems, hulls, closures
etc. This motivates the study of infinitary computations that can account
for the limit phenomenon in such processes and should provide
a framework for extending results from recursion theory to infinitary
mathematical objects.

I will introduce some of the most prominent machine models of
infinitary computability and give an overview of results obtained about
them.

Stable states of perturbed Markov chains

Stéphane Le Roux, Université Libre de Bruxelles
vendredi 8 avril 2016

Given an infinitesimal perturbation of a discrete-time finite Markov chain, we seek the states that are stable despite the perturbation, i.e., the states whose weights in the stationary distributions can be bounded away from $0$ as the noise fades away. Chemists, economists, and computer scientists have been studying irreducible perturbations built with exponential maps. Under these assumptions, Young proved the existence of and computed the stable states in cubic time. We fully drop these assumptions, generalize Young's technique, and show that stability is decidable as long as $f \in O(g)$ is. Furthermore, if the perturbation maps (and their multiplications) satisfy $f \in O(g)$ or $g \in O(f)$, we prove the existence of and compute the stable states and the metastable dynamics at all time scales where some states vanish. Conversely, if the big-O assumption does not hold, we build a perturbation with these maps and no stable state. Our algorithm also runs in cubic time despite the general assumptions and the additional work. Proving the correctness of the algorithm relies on new or rephrased results in Markov chain theory, and on algebraic abstractions thereof.
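
A hypothetical two-state example (ours, using the closed-form stationary distribution rather than any algorithm from the talk) shows what "weight bounded away from 0" means:

```python
def stationary_two_state(a, b):
    """Stationary distribution (pi0, pi1) of the two-state chain with
    transition probabilities P(0 -> 1) = a and P(1 -> 0) = b."""
    return (b / (a + b), a / (a + b))

# Perturbation built from exponential maps: P(0 -> 1) = eps, P(1 -> 0) = eps^2.
# As the noise fades, state 0's stationary weight eps/(1 + eps) vanishes while
# state 1's weight 1/(1 + eps) stays bounded away from 0: only state 1 is stable.
weights = [stationary_two_state(eps, eps ** 2) for eps in (0.1, 0.01, 0.001)]
```

Note that the unperturbed chain (eps = 0) has both states absorbing, so the stationary distribution alone cannot distinguish them; stability is a property of the perturbation.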

Fast computation of minimal interpolation bases in Popov form for arbitrary shifts

Vincent Neiger, LIP, ENS Lyon
mercredi 30 mars 2016

Recently, [Zhou - Labahn, 2012] gave an algorithm for Hermite-Padé approximation that can efficiently compute a polynomial matrix whose rows form a basis of all solutions and whose degrees are minimal for some prescribed shift; for efficiency, they require an assumption on the shift. In this talk, we propose an algorithm which is similarly efficient, makes no assumption on the shift, and deals with the more general problem of computing minimal interpolation bases. Those represent bases of solutions for the rational interpolation problem in [Beckermann - Labahn, 2000]; it encompasses for example Hermite-Padé approximation, M-Padé approximation, and the bivariate interpolation step in the Guruswami-Sudan algorithm.

Joint work with Claude-Pierre Jeannerod, Éric Schost, and Gilles Villard.

Code generator matrices as entropy extractors

Alessio Meneghetti, Universita di Trento, Italia
mercredi 17 février 2016

There is a connection between the Walsh spectrum of the output of a binary random number generator (RNG) and the bias of individual bits. This can be used to show how previously known bounds on the performance of linear binary codes as entropy extractors can be derived by viewing generator matrices as selectors of a subset of that spectrum. A connection with the code's weight distribution can also be proved, which implies a new bound on the performance of a linear extractor.
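
To see why the weight of a generator-matrix row matters, consider the bias reduction obtained by XOR-ing independent biased bits, the classical piling-up lemma. This hedged sketch (ours, not the paper's bounds) checks the lemma against exhaustive enumeration:

```python
from itertools import product

def xor_bias_exhaustive(biases):
    """P(XOR of independent bits = 0) - 1/2, where bit i has
    P(bit_i = 0) = 1/2 + biases[i]; computed by full enumeration."""
    p_even = 0.0
    for bits in product((0, 1), repeat=len(biases)):
        p = 1.0
        for b, d in zip(bits, biases):
            p *= (0.5 + d) if b == 0 else (0.5 - d)
        if sum(bits) % 2 == 0:
            p_even += p
    return p_even - 0.5

def xor_bias_pileup(biases):
    """Piling-up lemma: the XOR's bias is 2^(n-1) * prod(biases)."""
    out = 2.0 ** (len(biases) - 1)
    for d in biases:
        out *= d
    return out

# XOR-ing three bits of bias 0.1 (a weight-3 parity, i.e. a weight-3 row of
# a generator matrix) shrinks the bias to 2^2 * 0.1^3 = 0.004.
```

Higher-weight rows extract less biased bits, which is the intuition behind relating extractor performance to the code's weight distribution.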

Efficient Universal Computation by Greedy Molecular Folding

Nicolas Schabanel, LIAFA, Paris
mercredi 10 février 2016

Joint work with: Cody Geary, Pierre-Étienne Meunier and Shinnosuke Seki

We introduce and study the computational power of Oritatami, a theoretical model to explore greedy molecular folding, in which the molecule begins to fold before its production is complete. This model is inspired by our recent experimental work demonstrating the construction of shapes at the nanoscale by folding an RNA molecule during its transcription from an engineered sequence of synthetic DNA. While predicting the most likely conformation is known to be NP-complete in other models, Oritatami sequences fold optimally in linear time. Although our model uses only a small subset of the mechanisms known to be involved in molecular folding, we show that it is capable of efficient universal computation, implying that any extension of this model will have this property as well.

We develop several general design techniques for programming these molecules. Our main result in this direction is an algorithm, running in time linear in the sequence length, that finds a rule for folding the sequence deterministically into a prescribed set of shapes depending on its environment. This shows that the corresponding problem is fixed-parameter tractable, although we proved it is NP-complete in the number of possible environments. This algorithm was used effectively to design several key steps of our constructions.

On aperiodic tilings of the plane by polygons

Nikolai Vereshchagin, Moscow University
mercredi 20 janvier 2016

We consider some families of self-similar tilings of the plane by triangles, rectangles and hexagons. Each family we consider corresponds to a substitution scheme. We investigate which of those families of tilings can be defined by local rules.

On algorithms for integer factorization and discrete logarithms computation

Cyril Bouvier, IMB, Université de Bordeaux
mercredi 16 décembre 2015

In this talk, I will present some results obtained during my Ph.D on integer factorization and discrete logarithm computation in finite fields. First, I will present the ECM algorithm for integer factorization and a new method to analyze the elliptic curves used in this algorithm by studying the Galois properties of division polynomials. Then, I will talk about the NFS algorithm for integer factorization and in particular the polynomial selection step for which I will propose improvements of existing algorithms. Finally, I will talk about a common step of the NFS algorithm for integer factorization and the NFS-DL and FFS algorithms for discrete logarithm computations: the filtering step. I will explain this step thoroughly and present an
improvement for which I will study the impact using data from several computations of discrete logarithms and factorizations.

Kolmogorov complexity versions of busy beavers

Mikhail Andreev, Moscou
mercredi 16 décembre 2015

The busy beaver function was introduced by Tibor Radó as the maximum number that can be printed by a Turing machine with at most N internal states over the {0,1} alphabet. In complexity theory, an analogue of the busy beaver number was introduced by Gács as the maximum number with Kolmogorov complexity at most N. This function is defined up to an additive term in the argument (just as complexity is defined up to an additive term). Different types of Kolmogorov complexity give different busy beaver functions. In this talk I am going to give formal definitions of the complexity busy beaver for plain and prefix-free Kolmogorov complexity, and discuss some basic properties of these functions and the relations between them.

The halting problem and its approximation

Alexander Shen, Escape, LIRMM
mercredi 9 décembre 2015

Often we say that "for any reasonable programming language [this or that]". But what does "reasonable" mean here? There are several possible requirements: one of them is the s-m-n theorem, or G\"odel property: any other programming language can be effectively translated into it. It is not obvious that this guarantees the invariance of the properties we care about: for example, it is not obvious (but true) that for every program we can effectively produce infinitely many equivalent programs (we can add comments in a standard programming language, but the translator could start by deleting them). The Rogers equivalence theorem helps by proving that every two G\"odel programming languages are related by a computable bijection of programs.

There are some properties for which this is not enough. For example, one may ask whether the halting problem is generically solvable, i.e., whether there is an algorithm that correctly solves it on a fraction of inputs that is asymptotically $1$. Here for some programming languages the answer is trivial --- e.g., if we require that a program file containing "control-Z" immediately terminates, then asymptotically almost all programs terminate. (A more interesting result of the same type: most Turing machines with N states on a one-sided tape fall off the tape and terminate (Hamkins, Myasnikov).) Still there is a natural requirement on the programming language ("efficient optimality", the existence of a translator that increases the length only by $O(1)$), and a corresponding invariance result due to Schnorr.

For the halting problem, however, a weaker property of optimality (without efficiency requirement) is enough, and there is a simple reason related to Chaitin-like $\Omega$-numbers: for an optimal programming language the number of terminating programs of length at most $n$ has complexity $n-O(1)$. This explains why the halting problem is not generically solvable for all optimal languages.

Valuative capacity of a set of infinite words

Youssef Fares, LAMFA, Amiens
mercredi 25 novembre 2015

In 1997, Manjul Bhargava gave a generalization of the notion of factorial. This notion is linked to that of $p$-ordered sequences, also introduced by Bhargava. Using the latter notion, one can associate to every subset $E$ of $Z_p$ a sequence called the characteristic sequence of $E$. This sequence is, in general, not easy to determine.
By viewing a set $M$ of infinite words (over a finite alphabet) as a subset of $Z_p$, one can study
the characteristic sequence of $M$ and its asymptotic limit.
The aim of the talk is to recall all these notions, to give simple properties related to them, and to study two particular cases:
1) the case of a subshift of finite type;
2) the case of a set of Sturmian words.

A combinatorial version of the Slepian-Wolf problem

Andrei Romashchenko, Escape, LIRMM
mercredi 14 octobre 2015

(joint work with Daniyar Chumbalov, EPFL)

We study the following combinatorial version of the Slepian-Wolf coding scheme. Two isolated Senders are given binary strings $X$ and $Y$ respectively; the length of each string is equal to $N$, and the Hamming distance between the strings is bounded by some threshold $R$. The Senders compress their strings and communicate the results to the Receiver. Then the Receiver must reconstruct both strings $X$ and $Y$. The aim is to minimise the lengths of the transmitted messages.

This problem lies on the borderline between coding theory and communication complexity, and to cope with it we need tools from both fields. We prove nontrivial lower bounds for a deterministic variant of this problem with syndrome coding, i.e., for the case where at least one of the Senders uses linear encoding. For the probabilistic setting (with randomised parties) we find the theoretical optimum of communication complexity and construct an effective (polynomial time) protocol with optimal lengths of messages.
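
A toy instance of syndrome coding (ours, with $N = 7$ and distance threshold $R = 1$, using the classical [7,4] Hamming code rather than anything from the talk) shows how linear encoding lets Sender X transmit 3 bits instead of 7:

```python
# Parity-check matrix of the [7,4] Hamming code: column j is the binary
# expansion of j + 1, so a nonzero syndrome directly names an error position.
H = [[(j + 1) >> i & 1 for j in range(7)] for i in range(3)]

def syndrome(x):
    """3-bit syndrome H*x over GF(2) of a 7-bit list x."""
    return [sum(h * b for h, b in zip(row, x)) % 2 for row in H]

def reconstruct(syn_x, y):
    """Receiver side: recover x from X's 3-bit syndrome and Y's full string,
    assuming the Hamming distance between x and y is at most 1."""
    s = [(a + b) % 2 for a, b in zip(syn_x, syndrome(y))]  # = syndrome(x XOR y)
    pos = s[0] + 2 * s[1] + 4 * s[2]                       # 0 means x == y
    x = list(y)
    if pos:
        x[pos - 1] ^= 1
    return x

x = [1, 0, 1, 1, 0, 0, 1]
y = [1, 0, 1, 0, 0, 0, 1]          # within Hamming distance 1 of x
x_rec = reconstruct(syndrome(x), y)
```

By linearity, the XOR of the two syndromes equals the syndrome of the difference $x \oplus y$, a string of weight at most $R$, which the code can decode; the lower-bound questions in the talk concern how short such messages can be made in general.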

Doubled patterns are 3-avoidable

Pascal Ochem, AlGCo, LIRMM
mercredi 30 septembre 2015

A pattern is doubled if all of its variables appear at least twice.
In the last talk, we conjectured that the so-called "power series method" and "entropy compression method" are equivalent.
The conjecture is now proven and we use the unified method to show
that most of the doubled patterns are avoidable over the 3-letter alphabet.
There remain 10 doubled patterns that are shown to be 3-avoidable with the use of morphisms.

Tabulating Class Groups of Quadratic Fields

Michael Jacobson, Université de Calgary
mercredi 23 septembre 2015

Class groups of quadratic fields have been studied since the time of Gauss, and in modern times have been used in applications such as integer factorization and public-key cryptography. Tables of class groups are used to provide valuable numerical evidence in support of a number of unproven heuristics and conjectures, including those due to Cohen and Lenstra. In this talk, we discuss recent efforts to extend existing, unconditionally correct tables of both imaginary and real quadratic fields.

Note: this Eco-Escape seminar talk will take place within the broader setting of the algo-calcul pole seminar.

Randomness deficiencies: quantifying randomness

Gleb Novikov, Moscow
mercredi 26 août 2015

Algorithmic randomness divides binary sequences into random (believable as coin-tossing outputs) and non-random. Adding a bit to a random sequence yields a random sequence, so adding a billion zeros also preserves randomness. However, the resulting sequence is ``less random''. Randomness deficiency captures this difference: a sequence is random iff its randomness deficiency is finite, but the deficiency can be large or small. The classical results (e.g., van Lambalgen's theorem on pair randomness) have quantitative versions that are inequalities for randomness deficiencies. We consider several examples of this kind.

Antistochastic strings and their applications

Aleksei Milovanov, Moscow
mercredi 26 août 2015

Kolmogorov complexity of a string is defined as the minimal length of a program that generates it. This shortest program, however, could require a lot of time to produce the string, while longer programs may be faster. Antistochastic strings are strings for which this trade-off effect happens in the strongest form. It turns out that they have interesting properties that can be used to prove some results in algorithmic information theory and coding theory.

Non-computation times of infinite time Turing machines

Sabrina Ouazzani, Escape, LIRMM
mercredi 24 juin 2015

A general presentation of infinite time Turing machines, followed by a study
of some ordinal properties of clockability and admissibility.

Quasiperiodicity and non-computability in tilings

Andrei Romashchenko, Escape, LIRMM
mercredi 20 mai 2015

We study tilings of the plane that combine strong properties of different nature: combinatorial (quasi-periodicity) and algorithmic (non-recursivity). Thus, we come up with a tile set that accepts only quasi-periodic and non-recursive tilings, and discuss some generalisations of this construction.

Synchronisation and cellular automata

Gaétan Richard, Greyc, Caen
mercredi 13 mai 2015

With a view to studying the robustness of complex systems, a phenomenon that seems interesting is synchronisation: starting from any global state, all the elements manage to synchronise. In this talk, we will rely on the discrete and regular model of one-dimensional cellular automata in order to sketch a formal definition of the phenomenon and explore which conditions allow it to appear.

An Introduction to Hyperelliptic Curve Arithmetic

Renate Scheidler, University of Calgary, Canada
mercredi 6 mai 2015

Elliptic and low genus hyperelliptic curves represent a very suitable setting for public key cryptography, due to their efficient arithmetic and excellent security properties. Elliptic curves support simpler and faster arithmetic than their hyperelliptic counterparts, but this is offset by the fact that genus 2 hyperelliptic curves achieve the same level of security with a base field that is half the size of that used for elliptic curves. Our main protagonist in this primer on hyperelliptic curves will be the Jacobian, which is the hyperelliptic analogue of the group of points on an elliptic curve. We will introduce a computationally highly useful representation of the elements of the Jacobian, along with their two-step arithmetic consisting of addition and reduction. This talk is targeted at an audience with no or minimal exposure to elliptic curves and no prior knowledge of algebraic curves in general.

Construction of hyperelliptic curves with complex multiplication

Sorina Ionica, Université de Bordeaux 1
mercredi 22 avril 2015

The security of curve-based cryptographic protocols relies on the hardness of the discrete logarithm problem. To study this problem, we consider the following graph: the nodes are genus-3 curves and the edges are isogenies between the Jacobians of these curves. In this graph, called the isogeny graph, there are two types of nodes: strong hyperelliptic nodes, for which the discrete log is hard, and weak non-hyperelliptic nodes, for which efficient attacks are available. In this talk, we give a method to construct genus-3 hyperelliptic curves for cryptography and present an approach for studying the discrete log in genus 3 via the isogeny graph.

The Collatz conjecture aka the 3n+1 problem

Mario Weitzer, Montanuniversität Leoben, Autriche
mercredi 15 avril 2015

The subject of this talk is the famous Collatz conjecture, also known as the 3n+1 problem. Let $n$ be any positive integer and apply the following transformation: if $n$ is even, divide it by 2; if $n$ is odd, multiply it by 3 and add 1. The Collatz conjecture is the unproven observation that no matter what number $n$ one starts with, repeated application of this transformation always eventually yields 1. Though extremely easy to formulate, this problem has been open since 1937. In this talk we will give an introduction to the problem, present basic facts and interpretations, and deduce and discuss first nontrivial results.
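
The transformation above is a one-liner to experiment with (the function name is ours):

```python
def collatz_steps(n):
    """Number of Collatz steps needed for a positive integer n to reach 1."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# 27 is a famously slow starter: its trajectory climbs as high as 9232
# before finally descending, reaching 1 only after 111 steps.
trajectory_length = collatz_steps(27)   # 111
```

The conjecture is precisely the claim that this loop terminates for every positive starting value.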

Maximum entropy stochastic processes for timed automata

Nicolas Basset, Oxford, Royaume-Uni
mercredi 1 avril 2015

We first recall the question that is later generalised to timed automata: how can the paths of a graph be generated at random in a quasi-uniform way?
One solution to this question is based on the maximum entropy stochastic process described by Shannon (known as the Parry measure in symbolic dynamics and ergodic theory).
We then generalise the notions of stochastic process and entropy to timed automata.
As in the untimed case, we construct a maximum entropy stochastic process for a timed automaton.
This stochastic process is defined using the spectral attributes (spectral radius, eigenfunctions) of the Asarin-Degorre operator, a timed analogue of the adjacency matrix of a graph.
This process allows quasi-uniform random generation of the runs of a timed automaton.
One application is the (quasi-)uniform random generation of permutations whose word of successive ascents and descents lies in a given regular language.

Game semantics for higher-order complexity

Hugo Férée, LORIA, Nancy
mercredi 25 mars 2015

Although the notions of complexity for functions of order 1 $(N \to N)$ and order 2 $((N \to N) \to N)$ are well defined and have many characterizations, this is not
the case for functions of order 3 and above.
We propose to approach the complexity problem using game semantics, that is, by describing a computation as a dialogue between the computed object and its inputs. We then define the size and complexity of strategies for certain so-called sequential games. This applies in particular to the games associated with the higher-order language PCF, which allows us to define a notion of complexity on these functions. The resulting class of polynomial-time computable functions satisfies, in particular, the minimal properties one expects of an analogue of FPTIME for higher-order functions.

Improved Parallel Gaussian Elimination for Gröbner Bases Computations in Finite Fields

Brice Boyer, LIP6, UPMC
mercredi 25 mars 2015

We present a GPLv2 open source C library for linear algebra specialized for eliminating matrices generated during Gröbner Bases computations in algorithms like F4 or F5. We improve on the initial ideas of Faugère and Lachartre (FGL).

Our approach takes even more advantage of the very special structure the corresponding matrices have: quasi unit-triangular sparse matrices with patterns in the data. Optimizing this reduction step is crucial for the overall Gröbner Bases computation.

We first present improved data structures for storing these matrices in binary format, namely by compressing the repeated data in the rows and the column indexes, before gzip-ing the result. We can save up to an order of magnitude in space, allowing us to produce and share a large database of such matrices for benchmarking and testing purposes.

We show efficient blocked data structures for computing the reduced forms with specialized axpy and trsm operations, that take advantage of the patterns in the matrices and their sparsity. For instance, a special multiline storage allows cache friendly storage of sparse rows. We also reduce the number of operations, in a parallel friendly fashion, by changing the order of the operations in the elimination and by not computing the full row echelon form.

Finally, we present experimental results for sequential and parallel computations on NUMA architectures. With our new implementation we get a 5-10% speed-up for the sequential algorithm, depending on the rank. We also get better scaling, up to 32 (non-hyper-threaded) cores instead of 16: we obtain speed-ups around $14$ or $16$ for bigger benchmarks. We also reduce the amount of memory used during the computation by more than a factor of two.

The whole alphabet (and then some) agree on a key in one round: making multilinear maps practical

Adeline Langlois, LASEC, EPFL, Lausanne, Suisse
mercredi 18 mars 2015

Multilinear maps have become versatile tools for designing cryptographic schemes since a first approximate realisation candidate was proposed by Garg, Gentry and Halevi (GGH). In this talk, I will introduce the more efficient variant GGHLite. Then I will present the improvements we made to provide an efficient implementation of GGHLite.

While GGHLite represents an asymptotic improvement in parameter sizes compared to GGH, implementing it naively would still not allow one to instantiate it for non-trivial parameter sizes. We hence propose a strategy which reduces parameter sizes further, together with several technical improvements that allow for an efficient implementation. To demonstrate the performance of our results, we also provide an implementation of a non-interactive N-partite Diffie-Hellman key exchange and report on experimental results. For example, thanks to our improvements we were able to compute a multilinear map of multilinearity level $\kappa$ at security level $\lambda=80$.

This is a joint work with M. Albrecht, C. Cocis and F. Laguillaumie.

Sparse Gröbner Bases: the Unmixed Case

Pierre-Jean Spaenlehauer, LORIA, Nancy
mercredi 4 mars 2015

We present an analog of Gröbner bases for semigroup algebras, and we propose variants of the algorithms F5 and FGLM to compute them. These algorithms provide tools to compute the solutions of sparse systems of equations when all the polynomials share the same monomial support (unmixed case). When these monomials correspond to the points with integer coordinates in a normal lattice polytope and under regularity assumptions, we prove complexity bounds which depend on the combinatorial properties of this polytope. Our prototype "proof-of-concept" implementation shows speed-ups (more than 100 for some examples) compared to classical Gröbner bases software for some families of sparse systems.

Joint work with Jean-Charles Faugère and Jules Svartz.

Complexity and decomposition of the set of factors (joint work with J. Cassaigne, A. Frid and L. Q. Zamboni)

Svetlana Puzynina, Post-doc, Lyon et Sobolev institute, Novosibirsk
mercredi 25 février 2015

In this talk we introduce a new hierarchy of classes of languages and infinite words and its connection with complexity classes. Namely, we say that a language belongs to the class $L_k$ if each word in it can be represented as a concatenation of $k$ words from a language $S$ such that the number of words of length $n$ in $S$ is bounded by a constant. The class of infinite words whose set of factors is in $L_k$ is denoted by $W_k$. In this talk we focus on the relations between the classes $W_k$ and the subword complexity of infinite words, which is, as usual, defined as the number of factors of length $n$ of the word. In particular, we prove that the class $W_2$ coincides with the class of infinite words of linear complexity. On the other hand, although the class $W_k$ is included in the class of words of complexity $O(n^{k-1})$, this inclusion is strict for $k > 2$.
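As a toy illustration of subword complexity (not of the $L_k$ hierarchy itself), one can count the distinct factors of each length in a long enough prefix of an infinite word; for a Sturmian word such as the Fibonacci word the count is exactly $n+1$. A small sketch; the function names and the prefix length are arbitrary choices:

```python
def subword_complexity(w, n):
    """Number of distinct factors (contiguous subwords) of length n of w."""
    return len({w[i:i + n] for i in range(len(w) - n + 1)})

def fibonacci_word(k):
    """Prefix of the Fibonacci word: iterate s_{n+1} = s_n + s_{n-1}."""
    u, v = "0", "01"
    for _ in range(k):
        u, v = v, v + u
    return v
```

On a prefix of length 2584 (`fibonacci_word(15)`), `subword_complexity` returns $n+1$ for every $n$ up to 10, matching the linear-complexity behaviour that characterizes the class $W_2$ above.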

Computing roots of polynomials with coefficients in a finite field

Bruno Grenet, ECO, LIRMM
mercredi 11 février 2015

Finding the roots of a polynomial with coefficients in a finite field is a fundamental problem of computer algebra. It serves as a building block for many algorithms in the area (factorization, interpolation, ...) and has applications, for instance, in cryptography and error-correcting codes. It is also one of the rare problems known to be solvable in probabilistic polynomial time but not in deterministic polynomial time.

In my talk, I will present the main known families of algorithms for this problem, before focusing on a new approach that we recently proposed. This approach is based on the Graeffe transform, a tool from numerical analysis, and is particularly well suited to certain finite fields. It allows us to obtain new deterministic complexity bounds for this problem, as well as an implementation in the free software Mathemagix that is faster than existing software.
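To give the flavour of the tool: one Graeffe step maps a polynomial $f$ over $\mathbf{F}_p$ to a polynomial whose roots are the squares of the roots of $f$, via the identity $g(x^2) = (-1)^{\deg f}\, f(x)\, f(-x)$. A naive dense sketch for illustration only, not the fast algorithm of the talk; function names are hypothetical:

```python
def polmul(f, g, p):
    """Product of two polynomials mod p (coefficient lists, low degree first)."""
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % p
    return h

def graeffe(f, p):
    """One Graeffe step over F_p: returns g with roots {r^2 : f(r) = 0}.

    Uses g(x^2) = (-1)^deg(f) * f(x) * f(-x); the product has only
    even-degree terms, which become the coefficients of g.
    """
    d = len(f) - 1
    f_minus = [(c if i % 2 == 0 else -c) % p for i, c in enumerate(f)]  # f(-x)
    h = polmul(f, f_minus, p)
    sign = 1 if d % 2 == 0 else p - 1
    return [(sign * c) % p for c in h[::2]]
```

For example, with $f = (x-2)(x-3) = x^2 + 2x + 6$ over $\mathbf{F}_7$, one step yields $x^2 + x + 1$, whose roots $2$ and $4$ are the squares of $3$ and $2$ modulo $7$.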

Joint work with Grégoire Lecerf and Joris van der Hoeven.

Bisimulation games in modal logic

Valentin Shehtman, IITP, Moscow, Russia
mercredi 4 février 2015

Bisimulation games are modal logic analogues of classical Ehrenfeucht-Fraïssé games. They are closely related to bisimulations of Kripke models (transition systems) and can be used to study the expressive power of different logical languages and to prove completeness and finite model property theorems for logics and theories. In this talk we show how bisimulation games can simplify proofs of local finiteness of nonclassical propositional logics (or the corresponding varieties of algebras); a logic $L$ is called locally finite if for any finite $n$ there are only finitely many formulas in $n$ variables up to equivalence in $L$. Well-known results of this kind are the theorems of Segerberg-Maksimova and Kuznetsov-Komori on local finiteness of modal and intermediate logics of finite depth. In this talk we propose various generalizations of these theorems and discuss further lines of research.

Rank-metric codes: duality and MacWilliams identities

Alberto Ravagnani, Université de Neuchâtel (Suisse)
mercredi 21 janvier 2015

Delsarte rank-metric codes are linear spaces of matrices of bounded rank and entries in a given finite field. A different definition of rank-metric code was proposed later by Gabidulin. Both Delsarte and Gabidulin codes have been recently applied in linear network coding for error correction in multicast communications. Gabidulin codes also have applications in (information theoretic) security. In this talk we compare the duality theories of Delsarte and Gabidulin codes, proving that the former can be regarded as a generalization of the latter. We also show how one can derive MacWilliams identities (explicit relations between a code and its dual) for Delsarte codes using a double-counting argument. The original proof by Delsarte is based on the theory of association schemes and regular semilattices. The result also applies to Gabidulin codes. As an application of the results to a classical problem in enumerative combinatorics, we provide a recursive formula for the number of rectangular matrices of given size, rank and h-trace over a finite field.

Computing modular polynomials in dimensions 1 and 2

Enea Milio, IMB, Université de Bordeaux 1
mercredi 17 décembre 2014

A modular polynomial of level $p$ in dimension 1 is a polynomial that parametrizes isomorphism classes of elliptic curves equipped with an isogeny of degree $p^2$. These polynomials have cryptographic applications, notably because they make it possible to construct elliptic curves with a prescribed number of points and to compute isogeny graphs. I will first recall the various notions involved in the definition of these polynomials and then describe an evaluation/interpolation algorithm to compute them. I will then give a generalization of this notion in dimension 2 and describe some results we have obtained.

On palindromic length

Gwenaël Richomme, Escape, LIRMM
mercredi 10 décembre 2014

The palindromic length of a finite word is the smallest number of factors in a factorization of this word into palindromes.
A. E. Frid, S. Puzynina and L. Q. Zamboni conjectured that every infinite word whose factors have bounded palindromic length is ultimately periodic. We will review various aspects of this conjecture and prove it in a particular case, where the palindromic length is replaced by a version computed greedily.
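The exact palindromic length of a finite word can be computed by a simple quadratic dynamic program (a naive sketch for illustration, not a method from the talk; the greedy variant mentioned above would instead repeatedly strip a longest palindromic factor):

```python
def palindromic_length(w):
    """Minimum number of palindromes whose concatenation equals w."""
    n = len(w)
    INF = float("inf")
    pl = [0] + [INF] * n  # pl[i] = palindromic length of the prefix w[:i]
    for i in range(1, n + 1):
        for j in range(i):
            s = w[j:i]
            if s == s[::-1]:  # w[j:i] is a palindrome
                pl[i] = min(pl[i], pl[j] + 1)
    return pl[n]
```

For instance, `palindromic_length("abaab")` is 2, via the factorization a·baab. The conjecture asks whether `palindromic_length` can stay bounded on all factors of an infinite word that is not ultimately periodic.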

On the discrete compressed sensing problem

Grigori Kabatianski, Institute for Information Transmission Problems, Moscow
mercredi 26 novembre 2014

The compressed sensing (CS) problem can be formulated in the following way [1,2]: recover a real vector $x \in \mathbf{R}^n$ from a minimal number of linear measurements, i.e. from the equation $s = Hx^T + e$, where the vector $x$ is $T$-sparse, i.e. $||x||_0 \leq T$, and the measurement error vector $e$ is small enough. The discrete version of the CS problem replaces the requirement that $||e||_2$ be small by the requirement that the vector $e$ be $L$-sparse. A particular case of this problem is, for instance, the famous Erdős-Ulam problem of "searching with a lie", also known as the "problem of 20 questions" [3]. In the talk we give a solution of the discrete CS problem in the same way as RS codes solve the main coding theory problem in the cases when the code length is at most the field's cardinality. It is worth mentioning that RS codes are not good for the discrete CS problem, as they require redundancy at least $4TL$; see the modern analysis of Prony's algorithm [5]. We have found an optimal solution which uses only $2(T+L)$ redundancy. In conclusion, it will be shown how these results can be applied to a restricted version of the ordinary CS problem. The talk is based on joint research of the speaker with Serge Vladuts, Cedric Tavernier and Valery Lomakov.

References

[1] D. L. Donoho, "Compressed sensing", IEEE Transactions on Information Theory, v. 52 (4), pp. 1289-1306, 2006.

[2] E. J. Candès, T. Tao, "Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?", IEEE Transactions on Information Theory, v. 52 (12), pp. 5406-5425, 2006.

[3] A. Pelc, "Solution of Ulam's Problem on Searching with a Lie", Journal of Combinatorial Theory, Ser. A, v. 44, pp. 129-140, 1987.

[4] R. Prony, "Essai expérimental et analytique sur les lois de la dilatabilité de fluides", J. de l'École Polytechnique, v. 1, pp. 24-76, 1795.

[5] M. T. Comer, E. L. Kaltofen, C. Pernet, "Sparse Polynomial Interpolation and Berlekamp-Massey Algorithms That Correct Outlier Errors in Input Values", in Proc. ISSAC 2011, pp. 138-145.

Randomizing Scalar Multiplication Using Exact Covering Systems of Congruences

Laurent Imbert, LIRMM, CNRS
mercredi 19 novembre 2014

Since the pioneering work of Kocher, side-channel attacks have been taken very seriously by researchers and designers of cryptographic algorithms. By attacking implementations instead of the underlying mathematical problems, these attacks have proved very powerful against both private- and public-key algorithms.

In this talk, I will briefly sketch the basics of these attacks and present some common countermeasures. I will then present a novel family of scalar multiplication algorithms (used intensively in elliptic curve cryptography) based on covering systems of congruences. At the lower level, the algorithms are very regular; they consist of a repeating sequence of identical instructions. At a higher level, however, they are uniformly randomized and thus behave very irregularly, making simple, differential and horizontal attacks much more difficult.

This is a joint work with Eleonora Guerrini (LIRMM) and Theo Winterhalter (ÉNS Cachan).

Turing degrees of minimal subshifts

Pascal Vannier, LACL (Paris-Est Créteil)
mercredi 12 novembre 2014

Subshifts are sets of colorings of $\mathbf{Z}^d$ that are closed and invariant under translation, or, equivalently, definable by a family of forbidden patterns. A subshift is minimal if it properly contains no other subshift; every subshift therefore contains a minimal subshift. A minimal subshift has the particular property that all its elements have the same patterns, and that for every pattern of size $n$ appearing in some configuration there exists an $N$ such that this pattern appears in every window of size $N$ of every configuration. We are interested here in the computational structure of these subshifts, and in particular in the structure of their sets of Turing degrees.

Entropy compression and shuffle squares

Pascal Ochem, AlGCo, LIRMM
mercredi 22 octobre 2014

Daniel Gonçalves, Mickael Montassier and Alexandre Pinlou developed the use of entropy compression for graph coloring. We study this method for avoidability problems, i.e., proving that there exists an infinite word avoiding certain forbidden factors over an alphabet as small as possible.

We show that shuffle squares are avoidable over an alphabet of size 7, and that caterpillars admit a nonrepetitive list coloring with lists of size 5. Finally, we conjecture that this method is always at least as powerful as a competing method using a generating series.
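A shuffle square is a word obtained by interleaving two copies of the same word (e.g. aabb interleaves ab with ab). A brute-force recognizer, exponential in the worst case but fine for small words; this is an illustration of the object being avoided, unrelated to the entropy-compression proof, and the function name is hypothetical:

```python
from functools import lru_cache

def is_shuffle_square(w):
    """Is w an interleaving of two identical subsequences?

    State: position in w plus the block of letters that the 'ahead' copy
    has committed and the 'behind' copy has not yet matched.
    """
    if len(w) % 2:
        return False

    @lru_cache(maxsize=None)
    def go(i, pending):
        if i == len(w):
            return pending == ""
        c = w[i]
        # give w[i] to the ahead copy: it commits one more letter
        if go(i + 1, pending + c):
            return True
        # give w[i] to the behind copy: it must match the oldest pending letter
        return pending != "" and pending[0] == c and go(i + 1, pending[1:])

    return go(0, "")
```

For instance, aabb and abab are shuffle squares while abba is not; the avoidability result above says some infinite word over 7 letters has no factor accepted by this test.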

Extending a notion of quasiperiodicity to two dimensions

Guilhem Gamard, Escape, LIRMM
mercredi 8 octobre 2014

A word is quasiperiodic if it can be covered by occurrences of another word, called a quasiperiod. For example, ababaabababa admits aba and ababa as quasiperiods. Moreover, a word is multi-scale quasiperiodic if it has infinitely many quasiperiods.
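For finite words the covering condition is easy to check directly: the quasiperiod must occur at position 0, end the word with its last occurrence, and consecutive occurrence starts must be at most its length apart. A small sketch with a hypothetical helper name:

```python
def is_quasiperiod(w, q):
    """Check that occurrences of q cover every position of w."""
    starts = [i for i in range(len(w) - len(q) + 1) if w[i:i + len(q)] == q]
    if not starts or starts[0] != 0 or starts[-1] != len(w) - len(q):
        return False
    # consecutive occurrences must overlap or be adjacent
    return all(b - a <= len(q) for a, b in zip(starts, starts[1:]))
```

On the example above, both `is_quasiperiod("ababaabababa", "aba")` and `is_quasiperiod("ababaabababa", "ababa")` hold.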

First, we will recall how, in the context of right-infinite words, these notions of quasiperiodicity and multi-scale quasiperiodicity relate to other classical concepts of combinatorics on words, such as uniform recurrence and factor complexity.

Second, we will generalize these notions to two-dimensional infinite words. We will then study which results from the first part carry over to dimension two, and explain why some results do not generalize.

(Gaussian) Shift Radix Systems - Introduction and Results

Mario Weitzer, Montanuniversität Leoben, Autriche
mercredi 3 septembre 2014

Shift Radix Systems (SRS) turned out to be the missing link between two generalizations of positional notation systems - Beta-Expansions and Canonical Number Systems (CNS) - which have been studied extensively during the last decade but still leave many open questions and unsolved problems. Unlike positional notation systems, where every integer greater than 1 can serve as a base, in the general cases only the elements of complicated sets satisfy certain natural finiteness conditions. An introduction to SRS is given, and results related to the characterization of such sets are presented. These results settle two previously open topological questions: a specific set is shown to be disconnected, and so is its complement. For another set, known as Pethö's Loudspeaker, a conjecture on its shape is formulated and proved to some extent.