ECO
Exact Computing

The ECO seminar takes place in Building 4 of the LIRMM or online, on Tuesdays at 3:30 pm or on Friday afternoons.

Topics:

Contact: Romain Lebreton



Some faster algorithms for symmetric polynomial systems

Thi Xuan Vu, Johannes Kepler University
Monday, 22 April 2024

Many important polynomial systems have additional structure: for
example, their generating polynomials may be invariant under the action
of the symmetric group. In this talk we consider two questions for such
systems. The first focuses on computing the critical points of a
polynomial map restricted to an algebraic variety, a problem which
appears in many application areas including polynomial optimization
and real algebraic geometry. Our second problem is to decide the
emptiness of algebraic varieties over real fields, a starting point
for many computations in real algebraic geometry. In both cases we
provide efficient probabilistic algorithms which take advantage of
the special invariant structure. In particular in both instances our
algorithms obtain their efficiency by reducing the computations to
ones over the symmetric group orbits and make use of tools such as
weighted polynomial domains and symbolic homotopy methods.

This is joint work with J.-C. Faugère, G. Labahn,
C. Riener, M. Safey El Din and É. Schost.

Plug-and-play sanitization for TFHE

Malika Izabachène, École Polytechnique
Friday, 5 April 2024

Fully Homomorphic Encryption allows evaluating an arbitrary function over encrypted data while preserving the privacy of the messages. This property has found numerous applications, especially when one wants to process data stored in the cloud in a private way. In this talk, I will focus on the privacy of the algorithm run by the cloud. Fully Homomorphic Encryption sanitization guarantees that all information about how a ciphertext was obtained is destroyed, except for the associated message. In particular, it is impossible to tell which computation was performed to obtain a sanitized ciphertext, even with knowledge of the secret key. We will see how to build a sanitization algorithm from the TFHE bootstrapping (Asiacrypt 2016) and how it compares to the earlier soak-and-spin strategy proposed by Ducas and Stehlé (Eurocrypt 2016).

This is joint work with Florian Bourse.

Publicly Verifiable Zero-Knowledge and Post-Quantum Signatures From VOLE-in-the-Head

Lennart Braun, Aarhus University
Wednesday, 20 March 2024

We present a new method for transforming zero-knowledge protocols in the
designated verifier setting into public-coin protocols, which can be
made non-interactive and publicly verifiable. Our transformation applies
to a large class of ZK protocols based on oblivious transfer. In
particular, we show that it can be applied to recent, fast protocols
based on vector oblivious linear evaluation (VOLE), with a technique we
call VOLE-in-the-head, upgrading these protocols to support public
verifiability. Our resulting ZK protocols have linear proof size, and
are simpler, smaller and faster than related approaches based on
MPC-in-the-head.

To build VOLE-in-the-head while supporting both binary circuits and
large finite fields, we develop several new technical tools. One of
these is a new proof of security for the SoftSpokenOT protocol (Crypto
2022), which generalizes it to produce certain types of VOLE
correlations over large fields. Secondly, we present a new ZK protocol
that is tailored to take advantage of this form of VOLE, which leads to
a publicly verifiable VOLE-in-the-head protocol with only 2x more
communication than the best, designated-verifier VOLE-based protocols.
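
For readers unfamiliar with the object at the heart of these protocols: a (random) VOLE correlation over a field gives the prover a pair of vectors (u, w) and the verifier a scalar Δ and a vector v such that w_i = u_i·Δ + v_i. The sketch below illustrates the correlation itself under this standard definition; the modulus and sizes are toy choices of ours, and this is of course not how the protocols actually generate it.

```python
import random

# Toy sketch of a random VOLE correlation over F_p: the prover ends up with
# (u, w) and the verifier with (delta, v) such that w_i = u_i*delta + v_i.
# This illustrates the correlation only, not the generation protocol.
p = 2**31 - 1   # toy prime; the talk's protocols support binary and large fields

def sample_vole(n):
    delta = random.randrange(p)
    u = [random.randrange(p) for _ in range(n)]
    v = [random.randrange(p) for _ in range(n)]
    w = [(ui * delta + vi) % p for ui, vi in zip(u, v)]
    return (u, w), (delta, v)

(u, w), (delta, v) = sample_vole(8)
assert all(wi == (ui * delta + vi) % p for ui, wi, vi in zip(u, w, v))
```

The point of the correlation is that Δ acts as a global MAC key: w commits the prover to u linearly, which is what the ZK protocols exploit.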

We analyze the soundness of our approach when made non-interactive using
the Fiat-Shamir transform, using round-by-round soundness. As an
application of the resulting NIZK, we present FAEST, a post-quantum
signature scheme based on AES. FAEST is the first AES-based signature
scheme to be smaller than SPHINCS+, with signature sizes between 5.6 and
6.6kB at the 128-bit security level. Compared with the smallest version
of SPHINCS+ (7.9kB), FAEST verification is slower, but the signing times
are between 8x and 40x faster.

This is joint work with Carsten Baum, Cyprien Delpech de Saint Guilhem,
Michael Klooß, Emmanuela Orsini, Lawrence Roy, and Peter Scholl.
- https://eprint.iacr.org/2023/996 (Crypto'23)
- https://faest.info/ (NIST PQC Submission)

On the new generation of symmetric primitives: the AOP (Arithmetization-Oriented Primitives)

Clémence Bouvier, Universität Bochum
Friday, 15 March 2024

Recently, new symmetric primitives have been proposed for advanced protocols such as Multi-Party Computation (MPC), in combination with Fully Homomorphic Encryption (FHE), or in various Zero-Knowledge (ZK) proof systems. In such protocols, there is a need to minimize the number of multiplications performed by the primitive in large finite fields. Traditional symmetric algorithms, like AES, are therefore ill-suited to this context, and the advanced protocols need to be combined with new symmetric primitives with special properties, the so-called "Arithmetization-Oriented Primitives" (AOP).

In this talk, we will present new tools for the cryptanalysis and the design of these new primitives. While the number of AOPs is increasing significantly, only a few cryptanalysis works have been proposed. Therefore, we will propose cryptanalysis tools from both a theoretical and practical perspective. We will analyze one of the first block ciphers proposed in this new context, namely MiMC, by giving a detailed understanding of the evolution of the algebraic degree of this cipher. We will also suggest some tricks for practical algebraic attacks against some AOPs. Finally, we will move to the designer's side and present a new family of AOPs, Anemoi, and its main component, the Flystel, exploiting a previously unknown connection with CCZ-equivalence.
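
Since MiMC is analyzed in the talk, a toy sketch of its round structure may help: MiMC iterates the map x ↦ (x + k + c_i)^3 over a prime field, so one cubing per round drives the growth of the algebraic degree studied in the talk. The prime, round count and round constants below are illustrative placeholders of ours, not real MiMC parameters.

```python
# Toy MiMC-style cipher over F_p: each round maps x -> (x + k + c_i)^3,
# followed by a final key addition.  Toy prime and round count only.
p = 999983                    # prime with gcd(3, p - 1) = 1, so x -> x^3 permutes F_p
rounds = 10                   # far fewer rounds than a secure instance
constants = [pow(7, i, p) for i in range(rounds)]   # arbitrary toy round constants

def mimc(x, k):
    for c in constants:
        x = pow((x + k + c) % p, 3, p)   # the single cubing per round
    return (x + k) % p

d = pow(3, -1, p - 1)         # inverse exponent: cube roots in F_p

def mimc_inverse(y, k):
    # undo the rounds in reverse order using x -> x^d as the cube root
    y = (y - k) % p
    for c in reversed(constants):
        y = (pow(y, d, p) - k - c) % p
    return y

assert mimc_inverse(mimc(123456, 42), 42) == 123456
```

Because gcd(3, p-1) = 1, cubing is a bijection on F_p, which is what makes the toy cipher invertible.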

How to Compress Encrypted Data

Mark Simkin, Ethereum Foundation
Wednesday, 13 March 2024

Consider the following simple problem:
Given an (additively homomorphic) encrypted vector of length n and with at most t non-zero entries, by how much and how efficiently can this vector be compressed?
This problem is at the core of many different topics, such as encrypted search protocols or anonymous message delivery systems.

In our work, we present multiple solutions to this problem.
We first show that one can compress the whole vector into O(t) ciphertexts at the cost of slightly higher computational complexities during (de)compression.
Our main result is a new algorithm that compresses the vector into O(t * c / log t) ciphertexts, where 2^{-c} is the correctness error, while allowing for extremely efficient (de)compression algorithms.
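
To see why such compression is possible at all, here is a minimal sketch of our own (an illustration, not the paper's construction): a vector with at most t non-zero entries is determined by 2t power-sum syndromes, and these syndromes are linear in the vector, hence computable under any additively homomorphic encryption without decrypting.

```python
# A t-sparse length-n vector is determined by 2t "power sum" syndromes
# s_j = sum_i x_i * alpha_i**j, which are *linear* in x: a server holding
# additively homomorphic ciphertexts of x can therefore compute encryptions
# of the 2t syndromes.  Toy parameters; the talk's schemes achieve much
# better compression/decompression trade-offs.
p = 10007                      # toy prime modulus
n, t = 16, 3
alpha = list(range(1, n + 1))  # distinct evaluation points mod p

def compress(x):               # 2t values instead of n
    return [sum(xi * pow(a, j, p) for xi, a in zip(x, alpha)) % p
            for j in range(2 * t)]

# Linearity = homomorphic compatibility: compressing a sum of vectors gives
# the sum of the compressions.
x = [0] * n; x[2], x[9] = 5, 11
y = [0] * n; y[9], y[14] = 3, 7
assert compress([(a + b) % p for a, b in zip(x, y)]) == \
       [(a + b) % p for a, b in zip(compress(x), compress(y))]
```

Decompression then recovers the support and values from the 2t syndromes (Prony/Berlekamp-Massey style), which is a standard fact about power sums; we omit it here for brevity.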

Joint work with Nils Fleischhacker and Kasper Green Larsen

A discussion of the Knapsack Generator

Florette Martinez, LIP6
Monday, 5 February 2024

The Knapsack Generator, proposed in 1985 by Rueppel and Massey, is a pseudorandom number generator (PRNG) that combines a weak PRNG, an LFSR, with a hard problem, the subset sum problem, derived from the knapsack problem.
This generator was successfully attacked by Knellwolf and Meier in 2011. I will discuss a new analysis of this attack, explaining why it works so well, and the various attacks that I proposed with Damien Vergnaud and Charles against variants of this generator.
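
A toy sketch of the generator's structure: an LFSR produces control bits, each output is a subset sum of secret weights modulo 2^n, truncated by dropping some low-order bits. The weights, taps and truncation below are made-up toy choices of ours; the original design and the attacked instances use concrete, larger parameters.

```python
# Toy knapsack generator in the spirit of Rueppel-Massey.
n = 16
mask = (1 << n) - 1
weights = [0x9A3F, 0x1C07, 0x55AA, 0x0F0F, 0x8001, 0x7E21, 0x3C96, 0x24B5,
           0x6D1E, 0x4F72, 0x0BAD, 0x5EE2, 0x1234, 0x7777, 0x2AC9, 0x60D3]

def lfsr_bits(state, taps=(0, 2, 3, 5), length=n):
    # Fibonacci LFSR over GF(2) with (toy, assumed) feedback taps
    while True:
        yield state & 1
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (length - 1))

def knapsack_stream(seed, nout, drop=4):
    bits = lfsr_bits(seed)
    window = [next(bits) for _ in range(n)]
    out = []
    for _ in range(nout):
        s = sum(w for u, w in zip(window, weights) if u) & mask  # subset sum mod 2^n
        out.append(s >> drop)               # truncate low-order bits
        window = window[1:] + [next(bits)]  # slide the LFSR bit window
    return out
```

The truncation is what makes attacks non-trivial: the adversary sees only partial information about each subset sum.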

Variants of the Decoding Problem and algebraic cryptanalysis

Pierre Briaud, Simula UiB
Monday, 29 January 2024

The intractability of decoding generic linear codes is at the core of an important branch of post-quantum cryptography.
In this context, the code is random by design or it is assumed to be so in the security reduction.

This talk will focus on versions of the Decoding Problem where the error vector is structured, in general to achieve better performance.
While combinatorial techniques such as Information Set Decoding are often the method of choice to attack these versions, I will describe the potential of algebraic algorithms.

I will mostly consider the Regular Syndrome Decoding Problem and a paper presented at Eurocrypt 2023.
I will also mention ongoing work on an assumption used in the CROSS submission to the new call for signature schemes launched by NIST.

Secure Computations on Shared Polynomials and Application to Private Set Operations

Lucas Ottow, LIRMM
Tuesday, 12 December 2023

The aim of secure multi-party computation protocols is to allow a set of players to compute a given function on their secret inputs without revealing any information beyond the result of the computation. In this work, we focus on the design of secure multi-party protocols for shared polynomial operations. We consider the classical model where the adversary is honest-but-curious, and where the coefficients (or any secret values) are either encrypted using an additively homomorphic encryption scheme, or shared using a threshold linear secret-sharing scheme. Our protocols terminate after a constant number of rounds and minimize the number of secure multiplications.

In their seminal article at PKC 2006, Mohassel and Franklin proposed constant-round protocols for the main operations on (shared) polynomials. In this work, we improve the fan-in multiplication of nonzero polynomials, the multi-point polynomial evaluation and the polynomial interpolation (on secret points) to reach a quasi-linear complexity (instead of quadratic in the article of Mohassel and Franklin) in the degree of the shared input/output polynomials. Sharing polynomials is a core component in the design of multi-party protocols for privacy-preserving operations on private sets, like private disjointness test or private set intersection. Using our new protocols, we are able to improve the complexity of such protocols and to design the first variant which always returns a correct result.
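
One reason linear secret sharing combines well with polynomial arithmetic is that linear maps on shared coefficients need no interaction: each party applies the map to its own shares. A minimal 3-party sketch with additive sharing, evaluating a shared-coefficient polynomial at a public point (our own illustration; the talk's protocols cover far more, e.g. secret evaluation points and interpolation):

```python
import random

# Additive secret sharing over F_p: a secret is split into `parties` random
# shares summing to it.  Evaluating a shared polynomial at a *public* point
# is linear in the coefficients, so each party evaluates its shares locally.
p = 2**31 - 1
parties = 3

def share(secret):
    parts = [random.randrange(p) for _ in range(parties - 1)]
    return parts + [(secret - sum(parts)) % p]

def local_eval(my_coeff_shares, x):
    # Horner evaluation applied to one party's shares only
    acc = 0
    for c in reversed(my_coeff_shares):
        acc = (acc * x + c) % p
    return acc

coeffs = [5, 0, 7, 3]                  # f(X) = 5 + 7X^2 + 3X^3
shared = [share(c) for c in coeffs]    # shared[i][j] = party j's share of c_i
x = 11
partial = [local_eval([shared[i][j] for i in range(len(coeffs))], x)
           for j in range(parties)]
assert sum(partial) % p == (5 + 7 * x**2 + 3 * x**3) % p
```

The expensive steps, and the subject of the protocols above, are the non-linear ones: multiplying shared polynomials, or evaluating at a point that is itself secret.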

Decoding of Interleaved Rational Evaluation-Interpolation Codes

Matteo Abbondati, LIRMM
Tuesday, 28 November 2023

The evaluation-interpolation scheme is pervasive in the context of error-correcting codes (ECC). The most prominent illustration of this phenomenon is the well-known Reed-Solomon codes, wherein the encoding is given by the evaluation of a bounded-degree polynomial in Fq[x] at n distinct points, and the decoding is obtained through an interpolation step utilizing the Chinese Remainder Theorem in the polynomial context.
The same framework can be applied to integers, where the evaluation is accomplished via reduction modulo n distinct prime numbers and the interpolation via the usual Chinese Remainder Theorem, resulting in the so-called Chinese Remainder Codes. Despite adhering to a similar algebraic framework in their definitions, there are inherent distinctions between the latter and their polynomial counterparts, rendering their study more demanding. Nevertheless, the natural tendency is to try and extend the ideas from the decoding algorithms of RS codes to the integer case.
The algebraic structure of the rings (Fq[x], or Z) used in the definition of these codes allows an extension to the rational case, considering the corresponding fields of fractions (Fq(x), or Q). For the resulting rational codes, the correction of errors beyond the unique decoding radius using interleaving techniques can be rephrased as an instance of simultaneous rational reconstruction with errors. Studying these problems, we are able to adapt the same analysis to both the polynomial and the integer cases, obtaining almost optimal bounds on the failure probabilities of the corresponding decoding algorithms.
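
The integer side of the evaluation-interpolation scheme can be sketched in a few lines. This is the error-free toy version only; handling errors, interleaving and the rational variants is precisely the subject of the talk.

```python
# Toy Chinese Remainder Code over the integers: a message is encoded by its
# residues modulo distinct primes and recovered by CRT interpolation from
# enough residues.  Error-free illustration only.
primes = [101, 103, 107, 109, 113]

def encode(m):
    return [m % q for q in primes]

def crt(residues, moduli):
    # incremental CRT: maintain m modulo the product of the moduli seen so far
    m, q = residues[0], moduli[0]
    for r, qi in zip(residues[1:], moduli[1:]):
        t = ((r - m) * pow(q, -1, qi)) % qi   # solve m + q*t ≡ r (mod qi)
        m, q = m + q * t, q * qi
    return m % q

msg = 98765432
assert crt(encode(msg), primes) == msg
assert crt(encode(msg)[:4], primes[:4]) == msg   # 98765432 < 101*103*107*109
```

Here any subset of residues whose moduli multiply to more than the message suffices, which is the integer analogue of interpolating a low-degree polynomial from enough evaluations.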

Algorithms for Drinfeld Modules

Joseph Musleh, University of Waterloo
Tuesday, 14 November 2023

Drinfeld modules play a fundamental role in the theory of function fields analogous to that of elliptic curves for number fields. As is the case with elliptic curves, an important invariant associated to any Drinfeld module is the characteristic polynomial of its Frobenius endomorphism, which characterizes isogeny classes. The characteristic polynomial also plays a role in routines for computing the endomorphism ring of a Drinfeld module, and in factoring algorithms based on Drinfeld modules. The analogous problem for elliptic curves has been well-studied due to its role in point counting, with several co-existing algorithms including Schoof's (1985), Satoh's (2000), and Kedlaya's (2001).

In this talk, we will give an introduction to the theory of Drinfeld modules, and an overview of recent work designing and analyzing several algorithms for computing the characteristic polynomial inspired by classical methods due to Schoof and Kedlaya. We will also discuss recent work towards developing an implementation of Drinfeld modules in SageMath.

Chudnovsky-type algorithms on the projective line for multiplication in finite fields: what's new?

Bastien Pacifico, LIRMM
Tuesday, 26 September 2023

There are several models for measuring the complexity of a multiplication algorithm in a finite extension of a finite field. The bilinear complexity of such an algorithm is the number of bilinear multiplications it performs, i.e. multiplications depending on both of the elements being multiplied.

The method of D.V. and G.V. Chudnovsky (1988) makes it possible to construct algorithms with good bilinear complexity. These are evaluation/interpolation algorithms that use the rational points of algebraic curves, i.e. the rational places of a function field. The method has since been generalized in several directions, notably to evaluation at places of arbitrary degree.

In this talk, we will review the recursive construction of Chudnovsky-type algorithms over the field of rational functions, already introduced in a previous seminar. We will then discuss the work done since, concerning the optimization of these algorithms, their extension to evaluation with multiplicity, and their use to obtain families of algorithms that are constructible in polynomial time and whose bilinear complexity is linear in the degree of the extension.
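
The notion of bilinear complexity can be illustrated on the smallest possible example: multiplying two degree-1 polynomials naively uses 4 multiplications that mix the two inputs, while Karatsuba's identity uses 3. This is a classical warm-up, not the Chudnovsky-Chudnovsky construction itself, which reaches linear bilinear complexity via evaluation/interpolation on function fields.

```python
# Multiplying a0 + a1*X by b0 + b1*X with 3 bilinear multiplications
# instead of the schoolbook 4 (Karatsuba's identity).
def karatsuba_deg1(a0, a1, b0, b1):
    m0 = a0 * b0                    # bilinear multiplication 1
    m2 = a1 * b1                    # bilinear multiplication 2
    m1 = (a0 + a1) * (b0 + b1)      # bilinear multiplication 3
    return (m0, m1 - m0 - m2, m2)   # coefficients of the product

# (3 + 5X)(2 + 7X) = 6 + 31X + 35X^2
assert karatsuba_deg1(3, 5, 2, 7) == (6, 31, 35)
```

The additions and subtractions are "scalar" operations: bilinear complexity counts only the products that depend on both inputs, which is exactly the quantity the Chudnovsky-type constructions minimize.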

Reliable Computations with Non Reliable Gates

Andrei Romashchenko, CNRS, LIRMM
Wednesday, 7 June 2023

We discuss models of reliable data storage and fault-tolerant computations in the presence of local errors. In these models, the input and the output of a storage device or a computational circuit are represented by codewords in an error-correcting code. A circuit is said to compute a function correctly if for every input that is an encoded argument of the function, the output, once decoded, is the value of the function on the given argument. A priori, the entire data is not lost even if a few bits in the codewords are spoiled. So we can preserve the integrity of the initial data and even perform complex computations although a small percentage of gates are corrupted. However, this requires a non-trivial construction: even to simply keep the initial data intact, we must run a permanent self-correcting procedure so that sporadic errors do not destroy, little by little, all the data.

We discuss probabilistic and adversarial models of faults. In the probabilistic model, we suppose that every gate (an elementary processor with a constant size memory) at each step is corrupted with a small probability, and faults of different gates are independent. In the adversarial model, we assume that at each step of computation, a fixed fraction of gates can be corrupted by a malicious adversary. We show how to implement reliable data storage, and even reliable computations of an arbitrary function, in these settings.
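
The need for a permanent self-correcting procedure can be seen in a toy simulation of the probabilistic fault model: a bit stored in a repetition code survives sporadic random flips only if a majority-vote refresh runs regularly, whereas without it the noise accumulates and eventually erases the data. The parameters and the repetition code are our own illustrative choices; the constructions discussed in the talk use far more efficient codes.

```python
import random

# Toy self-correcting storage: one bit stored as n copies, each copy flipped
# independently with probability flip_prob at every step; an optional
# majority-vote refresh restores the codeword after each step.
def run(steps, n=101, flip_prob=0.01, correct=True, seed=1):
    rng = random.Random(seed)
    cells = [1] * n                                  # encoding of the bit 1
    for _ in range(steps):
        cells = [c ^ (rng.random() < flip_prob) for c in cells]
        if correct:                                  # periodic majority refresh
            maj = int(sum(cells) * 2 > n)
            cells = [maj] * n
    return int(sum(cells) * 2 > n)                   # decoded bit

# With correction, the bit survives long runs (failure needs >50 of 101
# simultaneous flips in one step, which is astronomically unlikely).
assert run(2000, correct=True) == 1
```

Without the refresh (`correct=False`), each cell's state drifts towards a fair coin as the flips accumulate, so after enough steps the decoded bit is essentially random.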

We recall remarkable results related to this problem, beginning with the works of J. von Neumann (the 1950s), developed later by R. Dobrushin and S. Ortyukov (the 1970s), N. Pippenger (the 1980s), P. Gacs and D. Spielman (the 1990s), and some more recent results and open problems.

We will discuss constructions based on locally decodable codes and expanders.

Convergence of the number of period sets in strings

Pengfei Wang, LIRMM
Tuesday, 23 May 2023

When a word \(w\) can be written \(uvu\) for some words \(u,v\), then \(u\) is called a \emph{border} of \(w\) and the length of \(uv\) is a \emph{period} of \(w\). A word can have several borders/periods, in which case smaller borders are nested into longer ones. For example, consider the word \(abracadabra\): then, \(abra\) is its longest border and corresponds to period 7, but \(a\) also is a border, corresponding to period 10, and finally the empty word is a trivial border corresponding to a period equal to the length of \(abracadabra\). Importantly, a period is an offset at which two occurrences of the word \(w\) can overlap.
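
The example above can be checked directly from the definition: p is a period of w if and only if w[i] = w[i+p] for every valid i, equivalently, w has a border of length len(w) - p.

```python
# Compute the set of periods of a word straight from the definition.
def periods(w):
    n = len(w)
    return [p for p in range(1, n + 1)
            if all(w[i] == w[i + p] for i in range(n - p))]

# borders "abra", "a" and the empty word give periods 7, 10 and 11
assert periods("abracadabra") == [7, 10, 11]
```

This quadratic check is only for illustration; pattern-matching algorithms compute the border/period structure in linear time.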

The notions of borders and periods are key in word combinatorics, in stringology, and especially in pattern matching algorithms. The set of periods of a word impacts how occurrences of this word can appear in random texts.

Consider words of length \(n\). The set of all periods of a word of length \(n\) is a subset of \(\{1, 2, \ldots, n\}\). However, not every subset of \(\{1, 2, \ldots, n\}\) is a valid set of periods. In a seminal 1981 paper, Guibas and Odlyzko proposed to encode the set of periods of a word into a binary string of length \(n\), called the autocorrelation, in which a one at position \(i\) denotes period \(i\). They considered the question of recognizing a valid period set, and also studied the number of valid period sets for length \(n\), denoted \(\kappa_n\). They conjectured that \(\ln \kappa_n\) asymptotically converges to a constant times \(\ln^2 n\). While improved lower bounds for \(\ln( \kappa_n )/{\ln^2(n)}\) were proposed in 2001, the question of a tight upper bound has remained open since Guibas and Odlyzko's paper. Here, we exhibit an upper bound for this fraction, which implies its convergence, and closes this long-standing conjecture. Moreover, we extend our result to find similar bounds for the number of correlations: a generalization of autocorrelations which encodes the overlaps between two strings.
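
For small \(n\), \(\kappa_n\) can be brute-forced by enumerating binary words and collecting their autocorrelations (by a result in Guibas and Odlyzko's paper, a binary alphabet already realizes every valid autocorrelation):

```python
from itertools import product

# The autocorrelation of w as a 0/1 string: position p is 1 iff shift p is
# a period of w (position 0, the full self-overlap, is always 1).
def autocorrelation(w):
    n = len(w)
    return "".join("1" if all(w[i] == w[i + p] for i in range(n - p)) else "0"
                   for p in range(n))

def kappa(n):
    # number of distinct autocorrelations over all binary words of length n
    return len({autocorrelation(w) for w in product("ab", repeat=n)})

assert [kappa(n) for n in (1, 2, 3, 4)] == [1, 2, 3, 4]
```

The exhaustive enumeration is exponential in \(n\), which is exactly why the asymptotics of \(\kappa_n\) require the kind of analysis developed in the paper.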

This is joint work with Eric Rivals and Michelle Sweering. The paper has been accepted at ICALP 2023 (https://icalp2023.cs.upb.de/accepted-papers/).

Masking Kyber Compression

Laurent Imbert, CNRS, LIRMM
Wednesday, 17 May 2023

In July 2022, NIST completed the third round of the Post-Quantum Cryptography (PQC) standardization process. A total of four proposals were selected for standardization, among which Kyber is the only algorithm in the public-key encryption/key encapsulation mechanism (KEM) category. Kyber's security is based on the hardness of solving the learning-with-errors (LWE) problem over module lattices.

Over the past years, securing the implementations of Kyber against side-channel attacks has been a very hot topic. A generic approach to thwart high-order attacks consists in masking the secret data using secret sharing schemes. In this talk, I will present a novel approach for masking one of Kyber's internal components, namely the compression and decompression functions.

Correlated Pseudorandomness from the Hardness of Decoding Quasi-Abelian Codes

Maxime Bombar, LIX
Wednesday, 19 April 2023

Secure Multiparty Computation can often benefit from using correlated randomness shared between all the parties to achieve better efficiency. A recent paradigm put forth by Boyle et al. showed how Pseudorandom Correlation Generators (PCGs) can be used to generate large amounts of useful correlated (pseudo)randomness, such as random Oblivious Linear Evaluations (OLEs), using minimal interaction followed solely by local computations, yielding silent secure two-party computation protocols. This can be extended to N-party computation using so-called programmable PCGs.

Previous constructions of programmable PCGs for OLE suffer from two downsides: (1) they only generate OLEs over large fields, and (2) they rely on the rather recent splittable Ring-LPN assumption, which lacks strong security foundations.

In this talk, I will present a way to overcome both limitations by introducing the (decisional) Quasi-Abelian Syndrome Decoding Problem, which generalizes both the usual Decoding Problem of random linear codes, as well as the well-known Quasi-Cyclic Decoding Problem used in the NIST code-based submissions BIKE and HQC. I will show that this new problem is resistant to all attacks from the linear test framework (which contains essentially all known generic attacks against the Decoding Problem), and that it admits a search-to-decision reduction. As a by-product, this makes it possible to identify weak instantiations that were allowed in previous frameworks. This new cryptographic problem allows building PCGs for the OLE correlation over any finite field F_q with q > 2. I will also explain why this approach cannot be used over F_2 by giving an impossibility result.

This is joint work with Geoffroy Couteau, Alain Couvreur and Clément Ducros.

Cryptographic Accumulators

Anaïs Barthoulot, Orange
Friday, 14 April 2023

A cryptographic accumulator, introduced in 1993 by Benaloh and De Mare, is a primitive that represents a set by a short value and offers (non-)membership proofs. Through the years, accumulators have been used for multiple purposes, resulting in new properties tailored to individual needs. This is how accumulators became dynamic, universal, or even multiset. Unfortunately, all these new properties and functionalities were added separately, giving rise to several definitions of accumulators, which makes it complicated to get an overview of accumulators and their properties. That is why, in 2015, Derler, Hanser and Slamanig proposed a unified formal model covering most of the existing accumulator properties, and their work became a reference when working with accumulators.

However, since 2015 new relevant properties of accumulators have been introduced, such as the zero-knowledge security property, which extends indistinguishability, or the asynchronous property, which allows witnesses to remain correct after a fixed number of operations on the accumulated set have been carried out, without needing an update. Some functionalities, such as witnesses computed not for a single element but for a subset, or the multiset setting, which allows an element to be accumulated more than once, were not taken into account in the work of Derler et al. Their model has thus become obsolete, and no new model complementing the old one had been proposed. In this presentation, I will give a new unified and up-to-date overview of the cryptographic accumulator primitive. I will also revisit the accumulator of Ghosh et al. to obtain a new one satisfying most of the existing properties.

Two-player boundedness counter games

Edwin Hamel,
Tuesday, 28 March 2023

We consider two-player zero-sum games with winning objectives beyond regular languages, expressed as a parity condition in conjunction with a Boolean combination of boundedness conditions on a finite set of counters which can be incremented and reset to 0, but not tested. A boundedness condition requires that a given counter is bounded along the play. Such games are decidable, though with non-optimal complexity, by an encoding into the logic WMSO with the unbounded and path quantifiers, which is known to be decidable over infinite trees. Our objective is to give tight or tighter complexity results for particular classes of counter games with boundedness conditions, and to study their strategy complexity. In particular, counter games with a conjunction of boundedness conditions are easily seen to be equivalent to Streett games, so they are coNP-complete; moreover, finite-memory strategies suffice for Eve and memoryless strategies suffice for Adam. For counter games with a disjunction of boundedness conditions, we prove that they are solvable in NP ∩ coNP, and in PTime if the parity condition is fixed. In that case memoryless strategies suffice for Eve, while infinite-memory strategies might be necessary for Adam. Finally, we consider an extension of those games with a max operation. In that case, the complexity increases: for conjunctions of boundedness conditions, counter games become EXPTIME-complete.

How fast do you heal? A taxonomy for post-compromise security in secure-channel establishment.

Léo Robert, Université Clermont Auvergne
Wednesday, 22 March 2023

Post-Compromise Security (PCS) is a property of secure-channel establishment schemes, which limits the security breach of an adversary that has compromised one of the endpoints to a certain number of messages, after which the channel heals. An attractive property, especially in view of Snowden's revelations of mass surveillance, PCS features in prominent messaging protocols such as Signal. In this talk, we introduce a framework for quantifying and comparing PCS security, with respect to a broad taxonomy of adversaries. The generality and flexibility of our approach allows us to model the healing speed of a broad class of protocols, including Signal, but also an identity-based messaging protocol named SAID, and even a composition of 5G handover protocols. We also apply the results obtained for this latter example to provide a quick fix, which massively improves its post-compromise security.

Internet of Things: From modern cryptography to post-quantum cryptography

Clementine Gritti, University of Canterbury
Friday, 17 March 2023

In this presentation, I will focus on security and privacy in the Internet of Things (IoT), using modern asymmetric cryptography (e.g. RSA, Diffie-Hellman) and post-quantum cryptography (e.g., based on lattices).

IoT is an environment with its challenges and limitations. Application domains are diverse and heterogeneous (e.g., eHealth versus wearable technology). Smart devices forming IoT networks often have constrained resources in terms of computation, communication, and storage. While IoT has positively impacted our everyday life, security and privacy mechanisms are essential since IoT networks and their devices deal with a huge amount of sensitive data. As a result, cryptographic protocols must be developed adequately, by taking into account the inherent features of such an environment. I will present some cryptographic solutions that were developed during my first postdoctoral contract, using modern cryptosystems (e.g., based on the discrete logarithm).

Nevertheless, the proposed cryptographic solutions rely on mathematical problems that will be solvable by quantum computers. To prevent successful attacks by quantum computers, post-quantum cryptography (e.g., based on lattices) has been proposed. New quantum-resistant algorithms are on their way to standardisation. However, those algorithms bring overheads (e.g., bigger component sizes, heavier computations) that may impede the usability of post-quantum cryptography in the real world. In particular, with the limited resources of smart devices, IoT and post-quantum cryptography may not coexist. I will present some experimental results from implementing post-quantum algorithms in an IoT context, as a way to assess the deployability of post-quantum cryptography in IoT.

The realm of quantifiers

Anela Lolic,
Tuesday, 14 March 2023

In this lecture we classify quantifiers according to non-emptiness (i.e., whether an object is always denoted) and according to dimension. For non-empty quantifiers, Skolemization and some sort of epsilon calculus are always possible. This is not possible for empty quantifiers; however, positive results concerning e.g. the prenex fragment are possible (we choose Gödel quantifiers as the most natural example in this respect). Furthermore, we classify quantifiers by dimension and show that, from a proof-theoretic perspective, global views on proofs which contradict Hilbert's paradigm are necessary. This holds for quantifier macros and a fortiori for (partial calculi for) Henkin quantifiers.

The Domino problem on subshifts of rhombus tilings

Victor Lutfalla, Université de Caen
Tuesday, 21 February 2023

A tiling is a covering of the plane by tiles that do not overlap. A subshift is a set of tilings that is invariant under translation and closed (for the usual topology on tilings). Here we are interested in the case where there are finitely many tiles up to translation, the tiles are rhombi, and whenever two tiles are adjacent they share either a single common vertex or an entire common edge.

On these tilings, one can impose local rules (in the manner of Wang tiles) by adding colors to the edges of the tiles, with the rule that two tiles sharing an edge must carry the same color on that edge. In this case, a single geometric tile may come in several copies with different colors on its edges.

Given a subshift X of rhombus tilings, the Domino problem on X takes as input a finite set of local rules F and returns True if there exists a tiling of X that respects the local rules F, and False otherwise. This problem is co-Recursively-Enumerable-complete, and we will present a many-one reduction from the classical Domino problem on Wang tiles.

LRPC codes with multiple syndromes: near ideal-size KEMs without ideals

Nicolas Aragon, NAQUIDIS Center
Friday, 17 February 2023

The ROLLO encryption scheme, based on low rank parity-check (LRPC) codes, was considered during the NIST post-quantum standardisation process because of its competitive public key and ciphertext sizes, but was not selected for the third round of the process because of new algebraic attacks. Nonetheless, it remains a competitive encryption scheme even considering the new parameters chosen to resist these attacks.

In this work we propose to use the rank support learning (RSL) problem, which was first introduced in the Durandal signature scheme, to improve the performance of LRPC-based cryptosystems. We show how this technique interacts with the decoding algorithm of LRPC codes, and propose a new encryption scheme called LRPC-MS (Low Rank Parity-Check with Multiple Syndromes) with very competitive performance, using either unstructured or ideal matrices.

Log-S-unit lattices using Explicit Stickelberger Generators to solve Approx Ideal-SVP

Andrea Lesavourey, Irisa
Friday, 3 February 2023

In the current search for primitives that can withstand quantum computers, one of the main directions relies on Euclidean lattices, and in particular on the Learning With Errors (LWE) problem. Indeed, there is a worst-case to average-case reduction from the classical lattice problem known as the Shortest Vector Problem (SVP). For efficiency reasons, the schemes under consideration are based on structured versions of LWE, such as Ring-LWE or Module-LWE. There are also worst-case to average-case reductions from these problems to SVP restricted to ideal lattices (Ideal-SVP) and module lattices (Module-SVP), respectively. This is why the analysis of Ideal-SVP has received sustained attention in recent years.

In this talk I will first review Euclidean lattices, lattice-based cryptography, and the main algebraic attacks on Ideal-SVP. I will then describe joint work with Olivier Bernard, Thuong Huy and Adeline Roux-Langlois on the possibility of solving Ideal-SVP in cyclotomic fields.

We use short generators of the Stickelberger ideal to compute, in reasonable time, a subgroup of the log-S-units for fields of dimension up to 200. We also run experiments to evaluate the performance of the Twisted-PHS algorithm in this degraded mode. In particular, our results show that the Twisted-PHS algorithm yields better approximation factors than the approach of Cramer, Ducas and Wesolowski (CDW), even reaching the volumetric lower bound established in DPW (Ducas, Plançon, Wesolowski).

Spectral properties of random Schreier graphs

Geoffroy Caillat-Grenier,
mercredi 18 janvier 2023

The connectivity properties of regular graphs can be studied through the spectrum of the adjacency matrix: a large gap between the two largest eigenvalues of this matrix guarantees a high expansion rate. It turns out that random graphs yield good spectral properties that are useful in applications. We are interested in the spectrum of graphs built from random invertible matrices over a finite field; these are Schreier graphs of the general linear group.
This construction requires less randomness than the classical random graph models for which these spectral properties are known. Moreover, it adapts to the generation of biregular bipartite graphs. We prove an upper bound on the expectation of the second largest eigenvalue of the resulting graphs. The proof uses the trace method and the analysis of a random walk in such graphs.
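As a hedged illustration of the spectral viewpoint (not the construction from the talk), one can read off the spectral gap of a small regular graph from its adjacency spectrum; here, the 3-regular Petersen graph, whose adjacency eigenvalues are 3, 1 and -2:

```python
import numpy as np

# Adjacency matrix of the Petersen graph: outer 5-cycle (vertices 0..4),
# inner pentagram (vertices 5..9), and spokes i -- i+5. It is 3-regular.
edges = [(i, (i + 1) % 5) for i in range(5)]            # outer cycle
edges += [(5 + i, 5 + (i + 2) % 5) for i in range(5)]   # inner pentagram
edges += [(i, i + 5) for i in range(5)]                 # spokes
A = np.zeros((10, 10))
for u, v in edges:
    A[u, v] = A[v, u] = 1

eig = np.sort(np.linalg.eigvalsh(A))[::-1]   # eigenvalues, descending
# For a d-regular graph the largest eigenvalue is d; the gap d - lambda_2
# controls the expansion rate of the graph.
d, lam2 = eig[0], eig[1]
print(d, lam2, d - lam2)  # approximately 3.0, 1.0, 2.0
```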

How to compute with imaginary quadratic forms?

Cyril Bouvier, Université de Montpellier
mercredi 14 décembre 2022

In this talk, I will define imaginary quadratic forms and detail how a group
law can be constructed on them. I will explain how they can be useful in
cryptographic protocols, and provide some benchmarks comparing the
efficiency of this group law against other groups used in cryptography.
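A minimal sketch of the objects in play, assuming only the classical reduction theory of positive definite binary quadratic forms (the group law itself, composition, is not implemented here): every class contains a unique reduced form, which serves as the canonical representative.

```python
# Reduction of a positive definite binary quadratic form (a, b, c),
# i.e. a*x^2 + b*x*y + c*y^2 with discriminant b^2 - 4ac < 0.
# A form is reduced when |b| <= a <= c (with b >= 0 if |b| == a or a == c).

def reduce_form(a, b, c):
    assert b * b - 4 * a * c < 0 and a > 0
    while True:
        if c < a:                      # swap step: (a, b, c) -> (c, -b, a)
            a, b, c = c, -b, a
        elif b > a or b <= -a:         # normalization: shift b into (-a, a]
            k = (a - b) // (2 * a)     # floor division picks the right shift
            c = c + k * b + k * k * a  # keeps the discriminant unchanged
            b = b + 2 * k * a
        elif a == c and b < 0:
            b = -b
        else:
            return a, b, c

# Example: a non-reduced form of discriminant -11.
print(reduce_form(3, 7, 5))  # (1, 1, 3), the principal form
```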

Guessing words using queries on factors or subwords

Matthieu Rosenfeld, LIRMM
mardi 13 décembre 2022

I will present some results that we recently obtained on word reconstruction problems. In our setting, one can query an oracle for the number of occurrences of any given subword in an unknown word w, and the task is to reconstruct w with as few queries as possible. Fleischmann, Lejeune, Manea, Nowotka and Rigo showed that for a binary word w of length n, this is always possible with at most n/2 + 1 queries. We prove that O((n log n)^(1/2)) queries suffice. In this talk, I will provide the few necessary definitions and present this result.
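A hedged sketch of the oracle from this setting: the number of occurrences of a word u as a (scattered) subword of w can be computed with a standard dynamic program.

```python
# Count the occurrences of u as a scattered subword of w.
# dp[j] = number of ways to match the prefix u[:j] with the letters read so far.

def subword_count(w, u):
    dp = [1] + [0] * len(u)
    for c in w:
        # iterate backwards so each letter of w is used at most once per match
        for j in range(len(u) - 1, -1, -1):
            if u[j] == c:
                dp[j + 1] += dp[j]
    return dp[len(u)]

print(subword_count("abab", "ab"))  # 3: positions (1,2), (1,4), (3,4)
```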

This is joint work with Gwenaël Richomme.

Information Inequalities and Representations of Matroids

Carles Padro, Universitat Politècnica de Catalunya
vendredi 9 décembre 2022

Linear information and rank inequalities as, for instance, Ingleton inequality or Zhang-Yeung inequality, are useful tools in information theory and matroid theory. Even though many such inequalities have been found, it seems that most of them remain undiscovered. Improved results have been obtained in recent works by using the properties from which they are derived instead of the inequalities themselves. We apply here this strategy to the classification of matroids and polymatroids according to their linear, algebraic, or entropic representations and to the search for bounds on secret sharing for matroid ports. We discuss the connection of that method with intersection properties of matroids that were introduced in the 90s.

Some hypersurfaces over finite fields, minimal codes and secret sharing schemes

Michela Ceria, Università degli Studi di Bari
vendredi 18 novembre 2022

Linear error-correcting codes can be used to construct secret sharing schemes; however, finding the access structures of these secret sharing schemes in general, and determining efficient access structures in particular, is difficult. We study the properties of certain algebraic hypersurfaces over finite fields whose intersection numbers with hyperplanes take only a few values; these varieties give rise to q-divisible linear codes with at most 5 weights. Moreover, for odd q, these codes are minimal, and we characterize the access structures of the corresponding secret sharing schemes in terms of their dual codes. Indeed, the secret sharing schemes thus obtained are democratic, that is, every participant belongs to the same number of minimal access sets, and they can be easily described.

Linear equations for the weight distribution of codes over finite chain rings

Giulia Cavicchioni, Università di Trento
mercredi 9 novembre 2022

In the past years, codes over finite rings have received much attention after families of binary non-linear codes were shown to be equivalent to linear codes over Z_4.
The problem of determining the weight distribution of a linear code, which in particular yields its minimum distance, is NP-hard.
In this talk, we will focus on the weight distribution of ring-linear codes. In particular, we provide new linear equations for the weight distribution of linear codes over finite chain rings. The identities are obtained by counting the number of some special submatrices of the parity-check matrix of the code. These relations enable us to compute the full weight distribution of codes with small Singleton defects, such as MDS, MDR and AMDR codes.
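As a small hedged illustration of the objects involved (the generator rows below are a made-up example, not one from the talk), the weight distribution of a tiny Z_4-linear code can be obtained by brute-force enumeration:

```python
from itertools import product
from collections import Counter

# Hypothetical generator rows of a small Z_4-linear code of length 3.
G = [(1, 1, 2), (0, 2, 2)]

def codewords(G, q=4):
    # Enumerate all Z_q-linear combinations of the rows of G.
    seen = set()
    for coeffs in product(range(q), repeat=len(G)):
        w = tuple(sum(a * g[i] for a, g in zip(coeffs, G)) % q
                  for i in range(len(G[0])))
        seen.add(w)
    return seen

# Hamming weight distribution: weight -> number of codewords.
hamming = Counter(sum(x != 0 for x in w) for w in codewords(G))
print(sorted(hamming.items()))  # [(0, 1), (2, 5), (3, 2)]
```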

RQC revisited and more cryptanalysis for Rank-based Cryptography

Maxime Bros, XLIM, Université de Limoges
mercredi 19 octobre 2022

We propose two main contributions: first, we revisit the encryption scheme Rank Quasi-Cyclic (RQC) [1] by introducing new efficient variations, in particular, a new class of rank metric codes, namely the Augmented Gabidulin codes; second, we propose new attacks against the Rank Support Learning (RSL), the Non-Homogeneous Rank Decoding (NHRSD), and the Non-Homogeneous Rank Support Learning (NHRSL) problems.

RSL is central to all recent rank-based cryptosystems such as Durandal [2] or LRPC with multiple syndromes [3]; moreover, NHRSD and NHRSL, together with RSL, are at the core of our new schemes.

The new attacks we propose are of both types: combinatorial and algebraic. For all these attacks, we provide a precise analysis of their complexity.

Overall, when all of these new improvements for the RQC scheme are put together, and their security evaluated with our different attacks, they enable one to gain 50% in parameter sizes compared to the previous RQC version.
More precisely, we give very competitive parameters, around 11 KBytes, for RQC schemes with unstructured public key matrices. This is currently the only scheme with such short parameters whose security relies solely on pure random instances without any masking assumptions, contrary to McEliece-like schemes.
Finally, in the case of Non-Homogeneous errors, our scheme allows us to reach even smaller parameters.

Some Easy Instances of Ideal-SVP and Implications on the Partial Vandermonde Knapsack Problem

Katharina Boudgoust, Aarhus University, Denmark
mercredi 12 octobre 2022

This talk focuses on the shortest vector problem over so-called ideal lattices. Whereas it is believed to be hard in the worst-case, prior works have shown that there are several specific cases, where the problem becomes easy to solve. We continue this line of work and provide a simple condition under which an ideal lattice defines an easy instance of the shortest vector problem. Namely, we show that the more automorphisms stabilize the ideal, the easier it is to find a short vector in it. This observation was already made for prime ideals in Galois fields, and we extend it to any ideal (whose prime factors are not ramified) of any number field, generalizing the works of Pan et al. (Eurocrypt'21) and Porter et al. (ArXiv'21). We then provide a cryptographic application of this result by showing that particular instances of the partial Vandermonde knapsack problem, also known as partial Fourier recovery problem, can be solved classically in polynomial time. As a proof of concept, we implemented our attack and managed to solve those particular instances for concrete parameter settings proposed in the literature. For random instances, we can halve the lattice dimension with non-negligible probability. We conclude the presentation by clarifying the impacts of our results to lattice-based cryptography.

This is a joint work with Erell Gachon and Alice Pellet-Mary.

Strictly local self-assembly algorithm for a quasiperiodic octagonal tiling

Ilya Galanov, Aix-Marseille Université
mardi 11 octobre 2022

Self-assembly is the process in which the components of a system, whether molecules, polymers, or macroscopic particles, are organized into ordered structures as a result of local interactions between the components themselves, without exterior guidance. This talk is devoted to the self-assembly of aperiodic tilings. Aperiodic tilings serve as a mathematical model for quasicrystals, crystals that do not have any translational symmetry. Because of the specific atomic arrangement of these crystals, the question of how they grow remains open. Simulations strongly support the evidence that the algorithm we developed grows aperiodic cut-and-project tilings with local rules, up to an unavoidable but negligible proportion of missing tiles. In this talk, we state the first theorem regarding the Golden-Octagonal tilings and formulate conjectures for future results.

Constant-depth sorting networks

Alexander Kozachinskiy,
mardi 4 octobre 2022

My talk is devoted to k-sorting networks, a generalization of ordinary sorting networks in which every comparator sorts not 2 but k numbers. This model has been studied since the 1970s; its main motivation is that k-sorting networks yield recursive constructions of ordinary sorting networks. We address the following question: given d and n, what is the minimal k for which there exists a d-depth k-sorting network for n numbers? We answer it for d up to 4. This is joint work with Natalia Dobrokhotova-Maikova and Vladimir Podolskii.
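A minimal sketch of the underlying model (for ordinary 2-comparators): by the classical zero-one principle, a comparator network sorts all inputs if and only if it sorts every 0/1 input, which we check here for a depth-3 network on 4 wires.

```python
from itertools import product

# Each inner list is one layer of disjoint comparators (i, j) with i < j.
# This is Batcher's even-odd network for n = 4, of depth 3.
layers = [[(0, 1), (2, 3)], [(0, 2), (1, 3)], [(1, 2)]]

def run(network, x):
    # Apply each comparator: put the smaller value on the lower wire.
    x = list(x)
    for layer in network:
        for i, j in layer:
            if x[i] > x[j]:
                x[i], x[j] = x[j], x[i]
    return x

# Zero-one principle: checking all 0/1 inputs suffices.
assert all(run(layers, v) == sorted(v) for v in product((0, 1), repeat=4))
print("depth-3 network sorts 4 wires")
```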

Fighting Fake News in Encrypted Messaging with the Fuzzy Anonymous Complaint Tally System (FACTS)

Dan Roche, US Naval Academy
mercredi 13 juillet 2022

Recent years have seen a strong uptick in both the prevalence and real-world consequences of false information spread through online platforms. At the same time, encrypted messaging systems such as WhatsApp, Signal, and Telegram, are rapidly gaining popularity as users seek increased privacy in their digital lives.
The challenge we address is how to combat the viral spread of misinformation without compromising privacy. Our FACTS system tracks user complaints on messages obliviously, only revealing the message's contents and originator once sufficiently many complaints have been lodged.
Our system is private, meaning it does not reveal anything about the senders or contents of messages which have received few or no complaints; secure, meaning there is no way for a malicious user to evade the system or gain an outsized impact over the complaint system; and scalable, as we demonstrate excellent practical efficiency for up to millions of complaints per day.
Our main technical contribution is a new collaborative counting Bloom filter, a simple construction with difficult probabilistic analysis, which may have independent interest as a privacy-preserving randomized count sketch data structure.
Compared to prior work on message flagging and tracing in end-to-end encrypted messaging, our novel contribution is the addition of a high threshold of multiple complaints that are needed before a message is audited or flagged.
We present and carefully analyze the probabilistic performance of our data structure, provide a precise security definition and proof, and then measure the accuracy and scalability of our scheme via experimentation.
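A hedged toy sketch of the counting-with-threshold mechanics only; the actual FACTS structure is collaborative and privacy-preserving, which this plain counting Bloom filter is not. All sizes and names below are illustrative assumptions.

```python
import hashlib

class CountingBloom:
    """Counting Bloom filter that flags an item once it has been
    counted at least `threshold` times."""

    def __init__(self, size=1024, hashes=4, threshold=3):
        self.counts = [0] * size
        self.size, self.hashes, self.threshold = size, hashes, threshold

    def _indices(self, item):
        # Derive `hashes` positions deterministically from the item.
        for k in range(self.hashes):
            h = hashlib.sha256(f"{k}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def complain(self, item):
        for i in self._indices(item):
            self.counts[i] += 1

    def flagged(self, item):
        # May over-count because of collisions (false positives),
        # but never under-counts.
        return min(self.counts[i] for i in self._indices(item)) >= self.threshold

cbf = CountingBloom()
cbf.complain("msg-42")
print(cbf.flagged("msg-42"))  # False: only one complaint so far
cbf.complain("msg-42"); cbf.complain("msg-42")
print(cbf.flagged("msg-42"))  # True: reached the threshold of 3
```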
This is joint work with Linsheng Liu, Austin Theriault and Arkady Yerukhimovich.

Commutative algebra applied to coding theory and cryptography (part 2)

Carla Mascia, Università di Trento
mardi 12 juillet 2022

In this seminar, we will show how commutative and computer algebra are exploited to address different problems arising from coding theory and cryptography.
A research area in coding theory is the family of algebraic geometric codes, which are known to achieve good performance. A class of them is the so-called order domain codes. We will present how the computation of Hilbert quasi-polynomials of a non-standard graded ring leads to verifying whether a given code is an order domain or not.
In cryptography, attacks on ciphers are feasible through a suitable commutative-algebra approach. We will illustrate a new algebraic attack on stream ciphers. It is designed from the well-known attack due to Courtois and Meier, and it is especially effective against nonlinear filter generators. We will apply it to WG-PRNG, one of the stream ciphers submitted to the NIST competition on Lightweight Cryptography, showing that its security level is lower than previously claimed.

A General approach to Ammann bars for aperiodic tilings

Carole Porrier,
mardi 12 juillet 2022

Ammann bars are formed by segments (decorations) on the tiles of a tiling such that requiring the segments to form straight lines while tiling forces non-periodicity. Only a few cases are known, starting with Robert Ammann's observations on Penrose tiles, but there is no general explanation or construction. In this work we propose a general method for cut-and-project tilings based on the notion of subperiods, and we illustrate it with an aperiodic set of 36 decorated prototiles related to what we call Cyrenaic tilings.

(A joint work with Thomas Fernique.)

Commutative algebra applied to coding theory and cryptography

Carla Mascia, Università di Trento
lundi 11 juillet 2022

In this seminar, we will show how commutative and computer algebra are exploited to address different problems arising from coding theory and cryptography.
A research area in coding theory is the family of algebraic geometric codes, which are known to achieve good performance. A class of them is the so-called order domain codes. We will present how the computation of Hilbert quasi-polynomials of a non-standard graded ring leads to verifying whether a given code is an order domain or not.
In cryptography, attacks on ciphers are feasible through a suitable commutative-algebra approach. We will illustrate a new algebraic attack on stream ciphers. It is designed from the well-known attack due to Courtois and Meier, and it is especially effective against nonlinear filter generators. We will apply it to WG-PRNG, one of the stream ciphers submitted to the NIST competition on Lightweight Cryptography, showing that its security level is lower than previously claimed.

Mitaka: A Simpler, Parallelizable, Maskable Variant of Falcon

Thomas Espitau, NTT Secure Platform Laboratories, Tokyo
mercredi 29 juin 2022

In this talk, we describe the Mitaka signature scheme:
a new hash-and-sign signature scheme over NTRU lattices which can be seen
as a variant of NIST finalist Falcon. It achieves comparable efficiency but is
considerably simpler and easier to parallelize and protect against side-channels,
thus offering significant advantages from an implementation standpoint.
We obtain this signature scheme by replacing the FFO lattice Gaussian sampler
in Falcon by the “hybrid” sampler of Prest, for which we carry out a detailed and
corrected security analysis. In principle, such a change can result in a substantial
security loss, but we show that this loss can be largely mitigated using new techniques
in key generation that allow us to construct much higher quality lattice trapdoors for the
hybrid sampler relatively cheaply.
We also provide a provably secure masking of Mitaka at any order, at much lower cost than
previous masking techniques for Gaussian sampling-based signature schemes.

An alternative approach to Polynomial Number System Internal Reduction

Nicolas Méloni, Université de Toulon
mercredi 22 juin 2022

The Polynomial Modular Number System (PMNS) is an alternative to the standard representation of finite field elements as multi-precision integers. In PMNS, an integer is represented by a polynomial in the quotient ring Z[X]/P(X), where P is a polynomial of degree n. Equivalently, it can be seen as a vector of a certain lattice of dimension n. The main operation when manipulating such objects is the reduction of the norm of these vectors, called internal reduction in the PMNS setting; it is the counterpart of the classical modular reduction in finite fields.
Almost all work over the last 15 years has relied on an algorithm derived from Montgomery's to perform this reduction. In this work we will see two alternatives to Montgomery's algorithm, adapted from Babai's algorithms for solving the Closest Vector Problem (CVP).
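A minimal sketch of the lattice view, assuming a hypothetical 2-dimensional basis B: Babai's round-off reduces a vector modulo the lattice by subtracting the nearest lattice point obtained from rounded coordinates.

```python
import numpy as np

# Hypothetical 2-dimensional lattice basis (columns/rows of B span the lattice).
B = np.array([[5.0, 1.0], [1.0, 4.0]])

def babai_round_off(B, v):
    # Express v in the basis, round to the nearest integer coordinates,
    # and subtract the corresponding lattice point: the result is a short
    # representative of v modulo the lattice.
    coeffs = np.rint(np.linalg.solve(B, v))
    return v - B @ coeffs

v = np.array([23.0, 17.0])
r = babai_round_off(B, v)
print(r)  # a short vector in the same class as v modulo the lattice
```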

Ordinal computability: writing real numbers with infinite-time Turing machines

Kenza Benjelloun, Université de Montpellier
mercredi 22 juin 2022

Rehearsal for a Master's thesis defense.

Information Inequalities for Structures with Symmetries with Applications to Secret Sharing

Emirhan Gürpinar, ESCAPE LIRMM
mardi 21 juin 2022

We study the properties of secret sharing schemes, where a random secret value is transformed into shares distributed among several participants in such a way that only
the qualified groups of participants can recover the secret value. We improve the lower bounds on the sizes of shares for several specific problems of secret sharing. To this end, we use the method of non-Shannon-type information inequalities going back to Z. Zhang and R.W. Yeung. We employ and extend the linear programming technique that allows us to apply new information inequalities indirectly, without even writing them down explicitly. To reduce the complexity of the linear programs involved in the bounds, we make extensive use of symmetry considerations.

Improved Classical and Quantum Algorithms for the Shortest Vector Problem via Bounded Distance Decoding

Yixin Shen, Royal Holloway University of London
mercredi 8 juin 2022

The Shortest Vector Problem (SVP) is arguably the most important computational problem on lattices. In this talk I will present several new provable algorithms that improve the state-of-the-art for this problem both in the classical and quantum case. In doing so we will uncover a new parameter related to the kissing number of a lattice that plays an important role in the complexity of these algorithms. I will argue that this parameter could play an important role in bridging the gap between practical and provable algorithms.

VESPo: Verified Evaluation of Secret Polynomials: with application to low-storage dynamic proofs of retrievability

Clément Pernet, Laboratoire Jean Kuntzmann, Université Grenoble Alpes
mercredi 1 juin 2022

We consider the problem of efficiently evaluating a secret polynomial at a given public point, when the polynomial is stored on an untrusted server. The server performs the evaluation and returns a certificate, and the client can efficiently check that the evaluation is correct using some pre-computed keys. Our protocols support two important features: the polynomial itself can be encrypted on the server, and it can be dynamically updated by changing individual coefficients cheaply without redoing the entire setup. As an important application, we show how these new techniques can be used to instantiate a Dynamic Proof of Retrievability (DPoR) for arbitrary outsourced data storage that achieves low server storage size and audit complexity. Our methods rely only on linearly homomorphic encryption and pairings, and preliminary timing results indicate reasonable performance for polynomials with millions of coefficients, as well as efficient DPoR for databases of, e.g., 1 TB.
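A hedged sketch of the certificate idea only, without encryption or pairings: since f(X) - f(a) is divisible by X - a, a client holding f(s) at a secret point s can verify the server's answer with one evaluation of the quotient. All names and the protocol shape below are illustrative assumptions, not the actual VESPo protocol.

```python
import random

def poly_eval(f, x):
    # Horner evaluation; f is a coefficient list, low degree first.
    acc = 0
    for c in reversed(f):
        acc = acc * x + c
    return acc

def server(f, a):
    # Synthetic division: f(X) = q(X) * (X - a) + f(a).
    acc, q = 0, []
    for c in reversed(f):           # high-degree coefficients first
        q.append(acc)
        acc = acc * a + c
    return acc, list(reversed(q[1:]))   # y = f(a), q low degree first

def client_check(s, f_at_s, a, y, q):
    # Accept iff f(s) - y == q(s) * (s - a): a wrong y makes the two sides
    # differ as polynomials, hence at the client's secret point s.
    return f_at_s - y == poly_eval(q, s) * (s - a)

f = [3, 0, 2, 5]                   # 3 + 2X^2 + 5X^3 (hypothetical data)
s = random.randrange(1, 10**9)     # client's secret point, fixed at setup
f_at_s = poly_eval(f, s)           # client precomputation
a = 7
y, q = server(f, a)
print(client_check(s, f_at_s, a, y, q))      # True
print(client_check(s, f_at_s, a, y + 1, q))  # False: wrong value rejected
```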

Interactive oracle proofs of proximity for algebraic codes

Sarah Bordage, Équipe Grace, LIX, École Polytechnique
mercredi 11 mai 2022

Most constructions of probabilistically checkable proofs (as well as their interactive variants) involve a proximity testing problem for linear codes. Specifically, the goal is to determine whether an input
word belongs to a certain linear code or if it is far from any codeword. In many cases, such proximity tests can be viewed as low degree tests.

In this talk, we will first describe an "interactive oracle proof of proximity" for Reed-Solomon codes, with linear prover time and logarithmic verification (Ben-Sasson et al., ICALP'2018). Inspired by this protocol, we will propose constructions for multivariate polynomial codes with similar efficiency parameters. If time permits, we will briefly discuss algebraic geometry codes.

Based on collaborations with Daniel Augot, Matthieu Lhotel, Jade Nardi and Hugues Randriam.

Fast list decoding of algebraic geometry codes

Grigory Solomatov, Tel Aviv University
mercredi 27 avril 2022

In this talk, we present an efficient list decoding algorithm for algebraic geometry (AG) codes. They are a natural generalization of Reed-Solomon codes and include some of the best codes in terms of robustness to errors. The proposed decoder follows the Guruswami-Sudan paradigm and is the fastest of its kind, generalizing the decoder for one-point Hermitian codes by J. Rosenkilde and P. Beelen to arbitrary AG codes. In this fully general setting, our decoder achieves the complexity $\widetilde{O}(s \ell^{\omega}\mu^{\omega - 1}(n+g))$, where $n$ is the code length, $g$ is the genus, $\ell$ is the list size, $s$ is the multiplicity and $\mu$ is the smallest nonzero element of the Weierstrass semigroup at some special place.

Joint work with J. Rosenkilde and P. Beelen.

Lattice enumeration for Tower NFS: a 521-bit discrete logarithm computation

Gabrielle De Micheli, University of California, San Diego
jeudi 21 avril 2022

The Tower variant of the Number Field Sieve (TNFS) is known to be asymptotically the most efficient algorithm for solving the discrete logarithm problem in finite fields of medium characteristic, when the extension degree is composite. A major obstacle to an efficient implementation of TNFS is the collection of algebraic relations, which takes place in dimension greater than 2. This requires the design of new sieving algorithms that remain efficient as the dimension grows.

In the work I will present, we overcome this difficulty by considering a lattice enumeration algorithm that we adapt to this specific context. We also consider a new sieving region, a high-dimensional sphere, whereas previous sieving algorithms for the classical NFS considered an orthotope. Our new sieving technique leads to a much smaller running time, despite the higher dimension of the search space, and even considering a larger target, as demonstrated by a record computation we performed in a 521-bit finite field GF(p^6). The target finite field has the same form as the finite fields used in recent zero-knowledge proofs in some blockchains. This is the first reported implementation of TNFS.

Finding lower bounds on the growth and entropy of subshifts over countable groups

Matthieu Rosenfeld, LIRMM
mardi 19 avril 2022

Given a group G, a set of colors A and a set of forbidden patterns F, the subshift X_F is the set of colorings of G by A that avoid the patterns in F. I recently obtained a simple condition on the set of forbidden patterns F that implies that X_F is nonempty and even provides a lower bound on its growth rate ("the size of X_F"). It seems to be more powerful and easier to use than previous results by Aubrun et al.; Bernshteyn; Miller; Pavlov. The proof of the main theorem relies on two lemmas: the main lemma is quite elementary and only relies on basic combinatorics, while the second mostly relies on standard topological arguments. After an introduction, I will present the proof of the first lemma and then provide some applications to strongly aperiodic subshifts, nonrepetitive subshifts, and Kolmogorov complexity of subshifts.

The article is available at https://arxiv.org/abs/2204.00394

Guessing linear recurrence relations for polynomial system solving

Jérémy Berthomieu, Équipe PolSys, LIP6, Sorbonne Université
vendredi 8 avril 2022

Many problems in computer algebra require guessing linear recurrence relations of a sequence. Among them are sparse linear system solving, modular rational reconstruction, and polynomial system solving through Faugère and Mou's Sparse-FGLM algorithm for Gröbner bases change of order.
The Berlekamp-Massey algorithm and its faster variants solve this problem in the one-dimensional case. In the multi-dimensional case, several algorithms generalize the Berlekamp-Massey algorithm: the Berlekamp-Massey-Sakata algorithm uses polynomial additions and shifts by a monomial, the Scalar-FGLM algorithm relies on linear algebra operations on a multi-Hankel matrix, and the Artinian Gorenstein border basis algorithm uses a Gram-Schmidt process.
In this talk, we present algorithms for guessing recurrences based solely on multivariate polynomial divisions, allowing us to revisit both the Berlekamp-Massey-Sakata and the Scalar-FGLM algorithms.
We also present variants that can take advantage of the structure of the sequence to guess the relations more efficiently.
Finally, we present a variant of the Sparse-FGLM algorithm for computing the Gröbner basis of a colon ideal.
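For the one-dimensional case mentioned above, a hedged reference sketch of the Berlekamp-Massey algorithm over a prime field F_p (not the multivariate algorithms of the talk):

```python
def berlekamp_massey(s, p):
    # Returns the minimal connection polynomial C of the sequence s over F_p:
    # sum_i C[i] * s(n - i) = 0 (mod p) for all n >= deg C.
    C, B = [1], [1]       # current and previous connection polynomials
    L, m, b = 0, 1, 1     # L = current recurrence length
    for n in range(len(s)):
        # discrepancy: how far C is from annihilating s at position n
        d = sum(C[i] * s[n - i] for i in range(L + 1)) % p
        if d == 0:
            m += 1
            continue
        T = C[:]
        coef = d * pow(b, -1, p) % p     # modular inverse (Python 3.8+)
        C = C + [0] * (len(B) + m - len(C))
        for i, bi in enumerate(B):       # C <- C - coef * x^m * B
            C[i + m] = (C[i + m] - coef * bi) % p
        if 2 * L <= n:
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return C[:L + 1]

# Fibonacci mod 101 satisfies s(n) - s(n-1) - s(n-2) = 0:
fib = [1, 1]
for _ in range(10):
    fib.append((fib[-1] + fib[-2]) % 101)
print(berlekamp_massey(fib, 101))  # [1, 100, 100], i.e. 1 - x - x^2 mod 101
```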

This is based on joint work with Christian Eder (Technische Universität Kaiserslautern), Jean-Charles Faugère (CryptoNext Security and INRIA) and Mohab Safey El Din (Sorbonne Université).

One block quantifier elimination over the reals: algorithms, complexity and applications

Huu Phuoc Le, PolSys, LIP6, Sorbonne Université
mercredi 6 avril 2022

Quantifier elimination over the reals is one of the central algorithmic problems in effective real algebraic geometry. Given a quantified semi-algebraic formula, it consists in computing a logically equivalent quantifier-free formula.

In this talk, I will describe a new efficient algorithm for one-block quantifier elimination under some mild assumptions. It computes semi-algebraic formulas describing a dense subset of the interior of the projection of a given real algebraic set on a subset of coordinates. Interestingly, the output formulas are encoded through a determinantal representation, which makes them easy to evaluate.

Under some genericity assumptions, we use the theory of Gröbner bases to prove that our algorithm runs in time O(8^t D^{4nt}), where D bounds the degree of the input polynomials, n is the number of quantified variables and t the number of parameters. Note that, in contrast with the classical Cylindrical Algebraic Decomposition algorithm (which is doubly exponential in n and t), our algorithm runs in singly exponential time.

We report on practical experiments and show that our implementation outperforms the state-of-the-art software.

Our algorithm relies on properties of Gröbner bases, specialization properties of Hermite quadratic forms (a tool for real root counting), and new algorithms for solving zero-dimensional parametric systems.

Next, we show how to apply our contributions to compute the totally real hyperplane sections coming from the theory of real algebraic curves.

This talk gathers joint works with Dimitri Manevich, Daniel Plaumann and Mohab Safey El Din.

Algorithmic randomness and (imprecise) forecasting

Floris Persiau, Ghent University
mardi 5 avril 2022

Imagine we want to predict whether it does or doesn’t rain in Montpellier. We can do so by forecasting a probability $p$ for the event $A$ that it will rain the next day. Such forecasts could be bad; e.g., we could daily forecast a probability $0.1$ for the event $A$, while most of the days turn out to be rainy. In terms of statistics, we would say that our probabilities are a bad statistical model for reality; in terms of algorithmic statistics, we say that the observed sequence of outcomes is not random with respect to our predictions.

So, when do we believe that a sequence of forecasts and a sequence of outcomes agree? When do we consider a sequence of outcomes to be random w.r.t. a sequence of predictions? We study this question from the field of algorithmic randomness, and provide you with an elementary introduction by rationalizing the first paragraph. Let $P$ be some measure on the set of all bit sequences, which are interpreted as yes/no results of a series of experiments (e.g., rain or no rain). Then we can associate with every binary string $x$ the conditional probabilities for $0$ and $1$ to appear after $x$. So, the measure $P$ may be equivalently considered as a forecasting system: after some experiments, we end up with a string of outcomes $x$, and the measure $P$ specifies the probability $p$ for the next outcome to be $1$ (say, the probability of rain after several days of known weather). A notion of randomness typically considers such (precise) uncertainty models $P$ and defines what infinite binary outcome sequences are random w.r.t. $P$. From a measure-theoretic point of view, for example, an infinite binary sequence is considered to be random for $P$ if it passes all recursively enumerable statistical tests that are associated with $P$. On the other hand, if we adopt a martingale-theoretic approach, then a sequence is random for $P$ if there is no lower semicomputable betting strategy for getting arbitrarily rich along this sequence without borrowing, where the bets that are allowed are determined by $P$.

In this talk, we will consider possible definitions of randomness w.r.t. forecasting systems, which specify what infinite sequences of binary outcomes are random for a forecasting system $P$. In particular, we will study whether the randomness of an infinite binary sequence can always be described by a precise uncertainty model. We will do so by allowing in our notions of randomness for more general uncertainty models, which we call imprecise (interval) forecasts.

Private Information Retrieval with a coding-theoretic perspective

Julien Lavauzelle, LAGA, Université Paris 8 Vincennes - Saint-Denis
mercredi 23 mars 2022

Private information retrieval (PIR) allows a user to query an entry from a remote database without revealing the identity of the desired entry to the servers storing the database. One usually wants to build protocols with low communication complexity, low computation, and low storage overhead.
In this talk, I will review several constructions of information-theoretically secure PIR schemes. They share the use of coding techniques to model queries and/or to pre-encode the database. Constructions vary depending on the way the database is stored (replicated or encoded), or on the parameters one wants to optimize (communication, computation, storage).
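As a hedged textbook example of coding-style queries (not a scheme from the talk), the classical 2-server XOR protocol retrieves an entry without revealing its index to either of two non-colluding servers:

```python
import secrets
from functools import reduce

# Database of integers, replicated on both servers.
db = [7, 13, 42, 99, 5, 21, 8, 64]

def server_answer(db, subset):
    # Each server XORs together the entries indexed by the subset it receives.
    return reduce(lambda acc, i: acc ^ db[i], subset, 0)

def pir_query(db, index):
    # S1 is a uniformly random subset; S2 differs from S1 exactly at `index`.
    # Each subset alone is uniform, so neither server learns the index.
    S1 = {i for i in range(len(db)) if secrets.randbits(1)}
    S2 = S1 ^ {index}
    return server_answer(db, S1) ^ server_answer(db, S2)

print(pir_query(db, 2))  # 42: XORing the two answers isolates db[2]
```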

New data structure for univariate polynomial approximation and applications to root isolation, numerical multipoint evaluation, and other problems

Guillaume Moroz, Equipe Gamble, Inria Nancy
mercredi 16 février 2022

We present a new data structure to approximate accurately and efficiently a polynomial f of degree d given as a list of coefficients. Its properties allow us to improve the state-of-the-art bounds on the bit complexity for the problems of root isolation and approximate multipoint evaluation. This data structure also leads to a new geometric criterion to detect ill-conditioned polynomials, implying notably that the standard condition number of the zeros of a polynomial is at least exponential in the number of roots of modulus less than 1/2 or greater than 2.

On abelian versions of Nivat's conjecture

Svetlana Puzynina, St. Petersburg State University
mardi 15 février 2022

The complexity of a two-dimensional word $w$ is the function $p_w(m,n)$ counting the number of its rectangular $m\times n$ blocks (or factors). The famous Nivat's conjecture (1997) gives a link between the complexity and the periodicity of two-dimensional words. It states the following: let $w$ be a two-dimensional word; if there exist $m, n$ such that $p_{w}(m, n) \leq mn$, then $w$ has a periodicity vector. Two words are called abelian equivalent if, for each letter of the alphabet, they contain the same number of occurrences of this letter. The abelian complexity $a_w(m,n)$ is defined as the function counting the number of abelian classes of rectangular $m\times n$ factors. The notions of complexity and abelian complexity extend naturally from rectangular factors to factors of arbitrary shape (pattern complexity). In this talk, we will discuss the relations between the periodicity of two-dimensional words and their abelian (pattern) complexity.
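A small hedged illustration of the definitions on a finite periodic 2D word (the talk concerns infinite words): we count the rectangular blocks up to equality and up to abelian equivalence.

```python
from collections import Counter

# A small periodic 2D word, one string per row.
w = ["ababab",
     "bababa",
     "ababab",
     "bababa"]

def blocks(w, m, n):
    # All m x n rectangular factors, as tuples of row slices.
    for i in range(len(w) - m + 1):
        for j in range(len(w[0]) - n + 1):
            yield tuple(row[j:j + n] for row in w[i:i + m])

def p_w(w, m, n):
    # Rectangular complexity: number of distinct m x n blocks.
    return len(set(blocks(w, m, n)))

def a_w(w, m, n):
    # Abelian complexity: blocks identified by their letter counts.
    return len({frozenset(Counter("".join(b)).items())
                for b in blocks(w, m, n)})

print(p_w(w, 2, 2), a_w(w, 2, 2))  # 2 distinct blocks, 1 abelian class
```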

Faster Sparse Matrix Inversion and Rank Computation in Finite Fields

Sílvia Casacuberta Puig,
vendredi 7 janvier 2022

In this talk, we present the algorithm achieving the current best running time for inverting sparse matrices over finite fields, namely an expected O(n^{2.2131}) time for the current values of the fast rectangular matrix multiplication exponents. We achieve the same running time for the computation of the rank and nullspace of a sparse matrix over a finite field.
This improvement relies on two key techniques. First, we adopt the decomposition of an arbitrary matrix into block Krylov and Hankel matrices from Eberly et al. (ISSAC 2007). Second, we show how to recover the explicit inverse of a block Hankel matrix using low displacement rank techniques for structured matrices and fast rectangular matrix multiplication algorithms. We will also explain how to generalize our inversion method to block structured matrices with other displacement operators and strengthen the best known upper bounds for explicit inversion of block Toeplitz-like and block Hankel-like matrices.
Joint work with Rasmus Kyng.
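The "low displacement rank" structure underlying the second technique can be checked numerically. The following sketch (an illustration of the classical notion, not the paper's algorithm) shows that an n × n Hankel matrix H, although dense, has displacement ZH − HZᵀ of rank at most 2, where Z is the down-shift matrix:

```python
import numpy as np

n = 6
h = np.arange(1, 2 * n)                 # the defining symbol h_0 ... h_{2n-2}
H = np.array([[h[i + j] for j in range(n)] for i in range(n)])  # H[i][j] = h[i+j]
Z = np.diag(np.ones(n - 1), -1)         # down-shift matrix
D = Z @ H - H @ Z.T                     # the displacement of H
print(np.linalg.matrix_rank(D))         # 2: only the first row and column survive
```

This is what makes it possible to represent and manipulate such matrices through O(n) generator data instead of n² entries.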

On the hardness of the NTRU problem

Alice Pellet-Mary, CNRS, Université de Bordeaux
mercredi 15 décembre 2021

The NTRU problem is an algorithmic problem over structured lattices that was introduced by Hoffstein, Pipher, and Silverman more than 20 years ago, and which has been used to construct various cryptographic primitives. However, its relation to other lattice problems is still not well understood.
In this talk, we will describe different variants of the NTRU problem, and study how they compare to each other (and to other more classical lattice problems) in terms of reductions.
More precisely, we will show that a search variant of the NTRU problem is at least as hard as the shortest vector problem (SVP) in ideal lattices; and that the decisional variant of NTRU is at least as hard as another search variant of NTRU. Unfortunately, the two search variants of NTRU that are considered in these reductions do not match, meaning that we cannot combine the reductions in order to obtain a reduction from the ideal shortest vector problem to the decisional NTRU problem.

Coppersmith’s block Wiedemann algorithm and modular composition

Gilles Villard,
mercredi 8 décembre 2021

This presentation is largely based on the work "Faster modular composition" (https://arxiv.org/abs/2110.08354) with V. Neiger, B. Salvy, and É. Schost.

In 1994, Coppersmith introduced a block version of Wiedemann's 1986 algorithm. The method yields algorithms with the best known complexity bounds for various matrix and polynomial problems, for example: the determinant of a matrix over a ring, sparse linear system solving and inversion of sparse matrices, annihilating polynomials of structured matrices, and the resultant of bivariate polynomials.

We will review the general approach and see how its specialization in terms of manipulations of bivariate polynomials leads to a new Las Vegas algorithm for the composition of two polynomials modulo a third one, over an arbitrary field. The algorithm uses O(n^1.43) field operations, breaking through the 3/2 barrier in the exponent for the first time. The previous fastest algebraic algorithms, designed by Brent and Kung, date back to 1978.

Our approach relies on the computation of sufficiently many independent elements of “small degree” in a bivariate ideal defined from input polynomials. Randomization is used to reduce the general case to this favorable situation. The algorithm can be seen as the Coppersmith algorithm accelerated by the use of structured projections.
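For reference, the baseline that these results improve on computes g(h) mod f by Horner's rule, using deg(g) polynomial multiplications modulo f. Here is a minimal sketch over a small prime field (my own illustration of the classical method, far from the O(n^1.43) algorithm):

```python
p = 97  # a small prime: we work over GF(p)

def pmul(a, b):
    """Product of polynomials (coefficient lists, low degree first) over GF(p)."""
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] = (r[i + j] + ai * bj) % p
    return r

def pmod(a, f):
    """Remainder of a modulo the monic polynomial f, over GF(p)."""
    a = a[:]
    while len(a) >= len(f):
        c = a[-1]
        off = len(a) - len(f)
        for i in range(len(f)):
            a[off + i] = (a[off + i] - c * f[i]) % p
        a.pop()
    while a and a[-1] == 0:
        a.pop()
    return a

def compose_mod(g, h, f):
    """g(h) mod f by Horner's rule: deg(g) multiplications modulo f."""
    res = [0]
    for c in reversed(g):
        res = pmod(pmul(res, h), f)
        if not res:
            res = [0]
        res[0] = (res[0] + c) % p
    return res

# g = x^2 + 1, h = x + 2, f = x^2 + 1:  g(h) = x^2 + 4x + 5 = 4x + 4 (mod f)
print(compose_mod([1, 0, 1], [2, 1], [1, 0, 1]))  # [4, 4]
```

Even with fast polynomial arithmetic this Horner baseline stays above the 3/2 exponent barrier, which is what the new bivariate-ideal approach breaks.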

Bitcoin and Blockchain

Victor Poupet, LIRMM
mardi 7 décembre 2021

Bitcoin is a decentralized cryptocurrency invented in 2008 (put into circulation in 2009). It relies on a shared data structure called the blockchain, which contains the list of all transactions carried out since its creation (currently about 690 million).

In this talk I will present how the blockchain works, and the solutions it provides to the main difficulties raised by a fully decentralized currency.

I am by no means an expert on the subject; the material I will present is mainly drawn from the original 2008 paper. However, if specialists are in the audience I would be happy to also discuss more advanced aspects (for example, smart contracts on the Ethereum blockchain).

Slides available at https://vpoupet.github.io/blockchain/slides.html
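The two mechanisms at the heart of the talk, hash chaining and proof of work, fit in a few lines. A deliberately minimal sketch (toy difficulty, no networking, no consensus, invented transaction strings):

```python
import hashlib
import json

def block_hash(block):
    """Canonical SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine(block, difficulty=3):
    """Proof of work: find a nonce whose hash starts with `difficulty` hex zeros."""
    block["nonce"] = 0
    while not block_hash(block).startswith("0" * difficulty):
        block["nonce"] += 1
    return block_hash(block)

chain, prev = [], "0" * 64
for txs in (["alice->bob: 1"], ["bob->carol: 2"]):
    block = {"prev": prev, "txs": txs}   # each block commits to its predecessor
    prev = mine(block)
    chain.append(block)

# Tampering with an old transaction changes its hash and breaks every later link.
print(chain[1]["prev"] == block_hash(chain[0]))  # True
```

Rewriting history thus requires redoing the proof of work of every subsequent block, which is what makes the shared ledger hard to falsify.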

Computing the Characteristic Polynomial of Generic Toeplitz-like and Hankel-like Matrices

Hippolyte Signargout, Univ. Lyon, ENS de Lyon, CNRS, Inria, UCBL
vendredi 19 novembre 2021

New algorithms are presented for computing annihilating polynomials of Toeplitz, Hankel, and more generally Toeplitz+Hankel-like matrices over a field. Our approach follows works on Coppersmith’s block Wiedemann method with structured projections, which have been recently successfully applied for computing the bivariate resultant. A first baby steps/giant steps approach—directly derived using known techniques on structured matrices—gives a randomized Monte Carlo algorithm for the minimal polynomial of an n × n Toeplitz or Hankel-like matrix of displacement rank α using O(n^(ω−c(ω)) α^c(ω)) arithmetic operations, where ω is the exponent of matrix multiplication and c(2.373) ≈ 0.523 for the best known value of ω. For generic Toeplitz+Hankel-like matrices a second algorithm computes the characteristic polynomial; in particular, when the displacement rank is considered constant, its cost is O(n^(2−1/ω)). Previous algorithms required O(n^2) operations while the exponents presented here are respectively less than 1.86 and 1.58 with the best known estimate for ω.

Density of circle packings

Thomas Fernique, LIPN (CNRS & Univ. Paris 13)
mardi 9 novembre 2021

To cover the largest possible proportion of the Euclidean plane with unit disks having pairwise disjoint interiors, the best solution is to center the disks on a triangular grid of side 2 (the "hexagonal compact packing"). This problem generalizes to higher dimensions, notably to dimension 3 with the famous Kepler conjecture. But it also generalizes by allowing disks of different sizes. This is the question that will interest us here, in particular for two sizes of disks. We offer a friendly survey of the known results and invite listeners interested in the proof techniques to continue the discussion after the seminar.

Recording of the seminar.
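The optimal density of the hexagonal packing is easy to recompute (a standard formula, recalled here for context, not specific to the talk):

```python
import math

# Each unit disk occupies one fundamental cell of the triangular grid of side 2:
# a rhombus with side 2 and angle 60 degrees, of area 2 * 2 * sin(60°) = 2*sqrt(3).
hexagonal = math.pi / (2 * math.sqrt(3))   # ≈ 0.9069, optimal (Fejes Tóth / Thue)
square = math.pi / 4                       # ≈ 0.7854, the square grid, for comparison
print(round(hexagonal, 4), round(square, 4))
```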

Spectral properties of pseudo-random graphs

Geoffroy Caillat-Grenier, Université de Montpellier
mercredi 7 juillet 2021

Random graphs have properties that are interesting for applications. These properties can be characterized by the spectrum of the adjacency matrix of the graph. While there exist deterministic, explicit ways to produce graphs with such properties, these constructions are often extremely complicated. Moreover, using a purely random graph would require prohibitive resources to produce and store it. We therefore explored an intermediate approach, using pseudo-random number generators to generate the graphs in question.

In this talk, I will present the procedures we devised to generate such graphs, the means used to test them, and the results obtained. To this end, I will explain several notions that help understand the link between the combinatorial and spectral properties of graphs. I will also discuss some applications of this type of graphs, which allowed us to test our constructions.
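As a small experiment in the spirit of the talk (my own illustration, not the speaker's construction): the adjacency spectrum of a dense random graph G(n, p) has one large eigenvalue close to np while all the others are of order √n. This spectral gap is the signature of pseudo-random, expander-like behaviour:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 0.5
U = np.triu(rng.random((n, n)) < p, 1)     # independent coin flips above the diagonal
A = (U + U.T).astype(float)                # symmetric adjacency matrix of G(n, p)
eig = np.sort(np.abs(np.linalg.eigvalsh(A)))[::-1]
print(eig[0], eig[1])                      # ~np versus O(sqrt(n))
```

A pseudo-random generator "passes" this kind of test if the graphs it produces exhibit the same gap as truly random ones.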

Self-stabilisation of cellular automata on tilings

Siamak Taati, American University of Beirut
mercredi 30 juin 2021

I will discuss the following notion of self-stabilisation in the setting of cellular automata (CA) and shift spaces of finite type (SFT). We say that a CA F self-stabilises on an SFT X if, starting from any finite perturbation of a configuration in X, the CA is able to correct the perturbation and return to X in finitely many steps. We require all the configurations in X to be fixed points of F. I will present efficient solutions for some families of SFTs, but also provide an example of an SFT for which the self-stabilisation problem is inherently hard. Using the notion of sparseness, it can be shown that if the CA is able to self-stabilise from finite perturbations in sub-quadratic time, then it can also self-stabilise starting from a Bernoulli random perturbation with a sufficiently small density of errors. This is joint work with Nazim Fatès and Irène Marcovici. [arXiv:2101.12682]
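A minimal example of the notion (my own toy, not from the paper): take X = {the all-zero configuration} and the CA that maps each cell to the AND of its neighbourhood. A finite island of ones is eroded from both ends, so any finite perturbation is corrected in time linear in its diameter:

```python
def step(c):
    """One step of the AND cellular automaton (all zeros beyond the window)."""
    c = [0] + c + [0]
    return [c[i - 1] & c[i] & c[i + 1] for i in range(1, len(c) - 1)]

config = [0] * 5 + [1] * 7 + [0] * 5   # finite perturbation of the all-zero point
steps = 0
while any(config):
    config = step(config)
    steps += 1
print(steps)  # 4: an island of width 7 disappears in ceil(7/2) = 4 steps
```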

Polynomial and cyclotomic identity testings

Sylvain Perifel, IRIF, Université de Paris
mercredi 23 juin 2021

Polynomial identity testing (PIT) is the problem of deciding whether an arithmetic circuit computes the zero polynomial. It is certainly the main natural problem which has not been derandomized yet. Despite this failure (or rather thanks to it), a lot of restrictions or variants have been successfully studied. In this talk, after an introduction to PIT, I will survey some of these results and focus in particular on a variant called cyclotomic identity testing (CIT): does a polynomial vanish on a primitive (complex) root of unity? In a joint work with Nikhil Balaji, Mahsa Shirmohammadi and James Worrell, we show that CIT is in coNP, and can even be solved in probabilistic polynomial time if we assume the generalized Riemann hypothesis.
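The randomized algorithm that the derandomization question targets is Schwartz-Zippel testing: evaluate the circuit at random points modulo a large prime p; a nonzero polynomial of degree d vanishes at a random point with probability at most d/p. A minimal sketch (my own illustration):

```python
import random

random.seed(0)
p = (1 << 61) - 1        # a large (Mersenne) prime modulus

def circuit(x, y):
    """A circuit computing (x+y)^2 - x^2 - 2xy - y^2: the zero polynomial in disguise."""
    return (x + y) ** 2 - x ** 2 - 2 * x * y - y ** 2

def pit(f, nvars, trials=20):
    """Schwartz-Zippel identity test: accept 'zero' only if every trial vanishes."""
    for _ in range(trials):
        point = [random.randrange(p) for _ in range(nvars)]
        if f(*point) % p != 0:
            return False     # certainly not the zero polynomial
    return True              # zero with high probability

print(pit(circuit, 2), pit(lambda x, y: x * y - 1, 2))  # True False
```

The open problem is to replace the random points by a deterministically computable hitting set of polynomial size.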

Sur la multiplication dans les corps finis avec des algorithmes de type Chudnovsky sur la droite projective

Bastien Pacifico, I2M, Aix-Marseille Université
mercredi 16 juin 2021

Nous proposons une construction générique d'algorithmes d'interpolation sur la droite projective pour multiplier dans une extension de degré quelconque d'un corps fini. Ces algorithmes correspondent aux techniques usuelles d'interpolation polynomiale dans les petites extensions, et sont définis récursivement lorsque le degré de l'extension augmente. Nous verrons que leur complexité bilinéaire est compétitive, et qu'ils sont constructibles en temps polynomial.

An Alternative Approach for SIDH Arithmetic

Laurent Imbert, LIRMM, CNRS, Université de Montpellier
mercredi 19 mai 2021

In this talk, I will present new algorithms for the field arithmetic layers of supersingular isogeny Diffie-Hellman; one of the remaining candidates in the NIST post-quantum standardization process. Our approach uses a polynomial representation of the field elements together with mechanisms to keep the coefficients within bounds during the arithmetic operations. We present timings and comparisons for SIKEp503 and suggest a novel 736-bit prime that offers a $1.17\times$ speedup compared to SIKEp751 for a similar level of security.

Enumerative combinatorics problems for cryptographic primitives based on cellular automata

Luca Mariot,
lundi 17 mai 2021

Cellular Automata (CA) provide an interesting computational model for designing several cryptographic primitives, especially in the symmetric-key setting. Indeed, during the past decades CA have been employed to define pseudorandom generators for stream ciphers, as in the seminal work by Wolfram in the 80s, or S-boxes for block ciphers, such as the rule χ used by Daemen et al. in the design of the Keccak sponge construction. In this talk, we will survey recent results on combinatorial problems related to cryptographic primitives based on CA, focusing on the examples of orthogonal arrays, bent functions and S-boxes. We will also present counting and enumeration problems that are still open for each of these examples, proposing some approaches to address them in future research.

[ attention: un jour et une heure non standard ]
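As a concrete taste of the combinatorial questions above: the χ rule used in Keccak maps each bit of a row to a[i] ⊕ (¬a[i+1] ∧ a[i+2]) (indices mod the row length), and one of its key properties, invertibility on rows of odd length, can be checked exhaustively on the 5-bit rows used in Keccak:

```python
from itertools import product

def chi(row):
    """Keccak's chi step on one row: a[i] ^= (~a[i+1]) & a[i+2], indices cyclic."""
    n = len(row)
    return tuple(row[i] ^ ((1 ^ row[(i + 1) % n]) & row[(i + 2) % n])
                 for i in range(n))

rows = list(product((0, 1), repeat=5))
images = {chi(r) for r in rows}
print(len(images) == len(rows))  # True: chi is a bijection on rows of odd length
```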

Introduction to Isogeny-Based Cryptography

Cyril Bouvier, LIRMM, CNRS, Université de Montpellier
mercredi 12 mai 2021

Cryptography using isogenies between elliptic curves was proposed as a way to implement post-quantum cryptography. In 2006, the first cryptosystem whose security is based on the hardness of computing isogenies between two given elliptic curves was described. In 2017, another cryptosystem was submitted to the NIST competition for the standardization of post-quantum cryptography. In this talk, I will introduce the concepts behind isogeny-based cryptography and describe three key-exchange protocols that use isogenies between elliptic curves.

A counting argument for combinatorics on words

Matthieu Rosenfeld, LIRMM.
mercredi 28 avril 2021

The Lovász Local Lemma [Lovász, 1975] is one of the central tools of Erdős' probabilistic method. This rather simple lemma has been applied to SAT formulas, graph colorings, hypergraph colorings, combinatorics on words, geometry, and countless other topics. The lemma essentially says that, given a set of "bad events", under the right conditions the probability that none of them occurs is nonzero. It implies that there is at least one coloring, or one assignment of the variables, with the desired properties. There are many versions of the Lovász Local Lemma that apply to different situations. Many other techniques that apply to similar problems have emerged in the last 20 years (entropy compression, cluster expansion, the local cut lemma...). In [1], I described a new counting argument that can be applied to the same kind of problems. One of the main advantages of this counting argument is that it provides similar (or slightly better) bounds while being really simple. In [2], Wanless and Wood provided a general framework for this counting argument. In the context of combinatorics on words, the counting argument can be pushed further, as in [3].

In this talk, I will present this counting argument on a simple example. I will then present how it can be pushed further in the context of combinatorics on words.

[1] Another approach to non-repetitive colorings of graphs of bounded degree. M. Rosenfeld. The Electronic Journal of Combinatorics (2020)

[2] A general framework for hypergraph colouring. I.M. Wanless and D.R. Wood. Arxiv (2020).

[3] Avoiding squares over words with lists of size three amongst four symbols. M. Rosenfeld. Arxiv (2021)
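For a concrete instance of the objects counted by such arguments (a toy brute-force enumeration, not the method of the papers): the number of square-free ternary words, i.e. words over three letters avoiding a factor uu, grows exponentially, and exponential lower bounds on such counts are exactly what the counting argument delivers:

```python
def has_square_suffix(w):
    """Does w end with a square uu, for some nonempty u?"""
    n = len(w)
    return any(w[n - 2 * k: n - k] == w[n - k:] for k in range(1, n // 2 + 1))

def count_square_free(n, alphabet="abc"):
    """Count square-free words of length n by extending letter by letter."""
    count, stack = 0, [""]
    while stack:
        w = stack.pop()
        if len(w) == n:
            count += 1
            continue
        for a in alphabet:
            if not has_square_suffix(w + a):  # any new square must be a suffix
                stack.append(w + a)
    return count

print([count_square_free(n) for n in range(1, 8)])  # [3, 6, 12, 18, 30, 42, 60]
```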

Recognizable morphisms and a decision algorithm for substitutive languages

Revekka Kyriakoglou,
mercredi 14 avril 2021

The concept of recognizability of morphisms originates in a paper of Martin [1], under the name "determinization". The term "recognizability" was first used by Host in his paper on the ergodic theory of dynamical systems [2]. The notion came into full bloom thanks to the interest of many researchers and its numerous theoretical applications in various topics, from combinatorics on words to symbolic dynamics. A similar notion is that of circularity. The two terms are often, but not always, used as synonyms; this lack of consistency in the literature can lead to confusion. In this seminar, I will present my work on the different notions of recognizability, with the main goal of proving the equivalences, and indicating the differences, between the various definitions.

In the second part of this seminar, I will present an algorithm that allows us to describe all bispecial words of a substitutive language of a recognizable morphism, together with the set of their left and right extensions. More precisely, given a set of words S, one can associate with every word w ∈ S its extension graph which describes the possible left and right extensions of w in S. Families of sets can be defined from the properties of the extension graph of their elements: acyclic sets, dendric sets, neutral sets, etc. In the specific case of the set of words of a substitutive language of a recognizable morphism, we show that it is decidable whether these properties are verified or not.

[1] John C. Martin. Minimal flows arising from substitutions of non-constant length. Math. Systems Theory, 7:72–82, 1973.
[2] B. Host. Valeurs propres des systèmes dynamiques définis par des substitutions de longueur variable. Ergodic Theory Dynam. Systems, 6(4):529–540, 1986.

An algebraic theory for state complexity

Edwin Hamel, IRIF
mercredi 7 avril 2021

Operational state complexity is an active topic of research in theoretical computer science. However, there is no general framework built for computing state complexities. A usual method is to compute an upper bound by resorting to some ingenious tricks, and then to provide a family of languages, called a witness, whose image by the operation matches this upper bound. Furthermore, witnesses are chosen in an ad hoc manner, and no general results are known.

In 2013, Brzozowski introduced a heuristic for finding witnesses, which consists in choosing them to be "complex" in a certain sense. In order to make this heuristic precise and formal, we build a general framework for studying the state complexities of a large class of regular operations, which we call $1$-uniform. Since state complexities are easier to compute by reasoning directly on automata, we build a counterpart to $1$-uniform operations in the space of operations over complete deterministic finite automata, which we call modifiers. We show that, for any $1$-uniform regular operation, we can choose a witness in a small pool of DFAs with a large alphabet.

From these results, we derive a method to compute the state complexity of any $1$-uniform operation. This method can be used to recover some well-known state complexity results, such as the exact state complexity of the Kleene star, of catenation, or of any boolean operation. In addition, we briefly sketch the ideas we used to compute the exact state complexity of the Kleene star composed with symmetric difference, which is a new result.

Finally, we study a class of modifiers defined by simple algebraic properties, called friendly modifiers. We give the regular operations associated with friendly modifiers, and we give their maximal state complexity.
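A minimal sketch of the automaton-level reasoning behind such results (the standard product construction, with my own toy witness, not the talk's framework): the state complexity of intersection is at most mn, and a pair of modular counters with coprime moduli is a witness reaching that bound:

```python
def product_reachable(d1, q01, d2, q02, alphabet):
    """Reachable states of the product DFA for intersection (at most m*n states)."""
    seen = {(q01, q02)}
    stack = [(q01, q02)]
    while stack:
        s1, s2 = stack.pop()
        for a in alphabet:
            s = (d1[s1][a], d2[s2][a])
            if s not in seen:
                seen.add(s)
                stack.append(s)
    return seen

# Witness: counters modulo m and n; by the CRT, all m*n pairs are reachable.
m, n = 3, 4
d1 = {i: {"a": (i + 1) % m} for i in range(m)}
d2 = {i: {"a": (i + 1) % n} for i in range(n)}
print(len(product_reachable(d1, 0, d2, 0, "a")))  # 12 = m * n
```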

A parallel programming method: from a logical formula to a cellular automaton program

Théo Grente, Université de Caen Normandie
mercredi 31 mars 2021

Cellular automata are the quintessential model of parallel, local computation, and the literature contains many algorithms for this model. These algorithms are often described in terms of signals and collisions, without detailing the cellular automaton program that results from them.

In this talk I will present a method for obtaining this program automatically. It consists, first, in programming in logic the induction that solves a problem (for example, a scheme of signals and collisions), and then in applying an automatic process that yields the cellular automaton program solving this problem.

Beyond this method, I will also present the logical characterizations, obtained through it, of each of the three minimal-time complexity classes of cellular automata, dwelling on the most general one.

Finally, time permitting, I will discuss a recognition result for conjunctive languages (an extension of context-free languages) obtained with our logic-based programming method.

(Fully) Homomorphic Encryption over Finite Fields

Vincent Zucca, Équipe DALI, LIRMM, UPVD
mercredi 17 mars 2021

Homomorphic encryption is a particular kind of encryption which makes it possible to perform computations directly over encrypted data. In this talk, we will be interested in two of the most widely used homomorphic encryption schemes, BGV and BFV. Although these two schemes are two instantiations of the same idea, there are significant differences in their performance and usability. We will start by presenting the two schemes and then see how we can reduce the gap between them. We will finish this talk by presenting a simple, yet non-trivial, practical use case of these schemes: integer comparison. In particular, we will see how the study of certain functions over finite fields can speed up computations.
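As a plaintext-level illustration of how functions over finite fields enter the picture (a standard trick, not the talk's specific construction): over GF(p), Fermat's little theorem turns equality testing into evaluating the polynomial 1 − (x − y)^{p−1}, which a BGV/BFV-style scheme can compute with O(log p) ciphertext multiplications by square-and-multiply:

```python
p = 17  # the plaintext modulus, a prime

def is_zero(x):
    """1 - x^(p-1) mod p: equals 1 iff x == 0 (mod p), else 0, by Fermat."""
    return (1 - pow(x, p - 1, p)) % p

def equal(x, y):
    """Equality test expressed as a polynomial over GF(p)."""
    return is_zero((x - y) % p)

print(equal(5, 5), equal(5, 6))  # 1 0
```

Comparison circuits for BGV/BFV are built from the same idea, replacing this polynomial by ones encoding "less than" over the plaintext space.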

Trade-offs between size and degree in Polynomial Calculus (2nd part)

Guillaume Lagarde , LaBRI
mercredi 10 mars 2021

Introduced by Clegg et al. (STOC'96) to capture Gröbner basis computations, Polynomial Calculus Resolution (PCR) is an algebraic proof system for certifying the unsatisfiability of CNF formulas. Impagliazzo et al. (CC'99) established that if an unsatisfiable k-CNF formula over n variables has a refutation of small size in PCR (that is, polynomial size), then this formula also has a refutation of small degree, i.e., O(sqrt(n log n)). A natural question is to know whether we can achieve both small size and small degree in the same refutation. In this talk I will present (and give some ideas of the proof of) our main result: the decrease in degree necessarily comes at the expense of an exponential blow-up in size. Joint work with Jakob Nordström, Dmitry Sokolov, Joseph Swernofsky.

This will be the second (more technical) part of the presentation; see the recording of the first part of the talk on https://webconf.math.cnrs.fr/playback/presentation/2.0/playback.html?meetingId=25a62376586e525af72432cbf9ad7d8aadc2352a-1614772565299 .

Trade-offs between size and degree in Polynomial Calculus

Guillaume Lagarde ,
mercredi 3 mars 2021

Introduced by Clegg et al. (STOC'96) to capture Gröbner basis computations, Polynomial Calculus Resolution (PCR) is an algebraic proof system for certifying the unsatisfiability of CNF formulas. Impagliazzo et al. (CC'99) established that if an unsatisfiable k-CNF formula over n variables has a refutation of small size in PCR (that is, polynomial size), then this formula also has a refutation of small degree, i.e., O(sqrt(n log n)). A natural question is to know whether we can achieve both small size and small degree in the same refutation. In this talk I will present (and give some ideas of the proof of) our main result: the decrease in degree necessarily comes at the expense of an exponential blow-up in size. Joint work with Jakob Nordström, Dmitry Sokolov, Joseph Swernofsky.

Some results about tilings of groups

Etienne Moutot, LIS, Université Aix-Marseille
mercredi 24 février 2021

In this talk I will present tilings of groups (or tilings of their Cayley graphs). I will give an overview of what is known about these tilings, with a particular focus on questions of periodicity (or aperiodicity) and the decidability of their domino problem.

The Surprise Examination Paradox and the Second Incompleteness Theorem

Alexander Shen, LIRMM
mercredi 16 décembre 2020

The Berry paradox speaks of "the minimal natural number that cannot be defined by a phrase of less than one million words." This phrase seems to define some specific natural number, yet it is itself shorter than one million words, which looks contradictory. Once formalized, this paradox becomes Chaitin's proof of Gödel's first incompleteness theorem.

There is another famous paradox. Assume we get a message saying that next week we are going to have an unexpected security exercise at noon, and that nobody will know on which day the exercise will happen before it actually does. Then the exercise cannot be on Sunday (as the day before we would know for sure when it must happen). Having excluded Sunday, we see that the exercise cannot happen on Saturday (again, we would know the date the day before), and so on, down to Monday. Kritchman and Raz noted that this paradox can also be formalized and yields a proof of Gödel's second incompleteness theorem (formal arithmetic cannot prove its own consistency). We will discuss this amusing argument.

Multiparty Karchmer-Wigderson Games and Threshold Circuits.

Alexander Kozachinskiy, University of Warwick
mercredi 9 décembre 2020

The first part of the talk will be devoted to the following problem about the circuit complexity of the majority function. It is classical that the majority function can be computed by a logarithmic-depth circuit built solely from 3-bit majority gates; this fact goes back to a paper of Valiant from 1984. A drawback of Valiant's construction is that it uses the probabilistic method and is thus not explicit. Can one derandomize it? More precisely, can one deterministically, in polynomial time, print a logarithmic-depth circuit computing the majority function and consisting, as in Valiant's construction, only of 3-bit majorities? The answer to this question is positive, and in the talk we will give a full proof of it. Besides circuit complexity, this also has applications in cryptography (Cohen et al., CRYPTO 2013).
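The probabilistic heart of Valiant's construction is bias amplification: if each input of a 3-bit majority gate is 1 with probability p independently, the output is 1 with probability 3p² − 2p³, and iterating this map drives any p > 1/2 rapidly toward 1. A quick numeric check (illustration of the amplification step only, not the full construction):

```python
def maj3_bias(p):
    """P[MAJ3 = 1] for three independent bits, each equal to 1 with probability p."""
    return 3 * p ** 2 - 2 * p ** 3

p = 0.6                    # inputs only slightly biased toward the right answer
for _ in range(6):         # six levels of 3-bit majority gates
    p = maj3_bias(p)
print(p)                   # already above 0.99
```

This is why a random logarithmic-depth tree of MAJ3 gates computes the majority of its leaves with overwhelming probability; the derandomization question is about making such a tree explicit.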

In fact, our argument applies not only to the majority function. It actually establishes an equivalence (from the circuit size/depth viewpoint) between monotone circuits and circuits consisting only of 3-bit majorities, for monotone self-dual functions. A more elaborate version of our argument also establishes a new connection between circuit complexity and multi-party communication complexity (in the spirit of the classical Karchmer-Wigderson games). This allows us to obtain new upper bounds on the circuit complexity of threshold functions (motivated again by cryptographic applications). If time permits, we will discuss these questions in the second part of the talk.

Based on a joint work with V. Podolskii.

Thermal states quantum cryptography

Anne Ghesquiere,
mercredi 2 décembre 2020

In recent decades, quantum cryptography has begun playing a role in safeguarding communications against eavesdroppers, with large-scale experimental implementations of quantum key distribution (QKD) deployed over dark-fibre networks and the commercialisation of quantum random number generators. Traditional QKD follows the BB84 model and rests on the no-cloning of non-orthogonal states. Thermal-states QKD, on the other hand, is a novel approach to QKD, aiming for security over short distances but across a wider spectrum of frequencies, allowing for more flexibility in sources and infrastructure. In particular, the goal of thermal-states quantum cryptography is to enable perfectly secure key exchange with low-power, low-memory devices. In this talk, I will briefly describe the BB84 protocol and the physics its security relies upon. I will then describe thermal-states quantum key distribution and present evidence for the security of the protocol.

Secret sharing schemes for ports of matroids

Oriol Farràs Ventura, Universitat Rovira i Virgili
mercredi 25 novembre 2020

A secret sharing scheme is a cryptographic method to protect a secret. Given a secret and some randomness, the scheme generates pieces of information, called shares, in such a way that the secret can only be recovered from certain subsets of shares. Considering that each share is held by a participant, the access structure of the scheme is defined as the family of coalitions of participants that can recover the secret. In a perfect scheme, coalitions that are not in the access structure cannot get any information about the secret (in the information-theoretic sense). Secret sharing schemes have many applications in cryptography, such as secure multiparty computation and distributed cryptography.

A perfect secret sharing scheme is ideal if the size of each share is equal to the size of the secret, which is the optimal situation. Brickell and Davenport showed that the access structure of an ideal secret sharing scheme is determined by a matroid. Namely, the minimal authorized subsets of an ideal secret sharing scheme are in correspondence with the circuits of a matroid containing a fixed point. In this case, we say that the access structure is a matroid port. It is known that, for an access structure, being a matroid port is not a sufficient condition to admit an ideal secret sharing scheme.
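The canonical ideal scheme is Shamir's threshold scheme: the shares are evaluations of a random polynomial of degree k − 1 whose constant term is the secret, each share is a single field element (the size of the secret), and the access structure is the port of a uniform matroid. A compact sketch (illustration of the classical scheme, not the paper's constructions):

```python
import random

P = 2 ** 31 - 1  # field size: a Mersenne prime

def share(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 recovers the constant term, i.e. the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

s = share(1234, 3, 5)
print(reconstruct(s[:3]))  # 1234
```

Fewer than k shares leave the secret information-theoretically undetermined, which is the perfectness property discussed above.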

In this talk we will see the main known connections between secret sharing and matroids, and we will present the results of the work "Secret sharing schemes for ports of matroids of rank 3".

Recording of the talk: https://webconf.math.cnrs.fr/playback/presentation/2.0/playback.html?meetingId=25a62376586e525af72432cbf9ad7d8aadc2352a-1606301566964

Inequalities for space-bounded complexity

Alexander Shen, LIRMM, Escape
mercredi 21 octobre 2020

Many laws of information theory take the form of inequalities (for example, the amount of information in a pair (X,Y) does not exceed the sum of the amounts of information in X and in Y). These inequalities can be translated from the language of Shannon entropy to the language of Kolmogorov complexity and vice versa. Kolmogorov complexity is noncomputable, and one may prefer to consider resource-bounded complexity. It was noted (Longpré) that some inequalities remain valid for space-bounded complexity (with appropriate changes). We improve this bound using an old trick of Sipser, and the resulting bound is good enough to show that all the inequalities that hold for unbounded complexity have a resource-bounded counterpart.
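The model inequality here is subadditivity, H(X,Y) ≤ H(X) + H(Y); a quick empirical check on a toy joint distribution (illustration of the Shannon-side statement only):

```python
import math
from collections import Counter

samples = [("a", 0), ("a", 0), ("a", 1), ("b", 1), ("b", 1), ("b", 0)]

def H(events):
    """Empirical Shannon entropy (in bits) of a list of outcomes."""
    n = len(events)
    return -sum(c / n * math.log2(c / n) for c in Counter(events).values())

HX = H([x for x, _ in samples])
HY = H([y for _, y in samples])
HXY = H(samples)
print(HXY <= HX + HY)  # True: H(X,Y) <= H(X) + H(Y)
```

The Kolmogorov-complexity counterpart replaces entropies by complexities, up to logarithmic terms; the talk's point is that this translation survives space bounds.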

Joint work with Peter Gacs and Andrei Romashchenko.

The talk will be given online.

The communication complexity of secret key agreement.

Emirhan Gürpınar, LIRMM, Escape
mercredi 17 juin 2020

Romashchenko and Zimand showed in their 2019 paper that two parties (Alice and Bob), each given a personal word, can establish a secret key as large as the mutual information of their words by communicating over a public channel. We investigated the communication complexity of this problem.

We study examples with different properties. We use Kolmogorov complexity and spectral arguments, via bipartite graphs defined for the initial settings. The communication protocols considered are randomized (with private random bits).

No prerequisites except linear algebra.

Algorithmic Aspects of Information Theory

Dan Suciu, University of Washington
mardi 19 mai 2020

Constraints on entropies are considered to be the laws of information theory. Even though the pursuit of their discovery has been a central theme of research in information theory, the algorithmic aspects of constraints on entropies remain largely unexplored. It is open whether even the simplest decision problem, checking if a linear inequality of entropies is valid, is decidable. This talk discusses several decision problems about constraints on entropies. It starts by presenting upper bounds on several types of decision problems, by placing them in the arithmetic hierarchy. It then discusses "islands of decidability": special restrictions on information inequalities that are decidable. Finally, we present a tight connection between max-information inequalities, and the domination problem between finite structures. A structure B "dominates" a structure A if for any other structure the number of homomorphisms from B is greater than or equal to the number of homomorphisms from A. We show that, checking a max-information inequality is Turing equivalent to checking the domination problem, where B is acyclic.

Joint work with:
Mahmoud Abo Khamis (RelationalAI),
Phokion G. Kolaitis (University of California at Santa Cruz),
Hung Q. Ngo (RelationalAI)

The talk begins at 17h CET (at 8am PST)

Video recording of the seminar (available online until June 5 2020).

Algebra, Tilings and Nivat

Etienne Moutot, ENS de Lyon
mercredi 13 mai 2020

To what extent can one create complex structures out of elementary patterns?
Nivat's conjecture states that any configuration (coloring of the grid Z^2) of
low complexity (meaning that the number of patterns appearing in it is "small")
is necessarily periodic. In other words, it is impossible to create "complex" (non-periodic) configurations out of a small number of basic patterns.
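As a hedged illustration (not part of the talk), the quantities in Nivat's conjecture are easy to experiment with. The helper `pattern_count` below is hypothetical; it counts the distinct m-by-n patterns of a fully periodic configuration, given one fundamental domain.

```python
def pattern_count(config, h, w, m, n):
    """Distinct m-by-n patterns of the (h, w)-periodic configuration
    whose fundamental domain is the h-by-w array `config`."""
    patterns = set()
    for i in range(h):          # by periodicity, every pattern already
        for j in range(w):      # occurs at some position in the domain
            pat = tuple(tuple(config[(i + a) % h][(j + b) % w]
                              for b in range(n))
                        for a in range(m))
            patterns.add(pat)
    return len(patterns)

# A 2-periodic checkerboard has only 2 distinct 2x2 patterns,
# well below the Nivat threshold m*n = 4.
checkerboard = [[0, 1], [1, 0]]
print(pattern_count(checkerboard, 2, 2, 2, 2))   # 2
```

Nivat's conjecture predicts the converse direction: at most m*n distinct m-by-n patterns should force periodicity.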

In 2015, Michal Szabados and Jarkko Kari published their first article
using an algebraic approach to attack this conjecture.
Their idea is to represent a configuration as a formal power series
and to study certain structures linked to it (such as
well-chosen polynomial ideals). In doing so, they manage to exploit several theorems from algebra to approach Nivat's conjecture from new angles.

In this talk, I will present the work I carried out with Jarkko
Kari in continuation of Michal Szabados's thesis. I will discuss
new theorems that use these algebraic tools to get yet closer
to Nivat's conjecture.
One consequence of these results is that the domino problem is decidable for low-complexity subshifts. This problem is undecidable in general, and if Nivat's conjecture is true, its decidability in this setting would follow trivially. One interest of our result is that it proves this decidability without needing Nivat's conjecture, only a weakened version of it.

Faster cofactorization with ECM using mixed representations

Cyril Bouvier & Laurent Imbert, LIRMM, CNRS, U. Montpellier
Wednesday, April 29, 2020

The Elliptic Curve Method (ECM) is the asymptotically fastest known
method for finding medium-size prime factors of large integers. It is
also a core ingredient of the Number Field Sieve (NFS) for integer
factorization and its variants for computing discrete logarithms. In NFS
and its variants, ECM is extensively used in the cofactorization step (a
subroutine of the sieving phase) which consists of breaking into primes
billions of composite integers of a hundred-ish bits.

In this talk, we propose a novel implementation of ECM in the context of
NFS that requires fewer modular multiplications than any other publicly
available implementation. The main ingredients are: Dixon and Lenstra's
idea on primes combination, fast point tripling, optimal double-base
decompositions and Lucas chains, and a good mix of Edwards and
Montgomery representations.

A note on the spectral argument in communication problems

Andrei Romashchenko, LIRMM, CNRS, U. Montpellier
Wednesday, April 15, 2020

From communication complexity to graphs, from graphs to linear algebra, and all the way back.

Quasi-linear product of sparse polynomials

Armelle Perret du Cray, LIRMM
Wednesday, February 26, 2020

A peculiarity of sparse polynomial algorithms is the potentially large difference between the size of the output and that of the inputs. While it is possible to bound the size of the output a priori, this bound may turn out to be far too large in practice. For the product of two sparse polynomials with T monomials each, the natural bound on the number of monomials of the output is T², whereas in some cases the output can be much smaller. The naive algorithm is linear in this bound. A finer bound can be computed in quasi-linear time: the size of the structural support, that is, the number of monomials the product would have if all coefficients were equal to 1. Building on this computation, Andrew Arnold and Daniel S. Roche [1, 2] designed a probabilistic algorithm that is quasi-linear in the size of the structural support. However, this bound can itself be quadratic in the actual size of the output.

I will present the first probabilistic algorithm for the product of sparse polynomials with integer coefficients whose complexity is quasi-linear in the size of the input plus the actual size of the output. It relies on two ingredients. The first is an adaptation of Qiao-Long Huang's sparse interpolation algorithm [3], which takes the support size of the product as input; this algorithm is called several times with increasing sizes. The second is a new algorithm that verifies a sparse polynomial product in quasi-linear time.
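As a hedged sketch (not the algorithm of the talk), the gap between the T² bound, the structural support, and the actual output size can be seen on a toy example; `sparse_mul` and `structural_support` are illustrative helpers, with polynomials stored as exponent-to-coefficient dictionaries.

```python
def sparse_mul(f, g):
    """Naive sparse product: O(T^2) coefficient operations for T-term inputs."""
    h = {}
    for e1, c1 in f.items():
        for e2, c2 in g.items():
            h[e1 + e2] = h.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in h.items() if c != 0}   # cancellations disappear

def structural_support(f, g):
    """Monomial count the product would have if all coefficients were 1."""
    return len({e1 + e2 for e1 in f for e2 in g})

# (1 + x^5) * (1 - x^5): structural support 3, but the output has only 2 terms.
f, g = {0: 1, 5: 1}, {0: 1, 5: -1}
print(sparse_mul(f, g))            # {0: 1, 10: -1}
print(structural_support(f, g))    # 3
```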

The Power Error Locating Pairs algorithm

Isabella Panaccione, LIX, Inria - École polytechnique
Wednesday, February 5, 2020

In this talk, we will focus on particular decoding algorithms for Reed-Solomon codes. It is known that for these codes, it is possible to correct a number of errors equal to or even greater than half the minimum distance. In general though, beyond this bound, the uniqueness of the solution is no longer guaranteed. Hence, given a vector y ∈ GF(q)^n, one can either use list decoding algorithms, like the Sudan and Guruswami-Sudan algorithms, which return the list of all the codewords closest to y, or unique decoding algorithms, like the Power Decoding algorithm, which return the codeword closest to y if it is unique, at the price of some failure cases.

In this setting, I will present a new unique decoding algorithm, the Power Error Locating Pairs algorithm. Based on Pellikaan's Error Correcting Pairs algorithm, for Reed-Solomon codes it has the same decoding radius and complexity as the Power Decoding algorithm. Moreover, like the Error Correcting Pairs algorithm, it can be run on all codes endowed with a special structure (among them, Reed-Solomon codes, algebraic geometry codes and cyclic codes).

Joint work with Alain Couvreur

Complexity of Euclid's algorithm

Bruno GRENET, ECO, LIRMM
Wednesday, December 18, 2019

Euclid's algorithm for computing the GCD of two integers is one of the oldest known algorithms. In this short talk, I will present a new proof of its complexity, based on amortized analysis, which is even simpler than the usual proofs. The resulting bound is nevertheless optimal.
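For reference, a minimal sketch of the algorithm itself, with a step counter; the bound recalled in the comment is the classical textbook one, not the talk's new amortized proof.

```python
def gcd_steps(a, b):
    """Euclid's algorithm, returning the GCD and the number of division steps."""
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return a, steps

# The classical bound: O(log min(a, b)) division steps,
# with consecutive Fibonacci numbers as the worst case.
print(gcd_steps(34, 21))   # (1, 7)
```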

Joint work with Ilya Volkovich (U. Michigan).

An LLL Algorithm for Module Lattices

Alice Pellet--Mary, LIP, École Normale Supérieure de Lyon
Wednesday, November 13, 2019

A lattice is a discrete subgroup (i.e., ℤ-module) of ℝⁿ (where ℤ and ℝ are the sets of integers and real numbers). The LLL algorithm is a central algorithm to manipulate lattice bases. It takes as input a basis of a Euclidean lattice, and, within a polynomial number of operations, it outputs another basis of the same lattice but consisting of rather short vectors.
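To illustrate what "reduction" means, here is a hedged sketch of the dimension-2 special case, the Lagrange-Gauss algorithm, a classical precursor of LLL rather than LLL itself.

```python
def norm2(v):
    return v[0] * v[0] + v[1] * v[1]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def lagrange_gauss(u, v):
    """Reduce a basis (u, v) of a planar integer lattice by repeatedly
    subtracting the rounded projection of the longer vector on the shorter."""
    while True:
        if norm2(v) < norm2(u):
            u, v = v, u                      # keep u the shorter vector
        m = round(dot(u, v) / norm2(u))      # integral projection coefficient
        if m == 0:
            return u, v
        v = (v[0] - m * u[0], v[1] - m * u[1])

# A long, skewed basis of Z^2 is reduced to the two unit vectors.
print(lagrange_gauss((1, 0), (100, 1)))      # ((1, 0), (0, 1))
```

For a real implementation the projection coefficient would be computed with exact rationals, since floating-point rounding can misbehave on very large entries; the float version is enough for this sketch.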

Lattices are used in post-quantum cryptography, as finding a shortest vector of a lattice, given an arbitrary basis, is believed to be intractable even with a quantum computer. From a practical point of view, many constructions based on lattices use structured lattices in order to improve the efficiency of the schemes. Most of the time, these structured lattices are R-modules of small dimension, where R is the ring of integers of some number field. As an example, only 1 scheme among the 12 lattice-based candidates for the NIST post-quantum standardization process uses non-structured lattices, all the other ones using module lattices. It is then tempting to try and adapt the LLL algorithm, which works over lattices (i.e., ℤ-modules), to these R-modules, as this would allow us to find rather short vectors of these module lattices.

All the previous works trying to extend the LLL algorithm to other rings of integers R focused on rings R that were Euclidean, as the LLL algorithm over ℤ crucially relies on Euclidean division. In this talk, I will describe the first LLL algorithm that works in any ring of integers R. This algorithm is heuristic and runs in quantum polynomial time if given access to an oracle solving the closest vector problem in a fixed lattice, depending only on the ring of integers R.

This is joint work with Changmin Lee, Damien Stehlé and Alexandre Wallet.

Polynomial multiplications without extra memory

Bruno Grenet, LIRMM
Wednesday, May 15, 2019

The problem of polynomial multiplication has been extensively studied since the 1960s, and various algorithms have been proposed to achieve the best time complexities. More recently, some of these algorithms have been studied from the point of view of their space complexity, and modified to use no extra space beyond the inputs and outputs, while keeping the same asymptotic time complexity.

In this work, we extend these results in two directions. First, we ask whether any polynomial multiplication algorithm admits a generic "in-place" variant, that is, one using no extra space. Second, we consider two important variants of this problem that produce only part of the result, the so-called short and middle products, and we ask whether these operations can also be performed in place.

To answer these two questions (essentially) in the affirmative, we propose a series of reductions whose starting point is any multiplication algorithm with linear space complexity. For the full and short products, these reductions yield in-place variants of the algorithms with the same asymptotic time complexity. For the middle product, the reduction incurs an extra logarithmic factor in the time complexity when the latter is quasi-linear.
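To make the space model concrete, here is a hedged baseline (not one of the paper's reductions): the quadratic schoolbook algorithm already computes the full product in place, using only the output buffer as workspace. The difficulty addressed in the talk is achieving this for quasi-linear algorithms and for the short and middle products.

```python
def full_product_in_place(f, g, out):
    """Schoolbook product of coefficient lists f and g, accumulated into
    `out`, a zeroed list of length len(f) + len(g) - 1; no other storage."""
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] += fi * gj    # accumulate directly in the output

f = [1, 2, 3]                        # 1 + 2x + 3x^2
g = [4, 5]                           # 4 + 5x
out = [0] * (len(f) + len(g) - 1)
full_product_in_place(f, g, out)
print(out)                           # [4, 13, 22, 15]
```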

Joint work with Pascal Giorgi and Daniel S. Roche (US Naval Academy)

Half-duplex communication complexity

Alexander Smal, St.Petersburg, Department of Steklov Mathematical Institute of Russian Academy of Sciences
Wednesday, April 10, 2019

Suppose Alice and Bob are communicating bits to each other in order to compute some function $f$, but instead of a classical communication channel they have a pair of walkie-talkie devices. They can use some classical communication protocol for $f$ where in each round one player sends a bit and the other one receives it. The question is whether talking via walkie-talkie gives them more power. Using walkie-talkies instead of a classical communication channel allows the players two extra possibilities: to speak simultaneously (but in this case they do not hear each other) and to listen at the same time (but in this case they do not transfer any bits). We show that for some definitions this non-classical communication model is in fact more powerful than the classical one, as it allows some functions to be computed in a smaller number of rounds. We also introduce a round elimination technique for proving lower bounds in this setting and use it to prove lower bounds for some Boolean functions.

Joint work with Kenneth Hoover, Russell Impagliazzo and Ivan Mihajlin

Finite state dimension and its characterizations

Alexander Shen, Equipe Escape, LIRMM
Wednesday, April 3, 2019

(joint work with Alexander Kozachinsky)

Normal sequences (where all $k$-bit blocks appear equally often) may be considered as "random" sequences in a very weak sense (incompressible by finite automata). One may measure how close a sequence is to a random one: this is captured by the notion of "finite state dimension".

We review basic results about normal numbers and finite state dimension, and give new characterizations of finite state dimension both in terms of Shannon entropy and of (upper bounds on) Kolmogorov complexity.
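As a hedged illustration of the entropy side (a toy empirical estimate, not the talk's exact characterization), one can measure the normalized Shannon entropy of the aligned k-bit blocks of a finite prefix:

```python
from collections import Counter
from math import log2

def block_entropy_rate(bits, k):
    """Empirical Shannon entropy of aligned k-blocks, normalized by k.
    Close to 1 when all 2^k blocks are equally frequent, 0 when only
    one block occurs."""
    blocks = [bits[i:i + k] for i in range(0, len(bits) - k + 1, k)]
    total = len(blocks)
    counts = Counter(blocks)
    # H = sum p * log2(1/p), nonnegative by construction
    return sum(c / total * log2(total / c) for c in counts.values()) / k

print(block_entropy_rate("01" * 500, 2))                 # periodic: 0.0
print(block_entropy_rate("0110100110010110" * 64, 4))    # 0.25
```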

(The talk can be considered as an extension to the talk https://info-web.lirmm.fr/collorg/bd096263-41e0-4cfa-9f6b-63f900e1a2f4, but self-contained)

Approximating Kolmogorov complexity is as difficult as computing it

Ruslan Ishkuvatov, Equipe Escape, LIRMM (Projet Racaf)
Wednesday, March 20, 2019

It is well known that the Kolmogorov complexity function (the length of the shortest program producing a given string) is not computable. A natural question arises: how difficult is it to compute some approximation to it -- say, a function that differs from Kolmogorov complexity by at most a factor of 2, or a function that coincides with Kolmogorov complexity at strings of length $n$ except for a $1/n$-fraction? We prove that this is as difficult as the exact computation: given an oracle that performs any of these tasks, we can solve the halting problem (and therefore compute the Kolmogorov complexity function exactly).

Aperiodic points in $Z^2$ -subshifts

Benjamin Hellouin, LRI
Wednesday, February 20, 2019

We consider the structure of aperiodic points in $Z^2$-subshifts, and in particular the positions at which they fail to be periodic. We prove that if a $Z^2$-subshift contains points whose smallest period is arbitrarily large, then it contains an aperiodic point. This lets us characterise the computational difficulty of deciding if a $Z^2$-subshift of finite type contains an aperiodic point. Another consequence is that $Z^2$-subshifts with no aperiodic point have a very strong dynamical structure and are almost topologically conjugate to some $Z$-subshift.

New lower bounds for the secret sharing on the Vamos matroid

Emirhan Gürpinar, ENS de Lyon
Wednesday, February 13, 2019

Since the late 2000s it has been known that non-classical information inequalities (so-called non-Shannon-type inequalities for entropy) can be used to prove lower bounds on the information ratio of secret sharing schemes. Recently Farras et al. observed that such bounds can be proven even without deriving new information inequalities explicitly (that is, we can find corollaries of some unknown non-Shannon-type inequalities without proving, or even formulating, them). We extend this technique and improve the known lower bound on the information ratio of secret sharing with an access structure on the Vamos matroid.

Secret sharing: methods of information theory and matroids theory

Andrei Romashchenko, Equipe Escape, LIRMM
Wednesday, February 6, 2019

Secret sharing is a model of distributing a secret among a group of participants. Each participant is given a "share" of the secret. The secret can be reconstructed only when a sufficient number of shares (an "authorized group" of participants) are combined together. The shares of non-authorized groups of participants should give no information about the secret.

The notion of secret sharing was independently introduced in 1979 by Adi Shamir and George Blakley. Nowadays secret sharing has become an important primitive in several cryptographic protocols.

In this talk we discuss the notion of efficiency (information ratio) of a secret sharing scheme and the connections between secret sharing, matroid theory, and Shannon's information theory. The aim of the talk is to recall the basic definitions and several classic results, and to motivate a more technical forthcoming talk on this subject.

On the complexity of modular composition of generic polynomials

Vincent Neiger, XLIM, Université de Limoges
Wednesday, January 30, 2019

This talk is about algorithms for modular composition of polynomials, and for the related problems of inverse modular composition and modular power projection. For two univariate polynomials a and g over a commutative field, modular composition asks to compute b = h(a) mod g for some given h; inverse modular composition asks to compute h such that h(a) = b mod g for some given b; and modular power projection asks to compute the list of projections ell(1), ell(a mod g), ..., ell(a^{n-1} mod g) for some given linear functional ell over the polynomials modulo g, with n the degree of g. For generic g and a, we propose algorithms whose complexity bound improves upon previous algorithms, which come from Brent and Kung's 1978 approach; the new complexity bound is subquadratic in the degree of g and a even when using cubic-time matrix multiplication. Our improvement comes from the fast computation of specific bases of bivariate ideals and operations with these bases thanks to fast univariate polynomial matrix algorithms.
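For context, a hedged baseline in the spirit of Horner's rule (the simple starting point that Brent and Kung's approach improves upon, not the talk's algorithm): it computes b = h(a) mod g with one modular multiplication per coefficient of h, over a small prime field. All names here are illustrative.

```python
p = 97   # an assumed small prime modulus for the sketch

def polymulmod(u, v, g):
    """(u * v) mod g over F_p; coefficient lists, low degree first, g monic."""
    w = [0] * (len(u) + len(v) - 1)
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            w[i + j] = (w[i + j] + ui * vj) % p
    for d in range(len(w) - 1, len(g) - 2, -1):   # reduce degrees >= deg g
        c = w[d]
        for k in range(len(g)):
            w[d - len(g) + 1 + k] = (w[d - len(g) + 1 + k] - c * g[k]) % p
    return w[:len(g) - 1]

def modular_composition(h, a, g):
    """b = h(a) mod g by Horner's rule: deg(h) multiplications modulo g."""
    b = [0] * (len(g) - 1)
    for c in reversed(h):
        b = polymulmod(b, a, g)
        b[0] = (b[0] + c) % p
    return b

# h = x^2 + 1, a = x, g = x^2 + 1: h(a) = a^2 + 1 = 0 mod g.
print(modular_composition([1, 0, 1], [0, 1], [1, 0, 1]))   # [0, 0]
```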

Parity games and finite automata

Alexander Kozachinsky, Moscow State University, Faculty of Mechanics and Mathematics
Wednesday, January 16, 2019

All known quasi-polynomial time algorithms for parity games are based on the existence of small finite automata for a certain problem on infinite words. Thus it is interesting to obtain lower bounds on the state complexity of this problem. Some results making modest progress towards such lower bounds will be presented.

Sieving algorithms for the Shortest Vector Problem

Elena Kirshanova, LIP, ENS Lyon
Wednesday, November 21, 2018

In this talk I give an overview of the complexity of algorithms for the Shortest Vector Problem (SVP), with a focus on a particular family: sieving algorithms. First, I explain why this problem is important and how the complexity of SVP algorithms impacts parameters for cryptographic protocols. Then, I present recent advances in memory-efficient versions of sieving algorithms, and explain locality-sensitive techniques for this type of algorithm. This part of the talk is based on joint works with Gottfried Herold and Thijs Laarhoven. Finally, I present recent advances in practical aspects of sieving algorithms for SVP. I describe technical challenges that arise when one tries to make sieving algorithms practical, and how one can overcome some of them. This part of the talk is ongoing work with Martin R. Albrecht, Leo Ducas, Gottfried Herold, Eamonn W. Postlethwaite, and Marc Stevens.

On the Ring-LWE and Polynomial-LWE problems

Alexandre Wallet, LIP, École Normale Supérieure de Lyon
Wednesday, November 14, 2018

The Ring Learning With Errors problem (RLWE) comes in various forms. Vanilla RLWE is the decision dual-RLWE variant, consisting in distinguishing from uniform a distribution depending on a secret belonging to the dual of the ring of integers O_K of a specified number field K. In primal-RLWE, the secret instead belongs to O_K. Both decision dual-RLWE and primal-RLWE enjoy search counterparts. Also widely used is (search/decision) Polynomial Learning With Errors (PLWE), which is not defined using a ring of integers O_K of a number field K but a polynomial ring Z[x]/f for a monic irreducible f in Z[x]. We show that there exist reductions between all of these six problems that incur limited parameter losses. More precisely: we prove that the (decision/search) dual to primal reduction from Lyubashevsky et al. [EUROCRYPT 2010] and Peikert [SCN 2016] can be implemented with a small error rate growth for all rings (the resulting reduction is nonuniform polynomial time); we extend it to polynomial-time reductions between (decision/search) primal RLWE and PLWE that work for a family of polynomials f that is exponentially large as a function of deg f (the resulting reduction is also non-uniform polynomial time); and we exploit the recent technique from Peikert et al. [STOC 2017] to obtain a search to decision reduction for RLWE. The reductions incur error rate increases that depend on intrinsic quantities related to K and f.

Polynomial Linear System Solving with Errors by Simultaneous Polynomial Reconstruction of Interleaved Reed Solomon Codes

Ilaria Zappatore, LIRMM
Wednesday, October 3, 2018

A classical method for solving systems of linear equations A(x)y(x) = b(x) with polynomial coefficients in Fq[x] consists in evaluating the system at L points and then recovering the solution by interpolation. We consider a scenario where evaluations are done by black boxes that can output erroneous values, the number of errors being bounded by an integer E. In a recent work, Boyer and Kaltofen provided a deterministic algorithm that uses a number L1 of points which depends on the error bound E. In this work we reexamine this problem as a generalization of the Simultaneous Polynomial Reconstruction of interleaved Reed-Solomon codes. We provide a probabilistic algorithm which recovers the solution using L < L1 evaluation points.
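A hedged toy instance of the evaluation-interpolation scheme, in the error-free 1-by-1 case (the helper names `ev` and `interpolate` are illustrative): evaluate a(x)y(x) = b(x) at enough points, solve each scalar equation over F_p, and interpolate y(x).

```python
p = 101   # an assumed prime for the toy field F_p

def ev(poly, x):                       # coefficients low degree first
    r = 0
    for c in reversed(poly):
        r = (r * x + c) % p
    return r

def interpolate(points):
    """Lagrange interpolation over F_p from (x, y) pairs with distinct x."""
    n = len(points)
    coeffs = [0] * n
    for i, (xi, yi) in enumerate(points):
        num, denom = [1], 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                new = [0] * (len(num) + 1)     # num *= (x - xj)
                for k, c in enumerate(num):
                    new[k] = (new[k] - xj * c) % p
                    new[k + 1] = (new[k + 1] + c) % p
                num, denom = new, denom * (xi - xj) % p
        scale = yi * pow(denom, p - 2, p) % p  # yi / prod(xi - xj)
        for k, c in enumerate(num):
            coeffs[k] = (coeffs[k] + scale * c) % p
    return coeffs

a, y = [1, 1], [2, 0, 3]               # a = 1 + x; y = 2 + 3x^2 is the unknown
b = [2, 2, 3, 3]                       # b = a * y
pts = [(x, ev(b, x) * pow(ev(a, x), p - 2, p) % p) for x in (1, 2, 3)]
print(interpolate(pts))                # [2, 0, 3]
```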

Polynomial identity testing for bounded depth circuits

Alexey Milovanov, Moscou
Wednesday, August 22, 2018

Polynomial identity testing (PIT) is the problem of efficiently determining whether a multivariate polynomial is identically equal to zero. The polynomial is given as an arithmetic circuit or an algebraic formula.

There exists a simple polynomial-time randomized algorithm for solving PIT: just test equality to zero at a "random" point. However, no deterministic polynomial-time algorithm for PIT is known. Designing an efficient deterministic algorithm is one of the most challenging open problems in computational complexity (there exists a connection between derandomization of PIT and proving circuit lower bounds).
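A hedged sketch of this randomized test: by the Schwartz-Zippel lemma, a nonzero n-variate polynomial of total degree d vanishes at a uniformly random point of S^n with probability at most d/|S|, so random evaluations detect nonzero polynomials with high probability.

```python
import random

def probably_zero(poly, nvars, degree, trials=20):
    """poly: a black-box function of nvars integer arguments."""
    S = range(2 * degree + 1)     # any sample set with |S| > degree works
    for _ in range(trials):
        point = [random.choice(S) for _ in range(nvars)]
        if poly(*point) != 0:
            return False          # a witness: certainly not identically zero
    return True                   # zero, except with probability <= (d/|S|)^trials

# (x + y)^2 - x^2 - 2xy - y^2 is identically zero; x*y - 1 is not.
print(probably_zero(lambda x, y: (x + y)**2 - x*x - 2*x*y - y*y, 2, 2))   # True
print(probably_zero(lambda x, y: x * y - 1, 2, 2))
```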

Although a deterministic polynomial-time algorithm for PIT on general algebraic circuits is unknown, for some restricted classes of circuits it is possible to derandomize PIT. One unexpected tool for proving such results is the Sylvester-Gallai theorem: if finitely many distinct points in the real plane are such that, for every pair of distinct points, the line through them also contains a third point, then they all lie on the same line.

I will talk about the latest achievements in this area (including my results for depth-4 circuits).

Construction of an effective, non-sofic two-dimensional shift with a subexponential number of patterns

Julien Destombes, Escape, LIRMM
Wednesday, July 4, 2018

While the characterization of sofic shifts is well understood in dimension 1, it is much less clear in higher dimensions. The proofs of non-soficness for the shifts found in the literature rely on the existence of an exponential number of patterns. Here we propose a construction of an effective shift with a polynomial number of patterns.

Kolmogorov complexity and the secret key agreement problem

Andrei Romashchenko, Escape, LIRMM
Wednesday, June 20, 2018

We show that the mutual information, in the sense of Kolmogorov complexity, of any pair of strings $X$ and $Y$ is equal to the length of the longest shared secret key that two parties, one having $X$ and the other one having $Y$, can establish via a probabilistic protocol with interaction on a public channel. This result is also extended to the protocols with more than two parties. (A joint work with Marius Zimand.)

Algorithms for Structured Linear Systems Solving and their Implementation

Romain Lebreton, LIRMM
Wednesday, June 6, 2018

There exists a vast literature dedicated to algorithms for structured matrices, but relatively few descriptions of actual implementations and their practical performance in symbolic computation. In this talk, we consider the problem of solving Cauchy-like systems, and its application to mosaic Toeplitz systems, in two contexts: first over finite fields, where basic operations have unit cost, then over Q. We introduce new variants of previous algorithms and describe an implementation of these techniques and its practical behavior. We pay special attention to particular cases such as the computation of algebraic approximants.

General purpose 2D computing medium

Frédéric Gruau, LRI
Friday, April 6, 2018

The theoretical model of our computers, the Turing machine, offers UNBOUNDED MEMORY. The von Neumann computer architecture is its efficient implementation in real hardware. It does not A PRIORI BOUND the memory. In fact, memory has become so huge that we tend to think of it as an INEXHAUSTIBLE SOURCE.
In this talk, we propose an alternative theoretical-model/architecture synergy, offering UNBOUNDED COMPUTING POWER.
The theoretical model consists of interconnected agents executing instructions that can create other agents. During the execution of a program, a network of agents grows from a single ancestor. The computing power therefore also grows, without limit.
The architecture should not A PRIORI BOUND the number of Processing Elements (PEs): it can be a thousand, or ... a thousand billion! For such a scalable architecture, we achieve performance on two levels: 1- theoretical: the time and space complexity of a parallel algorithm grows optimally as the number of PEs grows; 2- practical: the constant factor behind the complexity formula is small enough that it can be demonstrated for concrete algorithms and significantly large data on a standard CPU. We do it for sorting and matrix multiplication, with a thousand scalars.

Because PEs come as an INEXHAUSTIBLE SOURCE, we have to give up on programming each PE individually. Instead, we consider this arbitrarily large ensemble of PEs as a "2D computing medium": PEs are fine-grained, behave identically, and communicate only with their nearest neighbors in physical space. For simplicity, we use the simplest kind of computing medium, which is Cellular Automata (CA): PEs are CA cells, with a small state, north, south, east and west connections, and synchronized updates. More generally, one could use amorphous computing media, which are simply homogeneous, isotropic, and quasi-synchronous.

We map the unbounded-power model onto the unbounded-power hardware. In other words, we simulate the development of an agent network on a CA, using a (very complex) CA transition rule.
The memory of each PE stores the same dozen boolean flags representing spatially extended, physics-like structures, among which are "membranes", "waves" and "channels".
By associating colors to structures, each PE gets a specific color. The state of the whole hardware is drawn as a 2D picture, with one pixel for each PE. When the CA iterates, states, and therefore pixel colors, change. We can watch the ongoing computation as a film. "Membranes", "channels" and "waves" move in 2D space. What really happens is resetting a structure flag on one PE and setting it on an adjacent PE.
The movements follow physics-like forces of repulsion or attraction, with two main goals:

1- homogenize the density of membranes, and

2- divide membranes by pinching at the equator.

Each membrane defines the support of one agent, so dividing a membrane really means creating a new agent.
Homogenization acts constantly. In contrast, division is triggered by instructions sent by the host.
A division instruction optionally inserts a "channel" between the two offspring membranes, which represents a connection between agents. The movements of channels, membranes and waves are constrained to enforce the topological invariant corresponding to their role: membranes remain closed loops, channels remain adjacent to the two connected membranes, waves sweep the entire CA. The computation of each structure's final movement is all the more complex as structures are defined with respect to one another, in a hierarchical way. Constraints and moves need to be propagated up and down this hierarchy, for cross-coherence.

A sequence of division instructions develops a specific network of agents. It represents a program. It is loaded from the host onto the CA by propagating a train of "waves" carrying instructions. The CA is a "General Purpose" CA (GPCA) because, when fed with a different sequence of instructions, it will develop a different network. Furthermore, developing networks is a genuinely expressive parallel programming technique. For example, unlike a static circuit, a dynamic circuit can capture algorithms whose dependence graph is data-dependent.

The GPCA is unusually complex, with 64 bits of state and 20,000 gates. We use a CA platform, with a language, a compiler, an optimizer and a debugger, that allows CAs of unprecedented complexity to be programmed and executed very rapidly. The CA platform opens a new avenue for exploring the wide and intricate world of complex CAs.

Integer Polynomial Sparse Interpolation with Near-Optimal Complexity

Dan Roche, Computer Science Dept. US Naval Academy
Wednesday, March 14, 2018

We investigate algorithms to discover the nonzero coefficients and exponents of an unknown sparse polynomial, provided a way to evaluate the polynomial over any modular ring. This problem has been of interest to the computer algebra community for decades, and its uses include multivariate polynomial GCD computation, factoring, and sparse polynomial arithmetic. Starting with the early works of Zippel, Ben-Or and Tiwari, and Kaltofen, one line of investigation has a key advantage in achieving the minimal number of evaluations of the polynomial, and has received considerable attention and improvements over the years. It is closely related to problems in coding theory and exponential analysis. The downside, however, is that these methods are not polynomial-time over arbitrary fields. A separate line of work starting with Garg and Schost has developed a different approach that works over any finite field. After years of improvements, the complexity of both approaches over ZZ[x] is currently the same. They scale well in most aspects except for the degree; the bit complexity in both cases is currently cubic in the bit-lengths of the exponents. By careful combination of the techniques in both approaches and a few new tricks, we are now able to overcome this hurdle. We present an algorithm whose running time is softly-linear in the size of the output and performs nearly the minimal number of evaluations of the unknown polynomial. Preliminary implementation results indicate good promise for practical use when the unknown polynomial has a moderate number of variables and/or large exponents.

Some algorithms for matrices with displacement structure

Claude-Pierre Jeannerod, LIP, ENS Lyon
Wednesday, March 7, 2018

The notion of displacement rank provides a uniform way to deal with various classical matrix structures (such as Toeplitz, Hankel, Vandermonde, Cauchy) and allows for various generalizations. In this talk, we will review some recent progress in the design of asymptotically fast algorithms for multiplying and inverting such matrices. In particular, for n by n matrices of displacement rank alpha, we will see how to reduce the cost of such operations from about alpha^2*n (up to log factors) down to alpha^(w-1)*n, where w < 2.373 is the exponent of (dense, unstructured) matrix multiplication. (Joint work with A. Bostan, C. Mouilleron, and E. Schost.)

Fast computation of Puiseux series

Adrien Poteaux, LIFL, Université Lille I
Wednesday, February 28, 2018

Puiseux series are an essential tool for the study of singularities of plane algebraic curves. This talk will present improvements to the Newton-Puiseux algorithm, which computes such series. We will describe a symbolic-numeric strategy to compute a numerical approximation of these series together with an exact structure. We will then give the main ideas that allowed us to improve the arithmetic complexity of the algorithm. We will conclude by showing the limits this algorithm reaches, and mention directions that could circumvent them in the future.

qDSA: Efficient, compact key exchange and signatures for IoT from elliptic curves and Kummer surfaces

Benjamin Smith, LIX, Inria Saclay
Wednesday, February 14, 2018

The Internet of Things (IoT) means that an exploding number of small connected devices are increasingly pervading almost every aspect of our daily lives. Securing these devices is extremely important, but it is also a great technical challenge: most of our state-of-the-art cryptographic algorithms are simply too "heavy" to run on these tiny devices with such limited computational and energy resources. This is especially true for asymmetric techniques like key exchange and digital signatures, which by their very nature require intensive algebraic computations.

This talk will describe the qDSA signature scheme, and explain how we have applied the classic geometric theory of Kummer varieties to produce compact, low-memory, high-speed software for high-security key exchange and digital signatures on microcontrollers with very limited memory.

(Joint work with Joost Renes, Radboud University Nijmegen, NL)

Multidimensional SFTs with countably many configurations

Ilkka Törmä, University of Turku, Finland
Wednesday, October 18, 2017

A recurring theme in the theory of tilings is the construction of tile sets whose associated shift spaces are in some sense as restricted as possible. Aperiodic SFTs are one example: all configurations are constrained to have no global period. We consider another restriction, namely, that the set of valid configurations is countable. Such tile sets cannot be aperiodic, but the geometry of their configurations is not as constrained as one might think. In particular, it is possible to carry out some hierarchical constructions.

Mealy automata: dynamic of the action, singular points and Wang tilings

Thibault Godin, IRIF
mercredi 11 octobre 2017

Mealy automata are a powerful tool to generate (semi)groups. Since the 80's, they have been used to solve several major conjectures in (semi)group theory. However, people only recently started to study them from a (theoretical) computer science perspective. This approach is rich and has brought many interesting results and questions.
In this talk, we focus on the dynamics of the action of the group on the regular tree and we show that classical tools from automata theory can be used to simplify and generalize known results and to provide information on this action. Then, we study a closely related problem and use a connection between Mealy automata and Wang tilings to obtain an undecidability theorem.

This is based on a joint work with D. D'Angeli, I. Klimann, M. Picantin, and E. Rodaro.

Stopping time complexity

Alexander Shen, LIRMM, Escape
mercredi 4 octobre 2017

Last year Vovk and Pavlovic suggested a definition of stopping time complexity: how many bits of information are needed to specify when to stop while reading an infinite sequence of bits. (They had some philosophical reasons to introduce this definition, but technically their suggestion can be described in this way.) It turns out that this notion of complexity is a special case of a general scheme of conditional complexities (different topological structures on conditions, programs and descriptions lead to different complexities): the stopping time complexity equals $C(x|x*)$, where the condition $x$ is understood as information about a prefix of a sequence. We will also consider some other equivalent definitions of stopping time complexity.

On some interesting ternary formulas

Pascal Ochem, ALGCO, LIRMM
mercredi 27 septembre 2017

We show that, up to renaming of the letters, the only infinite ternary words avoiding the formula $ABCAB.ABCBA.ACB.BAC$ (or $ABCAB.BCB.AC$) have the same set of recurrent factors as the fixed point of
$0 \to 012$
$1 \to 02$
$2 \to 1$.
Also, we show that the formula $ABAC.BACA.ABCA$ is 2-avoidable.
This leads to disproving a conjecture of Grytczuk.

A Characterization of Infinite LSP Words

Gwenaël Richomme, LIRMM, Escape
mercredi 12 juillet 2017

G. Fici proved that a finite word has a minimal suffix automaton if and only if
all its left special factors occur as prefixes. He called LSP all finite and infinite words having this latter property. We characterize here infinite LSP words in terms of $S$-adicity. More precisely we provide a finite set of morphisms $S$ and an automaton ${\cal A}$ such that an infinite word is LSP if and only if it is $S$-adic and all its directive words are recognizable by ${\cal A}$.

Factoring integers with ECM on the Kalray MPPA-256 processor

Jérémie Detrey, LORIA, Inria Nancy
mercredi 28 juin 2017

The Kalray MPPA-256 is a recent low-power chip which embeds 256
32-bit cores. As such, it can be used as an efficient co-processor
for accelerating computations, bridging the gap between usual CPUs
and the more throughput-oriented GPUs, all the while consuming far
less power.

In this talk, we will present a fast and energy-efficient
implementation of the Elliptic Curve Method (ECM, an algorithm for
factoring integers) on this platform. After a brief presentation of
the ECM and of its important role in cryptanalysis, especially in
the context of factoring RSA keys, we will glance at some of the key
architectural features of the MPPA-256. We will then present an
optimized library for multiprecision modular arithmetic, on top of
which the ECM implementation was built, before concluding with a few
benchmarks and comparisons with the state of the art.

This is a joint work with Masahiro Ishii, Pierrick Gaudry, Atsuo
Inomata, and Kazutoshi Fujikawa.
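For readers unfamiliar with ECM, a toy Python sketch of stage 1 (certainly not the optimized MPPA-256 implementation of the talk) shows the core mechanism: one performs elliptic-curve arithmetic modulo the composite n, and a factor appears exactly when a slope denominator fails to be invertible. The curve, point and bound below are the classical textbook example for n = 455839, assumed here purely for illustration:

```python
from math import gcd

class FactorFound(Exception):
    """Raised when a non-invertible denominator reveals a factor of n."""
    def __init__(self, f):
        self.f = f

def _inv(d, n):
    g = gcd(d % n, n)
    if g != 1:
        raise FactorFound(g)
    return pow(d, -1, n)

def _add(P, Q, a, n):
    """Point addition on y^2 = x^3 + a*x + b over Z/nZ (None = infinity)."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % n == 0:
        return None
    if P == Q:
        s = (3 * x1 * x1 + a) * _inv(2 * y1, n) % n
    else:
        s = (y2 - y1) * _inv(x2 - x1, n) % n
    x3 = (s * s - x1 - x2) % n
    return (x3, (s * (x1 - x3) - y1) % n)

def _mul(k, P, a, n):
    """Double-and-add scalar multiplication."""
    R = None
    while k:
        if k & 1:
            R = _add(R, P, a, n)
        P = _add(P, P, a, n)
        k >>= 1
    return R

def ecm_stage1(n, a, P, B):
    """Toy ECM stage 1: compute (B!)*P; the computation 'fails' first
    modulo one prime factor, and that failure exposes the factor."""
    try:
        for k in range(2, B + 1):
            P = _mul(k, P, a, n)
            if P is None:
                return None
    except FactorFound as e:
        if 1 < e.f < n:
            return e.f
    return None
```

On the curve y^2 = x^3 + 5x - 5 with starting point (1, 1), this recovers the factor 599 of 455839.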

On the direct product of periodic and quasiperiodic sequences

Andrei Romashchenko, LIRMM, Escape
mercredi 7 juin 2017

It is known (Avgustinovich, Fon-Der-Flaass, Frid) that a direct product of any periodic bi-infinite sequence $X$ with any quasiperiodic (weakly or strongly recurrent) sequence $Y$ is also quasiperiodic (weakly or strongly recurrent respectively). We will present several generalizations of this combinatorial lemma and discuss its applications in the study of multidimensional SFT.

Probabilistic analysis of randomness: Sturmian words and continued fractions

Eda Cesaratto, Universidad Nacional de General Sarmiento, Buenos Aires, Argentine
mercredi 24 mai 2017

(Joint session with the Algo-Calcul division seminar)

Sturmian words and continued fractions are representations of numbers that share the property of being codings of orbits of dynamical systems.
Probabilistic analysis of parameters, such as the frequency of digits, in the continued fraction expansion of real numbers is now a classical subject of research.

Sturmian words are related to the celebrated Kronecker sequence K(x) of the fractional parts of the multiples of a real x; contrary to the continued fraction context, probabilistic analyses are not frequent in the literature.

We perform a probabilistic study of the well-known discrepancy and of the Arnold measure of the sequence K(x), both of which can be viewed as measures of randomness. This study gives insight into the pseudo-randomness of the sequence K(x) for a random input x. We consider two main cases, the case where the ``input'' x is a random real and the case where x is a random rational, and exhibit similarities between them.

Joint work with Brigitte Vallée (CNRS, U. de Caen)

Randomness and Kolmogorov extractors with multiple sources

Daniil Musatov, Moscow Institute of Physics and Technology
mercredi 17 mai 2017

A randomness extractor is a function that produces "better" or "new" randomness from weak random sources. Extractors with multiple sources get two (or more) independent random variables with sufficiently high min-entropy and output a random variable close to uniform. A Kolmogorov extractor is a function that formalizes the same notion in terms of Kolmogorov complexity. Namely, it gets two strings with sufficiently high Kolmogorov complexity and sufficiently small mutual information and produces a string with complexity close to its length. Hitchcock, Pavan and Vinodchandran have shown that randomness and Kolmogorov extractors are essentially the same objects: each extractor of one type is also an extractor of the other type with slightly worse parameters. Recently, new breakthrough explicit constructions of randomness extractors were invented by Chattopadhyay, Zuckerman and Li. We discuss their applications to the theory of Kolmogorov extraction. In particular, the new extractor for three sources appears to be a Kolmogorov extractor for time-bounded Kolmogorov complexity. As far as the speaker knows, the case of three sources was not previously considered explicitly. For the more conventional case of two sources we probably need even more sophisticated constructions.

The point decomposition problem in hyperelliptic Jacobian varieties

Alexandre Wallet, LIP, ENS Lyon
mercredi 17 mai 2017

The Discrete Logarithm Problem (DLP) in abelian varieties defined over finite fields is believed to be hard in general. For instance, its instantiation on elliptic curves underlies the security of well-known cryptographic primitives. Indeed, no efficient method is currently known for computing elliptic discrete logarithms, except for very restricted classes of elliptic curves. Another interesting family of abelian varieties is that of the Jacobian varieties associated with algebraic curves. On the one hand, more efficient algorithms are known for solving the DLP on these structures; on the other hand, it is sometimes possible to transfer an elliptic DLP to a DLP in a Jacobian variety, which can have implications for the choice of cryptographic parameters.

The talk will present results improving the algorithms for computing discrete logarithms in Jacobian varieties of hyperelliptic curves, which are the curves closest to elliptic curves. In particular, the improvements obtained would allow the complete realization of a significant discrete logarithm computation (184 bits) on a "binary" curve of genus 2. Such a computation was previously out of reach, and compares to recent records (EuroCrypt 2017) for the DLP over finite fields. I will also present some more theoretical work aiming to exploit additional structural properties, as well as its limits. Time permitting, I will also present a more general contribution to discrete logarithm computation for arbitrary algebraic curves.

Reconstruction algorithms for sums of affine powers

Timothée Pecatte, LIP, Lyon
mercredi 19 avril 2017

A sum of affine powers is an expression of the form $f(x) = a_1 (x - b_1)^{e_1} + \dots + a_s (x - b_s)^{e_s}$.
Although quite simple, this model is a generalization of two well-studied models: Waring decomposition and sparsest shift.
For these three models there are natural extensions to several variables,
but this talk will be focused on univariate polynomials.
We will present efficient algorithms for reconstructing the smallest expression for an input polynomial $f$.
These algorithms are built on techniques developed for proving lower bounds on the number of terms $s$ needed to represent a polynomial.

Infinite time machines and admissible ordinals

Bruno Durand, LIRMM, Escape
mercredi 8 mars 2017

To be announced

Incompressibility, Kolmogorov complexity and normal numbers

Alexander Shen, LIRMM, Escape
mercredi 1 mars 2017

A normal number is a real number in whose binary representation every
block of $k$ bits has the same limit frequency $2^{-k}$. Normal numbers
have a characterization (V. Becher, P. Heiber, 2013) in terms of
incompressibility by finite automata. However, this characterization
does not follow the standard scheme of algorithmic complexity. We
explain that it can be restated (and simplified) by considering
automata as decompressors, not compressors; this also gives simple
proofs of classical results about normal numbers (a normal number
remains normal when multiplied by a rational, etc.)
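As a quick empirical illustration of the definition (not of the automata characterization), one can estimate block frequencies in the binary Champernowne word, a standard example of a normal number; a Python sketch:

```python
from collections import Counter

def champernowne_bits(n_bits):
    """First n_bits digits of the binary Champernowne word 1 10 11 100 ...
    (the concatenated binary numerals; this word is normal in base 2)."""
    out = []
    k = 1
    while len(out) < n_bits:
        out.extend(bin(k)[2:])
        k += 1
    return "".join(out[:n_bits])

def block_frequencies(w, k):
    """Empirical frequency of each length-k block among the factors of w."""
    total = len(w) - k + 1
    counts = Counter(w[i:i + k] for i in range(total))
    return {b: c / total for b, c in counts.items()}
```

On a long finite prefix the four 2-bit blocks all appear with frequency close to 1/4, with a visible finite-length bias toward blocks containing 1 (every numeral starts with a 1); the bias vanishes only in the limit.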

A simple proof of (deterministic) #P-completeness of counting roots in finite fields

Aleksei Milovanov, Moscow State University
mercredi 25 janvier 2017

A counting problem is #P-complete if every problem in #P (counting objects that have some easily decidable property) can be reduced to it. This definition makes sense for different reduction notions. It was known for a long time (J. von zur Gathen, M. Karpinski and I. Shparlinski, Counting curves and their projections, Computational Complexity, 6, 64--99, 1996) that counting solutions of a polynomial equation P(x,y)=0 in F_{p^n} is #P-complete with respect to probabilistic reductions if the polynomial is given in a sparse representation (zero coefficients are omitted). We provide a simple proof that improves this result (the polynomials are univariate, and the reductions are deterministic).

Are there non-stochastic objects?

Nikolay K. Vereshchagin, Moscow State University
mercredi 25 janvier 2017

An object (encoded by a binary string x) is called stochastic if there is a simple plausible statistical explanation for it. As explanations we consider probabilistic Turing machines M whose running time is bounded by a polynomial of the length of the output. We call such a machine M a plausible explanation for x if every property of x recognized by a polynomial time deterministic Turing machine with a short program is shared by a non-negligible fraction of M's outputs. We call x stochastic if there is a plausible explanation M for x such that the program of M is short, i.e., the explanation is simple.

Are all objects stochastic? Perhaps not. For instance, structured objects like paintings might be non-stochastic. We will prove that under some complexity-theoretic assumption there are infinitely many objects that are not stochastic. The assumption states that there is a unary language in NP that is not recognized by any polynomial time probabilistic machine. (A joint work with A. Milovanov.)

An Algebraic Geometric Approach to Multidimensional Symbolic Dynamics

J. Kari, Univ. Turku, Finlande
mercredi 7 décembre 2016

We study low complexity multidimensional words and subshifts using tools of algebraic geometry. We express the word as a multivariate formal power series over integers and investigate the setup when there is an annihilating polynomial: a polynomial whose product with the power series is zero. We prove that the word must then be a sum of periodic words over integers, possibly with unbounded values. As a specific application of the method we obtain an asymptotic version of the well-known Nivat's conjecture.

Subtraction-free computations and cluster algebras

Dima Grigoriev, Laboratoire Paul Painlevé, Université Lille 1
mercredi 7 septembre 2016

Using cluster transformations we design subtraction-free algorithms
for computing Schur polynomials and the generating polynomials of
spanning trees and arborescences. The latter provides an exponential
complexity gap between circuits admitting the arithmetic operations
$+, \, \times, \, /$ versus $+,\, \times$. In addition, we establish
an exponential complexity gap between circuits admitting $+,\, -,\,
\times,\, /$ versus $+, \, \times, \, /$. Together with V. Strassen's
result on "Vermeidung von Divisionen", this closes a long-standing
problem on the comparative complexity power of all possible subsets
of the operations $+,\, -,\, \times,\, /$.

(a joint work with S. Fomin, G. Koshevoy)

Determining sets of quasiperiods of infinite words

Guilhem Gamard, Escape, LIRMM
mercredi 13 juillet 2016

A word is quasiperiodic if it can be obtained by concatenations and
overlaps of a smaller word, called a quasiperiod. Based on links
between quasiperiods, right special factors and square factors, we
introduce a method to determine the set of quasiperiods of a given
right infinite word. Then we study the structure of the sets of
quasiperiods of right infinite words and, using our method, we
provide examples of right infinite words with extremal sets of
quasiperiods (no quasiperiod is quasiperiodic, all quasiperiods
except one are quasiperiodic, \ldots). Our method is also used to
provide a short proof of a recent characterization of quasiperiods
of the Fibonacci word. Finally we extend this result to a new
characterization of standard Sturmian words using a property of
their sets of quasiperiods.
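The notion of quasiperiod can be illustrated with a naive checker (a brute-force sketch, not the method of the talk): a quasiperiod of w is necessarily a prefix of w whose occurrences, overlaps allowed, cover all of w.

```python
def is_quasiperiod(q, w):
    """Does q cover w, i.e. do the occurrences of q in w (overlaps
    allowed) leave no position of w uncovered?"""
    if not q or len(q) > len(w):
        return False
    covered = 0                      # w[:covered] is covered so far
    i = w.find(q)
    while i != -1:
        if i > covered:
            return False             # a gap before this occurrence
        covered = i + len(q)
        i = w.find(q, i + 1)
    return covered == len(w)

def quasiperiods(w):
    """All quasiperiods of the finite word w; a quasiperiod is
    necessarily a proper prefix of w, so testing prefixes suffices."""
    return [w[:k] for k in range(1, len(w)) if is_quasiperiod(w[:k], w)]
```

For example, on the Fibonacci-word prefix "abaababa", the only quasiperiod found is "aba" (occurrences at positions 0, 3 and 5 cover the whole word).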

Tight upper bound on splitting by linear combinations for pigeonhole principle

Vsevolod Oparin, St. Petersburg University of the Russian Academy of Sciences.
mercredi 29 juin 2016

SAT is the well-known problem of deciding whether a given Boolean formula has a satisfying assignment. The main technique for solving SAT is the family of DPLL algorithms. A usual DPLL algorithm chooses a single variable and an order of values to substitute, and runs recursively on the simplified formulas, searching for a satisfying assignment. We consider an extension of the algorithm which chooses not a single variable but a linear form modulo 2. It is known that formulas encoding the pigeonhole principle on $n$ pigeons and $m$ holes ($PHP_n^m$) are still hard even for linear splitting and require $2^{\Omega(n)}$ running time. The known bound for a usual DPLL algorithm without linear splittings is $2^{\Theta(n \log n)}$. In this talk we show that $PHP_n^m$ can be solved in $2^{O(n)}$ time with linear splittings and, as a corollary, that formulas encoding the perfect matching principle on graphs with $n$ vertices can also be solved in $2^{O(n)}$ time.

A Linear Acceleration Theorem for 2D Cellular Automata on all Complete Neighborhoods

Anaël Grandjean, Escape, LIRMM
mercredi 29 juin 2016

Linear acceleration theorems are known for most computational models. Although such results have been proved for two-dimensional cellular automata working on specific neighborhoods, no general construction was known. We present here a technique of linear acceleration for all two-dimensional languages recognized by cellular automata working on complete neighborhoods.

Avoidability of formulas with two variables

Pascal Ochem, LIRMM, AlGCo
mercredi 22 juin 2016

In combinatorics on words, a word $w$ over an alphabet $\Sigma$ is said to avoid a pattern $p$ over an alphabet $\Delta$ of variables if there is no
factor $f$ of $w$ such that $f = h(p)$ where h : $\Delta^* \to \Sigma^*$ is a non-erasing
morphism. A pattern $p$ is said to be $k$-avoidable if there exists an infinite
word over a $k$-letter alphabet that avoids $p$. We consider the patterns
such that at most two variables appear at least twice, or equivalently, the
formulas with at most two variables. For each such formula, we determine
whether it is $2$-avoidable.
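Pattern avoidance can be tested naively on short words: a word contains an instance of a pattern if some factor is the image of the pattern under a non-erasing morphism. A brute-force Python sketch (exponential in general, for illustration only):

```python
def matches(pattern, word, assignment=None):
    """Is word = h(pattern) for some non-erasing morphism h?  Variables
    already bound in `assignment` must keep their images (backtracking)."""
    if assignment is None:
        assignment = {}
    if not pattern:
        return not word
    v = pattern[0]
    if v in assignment:
        img = assignment[v]
        return word.startswith(img) and matches(pattern[1:], word[len(img):], assignment)
    # Leave at least one letter for each remaining pattern position.
    for k in range(1, len(word) - len(pattern) + 2):
        assignment[v] = word[:k]
        if matches(pattern[1:], word[k:], assignment):
            return True
    assignment.pop(v, None)
    return False

def contains_pattern(word, pattern):
    """Does some factor of word realize the pattern?"""
    n = len(word)
    return any(matches(pattern, word[i:j], {})
               for i in range(n) for j in range(i + 1, n + 1))
```

For instance, "abcabc" contains the pattern $AA$ (a square, via $A \mapsto abc$), while the square-free word "abcacb" does not.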

Logic for Algorithms

Rod Downey, Victoria University, Wellington, New Zealand
mercredi 15 juin 2016

Graphs and similar objects which can be coded by reasonable parse
languages allow for algorithmic metatheorems that demonstrate
parameterized tractability. This talk will review some of the high
points in this area. It will also discuss recent applications of this
idea to low-dimensional topology by the author and Benjamin Burton.

An introduction to uniform computation: leader election on cellular automata with periodic input

Nicolas Bacquey , GREYC, Caen
mercredi 11 mai 2016

All the usual models of computation feature a "singularity" that makes it possible to start a computation: the head of a Turing machine, an initial state, a cell at the border of a cellular automaton configuration... Is it possible to define a notion of computation on a model stripped of as many of these singularities as possible? How should the input be presented, and how should a result be read? How should complexity be defined?

We will attempt to answer these questions in the model of cellular automata over periodic configurations, a model in which both the data and the program are uniform in space. More precisely, we will present a family of algorithms performing leader election on these periodic configurations. By definition it is impossible to elect a single cell of the configuration; the algorithms presented use techniques that turn spatial differences into temporal differences, inducing symmetry breaking. In this way, one manages to elect a single equivalence class of cells of the configuration.

On Machines and Melodies - An Invitation to Infinitary Computability

Merlin Carl, Université de Constance Allemagne
mercredi 27 avril 2016

Turing computability provides an intuitive and attractive formal
framework for analyzing which mathematical objects
and functions can be generated by applying a certain mechanical
procedure finitely many times. However, in mathematical practice,
we often encounter examples of objects generated by infinite processes
involving limits, such as adding reals, computing with limits of real
sequences, forming direct limits of directed systems, hulls, closures
etc. This motivates the study of infinitary computations that can account
for the limit phenomenon in such processes and should provide
a framework for extending results from recursion theory to infinitary
mathematical objects.

I will introduce some of the most prominent machine models of
infinitary computability and give an overview of results obtained about
them.

Stable states of perturbed Markov chains

Stéphane Le Roux, Université Libre de Bruxelles
vendredi 8 avril 2016

Given an infinitesimal perturbation of a discrete-time finite Markov chain, we seek the states that are stable despite the perturbation, i.e. the states whose weights in the stationary distributions can be bounded away from $0$ as the noise fades away. Chemists, economists, and computer scientists have been studying irreducible perturbations built with exponential maps. Under these assumptions, Young proved the existence of and computed the stable states in cubic time. We fully drop these assumptions, generalize Young's technique, and show that stability is decidable as long as the relation $f \in O(g)$ is. Furthermore, if the perturbation maps (and their multiplications) satisfy $f \in O(g)$ or $g \in O(f)$, we prove the existence of and compute the stable states and the metastable dynamics at all time scales where some states vanish. Conversely, if the big-O assumption does not hold, we build a perturbation with these maps and no stable state. Our algorithm also runs in cubic time despite the general assumptions and the additional work. Proving the correctness of the algorithm relies on new or rephrased results in Markov chain theory, and on algebraic abstractions thereof.
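A minimal numerical illustration of the phenomenon (a toy two-state example, not the algorithm of the talk): with perturbation maps eps^2 and eps, only state 0 is stable, since its stationary weight tends to 1 as eps tends to 0 while the other vanishes.

```python
import numpy as np

def stationary(P):
    """Stationary distribution of the stochastic matrix P, via least
    squares on the system pi P = pi, sum(pi) = 1."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def perturbed_chain(eps):
    """Two absorbing states; the noise leaves state 0 with probability
    eps^2 but leaves state 1 with probability eps."""
    return np.array([[1 - eps**2, eps**2],
                     [eps,        1 - eps]])

# Detailed balance gives pi_1 / pi_0 = eps, so as eps -> 0 the weight
# of state 0 tends to 1: state 0 is the unique stable state.
```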

Fast computation of minimal interpolation bases in Popov form for arbitrary shifts

Vincent Neiger, LIP, ENS Lyon
mercredi 30 mars 2016

Recently, [Zhou - Labahn, 2012] gave an algorithm for Hermite-Padé approximation that can efficiently compute a polynomial matrix whose rows form a basis of all solutions and whose degrees are minimal for some prescribed shift; for efficiency, they require an assumption on the shift. In this talk, we propose an algorithm which is similarly efficient, makes no assumption on the shift, and deals with the more general problem of computing minimal interpolation bases. Those represent bases of solutions for the rational interpolation problem in [Beckermann - Labahn, 2000]; this problem encompasses for example Hermite-Padé approximation, M-Padé approximation, and the bivariate interpolation step in the Guruswami-Sudan algorithm.

Joint work with Claude-Pierre Jeannerod, Éric Schost, and Gilles Villard.
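As background, the simplest instance of this family of problems is classical Padé approximation: find p/q with q(0) = 1 and q·f ≡ p mod x^(m+n+1), which reduces to a small linear system for the denominator coefficients. An exact-arithmetic Python sketch (a textbook method, not the algorithm of the talk):

```python
from fractions import Fraction

def solve(A, b):
    """Gaussian elimination over the rationals (A square, invertible)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        pv = M[col][col]
        M[col] = [x / pv for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

def pade(c, m, n):
    """[m/n] Pade approximant p/q (with q(0) = 1) of the power series
    with coefficients c[0..m+n], via q * f = p mod x^(m+n+1)."""
    # Unknowns q_1..q_n: for k = m+1..m+n,  c_k + sum_j q_j c_{k-j} = 0.
    A = [[(c[k - j] if k - j >= 0 else Fraction(0)) for j in range(1, n + 1)]
         for k in range(m + 1, m + n + 1)]
    b = [-c[k] for k in range(m + 1, m + n + 1)]
    q = [Fraction(1)] + solve(A, b)
    p = [sum(q[j] * c[k - j] for j in range(min(k, n) + 1)) for k in range(m + 1)]
    return p, q
```

For instance, for exp(x) the [2/2] approximant is (1 + x/2 + x^2/12) / (1 - x/2 + x^2/12).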

Code generator matrices as entropy extractors

Alessio Meneghetti, Universita di Trento, Italia
mercredi 17 février 2016

There is a connection between the Walsh spectrum of the output of a binary random number generator (RNG) and the bias of its individual output bits. This can be used to show how previously known bounds on the performance of linear binary codes as entropy extractors can be derived by viewing generator matrices as selectors of a subset of that spectrum. A connection with the code's weight distribution can also be proved, which implies a new bound on the performance of a linear extractor.
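The bias-reduction effect behind such extractors can be checked exactly on a tiny example: each output bit of x -> Gx (mod 2) is the XOR of the input bits selected by a row of G, and by the piling-up lemma a row of Hamming weight k turns input bias eps into output bias of magnitude 2^(k-1) eps^k. A small Python sketch (illustrative only, not the bounds of the talk):

```python
from fractions import Fraction
from itertools import product

def output_bias(G, eps):
    """Exact bias P(output bit = 1) - 1/2 of each bit of x -> G x (mod 2)
    when the input bits are i.i.d. with P(1) = 1/2 + eps."""
    n = len(G[0])
    p1 = Fraction(1, 2) + eps
    probs = [Fraction(0)] * len(G)
    for x in product([0, 1], repeat=n):
        w = sum(x)
        px = p1**w * (1 - p1)**(n - w)   # probability of this input
        for i, row in enumerate(G):
            if sum(g * xi for g, xi in zip(row, x)) % 2:
                probs[i] += px
    return [p - Fraction(1, 2) for p in probs]

# Each row below XORs three input bits, so an input bias of 1/10 shrinks
# to an output bias of 4 * (1/10)^3 = 1/250 (piling-up lemma).
```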

Efficient Universal Computation by Greedy Molecular Folding

Nicolas Schabanel, LIAFA, Paris
mercredi 10 février 2016

Joint work with: Cody Geary, Pierre-Étienne Meunier and Shinnosuke Seki

We introduce and study the computational power of Oritatami, a theoretical model to explore greedy molecular folding, in which the molecule begins to fold before its production is complete. This model is inspired by our recent experimental work demonstrating the construction of shapes at the nanoscale by folding an RNA molecule during its transcription from an engineered sequence of synthetic DNA. While predicting the most likely conformation is known to be NP-complete in other models, Oritatami sequences fold optimally in linear time. Although our model uses only a small subset of the mechanisms known to be involved in molecular folding, we show that it is capable of efficient universal computation, implying that any extension of this model will have this property as well.

We develop several general design techniques for programming these molecules. Our main result in this direction is an algorithm, running in time linear in the sequence length, that finds a rule for folding the sequence deterministically into a prescribed set of shapes depending on its environment. This shows that the corresponding problem is fixed-parameter tractable, although we proved it is NP-complete in the number of possible environments. This algorithm was used effectively to design several key steps of our constructions.

On aperiodic tilings of the plane by polygons

Nikolai Vereshchagin, Moscow University
mercredi 20 janvier 2016

We consider some families of self-similar tilings of the plane by triangles, rectangles and hexagons. Each family we consider corresponds to a substitution scheme. We investigate which of those families of tilings can be defined by local rules.

On algorithms for integer factorization and discrete logarithms computation

Cyril Bouvier, IMB, Université de Bordeaux
mercredi 16 décembre 2015

In this talk, I will present some results obtained during my Ph.D. on integer factorization and discrete logarithm computation in finite fields. First, I will present the ECM algorithm for integer factorization and a new method to analyze the elliptic curves used in this algorithm by studying the Galois properties of division polynomials. Then, I will talk about the NFS algorithm for integer factorization and in particular the polynomial selection step, for which I will propose improvements of existing algorithms. Finally, I will talk about a step common to the NFS algorithm for integer factorization and to the NFS-DL and FFS algorithms for discrete logarithm computation: the filtering step. I will explain this step thoroughly and present an improvement, whose impact I will study using data from several computations of discrete logarithms and factorizations.

Kolmogorov complexity versions of busy beavers

Mikhail Andreev, Moscow
mercredi 16 décembre 2015

The busy beaver function was introduced by Tibor Rado as the maximum number that can be printed by a Turing machine with at most N internal states over the {0,1} alphabet. In complexity theory, an analogue of the busy beaver number was introduced by Gács as the maximum number with Kolmogorov complexity at most N. This function is defined up to an additive term in the argument (just as complexity is defined up to an additive term). One can use different types of Kolmogorov complexity to get different busy beaver functions. In this talk I am going to give formal definitions of the complexity busy beavers for plain and prefix-free Kolmogorov complexity, discuss some basic properties of these functions and the relations between them.

The halting problem and its approximation

Alexander Shen, Escape, LIRMM
mercredi 9 décembre 2015

Often we say that "for any reasonable programming language [this or that]". But what is "reasonable" here? There are several possible requirements; one of them is the s-m-n theorem, or Gödel property: any other programming language can be efficiently translated into it. It is not obvious that this guarantees the invariance of the properties we care about: for example, it is not obvious (but true) that for every program we can effectively write infinitely many equivalent programs (we can add comments in a standard programming language, but the translator could start by deleting them). The Rogers equivalence theorem helps by proving that every two Gödel programming languages are related by a computable bijection of programs.

There are some properties for which this is not enough. For example, one may ask whether the halting problem is generically solvable, i.e., whether there is an algorithm that correctly solves it for a fraction of inputs that asymptotically tends to $1$. Here for some programming languages the answer is trivial: e.g., if we require that a program file containing "control-Z" immediately terminates, then asymptotically almost all programs terminate. (A more interesting result of the same type: most Turing machines with N states on a one-sided tape fall off the tape and terminate (Hamkins, Myasnikov).) Still, there is a natural requirement on the programming language ("efficient optimality", the existence of a translator that increases the length only by $O(1)$), and a corresponding invariance result due to Schnorr.

For the halting problem, however, a weaker property of optimality (without the efficiency requirement) is enough, and there is a simple reason related to Chaitin-like $\Omega$-numbers: for an optimal programming language, the number of terminating programs of length at most $n$ has complexity $n-O(1)$. This explains why the halting problem is not generically solvable for any optimal language.

Valuative capacity of a set of infinite words

Youssef Fares, LAMFA, Amiens
mercredi 25 novembre 2015

In 1997 Manjul Bhargava gave a generalization of the notion of factorial. This notion is linked to the notion of $p$-ordering, also introduced by Bhargava. Using the latter, one can associate with each subset $E$ of $Z_p$ a sequence called the characteristic sequence of $E$. This sequence is, in general, not easy to determine.
Viewing a set $M$ of infinite words (over a finite alphabet) as a subset of $Z_p$, one can study the characteristic sequence of $M$ and its asymptotic limit.
The goal of the talk is to recall all these notions, to give simple properties related to them, and to study two particular cases:
1) the case of a subshift of finite type;
2) the case of a set of Sturmian words.

A combinatorial version of the Slepian-Wolf problem

Andrei Romashchenko, Escape, LIRMM
mercredi 14 octobre 2015

(joint work with Daniyar Chumbalov, EPFL)

We study the following combinatorial version of the Slepian-Wolf coding scheme. Two isolated Senders are given binary strings $X$ and $Y$ respectively; the length of each string is equal to $N$, and the Hamming distance between the strings is bounded by some threshold $R$. The Senders compress their strings and communicate the results to the Receiver. Then the Receiver must reconstruct both strings $X$ and $Y$. The aim is to minimise the lengths of the transmitted messages.

This problem lies on the border between coding theory and communication complexity, and to cope with it we need tools from both fields. We prove nontrivial lower bounds for a deterministic variant of this problem with syndrome-coding, i.e., for the case where at least one of the Senders uses linear encoding. For the probabilistic setting (with randomised parties) we find the theoretical optimum of communication complexity and construct an effective (polynomial time) protocol with optimal lengths of messages.
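The syndrome-coding idea can be illustrated in the simplest asymmetric setting (one sender transmits its string in full, the other only a syndrome), here with the [7,4] Hamming code and distance threshold R = 1; a Python sketch (an illustration, not the protocol of the talk):

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code: column j is the binary
# expansion of j + 1, so the syndrome of a single-bit error at position
# j is the binary expansion of j + 1.
H = np.array([[(j + 1) >> b & 1 for j in range(7)] for b in range(3)])

def syndrome(v):
    return H.dot(v) % 2

def receiver(x, syn_y):
    """Recover y from x (transmitted in full) and the 3-bit syndrome of
    y, assuming the Hamming distance between x and y is at most 1."""
    s = (syndrome(x) + syn_y) % 2          # = H (x xor y): syndrome of the error
    pos = int(s.dot(1 << np.arange(3)))    # error position + 1, or 0 if none
    y = x.copy()
    if pos:
        y[pos - 1] ^= 1
    return y
```

One sender thus spends 7 bits and the other only 3, instead of 7 + 7 for the naive scheme.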

Doubled patterns are 3-avoidable

Pascal Ochem, AlGCo, LIRMM
mercredi 30 septembre 2015

A pattern is doubled if all of its variables appear at least twice.
In the last talk, we conjectured that the so-called "power series method" and "entropy compression method" are equivalent.
The conjecture is now proven and we use the unified method to show
that most of the doubled patterns are avoidable over the 3-letter alphabet.
There remain 10 doubled patterns that are shown to be 3-avoidable with the use of morphisms.

Tabulating Class Groups of Quadratic Fields

Michael Jacobson, Université de Calgary
mercredi 23 septembre 2015

Class groups of quadratic fields have been studied since the time of Gauss, and in modern times have been used in applications such as integer factorization and public-key cryptography. Tables of class groups are used to provide valuable numerical evidence in support of a number of unproven heuristics and conjectures, including those due to Cohen and Lenstra. In this talk, we discuss recent efforts to extend existing, unconditionally correct tables of both imaginary and real quadratic fields.

Note: this Eco-Escape seminar talk will take place within the broader setting of the Algo-Calcul division seminar
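For negative discriminants, the smallest entries of such tables can be checked naively by counting reduced primitive binary quadratic forms; a brute-force Python sketch (a textbook method, not the tabulation techniques of the talk):

```python
from math import gcd

def class_number(D):
    """Class number h(D) of a negative discriminant D, computed as the
    number of reduced primitive forms (a, b, c) with b^2 - 4ac = D,
    |b| <= a <= c, and b >= 0 whenever |b| = a or a = c."""
    assert D < 0 and D % 4 in (0, 1)
    h = 0
    a = 1
    while 3 * a * a <= -D:                  # reduction forces a <= sqrt(|D|/3)
        for b in range(-a, a + 1):
            if (b * b - D) % (4 * a):
                continue
            c = (b * b - D) // (4 * a)
            if c < a:
                continue
            if b < 0 and (-b == a or a == c):
                continue                    # (a, -b, c) is the reduced representative
            if gcd(gcd(a, abs(b)), c) == 1:
                h += 1
        a += 1
    return h
```

For example, D = -23 has the three reduced forms (1, 1, 6), (2, 1, 3), (2, -1, 3), so h(-23) = 3.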

Randomness deficiencies: quantifying randomness

Gleb Novikov, Moscow
mercredi 26 août 2015

Algorithmic randomness divides binary sequences into random (believable as outputs of coin tossing) and non-random. Adding a bit to a random sequence yields a random sequence, so adding a billion zeros also preserves randomness. However, the resulting sequence is ``less random''. Randomness deficiency captures this difference: a sequence is random iff its randomness deficiency is finite, but the deficiency can be large or small. Classical results (e.g., van Lambalgen's theorem on pairs of random sequences) have quantitative versions that are inequalities involving randomness deficiency. We consider several examples of this kind.

Antistochastic strings and their applications

Aleksei Milovanov, Moscow
mercredi 26 août 2015

Kolmogorov complexity of a string is defined as the minimal length of a program that generates it. This shortest program, however, could require a lot of time to produce the string, while longer programs may be faster. Antistochastic strings are strings for which this trade-off occurs in the strongest form. It turns out that they have interesting properties that can be used to prove some results in algorithmic information theory and coding theory.

Non-computation times of infinite time Turing machines

Sabrina Ouazzani, Escape, LIRMM
mercredi 24 juin 2015

A general presentation of infinite time Turing machines, followed by a study of some ordinal properties related to clockability and admissibility.

Quasiperiodicity and non-computability in tilings

Andrei Romashchenko, Escape, LIRMM
mercredi 20 mai 2015

We study tilings of the plane that combine strong properties of different nature: combinatorial (quasi-periodicity) and algorithmic (non-recursivity). Thus, we come up with a tile set that accepts only quasi-periodic and non-recursive tilings, and discuss some generalisations of this construction.

Synchronization and cellular automata

Gaétan Richard, Greyc, Caen
mercredi 13 mai 2015

With a view to studying the robustness of complex systems, an interesting phenomenon is synchronization: starting from any global state, all the elements manage to synchronize. In this talk, we rely on the discrete and regular model of one-dimensional cellular automata in order to sketch a formal definition of this phenomenon and to explore under which conditions it can arise.

An Introduction to Hyperelliptic Curve Arithmetic

Renate Scheidler, University of Calgary, Canada
mercredi 6 mai 2015

Elliptic and low genus hyperelliptic curves represent a very suitable setting for public key cryptography, due to their efficient arithmetic and excellent security properties. Elliptic curves support simpler and faster arithmetic than their hyperelliptic counterparts, but this is offset by the fact that genus 2 hyperelliptic curves achieve the same level of security with a base field half the size of that used for elliptic curves. Our main protagonist in this primer on hyperelliptic curves will be the Jacobian, which is the hyperelliptic analogue of the group of points on an elliptic curve. We will introduce a computationally highly useful representation of the elements of the Jacobian, along with their two-step arithmetic consisting of addition and reduction. This talk is targeted at an audience with no or minimal exposure to elliptic curves and no prior knowledge of algebraic curves in general.

Construction of hyperelliptic curves with complex multiplication

Sorina Ionica, Université de Bordeaux 1
mercredi 22 avril 2015

The security of curve-based cryptographic protocols relies on the hardness of the discrete logarithm problem. To study this problem, we consider the following graph: the nodes are genus-3 curves and the edges are isogenies between the Jacobians of these curves. In this graph, called the isogeny graph, there are two kinds of nodes: hyperelliptic nodes, which are strong because the discrete logarithm is hard there, and non-hyperelliptic nodes, which are weak because efficient attacks are available for them. In this talk, we give a method to construct genus-3 hyperelliptic curves for cryptography and present an approach to studying the genus-3 discrete logarithm via the isogeny graph.

The Collatz conjecture aka the 3n+1 problem

Mario Weitzer, Montanuniversität Leoben, Autriche
mercredi 15 avril 2015

The subject of this talk is the famous Collatz conjecture, also known as the 3n+1 problem. Let $n$ be any positive integer and apply the following transformation: if $n$ is even, divide it by 2; if $n$ is odd, multiply it by 3 and add 1. The Collatz conjecture is the unproven observation that no matter what number $n$ one starts with, repeated application of this transformation always eventually yields 1. Though extremely easy to formulate, this problem has been open since 1937. In this talk we give an introduction to the problem, present basic facts and interpretations, and deduce and discuss first nontrivial results.
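The transformation is trivial to state in code; here is a minimal sketch counting the number of steps needed to reach 1 (that the loop terminates for every input is exactly the conjecture):

```python
def collatz_steps(n):
    # Number of applications of the 3n+1 map until n reaches 1.
    # Termination for every n >= 1 is precisely the Collatz conjecture.
    assert n >= 1
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps
```

For instance, starting from 27 the trajectory climbs above 9000 before falling back to 1 after 111 steps.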

Maximum-entropy stochastic processes for timed automata

Nicolas Basset, Oxford, Royaume-Uni
mercredi 1 avril 2015

We first recall the question that is later generalized to timed automata: how can one generate the paths of a graph at random, in a quasi-uniform way?
One solution to this question is based on the maximum-entropy stochastic process described by Shannon (known as the Parry measure in symbolic dynamics and ergodic theory).
We then generalize the notions of stochastic process and entropy to timed automata.
As in the untimed case, we construct a maximum-entropy stochastic process for a timed automaton.
This stochastic process is defined using the spectral attributes (spectral radius, eigenfunctions) of the operator of Asarin and Degorre, a timed analogue of the adjacency matrix of a graph.
This process enables quasi-uniform random generation of the runs of a timed automaton.
One application is the (quasi-)uniform random generation of permutations whose word of successive ascents and descents belongs to a regular language.

Game semantics for higher-order complexity

Hugo Férée, LORIA, Nancy
mercredi 25 mars 2015

Although the notions of complexity for functions of order 1 $(N \to N)$ and order 2 $((N \to N) \to N)$ are well defined and have numerous characterizations, this is not the case for functions of order 3 and above.
We propose to approach the complexity problem using game semantics, that is, by describing a computation as a dialogue between the computed object and its inputs. We then define the size and complexity of strategies for certain so-called sequential games. This applies in particular to the games associated with the higher-order language PCF, which allows us to define a notion of complexity for these functions. The resulting class of polynomial-time computable functions satisfies in particular the minimal properties expected of an analogue of FPTIME for higher-order functions.

Improved Parallel Gaussian Elimination for Gröbner Bases Computations in Finite Fields

Brice Boyer, LIP6, UPMC
mercredi 25 mars 2015

We present a GPLv2 open source C library for linear algebra specialized for eliminating matrices generated during Gröbner Bases computations in algorithms like F4 or F5. We improve on the initial ideas of Faugère and Lachartre (FGL).

Our approach takes even more advantage of the very special structure the corresponding matrices have: quasi unit-triangular sparse matrices with patterns in the data. Optimizing this reduction step is crucial for the overall Gröbner Bases computation.

We first present improved data structures for storing these matrices in binary format, namely by compressing the repeated data in the rows and the column indexes, before gzip-ing the result. We can save up to an order of magnitude in space, allowing us to produce and share a large database of such matrices for benchmarking and testing purposes.

We show efficient blocked data structures for computing the reduced forms with specialized axpy and trsm operations, that take advantage of the patterns in the matrices and their sparsity. For instance, a special multiline storage allows cache friendly storage of sparse rows. We also reduce the number of operations, in a parallel friendly fashion, by changing the order of the operations in the elimination and by not computing the full row echelon form.

Finally, we present experimental results for sequential and parallel computations on NUMA architectures. With our new implementation we get a 5-10% speed-up for the sequential algorithm, depending on the rank. We also get better scaling, up to 32 (non hyper-threaded) cores instead of 16: we obtain speed-ups around $14$ to $16$ for bigger benchmarks. We also reduce the memory used during the computation by more than a factor of two.

The whole alphabet (and then some) agree on a key in one round: making multilinear maps practical

Adeline Langlois, LASEC, EPFL, Lausanne, Suisse
mercredi 18 mars 2015

Multilinear maps have become versatile tools for designing cryptographic schemes since a first approximate realisation candidate was proposed by Garg, Gentry and Halevi (GGH). In this talk, I will introduce the more efficient variant GGHLite. Then I will present the improvements we made to provide an efficient implementation of GGHLite.

While GGHLite represents an asymptotic improvement in parameter sizes compared to GGH, implementing it naively would still not allow one to instantiate it for non-trivial parameter sizes. We hence propose a strategy which further reduces parameter sizes, together with several technical improvements that allow for an efficient implementation. To demonstrate the performance of our results, we also provide an implementation of a non-interactive $N$-partite Diffie-Hellman key exchange and report on experimental results. For example, thanks to our improvements we were able to compute a multilinear map of multilinearity level $\kappa$ at security level $\lambda=80$.

This is a joint work with M. Albrecht, C. Cocis and F. Laguillaumie.

Sparse Gröbner Bases: the Unmixed Case

Pierre-Jean Spaenlehauer, LORIA, Nancy
mercredi 4 mars 2015

We present an analog of Gröbner bases for semigroup algebras, and we propose variants of the algorithms F5 and FGLM to compute them. These algorithms provide tools to compute the solutions of sparse systems of equations when all the polynomials share the same monomial support (unmixed case). When these monomials correspond to the points with integer coordinates in a normal lattice polytope and under regularity assumptions, we prove complexity bounds which depend on the combinatorial properties of this polytope. Our prototype "proof-of-concept" implementation shows speed-ups (more than 100 for some examples) compared to classical Gröbner bases software for some families of sparse systems.

Joint work with Jean-Charles Faugère and Jules Svartz.

Complexity and decomposition to the set of factors (joint work with J. Cassaigne, A. Frid and L. Q. Zamboni)

Svetlana Puzynina, Post-doc, Lyon et Sobolev institute, Novosibirsk
mercredi 25 février 2015

In this talk we introduce a new hierarchy of classes of languages and infinite words and its connection with complexity classes. Namely, we say that a language belongs to the class $L_k$ if each word in it can be written as a concatenation of $k$ words from a language $S$ such that the number of words of length $n$ in $S$ is bounded by a constant. The class of infinite words whose set of factors is in $L_k$ is denoted by $W_k$. In this talk we focus on the relations between the classes $W_k$ and the subword complexity of infinite words, defined as usual as the number of factors of the word of length $n$. In particular, we prove that the class $W_2$ coincides with the class of infinite words of linear complexity. On the other hand, although the class $W_k$ is included in the class of words of complexity $O(n^{k-1})$, this inclusion is strict for $k > 2$.
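As a concrete, illustrative instance of the linear-complexity class mentioned above (not an example from the talk), one can check the subword complexity of the Fibonacci word, a Sturmian word whose complexity is $n+1$:

```python
def fibonacci_word(length):
    # Prefix of the fixed point of the substitution 0 -> 01, 1 -> 0.
    w = "0"
    while len(w) < length:
        w = w.replace("0", "0_").replace("1", "0").replace("_", "1")
    return w[:length]

def subword_complexity(w, n):
    # Number of distinct factors of length n occurring in w.
    return len({w[i:i + n] for i in range(len(w) - n + 1)})
```

On a long enough prefix, `subword_complexity(fibonacci_word(500), n)` equals `n + 1` for small `n`, exhibiting the linear growth characteristic of the class $W_2$.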

Computing roots of polynomials over finite fields

Bruno Grenet, ECO, LIRMM
mercredi 11 février 2015

Finding the roots of a polynomial with coefficients in a finite field is a fundamental problem of computer algebra; it serves as a building block for many algorithms in the field (factorization, interpolation, ...) and has applications, for instance, in cryptography and error-correcting codes. It is also one of the few problems known to be solvable in probabilistic polynomial time but not in deterministic polynomial time.

In my talk, I will survey the main families of algorithms known for this problem, before focusing on a new approach that we recently proposed. This approach is based on the Graeffe transform, a tool from numerical analysis, and is particularly well suited to certain finite fields. It yields new deterministic complexity bounds for the problem, as well as an implementation in the free software Mathemagix that is faster than existing software.
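The Graeffe transform itself is elementary: from $p$ it builds a polynomial whose roots are the squares of the roots of $p$, via $q(x^2) = (-1)^{\deg p}\, p(x)\, p(-x)$. A minimal sketch over the integers for readability (the talk applies it over finite fields):

```python
def poly_mul(p, q):
    # Coefficient lists, lowest degree first.
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def graeffe(p):
    # q(x^2) = (-1)^deg(p) * p(x) * p(-x): only even powers survive,
    # and the roots of q are the squares of the roots of p.
    p_neg = [-c if i % 2 else c for i, c in enumerate(p)]
    sign = -1 if (len(p) - 1) % 2 else 1
    return [sign * c for c in poly_mul(p, p_neg)[::2]]
```

For $p = x^2 - 5x + 6$ with roots 2 and 3, the transform yields $x^2 - 13x + 36$ with roots 4 and 9.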

Joint work with Grégoire Lecerf and Joris van der Hoeven.

Bisimulation games in modal logic

Valentin Shehtman, IITP, Moscow, Russia
mercredi 4 février 2015

Bisimulation games are modal logic analogues of classical Ehrenfeucht-Fraïssé games. They are closely related to bisimulations of Kripke models (transition systems) and can be used to study the expressive power of different logical languages and to prove completeness and finite model property theorems for logics and theories. In this talk we show how bisimulation games can simplify proofs of local finiteness of nonclassical propositional logics (or the corresponding varieties of algebras); a logic $L$ is called locally finite if for any finite $n$ there are finitely many formulas in $n$ variables up to equivalence in $L$. Well-known results of this kind are the theorems by Segerberg - Maksimova and Kuznetsov - Komori on local finiteness of modal and intermediate logics of finite depth. We propose various generalizations of these theorems and discuss further lines of research.

Rank-metric codes: duality and MacWilliams identities

Alberto Ravagnani, Université de Neuchâtel (Suisse)
mercredi 21 janvier 2015

Delsarte rank-metric codes are linear spaces of matrices of bounded rank and entries in a given finite field. A different definition of rank-metric code was proposed later by Gabidulin. Both Delsarte and Gabidulin codes have been recently applied in linear network coding for error correction in multicast communications. Gabidulin codes also have applications in (information theoretic) security. In this talk we compare the duality theories of Delsarte and Gabidulin codes, proving that the former can be regarded as a generalization of the latter. We also show how one can derive MacWilliams identities (explicit relations between a code and its dual) for Delsarte codes using a double-counting argument. The original proof by Delsarte is based on the theory of association schemes and regular semilattices. The result also applies to Gabidulin codes. As an application of the results to a classical problem in enumerative combinatorics, we provide a recursive formula for the number of rectangular matrices of given size, rank and h-trace over a finite field.
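To make the metric concrete, here is a small illustrative computation of rank distance over $\mathbf{F}_2$, with matrix rows encoded as bitmask integers; this only illustrates the distance itself, not the duality results of the talk:

```python
def gf2_rank(rows):
    # Rank over GF(2); each row is an int whose bits are the entries.
    rank = 0
    rows = list(rows)
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        low = pivot & -pivot  # lowest set bit of the pivot row
        # Clear that bit position from every remaining row containing it.
        rows = [r ^ pivot if r & low else r for r in rows]
    return rank

def rank_distance(a, b):
    # d(A, B) = rank(A - B); over GF(2), subtraction is XOR.
    return gf2_rank([x ^ y for x, y in zip(a, b)])
```

For instance, the 2x2 identity matrix is at rank distance 2 from the all-ones matrix.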

Computing modular polynomials in dimensions 1 and 2

Enea Milio, IMB, Université de Bordeaux 1
mercredi 17 décembre 2014

A modular polynomial of level $p$ in dimension 1 is a polynomial that parametrizes isomorphism classes of elliptic curves equipped with an isogeny of degree $p^2$. These polynomials have cryptographic applications, notably because they make it possible to construct elliptic curves with a prescribed number of points and to compute isogeny graphs. We first recall the various notions involved in the definition of these polynomials and then describe an evaluation/interpolation algorithm to compute them. We then give a generalization of this notion to dimension 2 and describe some results we have obtained.

On palindromic length

Gwenaël Richomme, Escape, LIRMM
mercredi 10 décembre 2014

The palindromic length of a finite word is the smallest number of palindromes in a factorization of that word into palindromes.
A.E. Frid, S. Puzynina and L.Q. Zamboni conjectured that every infinite word whose factors have bounded palindromic length is ultimately periodic. We will survey several aspects of this conjecture and prove it in a particular case where palindromic length is replaced by a greedily computed variant.
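To make the two notions concrete, here is a minimal sketch computing the palindromic length by dynamic programming, next to a prefix-greedy variant (an illustrative choice, not necessarily the exact greedy variant considered in the talk):

```python
def palindromic_length(w):
    # Smallest number of palindromes whose concatenation is w (O(n^2) DP).
    INF = float("inf")
    dp = [0] + [INF] * len(w)
    for i in range(1, len(w) + 1):
        for j in range(i):
            if w[j:i] == w[j:i][::-1]:
                dp[i] = min(dp[i], dp[j] + 1)
    return dp[len(w)]

def greedy_palindromic_length(w):
    # Greedy variant: repeatedly strip the longest palindromic prefix.
    count = 0
    while w:
        k = max(k for k in range(1, len(w) + 1) if w[:k] == w[:k][::-1])
        w, count = w[k:], count + 1
    return count
```

On w = "abaab" the optimal factorization a.baab has length 2 while the greedy one aba.a.b has length 3, so the two quantities genuinely differ.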

On discrete compressed sensing problem

Grigori Kabatianski, Institute for Information Transmission Problems, Moscow
mercredi 26 novembre 2014

The compressed sensing (CS) problem can be formulated in the following way [1,2]: find a real vector $x \in \mathbf{R}^n$ from a minimal number of linear measurements, i.e., from the equation $s=Hx^T+e$, where the vector $x$ is $T$-sparse, i.e. $||x||_0 \leq T$, and the measurement-error vector $e$ is small enough. The discrete version of the CS problem replaces the restriction that $||e||_2$ is small enough with the restriction that the vector $e$ is $L$-sparse. A particular case of this problem is, for instance, the famous Erdős-Ulam problem of "searching with a lie", also known as the "problem of 20 questions" [3]. In the talk we give a solution of the discrete CS problem in the same way as RS codes provide the solution of the main coding theory problem in the cases when the code length is at most the field's cardinality. It is worth mentioning that RS codes are not good for the discrete CS problem, as they require redundancy at least $4TL$; see the modern analysis [5] of Prony's algorithm [4]. We have found an optimal solution which uses only $2(T+L)$ redundancy. In conclusion it will be shown how these results can be applied to a restricted version of the ordinary CS problem. The talk is based on joint research of the speaker with Serge Vladuts, Cedric Tavernier and Valery Lomakov.

References

[1] D.L. Donoho, "Compressed sensing", IEEE Transactions on Information Theory, v. 52 (4), pp. 1289-1306, 2006.

[2] E.J. Candes, T. Tao, "Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?", IEEE Transactions on Information Theory, v. 52 (12), pp. 5406-5425, 2006.

[3] A. Pelc, "Solution of Ulam's Problem on Searching with a Lie", Journal of Combinatorial Theory, Ser. A 44, pp. 129-140, 1987.

[4] R. Prony, "Essai expérimental et analytique sur les lois de la dilatabilité de fluides", J. de l'École Polytechnique 1, pp. 24-76, 1795.

[5] M.T. Comer, E.L. Kaltofen, C. Pernet, "Sparse Polynomial Interpolation and Berlekamp-Massey Algorithms That Correct Outlier Errors in Input Values", in Proc. ISSAC 2011, pp. 138-145.

Randomizing Scalar Multiplication Using Exact Covering Systems of Congruences

Laurent Imbert, LIRMM, CNRS
mercredi 19 novembre 2014

Since the pioneering work of Kocher, side-channel attacks have been taken very seriously by researchers and designers of cryptographic algorithms. By attacking implementations instead of the underlying mathematical problems, these attacks have proved very powerful against both private- and public-key algorithms.

In this talk, I will briefly sketch the basics of these attacks and present some common countermeasures. I will then present a novel family of scalar multiplication algorithms (intensively used in elliptic curve cryptography) based on covering systems of congruences. At the lower level, the algorithms are very regular; they consist of a repeating sequence of identical instructions. At a higher level, however, they are uniformly randomized and thus behave very irregularly, making simple, differential and horizontal attacks much more difficult.
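The covering-systems construction is the talk's contribution; as background, a classical example of a regular scalar multiplication is the Montgomery ladder, which performs one addition and one doubling per bit regardless of the bit's value. A sketch using integer addition as a stand-in for the elliptic-curve group law:

```python
def ladder(k, P, add=lambda a, b: a + b, dbl=lambda a: 2 * a, zero=0):
    # Montgomery ladder: compute k*P with one add and one double per bit,
    # independent of the bit value -- the regularity that side-channel
    # countermeasures aim for. Invariant: R1 - R0 == P at every step.
    R0, R1 = zero, P
    for bit in bin(k)[2:]:
        if bit == '1':
            R0, R1 = add(R0, R1), dbl(R1)
        else:
            R1, R0 = add(R0, R1), dbl(R0)
    return R0
```

In a real curve implementation, `add` and `dbl` would be the point addition and doubling formulas; the integer group here is only for illustration.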

This is a joint work with Eleonora Guerrini (LIRMM) and Theo Winterhalter (ÉNS Cachan).

Turing degrees of minimal subshifts

Pascal Vannier, LACL (Paris-Est Créteil)
mercredi 12 novembre 2014

Subshifts are closed, shift-invariant sets of configurations over $\mathbf{Z}^d$, or equivalently, sets definable by a family of forbidden patterns. A subshift is minimal if it properly contains no other subshift; hence every subshift contains a minimal subshift. A minimal subshift has the particularity that all its elements have the same patterns, and that for every pattern of size $n$ appearing in some configuration there exists an $N$ such that this pattern appears in every window of size $N$ of every configuration. We are interested here in the computational structure of these subshifts, and in particular in the structure of their sets of Turing degrees.

Entropy compression and shuffle squares

Pascal Ochem, AlGCo, LIRMM
mercredi 22 octobre 2014

Daniel Gonçalves, Mickael Montassier and Alexandre Pinlou have developed the use of entropy compression for graph coloring. We investigate this method for avoidability problems, i.e., proving that there exists an infinite word avoiding certain forbidden factors over an alphabet as small as possible.

We show that shuffle squares are avoidable over a 7-letter alphabet. Also, caterpillars admit nonrepetitive colorings from lists of size 5. Finally, we conjecture that this method is always at least as powerful as a competing method based on generating series.

Extending a notion of quasiperiodicity to two dimensions

Guilhem Gamard, Escape, LIRMM
mercredi 8 octobre 2014

A word is quasiperiodic if it can be covered by occurrences of another word, called a quasiperiod. For instance, ababaabababa admits aba and ababa as quasiperiods. Moreover, a word is multi-scale quasiperiodic if it has infinitely many quasiperiods.
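The covering condition is easy to check directly for finite words; a minimal sketch:

```python
def is_quasiperiod(q, w):
    # q is a quasiperiod of w if the occurrences of q cover every
    # position of w.
    occ = [i for i in range(len(w) - len(q) + 1) if w[i:i + len(q)] == q]
    covered = set()
    for i in occ:
        covered.update(range(i, i + len(q)))
    return covered == set(range(len(w)))
```

It confirms that aba and ababa are quasiperiods of ababaabababa, while ab is not (positions 4 and 11 are left uncovered).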

First, we will recall how, in the setting of right-infinite words, these notions of quasiperiodicity and multi-scale quasiperiodicity relate to other classical concepts of combinatorics on words, such as uniform recurrence and factor complexity.

Then, we will generalize these notions to two-dimensional infinite words and examine which results of the first part carry over to dimension two. We will also explain why some results do not generalize.

(Gaussian) Shift Radix Systems - Introduction and Results

Mario Weitzer, Montanuniversität Leoben, Autriche
mercredi 3 septembre 2014

Shift Radix Systems (SRS) turned out to be the missing link between two generalizations of positional notation systems, Beta-Expansions and Canonical Number Systems (CNS), which have been studied extensively during the last decade but still leave behind many open questions and unsolved problems. Unlike positional notation systems, where every integer greater than 1 can serve as a base, in these general settings the parameters satisfying certain natural finiteness conditions form complicated sets. An introduction to SRS is given and results on the characterization of such sets are presented. These results settle two previously open topological questions: a specific set is shown to be disconnected, and so is its complement. For another set, known as Pethö's Loudspeaker, a conjecture on its shape is formulated and proved to some extent.