Blog entries

  • FOSDEM PGDay 2014

    2014/02/11 by Rémi Cardona

    I attended PGDay on January 31st, in Brussels. This event was held just before FOSDEM, which I also attended (expect another blog post). Here are some of the notes I took during the conference.

    https://fosdem.org/2014/support/promote/wide.png

    Statistics in PostgreSQL, Heikki Linnakangas

    Due to transit delays, I only caught the last half of that talk.

    The main goal of this talk was to explain some of Postgres' per-column statistics. In a nutshell, Postgres needs to have some idea about tables' content in order to choose an appropriate query plan.

    Heikki explained which sorts of statistics Postgres gathers, such as most common values and histograms. Another interesting statistic is the correlation between physical page order and data ordering (see CLUSTER).

    Column statistics are gathered when running ANALYZE and stored in the pg_statistic system catalog. The pg_stats view provides a human-readable version of these stats.
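
    For instance, the gathered statistics can be inspected directly (the orders table is a hypothetical example):

      -- Gather fresh per-column statistics for a table.
      ANALYZE orders;

      -- Human-readable view of what was collected.
      SELECT attname, n_distinct, most_common_vals, correlation
      FROM pg_stats
      WHERE tablename = 'orders';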

    Heikki also explained how to determine whether performance issues are due to out-of-date statistics. As it turns out, EXPLAIN ANALYZE shows, for each step of the query plan, how many rows the planner expected to process and how many it actually processed. The rule of thumb is that similar values (no more than an order of magnitude apart) mean that column statistics are doing their job. A wider margin between expected and actual rows means that statistics are possibly preventing the query planner from picking a more optimized plan.
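
    A minimal sketch of that check, on a hypothetical orders table (the plan output in the comments is illustrative):

      EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;
      -- Index Scan using orders_customer_idx on orders
      --   (cost=0.29..8.31 rows=110 width=24)          <- planner estimate
      --   (actual time=0.031..0.095 rows=98 loops=1)   <- what really happened
      -- rows=110 vs rows=98: within an order of magnitude, so the
      -- statistics are doing their job.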

    It was noted though that statistics-related performance issues often happen on tables with very frequent modifications. Running ANALYZE manually or increasing the frequency of the automatic ANALYZE may help in those situations.

    Advanced Extension Use Cases, Dimitri Fontaine

    Dimitri explained with very simple cases the use of some of Postgres' lesser-known extensions and the overall extension mechanism.

    Here's a grocery list of the extensions and types he introduced:

    • intarray extension, which adds operators and functions to the standard ARRAY type, specifically tailored for arrays of integers,
    • the standard POINT type which provides basic 2D flat-earth geometry,
    • the cube extension that can represent N-dimensional points and volumes,
    • the earthdistance extension that builds on cube to provide distance functions on a sphere-shaped Earth (a close-enough approximation for many uses),
    • the pg_trgm extension, which provides text similarity functions based on trigram matching (a much simpler and thus faster alternative to Levenshtein distances), especially useful for "typo-resistant" auto-completion suggestions, as sketched after this list,
    • the hstore extension, which provides a simple but efficient key-value store that has everyone talking in the Postgres world (it's touted as the NoSQL killer),
    • the hll extension, which implements the HyperLogLog algorithm, very well suited to storing and counting unique visitors on a web site, for example.
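
    For instance, a minimal pg_trgm sketch (the products table is hypothetical):

      CREATE EXTENSION IF NOT EXISTS pg_trgm;

      -- Trigram similarity is typo-resistant: 1.0 means identical trigram sets.
      SELECT similarity('postgresql', 'postgrsql');

      -- The % operator matches strings above a configurable similarity
      -- threshold, handy for auto-completion suggestions.
      SELECT name
      FROM products
      WHERE name % 'postgre'
      ORDER BY similarity(name, 'postgre') DESC;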

    An all-around great talk with simple but meaningful examples.

    http://tapoueh.org/images/fosdem_2014.jpg

    Integrated cache invalidation for better hit ratios, Magnus Hagander

    What Magnus presented almost amounted to a tutorial on caching strategies for busy web sites. He went through simple examples, using the ubiquitous Django framework for the web view part and Varnish for the HTTP caching part.

    The whole talk revolved around adding private (X-prefixed) HTTP headers in replies containing one or more "entity IDs" so that Varnish's cache can be purged whenever said entities change. The hard problem lies in how and when to call PURGE on Varnish.

    The obvious solution is to override Django's save() method on Model-derived objects. One can then use httplib (or better yet, requests) to purge the cache. This solution can be slightly improved by using Django's signal mechanism instead, which sounds an awful lot like CubicWeb's hooks.

    The problem with the above solution is that any DB modification not going through Django (and they will happen) will not invalidate the cached pages. So Magnus then presented how to write the same cache-invalidating code in PL/Python in triggers.

    While this does solve that last issue, it introduces synchronous HTTP calls inside the DB, hurting write performance (or killing it completely if the HTTP calls fail). The fix for both problems, at the cost of a little latency, is to use SkyTools' PgQ, a simple message queue based on Postgres. Moving the HTTP calls out of the main database and into a Consumer (a class provided by PgQ's Python bindings) makes the cache-invalidating trigger asynchronous, reducing write overhead.
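
    A hedged sketch of the database side, assuming a PgQ queue named cache_purge has already been created (table and column names are hypothetical):

      -- Enqueue an invalidation event instead of calling Varnish synchronously.
      CREATE FUNCTION notify_cache_purge() RETURNS trigger AS $$
      BEGIN
          -- pgq.insert_event(queue_name, event_type, event_data)
          PERFORM pgq.insert_event('cache_purge', TG_TABLE_NAME, NEW.id::text);
          RETURN NEW;
      END;
      $$ LANGUAGE plpgsql;

      CREATE TRIGGER article_cache_purge
          AFTER INSERT OR UPDATE ON article
          FOR EACH ROW EXECUTE PROCEDURE notify_cache_purge();

    A separate PgQ consumer process then pops those events and issues the actual PURGE requests to Varnish, outside of the write path.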

    http://www.logilab.org/file/210615/raw/varnish_django_postgresql.png

    A clear, concise and useful talk for any developer in charge of high-traffic web sites or applications.

    The Worst Day of Your Life, Christophe Pettus

    Christophe humorously went back to that dreadful day in the collective Postgres memory: the release of 9.3.1 and the streaming replication chaos.

    My overall impression of the talk: Thank $DEITY I'm not a DBA!

    But Christophe also gave some valuable advice, even for non-DBAs:

    • Provision 3 times the necessary disk space, in case you need to pg_dump or otherwise take a snapshot of your currently running database,
    • Do backups and test them:
      • give them to developers,
      • use them for analytics,
      • test the restore, make it foolproof, try to automate it,
    • basic Postgres hygiene:
      • fsync = on (on by default, DON'T TURN IT OFF, there are better ways)
      • full_page_writes = on (on by default, don't turn it off)
      • deploy minor versions as soon as possible,
      • plan upgrade strategies before EOL,
      • 9.3+ data checksums (an initdb option; the performance cost is minimal),
      • application-level consistency checks (don't wait for auto vacuum to "discover" consistency errors).

    Materialised views now and in the future, Thom Brown

    Thom presented one of the new features of Postgres 9.3: materialized views.

    In a nutshell, materialized views (MVs) are read-only snapshots of a query's results, stored on disk mostly for performance reasons. An interesting feature of materialized views is that they can have indexes, just like regular tables.

    The REFRESH MATERIALIZED VIEW command can be used to update an MV: it will simply run the original query again and store the new results.
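
    A minimal sketch (hypothetical table and query):

      -- Store the query's results on disk.
      CREATE MATERIALIZED VIEW monthly_sales AS
          SELECT date_trunc('month', order_date) AS month, sum(amount) AS total
          FROM orders
          GROUP BY 1;

      -- Unlike a plain view, an MV can be indexed.
      CREATE INDEX monthly_sales_month_idx ON monthly_sales (month);

      -- Re-run the original query and replace the stored results.
      REFRESH MATERIALIZED VIEW monthly_sales;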

    There are a number of caveats with the current implementation of MVs:

    • pg_dump never saves the data, only the query used to build it,
    • REFRESH requires an exclusive lock,
    • due to implementation details (frozen rows or pages IIRC), MVs may exhibit non-concurrent behavior with other running transactions.

    Looking towards 9.4 and beyond, here are some of the upcoming MV features:

    • 9.4 adds the CONCURRENTLY keyword:
      • + no longer needs an exclusive lock, doesn't block reads
      • - requires a unique index
      • - may require VACUUM
    • roadmap (no guarantees):
      • unlogged (disables the WAL),
      • incremental refresh,
      • lazy automatic refresh,
      • planner awareness of MVs (would use MVs as cache/index).

    Indexes: The neglected performance all-rounder, Markus Winand

    http://use-the-index-luke.com/img/alchemie.png

    Markus' goal with this talk was to show that very few people in the SQL world actually know - let alone really care - about indexes. According to his own experience and that of others (even with competing RDBMS), poorly written SQL is still a leading cause of production downtime (he puts the number at around 50% of downtime, though others he quoted put it higher). SQL queries can indeed put enough stress on DB systems to make them fail.

    One major issue, he argues, is poorly designed indexes. He went back in time to explain possible reasons for the lack of knowledge about indexes among both SQL developers and DBAs. One such reason may be that indexes are not part of the SQL standard and are left as implementation-specific details. Thus many books about SQL barely cover indexes, if at all.

    He then took us through a simple quiz he wrote on the topic, with only 5 questions. The questions and explanations were very insightful and I must admit my knowledge of indexes was not up to par. I think everyone in the room got his message loud and clear: indexes are part of the schema, devs should care about them too.
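
    As an illustration of the kind of pitfall the quiz covers (hypothetical schema), the column order of a composite index decides which queries it can serve:

      CREATE TABLE orders (customer_id int, order_date date, amount numeric);
      CREATE INDEX orders_idx ON orders (customer_id, order_date);

      -- Served efficiently: the leading index column is constrained.
      SELECT * FROM orders
      WHERE customer_id = 42 AND order_date >= '2014-01-01';

      -- Not served by orders_idx: the leading column is absent, so this
      -- typically falls back to a full table scan.
      SELECT * FROM orders WHERE order_date >= '2014-01-01';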

    Try out the test: http://use-the-index-luke.com/3-minute-test

    PostgreSQL - Community meets Business, Michael Meskes

    For the last talk of the day, Michael went back over the history of the Postgres project and its community. Unlike other IT domains such as email, HTTP servers or even operating systems, RDBMS are still largely dominated by proprietary vendors such as Oracle, IBM and Microsoft. He argues that the reasons are not technical: from a developer standpoint, Postgres has all the features of the leading RDBMS (and many more), and the few missing administrative features related to scalability are being addressed.

    Instead, he argues decision makers inside companies don't yet fully trust Postgres due to its (perceived) lack of corporate backers.

    He went on to suggest ways to overcome those perceptions, for example with an "official" Postgres certification program.

    A motivational talk for the Postgres community.

    http://fosdem2014.pgconf.eu/files/img/frontrotate/slonik.jpg

  • Logilab at pgDay Toulouse

    2017/06/16 by Philippe Pepiot

    On June 8th 2017 we attended pgDay, the meeting and conference event of the French-speaking PostgreSQL community, which took place at the Météo France campus in Toulouse.

    https://www.logilab.org/file/10126216/raw/logo_pgfr_sans_900_400x400.png

    Partitioning

    Gilles Darold gave us an overview of partitioning solutions, from the manual approach using triggers and table inheritance, through the pg_partman extension, to the declarative partitioning of the upcoming version 10 with the PARTITION OF syntax.

    Partitioning makes it easier to manage the maintenance and performance of tables with many records.
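
    A sketch of the declarative syntax coming in version 10 (hypothetical table):

      CREATE TABLE measurements (
          logdate date NOT NULL,
          value   numeric
      ) PARTITION BY RANGE (logdate);

      -- Each partition is a regular table attached to the parent.
      CREATE TABLE measurements_2017 PARTITION OF measurements
          FOR VALUES FROM ('2017-01-01') TO ('2018-01-01');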

    Autonomous transactions

    The same Gilles Darold told us about autonomous transactions, that is, transactions that run inside a parent transaction but can be committed or rolled back independently of it, which can be useful for recording events.
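
    PostgreSQL has no native autonomous transactions yet; a common workaround, sketched below with a hypothetical event_log table, goes through a second connection opened with the dblink extension, whose commit is independent of the caller:

      CREATE EXTENSION IF NOT EXISTS dblink;

      BEGIN;
      -- This INSERT runs and commits in its own connection...
      SELECT dblink_exec('dbname=' || current_database(),
                         'INSERT INTO event_log (msg) VALUES (''attempt'')');
      ROLLBACK;
      -- ...so the event_log row survives the surrounding ROLLBACK.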

    PostgreSQL buffers

    Vik Fearing explained to us how the various memory buffers in PostgreSQL work and interact.

    Pages are loaded from disk into the shared_buffers, which are shared by all connections, and each page has a usageCount between one and five that is incremented every time the page is accessed. When a new page must be loaded, a clock-sweep mechanism loops over the cache, decrementing usageCounts, and when a page's count reaches zero it is replaced by the new one. So a page with a usageCount of five will survive at least five passes of the clock-sweep over the shared_buffers before being evicted.

    To avoid flushing the whole cache when a large table is accessed, PostgreSQL uses a size-limited circular buffer (or ring buffer) for that table.

    Temporary tables use a dedicated buffer, temp_buffers.

    When a page is modified, the change first goes to the wal buffers, which are written to disk at commit time by the wal writer process.

    The background writer process scans the shared_buffers every bgwriter_delay (200 ms by default) and writes some of the modified pages to disk; how many is controlled by the bgwriter_lru_maxpages and bgwriter_lru_multiplier parameters.

    Checkpoints also run every checkpoint_timeout, or more frequently when the size of the WAL exceeds the max_wal_size parameter. During a checkpoint, the dirty pages to be written are collected and sorted, to avoid random writes to disk. The checkpoint_completion_target parameter spreads the writes out between two checkpoints. Ideally checkpoints should always be triggered by the timeout, with the writes spread out as much as possible, to get constant read and write performance.

    To debug buffer usage and disk I/O there are the pg_stat_bgwriter view, the pg_buffercache extension, and the track_io_timing parameter to use with EXPLAIN (ANALYZE, BUFFERS).
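
    For example (the orders table is hypothetical):

      CREATE EXTENSION IF NOT EXISTS pg_buffercache;

      -- Which relations occupy the most pages in shared_buffers?
      SELECT c.relname, count(*) AS buffers
      FROM pg_buffercache b
      JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
      GROUP BY c.relname
      ORDER BY buffers DESC
      LIMIT 10;

      -- With track_io_timing = on, buffer hits, reads and I/O timings
      -- show up in the plan.
      EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM orders;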

    PostgreSQL worst practices

    Thomas Reiss and Philippe Beaudoin presented some of the worst practices with PostgreSQL, notably the widespread one of missing or excessive indexes. On that subject, Dalibo has developed PoWA, a tool that analyzes a database's activity and suggests indexes. Beware also of the temptation to destructure data too much: PostgreSQL offers many types that come with strong consistency guarantees and many operations on them, for example the range types.
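
    For example, a range type combined with an exclusion constraint lets the database itself guarantee consistency (a classic sketch, requiring the btree_gist extension):

      CREATE EXTENSION IF NOT EXISTS btree_gist;

      -- No two reservations of the same room may overlap in time.
      CREATE TABLE reservation (
          room   int,
          during tsrange,
          EXCLUDE USING gist (room WITH =, during WITH &&)
      );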

    The PostgreSQL developer community

    Daniel Vérité walked us through the history of Ingres, then Postgres, and finally PostgreSQL, before showing statistics on the commits and on the mailing list used for PostgreSQL development.

    Do elephants eat cubes?

    Cédric Villemain talked about PostgreSQL features for OLAP-style queries: the TABLESAMPLE implementation, which makes it possible to query a random sample of a table, and the default_statistics_target parameter and the new version 10 command CREATE STATISTICS, which yield better statistics on the distribution of a table and therefore better execution plans.
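
    A sketch of both features (hypothetical tables):

      -- Query a ~1% random sample instead of the whole table (9.5+).
      SELECT avg(value) FROM measurements TABLESAMPLE BERNOULLI (1);

      -- Version 10: multivariate statistics capture dependencies between
      -- correlated columns, improving row estimates.
      CREATE STATISTICS city_zip_stats (dependencies) ON city, zip FROM addresses;
      ANALYZE addresses;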

    Also, since version 9.5, the GROUP BY ROLLUP syntax makes it possible to compute aggregates over several groupings in a single query. Previously, several UNIONs were needed to get the same result.
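
    For example (hypothetical sales table):

      -- One query yields per-(region, product) totals, per-region subtotals
      -- and the grand total; before 9.5 this required several UNIONs.
      SELECT region, product, sum(amount)
      FROM sales
      GROUP BY ROLLUP (region, product);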

    Also worth noting is the use of BRIN and Bloom indexes.

    How does full-text search work?

    Adrien Nayrat presented PostgreSQL's full-text search functions and how to improve them by creating your own configurations and word dictionaries, as well as how to make them faster with GIN and GiST indexes.
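
    A minimal sketch (hypothetical documents table):

      -- Match documents against a text query.
      SELECT title
      FROM documents
      WHERE to_tsvector('english', body) @@ to_tsquery('english', 'cube & elephant');

      -- A GIN index on the tsvector makes the search fast.
      CREATE INDEX documents_fts_idx
          ON documents USING gin (to_tsvector('english', body));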

    GeoDataScience

    Olivier Courtin showed us, with a concrete example, how PostgreSQL can be an ideal environment for geomatics and machine learning, using the PostGIS extension as well as plpythonu to run Python code directly on the PostgreSQL server. The dedicated crankshaft extension provides APIs based on scipy and scikit-learn that can be called through SQL procedures.
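
    For instance, a trivial PL/Python function runs its body directly inside the server (a sketch; crankshaft's own API functions are not shown here):

      CREATE EXTENSION IF NOT EXISTS plpythonu;

      -- The function body is Python, executed in the PostgreSQL server process.
      CREATE FUNCTION py_mean(vals float8[]) RETURNS float8 AS $$
          return sum(vals) / len(vals)
      $$ LANGUAGE plpythonu;

      SELECT py_mean(ARRAY[1.0, 2.5, 4.0]);  -- 2.5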

    https://www.logilab.org/file/10126217/raw/freefall.gif