News February 2023

After switching server error reporting from email to Sentry, the time came to clean up the millions of emails that had accumulated in Gmail over the years (errors are now aggregated and rotated in Sentry 🙂 ). As it turned out, this is not an easy task. The Gmail interface does not allow deleting millions of emails at once: deleting 100 conversations at a time is obviously not an option, and selecting “all” search results for deletion appears to do nothing at such volumes (we assume the operation aborts due to a timeout inside Gmail). After many days of trying to do it manually, we got nowhere.

Next, we tried cleaning the mailbox via IMAP. As it turned out, this method does not work either: at first the emails are marked as deleted (even when deleting N thousand at a time rather than all at once), but after a while they come back (again, we suspect a timeout in the Gmail engine).

We finally found a guaranteed, if slow, way to delete all emails from a Gmail mailbox. To do this, you need to enable the POP3 protocol and, in the Gmail settings, choose to delete emails after they are downloaded by a mail client via POP3. We used the Interlink client, but any client that can automatically fetch mail in an endless loop will do. Retrieval over POP3 is quite slow, about one message per second, but the downloaded emails are guaranteed to be deleted from the Gmail servers. In total, cleaning a mailbox with one million emails takes about two weeks of unattended operation of such a client, but at least no manual work is required.
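The loop such a client runs can be sketched in a few lines with Python’s standard poplib. This is an illustration, not the Interlink client we actually used; the helper names are ours, and the “delete after POP3 download” behaviour must first be enabled in the Gmail settings:

```python
# Sketch of the POP3 drain loop described above. RETR + DELE per message,
# then QUIT to commit the deletions; repeat until stat() reports zero.
import poplib


def drain_mailbox(conn, batch=500):
    """Download and delete up to `batch` messages; return how many were processed."""
    count, _size = conn.stat()
    n = min(count, batch)
    for i in range(1, n + 1):
        conn.retr(i)   # download the message body
        conn.dele(i)   # mark it deleted on the server
    conn.quit()        # QUIT commits the pending deletions
    return n


def drain_gmail(user, password):
    """One pass over a Gmail mailbox; run in a loop until it returns 0."""
    conn = poplib.POP3_SSL("pop.gmail.com", 995)
    conn.user(user)
    conn.pass_(password)
    return drain_mailbox(conn)
```

Running `drain_gmail` in an endless loop reproduces what the mail client did for us: each pass deletes another batch, at roughly one message per second.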

Made standard Salt pillars to limit the check speed of RAID arrays, which is useful for database servers whose performance may suffer during a check.
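The mechanism behind these pillars is the standard Linux md sysctls dev.raid.speed_limit_min and dev.raid.speed_limit_max (in KB/s). A minimal sketch of how such a pillar and state could look — the pillar key names here are illustrative, not our actual internal ones:

```yaml
# pillar: per-host RAID check throttling (key names are illustrative)
raid_check_speed:
  min_kbps: 1000
  max_kbps: 10000

# state: apply the kernel sysctls that throttle md resync/check
raid_speed_limit_min:
  sysctl.present:
    - name: dev.raid.speed_limit_min
    - value: {{ pillar['raid_check_speed']['min_kbps'] }}

raid_speed_limit_max:
  sysctl.present:
    - name: dev.raid.speed_limit_max
    - value: {{ pillar['raid_check_speed']['max_kbps'] }}
```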

Added the ability to set install_root and systemd_unit_name in the salt-minion installation state, which lets us configure a “secondary” connection to our Salt masters when the main salt-minion is occupied by another role.
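A secondary minion of this kind boils down to a second systemd unit pointing salt-minion at its own config directory (salt-minion supports this via the -c/--config-dir option). A hypothetical sketch, with an illustrative unit name and path rather than what our state actually generates:

```ini
# /etc/systemd/system/salt-minion-secondary.service (name is illustrative)
[Unit]
Description=Secondary Salt Minion
After=network.target

[Service]
Type=simple
# Separate config dir with its own minion id and master address
ExecStart=/usr/bin/salt-minion -c /etc/salt-secondary

[Install]
WantedBy=multi-user.target
```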

For one of our clients, we implemented a small but useful feature: GitLab has very limited time-reporting capabilities. Drawing on our experience with timelogs in our own accounting, we opened a read-only connection to the PostgreSQL database embedded in GitLab, connected it to a private Metabase instance, wrote SQL queries, and built a dashboard. This gave the client much more detailed analytics on the hours spent by their developers.
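To give a flavour of the queries involved: GitLab stores time-tracking entries in a timelogs table with time_spent in seconds. The snippet below mocks a tiny copy of that table in SQLite purely to show the query shape; the schema is simplified, the real dashboard runs against GitLab’s PostgreSQL, and the Metabase side is omitted:

```python
# Illustration of a per-developer hours aggregation over GitLab timelogs.
# Assumes a simplified timelogs(user_id, time_spent, spent_at) schema.
import sqlite3

HOURS_PER_USER = """
SELECT user_id,
       ROUND(SUM(time_spent) / 3600.0, 2) AS hours
FROM timelogs
WHERE spent_at >= :since
GROUP BY user_id
ORDER BY hours DESC
"""


def hours_per_user(conn, since):
    """Total hours logged per user since the given date, busiest first."""
    return conn.execute(HOURS_PER_USER, {"since": since}).fetchall()


def demo_db():
    """In-memory stand-in for the GitLab database, for demonstration only."""
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE timelogs (user_id INTEGER, time_spent INTEGER, spent_at TEXT)"
    )
    conn.executemany("INSERT INTO timelogs VALUES (?, ?, ?)", [
        (1, 3600, "2023-02-01"),
        (1, 1800, "2023-02-02"),
        (2, 7200, "2023-02-03"),
        (2, 3600, "2023-01-15"),  # outside the reporting window
    ])
    return conn
```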

Added the ability to use dump_prefix_cmd for the database dump commands in rsnapshot_backup. This makes it possible to prefix dump commands, for example with ionice -c 3 nice, which is useful for installations where an intensive database dump can affect the stability of the production environment.
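The composition itself is simple: the prefix is split into arguments and prepended to the dump command, so the dump process inherits the idle I/O and lowered CPU priority. The helper below is an illustration of the idea, not rsnapshot_backup’s actual code:

```python
# Sketch of applying a dump_prefix_cmd-style prefix to a dump command.
import shlex


def build_dump_cmd(dump_cmd, dump_prefix_cmd=""):
    """Prepend an optional prefix such as 'ionice -c 3 nice' to a dump command."""
    prefix = shlex.split(dump_prefix_cmd)
    return prefix + shlex.split(dump_cmd)
```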

Added the ability to include shared variable files via the variables_from_files key in gitlab-admin.

Made a standard docker-alive monitoring check. As it turned out, the Docker daemon can occasionally freeze; this check lets us detect such problems.
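The core idea of such a probe is that a frozen daemon does not answer even a trivial CLI call within a reasonable timeout. A standalone sketch of that idea (our production check lives in our monitoring stack and is not this code):

```python
# Minimal docker-alive probe: the daemon is considered frozen if a
# trivial CLI call does not exit successfully before the timeout.
import subprocess


def is_alive(cmd=("docker", "info"), timeout=10):
    """Return True if `cmd` exits with status 0 before `timeout` seconds."""
    try:
        res = subprocess.run(cmd, capture_output=True, timeout=timeout)
    except (subprocess.TimeoutExpired, FileNotFoundError):
        return False
    return res.returncode == 0
```

A monitoring agent would run this periodically and raise an alert on False.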

We ran into GitLab’s built-in limit on the number of CI/CD variables it displays in the UI. The fix is to raise the limits via the gitlab-rails console:

Plan.default.actual_limits.update!(project_ci_variables: 5000)
Plan.default.actual_limits.update!(group_ci_variables: 5000)
Plan.default.actual_limits.update!(ci_instance_level_variables: 1000)