Bug #4350


Bug #4252: Slow queries for list precalculation (heavy parallel sequential scan 8-20 sec per each 10 mins)

Enum table not populated for cesnet_inspectionerrors

Added by Radko Krkoš over 5 years ago. Updated over 5 years ago.

Status: Closed
Priority: Normal
Assignee:
Category: Development - Core
Target version: 2.2
Start date: 10/05/2018
Due date:
% Done: 100%
Estimated time:
To be discussed:

Description

The table enum_cesnet_inspectionerrors, which should contain the list of unique values from the cesnet_inspectionerrors array, is empty. This is probably due to an omission during the implementation of the new enumeration gathering system (and even more so during its testing).
A test query (sketched below) shows that multiple values do exist in the data.
This was confirmed on all of hub, alt and dev.
The impact is probably negligible, as there does not seem to be any user of this enumeration (a probable reason it slipped through testing). Does this enum need to exist at all? Or should the inspection errors editbox (only visible in Admin mode) in fact be a drop-down selector?
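
For reference, a test query along these lines lists the values that should populate the enum. This is a sketch only: the source table name ("events") and output alias are assumptions, only the cesnet_inspectionerrors column name is taken from the report.

    -- Sketch: the source table name ("events") is an assumption, adjust to
    -- the actual schema. Unnest the array column and list distinct values.
    SELECT DISTINCT unnest(cesnet_inspectionerrors) AS inspection_error
      FROM events
     WHERE cesnet_inspectionerrors IS NOT NULL
     ORDER BY 1;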


Related issues

Related to Mentat - Task #4228: Hawat: Make use of Flask-Cache plugin (Rejected, 07/27/2018)

Actions #1

Updated by Jan Mach over 5 years ago

  • Status changed from New to In Progress
Actions #2

Updated by Jan Mach over 5 years ago

  • Related to Task #4228: Hawat: Make use of Flask-Cache plugin added
Actions #3

Updated by Jan Mach over 5 years ago

  • Status changed from In Progress to Feedback
  • Assignee changed from Jan Mach to Radko Krkoš
  • % Done changed from 0 to 100

Fix deployed to mentat-alt for testing, but at this point the database is stale and the mentat-precache.py module is unable to finish. The event search page returns a 500 error because of the database unavailability.

Actions #4

Updated by Jan Mach over 5 years ago

The database works now; the long-running operations have finished.

Actions #5

Updated by Radko Krkoš over 5 years ago

  • Assignee changed from Radko Krkoš to Jan Mach

I do not seem to understand the problem here. Please explain. Was it the 1000 s run time of the first data load?

Actions #6

Updated by Jan Mach over 5 years ago

  • Assignee changed from Jan Mach to Radko Krkoš

Radko Krkoš wrote:

I do not seem to understand the problem here. Please explain. Was it the 1000 s run time of the first data load?

Yes, there were some heavy database operations and the first run time was really high. I just wanted to log somewhere that, because the first run was taking so long, the web interface was not working due to the missing cache file. I also logged that for future reference and for your convenience. I think that unless there was some serious problem with the database at that time, we can disregard this note.

In my opinion this issue can be closed, but I am leaving that up to the author of the bug request.

Actions #7

Updated by Radko Krkoš over 5 years ago

  • Status changed from Feedback to Closed
  • Target version set to 2.2

Jan Mach wrote:

Radko Krkoš wrote:

I do not seem to understand the problem here. Please explain. Was it the 1000 s run time of the first data load?

Yes, there were some heavy database operations and the first run time was really high. I just wanted to log somewhere that, because the first run was taking so long, the web interface was not working due to the missing cache file. I also logged that for future reference and for your convenience. I think that unless there was some serious problem with the database at that time, we can disregard this note.

Yes, I see. Thanks for the explanation. Such a run time is unfortunately to be expected, as the common optimization of a parallel scan is disabled in PostgreSQL 10 for data-modifying queries (in PostgreSQL 11 it is enabled for simple queries such as this one).
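
To illustrate (a sketch only; the table and column names here are assumptions, not the actual Mentat schema), the enum precalculation boils down to a data-modifying statement of roughly this shape, and plain EXPLAIN shows whether the planner was able to choose a parallel scan:

    -- Sketch only: table and column names are assumptions, not the real
    -- Mentat schema. The enum precalculation is a data-modifying query of
    -- roughly this shape:
    INSERT INTO enum_cesnet_inspectionerrors (name)
    SELECT DISTINCT unnest(cesnet_inspectionerrors)
      FROM events
     WHERE cesnet_inspectionerrors IS NOT NULL;

    -- Plain EXPLAIN (no ANALYZE, so nothing is executed or written) reveals
    -- whether a Gather / Parallel Seq Scan node appears in the plan, or the
    -- scan stays single-process:
    EXPLAIN
    INSERT INTO enum_cesnet_inspectionerrors (name)
    SELECT DISTINCT unnest(cesnet_inspectionerrors)
      FROM events
     WHERE cesnet_inspectionerrors IS NOT NULL;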

In my opinion this issue can be closed, but I am leaving that up to the author of the bug request.

Agreed, closing; I am happy with the result.

