Tom Lane wrote:
> Alvaro Herrera <[email protected]> writes:
> > Evidently there is a problem right there. If I simply add an "order by
> > tenthous" as proposed by Peter, many more errors appear; and what errors
> > appear differs if I change shared_buffers. I think the real fix for
> > this is to change the hand-picked values used in the brinopers table, so
> > that they all pass the test using some reasonable ORDER BY specification
> > in the populating query (probably tenk1.unique1).
>
> I may be confused, but why would the physical ordering of the table
> entries make a difference to the correct answers for this test?
> (I can certainly see why that might break the brin code, but not
> why it should change the seqscan's answers.)
We create brintest from a scan of tenk1 with LIMIT 100, without
specifying any order. So which rows end up in the table, and hence
whether any row matches each test query, is pure chance.
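
For reference, the populating statement is roughly of this shape
(schematic, not the literal column list from brin.sql; the ORDER BY is
the proposed addition, not what's there today):

    INSERT INTO brintest
      SELECT ...              -- one expression per indexed column type
        FROM tenk1
       ORDER BY unique1       -- pin down *which* 100 rows we get
       LIMIT 100;

Without that ORDER BY, the 100 rows we get depend on whatever physical
order the scan happens to return, which is why changing shared_buffers
changes which failures appear.
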
> Also, what I'd just noticed is that all of the cases that are failing are
> ones where the expected number of matching rows is exactly 1. I am
> wondering if the test is sometimes just missing random rows, and we're not
> seeing any reported problem unless that makes it go down to no rows. (But
> I do not know how that could simultaneously affect the seqscan case ...)
Yeah, we compare the ctid sets returned by the two scans, on the
assumption that the seqscan gets the right answer.
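
The check runs in a plpgsql loop over brinopers, roughly like this
(paraphrased from memory, fragment only; cond is the WHERE clause
built from each brinopers row):

    SET enable_seqscan = off; SET enable_bitmapscan = on;
    EXECUTE format('SELECT array_agg(ctid) FROM brintest WHERE %s', cond)
        INTO idx_ctids;

    SET enable_seqscan = on; SET enable_bitmapscan = off;
    EXECUTE format('SELECT array_agg(ctid) FROM brintest WHERE %s', cond)
        INTO ss_ctids;

    -- same set of tuples on both sides, ignoring order
    IF NOT (idx_ctids @> ss_ctids AND idx_ctids <@ ss_ctids) THEN
        RAISE WARNING 'something amiss for %', cond;
    END IF;

So if the table never contained a matching row in the first place,
both scans agree on the empty result, and the only remaining signal is
the zero-rows check.
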
> I think it would be a good idea to extend the brinopers table to include
> the number of expected matches, and to complain if that's not what we got,
> rather than simply checking for zero.
That sounds reasonable.
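
Something like this, say (sketch only; the column name "matches" and
the message wording are placeholders):

    -- brinopers gains a column holding the hand-verified row count
    ALTER TABLE brinopers ADD COLUMN matches int;

    -- and the loop checks it instead of only complaining about zero:
    IF count <> r.matches THEN
        RAISE WARNING 'got % rows, expected %, for operator % on column %',
            count, r.matches, r.oper, r.colname;
    END IF;

That would also catch a query silently dropping from two matches to
one, instead of only the one-to-zero case.
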
--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services