BUG: Inconsistent type resolution for 0d arrays #10322

Closed
@eric-wieser

Description

Using direct ufunc calls below to eliminate operator overloading from the test case:

>>> one_eps = 1.00000001
>>> np.greater_equal(np.array(1.0, np.float32), np.array(one_eps, np.float64))
False
>>> np.greater_equal(np.array([1.0], np.float32), np.array(one_eps, np.float64))
array([ True])  # wrong!
>>> np.greater_equal(np.array([1.0], np.float32), np.array([one_eps], np.float64))
array([False])
>>> np.greater_equal(np.array(1.0, np.float32), np.array([one_eps], np.float64))
array([False])

Caused by result_type having some unusual rules:

>>> np.result_type(np.float32, np.float64)
dtype('float64') # ok
>>> np.result_type(np.float32, np.float64([1]))
dtype('float64') # ok
>>> np.result_type(np.float32, np.float64(1))
dtype('float32') # what

This seems wrong to me, but appears to be deliberate.

It seems the goal here is to interpret Python scalars as loosely-typed (in the presence of arrays). This handles things like:

  • letting integer literals choose between signed and unsigned
  • letting number literals have smaller representations than float64 and int_ if they'd fit
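The value-based half of this can be seen directly with np.result_type. A minimal sketch (these particular results are stable across NumPy versions, including after the NEP 50 promotion changes):

```python
import numpy as np

# A Python int literal defers to the array dtype, whether signed or unsigned:
print(np.result_type(np.uint8, 1))  # uint8 -- the literal adapts to unsigned
print(np.result_type(np.int8, 1))   # int8  -- the literal adapts to signed

# A Python float literal takes a smaller representation than float64 if it fits:
print(np.result_type(np.float32, 1.0))  # float32
```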

However, the method is too greedy, and also (in the presence of arrays) discards type information from:

  1. np.number instances with an explicit type, like np.int64(1) and np.float32(1)
  2. 0d ndarray instances, by decaying them to their scalars

The problem is that PyArray_ResultType (and the ufunc type resolver) has lost the information about where its array arguments came from, and whether their types are explicit or implicit.
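Until the resolver can distinguish these cases, one workaround is to promote by dtype rather than by value: np.result_type applied to dtype objects (rather than scalars or 0d arrays) never applies the value-based rules. A sketch using the example from the top of this issue:

```python
import numpy as np

a = np.array([1.0], np.float32)
b = np.array(1.00000001, np.float64)  # 0d array with an explicit dtype

# Promoting the dtype objects ignores values and dimensionality entirely,
# so the explicit float64 is never discarded:
common = np.result_type(a.dtype, b.dtype)  # float64

# Cast both operands up front, sidestepping the ufunc's own resolution:
result = np.greater_equal(a.astype(common), b.astype(common))
print(result)  # array([False]) -- the correct answer
```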
