Using direct ufunc calls below to eliminate operator overloading from the test case:
>>> one_eps = 1.00000001
>>> np.greater_equal(np.array(1.0, np.float32), np.array(one_eps, np.float64))
False
>>> np.greater_equal(np.array([1.0], np.float32), np.array(one_eps, np.float64))
array([ True]) # wrong!
>>> np.greater_equal(np.array([1.0], np.float32), np.array([one_eps], np.float64))
array([False])
>>> np.greater_equal(np.array(1.0, np.float32), np.array([one_eps], np.float64))
array([False])
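For reference, the same results appear with operator overloading, which is presumably how this surfaces in practice (a sketch, assuming the legacy pre-NEP 50 promotion rules; it just mirrors the test case above):

>>> np.array(1.0, np.float32) >= np.float64(one_eps)
False
>>> np.array([1.0], np.float32) >= np.float64(one_eps)
array([ True]) # same wrong result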
Caused by `result_type` having some unusual rules:
>>> np.result_type(np.float32, np.float64)
dtype('float64') # ok
>>> np.result_type(np.float32, np.float64([1]))
dtype('float64') # ok
>>> np.result_type(np.float32, np.float64(1))
dtype('float32') # what
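If I understand the mechanism correctly (an assumption on my part, not something verified against the source), the value-based path runs scalar arguments through `min_scalar_type`, which for inexact values only considers magnitude, not precision:

>>> np.min_scalar_type(np.float64(1))
dtype('float16')
>>> np.result_type(np.float32, np.min_scalar_type(np.float64(1)))
dtype('float32')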
This seems wrong to me, but appears to be deliberate.
It seems the goal here is to interpret python scalars as loosely-typed (in the presence of arrays). This handles things like the following (sketched below):

- letting integer literals choose between signed and unsigned
- letting number literals have smaller representations than `float64` and `int_` if they'd fit
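A minimal sketch of that intent, assuming the legacy (pre-NEP 50) promotion rules:

>>> np.result_type(np.array([1], np.uint8), 1)
dtype('uint8')   # positive literal: unsigned is fine
>>> np.result_type(np.array([1], np.uint8), -1)
dtype('int16')   # negative literal: forces a signed type
>>> (np.array([1.0], np.float32) + 2.0).dtype
dtype('float32') # float literal doesn't force float64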
However, the method is too greedy, and also (in the presence of arrays) discards type information from the following (examples below):

- `np.number` instances with an explicit type, like `np.int64(1)` and `np.float32(1)`
- 0d `ndarray` instances, by decaying them to their scalars
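For instance (again under the legacy rules):

>>> np.result_type(np.array([1], np.float32), np.int64(1))
dtype('float32') # the explicit int64 is discarded because the value fits
>>> np.result_type(np.array([1], np.float32), np.array(one_eps, np.float64))
dtype('float32') # the 0d float64 array decays to its scalar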
The problem is that `PyArray_ResultType` (and the ufunc type resolver) has lost the information about where its array arguments came from, and whether their types are explicit or implicit.
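One way to see the lost provenance: dtype objects carry no value to inspect, so extracting dtypes before promotion sidesteps the decay (a sketch of a workaround, not a fix):

>>> a = np.array([1.0], np.float32)
>>> b = np.array(one_eps, np.float64)  # 0d array
>>> np.result_type(a, b)
dtype('float32') # b decayed to its scalar
>>> np.result_type(a.dtype, b.dtype)
dtype('float64') # no values, so no value-based casting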