declaration is incompatible with "double y0(double)"
I am trying to combine OpenCV and CUDA. Here is some of the code (not all of it):[code]#include <cv.h>
#include <highgui.h>

#include <cutil.h>

#include <stdio.h>
#include <stdlib.h>
#include <math.h>


#define DISTORTION_NUM 4

__constant__ double dis[DISTORTION_NUM];
__constant__ double pixel = 0.006400;
__constant__ double x0 = -0.024100;
__constant__ double y0 = 0.058500;
__constant__ double f = 24.405500;
__constant__ int u0 = 5616/2;
__constant__ int v0 = 3744/2;

unsigned int timer = 0;
//................some of functions....................
__global__ void undistort(uchar *srcdata, uchar *dstdata, int width, int height, int channels, int step)
{
    // map threadIdx/blockIdx to a pixel position
    int u = threadIdx.x + blockIdx.x*blockDim.x;
    int v = threadIdx.y + blockIdx.y*blockDim.y;

    double delta_u, delta_v;
    int u_coor, v_coor;
    double x, y;
    double xtmp, ytmp;
    double r;
    CvPoint2D64f dstPoint;
//.....................lots of code..............
}
[/code]
When I compile the code, I get some warnings and one error:[code]# nvcc -g -o unditort `pkg-config --libs --cflags opencv` -lm -I/root/NVIDIA_GPU_Computing_SDK/C/common/inc/ -arch=sm_13 undistort.cu
In file included from /root/NVIDIA_GPU_Computing_SDK/C/common/inc/cutil_inline.h:20,
from undistort.cu:4:
/root/NVIDIA_GPU_Computing_SDK/C/common/inc/cutil_inline_runtime.h:80:1: warning: "MIN" redefined
In file included from /usr/include/opencv/cxcore.h:70,
from /usr/include/opencv/cv.h:58,
from undistort.cu:1:
/usr/include/opencv/cxtypes.h:205:1: warning: this is the location of the previous definition
In file included from /root/NVIDIA_GPU_Computing_SDK/C/common/inc/cutil_inline.h:20,
from undistort.cu:4:
/root/NVIDIA_GPU_Computing_SDK/C/common/inc/cutil_inline_runtime.h:81:1: warning: "MAX" redefined
In file included from /usr/include/opencv/cxcore.h:70,
from /usr/include/opencv/cv.h:58,
from undistort.cu:1:
/usr/include/opencv/cxtypes.h:209:1: warning: this is the location of the previous definition
/usr/include/opencv/cxmat.hpp(346): warning: integer conversion resulted in a change of sign

/usr/include/opencv/cxmat.hpp(3722): warning: integer conversion resulted in a change of sign

/usr/include/opencv/cxmat.hpp(4116): warning: integer conversion resulted in a change of sign

undistort.cu(16): error: declaration is incompatible with "double y0(double)"
/usr/include/bits/mathcalls.h(241): here

1 error detected in the compilation of "/tmp/tmpxft_00000b2f_00000000-4_undistort.cpp1.ii".
[/code]
Now I have some questions:
1. Is double precision supported only with arch >= sm_13?
2. What does the error about ( __constant__ double y0 = 0.058500; ) mean?
3. I don't know exactly which CUDA header files I should include, and did I use the nvcc options correctly?
Your help would be greatly appreciated!

#1
Posted 04/12/2012 03:05 PM   
CUDA 4.1 added a function with the signature

[code]
double y0(double);
[/code]
to the CUDA math library, which is one of the Bessel functions. The function is named in accordance with long-standing Unix usage.

Variables and function names share the same namespace, therefore it is not possible to have a variable named y0 if there is already a function y0(). You will get the same effect on a host system that supports the Bessel function y0(), in particular Linux. For example, the following does not compile under gcc on Linux (it will work just fine if you define VAR_NAME to be, say, t):

[code]
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define VAR_NAME y0

int main (void)
{
    double VAR_NAME = 0.58;
    double res = y0 (VAR_NAME);

    printf ("y0(%23.16e) = %23.16e\n", VAR_NAME, res);

    return 0;
}
[/code]
CUDA 4.1 added a function with the signature





double y0(double);


to the CUDA math library, which is one of the Bessel functions. The function is named in accordance with long-standing Unix usage.



Variables and function names share the same namespace, therefore it is not possible to have a variable named y0 if there is already a function y0(). You will get the same effect on a host system that supports the Bessel function y0(), in particular Linux. For example, the following does not compile under gcc on Linux (it will work just fine if you define VAR_NAME to be, say, t):





#include <math.h>

#include <stdio.h>

#include <stdlib.h>



#define VAR_NAME y0



int main (void)

{

double VAR_NAME = 0.58;

double res = y0 (VAR_NAME);



printf ("y0(%23.16e) = %23.16e\n", VAR_NAME, res);



return 0;

}

#2
Posted 04/12/2012 08:29 PM   
[quote name='njuffa' date='13 April 2012 - 04:29 AM' timestamp='1334262575' post='1395481']
CUDA 4.1 added a function with the signature double y0(double) to the CUDA math library, which is one of the Bessel functions. [...]
[/quote]
Awesome, thanks! I think I understand the error now: [code]/usr/include/bits/mathcalls.h(241): here[/code] is where the function y0() is declared, and Linux supports the Bessel function y0(), so I will use another name for my y0 variable. Do you have any ideas about questions 1 and 3? I am a newbie. Thanks again!

#3
Posted 04/13/2012 01:29 AM   
(1) Yes, you need a device with compute capability >= 1.3 to get double-precision support.
(3) If you compile a .cu file through nvcc, the compiler will auto-include the CUDA header files. You will still need to include system header files like stdio.h if you use device-side printf() etc. If you call CUDA runtime API functions from a .c file compiled with your host compiler, you will need to include cuda_runtime.h. I am reasonably sure you don't need to specify additional header files, but I rarely call CUDA runtime APIs from .c source, so if someone has more detailed information, please speak up.

I would recommend reading the CUDA C Programming Guide; it will answer most (if not all) of your questions. I know the Programming Guide can seem a bit overwhelming at first, so you might want to consider using an additional introductory book such as "CUDA by Example".
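On the nvcc options from question 3: the original command works, but the usual convention is compile flags and include paths first, then the source file, then the link libraries. A sketch of a reorganized command line, assuming the same CUDA SDK path and OpenCV pkg-config setup as in the original post (paths unverified):

```shell
# Hedged sketch: include paths come from the original post and may differ
# on other systems. Libraries go after the source file for picky linkers.
nvcc -g -arch=sm_13 \
     -I/root/NVIDIA_GPU_Computing_SDK/C/common/inc \
     `pkg-config --cflags opencv` \
     undistort.cu -o undistort \
     `pkg-config --libs opencv` -lm
```

Note also the typo in the original output name ("unditort"); the sketch uses "undistort".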

#4
Posted 04/13/2012 07:54 PM   
[quote name='njuffa' date='14 April 2012 - 03:54 AM' timestamp='1334346865' post='1395965']
(1) Yes, you need a device with compute capability >= 1.3 to get double precision support. [...]
[/quote]
Thank you very much, I clearly understand your replies. I have read the book "CUDA by Example", it just has no details about compile options, so I will read the nvcc guide. Thanks a lot!

#5
Posted 04/14/2012 03:16 AM   