Radiomics methods are essential in cancer image analysis because of their ability to extract quantitative imaging features. Existing radiomics methods either adopt statistical models for data preprocessing and feature engineering or use deep learning to shift the burden of feature engineering onto the learning algorithm. These methods assume that the input images are independent and ignore the relations among them; for example, images from the same tissue or the same individual are likely to share a similar background or foreground. Exploiting these relations is difficult because they are usually unknown and the effective relations are task-specific, which hinders the direct application of existing graph neural networks (GNNs). To overcome these challenges, we develop an Image-Graph based neural Network, in which the image graph (i.e., the discrete structure) is learned jointly with refined features by minimizing a task-specific loss. Our method therefore applies to scenarios where image relations are unknown and the learned graph must be task-specific. Experimental results on four real datasets collected from five hospitals show that our method achieves a higher area under the curve than recently proposed radiomics methods and GNNs. We also demonstrate that our method learns useful, task-specific graphs on different datasets.
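
The core idea of learning a graph jointly with features by minimizing a task-specific loss can be sketched as follows. This is a minimal toy illustration, not the paper's actual architecture: all names (`S`, `W`, `X`, `Y`), the row-softmax graph parameterization, and the MSE task loss are assumptions made for the example.

```python
import numpy as np

# Toy sketch (assumed setup, not the paper's method): jointly learn a
# row-stochastic image graph A = softmax(S) and a feature projection W
# by gradient descent on a task-specific loss.

rng = np.random.default_rng(0)
n, d, c = 8, 5, 2                   # number of images, feature dim, output dim
X = rng.normal(size=(n, d))         # per-image feature vectors (toy data)
Y = rng.normal(size=(n, c))         # task targets (toy data)

S = np.zeros((n, n))                # graph logits (learned)
W = rng.normal(size=(d, c)) * 0.1   # feature weights (learned)

def row_softmax(S):
    E = np.exp(S - S.max(axis=1, keepdims=True))
    return E / E.sum(axis=1, keepdims=True)

lr = 0.1
losses = []
for _ in range(200):
    A = row_softmax(S)              # learned adjacency; each row sums to 1
    H = A @ X @ W                   # propagate features over the learned graph
    diff = H - Y
    losses.append((diff ** 2).mean())  # task-specific loss (MSE here)

    # Manual backpropagation through the loss, the linear layers,
    # and the row-wise softmax that parameterizes the graph.
    dH = 2 * diff / diff.size
    dW = (A @ X).T @ dH
    dA = (dH @ W.T) @ X.T
    dS = A * (dA - (dA * A).sum(axis=1, keepdims=True))

    W -= lr * dW                    # refine features...
    S -= lr * dS                    # ...and the graph, from the same loss

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Because both the graph logits and the feature weights receive gradients from the same task loss, the structure that emerges is tailored to the task rather than fixed in advance, which is the property the abstract highlights.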