Finding a highly interpretable nonlinear model is an important yet challenging problem, and related research remains relatively scarce in the current literature. To tackle this issue, we propose a new algorithm, Feat-ABESS, based on a framework that re-interprets many machine learning algorithms through feature transformation and selection. The core idea behind Feat-ABESS is to parameterize interpretable feature transformations within this framework and to construct an objective function over these parameters. This approach enables us to identify a proper interpretable feature transformation from an optimization perspective. By leveraging a recently developed optimization technique, Feat-ABESS obtains a concise and interpretable model. Moreover, Feat-ABESS can perform nonlinear variable selection. Extensive experiments on 205 benchmark datasets and case studies on two datasets demonstrate that Feat-ABESS achieves strong prediction accuracy while maintaining a high level of interpretability. Comparisons with existing nonlinear variable selection methods show that Feat-ABESS attains a higher true positive rate and a lower false discovery rate.
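To make the "feature transformation plus selection" idea concrete, the following is a minimal toy sketch, not the Feat-ABESS algorithm itself: each variable is expanded through an interpretable polynomial basis (standing in for the parameterized transformations), and the relevant variables are then recovered by a brute-force best-subset search scored with BIC. All names, the data-generating process, and the exhaustive search are illustrative assumptions; the actual method relies on a far more efficient best-subset optimization technique.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, p, degree = 200, 5, 3

# Toy data (assumption for illustration): only variable 0 is relevant.
X = rng.uniform(-2, 2, size=(n, p))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

def basis(x, degree):
    # Interpretable per-variable transformation: a polynomial basis expansion.
    return np.column_stack([x ** d for d in range(1, degree + 1)])

groups = [basis(X[:, j], degree) for j in range(p)]

def bic(subset):
    # Fit OLS on the concatenated bases of the chosen variables; score with BIC.
    Z = np.column_stack([np.ones(n)] + [groups[j] for j in subset])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss = np.sum((y - Z @ beta) ** 2)
    return n * np.log(rss / n) + Z.shape[1] * np.log(n)

# Brute-force group best-subset search (stand-in for the efficient optimizer).
best = min(
    (s for k in range(1, p + 1) for s in itertools.combinations(range(p), k)),
    key=bic,
)
print(best)  # support recovered by the search
```

Selecting whole groups of transformed features per variable is what turns sparse selection into nonlinear variable selection: a variable is kept or dropped together with its entire basis expansion.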